The image biomarker standardization initiative: standardized quantitative radiomics for high-throughput image-based phenotyping

  • Zwanenburg, Alex
  • Vallieres, Martin
  • Abdalah, Mahmoud A
  • Aerts, Hugo J W L
  • Andrearczyk, Vincent
  • Apte, Aditya
  • Ashrafinia, Saeed
  • Bakas, Spyridon
  • Beukinga, Roelof J
  • Boellaard, Ronald
  • Bogowicz, Marta
  • Boldrini, Luca
  • Buvat, Irene
  • Cook, Gary J R
  • Davatzikos, Christos
  • Depeursinge, Adrien
  • Desseroit, Marie-Charlotte
  • Dinapoli, Nicola
  • Dinh, Cuong Viet
  • Echegaray, Sebastian
  • El Naqa, Issam
  • Fedorov, Andriy Y
  • Gatta, Roberto
  • Gillies, Robert J
  • Goh, Vicky
  • Gotz, Michael
  • Guckenberger, Matthias
  • Ha, Sung Min
  • Hatt, Mathieu
  • Isensee, Fabian
  • Lambin, Philippe
  • Leger, Stefan
  • Leijenaar, Ralph T H
  • Lenkowicz, Jacopo
  • Lippert, Fiona
  • Losnegard, Are
  • Maier-Hein, Klaus H
  • Morin, Olivier
  • Muller, Henning
  • Napel, Sandy
  • Nioche, Christophe
  • Orlhac, Fanny
  • Pati, Sarthak
  • Pfaehler, Elisabeth A G
  • Rahmim, Arman
  • Rao, Arvind U K
  • Scherer, Jonas
  • Siddique, Muhammad Musib
  • Sijtsema, Nanna M
  • Socarras Fernandez, Jairo
  • Spezi, Emiliano
  • Steenbakkers, Roel J H M
  • Tanadini-Lang, Stephanie
  • Thorwarth, Daniela
  • Troost, Esther G C
  • Upadhaya, Taman
  • Valentini, Vincenzo
  • van Dijk, Lisanne V
  • van Griethuysen, Joost
  • van Velden, Floris H P
  • Whybra, Philip
  • Richter, Christian
  • Löck, Steffen
Radiology 2020 Journal Article, cited 247 times
Website

Assessing robustness of radiomic features by image perturbation

  • Zwanenburg, Alex
  • Leger, Stefan
  • Agolli, Linda
  • Pilz, Karoline
  • Troost, Esther G C
  • Richter, Christian
  • Löck, Steffen
2019 Journal Article, cited 0 times
Website
Image features need to be robust against differences in positioning, acquisition and segmentation to ensure reproducibility. Radiomic models that only include robust features can be used to analyse new images, whereas models with non-robust features may fail to predict the outcome of interest accurately. Test-retest imaging is recommended to assess robustness, but may not be available for the phenotype of interest. We therefore investigated 18 combinations of image perturbations to determine feature robustness, based on noise addition (N), translation (T), rotation (R), volume growth/shrinkage (V) and supervoxel-based contour randomisation (C). Test-retest and perturbation robustness were compared for a combined total of 4032 morphological, statistical and texture features that were computed from the gross tumour volume in two cohorts with computed tomography imaging: I) 31 non-small-cell lung cancer (NSCLC) patients; II) 19 head-and-neck squamous cell carcinoma (HNSCC) patients. Robustness was determined using the 95% confidence interval (CI) of the intraclass correlation coefficient ICC(1,1). Features with CI ≥ 0.90 were considered robust. The NTCV, TCV, RNCV and RCV perturbation chains produced similar results and identified the fewest false positive robust features (NSCLC: 0.2-0.9%; HNSCC: 1.7-1.9%). Thus, these perturbation chains may be used as an alternative to test-retest imaging to assess feature robustness.
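
A minimal sketch (not the authors' code) of the robustness criterion described above: ICC(1,1) with its 95% confidence interval from a one-way ANOVA, flagging a feature as robust when the interval clears the 0.90 threshold. The simulated test-retest feature matrix is a placeholder.

```python
import numpy as np
from scipy import stats

def icc_1_1(x, alpha=0.05):
    """ICC(1,1) and its CI for x: (n_subjects, k_measurements) of one feature."""
    n, k = x.shape
    grand = x.mean()
    msb = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)               # between-subject MS
    msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))  # within-subject MS
    icc = (msb - msw) / (msb + (k - 1) * msw)
    f = msb / msw                                                           # observed F ratio
    fl = f / stats.f.ppf(1 - alpha / 2, n - 1, n * (k - 1))
    fu = f * stats.f.ppf(1 - alpha / 2, n * (k - 1), n - 1)
    return icc, ((fl - 1) / (fl + k - 1), (fu - 1) / (fu + k - 1))

rng = np.random.default_rng(0)
feature = rng.normal(size=(31, 2)) + 3 * rng.normal(size=(31, 1))  # 31 patients, 2 "scans"
icc, (lo, hi) = icc_1_1(feature)
robust = lo >= 0.90    # the 0.90 robustness threshold used in the abstract
print(f"ICC(1,1) = {icc:.3f}, 95% CI = ({lo:.3f}, {hi:.3f}), robust = {robust}")
```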

Automatic classification of lung nodule candidates based on a novel 3D convolution network and knowledge transferred from a 2D network

  • Zuo, Wangxia
  • Zhou, Fuqiang
  • He, Yuzhu
  • Li, Xiaosong
Med Phys 2019 Journal Article, cited 0 times
Website
OBJECTIVE: In the automatic lung nodule detection system, the authenticity of a large number of nodule candidates needs to be judged, which is a classification task. However, the variable shapes and sizes of the lung nodules have posed a great challenge to the classification of candidates. To solve this problem, we propose a method for classifying nodule candidates through three-dimensional (3D) convolution neural network (ConvNet) model which is trained by transferring knowledge from a multiresolution two-dimensional (2D) ConvNet model. METHODS: In this scheme, a novel 3D ConvNet model is preweighted with the weights of the trained 2D ConvNet model, and then the 3D ConvNet model is trained with 3D image volumes. In this way, the knowledge transfer method can make 3D network easier to converge and make full use of the spatial information of nodules with different sizes and shapes to improve the classification accuracy. RESULTS: The experimental results on 551,065 pulmonary nodule candidates in the LUNA16 dataset show that our method gains a competitive average score in the false-positive reduction track in lung nodule detection, with the sensitivities of 0.619 and 0.642 at 0.125 and 0.25 FPs per scan, respectively. CONCLUSIONS: The proposed method can maintain satisfactory classification accuracy even when the false-positive rate is extremely small in the face of nodules of different sizes and shapes. Moreover, as a transfer learning idea, the method to transfer knowledge from 2D ConvNet to 3D ConvNet is the first attempt to carry out full migration of parameters of various layers including convolution layers, full connection layers, and classifier between different dimensional models, which is more conducive to utilizing the existing 2D ConvNet resources and generalizing transfer learning schemes.
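
A short PyTorch sketch of the 2D-to-3D knowledge transfer idea. The inflation-by-replication scheme below is one common way to pre-weight 3D kernels from trained 2D ones and is an assumption, not necessarily the exact parameter migration used by the authors.

```python
import torch
import torch.nn as nn

conv2d = nn.Conv2d(1, 16, kernel_size=3, padding=1)  # stands in for a trained 2D layer
conv3d = nn.Conv3d(1, 16, kernel_size=3, padding=1)  # the 3D layer to be pre-weighted

with torch.no_grad():
    depth = conv3d.weight.shape[2]
    # replicate each 2D kernel along the depth axis; divide by depth so the
    # 3D response keeps roughly the same magnitude as the 2D one
    conv3d.weight.copy_(conv2d.weight.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth)
    conv3d.bias.copy_(conv2d.bias)

x = torch.randn(2, 1, 16, 32, 32)  # a toy batch of 3D nodule candidate volumes
out = conv3d(x)                    # the 3D model now starts from 2D knowledge
```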

Prognostic value of baseline [18F]-fluorodeoxyglucose positron emission tomography parameters MTV, TLG and asphericity in an international multicenter cohort of nasopharyngeal carcinoma patients

  • Zschaeck, S.
  • Li, Y.
  • Lin, Q.
  • Beck, M.
  • Amthauer, H.
  • Bauersachs, L.
  • Hajiyianni, M.
  • Rogasch, J.
  • Ehrhardt, V. H.
  • Kalinauskaite, G.
  • Weingartner, J.
  • Hartmann, V.
  • van den Hoff, J.
  • Budach, V.
  • Stromberger, C.
  • Hofheinz, F.
PLoS One 2020 Journal Article, cited 1 time
Website
PURPOSE: [18F]-fluorodeoxyglucose (FDG) positron emission tomography (PET) parameters have shown prognostic value in nasopharyngeal carcinomas (NPC), mostly in monocenter studies. The aim of this study was to assess the prognostic impact of standard and novel PET parameters in a multicenter cohort of patients. METHODS: The established PET parameters metabolic tumor volume (MTV), total lesion glycolysis (TLG) and maximal standardized uptake value (SUVmax) as well as the novel parameter tumor asphericity (ASP) were evaluated in a retrospective multicenter cohort of 114 NPC patients with FDG-PET staging, treated with (chemo)radiation at 8 international institutions. Uni- and multivariable Cox regression and Kaplan-Meier analysis with respect to overall survival (OS), event-free survival (EFS), distant metastases-free survival (FFDM), and locoregional control (LRC) were performed for clinical and PET parameters. RESULTS: When analyzing metric PET parameters, ASP showed a significant association with EFS (p = 0.035) and a trend for OS (p = 0.058). MTV was significantly associated with EFS (p = 0.026), OS (p = 0.008) and LRC (p = 0.012) and TLG with LRC (p = 0.019). TLG and MTV showed a very high correlation (Spearman's rho = 0.95), therefore TLG was subsequently not further analysed. Optimal cutoff values for defining high and low risk groups were determined by minimization of the p-value in univariate Cox regression considering all possible cutoff values. Generation of stable cutoff values was feasible for MTV (p<0.001), ASP (p = 0.023) and combination of both (MTV+ASP = occurrence of one or both risk factors, p<0.001) for OS and for MTV regarding the endpoints OS (p<0.001) and LRC (p<0.001). In multivariable Cox (age >55 years + one binarized PET parameter), MTV >11.1ml (hazard ratio (HR): 3.57, p<0.001) and ASP > 14.4% (HR: 3.2, p = 0.031) remained prognostic for OS. MTV additionally remained prognostic for LRC (HR: 4.86 p<0.001) and EFS (HR: 2.51 p = 0.004). Bootstrapping analyses showed that a combination of high MTV and ASP significantly improved prognostic value for OS compared to each single variable (p = 0.005 and p = 0.04, respectively). When using the cohort from China (n = 57 patients) for establishment of prognostic parameters and all other patients for validation (n = 57 patients), MTV could be successfully validated as a prognostic parameter regarding OS, EFS and LRC (all p-values <0.05 for both cohorts). CONCLUSIONS: In this analysis, PET parameters were associated with the outcome of NPC patients. MTV showed a robust association with OS, EFS and LRC. Our data suggest that the combination of MTV and ASP may potentially further improve the risk stratification of NPC patients.
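
The cutoff search described in the abstract can be sketched as below, assuming the lifelines package; the data, the minimum group size, and all variable names are illustrative placeholders rather than the study's values.

```python
import numpy as np
import pandas as pd
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "mtv": rng.lognormal(2.0, 0.8, 114),   # hypothetical MTV values (ml)
    "time": rng.exponential(40.0, 114),    # follow-up time in months
    "event": rng.integers(0, 2, 114),      # 1 = death observed
})

best_cut, best_p = None, 1.0
for cut in np.unique(df["mtv"]):
    high, low = df[df["mtv"] > cut], df[df["mtv"] <= cut]
    if len(high) < 10 or len(low) < 10:    # keep both risk groups reasonably sized
        continue
    p = logrank_test(high["time"], low["time"], high["event"], low["event"]).p_value
    if p < best_p:
        best_cut, best_p = cut, p
# best_p is optimistic after scanning all cutoffs, which is why the study
# additionally checked cutoff stability (e.g., by bootstrapping)
print(f"best cutoff = {best_cut:.1f} ml (log-rank p = {best_p:.4f})")
```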

Combination of tumor asphericity and an extracellular matrix-related prognostic gene signature in non-small cell lung cancer patients

  • Zschaeck, S.
  • Klinger, B.
  • van den Hoff, J.
  • Cegla, P.
  • Apostolova, I.
  • Kreissl, M. C.
  • Cholewinski, W.
  • Kukuk, E.
  • Strobel, H.
  • Amthauer, H.
  • Bluthgen, N.
  • Zips, D.
  • Hofheinz, F.
2023 Journal Article, cited 0 times
Website
One important aim of precision oncology is a personalized treatment of patients. This can be achieved by various biomarkers; especially imaging parameters and gene expression signatures are commonly used. So far, combination approaches are sparse. The aim of the study was to independently validate the prognostic value of the novel positron emission tomography (PET) parameter tumor asphericity (ASP) in non-small cell lung cancer (NSCLC) patients and to investigate associations between published gene expression profiles and ASP. This was a retrospective evaluation of PET imaging and gene expression data from three public databases and two institutional datasets. The whole cohort comprised 253 NSCLC patients, all treated with curative intent surgery. Clinical parameters, standard PET parameters, and ASP were evaluated in all patients. Additional gene expression data were available for 120 patients. Univariate Cox regression and Kaplan-Meier analysis were performed for the primary endpoint progression-free survival (PFS) and additional endpoints. Furthermore, multivariate Cox regression testing was performed including clinically significant parameters, ASP, and the extracellular matrix-related prognostic gene signature (EPPI). In the whole cohort, a significant association with PFS was observed for ASP (p < 0.001) and EPPI (p = 0.012). Upon multivariate testing, EPPI remained significantly associated with PFS (p = 0.018) in the subgroup of patients with additional gene expression data, while ASP was significantly associated with PFS in the whole cohort (p = 0.012). In stage II patients, ASP was significantly associated with PFS (p = 0.009), and a previously published cutoff value for ASP (19.5%) was successfully validated (p = 0.008). In patients with additional gene expression data, EPPI showed a significant association with PFS, too (p = 0.033). The exploratory combination of ASP and EPPI showed that the combinatory approach has potential to further improve patient stratification compared to the use of only one parameter. We report the first successful validation of EPPI and ASP in stage II NSCLC patients. The combination of both parameters seems to be a very promising approach for improvement of risk stratification in a group of patients with urgent need for a more personalized treatment approach.

Glioma Segmentation with 3D U-Net Backed with Energy-Based Post-Processing

  • Zsamboki, Richard
  • Takacs, Petra
  • Deak-Karancsi, Borbala
2021 Book Section, cited 0 times
This paper proposes a glioma segmentation method based on neural networks. The base of the network is a UNet, expanded by residual blocks. Several preprocessing steps were applied before training, such as intensity normalization, high intensity cutting, cropping, and random flips. 2D and 3D solutions are implemented and tested, and results show that the 3D network outperforms the 2D one; therefore, we stayed with the 3D direction. The novelty of the method is the energy-based post-processing: snakes [10] and conditional random fields (CRF) [11] were applied to the neural network's predictions. A snake, or active contour, needs an initial outline around the object (e.g., the network's prediction outline) and corrects the contours of the tumor by finding an energy minimum over the intensity values in a given area. CRF is a specific type of graphical model; it uses the network's prediction and the raw image features to estimate the posterior distribution (the tumor contour) using energy function minimization. The proposed methods are evaluated within the framework of the BRATS 2020 challenge. Measured on the test dataset, the mean Dice scores of the whole tumor (WT), tumor core (TC) and enhancing tumor (ET) are 86.9%, 83.2% and 81.8%, respectively. The results show high performance and promising future work in tumor segmentation, even outside of the brain.
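
For reference, the reported whole-tumor, tumor-core, and enhancing-tumor Dice scores compare binary masks as in this small self-contained helper (an illustration, not the challenge code):

```python
import numpy as np

def dice(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

seg = np.zeros((8, 8, 8), dtype=np.uint8); seg[2:6, 2:6, 2:6] = 1  # toy prediction
gt = np.zeros_like(seg);                   gt[3:6, 2:6, 2:6] = 1   # toy ground truth
print(f"Dice = {dice(seg, gt):.3f}")
```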

Comparison of Active Learning Strategies Applied to Lung Nodule Segmentation in CT Scans

  • Zotova, Daria
  • Lisowska, Aneta
  • Anderson, Owen
  • Dilys, Vismantas
  • O’Neil, Alison
2019 Book Section, cited 0 times
Supervised machine learning techniques require large amounts of annotated training data to attain good performance. Active learning aims to ease the data collection process by automatically detecting which instances an expert should annotate in order to train a model as quickly and effectively as possible. Such strategies have been previously reported for medical imaging, but for other tasks than focal pathologies where there is high class imbalance and heterogeneous background appearance. In this study we evaluate different data selection approaches (random, uncertain, and representative sampling) and a semi-supervised model training procedure (pseudo-labelling), in the context of lung nodule segmentation in CT volumes from the publicly available LIDC-IDRI dataset. We find that active learning strategies allow us to train a model with equal performance but less than half of the annotation effort; data selection by uncertainty sampling offers the most gain, with the incorporation of representativeness or the addition of pseudo-labelling giving further small improvements. We conclude that active learning is a valuable tool and that further development of these strategies can play a key role in making diagnostic algorithms viable.
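
A hedged sketch of the uncertainty-sampling strategy compared in the paper: rank the unlabelled pool by predictive entropy and request annotation for the most uncertain instances. The logistic-regression stand-in and the simulated data are assumptions; the study used segmentation models on CT volumes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X_lab, y_lab = rng.normal(size=(50, 10)), rng.integers(0, 2, 50)  # small labelled seed set
X_pool = rng.normal(size=(1000, 10))                              # unlabelled candidates

model = LogisticRegression().fit(X_lab, y_lab)
p = model.predict_proba(X_pool)[:, 1]
entropy = -(p * np.log(p + 1e-12) + (1 - p) * np.log(1 - p + 1e-12))
query_idx = np.argsort(entropy)[-20:]   # the 20 most uncertain instances
# these indices would be sent to the expert for annotation, the labels added
# to the training set, and the model retrained, iterating until the budget is spent
```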

Generative Adversarial Networks for Brain MRI Synthesis: Impact of Training Set Size on Clinical Application

  • Zoghby, M. M.
  • Erickson, B. J.
  • Conte, G. M.
2024 Journal Article, cited 0 times
Website
We evaluated the impact of training set size on generative adversarial networks (GANs) to synthesize brain MRI sequences. We compared three sets of GANs trained to generate pre-contrast T1 (gT1) from post-contrast T1 and FLAIR (gFLAIR) from T2. The baseline models were trained on 135 cases; for this study, we used the same model architecture but a larger cohort of 1251 cases and two stopping rules, an early checkpoint (early models) and one after 50 epochs (late models). We tested all models on an independent dataset of 485 newly diagnosed gliomas. We compared the generated MRIs with the original ones using the structural similarity index (SSI) and mean squared error (MSE). We simulated scenarios where either the original T1, FLAIR, or both were missing and used their synthesized version as inputs for a segmentation model with the original post-contrast T1 and T2. We compared the segmentations using the dice similarity coefficient (DSC) for the contrast-enhancing area, non-enhancing area, and the whole lesion. For the baseline, early, and late models on the test set, for the gT1, median SSI was .957, .918, and .947; median MSE was .006, .014, and .008. For the gFLAIR, median SSI was .924, .908, and .915; median MSE was .016, .016, and .019. The DSC range was .625-.955, .420-.952, and .610-.954. Overall, GANs trained on a relatively small cohort performed similarly to those trained on a cohort ten times larger, making them a viable option for rare diseases or institutions with limited resources.
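
The two similarity metrics used to compare synthesized and original MRIs are available in scikit-image; the snippet below (with simulated slices standing in for real data) shows the metric calls, not the authors' pipeline.

```python
import numpy as np
from skimage.metrics import structural_similarity, mean_squared_error

rng = np.random.default_rng(3)
original = rng.random((240, 240)).astype(np.float32)   # stands in for a real T1 slice
generated = original + rng.normal(0, 0.05, original.shape).astype(np.float32)

ssi = structural_similarity(original, generated, data_range=1.0)
mse = mean_squared_error(original, generated)
print(f"SSI = {ssi:.3f}, MSE = {mse:.4f}")
```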

Upright walking has driven unique vascular specialization of the hominin ilium

  • Zirkle, Dexter
  • Meindl, Richard S
  • Lovejoy, C Owen
PeerJ 2021 Journal Article, cited 0 times
Website

New Diagnostics for Bipedality: The hominin ilium displays landmarks of a modified growth trajectory

  • Zirkle, Dexter
2022 Thesis, cited 0 times
Website

Distinct Radiomic Phenotypes Define Glioblastoma TP53-PTEN-EGFR Mutational Landscape

  • Zinn, Pascal O
  • Singh, Sanjay K
  • Kotrotsou, Aikaterini
  • Abrol, Srishti
  • Thomas, Ginu
  • Mosley, Jennifer
  • Elakkad, Ahmed
  • Hassan, Islam
  • Kumar, Ashok
  • Colen, Rivka R
Neurosurgery 2017 Journal Article, cited 3 times
Website

A novel volume-age-KPS (VAK) glioblastoma classification identifies a prognostic cognate microRNA-gene signature

  • Zinn, Pascal O
  • Sathyan, Pratheesh
  • Mahajan, Bhanu
  • Bruyere, John
  • Hegi, Monika
  • Majumder, Sadhan
  • Colen, Rivka R
PLoS One 2012 Journal Article, cited 63 times
Website
BACKGROUND: Several studies have established Glioblastoma Multiforme (GBM) prognostic and predictive models based on age and Karnofsky Performance Status (KPS), while very few studies evaluated the prognostic and predictive significance of preoperative MR-imaging. However, to date, there is no simple preoperative GBM classification that also correlates with a highly prognostic genomic signature. Thus, we present for the first time a biologically relevant, and clinically applicable tumor Volume, patient Age, and KPS (VAK) GBM classification that can easily and non-invasively be determined upon patient admission. METHODS: We quantitatively analyzed the volumes of 78 GBM patient MRIs present in The Cancer Imaging Archive (TCIA) corresponding to patients in The Cancer Genome Atlas (TCGA) with VAK annotation. The variables were then combined using a simple 3-point scoring system to form the VAK classification. A validation set (N = 64) from both the TCGA and Rembrandt databases was used to confirm the classification. Transcription factor and genomic correlations were performed using the GenePattern suite and Ingenuity Pathway Analysis. RESULTS: VAK-A and VAK-B classes showed significant median survival differences in discovery (P = 0.007) and validation sets (P = 0.008). VAK-A is significantly associated with P53 activation, while VAK-B shows significant P53 inhibition. Furthermore, a molecular gene signature comprised of a total of 25 genes and microRNAs was significantly associated with the classes and predicted survival in an independent validation set (P = 0.001). A favorable MGMT promoter methylation status resulted in a 10.5 months additional survival benefit for VAK-A compared to VAK-B patients. CONCLUSIONS: The non-invasively determined VAK classification with its implication of VAK-specific molecular regulatory networks, can serve as a very robust initial prognostic tool, clinical trial selection criteria, and important step toward the refinement of genomics-based personalized therapy for GBM patients.
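
The 3-point scoring idea lends itself to a tiny illustrative function. The cutoffs and the class split below are hypothetical placeholders, not the values from the paper; only the additive volume/age/KPS scoring scheme is taken from the abstract.

```python
def vak_score(volume_cm3, age_years, kps,
              vol_cut=30.0, age_cut=60, kps_cut=80):  # assumed, not the paper's cutoffs
    """Assign one point per unfavorable factor and dichotomize into VAK-A/VAK-B."""
    score = (volume_cm3 > vol_cut) + (age_years > age_cut) + (kps < kps_cut)
    return "VAK-B" if score >= 2 else "VAK-A"         # assumed split of the 3-point score

print(vak_score(volume_cm3=45.2, age_years=67, kps=70))  # -> VAK-B
```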

Radiogenomic mapping of edema/cellular invasion MRI-phenotypes in glioblastoma multiforme

  • Zinn, Pascal O
  • Majadan, Bhanu
  • Sathyan, Pratheesh
  • Singh, Sanjay K
  • Majumder, Sadhan
  • Jolesz, Ferenc A
  • Colen, Rivka R
PLoS One 2011 Journal Article, cited 192 times
Website
BACKGROUND: Despite recent discoveries of new molecular targets and pathways, the search for an effective therapy for Glioblastoma Multiforme (GBM) continues. A newly emerged field, radiogenomics, links gene expression profiles with MRI phenotypes. MRI-FLAIR is a noninvasive diagnostic modality and was previously found to correlate with cellular invasion in GBM. Thus, our radiogenomic screen has the potential to reveal novel molecular determinants of invasion. Here, we present the first comprehensive radiogenomic analysis using quantitative MRI volumetrics and large-scale gene- and microRNA expression profiling in GBM. METHODS: Based on The Cancer Genome Atlas (TCGA), discovery and validation sets with gene, microRNA, and quantitative MR-imaging data were created. Top concordant genes and microRNAs correlated with high FLAIR volumes from both sets were further characterized by Kaplan Meier survival statistics, microRNA-gene correlation analyses, and GBM molecular subtype-specific distribution. RESULTS: The top upregulated gene in both the discovery (4 fold) and validation (11 fold) sets was PERIOSTIN (POSTN). The top downregulated microRNA in both sets was miR-219, which is predicted to bind to POSTN. Kaplan Meier analysis demonstrated that above median expression of POSTN resulted in significantly decreased survival and shorter time to disease progression (P<0.001). High POSTN and low miR-219 expression were significantly associated with the mesenchymal GBM subtype (P<0.0001). CONCLUSION: Here, we propose a novel diagnostic method to screen for molecular cancer subtypes and genomic correlates of cellular invasion. Our findings also have potential therapeutic significance since successful molecular inhibition of invasion will improve therapy and patient survival in GBM.
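
At its core, the radiogenomic screen rank-correlates one imaging feature against many expression profiles and keeps the concordant top hits; a minimal sketch with simulated data (not the study's analysis) follows.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
flair_volume = rng.lognormal(3.0, 0.5, 40)    # hypothetical FLAIR volumes, 40 patients
expression = rng.normal(size=(40, 500))       # hypothetical expression, 500 genes
expression[:, 0] += 0.02 * flair_volume       # plant one truly associated gene

results = [(g, *spearmanr(flair_volume, expression[:, g])) for g in range(500)]
top_hits = sorted(results, key=lambda r: r[2])[:10]   # smallest p-values first
# in the study, hits had to be concordant across a discovery and a validation set
```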

Diffusion Weighted Magnetic Resonance Imaging Radiophenotypes and Associated Molecular Pathways in Glioblastoma

  • Zinn, Pascal O
  • Hatami, Masumeh
  • Youssef, Eslam
  • Thomas, Ginu A
  • Luedi, Markus M
  • Singh, Sanjay K
  • Colen, Rivka R
Neurosurgery 2016 Journal Article, cited 2 times
Website

The Utilization of Consignable Multi-Model in Detection and Classification of Pulmonary Nodules

  • Zia, Muhammad Bilal
  • Juan, Zhao Juan
  • Rehman, Zia Ur
  • Javed, Kamran
  • Rauf, Saad Abdul
  • Khan, Arooj
International Journal of Computer Applications 2019 Journal Article, cited 2 times
Website
Early-stage detection and classification of pulmonary nodules from CT images is a complicated task. The risk assessment for malignancy is usually used to assist the physician in assessing the cancer stage and creating a follow-up prediction strategy. Due to the differences in size, structure, and location of the nodules, the classification of nodules in a computer-assisted diagnostic system has been a great challenge. While deep learning is currently the most effective solution in terms of image detection and classification, it requires large amounts of training data, which are typically not readily accessible in most routine frameworks of medical imaging. Moreover, the inexplicability of deep neural networks remains a difficulty for radiologists. In this paper, a Consignable Multi-Model (CMM) is proposed for the detection and classification of lung nodules, which first detects the lung nodules in CT images with different detection algorithms and then classifies them using the Multi-Output DenseNet (MOD) technique. In order to enhance the interpretability of the proposed CMM, two inputs with multiple early outputs have been introduced in the dense blocks. MOD accepts the patches identified in the detection phase at its two inputs and then classifies them as benign or malignant, using the early outputs to gain more knowledge of a tumor. In addition, the experimental results on the LIDC-IDRI dataset demonstrate a 92.10% accuracy of CMM for lung nodule classification. CMM made substantial progress in the diagnosis of nodules in contrast to the existing methods.

A Prediction Model for Deciphering Intratumoral Heterogeneity Derived from the Microglia/Macrophages of Glioma Using Non-Invasive Radiogenomics

  • Zhu, Yunyang
  • Song, Zhaoming
  • Wang, Zhong
Brain Sciences 2023 Journal Article, cited 0 times
Microglia and macrophages play a major role in glioma immune responses within the glioma microenvironment. We aimed to construct a prognostic prediction model for glioma based on microglia/macrophage-correlated genes. Additionally, we sought to develop a non-invasive radiogenomics approach for risk stratification evaluation. Microglia/macrophage-correlated genes were identified from four single-cell datasets. Hub genes were selected via lasso–Cox regression, and risk scores were calculated. The immunological characteristics of different risk stratifications were assessed, and radiomics models were constructed using corresponding MRI imaging to predict risk stratification. We identified eight hub genes and developed a relevant risk score formula. The risk score emerged as a significant prognostic predictor correlated with immune checkpoints, and a relevant nomogram was drawn. High-risk groups displayed an active microenvironment associated with microglia/macrophages. Furthermore, differences in somatic mutation rates, such as IDH1 missense variant and TP53 missense variant, were observed between high- and low-risk groups. Lastly, a radiogenomics model utilizing five features from magnetic resonance imaging (MRI) T2 fluid-attenuated inversion recovery (FLAIR) effectively predicted the risk groups under a random forest model. Our findings demonstrate that risk stratification based on microglia/macrophages can effectively predict prognosis and immune functions in glioma. Moreover, we have shown that risk stratification can be non-invasively predicted using an MRI-T2 FLAIR-based radiogenomics model.
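
A minimal sketch of lasso-Cox hub-gene selection and risk scoring, assuming the lifelines package; the gene names, penalty strength, and survival data are simulated placeholders rather than the study's.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
genes = [f"gene_{i}" for i in range(30)]
df = pd.DataFrame(rng.normal(size=(150, 30)), columns=genes)
df["time"] = rng.exponential(36.0, 150)   # follow-up in months
df["event"] = rng.integers(0, 2, 150)     # 1 = event observed

cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)   # pure L1 penalty -> sparse coefficients
cph.fit(df, duration_col="time", event_col="event")

hub = cph.params_[cph.params_.abs() > 1e-4]      # genes surviving the lasso
risk_score = df[hub.index] @ hub                 # linear predictor per patient
high_risk = risk_score > risk_score.median()     # simple risk stratification
```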

Deciphering Genomic Underpinnings of Quantitative MRI-based Radiomic Phenotypes of Invasive Breast Carcinoma

  • Zhu, Yitan
  • Li, Hui
  • Guo, Wentian
  • Drukker, Karen
  • Lan, Li
  • Giger, Maryellen L
  • Ji, Yuan
Scientific Reports 2015 Journal Article, cited 52 times
Website
Magnetic Resonance Imaging (MRI) has been routinely used for the diagnosis and treatment of breast cancer. However, the relationship between the MRI tumor phenotypes and the underlying genetic mechanisms remains under-explored. We integrated multi-omics molecular data from The Cancer Genome Atlas (TCGA) with MRI data from The Cancer Imaging Archive (TCIA) for 91 breast invasive carcinomas. Quantitative MRI phenotypes of tumors (such as tumor size, shape, margin, and blood flow kinetics) were associated with their corresponding molecular profiles (including DNA mutation, miRNA expression, protein expression, pathway gene expression and copy number variation). We found that transcriptional activities of various genetic pathways were positively associated with tumor size, blurred tumor margin, and irregular tumor shape and that miRNA expressions were associated with the tumor size and enhancement texture, but not with other types of radiomic phenotypes. We provide all the association findings as a resource for the research community (available at http://compgenome.org/Radiogenomics/). These findings pave potential paths for the discovery of genetic mechanisms regulating specific tumor phenotypes and for improving MRI techniques as potential non-invasive approaches to probe the cancer molecular status.

AnatomyNet: Deep learning for fast and fully automated whole‐volume segmentation of head and neck anatomy

  • Zhu, Wentao
  • Huang, Yufang
  • Zeng, Liang
  • Chen, Xuming
  • Liu, Yong
  • Qian, Zhen
  • Du, Nan
  • Fan, Wei
  • Xie, Xiaohui
Medical Physics 2018 Journal Article, cited 4 times
Website

Deep Learning for Automated Medical Image Analysis

  • Zhu, Wentao
2019 Thesis, cited 0 times
Website
Medical imaging is an essential tool in many areas of medical applications, used for both diagnosis and treatment. However, reading medical images and making diagnosis or treatment recommendations require specially trained medical specialists. The current practice of reading medical images is labor-intensive, time-consuming, costly, and error-prone. It would be more desirable to have a computer-aided system that can automatically make diagnosis and treatment recommendations. Recent advances in deep learning enable us to rethink the ways of clinician diagnosis based on medical images. Early detection has proven to be critical to give patients the best chance of recovery and survival. Advanced computer-aided diagnosis systems are expected to have high sensitivities and small low positive rates. How to provide accurate diagnosis results and explore different types of clinical data is an important topic in the current computer-aided diagnosis research. In this thesis, we will introduce 1) mammograms for detecting breast cancers, the most frequently diagnosed solid cancer for U.S. women, 2) lung Computed Tomography (CT) images for detecting lung cancers, the most frequently diagnosed malignant cancer, and 3) head and neck CT images for automated delineation of organs at risk in radiotherapy. First, we will show how to employ the adversarial concept to generate the hard examples improving mammogram mass segmentation. Second, we will demonstrate how to use the weakly labelled data for the mammogram breast cancer diagnosis by efficiently design deep learning for multiinstance learning. Third, the thesis will walk through DeepLung system which combines deep 3D ConvNets and Gradient Boosting Machine (GBM) for automated lung nodule detection and classification. Fourth, we will show how to use weakly labelled data to improve existing lung nodule detection system by integrating deep learning with a probabilistic graphic model. Lastly, we will demonstrate the AnatomyNet which is thousands of times faster and more accurate than previous methods on automated anatomy segmentation.

Multi-task Learning-Driven Volume and Slice Level Contrastive Learning for 3D Medical Image Classification

  • Zhu, Jiayuan
  • Wang, Shujun
  • He, Jinzheng
  • Schönlieb, Carola-Bibiane
  • Yu, Lequan
2022 Conference Proceedings, cited 0 times
Website

Preliminary Clinical Study of the Differences Between Interobserver Evaluation and Deep Convolutional Neural Network-Based Segmentation of Multiple Organs at Risk in CT Images of Lung Cancer

  • Zhu, Jinhan
  • Liu, Yimei
  • Zhang, Jun
  • Wang, Yixuan
  • Chen, Lixin
Frontiers in Oncology 2019 Journal Article, cited 0 times
Website
Background: In this study, publicly available datasets with organs at risk (OAR) structures were used as reference data to compare the differences among several observers. Convolutional neural network (CNN)-based auto-contouring was also used in the analysis. We evaluated the variations among observers and the effect of CNN-based auto-contouring in clinical applications. Materials and methods: A total of 60 publicly available lung cancer CT scans with structures were used; 48 cases were used for training, and the other 12 cases were used for testing. The structures of the datasets were used as reference data. Three observers and a CNN-based program performed contouring for the 12 testing cases, and the 3D Dice similarity coefficient (DSC) and mean surface distance (MSD) were used to evaluate differences from the reference data. The three observers edited the CNN-based contours, and the results were compared to those of manual contouring. A value of P<0.05 was considered statistically significant. Results: Compared to the reference data, no statistically significant differences were observed in the DSCs and MSDs among the manual contours performed by the three observers at the same institution for the heart, esophagus, spinal cord, and left and right lungs. The 95% confidence intervals (CIs) and P-values of the CNN-based auto-contouring results compared to the manual results for the heart, esophagus, spinal cord, and left and right lungs were as follows: the DSCs were CNN vs. A: 0.914~0.939 (P = 0.004), 0.746~0.808 (P = 0.002), 0.866~0.887 (P = 0.136), 0.952~0.966 (P = 0.158) and 0.960~0.972 (P = 0.136); CNN vs. B: 0.913~0.936 (P = 0.002), 0.745~0.807 (P = 0.005), 0.864~0.894 (P = 0.239), 0.952~0.964 (P = 0.308), and 0.959~0.971 (P = 0.272); and CNN vs. C: 0.912~0.933 (P = 0.004), 0.748~0.804 (P = 0.002), 0.867~0.890 (P = 0.530), 0.952~0.964 (P = 0.308), and 0.958~0.970 (P = 0.480), respectively. The P-values for the MSDs were similar to those for the DSCs; the P-values for the heart and esophagus were smaller than 0.05. No significant differences were found between the edited CNN-based auto-contouring results and the manual results. Conclusion: For the spinal cord and both lungs, no statistically significant differences were found between CNN-based auto-contouring and manual contouring. Further modifications to contouring of the heart and esophagus are necessary. Overall, editing based on CNN-based auto-contouring can effectively shorten the contouring time without affecting the results. CNNs have considerable potential for automatic contouring applications.
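
Dice measures volumetric overlap, while the mean surface distance (MSD) measures boundary error; below is a hedged SciPy-based sketch of MSD on binary masks, not the study's implementation.

```python
import numpy as np
from scipy import ndimage

def surface(mask):
    """Boundary voxels of a binary mask."""
    return mask & ~ndimage.binary_erosion(mask)

def mean_surface_distance(a, b, spacing=(1.0, 1.0, 1.0)):
    sa, sb = surface(a), surface(b)
    dist_to_b = ndimage.distance_transform_edt(~sb, sampling=spacing)[sa]
    dist_to_a = ndimage.distance_transform_edt(~sa, sampling=spacing)[sb]
    return (dist_to_b.sum() + dist_to_a.sum()) / (len(dist_to_b) + len(dist_to_a))

a = np.zeros((32, 32, 32), dtype=bool); a[8:20, 8:20, 8:20] = True  # toy contour A
b = np.zeros_like(a);                   b[9:21, 8:20, 8:20] = True  # toy contour B
print(f"MSD = {mean_surface_distance(a, b):.2f} voxels")
```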

Identifying molecular genetic features and oncogenic pathways of clear cell renal cell carcinoma through the anatomical (PADUA) scoring system

  • Zhu, H
  • Chen, H
  • Lin, Z
  • Shi, G
  • Lin, X
  • Wu, Z
  • Zhang, X
  • Zhang, X
Oncotarget 2016 Journal Article, cited 3 times
Website
Although the preoperative aspects and dimensions used for the PADUA scoring system were successfully applied in macroscopic clinical practice for renal tumor, the relevant molecular genetic basis remained unclear. To uncover meaningful correlations between the genetic aberrations and radiological features, we enrolled 112 patients with clear cell renal cell carcinoma (ccRCC) whose clinicopathological data, genomics data and CT data were obtained from The Cancer Genome Atlas (TCGA) and The Cancer Imaging Archive (TCIA). Overall PADUA score and several radiological features included in the PADUA system were assigned for each ccRCC. Despite having observed no significant association between the gene mutation frequency and the overall PADUA score, correlations between gene mutations and a few radiological features (tumor rim location and tumor size) were identified. A significant association between rim location and miRNA molecular subtypes was also observed. Survival analysis revealed that tumor size > 7 cm was significantly associated with poor survival. In addition, Gene Set Enrichment Analysis (GSEA) on mRNA expression revealed that the high PADUA score was related to numerous cancer-related networks, especially epithelial to mesenchymal transition (EMT) related pathways. This preliminary analysis of ccRCC revealed meaningful correlations between PADUA anatomical features and molecular basis including genomic aberrations and molecular subtypes.

Data sharing in clinical trials: An experience with two large cancer screening trials

  • Zhu, Claire S
  • Pinsky, Paul F
  • Moler, James E
  • Kukwa, Andrew
  • Mabie, Jerome
  • Rathmell, Joshua M
  • Riley, Tom
  • Prorok, Philip C
  • Berg, Christine D
PLoS Medicine 2017 Journal Article, cited 1 time
Website

Prior-aware Neural Network for Partially-Supervised Multi-Organ Segmentation

  • Zhou, Yuyin
  • Li, Zhe
  • Bai, Song
  • Wang, Chong
  • Chen, Xinlei
  • Han, Mei
  • Fishman, Elliot
  • Yuille, Alan L.
2019 Conference Paper, cited 0 times
Website
Accurate multi-organ abdominal CT segmentation is essential to many clinical applications such as computer-aided intervention. As data annotation requires massive human labor from experienced radiologists, it is common that training data are partially labeled, e.g., pancreas datasets only have the pancreas labeled while leaving the rest marked as background. However, these background labels can be misleading in multi-organ segmentation since the “background” usually contains some other organs of interest. To address the background ambiguity in these partially-labeled datasets, we propose Prior-aware Neural Network (PaNN) via explicitly incorporating anatomical priors on abdominal organ sizes, guiding the training process with domain-specific knowledge. More specifically, PaNN assumes that the average organ size distributions in the abdomen should approximate their empirical distributions, prior statistics obtained from the fully-labeled dataset. As our training objective is difficult to be directly optimized using stochastic gradient descent, we propose to reformulate it in a min-max form and optimize it via the stochastic primal-dual gradient algorithm. PaNN achieves state-of-the-art performance on the MICCAI2015 challenge “Multi-Atlas Labeling Beyond the Cranial Vault”, a competition on organ segmentation in the abdomen. We report an average Dice score of 84.97%, surpassing the prior art by a large margin of 3.27%.
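
The anatomical-prior idea can be caricatured as an auxiliary loss pulling the batch-average predicted organ-size distribution toward empirical prior statistics; the KL-divergence form below is a simplification of the paper's min-max objective, and the prior values are assumptions.

```python
import torch

def prior_loss(soft_pred, prior, eps=1e-8):
    """soft_pred: (batch, classes, *spatial) softmax output; prior: (classes,) fractions."""
    spatial = tuple(range(2, soft_pred.dim()))
    sizes = soft_pred.mean(dim=spatial).mean(dim=0)   # average predicted class fractions
    sizes = sizes / sizes.sum()
    # KL(prior || predicted sizes): zero when the batch matches the prior statistics
    return torch.sum(prior * (torch.log(prior + eps) - torch.log(sizes + eps)))

pred = torch.softmax(torch.randn(2, 5, 16, 16, 16), dim=1)   # toy 5-class prediction
prior = torch.tensor([0.90, 0.04, 0.03, 0.02, 0.01])         # assumed organ-size prior
loss = prior_loss(pred, prior)   # would be added to the usual segmentation loss
```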

MRLA-Net: A tumor segmentation network embedded with a multiple receptive-field lesion attention module in PET-CT images

  • Zhou, Y.
  • Jiang, H.
  • Diao, Z.
  • Tong, G.
  • Luan, Q.
  • Li, Y.
  • Li, X.
Comput Biol Med 2023 Journal Article, cited 0 times
Website
Tumor image segmentation is an important basis for doctors to diagnose and formulate treatment planning. PET-CT is an extremely important technology for recognizing the systemic extent of disease due to the complementary advantages of the two modalities' information. However, current PET-CT tumor segmentation methods generally focus on the fusion of PET and CT features, and fusing features can weaken the characteristics of each modality itself. Therefore, enhancing the modal features of the lesions can yield optimized feature sets, which is extremely necessary to improve segmentation results. This paper proposes an attention module that integrates the PET-CT diagnostic visual field and the modality characteristics of the lesion: the multiple receptive-field lesion attention module. Making full use of spatial-domain, frequency-domain, and channel attention, we propose a large receptive-field lesion localization module and a small receptive-field lesion enhancement module, which together constitute the multiple receptive-field lesion attention module. In addition, a network embedded with the multiple receptive-field lesion attention module is proposed for tumor segmentation. We conducted experiments on a private liver tumor dataset as well as two publicly available datasets, the soft tissue sarcoma dataset and the head and neck tumor segmentation dataset. The experimental results show that the proposed method achieves excellent performance on multiple datasets and improves significantly on DenseUNet: the tumor segmentation results on the above three PET/CT datasets improved by 7.25%, 6.5%, and 5.29% in Dice per case. Compared with the latest PET-CT liver tumor segmentation research, the proposed method improves by 8.32%.

Improving Classification with CNNs using Wavelet Pooling with Nesterov-Accelerated Adam

  • Zhou, Wenjin
  • Rossetto, Allison
2019 Conference Proceedings, cited 0 times
Website
Wavelet pooling methods can improve the classification accuracy of Convolutional Neural Networks (CNNs). Combining wavelet pooling with the Nesterov-accelerated Adam (NAdam) gradient calculation method can further improve the accuracy of the CNN. We have implemented wavelet pooling with NAdam in this work using both a Haar wavelet (WavPool-NH) and a Shannon wavelet (WavPool-NS). The WavPool-NH and WavPool-NS methods are the most accurate of the methods we considered for the MNIST and LIDC-IDRI lung tumor data-sets. The WavPool-NH and WavPool-NS implementations have an accuracy of 95.92% and 95.52%, respectively, on the LIDC-IDRI data-set. This is an improvement over the 92.93% accuracy obtained on this data-set with the max pooling method. The WavPool methods also avoid overfitting, which is a concern with max pooling. We also found WavPool performed fairly well on the CIFAR-10 data-set; however, overfitting was an issue with all the methods we considered. Wavelet pooling, especially when combined with an adaptive gradient and wavelets chosen specifically for the data, has the potential to outperform current methods.
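
Wavelet pooling replaces max pooling with a discrete wavelet transform whose low-low (LL) band becomes the downsampled feature map, discarding less structure than taking local maxima; a minimal sketch assuming the PyWavelets (pywt) package:

```python
import numpy as np
import pywt

feature_map = np.random.default_rng(6).random((32, 32)).astype(np.float32)
LL, (LH, HL, HH) = pywt.dwt2(feature_map, "haar")  # one-level 2D Haar DWT, each band 16x16
pooled = LL                                        # the LL band is the pooled output
print(pooled.shape)                                # (16, 16)
```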

Multiple-instance ensemble for construction of deep heterogeneous committees for high-dimensional low-sample-size data

  • Zhou, Q.
  • Wang, S.
  • Zhu, H.
  • Zhang, X.
  • Zhang, Y.
2023 Journal Article, cited 0 times
Website
Deep ensemble learning, where we combine knowledge learned from multiple individual neural networks, has been widely adopted to improve the performance of neural networks in deep learning. This field can be encompassed by committee learning, which includes the construction of neural network cascades. This study focuses on the high-dimensional low-sample-size (HDLS) domain and introduces multiple instance ensemble (MIE) as a novel stacking method for ensembles and cascades. In this study, our proposed approach reformulates the ensemble learning process as a multiple-instance learning problem. We utilise the multiple-instance learning solution of pooling operations to associate feature representations of base neural networks into joint representations as a method of stacking. This study explores various attention mechanisms and proposes two novel committee learning strategies with MIE. In addition, we utilise the capability of MIE to generate pseudo-base neural networks to provide a proof-of-concept for a "growing" neural network cascade that is unbounded by the number of base neural networks. We have shown that our approach provides (1) a class of alternative ensemble methods that performs comparably with various stacking ensemble methods and (2) a novel method for the generation of high-performing "growing" cascades. The approach has also been verified across multiple HDLS datasets, achieving high performance for binary classification tasks in the low-sample size regime.

WVALE: Weak variational autoencoder for localisation and enhancement of COVID-19 lung infections

  • Zhou, Q.
  • Wang, S.
  • Zhang, X.
  • Zhang, Y. D.
Comput Methods Programs Biomed 2022 Journal Article, cited 0 times
Website
BACKGROUND AND OBJECTIVE: The COVID-19 pandemic is a major global health crisis of this century. The use of neural networks with CT imaging can potentially improve clinicians' efficiency in diagnosis. Previous studies in this field have primarily focused on classifying the disease on CT images, while few studies targeted the localisation of disease regions. Developing neural networks for automating the latter task is impeded by limited CT images with pixel-level annotations available to the research community. METHODS: This paper proposes a weakly-supervised framework named "Weak Variational Autoencoder for Localisation and Enhancement" (WVALE) to address this challenge for COVID-19 CT images. This framework includes two components: anomaly localisation with a novel WVAE model and enhancement of supervised segmentation models with WVALE. RESULTS: The WVAE model have been shown to produce high-quality post-hoc attention maps with fine borders around infection regions, while weak supervision segmentation shows results comparable to conventional supervised segmentation models. The WVALE framework can enhance the performance of a range of supervised segmentation models, including state-of-art models for the segmentation of COVID-19 lung infection. CONCLUSIONS: Our study provides a proof-of-concept for weakly supervised segmentation and an alternative approach to alleviate the lack of annotation, while its independence from classification & segmentation frameworks makes it easily integrable with existing systems.

Radiomics in Brain Tumor: Image Assessment, Quantitative Feature Descriptors, and Machine-Learning Approaches

  • Zhou, M
  • Scott, J
  • Chaudhury, B
  • Hall, L
  • Goldgof, D
  • Yeom, KW
  • Iv, M
  • Ou, Y
  • Kalpathy-Cramer, J
  • Napel, S
American Journal of Neuroradiology 2017 Journal Article, cited 20 times
Website

HLA-DQA1 expression is associated with prognosis and predictable with radiomics in breast cancer

  • Zhou, J.
  • Xie, T.
  • Shan, H.
  • Cheng, G.
Radiat Oncol 2023 Journal Article, cited 0 times
Website
BACKGROUND: High HLA-DQA1 expression is associated with a better prognosis in many cancers. However, the association between HLA-DQA1 expression and prognosis of breast cancer and the noninvasive assessment of HLA-DQA1 expression are still unclear. This study aimed to reveal the association and investigate the potential of radiomics to predict HLA-DQA1 expression in breast cancer. METHODS: In this retrospective study, transcriptome sequencing data, medical imaging data, clinical and follow-up data were downloaded from the TCIA ( https://www.cancerimagingarchive.net/ ) and TCGA ( https://portal.gdc.cancer.gov/ ) databases. The clinical characteristic differences between the high HLA-DQA1 expression group (HHD group) and the low HLA-DQA1 expression group were explored. Gene set enrichment analysis, Kaplan‒Meier survival analysis and Cox regression were performed. Then, 107 dynamic contrast-enhanced magnetic resonance imaging features were extracted, including size, shape and texture. Using recursive feature elimination and gradient boosting machine, a radiomics model was established to predict HLA-DQA1 expression. Receiver operating characteristic (ROC) curves, precision-recall curves, calibration curves, and decision curves were used for model evaluation. RESULTS: The HHD group had better survival outcomes. The differentially expressed genes in the HHD group were significantly enriched in oxidative phosphorylation (OXPHOS) and estrogen response early and late signalling pathways. The radiomic score (RS) output from the model was associated with HLA-DQA1 expression. The area under the ROC curves (95% CI), accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of the radiomic model were 0.866 (0.775-0.956), 0.825, 0.939, 0.7, 0.775, and 0.913 in the training set and 0.780 (0.629-0.931), 0.659, 0.81, 0.5, 0.63, and 0.714 in the validation set, respectively, showing a good prediction effect. CONCLUSIONS: High HLA-DQA1 expression is associated with a better prognosis in breast cancer. Quantitative radiomics as a noninvasive imaging biomarker has potential value for predicting HLA-DQA1 expression.
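
A hedged scikit-learn sketch of the pipeline shape in the abstract: recursive feature elimination over radiomic features feeding a gradient-boosting classifier, scored by ROC AUC. Feature counts and data are simulated placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import RFE
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.normal(size=(120, 107))   # 107 radiomic features per patient, as in the abstract
y = (X[:, :5].sum(axis=1) + rng.normal(0, 1, 120) > 0).astype(int)  # toy expression label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
selector = RFE(GradientBoostingClassifier(random_state=0), n_features_to_select=10)
selector.fit(X_tr, y_tr)

model = GradientBoostingClassifier(random_state=0).fit(X_tr[:, selector.support_], y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te[:, selector.support_])[:, 1])
print(f"validation AUC = {auc:.3f}")
```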

MRI features predict survival and molecular markers in diffuse lower-grade gliomas

  • Zhou, Hao
  • Vallieres, Martin
  • Bai, Harrison X
  • Su, Chang
  • Tang, Haiyun
  • Oldridge, Derek
  • Zhang, Zishu
  • Xiao, Bo
  • Liao, Weihua
  • Tao, Yongguang
  • Zhou, Jianhua
  • Zhang, Paul
  • Yang, Li
2017 Journal Article, cited 41 times
Website
Background: Previous studies have shown that MR imaging features can be used to predict survival and molecular profile of glioblastoma. However, no study of a similar type has been performed on lower-grade gliomas (LGGs). Methods: Presurgical MRIs of 165 patients with diffuse low- and intermediate-grade gliomas (histological grades II and III) were scored according to the Visually Accessible Rembrandt Images (VASARI) annotations. Radiomic models using automated texture analysis and VASARI features were built to predict isocitrate dehydrogenase 1 (IDH1) mutation, 1p/19q codeletion status, histological grade, and tumor progression. Results: Interrater analysis showed significant agreement in all imaging features scored (κ = 0.703-1.000). On multivariate Cox regression analysis, no enhancement and a smooth non-enhancing margin were associated with longer progression-free survival (PFS), while a smooth non-enhancing margin was associated with longer overall survival (OS) after taking into account age, grade, tumor location, histology, extent of resection, and IDH1 1p/19q subtype. Using logistic regression and bootstrap testing evaluations, texture models were found to possess higher prediction potential for IDH1 mutation, 1p/19q codeletion status, histological grade, and progression of LGGs than VASARI features, with areas under the receiver-operating characteristic curves of 0.86 +/- 0.01, 0.96 +/- 0.01, 0.86 +/- 0.01, and 0.80 +/- 0.01, respectively. Conclusion: No enhancement and a smooth non-enhancing margin on MRI were predictive of longer PFS, while a smooth non-enhancing margin was a significant predictor of longer OS in LGGs. Textural analyses of MR imaging data predicted IDH1 mutation, 1p/19q codeletion, histological grade, and tumor progression with high accuracy.

Machine learning reveals multimodal MRI patterns predictive of isocitrate dehydrogenase and 1p/19q status in diffuse low-and high-grade gliomas.

  • Zhou, H.
  • Chang, K.
  • Bai, H. X.
  • Xiao, B.
  • Su, C.
  • Bi, W. L.
  • Zhang, P. J.
  • Senders, J. T.
  • Vallieres, M.
  • Kavouridis, V. K.
  • Boaro, A.
  • Arnaout, O.
  • Yang, L.
  • Huang, R. Y.
Journal of Neuro-Oncology 2019 Journal Article, cited 0 times
Website
PURPOSE: Isocitrate dehydrogenase (IDH) and 1p19q codeletion status are important in providing prognostic information as well as prediction of treatment response in gliomas. Accurate determination of the IDH mutation status and 1p19q co-deletion prior to surgery may complement invasive tissue sampling and guide treatment decisions. METHODS: Preoperative MRIs of 538 glioma patients from three institutions were used as a training cohort. Histogram, shape, and texture features were extracted from preoperative MRIs of T1 contrast enhanced and T2-FLAIR sequences. The extracted features were then integrated with age using a random forest algorithm to generate a model predictive of IDH mutation status and 1p19q codeletion. The model was then validated using MRIs from glioma patients in the Cancer Imaging Archive. RESULTS: Our model predictive of IDH achieved an area under the receiver operating characteristic curve (AUC) of 0.921 in the training cohort and 0.919 in the validation cohort. Age offered the highest predictive value, followed by shape features. Based on the top 15 features, the AUC was 0.917 and 0.916 for the training and validation cohort, respectively. The overall accuracy for 3 group prediction (IDH-wild type, IDH-mutant and 1p19q co-deletion, IDH-mutant and 1p19q non-codeletion) was 78.2% (155 correctly predicted out of 198). CONCLUSION: Using machine-learning algorithms, high accuracy was achieved in the prediction of IDH genotype in gliomas and moderate accuracy in a three-group prediction including IDH genotype and 1p19q codeletion.

Deep Learning for Prediction of N2 Metastasis and Survival for Clinical Stage I Non-Small Cell Lung Cancer

  • Zhong, Y.
  • She, Y.
  • Deng, J.
  • Chen, S.
  • Wang, T.
  • Yang, M.
  • Ma, M.
  • Song, Y.
  • Qi, H.
  • Wang, Y.
  • Shi, J.
  • Wu, C.
  • Xie, D.
  • Chen, C.
  • Multi-omics Classifier for Pulmonary Nodules (MISSION) Collaborative Group
Radiology 2022 Journal Article, cited 0 times
Website
Background Preoperative mediastinal staging is crucial for the optimal management of clinical stage I non-small cell lung cancer (NSCLC). Purpose To develop a deep learning signature for N2 metastasis prediction and prognosis stratification in clinical stage I NSCLC. Materials and Methods In this retrospective study conducted from May 2020 to October 2020 in a population with clinical stage I NSCLC, an internal cohort was adopted to establish a deep learning signature. Subsequently, the predictive efficacy and biologic basis of the proposed signature were investigated in an external cohort. A multicenter diagnostic trial (registration number: ChiCTR2000041310) was also performed to evaluate its clinical utility. Finally, on the basis of the N2 risk scores, the instructive significance of the signature in prognostic stratification was explored. The diagnostic efficiency was quantified with the area under the receiver operating characteristic curve (AUC), and the survival outcomes were assessed using the Cox proportional hazards model. Results A total of 3096 patients (mean age +/- standard deviation, 60 years +/- 9; 1703 men) were included in the study. The proposed signature achieved AUCs of 0.82, 0.81, and 0.81 in an internal test set (n = 266), external test cohort (n = 133), and prospective test cohort (n = 300), respectively. In addition, higher deep learning scores were associated with a lower frequency of EGFR mutation (P = .04), higher rate of ALK fusion (P = .02), and more activation of pathways of tumor proliferation (P < .001). Furthermore, in the internal test set and external cohort, higher deep learning scores were predictive of poorer overall survival (adjusted hazard ratio, 2.9; 95% CI: 1.2, 6.9; P = .02) and recurrence-free survival (adjusted hazard ratio, 3.2; 95% CI: 1.4, 7.4; P = .007). Conclusion The deep learning signature could accurately predict N2 disease and stratify prognosis in clinical stage I non-small cell lung cancer.

Prediction of Human Papillomavirus (HPV) Status in Oropharyngeal Squamous Cell Carcinoma Based on Radiomics and Machine Learning Algorithms: A Multi-Cohort Study

  • Zhinan, Liang
  • Wei, Zhang
  • Yudi, You
  • Yabing, Dong
  • Yuanzhe, Xiao
  • Xiulan, Liu
2022 Journal Article, cited 0 times
Website
Background: Human Papillomavirus status has significant implications for prognostic evaluation and clinical decision-making for Oropharyngeal Squamous Cell Carcinoma patients. As a novel method, radiomics provides a possibility for non-invasive diagnosis. The aim of this study was to examine whether Computed Tomography (CT) radiomics and machine learning classifiers can effectively predict Human Papillomavirus types and be validated in external data in patients with Oropharyngeal Squamous Cell Carcinoma based on imaging data from multi-institutional and multi-national cohorts. Materials and methods: 651 patients from three multi-institutional and multi-national cohorts were collected in this retrospective study: OPC-Radiomics cohort (n=497), MAASTRO cohort (n=74), and SNPH cohort (n=80). OPC-Radiomics cohort was randomized into training cohort and validation cohort with a ratio of 2:1. MAASTRO cohort and SNPH cohort were used as independent external testing cohorts. 1316 quantitative features were extracted from the Computed Tomography images of primary tumors. After feature selection by using Logistic Regression and Recursive Feature Elimination algorithms, 10 different machine-learning classifiers were trained and compared in different cohorts. Results: By comparing 10 kinds of machine-learning classifiers, we found that the best performance was achieved when using a Random Forest-based model, with the Area Under the Receiver Operating Characteristic (ROC) Curves (AUCs) of 0.97, 0.72, 0.63, and 0.78 in the training cohort, validation cohort, testing cohort 1 (MAASTRO cohort), and testing cohort 2 (SNPH cohort), respectively. Conclusion: The Random Forest-based radiomics model was effective in differentiating Human Papillomavirus status of Oropharyngeal Squamous Cell Carcinoma in a multi-national population, which provides the possibility for this non-invasive method to be widely applied in clinical practice.
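
Comparing classifiers by cross-validated AUC, as done in the study, can be sketched as follows; the three scikit-learn models below stand in for the ten classifiers actually compared, and the data are simulated.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(8)
X = rng.normal(size=(200, 20))                          # toy radiomic feature matrix
y = (X[:, 0] + rng.normal(0, 1, 200) > 0).astype(int)   # toy HPV status label

models = {
    "random_forest": RandomForestClassifier(random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm_rbf": SVC(probability=True, random_state=0),
}
for name, clf in models.items():
    aucs = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC = {aucs.mean():.3f} +/- {aucs.std():.3f}")
```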

Detecting MRI-Invisible Prostate Cancers Using a Weakly Supervised Deep Learning Model

  • Zheng, Y.
  • Zhang, J.
  • Huang, D.
  • Hao, X.
  • Qin, W.
  • Liu, Y.
2024 Journal Article, cited 0 times
Website
BACKGROUND: MRI is an important tool for accurate detection and targeted biopsy of prostate lesions. However, the imaging appearances of some prostate cancers are similar to those of the surrounding normal tissue on MRI, which are referred to as MRI-invisible prostate cancers (MIPCas). The detection of MIPCas remains challenging and requires extensive systematic biopsy for identification. In this study, we developed a weakly supervised UNet (WSUNet) to detect MIPCas. METHODS: The study included 777 patients (training set: 600; testing set: 177), all of whom underwent comprehensive prostate biopsies using an MRI-ultrasound fusion system. MIPCas were identified in MRI based on the Gleason grade (≥7) from known systematic biopsy results. RESULTS: The WSUNet model underwent validation through systematic biopsy in the testing set with an AUC of 0.764 (95% CI: 0.728-0.798). Furthermore, WSUNet exhibited a statistically significant precision improvement of 91.3% (p < 0.01) over conventional systematic biopsy methods in the testing set. This improvement resulted in a substantial 47.6% (p < 0.01) decrease in unnecessary biopsy needles, while maintaining the same number of positively identified cores as in the original systematic biopsy. CONCLUSIONS: In conclusion, the proposed WSUNet could effectively detect MIPCas, thereby reducing unnecessary biopsies.

Spatial cellular architecture predicts prognosis in glioblastoma

  • Zheng, Y.
  • Carrillo-Perez, F.
  • Pizurica, M.
  • Heiland, D. H.
  • Gevaert, O.
2023 Journal Article, cited 0 times
Website
Intra-tumoral heterogeneity and cell-state plasticity are key drivers for the therapeutic resistance of glioblastoma. Here, we investigate the association between spatial cellular organization and glioblastoma prognosis. Leveraging single-cell RNA-seq and spatial transcriptomics data, we develop a deep learning model to predict transcriptional subtypes of glioblastoma cells from histology images. Employing this model, we phenotypically analyze 40 million tissue spots from 410 patients and identify consistent associations between tumor architecture and prognosis across two independent cohorts. Patients with poor prognosis exhibit higher proportions of tumor cells expressing a hypoxia-induced transcriptional program. Furthermore, a clustering pattern of astrocyte-like tumor cells is associated with worse prognosis, while dispersion and connection of the astrocytes with other transcriptional subtypes correlate with decreased risk. To validate these results, we develop a separate deep learning model that utilizes histology images to predict prognosis. Applying this model to spatial transcriptomics data reveals survival-associated regional gene expression programs. Overall, our study presents a scalable approach to unravel the transcriptional heterogeneity of glioblastoma and establishes a critical connection between spatial cellular architecture and clinical outcomes.

Identification of Novel Transcriptome Signature as a Potential Prognostic Biomarker for Anti-Angiogenic Therapy in Glioblastoma Multiforme

  • Zheng, S.
  • Tao, W.
Cancers (Basel) 2021 Journal Article, cited 3 times
Website
Glioblastoma multiforme (GBM) is the most common and devastating type of primary brain tumor, with a median survival time of only 15 months. Having a clinically applicable genetic biomarker would lead to a paradigm shift in precise diagnosis, personalized therapeutic decisions, and prognostic prediction for GBM. Radiogenomic profiling connecting radiological imaging features with molecular alterations will offer a noninvasive method for genomic studies of GBM. To this end, we analyzed over 3800 glioma and GBM cases across four independent datasets. The Chinese Glioma Genome Atlas (CGGA) and The Cancer Genome Atlas (TCGA) databases were employed for RNA-Seq analysis, whereas the Ivy Glioblastoma Atlas Project (Ivy-GAP) and The Cancer Imaging Archive (TCIA) provided clinicopathological data. The Clinical Proteomic Tumor Analysis Consortium Glioblastoma Multiforme (CPTAC-GBM) was used for proteomic analysis. We identified a simple three-gene transcriptome signature (SOCS3, VEGFA, and TEK) that can connect GBM's overall prognosis with gene expression and simultaneously correlate radiographical features of perfusion imaging with SOCS3 expression levels. More importantly, the rampant development of neovascularization in GBM offers a promising target for therapeutic intervention. However, treatment with bevacizumab failed to improve overall survival. We identified SOCS3 expression levels as a potential selection marker for patients who may benefit from early initiation of angiogenesis inhibitors.

Age-related copy number variations and expression levels of F-box protein FBXL20 predict ovarian cancer prognosis

  • Zheng, S.
  • Fu, Y.
Translational Oncology 2020 Journal Article, cited 0 times
Website
About 70% of ovarian cancer (OvCa) cases are diagnosed at advanced stages (stage III/IV), with only 20-40% of patients surviving over 5 years after diagnosis. A reliable screening marker could enable a paradigm shift in OvCa early diagnosis and risk stratification. Age is one of the most significant risk factors for OvCa. Older women have much higher rates of OvCa diagnosis and poorer clinical outcomes. In this article, we studied the correlation between aging and genetic alterations in The Cancer Genome Atlas Ovarian Cancer dataset. We demonstrated that copy number variations (CNVs) and expression levels of the F-Box and Leucine-Rich Repeat Protein 20 (FBXL20), a substrate-recognizing protein in the SKP1-Cullin1-F-box-protein E3 ligase, can predict OvCa overall survival, disease-free survival, and progression-free survival. More importantly, FBXL20 copy number loss predicts the diagnosis of OvCa at a younger age, with over 60% of patients in that subgroup having OvCa diagnosed at an age of less than 60 years. Clinicopathological studies further demonstrated malignant histological and radiographical features associated with elevated FBXL20 expression levels. This study has thus identified a potential biomarker for OvCa prognosis.

Topology guided demons registration with local rigidity preservation

  • Zheng, Chaojie
  • Wang, Xiuying
  • Feng, Dagan
2016 Conference Proceedings, cited 1 times
Website

A statistical method for lung tumor segmentation uncertainty in PET images based on user inference

  • Zheng, Chaojie
  • Wang, Xiuying
  • Feng, Dagan
2015 Conference Proceedings, cited 0 times
Website

Bag of Tricks for 3D MRI Brain Tumor Segmentation

  • Zhao, Yuan-Xing
  • Zhang, Yan-Ming
  • Liu, Cheng-Lin
2020 Book Section, cited 0 times
3D brain tumor segmentation is essential for the diagnosis, monitoring, and treatment planning of brain diseases. In recent studies, the Deep Convolution Neural Network (DCNN) is one of the most potent methods for medical image segmentation. In this paper, we review the different kinds of tricks applied to 3D brain tumor segmentation with DCNNs. We divide such tricks into three main categories: data processing methods, including data sampling, random patch-size training, and semi-supervised learning; model devising methods, including architecture devising and result fusing; and optimizing processes, including warming-up learning and multi-task learning. Most of these approaches are not particular to brain tumor segmentation, but applicable to other medical image segmentation problems as well. Evaluated on the BraTS 2019 online testing set, we obtain Dice scores of 0.810, 0.883 and 0.861, and Hausdorff Distances (95th percentile) of 2.447, 4.792, and 5.581 for enhanced tumor core, whole tumor, and tumor core, respectively. Our method won second place in the BraTS 2019 Challenge for tumor segmentation.

Recurrent Multi-Fiber Network for 3D MRI Brain Tumor Segmentation

  • Zhao, Yue
  • Ren, Xiaoqiang
  • Hou, Kun
  • Li, Wentao
Symmetry 2021 Journal Article, cited 0 times
Website
Automated brain tumor segmentation based on 3D magnetic resonance imaging (MRI) is critical to disease diagnosis. Moreover, achieving robust and accurate automatic extraction of brain tumors is a major challenge because of the inherent heterogeneity of the tumor structure. In this paper, we present an efficient semantic segmentation network, the 3D recurrent multi-fiber network (RMFNet), which is based on an encoder-decoder architecture and segments the brain tumor accurately. The 3D RMFNet comprises a 3D recurrent unit and a 3D multi-fiber unit. First, we propose recurrent units that segment brain tumors by connecting recurrent units and convolutional layers, which strengthens the model's ability to integrate contextual information. Then, a 3D multi-fiber unit is added to the overall network to reduce the high computational cost incurred by using a 3D architecture to capture local features. The 3D RMFNet thus combines the advantages of both units. Extensive experiments on the Brain Tumor Segmentation (BraTS) 2018 challenge dataset show that our RMFNet remarkably outperforms state-of-the-art methods, achieving average Dice scores of 89.62%, 83.65% and 78.72% for the whole tumor, tumor core and enhancing tumor, respectively. The experimental results prove our architecture to be an efficient tool for accurate brain tumor segmentation.

Agile convolutional neural network for pulmonary nodule classification using CT images

  • Zhao, X.
  • Liu, L.
  • Qi, S.
  • Teng, Y.
  • Li, J.
  • Qian, W.
Int J Comput Assist Radiol Surg 2018 Journal Article, cited 6 times
Website
OBJECTIVE: Distinguishing benign from malignant pulmonary nodules using CT images is critical for their precise diagnosis and treatment. A new Agile convolutional neural network (CNN) framework is proposed to conquer the challenges of a small-scale medical image database and the small size of the nodules, and it improves the performance of pulmonary nodule classification using CT images. METHODS: A hybrid CNN of LeNet and AlexNet is constructed by combining the layer settings of LeNet with the parameter settings of AlexNet. A dataset with 743 CT image nodule samples is built from the 1018 CT scans of LIDC to train and evaluate the Agile CNN model. By adjusting the kernel size, learning rate, and other factors, the effect of these parameters on the performance of the CNN model is investigated, and an optimized setting of the CNN is finally obtained. RESULTS: After finely optimizing the settings of the CNN, the estimation accuracy and the area under the curve reach 0.822 and 0.877, respectively. The accuracy of the CNN is significantly dependent on the kernel size, learning rate, training batch size, dropout, and weight initialization. The best performance is achieved when the kernel size is set to [Formula: see text], the learning rate is 0.005, the batch size is 32, and dropout and Gaussian initialization are used. CONCLUSIONS: This competitive performance demonstrates that our proposed CNN framework and the optimization strategy of the CNN parameters are suitable for pulmonary nodule classification characterized by small medical datasets and small targets. The classification model might help diagnose and treat pulmonary nodules effectively.
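
As a rough illustration of the reported configuration (learning rate 0.005, batch size 32, dropout, Gaussian weight initialization), here is a hedged PyTorch sketch of a small LeNet-style hybrid; the layer sizes and the assumed 1×64×64 input patches are illustrative, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class AgileCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Dropout(0.5),            # dropout, as reported
            nn.Linear(64 * 13 * 13, 128), nn.ReLU(),  # assumes 1x64x64 input patches
            nn.Linear(128, 2),                        # benign vs. malignant
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = AgileCNN()
for m in model.modules():                             # Gaussian initialization
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        nn.init.normal_(m.weight, mean=0.0, std=0.01)
        nn.init.zeros_(m.bias)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005)  # reported learning rate
logits = model(torch.randn(32, 1, 64, 64))                 # reported batch size
```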

Bronchus Segmentation and Classification by Neural Networks and Linear Programming

  • Zhao, Tianyi
  • Yin, Zhaozheng
  • Wang, Jiao
  • Gao, Dashan
  • Chen, Yunqiang
  • Mao, Yunxiang
2019 Book Section, cited 0 times
Airway segmentation is a critical problem for lung disease analysis. However, building a complete airway tree is still a challenging problem because of the complex tree structure, and tracing the deep bronchi is not trivial in CT images because there are numerous small airways with various directions. In this paper, we develop two-stage 2D+3D neural networks and a linear-programming-based tracking algorithm for airway segmentation. Furthermore, we propose a bronchus classification algorithm based on the segmentation results. Our algorithm is evaluated on a dataset collected from four sources. We achieved a Dice coefficient of 0.94 and an F1 score of 0.86 under a centerline-based evaluation metric, compared to the ground truth manually labeled by our radiologists.

Airway Anomaly Detection by Prototype-Based Graph Neural Network

  • Zhao, Tianyi
  • Yin, Zhaozheng
2021 Conference Proceedings, cited 0 times
Website

Two-stage fusion set selection in multi-atlas-based image segmentation

  • Zhao, Tingting
  • Ruan, Dan
2015 Conference Proceedings, cited 0 times
Website
Conventional multi-atlas-based segmentation demands pairwise full-fledged registration between each atlas image and the target image, which leads to high computational cost and poses a great challenge in the new era of big data. On the other hand, only the most relevant atlases should contribute to the final label fusion. In this work, we introduce a two-stage fusion set selection method: the atlas collection is first trimmed into an augmented subset based on a low-cost registration and a preliminary relevance metric, followed by a further refinement based on a full-fledged registration and the corresponding relevance metric. A statistical inference model is established to relate the preliminary and the refined relevance metrics, and a proper augmented subset size is derived from it. Empirical evidence supported the inference model, and end-to-end performance assessment demonstrated the proposed scheme to be computationally efficient without compromising segmentation accuracy.

Improving Brain Tumor Segmentation in Multi-sequence MR Images Using Cross-Sequence MR Image Generation

  • Zhao, Guojing
  • Zhang, Jianpeng
  • Xia, Yong
2020 Book Section, cited 0 times
Accurate brain tumor segmentation using multi-sequence magnetic resonance (MR) imaging plays a pivotal role in clinical practice and research settings. Despite their prevalence, deep learning-based segmentation methods, which usually use multiple MR sequences as input, still have limited performance, partly due to their insufficient image representation ability. In this paper, we propose a brain tumor segmentation (BraTSeg) model, which uses cross-sequence MR image generation as a self-supervision tool to improve segmentation accuracy. This model is an ensemble of three image segmentation and generation (ImgSG) models, which are designed for simultaneous segmentation of brain tumors and generation of T1, T2, and Flair sequences, respectively. We evaluated the proposed BraTSeg model on the BraTS 2019 dataset and achieved an average Dice similarity coefficient (DSC) of 81.93%, 87.80%, and 83.44% in the segmentation of enhancing tumor, whole tumor, and tumor core on the testing set, respectively. Our results suggest that cross-sequence MR image generation is an effective self-supervision method that can improve the accuracy of brain tumor segmentation, and that the proposed BraTSeg model produces satisfactory segmentation of brain tumors and intra-tumor structures.

Segmentation then Prediction: A Multi-task Solution to Brain Tumor Segmentation and Survival Prediction

  • Zhao, Guojing
  • Jiang, Bowen
  • Zhang, Jianpeng
  • Xia, Yong
2021 Book Section, cited 0 times
Accurate brain tumor segmentation and survival prediction are two fundamental but challenging tasks in the computer aided diagnosis of gliomas. Traditionally, these two tasks were performed independently, without considering the correlation between them. We believe that both tasks should be performed under a unified framework so as to enable them mutually benefit each other. In this paper, we propose a multi-task deep learning model called segmentation then prediction (STP), to segment brain tumors and predict patient overall survival time. The STP model is composed of a segmentation module and a survival prediction module. The former uses 3D U-Net as its backbone, and the latter uses both local and global features. The local features are extracted by the last layer of the segmentation encoder, while the global features are produced by a global branch, which uses 3D ResNet-50 as its backbone. The STP model is jointly optimized for two tasks. We evaluated the proposed STP model on the BraTS 2020 validation dataset and achieved an average Dice similarity coefficient (DSC) of 0.790, 0.910, 0.851 for the segmentation of enhanced tumor core, whole tumor, and tumor core, respectively, and an accuracy of 65.5% for survival prediction.

MVP U-Net: Multi-View Pointwise U-Net for Brain Tumor Segmentation

  • Zhao, Changchen
  • Zhao, Zhiming
  • Zeng, Qingrun
  • Feng, Yuanjing
2021 Book Section, cited 0 times
It is a challenging task to segment brain tumors from multi-modality MRI scans. How to segment and reconstruct brain tumors more accurately and faster remains an open question. The key is to effectively model the spatial-temporal information that resides in the input volumetric data. In this paper, we propose the Multi-View Pointwise U-Net (MVP U-Net) for brain tumor segmentation. Our segmentation approach follows an encoder-decoder based 3D U-Net architecture, in which the 3D convolution is replaced by three 2D multi-view convolutions in three orthogonal views (axial, sagittal, coronal) of the input data to learn spatial features, plus one pointwise convolution to learn channel features. Further, we appropriately modify the Squeeze-and-Excitation (SE) block and introduce it into our original MVP U-Net after the concatenation section. In this way, the generalization ability of the model can be improved while the number of parameters is reduced. On the BraTS 2020 testing dataset, the mean Dice scores of the proposed method were 0.715, 0.839, and 0.768 for enhanced tumor, whole tumor, and tumor core, respectively. The results show the effectiveness of the proposed MVP U-Net with the SE block for multi-modal brain tumor segmentation.

Contour interpolation by deep learning approach

  • Zhao, C.
  • Duan, Y.
  • Yang, D.
J Med Imaging (Bellingham) 2022 Journal Article, cited 0 times
Website
PURPOSE: Contour interpolation is an important tool for expediting manual segmentation of anatomical structures. The process allows users to manually contour on discontinuous slices and then automatically fill in the gaps, thereby saving time and effort. The most widely used conventional shape-based interpolation (SBI) algorithm, which operates on shape information, often performs suboptimally near the superior and inferior borders of organs and for gastrointestinal structures. In this study, we present a generic deep learning solution to improve the robustness and accuracy of contour interpolation, especially for these historically difficult cases. APPROACH: A generic deep contour interpolation model was developed and trained using 16,796 publicly available cases from 5 different data libraries, covering 15 organs. The network inputs were a 128 x 128 x 5 image patch and the two-dimensional contour masks for the top and bottom slices of the patch. The outputs were the organ masks for the three middle slices. The performance was evaluated using both Dice scores and distance-to-agreement (DTA) values. RESULTS: The deep contour interpolation model achieved a Dice score of 0.95 ± 0.05 and a mean DTA value of 1.09 ± 2.30 mm, averaged over 3167 testing cases of all 15 organs. In comparison, the results of the conventional SBI method were 0.94 ± 0.08 and 1.50 ± 3.63 mm, respectively. For the difficult cases, the Dice score and DTA value were 0.91 ± 0.09 and 1.68 ± 2.28 mm for the deep interpolator, compared with 0.86 ± 0.13 and 3.43 ± 5.89 mm for SBI. The t-test results confirmed that the performance improvements were statistically significant (p < 0.05) for all cases in Dice scores and for small organs and difficult cases in DTA values. Ablation studies were also performed. CONCLUSIONS: A deep learning method was developed to enhance the process of contour interpolation. It could be useful for expediting the tasks of manual segmentation of organs and structures in medical images.
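
For reference, the conventional SBI baseline the paper compares against can be sketched with signed distance transforms: the in-between slice is the zero level set of a linear blend of the two bounding masks' signed distances. This is a generic sketch assuming binary NumPy masks, not the paper's deep interpolator.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    # Positive outside the contour, negative inside
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return outside - inside

def sbi_interpolate(mask_top, mask_bottom, t=0.5):
    """Interpolate a slice a fraction t of the way from top to bottom."""
    d = (1 - t) * signed_distance(mask_top) + t * signed_distance(mask_bottom)
    return d <= 0   # zero level set gives the interpolated contour

top = np.zeros((128, 128), bool); top[40:80, 40:80] = True
bottom = np.zeros((128, 128), bool); bottom[50:90, 50:90] = True
middle = sbi_interpolate(top, bottom)   # mask halfway between the two slices
```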

Reproducibility of radiomics for deciphering tumor phenotype with imaging

  • Zhao, Binsheng
  • Tan, Yongqiang
  • Tsai, Wei-Yann
  • Qi, Jing
  • Xie, Chuanmiao
  • Lu, Lin
  • Schwartz, Lawrence H
Scientific Reports 2016 Journal Article, cited 91 times
Website

CNN-Based Fully Automatic Glioma Classification with Multi-modal Medical Images

  • Zhao, Bingchao
  • Huang, Jia
  • Liang, Changhong
  • Liu, Zaiyi
  • Han, Chu
2021 Book Section, cited 0 times
The accurate classification of gliomas is essential in clinical practice. It is valuable for clinical practitioners and patients to choose the appropriate management accordingly, promoting the development of personalized medicine. In the MICCAI 2020 Combined Radiology and Pathology Classification Challenge, 4 MRI sequences and a WSI image are provided for each patient. Participants are required to use the multi-modal images to predict the subtypes of glioma. In this paper, we propose a fully automated pipeline for glioma classification. Our proposed model consists of two parts, feature extraction and feature fusion, which are respectively responsible for extracting representative image features and making predictions. Specifically, we propose a segmentation-free self-supervised feature extraction network for 3D MRI volumes, and we design a feature extraction model for the H&E-stained WSI that combines traditional image processing methods with a convolutional neural network. Finally, we fuse the extracted features from the multi-modal images and use a densely connected neural network to predict the final classification results. We evaluate the proposed model with F1-Score, Cohen's Kappa, and Balanced Accuracy on the validation set, achieving 0.943, 0.903, and 0.889, respectively.

Improving the fidelity of CT image colorization based on pseudo-intensity model and tumor metabolism enhancement

  • Zhang, Z.
  • Jiang, H.
  • Liu, J.
  • Shi, T.
Comput Biol Med 2021 Journal Article, cited 0 times
Website
BACKGROUND: Owing to the principles of imaging, most medical images are gray-scale images, and human eyes are more sensitive to color images than to gray-scale ones. State-of-the-art medical image colorization results are unnatural and unrealistic, especially in some organs, such as the lung field. METHOD: We propose a CT image colorization network that consists of a pseudo-intensity model, tumor metabolic enhancement, and the MemoPainter-cGAN colorization network. First, the distributions of both the density of CT images and the intensity of anatomical images are analyzed with the aim of building a pseudo-intensity model. Then, PET images, which are sensitive to tumor metabolism, are used to highlight the tumor regions. Finally, the MemoPainter-cGAN is used to generate colorized anatomical images. RESULTS: Our experiment verified that the mean structural similarity between the colorized images and the original color images is 0.995, which indicates that the colorized images largely preserve the features of the original images. The average image information entropy is 6.62, which is 13.4% higher than that of the images before metabolism enhancement and colorization, indicating that image fidelity is significantly improved. CONCLUSIONS: Our method can generate vivid anatomical images based on prior knowledge of tissue or organ intensity. The colorized PET/CT images, with abundant anatomical knowledge and high sensitivity to metabolic information, provide radiologists with access to a new modality that offers additional reference information.

Utility of Brain Parcellation in Enhancing Brain Tumor Segmentation and Survival Prediction

  • Zhang, Yue
  • Wu, Jiewei
  • Huang, Weikai
  • Chen, Yifan
  • Wu, Ed X.
  • Tang, Xiaoying
2021 Book Section, cited 0 times
In this paper, we proposed a UNet-based brain tumor segmentation method and a linear model-based survival prediction method. The effectiveness of UNet has been validated in automatically segmenting brain tumors from multimodal magnetic resonance (MR) images. Rather than network architecture, we focused more on making use of additional information (brain parcellation), training and testing strategy (coarse-to-fine), and ensemble technique to improve the segmentation performance. We then developed a linear classification model for survival prediction. Different from previous studies that mainly employ features from brain tumor segmentation, we also extracted features from brain parcellation, which further improved the prediction accuracy. On the challenge testing dataset, the proposed approach yielded average Dice scores of 88.43%, 84.51%, and 78.93% for the whole tumor, tumor core, and enhancing tumor in the segmentation task and an overall accuracy of 0.533 in the survival prediction task.

GenU-Net++: An Automatic Intracranial Brain Tumors Segmentation Algorithm on 3D Image Series with High Performance

  • Zhang, Yan
  • Liu, Xi
  • Wa, Shiyun
  • Liu, Yutong
  • Kang, Jiali
  • Lv, Chunli
Symmetry 2021 Journal Article, cited 0 times
Website
Automatic segmentation of intracranial brain tumors in three-dimensional (3D) image series is critical in screening and diagnosing related diseases. However, intracranial brain tumor images pose various challenges: (1) Multiple brain tumor categories hold particular pathological features. (2) It is a thorny issue to locate and discern brain tumors from other non-brain regions due to their complicated structure. (3) Traditional segmentation requires a noticeable difference in the brightness of the target of interest relative to the background. (4) Brain tumor magnetic resonance images (MRI) have blurred boundaries, similar gray values, and low image contrast. (5) Image information details may be dropped while suppressing noise. Existing methods and algorithms do not perform satisfactorily in overcoming these obstacles, and most of them share inadequate accuracy in brain tumor segmentation. Considering that the image segmentation task is a symmetric process in which downsampling and upsampling are performed sequentially, this paper proposes a segmentation algorithm based on U-Net++, aiming to address the aforementioned problems. This paper uses the BraTS 2018 dataset, which contains MR images of 245 patients. We propose a generative mask sub-network that can generate feature maps. This paper also uses the BiCubic interpolation method for upsampling to obtain segmentation results different from those of U-Net++. Subsequently, pixel-weighted fusion is adopted to fuse the two segmentation results, thereby improving the robustness and segmentation performance of the model. At the same time, we propose an auto-pruning mechanism based on the architectural features of U-Net++ itself. This mechanism deactivates a sub-network by zeroing its input, and it automatically prunes GenU-Net++ during the inference process, increasing the inference speed and improving the network performance by preventing overfitting. The model's PA, MIoU, P, and R, tested on the validation dataset, reach 0.9737, 0.9745, 0.9646, and 0.9527, respectively. The experimental results demonstrate that the proposed model outperforms the comparison models. Additionally, we encapsulate the model and develop a corresponding application for the macOS platform to make the model further applicable.

DDTNet: A dense dual-task network for tumor-infiltrating lymphocyte detection and segmentation in histopathological images of breast cancer

  • Zhang, Xiaoxuan
  • Zhu, Xiongfeng
  • Tang, Kai
  • Zhao, Yinghua
  • Lu, Zixiao
  • Feng, Qianjin
Med Image Anal 2022 Journal Article, cited 1 times
Website
The morphological evaluation of tumor-infiltrating lymphocytes (TILs) in hematoxylin and eosin (H&E)-stained histopathological images is the key to breast cancer (BCa) diagnosis, prognosis, and therapeutic response prediction. Currently, the qualitative assessment of TILs is carried out by pathologists, and computer-aided automatic lymphocyte measurement is still a great challenge because of the small size and complex distribution of lymphocytes. In this paper, we propose a novel dense dual-task network (DDTNet) to simultaneously achieve automatic TIL detection and segmentation in histopathological images. DDTNet consists of a backbone network (i.e., a feature pyramid network) for extracting multi-scale morphological characteristics of TILs, a detection module for the localization of TIL centers, and a segmentation module for the delineation of TIL boundaries, where a boundary-aware branch is further used to provide a shape prior for segmentation. An effective feature fusion strategy is utilized to introduce multi-scale features with lymphocyte location information from highly correlated branches for precise segmentation. Experiments on three independent lymphocyte datasets of BCa demonstrate that DDTNet outperforms other advanced methods in detection and segmentation metrics. As part of this work, we also propose a semi-automatic method (TILAnno) to generate high-quality boundary annotations for TILs in H&E-stained histopathological images. TILAnno is used to produce a new lymphocyte dataset that contains 5029 annotated lymphocyte boundaries, which has been released to facilitate computational histopathology in the future.

Spline curve deformation model with prior shapes for identifying adhesion boundaries between large lung tumors and tissues around lungs in CT images

  • Zhang, Xin
  • Wang, Jie
  • Yang, Ying
  • Wang, Bing
  • Gu, Lixu
Med Phys 2020 Journal Article, cited 0 times
Website
PURPOSE: Automated segmentation of lung tumors attached to anatomic structures such as the chest wall or mediastinum remains a technical challenge because of the similar Hounsfield units of these structures. To address this challenge, we propose a spline curve deformation model that combines prior shapes to correct large spatially contiguous errors (LSCEs) in input shapes derived from image-appearance cues. The model is then used to identify the adhesion boundaries between large lung tumors and the tissue around the lungs. METHODS: The deformation of the whole curve is driven by the transformation of the control points (CPs) of the spline curve, which are influenced by external and internal forces. The external force drives the model to fit the positions of the non-LSCEs of the input shapes, while the internal force ensures the local similarity of the displacements of neighboring CPs. The proposed model corrects the gross errors in the lung input shape caused by large lung tumors, where the initial lung shape for the model is inferred from the training shapes by shape-group-based sparse prior information and the input lung shape is inferred by adaptive-thresholding-based segmentation followed by morphological refinement. RESULTS: The accuracy of the proposed model is verified by applying it to images of lungs with either moderate large-sized (ML) tumors or giant large-sized (GL) tumors. The quantitative results in terms of the average Dice similarity coefficient (DSC) and Jaccard similarity index (SI) are 0.982 ± 0.006 and 0.965 ± 0.012 for segmentation of lungs adhered by ML tumors, and 0.952 ± 0.048 and 0.926 ± 0.059 for segmentation of lungs adhered by GL tumors, which give 0.943 ± 0.021 and 0.897 ± 0.041 for segmentation of the ML tumors, and 0.907 ± 0.057 and 0.888 ± 0.091 for segmentation of the GL tumors, respectively. In addition, the bidirectional Hausdorff distances are 5.7 ± 1.4 and 11.3 ± 2.5 mm for segmentation of lungs with ML and GL tumors, respectively. CONCLUSIONS: When combined with prior shapes, the proposed spline curve deformation model can deal with large spatially consecutive errors in object shapes obtained from image-appearance information. We verified this method by applying it to the segmentation of lungs with large tumors adhered to the tissue around the lungs, as well as of the large tumors themselves. Both the qualitative and quantitative results are more accurate and repeatable than those obtained with current state-of-the-art techniques.
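
A small sketch of the spline-curve machinery such a model builds on, assuming SciPy: a closed contour is parameterized by a handful of control points, and displacing one control point deforms the curve smoothly around it. The control-point layout here is illustrative only, not the paper's deformation model.

```python
import numpy as np
from scipy.interpolate import splprep, splev

theta = np.linspace(0, 2 * np.pi, 12, endpoint=False)
cps = np.stack([50 + 20 * np.cos(theta), 50 + 20 * np.sin(theta)])  # 12 control points

def closed_spline(points, n=200):
    pts = np.concatenate([points, points[:, :1]], axis=1)  # close the loop
    tck, _ = splprep(pts, s=0, per=True)                   # periodic spline fit
    return splev(np.linspace(0, 1, n), tck)                # dense contour samples

x, y = closed_spline(cps)            # initial contour
cps[:, 3] += [8.0, 0.0]              # displace one control point
x_def, y_def = closed_spline(cps)    # whole curve deforms smoothly near it
```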

Radiomics Strategy for Molecular Subtype Stratification of Lower‐Grade Glioma: Detecting IDH and TP53 Mutations Based on Multimodal MRI

  • Zhang, Xi
  • Tian, Qiang
  • Wang, Liang
  • Liu, Yang
  • Li, Baojuan
  • Liang, Zhengrong
  • Gao, Peng
  • Zheng, Kaizhong
  • Zhao, Bofeng
  • Lu, Hongbing
Journal of Magnetic Resonance Imaging 2018 Journal Article, cited 5 times
Website

A radiomics nomogram based on multiparametric MRI might stratify glioblastoma patients according to survival

  • Zhang, Xi
  • Lu, Hongbing
  • Tian, Qiang
  • Feng, Na
  • Yin, Lulu
  • Xu, Xiaopan
  • Du, Peng
  • Liu, Yang
European Radiology 2019 Journal Article, cited 0 times

Magnetic resonance imaging-based radiomic features for extrapolating infiltration levels of immune cells in lower-grade gliomas

  • Zhang, X.
  • Liu, S.
  • Zhao, X.
  • Shi, X.
  • Li, J.
  • Guo, J.
  • Niedermann, G.
  • Luo, R.
  • Zhang, X.
Strahlentherapie und Onkologie 2020 Journal Article, cited 3 times
Website
PURPOSE: To extrapolate the infiltration levels of immune cells in patients with lower-grade gliomas (LGGs) using magnetic resonance imaging (MRI)-based radiomic features. METHODS: A retrospective dataset of 516 patients with LGGs from The Cancer Genome Atlas (TCGA) database was analysed for the infiltration levels of six types of immune cells using the Tumor IMmune Estimation Resource (TIMER) based on RNA sequencing data. Radiomic features were extracted from 107 patients whose pre-operative MRI data are available in The Cancer Imaging Archive; 85 and 22 of these patients were assigned to the training and testing cohort, respectively. The least absolute shrinkage and selection operator (LASSO) was applied to select optimal radiomic features to build the radiomic signatures for extrapolating the infiltration levels of immune cells in the training cohort. The developed radiomic signatures were examined in the testing cohort using Pearson's correlation. RESULTS: The infiltration levels of B cells, CD4+ T cells, CD8+ T cells, macrophages, neutrophils and dendritic cells negatively correlated with overall survival in the 516-patient cohort when using univariate Cox regression. Age, Karnofsky Performance Scale, WHO grade, isocitrate dehydrogenase mutation status and the infiltration of neutrophils correlated with survival in multivariate Cox regression analysis. The infiltration levels of the 6 cell types could be estimated by radiomic features in the training cohort, and their corresponding radiomic signatures were built. The infiltration levels of B cells, CD8+ T cells, neutrophils and macrophages estimated by radiomics correlated with those estimated by TIMER in the testing cohort. Combining clinical/genomic features with the radiomic signatures only slightly improved the prediction of immune cell infiltration. CONCLUSION: We developed MRI-based radiomic models for extrapolating the infiltration levels of immune cells in LGGs. Our results may have implications for treatment planning.
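
A hedged sketch of the LASSO step described above, assuming scikit-learn and synthetic placeholders for the radiomic features and TIMER-derived infiltration scores: cross-validated L1 shrinkage selects a sparse signature, which is then checked against the testing cohort with Pearson's correlation.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
X_train = rng.normal(size=(85, 400))   # radiomic features, training cohort
y_train = rng.normal(size=85)          # TIMER infiltration estimate (placeholder)
X_test = rng.normal(size=(22, 400))
y_test = rng.normal(size=22)

lasso = LassoCV(cv=5).fit(X_train, y_train)
selected = np.flatnonzero(lasso.coef_)           # features surviving L1 shrinkage
print(f"{selected.size} features selected")

r, p = pearsonr(lasso.predict(X_test), y_test)   # testing-cohort validation
```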

Ability of ¹⁸F-FDG Positron Emission Tomography Radiomics and Machine Learning in Predicting KRAS Mutation Status in Therapy-Naive Lung Adenocarcinoma

  • Zhang, R.
  • Shi, K.
  • Hohenforst-Schmidt, W.
  • Steppert, C.
  • Sziklavari, Z.
  • Schmidkonz, C.
  • Atzinger, A.
  • Hartmann, A.
  • Vieth, M.
  • Forster, S.
Cancers (Basel) 2023 Journal Article, cited 0 times
Website
OBJECTIVE: Considering the essential role of KRAS mutations in NSCLC and the limited experience with PET radiomic features for KRAS mutation prediction, we built a prediction model in the current analysis. Our model aims to evaluate KRAS mutant status in lung adenocarcinoma by combining PET radiomics and machine learning. METHOD: Patients were retrospectively selected from our database and screened from the NSCLC radiogenomic dataset from TCIA. The dataset was randomly divided into three subgroups. Two open-source software programs, 3D Slicer and Python, were used to segment lung tumours and extract radiomic features from ¹⁸F-FDG-PET images. Feature selection was performed by the Mann-Whitney U test, Spearman's rank correlation coefficient, and RFE. Logistic regression was used to build the prediction models. AUCs from ROCs were used to compare the predictive abilities of the models. Calibration plots were obtained to examine the agreement of observed and predicted values in the validation and testing groups. DCA curves were generated to check the clinical impact of the best model. Finally, a nomogram was obtained to present the selected model. RESULTS: One hundred and nineteen patients with lung adenocarcinoma were included in our study. The whole group was divided into three datasets: a training set (n = 96), a validation set (n = 11), and a testing set (n = 12). In total, 1781 radiomic features were extracted from PET images. One hundred sixty-three predictive models were established according to each original feature group and their combinations. After model comparison and selection, one model, including wHLH_fo_IR, wHLH_glrlm_SRHGLE, wHLH_glszm_SAHGLE, and smoking habits, was validated with the highest predictive value. The model obtained AUCs of 0.731 (95% CI: 0.619~0.843), 0.750 (95% CI: 0.248~1.000), and 0.750 (95% CI: 0.448~1.000) in the training set, the validation set and the testing set, respectively. Results from calibration plots in the validation and testing groups indicated no departure between observed and predicted values in the two datasets (p = 0.377 and 0.861, respectively). CONCLUSIONS: Our model combining ¹⁸F-FDG-PET radiomics and machine learning showed good predictive ability for KRAS status in lung adenocarcinoma. It may be a helpful non-invasive method for screening the KRAS mutation status of heterogeneous lung adenocarcinoma before selected biopsy sampling.
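
A minimal sketch of the univariate screening step named in the abstract, assuming SciPy and placeholder data: a Mann-Whitney U test per radiomic feature against KRAS status, with an assumed 0.05 cutoff (the study's exact threshold is not stated here).

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
X = rng.normal(size=(96, 1781))    # 1781 PET radiomic features, training set
kras = rng.integers(0, 2, 96)      # mutant (1) vs. wild-type (0), placeholder

# Keep features whose distributions differ between mutant and wild-type groups
keep = [
    j for j in range(X.shape[1])
    if mannwhitneyu(X[kras == 1, j], X[kras == 0, j]).pvalue < 0.05
]
print(f"{len(keep)} features pass univariate screening")
```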

A region-adaptive non-local denoising algorithm for low-dose computed tomography images

  • Zhang, Pengcheng
  • Liu, Yi
  • Gui, Zhiguo
  • Chen, Yang
  • Jia, Lina
Mathematical Biosciences and Engineering 2022 Journal Article, cited 0 times
Low-dose computed tomography (LDCT) can effectively reduce radiation exposure in patients. However, with such dose reductions, large increases in speckled noise and streak artifacts occur, resulting in seriously degraded reconstructed images. The non-local means (NLM) method has shown potential for improving the quality of LDCT images. In the NLM method, similar blocks are obtained using fixed directions over a fixed range. However, the denoising performance of this method is limited. In this paper, a region-adaptive NLM method is proposed for LDCT image denoising. In the proposed method, pixels are classified into different regions according to the edge information of the image. Based on the classification results, the adaptive searching window, block size and filter smoothing parameter could be modified in different regions. Furthermore, the candidate pixels in the searching window could be filtered based on the classification results. In addition, the filter parameter could be adjusted adaptively based on intuitionistic fuzzy divergence (IFD). The experimental results showed that the proposed method performed better in LDCT image denoising than several of the related denoising methods in terms of numerical results and visual quality.
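
For orientation, a baseline (non-adaptive) NLM pass can be written with scikit-image, whose parameters correspond to the search window, block size, and filter smoothing parameter that the paper adapts per region; the image and parameter values here are placeholders.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

ct = np.random.rand(256, 256).astype(np.float32)   # placeholder LDCT slice
sigma = float(estimate_sigma(ct))                  # rough noise estimate

denoised = denoise_nl_means(
    ct,
    patch_size=5,        # block size compared between pixels
    patch_distance=6,    # radius of the search window
    h=0.8 * sigma,       # filter smoothing parameter
    fast_mode=True,
)
```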

Generalizing Deep Learning for Medical Image Segmentation to Unseen Domains via Deep Stacked Transformation

  • Zhang, Ling
  • Xu, Daguang
  • Xu, Ziyue
  • Wang, Xiaosong
  • Yang, Dong
  • Sanford, Thomas
  • Harmon, Stephanie
  • Turkbey, Baris
  • Wood, Bradford J
  • Roth, Holger
  • Myronenko, Andriy
IEEE Trans Med Imaging 2020 Journal Article, cited 0 times
Website
Recent advances in deep learning for medical image segmentation demonstrate expert-level accuracy. However, application of these models in clinically realistic environments can result in poor generalization and decreased accuracy, mainly due to the domain shift across different hospitals, scanner vendors, imaging protocols, patient populations, etc. Common transfer learning and domain adaptation techniques have been proposed to address this bottleneck. However, these solutions require data (and annotations) from the target domain to retrain the model, and are therefore restrictive in practice for widespread model deployment. Ideally, we wish to have a trained (locked) model that can work uniformly well across unseen domains without further training. In this paper, we propose a deep stacked transformation approach for domain generalization. Specifically, a series of n stacked transformations are applied to each image during network training. The underlying assumption is that the "expected" domain shift for a specific medical imaging modality could be simulated by applying extensive data augmentation on a single source domain, and consequently, a deep model trained on the augmented "big" data (BigAug) could generalize well on unseen domains. We exploit four surprisingly effective, but previously understudied, image-based characteristics for data augmentation to overcome the domain generalization problem. We train and evaluate the BigAug model (with n = 9 transformations) on three different 3D segmentation tasks (prostate gland, left atrium, left ventricle) covering two medical imaging modalities (MRI and ultrasound) involving eight publicly available challenge datasets. The results show that when training on a relatively small dataset (n = 10~32 volumes, depending on the size of the available datasets) from a single source domain: (i) BigAug models degrade an average of 11% (Dice score change) from source to unseen domain, substantially better than conventional augmentation (degrading 39%) and a CycleGAN-based domain adaptation method (degrading 25%); (ii) BigAug is better than "shallower" stacked transforms (i.e., those with fewer transforms) on unseen domains and demonstrates modest improvement over conventional augmentation on the source domain; (iii) after training with BigAug on one source domain, performance on an unseen domain is similar to training a model from scratch on that domain when using the same number of training samples. When training on large datasets (n = 465 volumes) with BigAug, (iv) application to unseen domains reaches the performance of state-of-the-art fully supervised models that are trained and tested on their source domains. These findings establish a strong benchmark for the study of domain generalization in medical imaging, and can be generalized to the design of highly robust deep segmentation models for clinical deployment.
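
To make the stacked-transformation idea concrete, here is a hedged sketch of a chain of randomized image-based augmentations applied to a single source-domain image; the specific chain (noise, gamma, blur, flip) is an assumption for illustration, not the paper's exact set of nine transforms.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def stacked_augment(img, rng):
    """Apply a stack of randomized transforms to one image in [0, 1]."""
    img = img + rng.normal(0, 0.02, img.shape)             # intensity noise
    img = np.clip(img, 0, 1) ** rng.uniform(0.7, 1.5)      # random gamma
    img = gaussian_filter(img, sigma=rng.uniform(0, 1.0))  # random blur
    if rng.random() < 0.5:                                 # random flip
        img = img[:, ::-1]
    return img

rng = np.random.default_rng(42)
augmented = stacked_augment(np.random.rand(128, 128), rng)
```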

Comparison of CT and MRI images for the prediction of soft-tissue sarcoma grading and lung metastasis via a convolutional neural networks model

  • Zhang, L.
  • Ren, Z.
Clin Radiol 2019 Journal Article, cited 0 times
Website
AIM: To realise the automated prediction of soft-tissue sarcoma (STS) grading and lung metastasis based on computed tomography (CT), T1-weighted (T1W) magnetic resonance imaging (MRI), and fat-suppressed T2-weighted MRI (FST2W) via a convolutional neural network (CNN) model. MATERIALS AND METHODS: MRI and CT images of 51 patients diagnosed with STS were analysed retrospectively. The patients were divided into three groups based on disease grading: a high-grade group (n=28), an intermediate-grade group (n=15), and a low-grade group (n=8). Among these patients, 32 had lung metastasis, while the remaining 19 had none. The data were divided into training, validation, and testing groups at a ratio of 5:2:3. Receiver operating characteristic (ROC) curves and accuracy values were obtained using the testing dataset to evaluate the performance of the CNN model. RESULTS: For STS grading, the accuracy of the T1W, FST2W, CT, and fused T1W and FST2W testing data was 0.86, 0.89, 0.86, and 0.85, respectively. The corresponding areas under the curve (AUCs) were 0.96, 0.97, 0.97, and 0.94, respectively. For the prediction of lung metastasis, the accuracy of the T1W, FST2W, CT, and fused T1W and FST2W test data was 0.92, 0.93, 0.88, and 0.91, respectively. The corresponding AUC values were 0.97, 0.96, 0.95, and 0.95, respectively. FST2W MRI performed best for predicting both STS grading and lung metastasis. CONCLUSION: MRI and CT images combined with the CNN model can be useful for making predictions regarding STS grading and lung metastasis, thus providing help for patient diagnosis and treatment.

A Deep Generative Model-Integrated Framework for Three-Dimensional Time-Difference Electrical Impedance Tomography

  • Zhang, Ke
  • Wang, Lu
  • Guo, Rui
  • Lin, Zhichao
  • Li, Maokun
  • Yang, Fan
  • Xu, Shenheng
  • Abubakar, Aria
2022 Journal Article, cited 0 times
The time-difference image reconstruction problem of electrical impedance tomography (EIT) refers to reconstructing the conductivity change in a human body part between two time points using the boundary impedance measurements. Conventionally, the problem can be formulated as a linear inverse problem. However, due to the physical property of the forward process, the inverse problem is seriously ill-posed. As a result, traditional regularized least-squares-based methods usually produce low-resolution images that are difficult to interpret. This work proposes a framework that uses a deep generative model to constrain the unknown conductivity. Specifically, this framework allows the inclusion of a constraint that describes a mathematical relationship between the generative model and the unknown conductivity. The resultant constrained minimization problem is solved using an extended alternating direction method of multipliers (ADMM). The effectiveness of the framework is demonstrated by the example of three-dimensional time-difference chest EIT imaging. Numerical experiment shows a significant improvement of the image quality compared with total variation-regularized least-squares method (PSNR is improved by 4.3% for 10% noise and 4.6% for 30% noise; SSIM is improved by 4.8% for 10% noise and 6.0% for 30% noise). Human experiments show improved correlation between the reconstructed images and images from reference techniques.

AResU-Net: Attention Residual U-Net for Brain Tumor Segmentation

  • Zhang, J. X.
  • Lv, X. G.
  • Zhang, H. B.
  • Liu, B.
2020 Journal Article, cited 0 times
Automatic segmentation of brain tumors from magnetic resonance imaging (MRI) is a challenging task due to the uneven, irregular and unstructured size and shape of tumors. Recently, brain tumor segmentation methods based on the symmetric U-Net architecture have achieved favorable performance. Meanwhile, the effectiveness of enhancing local responses for feature extraction and restoration has also been shown in recent works, which may encourage better performance on the brain tumor segmentation problem. Inspired by this, we introduce the attention mechanism into the existing U-Net architecture to explore the effects of important local responses on this task. More specifically, we propose an end-to-end 2D brain tumor segmentation network, the attention residual U-Net (AResU-Net), which simultaneously embeds an attention mechanism and residual units into U-Net to further improve brain tumor segmentation performance. AResU-Net adds a series of attention units between corresponding down-sampling and up-sampling processes, and it adaptively rescales features to effectively enhance local responses of the down-sampled residual features utilized for the feature recovery of the following up-sampling process. We extensively evaluate AResU-Net on two MRI brain tumor segmentation benchmarks, the BraTS 2017 and BraTS 2018 datasets. Experimental results illustrate that the proposed AResU-Net outperforms its baselines and achieves comparable performance with typical brain tumor segmentation methods.

Comparing effectiveness of image perturbation and test retest imaging in improving radiomic model reliability

  • Zhang, J.
  • Teng, X.
  • Zhang, X.
  • Lam, S. K.
  • Lin, Z.
  • Liang, Y.
  • Yu, H.
  • Siu, S. W. K.
  • Chang, A. T. Y.
  • Zhang, H.
  • Kong, F. M.
  • Yang, R.
  • Cai, J.
2023 Journal Article, cited 0 times
Website
Image perturbation is a promising technique for assessing radiomic feature repeatability, but whether it can achieve the same effect as test-retest imaging on model reliability is unknown. This study aimed to compare radiomic model reliability based on repeatable features determined by the two methods, using four different classifiers. A 191-patient public breast cancer dataset with 71 test-retest scans was used, with a pre-determined split of 117 training and 74 testing samples. We collected apparent diffusion coefficient images and manual tumor segmentations for radiomic feature extraction. Random translations, rotations, and contour randomizations were performed on the training images, and the intra-class correlation coefficient (ICC) was used to retain highly repeatable features. We evaluated model reliability in terms of both internal generalizability and robustness, quantified by training and testing AUC and prediction ICC. Higher testing performance was found at higher feature ICC thresholds, but it dropped significantly at ICC = 0.95 for the test-retest model. Similar optimal reliability can be achieved with testing AUC = 0.7-0.8 and prediction ICC > 0.9 at the ICC threshold of 0.9. It is recommended to include feature repeatability analysis using image perturbation in any radiomic study when test-retest is not feasible, but care should be taken when deciding the optimal feature repeatability criteria.
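
A small sketch of the repeatability filter underlying both approaches, assuming NumPy and synthetic data: ICC(1,1) from a one-way ANOVA over subjects and repeated (test-retest or perturbed) measurements, with the 0.9 threshold the abstract discusses.

```python
import numpy as np

def icc_1_1(x):
    """ICC(1,1) for x of shape (n_subjects, k_repeats), via one-way ANOVA."""
    n, k = x.shape
    grand = x.mean()
    bms = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)          # between-subject MS
    wms = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))  # within-subject MS
    return (bms - wms) / (bms + (k - 1) * wms)

# subjects x repeated measurements x features (placeholder values)
features = np.random.rand(117, 5, 100)
robust = [j for j in range(features.shape[2]) if icc_1_1(features[:, :, j]) >= 0.9]
```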

A fully automatic extraction of magnetic resonance image features in glioblastoma patients

  • Zhang, Jing
  • Barboriak, Daniel P
  • Hobbs, Hasan
  • Mazurowski, Maciej A
Medical Physics 2014 Journal Article, cited 21 times
Website
PURPOSE: Glioblastoma is the most common malignant brain tumor. It is characterized by low median survival time and high survival variability. Survival prognosis for glioblastoma is very important for optimized treatment planning. Imaging features observed in magnetic resonance (MR) images were shown to be a good predictor of survival. However, manual assessment of MR features is time-consuming and can be associated with a high inter-reader variability as well as inaccuracies in the assessment. In response to this limitation, the authors proposed and evaluated a computer algorithm that extracts important MR image features in a fully automatic manner. METHODS: The algorithm first automatically segmented the available volumes into a background region and four tumor regions. Then, it extracted ten features from the segmented MR imaging volumes, some of which were previously indicated as predictive of clinical outcomes. To evaluate the algorithm, the authors compared the extracted features for 73 glioblastoma patients to the reference standard established by manual segmentation of the tumors. RESULTS: The experiments showed that their algorithm was able to extract most of the image features with moderate to high accuracy. High correlation coefficients between the automatically extracted value and reference standard were observed for the tumor location, minor and major axis length as well as tumor volume. Moderately high correlation coefficients were also observed for proportion of enhancing tumor, proportion of necrosis, and thickness of enhancing margin. The correlation coefficients for all these features were statistically significant (p < 0.0001). CONCLUSIONS: The authors proposed and evaluated an algorithm that, given a set of MR volumes of a glioblastoma patient, is able to extract MR image features that correlate well with their reference standard. Future studies will evaluate how well the computer-extracted features predict survival.

DDU-Nets: Distributed Dense Model for 3D MRI Brain Tumor Segmentation

  • Zhang, Hanxiao
  • Li, Jingxiong
  • Shen, Mali
  • Wang, Yaqi
  • Yang, Guang-Zhong
2020 Book Section, cited 0 times
Segmentation of brain tumors and their subregions remains a challenging task due to their weak features and deformable shapes. In this paper, three patterns (cross-skip, skip-1 and skip-2) of distributed dense connections (DDCs) are proposed to enhance feature reuse and propagation of CNNs by constructing tunnels between key layers of the network. For better detecting and segmenting brain tumors from multi-modal 3D MR images, CNN-based models embedded with DDCs (DDU-Nets) are trained efficiently from pixel to pixel with a limited number of parameters. Postprocessing is then applied to refine the segmentation results by reducing the false-positive samples. The proposed method is evaluated on the BraTS 2019 dataset with results demonstrating the effectiveness of the DDU-Nets while requiring less computational cost.

Automatic lung tumor segmentation from CT images using improved 3D densely connected UNet

  • Zhang, G.
  • Yang, Z.
  • Jiang, S.
2022 Journal Article, cited 0 times
Website
Accurate lung tumor segmentation has great significance in the treatment planning of lung cancer. However, robust lung tumor segmentation is challenging due to the heterogeneity of tumors and the similar visual characteristics of tumors and surrounding tissues. Hence, we developed an improved 3D densely connected UNet (I-3D DenseUNet) to segment various lung tumors from CT images. The nested dense skip connections adopted in the I-3D DenseUNet aim to contribute similar feature maps between the encoder and decoder sub-networks. The dense connections used in the encoder-decoder blocks also encourage feature propagation and reuse. A robust data augmentation strategy based on a 3D thin plate spline (TPS) algorithm was employed to alleviate over-fitting. We evaluated our method on 938 lung tumors from three datasets: 421 tumors from The Cancer Imaging Archive (TCIA), 450 malignant tumors from the Lung Image Database Consortium (LIDC), and 67 tumors from a private dataset. Experimental results showed an excellent Dice similarity coefficient (DSC) of 0.8316 for the TCIA and LIDC datasets and 0.8167 for the private dataset. The proposed method presents a strong ability in lung tumor segmentation and has the potential to help radiologists in lung cancer treatment planning.

Automatic segmentation of organs at risk and tumors in CT images of lung cancer from partially labelled datasets with a semi-supervised conditional nnU-Net

  • Zhang, G.
  • Yang, Z.
  • Huo, B.
  • Chai, S.
  • Jiang, S.
Comput Methods Programs Biomed 2021 Journal Article, cited 0 times
Website
BACKGROUND AND OBJECTIVE: Accurately and reliably defining organs at risk (OARs) and tumors is the cornerstone of radiation therapy (RT) treatment planning for lung cancer. Almost all segmentation networks based on deep learning techniques rely on fully annotated data with strong supervision. However, existing public imaging datasets encountered in the RT domain frequently include singly labelled tumors or partially labelled organs, because annotating full OARs and tumors in CT images is both rigorous and tedious. To utilize labelled data from different sources, we proposed a dual-path semi-supervised conditional nnU-Net for OAR and tumor segmentation that is trained on a union of partially labelled datasets. METHODS: The framework employs the nnU-Net as the base model and introduces a conditioning strategy by incorporating auxiliary information as an additional input layer into the decoder. The conditional nnU-Net efficiently leverages prior conditional information to classify the target class at the pixelwise level. Specifically, we employ the uncertainty-aware mean teacher (UA-MT) framework to assist in OAR segmentation, which can effectively leverage unlabelled data (images from a tumor-labelled dataset) by encouraging consistent predictions of the same input under different perturbations. Furthermore, we individually design different combinations of loss functions to optimize the segmentation of OARs (Dice loss and cross-entropy loss) and tumors (Dice loss and focal loss) in a dual path. RESULTS: The proposed method is evaluated on two publicly available datasets covering the spinal cord, left and right lung, heart, esophagus, and lung tumor, on which satisfactory segmentation performance has been achieved in terms of both the region-based Dice similarity coefficient (DSC) and the boundary-based Hausdorff distance (HD). CONCLUSIONS: The proposed semi-supervised conditional nnU-Net breaks down the barriers between nonoverlapping labelled datasets and further alleviates the problem of "data hunger" and "data waste" in multi-class segmentation. The method has the potential to help radiologists with RT treatment planning in clinical practice.

AML leukocyte classification method for small samples based on ACGAN

  • Zhang, C.
  • Zhu, J.
2024 Journal Article, cited 0 times
Website
Leukemia is a class of hematologic malignancies, of which acute myeloid leukemia (AML) is the most common. Screening and diagnosis of AML are performed by microscopic examination or chemical testing of images of the patient's peripheral blood smear. In smear microscopy, the ability to quickly identify, count, and differentiate different types of blood cells is critical for disease diagnosis. With the development of deep learning (DL), classification techniques based on neural networks have been applied to the recognition of blood cells. However, DL methods place high requirements on the number of valid samples in a dataset. This study aims to assess the applicability of the auxiliary classifier generative adversarial network (ACGAN) to the classification task for small samples of white blood cells. The method is trained on the TCIA dataset, and its classification accuracy is compared with two classical classifiers and current state-of-the-art methods. The results are evaluated using accuracy, precision, recall, and F1 score. The accuracy of the ACGAN on the validation set is 97.1%, and the precision, recall, and F1 scores on the validation set are 97.5%, 97.3%, and 97.4%, respectively. In addition, the ACGAN received a higher score in comparison with other advanced methods, indicating that it is competitive in classification accuracy.
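
As a rough illustration of the ACGAN idea referenced above, here is a hedged PyTorch sketch of a discriminator with two heads, one adversarial (real vs. fake) and one for the leukocyte class; the input size, layer widths, and four-class assumption are illustrative only, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ACGANDiscriminator(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
        )
        feat = 64 * 16 * 16                          # assumes 64x64 RGB input
        self.adv_head = nn.Linear(feat, 1)           # real vs. fake logit
        self.cls_head = nn.Linear(feat, n_classes)   # leukocyte type logits

    def forward(self, x):
        h = self.backbone(x)
        return self.adv_head(h), self.cls_head(h)

d = ACGANDiscriminator()
adv_logit, cls_logit = d(torch.randn(8, 3, 64, 64))
# Training would combine BCEWithLogitsLoss on adv_logit with CrossEntropyLoss
# on cls_logit, so the generator learns class-conditional cell images.
```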

A deep learning reconstruction framework for low dose phase contrast computed tomography via inter-contrast enhancement

  • Zhang, Changsheng
  • Zhu, Guogang
  • Fu, Jian
  • Zhao, Gang
Measurement 2023 Journal Article, cited 0 times
Website
Phase contrast computed tomography (PCCT) offers excellent imaging contrast for soft tissue while generating absorption, phase, and dark-field contrast tomographic images. It has shown great potential in clinical diagnosis. However, existing PCCT methods require high radiation doses. Reducing tube current is a universal low-dose approach, but it introduces quantum noise into the projections. In this paper, we report a deep learning (DL) framework for low-dose PCCT based on inter-contrast enhancement. It utilizes the multi-contrast nature of PCCT and the varying effects of noise on each contrast: the missing structure in the contrasts that are more affected by noise can be recovered from those that are less affected. Taking grating-based PCCT as an example, the proposed framework is validated with experiments, and a dramatic quality improvement of the multi-contrast tomographic images is obtained. This study shows the potential of DL techniques in the field of low-dose PCCT.

PanelNet: A Novel Deep Neural Network for Predicting Collective Diagnostic Ratings by a Panel of Radiologists for Pulmonary Nodules

  • Zhang, Chunyan
  • Xu, Songhua
  • Li, Zongfang
2020 Conference Paper, cited 0 times
Website
Reducing the misdiagnosis rate is a central concern in modern medicine. In clinical practice, group-based collective diagnosis is frequently exercised to curb the misdiagnosis rate. However, little effort has been dedicated to emulating the collective intelligence behind this group-based decision making in computer-aided diagnosis research. To fill this gap, this study introduces a novel deep neural network, titled PanelNet, that can computationally model and reproduce the collective diagnosis capability demonstrated by a group of medical experts. To explore the validity of the new solution experimentally, we apply the proposed PanelNet to one of the key tasks in radiology: assessing malignancy ratings of pulmonary nodules. For each nodule and a given panel, PanelNet predicts the statistical distribution of malignancy ratings collectively judged by the panel of radiologists. Extensive experimental results consistently demonstrate that PanelNet outperforms multiple state-of-the-art computer-aided diagnosis methods applicable to the collective diagnostic task. To the best of our knowledge, no other collective computer-aided diagnosis method grounded in modern machine learning has been previously proposed. By design, PanelNet can also be easily applied to model collective diagnosis processes for other diseases.
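The core idea of predicting a panel's rating distribution can be sketched as follows. The toy network, the patch size, and the use of a KL-divergence loss against the empirical rating histogram are assumptions for illustration, not the PanelNet design.

```python
# Sketch: predicting a panel's rating histogram for one nodule (hypothetical, not PanelNet).
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64), nn.ReLU(), nn.Linear(64, 5))

nodule_patch = torch.randn(1, 1, 32, 32)              # dummy CT patch
panel_ratings = torch.tensor([1, 2, 2, 3])            # four radiologists' ratings (1..5)
target = torch.bincount(panel_ratings - 1, minlength=5).float()
target = (target / target.sum()).unsqueeze(0)         # empirical rating distribution

pred_log_dist = F.log_softmax(model(nodule_patch), dim=1)
loss = F.kl_div(pred_log_dist, target, reduction="batchmean")  # match collective judgement
loss.backward()
```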

A semantic fidelity interpretable-assisted decision model for lung nodule classification

  • Zhan, X.
  • Long, H.
  • Gou, F.
  • Wu, J.
Int J Comput Assist Radiol Surg 2023 Journal Article, cited 1 times
Website
PURPOSE: Early diagnosis of lung nodules is important for the treatment of lung cancer patients. Existing capsule network-based diagnostic models for lung nodule classification have shown promising interpretability, but they lack the ability to extract features robustly in shallow layers, which limits their performance. We therefore propose a semantic fidelity capsule encoding and interpretable (SFCEI)-assisted decision model for lung nodule multi-class classification. METHODS: First, we propose a multilevel receptive field feature encoding block to capture multi-scale features of lung nodules of different sizes. Second, we embed multilevel receptive field feature encoding blocks in the residual code-and-decode attention layer to extract fine-grained context features. The multi-scale and contextual features are integrated to form semantic-fidelity lung nodule attribute capsule representations, which enhances the performance of the model. RESULTS: We implemented comprehensive experiments on the LIDC-IDRI dataset to validate the superiority of the model. Stratified fivefold cross-validation shows that the accuracy (94.17%) of our method exceeds existing advanced approaches in the multi-class classification of malignancy scores for lung nodules. CONCLUSION: The experiments confirm that the proposed methodology can effectively capture the multi-scale and contextual features of lung nodules. It enhances the feature-extraction capability of shallow structures in capsule networks, which in turn improves the classification of malignancy scores. The interpretable model can support physicians' confidence in clinical decision-making.

Convection enhanced delivery of anti-angiogenic and cytotoxic agents in combination therapy against brain tumour

  • Zhan, W.
Eur J Pharm Sci 2020 Journal Article, cited 0 times
Website
Convection enhanced delivery is an effective alternative to routine delivery methods for overcoming the blood-brain barrier. However, its treatment efficacy remains disappointing in the clinic owing to rapid drug elimination in tumour tissue. In this study, multiphysics modelling is employed to investigate the combined delivery of anti-angiogenic and cytotoxic drugs from the perspective of intratumoural transport. Simulations are based on a 3-D realistic brain tumour model reconstructed from patient magnetic resonance images. The tumour microvasculature is targeted by bevacizumab, and six cytotoxic drugs are included: doxorubicin, carmustine, cisplatin, fluorouracil, methotrexate and paclitaxel. The treatment efficacy is evaluated in terms of the distribution volume in which the drug concentration is above the corresponding LD90. Results demonstrate that the infusion of bevacizumab only slightly improves interstitial fluid flow, but is highly effective in reducing fluid loss from the blood circulatory system and thereby inhibiting concentration dilution. As the transport of bevacizumab is dominated by convection, its spatial distribution and anti-angiogenic effectiveness are highly sensitive to the directional interstitial fluid flow. Infusing bevacizumab enhances the delivery outcomes of all six drugs, but to differing degrees: the delivery of doxorubicin improves the most, whereas the impact on methotrexate and paclitaxel is limited. Fluorouracil covers a distribution volume comparable to paclitaxel in the combination therapy for effective cell killing. The results of this study can guide the design of this co-delivery treatment.
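The efficacy metric used above, the distribution volume above LD90, reduces to a thresholded voxel count once a concentration field is available. A minimal sketch with synthetic values (the voxel size, concentration field, and LD90 threshold are all placeholders):

```python
# Sketch: distribution volume above LD90 from a simulated concentration field (assumed values).
import numpy as np

voxel_volume_mm3 = 0.5 ** 3                       # isotropic 0.5 mm grid (assumed)
concentration = np.random.lognormal(mean=-1.0, sigma=1.0, size=(64, 64, 64))  # dummy field
ld90 = 0.5                                        # drug-specific LD90 threshold (placeholder)

effective = concentration >= ld90
distribution_volume_mm3 = effective.sum() * voxel_volume_mm3
print(f"Volume above LD90: {distribution_volume_mm3:.1f} mm^3 "
      f"({100 * effective.mean():.1f}% of the domain)")
```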

Effects of Focused-Ultrasound-and-Microbubble-Induced Blood-Brain Barrier Disruption on Drug Transport under Liposome-Mediated Delivery in Brain Tumour: A Pilot Numerical Simulation Study

  • Zhan, Wenbo
Pharmaceutics 2020 Journal Article, cited 0 times
Website

A multimodal radiomic machine learning approach to predict the LCK expression and clinical prognosis in high-grade serous ovarian cancer

  • Zhan, F.
  • He, L.
  • Yu, Y.
  • Chen, Q.
  • Guo, Y.
  • Wang, L.
2023 Journal Article, cited 0 times
Website
We developed and validated a multimodal radiomic machine learning approach to noninvasively predict the expression of lymphocyte cell-specific protein-tyrosine kinase (LCK) and the clinical prognosis of patients with high-grade serous ovarian cancer (HGSOC). We analyzed gene enrichment using 343 HGSOC cases extracted from The Cancer Genome Atlas. The corresponding biomedical computed tomography images, accessed from The Cancer Imaging Archive, were used to construct the radiomic signature (Radscore). A radiomic nomogram was built by combining the Radscore with clinical and genetic information in a multimodal analysis. We compared model performance and clinical practicability via area under the curve (AUC), Kaplan-Meier survival, and decision curve analyses. LCK mRNA expression was associated with the prognosis of HGSOC patients, serving as a significant prognostic marker of the immune response and immune cell infiltration. Six radiomic features were chosen to predict the expression of LCK and overall survival (OS) in HGSOC patients. The logistic regression (LR) radiomic model exhibited slightly better predictive ability than the support vector machine model. With five-fold cross-validation, the LR radiomic model predicted the level of LCK expression with AUCs of 0.879 and 0.834 in the training and validation sets, respectively. Decision curve analysis at 60 months demonstrated the high clinical utility of our model within thresholds of 0.25 and 0.7. The radiomic nomograms were robust and displayed effective calibration. Abnormally high expression of LCK in HGSOC patients is significantly correlated with the tumor immune microenvironment and can be used as an essential indicator for predicting the prognosis of HGSOC. The multimodal radiomic machine learning approach can capture the heterogeneity of HGSOC and noninvasively predict the expression of LCK, providing a new approach to predicting the clinical prognosis of HGSOC and formulating personalized treatment plans.
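A minimal sketch of the reported evaluation style, logistic regression on a handful of radiomic features with five-fold cross-validated AUC, using synthetic data in place of the actual Radscore features:

```python
# Sketch: LR with 5-fold cross-validated AUC (synthetic stand-in for the six radiomic features).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(343, 6))                    # six selected features, 343 cases (as reported)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=343) > 0).astype(int)  # surrogate LCK label

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
aucs = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"5-fold AUC: {aucs.mean():.3f} +/- {aucs.std():.3f}")
```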

The prognostic value of CT radiomic features from primary tumours and pathological lymph nodes in head and neck cancer patients

  • Zhai, Tiantian
2020 Thesis, cited 0 times
Website
Head and neck cancer (HNC) is responsible for about 0.83 million new cancer cases and 0.43 million cancer deaths worldwide every year. Around 30%-50% of patients with locally advanced HNC experience treatment failures, predominantly occurring at the site of the primary tumor, followed by regional failures and distant metastases. To optimize treatment strategy, the overall aim of this thesis is to identify the patients who are at high risk of treatment failure. We developed and externally validated a series of models on the different patterns of failure to predict the risk of local failures, regional failures, distant metastases and individual nodal failures in HNC patients. New radiomic features based on CT images were included in our modelling analysis, and we showed that these radiomic features significantly improved the prognostic performance of models containing clinical factors. Our studies provide clinicians with new tools to predict the risk of treatment failures. This may support optimization of the treatment strategy for this disease and subsequently improve patient survival.

A lossless DWT-SVD domain watermarking for medical information security

  • Zermi, N.
  • Khaldi, A.
  • Kafi, M. R.
  • Kahlessenane, F.
  • Euschi, S.
Multimed Tools Appl 2021 Journal Article, cited 0 times
Website
The goal of this work is to protect, as much as possible, the images exchanged in telemedicine and to avoid any confusion between patients' radiographs. The images are watermarked with the patient's information as well as the acquisition data, so that during extraction the doctor can affirm with certainty that the images belong to the treated patient. The ultimate goal of our work is to integrate the watermark with as little distortion as possible, so as to retain the medical information in the image. In this approach, a DWT decomposition is applied to the image, which allows a satisfactory adjustment during insertion. An SVD is then applied to the three subbands LL, LH and HL, which retains the maximum energy of the image in a minimum of singular values. A specific combination of the three resulting singular value matrices is then performed for watermark integration. The proposed approach ensures data integrity, patient confidentiality when sharing data, and robustness to several conventional attacks.
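The DWT-SVD embedding step can be sketched as below for a single subband. The paper applies SVD to LL, LH and HL and uses its own combination rule, so the Haar wavelet, the LL-only embedding, and the strength alpha here are illustrative assumptions.

```python
# Sketch of DWT-SVD embedding in the LL subband (one subband only; not the paper's full scheme).
import numpy as np
import pywt

image = np.random.rand(256, 256)                   # placeholder radiograph
watermark = np.random.rand(128, 128)               # patient/acquisition data as an image
alpha = 0.05                                       # embedding strength (assumed)

LL, (LH, HL, HH) = pywt.dwt2(image, "haar")        # 1-level DWT
U, S, Vt = np.linalg.svd(LL, full_matrices=False)
Sw = np.linalg.svd(watermark, compute_uv=False)    # singular values of the watermark

S_marked = S + alpha * Sw                          # embed into the singular values
LL_marked = (U * S_marked) @ Vt                    # equivalent to U @ diag(S_marked) @ Vt
watermarked = pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")
```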

Blockchain for Privacy Preserving and Trustworthy Distributed Machine Learning in Multicentric Medical Imaging (C-DistriM)

  • Zerka, Fadila
  • Urovi, Visara
  • Vaidyanathan, Akshayaa
  • Barakat, Samir
  • Leijenaar, Ralph T. H.
  • Walsh, Sean
  • Gabrani-Juma, Hanif
  • Miraglio, Benjamin
  • Woodruff, Henry C.
  • Dumontier, Michel
  • Lambin, Philippe
IEEE Access 2020 Journal Article, cited 0 times
The utility of Artificial Intelligence (AI) in healthcare strongly depends upon the quality of the data used to build models, and the confidence in the predictions they generate. Access to sufficient amounts of high-quality data to build accurate and reliable models remains problematic owing to substantive legal and ethical constraints on making clinically relevant research data available offsite. New technologies such as distributed learning offer a pathway forward, but unfortunately tend to suffer from a lack of transparency, which undermines trust in what data are used for the analysis. To address such issues, we hypothesized that a novel distributed learning approach combining sequential distributed learning with a blockchain-based platform, namely Chained Distributed Machine learning (C-DistriM), would be feasible and would give results similar to a standard centralized approach. C-DistriM enables health centers to dynamically participate in training distributed learning models. We demonstrate C-DistriM using the NSCLC-Radiomics open data to predict two-year lung-cancer survival. A comparison of the performance of this distributed solution, evaluated in six different scenarios, with the centralized approach showed no statistically significant difference in AUC between central and distributed models; all DeLong tests yielded p > 0.05. This methodology removes the need to blindly trust the computation on one specific server in a distributed learning network. This fusion of blockchain and distributed learning serves as a proof-of-concept to increase transparency and trust, and ultimately to accelerate the adoption of AI in multicentric studies. We conclude that our blockchain-based model for sequential training on distributed datasets is feasible and provides performance equivalent to the centralized approach.

Privacy preserving distributed learning classifiers - Sequential learning with small sets of data

  • Zerka, F.
  • Urovi, V.
  • Bottari, F.
  • Leijenaar, R. T. H.
  • Walsh, S.
  • Gabrani-Juma, H.
  • Gueuning, M.
  • Vaidyanathan, A.
  • Vos, W.
  • Occhipinti, M.
  • Woodruff, H. C.
  • Dumontier, M.
  • Lambin, P.
Comput Biol Med 2021 Journal Article, cited 0 times
Website
BACKGROUND: Artificial intelligence (AI) typically requires a significant amount of high-quality data to build reliable models, and gathering enough data within a single institution can be particularly challenging. In this study we investigated the impact of using sequential learning to exploit very small, siloed sets of clinical and imaging data to train AI models. Furthermore, we evaluated the capacity of such models to achieve performance equivalent to models trained on the same data held in a single centralized database. METHODS: We propose a privacy-preserving distributed learning framework that learns sequentially from each dataset. The framework is applied to three machine learning algorithms: Logistic Regression, Support Vector Machines (SVM), and Perceptron. The models were evaluated using four open-source datasets (Breast cancer, Indian liver, NSCLC-Radiomics, and Stage III NSCLC). FINDINGS: The proposed framework ensured predictive performance comparable to a centralized learning approach, and pairwise DeLong tests showed no significant difference between the compared pairs for each dataset. INTERPRETATION: Distributed learning helps preserve medical data privacy. We foresee this technology increasing the number of collaborative opportunities to develop robust AI, becoming the default solution in scenarios where collecting enough data from a single reliable source is logistically impossible. Distributed sequential learning provides a privacy-preserving means for institutions with small but clinically valuable datasets to collaboratively train predictive AI while preserving the privacy of their patients. Such models perform similarly to models built on a larger central dataset.
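A minimal sketch of sequential, site-by-site training in the spirit of this framework, using scikit-learn's SGD-based logistic regression as a stand-in; the sites, data, and pass counts are synthetic.

```python
# Sketch: the model travels from silo to silo; raw data never leaves an institution.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(42)
silos = [(rng.normal(size=(40, 10)), rng.integers(0, 2, 40)) for _ in range(3)]  # 3 small sites

clf = SGDClassifier(loss="log_loss", random_state=0)   # logistic regression via SGD
                                                       # (older scikit-learn spells this "log")
classes = np.array([0, 1])
for X_site, y_site in silos:
    for _ in range(20):                                # a few passes over the local data
        clf.partial_fit(X_site, y_site, classes=classes)

print("coefficients after sequential training:", clf.coef_.round(2))
```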

Starlight: A kernel optimizer for GPU processing

  • Zeni, Alberto
  • Del Sozzo, Emanuele
  • D'Arnese, Eleonora
  • Conficconi, Davide
  • Santambrogio, Marco D.
2024 Journal Article, cited 0 times
Website
Over the past few years, GPUs have found widespread adoption in many scientific domains, offering notable performance and energy efficiency advantages compared to CPUs. However, optimizing high-performance GPU kernels poses challenges given the complexities of GPU architectures and programming models. Moreover, current GPU development tools provide few high-level suggestions and overlook the underlying hardware. Here we present Starlight, an open-source, highly flexible tool for enhancing GPU kernel analysis and optimization. Starlight autonomously describes Roofline Models, examines performance metrics, and correlates these insights with GPU architectural bottlenecks. Additionally, Starlight predicts potential performance enhancements before the source code is altered. We demonstrate its efficacy by applying it to genomics and physics applications from the literature, attaining speedups from 1.1× to 2.5× over state-of-the-art baselines. Furthermore, Starlight supports the development of new GPU kernels, which we exemplify through an image processing application, showing speedups of 12.7× and 140× compared with state-of-the-art FPGA- and GPU-based solutions.

An Attention Based Deep Learning Model for Direct Estimation of Pharmacokinetic Maps from DCE-MRI Images

  • Zeng, Qingyuan
  • Zhou, Wu
2021 Conference Paper, cited 0 times
Website
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a useful imaging technique that can quantitatively measure pharmacokinetic (PK) parameters to characterize the microvasculature of tissues. Typically, the PK parameters are extracted by fitting the MR signal intensity of each pixel over the time series with a nonlinear least-squares method. The main disadvantage is that a single MR slice contains thousands of voxels, so the time required to fit all voxels to obtain the PK parameters is very large. Recently, deep learning methods based on convolutional neural networks (CNNs) and Long Short-Term Memory (LSTM) networks have been applied to directly estimate the PK parameters from the acquired DCE-MRI image-temporal series. However, effectively extracting discriminative spatial and temporal features within DCE-MRI for the estimation of PK parameters is still a challenging problem, due to the large intensity variation of tissue images across the temporal phases of DCE-MRI during the injection of contrast agents. In this work, we propose an attention-based deep learning model for the estimation of PK parameters, which improves estimation performance by focusing on dominant spatial and temporal characteristics. Specifically, a temporal frame attention block (FAB) and a channel/spatial attention block (CSAB) are separately designed to focus on dominant features in specific temporal phases, channels and spatial areas. Experimental results on clinical DCE-MRI from the open-source RIDER-NEURO dataset, with quantitative and qualitative evaluation, demonstrate that the proposed method outperforms previously reported CNN-based and LSTM-based deep learning models for the estimation of PK maps, and an ablation study demonstrates the effectiveness of the proposed attention modules. In addition, visualization of the attention mechanism reveals interesting findings that are consistent with clinical interpretation.
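For context, the conventional per-voxel fitting that such networks aim to accelerate looks roughly like the following standard-Tofts sketch; the arterial input function, noise level, and parameter bounds are placeholders.

```python
# Sketch: per-voxel nonlinear least-squares PK fitting (standard Tofts model, synthetic data).
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 5, 60)                         # minutes
cp = 5.0 * t * np.exp(-t / 0.5)                   # toy arterial input function

def tofts(t, ktrans, ve):
    """C_t(t) = Ktrans * int_0^t Cp(tau) exp(-(Ktrans/ve)(t - tau)) dtau, discretised."""
    dt = t[1] - t[0]
    kernel = np.exp(-(ktrans / ve) * t)
    return ktrans * np.convolve(cp, kernel)[: len(t)] * dt

true_ct = tofts(t, 0.25, 0.3)
noisy_ct = true_ct + 0.01 * np.random.randn(len(t))   # one voxel's concentration curve

(ktrans_hat, ve_hat), _ = curve_fit(tofts, t, noisy_ct, p0=(0.1, 0.2),
                                    bounds=([1e-4, 1e-3], [2.0, 1.0]))
print(f"Ktrans={ktrans_hat:.3f} /min, ve={ve_hat:.3f}")   # repeated for every voxel in a map
```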

Segmentation of gliomas in pre-operative and post-operative multimodal magnetic resonance imaging volumes based on a hybrid generative-discriminative framework

  • Zeng, Ke
  • Bakas, Spyridon
  • Sotiras, Aristeidis
  • Akbari, Hamed
  • Rozycki, Martin
  • Rathore, Saima
  • Pati, Sarthak
  • Davatzikos, Christos
2016 Conference Proceedings, cited 8 times
Website

Ensemble CNN Networks for GBM Tumors Segmentation Using Multi-parametric MRI

  • Zeineldin, Ramy A.
  • Karar, Mohamed E.
  • Mathis-Ullrich, Franziska
  • Burgert, Oliver
2022 Book Section, cited 0 times
Glioblastomas are the most aggressive, fast-growing primary brain cancers, originating in the glial cells of the brain. Accurate identification of the malignant brain tumor and its sub-regions is still one of the most challenging problems in medical image segmentation. The Brain Tumor Segmentation Challenge (BraTS) has been a popular benchmark for automatic brain glioblastoma segmentation algorithms since its initiation. This year, the BraTS 2021 challenge provided the largest multi-parametric MRI (mpMRI) dataset, comprising 2,000 pre-operative patients. In this paper, we propose a new aggregation of two deep learning frameworks, namely DeepSeg and nnU-Net, for automatic glioblastoma recognition in pre-operative mpMRI. Our ensemble method obtains Dice similarity scores of 92.00, 87.33, and 84.10 and Hausdorff Distances of 3.81, 8.91, and 16.02 for the enhancing tumor, tumor core, and whole tumor regions, respectively, on the BraTS 2021 validation set, ranking us among the top ten teams. These experimental findings provide evidence that the method can be readily applied clinically, thereby aiding in brain cancer prognosis, therapy planning, and therapy response monitoring. A docker image for reproducing our segmentation results is available online at (https://hub.docker.com/r/razeineldin/deepseg21).

Statistical Analysis of Haralick Texture Features to Discriminate Lung Abnormalities

  • Zayed, Nourhan
  • Elnemr, Heba A
International Journal of Biomedical Imaging 2015 Journal Article, cited 30 times
Website
Haralick texture features are a well-known mathematical method for detecting lung abnormalities, giving the physician the opportunity to localize the abnormal tissue type, either lung tumor or pulmonary edema. In this paper, statistical evaluation of the different features is used to report the performance of the proposed method. CT datasets from thirty-seven patients with either lung tumor or pulmonary edema were included in this study. The CT images are first preprocessed for noise reduction and image enhancement, followed by segmentation techniques to segment the lungs, and finally Haralick texture features to detect the type of abnormality within the lungs. In spite of the low contrast and high noise in the images, the proposed algorithms produce promising results in detecting lung abnormality in most of the patients in comparison with normal cases, and suggest that some features are significantly more discriminative than others.
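Haralick-style features are typically derived from a gray-level co-occurrence matrix (GLCM). A minimal scikit-image sketch on a synthetic ROI follows; this is not the authors' pipeline, and note that scikit-image versions before 0.19 spell the functions greycomatrix/greycoprops.

```python
# Sketch: a few GLCM texture features from a stand-in lung ROI.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

roi = (np.random.rand(64, 64) * 255).astype(np.uint8)      # dummy segmented lung ROI

glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop).mean())             # averaged over the two angles
```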

Brain tumor detection based on Naïve Bayes Classification

  • Zaw, Hein Tun
  • Maneerat, Noppadol
  • Win, Khin Yadanar
2019 Conference Paper, cited 2 times
Website
Brain cancer is caused by abnormal growth of cells, called glial cells, in the brain. Over the years, the number of patients with brain cancer has been increasing with the aging population, making it a worldwide health problem. The objective of this paper is to develop a method to detect brain tissues affected by cancer, especially grade-4 tumors, glioblastoma multiforme (GBM). GBM is one of the most malignant cancerous brain tumors, as it is fast growing and likely to spread to other parts of the brain. In this paper, Naïve Bayes classification is utilized to accurately recognize a tumor region containing all spreading cancerous tissues. A brain MRI database, preprocessing, morphological operations, pixel subtraction, maximum entropy thresholding, statistical feature extraction, and a Naïve Bayes classifier-based prediction algorithm are used in this research. The goal of this method is to detect the tumor area in different brain MRI images and to predict whether the detected area is a tumor or not. Compared to other methods, this method can properly detect tumors located in different regions of the brain, including the middle region (aligned with eye level), which is a significant advantage. When tested on 50 MRI images, the method achieves an 81.25% detection rate on tumor images and a 100% detection rate on non-tumor images, with an overall accuracy of 94%.
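The classification stage reduces to fitting a Naïve Bayes model on per-region statistical features. A minimal sketch with dummy features standing in for those extracted after thresholding (the feature set and split are assumptions):

```python
# Sketch: Gaussian Naive Bayes on per-region statistical features (synthetic stand-ins).
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# e.g. [mean intensity, variance, entropy, area] per candidate region (dummy values)
X = np.vstack([rng.normal(loc=0.0, size=(25, 4)), rng.normal(loc=1.5, size=(25, 4))])
y = np.array([0] * 25 + [1] * 25)                       # 0 = non-tumour, 1 = tumour

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```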

Noise Reduction in CT Using Learned Wavelet-Frame Shrinkage Networks

  • Zavala-Mondragon, L. A.
  • Rongen, P.
  • Bescos, J. O.
  • de With, P. H. N.
  • van der Sommen, F.
IEEE Trans Med Imaging 2022 Journal Article, cited 0 times
Website
Encoding-decoding (ED) CNNs have demonstrated state-of-the-art performance for noise reduction over the past years. This has triggered the pursuit of a better understanding of the inner workings of such architectures, which has led to the theory of deep convolutional framelets (TDCF), revealing important links between signal processing and CNNs. Specifically, the TDCF demonstrates that ReLU CNNs induce low-rankness, since these models often do not satisfy the redundancy necessary to achieve perfect reconstruction (PR). In contrast, this paper explores CNNs that do meet the PR conditions. We demonstrate that in these types of CNNs, soft shrinkage and PR can be assumed. Furthermore, based on our explorations we propose the learned wavelet-frame shrinkage network, or LWFSN, and its residual counterpart, the rLWFSN. The ED path of the (r)LWFSN complies with the PR conditions, while the shrinkage stage is based on the linear expansion of thresholds proposed by Blu and Luisier. In addition, the LWFSN has only a fraction of the training parameters (<1%) of conventional CNNs, very small inference times and a low memory footprint, while still achieving performance close to state-of-the-art alternatives, such as the tight frame (TF) U-Net and FBPConvNet, in low-dose CT denoising.
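The classical operation that the LWFSN learns end-to-end is wavelet-domain soft shrinkage. A fixed-threshold sketch with PyWavelets follows; the network instead learns its thresholds and filters, so the wavelet and threshold value here are placeholders.

```python
# Sketch: classical wavelet soft-shrinkage denoising of a dummy low-dose CT slice.
import numpy as np
import pywt

noisy = np.random.rand(256, 256) + 0.1 * np.random.randn(256, 256)   # placeholder slice

coeffs = pywt.wavedec2(noisy, "db2", level=2)
threshold = 0.15                                     # fixed here; learned in the LWFSN
denoised_coeffs = [coeffs[0]] + [
    tuple(pywt.threshold(d, threshold, mode="soft") for d in detail)  # shrink detail bands
    for detail in coeffs[1:]
]
denoised = pywt.waverec2(denoised_coeffs, "db2")     # inverse transform (the "decoding" path)
```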

Region-adaptive magnetic resonance image enhancement for improving CNN-based segmentation of the prostate and prostatic zones

  • Zaridis, D. I.
  • Mylona, E.
  • Tachos, N.
  • Pezoulas, V. C.
  • Grigoriadis, G.
  • Tsiknakis, N.
  • Marias, K.
  • Tsiknakis, M.
  • Fotiadis, D. I.
2023 Journal Article, cited 0 times
Website
Automatic segmentation of the prostate and the prostatic zones on MRI remains one of the most compelling research areas. While different image enhancement techniques are emerging as powerful tools for improving the performance of segmentation algorithms, their application still lacks consensus due to contrasting evidence regarding performance improvement and cross-model stability, further hampered by the inability to explain models' predictions. In particular, for prostate segmentation, the effectiveness of image enhancement on different Convolutional Neural Networks (CNNs) remains largely unexplored. The present work introduces a novel image enhancement method, named RACLAHE, to enhance the performance of CNN models for segmenting the prostate gland and the prostatic zones. The improvement in performance and consistency across five CNN models (U-Net, U-Net++, U-Net3+, ResU-net and USE-NET) is compared against four popular image enhancement methods. Additionally, a methodology is proposed to explain, both quantitatively and qualitatively, the relation between saliency maps and ground truth probability maps. Overall, RACLAHE was the most consistent image enhancement algorithm in terms of performance improvement across CNN models, with the mean increase in Dice score ranging from 3% to 9% for the different prostatic regions, while achieving minimal inter-model variability. The integration of a feature-driven methodology to explain the predictions after applying image enhancement enables the development of a concrete, trustworthy automated pipeline for prostate segmentation on MR images.
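RACLAHE itself is the authors' region-adaptive method, but its underlying building block, contrast-limited adaptive histogram equalization (CLAHE), can be sketched in a few lines with scikit-image; the clip limit and input are placeholders.

```python
# Sketch: plain CLAHE as a preprocessing step before CNN segmentation (not RACLAHE itself).
import numpy as np
from skimage import exposure

mri_slice = np.random.rand(256, 256)                   # placeholder T2-weighted slice in [0, 1]
enhanced = exposure.equalize_adapthist(mri_slice, clip_limit=0.02)
# 'enhanced' would then be fed to the U-Net-style models in place of the raw slice.
```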

A Deep Learning-based cropping technique to improve segmentation of prostate's peripheral zone

  • Zaridis, Dimitris
  • Mylona, Eugenia
  • Tachos, Nikolaos
  • Marias, Kostas
  • Tsiknakis, Manolis
  • Fotiadis, Dimitios I.
2021 Conference Paper, cited 0 times
Automatic segmentation of the prostate peripheral zone on Magnetic Resonance Images (MRI) is a necessary but challenging step for accurate prostate cancer diagnosis. Deep learning (DL) based methods, such as U-Net, have recently been developed to segment the prostate and its sub-regions. Nevertheless, the presence of class imbalance in the image labels, where the background pixels dominate over the region to be segmented, may severely hamper segmentation performance. In the present work, we propose a DL-based preprocessing pipeline for segmenting the peripheral zone of the prostate by cropping unnecessary information, without making a priori assumptions regarding the location of the region of interest. The effect of DL-based cropping on segmentation performance was compared to standard center-cropping using three state-of-the-art DL networks, namely U-net, Bridged U-net and Dense U-net. The proposed method achieved an improvement in Dice score of 24%, 12% and 15% for the U-net, Bridged U-net and Dense U-net, respectively.

MuSA: a graphical user interface for multi-OMICs data integration in radiogenomic studies

  • Zanfardino, Mario
  • Castaldo, Rossana
  • Pane, Katia
  • Affinito, Ornella
  • Aiello, Marco
  • Salvatore, Marco
  • Franzese, Monica
Scientific Reports 2021 Journal Article, cited 0 times
Website

A functional artificial neural network for noninvasive pretreatment evaluation of glioblastoma patients

  • Zander, E.
  • Ardeleanu, A.
  • Singleton, R.
  • Bede, B.
  • Wu, Y.
  • Zheng, S.
Neurooncol Adv 2022 Journal Article, cited 0 times
Website
Background: Pretreatment assessments for glioblastoma (GBM) patients, especially elderly or frail patients, are critical for treatment planning. However, genetic profiling with intracranial biopsy carries a significant risk of permanent morbidity. We previously demonstrated that the CUL2 gene, encoding the scaffold cullin2 protein in the cullin2-RING E3 ligase (CRL2), can predict GBM radiosensitivity and prognosis. CUL2 expression levels are closely regulated with its copy number variations (CNVs). This study aims to develop artificial neural networks (ANNs) for pretreatment evaluation of GBM patients, with inputs obtainable without intracranial surgical biopsies. Methods: Public datasets, including Ivy-GAP, The Cancer Genome Atlas Glioblastoma (TCGA-GBM), and the Chinese Glioma Genome Atlas (CGGA), were used for training and testing of the ANNs. T1 images from corresponding cases were studied using automated segmentation for features of heterogeneity and tumor edge contouring. A ratio comparing the surface area of tumor borders versus the total volume (SvV) was derived from the DICOM-SEG conversions of segmented tumors. The edges of these borders were detected using the Canny edge detector. Packages including Keras, PyTorch, and TensorFlow were tested to build the ANNs. A 4-layered ANN (8-8-8-2) with a binary output was built with optimal performance after extensive testing. Results: The 4-layered deep learning ANN can identify a GBM patient's overall survival (OS) cohort with 80%-85% accuracy. The ANN requires four inputs: CUL2 copy number, patient age at GBM diagnosis, Karnofsky Performance Scale (KPS), and SvV ratio. Conclusion: Quantifiable image features can significantly improve the ability of ANNs to identify a GBM patient's survival cohort. Features such as clinical measures, genetic data, and image data can be integrated into a single ANN for GBM pretreatment evaluation.
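The 8-8-8-2 architecture is small enough to write down directly. A sketch in Keras (one of the packages the abstract names), assuming ReLU hidden activations and a softmax over the two survival cohorts, details the abstract does not specify:

```python
# Sketch of the described 8-8-8-2 ANN (activations and training settings are assumptions).
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(4,)),               # CUL2 copy number, age, KPS, SvV ratio
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(2, activation="softmax"),  # binary OS cohort
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

X = np.random.rand(32, 4)                         # dummy, standardised inputs
y = np.random.randint(0, 2, 32)
model.fit(X, y, epochs=2, verbose=0)
```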

Brain Tumor Detection and Classification Using Deep Learning and Sine-Cosine Fitness Grey Wolf Optimization

  • ZainEldin, Hanaa
  • Gamel, Samah A.
  • El-Kenawy, El-Sayed M.
  • Alharbi, Amal H.
  • Khafaga, Doaa Sami
  • Ibrahim, Abdelhameed
  • Talaat, Fatma M.
2023 Journal Article, cited 0 times
Diagnosing a brain tumor takes a long time and relies heavily on the radiologist's abilities and experience. The amount of data that must be handled has increased dramatically as the number of patients has increased, making old procedures both costly and ineffective. Many researchers have investigated a variety of algorithms for detecting and classifying brain tumors that are both accurate and fast. Deep Learning (DL) approaches have recently become popular for developing automated systems capable of accurately diagnosing or segmenting brain tumors in less time. DL enables a pre-trained Convolutional Neural Network (CNN) model to be applied to medical images, specifically for classifying brain cancers. The proposed Brain Tumor Classification Model based on CNN (BCM-CNN) optimizes the CNN hyperparameters using an adaptive dynamic sine-cosine fitness grey wolf optimizer (ADSCFGWO) algorithm. Hyperparameter optimization is followed by training a model built on the commonly used pre-trained Inception-ResNetV2 network to improve brain tumor diagnosis; its output is binary (0: normal, 1: tumor). There are primarily two types of hyperparameters: (i) hyperparameters that determine the underlying network structure; and (ii) hyperparameters responsible for training the network. The ADSCFGWO algorithm draws from both the sine-cosine and grey wolf algorithms in an adaptable framework that uses both algorithms' strengths. The experimental results show that the BCM-CNN classifier achieved the best results, owing to the enhancement of the CNN's performance by hyperparameter optimization. The BCM-CNN achieved 99.98% accuracy on the BRaTS 2021 Task 1 dataset.

Predictive Modeling for Voxel-Based Quantification of Imaging-Based Subtypes of Pancreatic Ductal Adenocarcinoma (PDAC): A Multi-Institutional Study

  • Zaid, Mohamed
  • Widmann, Lauren
  • Dai, Annie
  • Sun, Kevin
  • Zhang, Jie
  • Zhao, Jun
  • Hurd, Mark W
  • Varadhachary, Gauri R
  • Wolff, Robert A
  • Maitra, Anirban
Cancers 2020 Journal Article, cited 0 times
Website

AUTOMATIC KIDNEY SEGMENTATION, RECONSTRUCTION, PREOPERATIVE PLANNING, AND 3D PRINTING

  • ZAGKOU, SPYRIDOULA
2021 Thesis, cited 0 times
Website
Renal cancer is the seventh most prevalent cancer among men and the tenth most frequent cancer among women, accounting for 5% and 3% of all adult malignancies, respectively. Kidney cancer is increasing dramatically, in developing countries due to inadequate living conditions and in developed countries due to unhealthy lifestyles, smoking, obesity, and hypertension. For decades, radical nephrectomy (RN) was the standard method of addressing the high incidence of kidney cancer. However, the utilization of minimally invasive partial nephrectomy (PN) for the treatment of localized small renal masses has increased with the advent of laparoscopic and robotic-assisted procedures. In this framework, certain factors must be considered in the surgical planning and decision-making of partial nephrectomies, such as the morphology and location of the tumor. Advanced technologies such as automatic image segmentation, image and surface reconstruction, and 3D printing have been developed to assess the tumor anatomy before surgery and its relationship to surrounding structures, such as the arteriovenous system, with the aim of preventing damage. Overall, 3D printed anatomical kidney models are very useful to urologists, surgeons, and researchers as a reference for preoperative planning and intraoperative visualization, enabling more efficient treatment and a high standard of care. Furthermore, they can provide considerable benefit in education, in patient counseling, and in delivering therapeutic methods customized to the needs of each individual patient. In this context, the fundamental objective of this thesis is to provide an analytical and general pipeline for the generation of a renal 3D printed model from CT images. In addition, methods are proposed to enhance preoperative planning and help surgeons prepare the surgical procedure with increased accuracy, so as to improve their performance. Keywords: Medical Image, Computed Tomography (CT), Semantic Segmentation, Convolutional Neural Networks (CNNs), Surface Reconstruction, Mesh Processing, 3D Printing of Kidney, Operative assistance

Effect of color visualization and display hardware on the visual assessment of pseudocolor medical images

  • Zabala-Travers, Silvina
  • Choi, Mina
  • Cheng, Wei-Chung
  • Badano, Aldo
Medical Physics 2015 Journal Article, cited 4 times
Website
PURPOSE: Even though the use of color in the interpretation of medical images has increased significantly in recent years, the ad hoc manner in which color is handled and the lack of standard approaches have been associated with suboptimal and inconsistent diagnostic decisions with a negative impact on patient treatment and prognosis. The purpose of this study is to determine if the choice of color scale and display device hardware affects the visual assessment of patterns that have the characteristics of functional medical images. METHODS: Perfusion magnetic resonance imaging (MRI) was the basis for designing and performing experiments. Synthetic images resembling brain dynamic-contrast enhanced MRI consisting of scaled mixtures of white, lumpy, and clustered backgrounds were used to assess the performance of a rainbow ("jet"), a heated black-body ("hot"), and a gray ("gray") color scale with display devices of different quality on the detection of small changes in color intensity. The authors used a two-alternative, forced-choice design where readers were presented with 600 pairs of images. Each pair consisted of two images of the same pattern flipped along the vertical axis with a small difference in intensity. Readers were asked to select the image with the highest intensity. Three differences in intensity were tested on four display devices: a medical-grade three-million-pixel display, a consumer-grade monitor, a tablet device, and a phone. RESULTS: The estimates of percent correct show that jet outperformed hot and gray in the high and low range of the color scales for all devices with a maximum difference in performance of 18% (confidence intervals: 6%, 30%). Performance with hot was different for high and low intensity, comparable to jet for the high range, and worse than gray for lower intensity values. Similar performance was seen between devices using jet and hot, while gray performance was better for handheld devices. Time of performance was shorter with jet. CONCLUSIONS: Our findings demonstrate that the choice of color scale and display hardware affects the visual comparative analysis of pseudocolor images. Follow-up studies in clinical settings are being considered to confirm the results with patient images.

Radiomics study of lung tumor volume segmentation technique in contrast-enhanced Computed Tomography (CT) thorax images: A comparative study

  • Yunus, Mardhiyati Mohd
  • Sin, Ng Hui
  • Sabarudin, Akmal
  • Karim, Muhammad Khalis Abdul
  • Kechik, Mohd Mustafa Awang
  • Razali, Rosmizan Ahmad
  • Shamsul, Mohd Shahril Mohd
2023 Conference Paper, cited 0 times
Website
Medical image segmentation is crucial for extracting information regarding tumour characteristics, including in lung cancer. To obtain macroscopic information (tumour volume) and microscopic features (radiomics), an image segmentation process is required. Various advanced segmentation algorithms are available nowadays, yet there is no single "best" segmentation technique for medical imaging modalities. This study compared manual slice-by-slice segmentation and semi-automated segmentation of lung tumour volume, together with radiomics features covering shape analysis and first-order statistical measures of texture analysis. Manual slice-by-slice delineation and region-growing semi-automated segmentation using 3D Slicer software were performed on 45 sets of contrast-enhanced Computed Tomography (CT) thorax images downloaded from The Cancer Imaging Archive (TCIA). The results showed high similarity between the manual and semi-automated segmentations, with an average Hausdorff distance (AHD) of 1.02 ± 0.71 mm, a high Dice similarity coefficient (DSC) of 0.83 ± 0.05, and p = 0.997 (p > 0.05). Overall, 84.62% of the shape features and 33.33% of the first-order statistical measures of texture analysis showed no significant difference between the two segmentation methods. In conclusion, semi-automated segmentation can perform as well as manual segmentation in lung tumour volume measurement, especially in terms of the ability to extract shape features for lung tumour radiomics analysis.
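The two agreement metrics used in this comparison can be computed directly from binary masks. A minimal sketch (DSC plus a symmetric Hausdorff distance via SciPy) on dummy masks; note the study reports the average Hausdorff distance, whereas this sketch shows the simpler maximum form.

```python
# Sketch: Dice similarity coefficient and Hausdorff distance between two binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

manual = np.zeros((128, 128), dtype=bool); manual[40:80, 40:80] = True       # dummy masks
semi_auto = np.zeros((128, 128), dtype=bool); semi_auto[42:82, 41:79] = True

dice = 2 * np.logical_and(manual, semi_auto).sum() / (manual.sum() + semi_auto.sum())

pts_a = np.argwhere(manual); pts_b = np.argwhere(semi_auto)                  # boundary points
hd = max(directed_hausdorff(pts_a, pts_b)[0], directed_hausdorff(pts_b, pts_a)[0])
print(f"DSC={dice:.3f}, HD={hd:.2f} px")
```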

Evaluating Scale Attention Network for Automatic Brain Tumor Segmentation with Large Multi-parametric MRI Database

  • Yuan, Yading
2022 Book Section, cited 0 times
Automatic segmentation of brain tumors is an essential but challenging step for extracting quantitative imaging biomarkers for accurate tumor detection, diagnosis, prognosis, treatment planning and assessment. This is the 10th year of the Brain Tumor Segmentation (BraTS) Challenge, which utilizes multi-institutional multi-parametric magnetic resonance imaging (mpMRI) scans for two tasks: 1) evaluation of state-of-the-art methods for the segmentation of intrinsically heterogeneous brain glioblastoma sub-regions in mpMRI scans; and 2) evaluation of classification methods to predict MGMT promoter methylation status from pre-operative baseline scans. We participated in the image segmentation task by applying a fully automated segmentation framework that we previously developed for BraTS 2020. This framework, named scale-attention network, incorporates a dynamic scale attention mechanism to integrate low-level details with high-level feature maps at different scales. Our framework was trained using the 1251 challenge training cases provided by BraTS 2021, and achieved an average Dice Similarity Coefficient (DSC) of 0.9277, 0.8851 and 0.8754, as well as 95% Hausdorff distances (in millimeters) of 4.2242, 15.3981 and 11.6925, on 570 testing cases for whole tumor, tumor core and enhancing tumor, respectively, ranking second in the brain tumor segmentation task of the RSNA-ASNR-MICCAI BraTS 2021 Challenge (id: deepX).

Automatic Brain Tumor Segmentation with Scale Attention Network

  • Yuan, Yading
2021 Book Section, cited 0 times
Automatic segmentation of brain tumors is an essential but challenging step for extracting quantitative imaging biomarkers for accurate tumor detection, diagnosis, prognosis, treatment planning and assessment. The Multimodal Brain Tumor Segmentation Challenge 2020 (BraTS 2020) provides a common platform for comparing automatic algorithms on multi-parametric Magnetic Resonance Imaging (mpMRI) in tasks of 1) brain tumor segmentation in MRI scans; 2) prediction of patient overall survival (OS) from pre-operative MRI scans; 3) distinction of true tumor recurrence from treatment-related effects and 4) evaluation of uncertainty measures in segmentation. We participated in the image segmentation challenge by developing a fully automatic segmentation network based on an encoder-decoder architecture. To better integrate information across different scales, we propose a dynamic scale attention mechanism that incorporates low-level details with high-level semantics from feature maps at different scales. Our framework was trained using the 369 challenge training cases provided by BraTS 2020, and achieved an average Dice Similarity Coefficient (DSC) of 0.8828, 0.8433 and 0.8177, as well as 95% Hausdorff distances (in millimeters) of 5.2176, 17.9697 and 13.4298, on 166 testing cases for whole tumor, tumor core and enhancing tumor, respectively, ranking 3rd among 693 registrations in the BraTS 2020 challenge.

A 3D semi-automated co-segmentation method for improved tumor target delineation in 3D PET/CT imaging

  • Yu, Zexi
  • Bui, Francis M
  • Babyn, Paul
2015 Conference Proceedings, cited 1 times
Website
The planning of radiotherapy is increasingly based on multi-modal imaging techniques such as positron emission tomography (PET)-computed tomography (CT), since PET/CT provides not only anatomical but also functional assessment of the tumor. In this work, we propose a novel co-segmentation method, utilizing both the PET and CT images, to localize the tumor. The method formulates segmentation as minimization of a Markov random field model, which encapsulates features from both imaging modalities. The minimization problem can then be solved by the maximum-flow algorithm, based on graph-cuts theory. The proposed tumor delineation algorithm was validated both in a phantom with a high-radiation area and in patient data. The obtained results show significant improvement compared to existing segmentation methods with respect to various qualitative and quantitative metrics.

Co-Segmentation Methods for Improving Tumor Target Delineation in PET-CT Images

  • Yu, Zexi
2016 Thesis, cited 0 times
Website

Prediction of pathologic stage in non-small cell lung cancer using machine learning algorithm based on CT image feature analysis

  • Yu, L.
  • Tao, G.
  • Zhu, L.
  • Wang, G.
  • Li, Z.
  • Ye, J.
  • Chen, Q.
BMC Cancer 2019 Journal Article, cited 11 times
Website
PURPOSE: To explore imaging biomarkers that can be used for diagnosis and prediction of pathologic stage in non-small cell lung cancer (NSCLC) using multiple machine learning algorithms based on CT image feature analysis. METHODS: Patients with stage IA to IV NSCLC were included, and the whole dataset was divided into training and testing sets and an external validation set. To tackle class imbalance, we generated a new dataset with a balanced class distribution using the SMOTE algorithm and randomly split it into training and testing sets. We calculated the importance of CT image features by means of the mean decrease in Gini impurity generated by the random forest algorithm and selected optimal features according to feature importance (mean decrease in Gini impurity > 0.005). The performance of the prediction model on the training and testing sets was evaluated in terms of classification accuracy, average precision (AP) score and precision-recall curve. The predictive accuracy of the model was externally validated using lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC) samples from the TCGA database. RESULTS: The prediction model, which incorporated nine image features, exhibited high classification accuracy, precision and recall in the training and testing sets. In the external validation, the predictive accuracy of the model on LUAD outperformed that on LUSC. CONCLUSIONS: The pathologic stage of patients with NSCLC can be accurately predicted based on CT image features, especially for LUAD. Our findings extend the application of machine learning algorithms to CT image-based prediction of pathologic stage and identify potential imaging biomarkers for diagnosis of pathologic stage in NSCLC patients.
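The feature-selection rule described above, keeping features whose mean decrease in Gini impurity exceeds 0.005, can be sketched with scikit-learn on synthetic data; the feature matrix and forest size are placeholders.

```python
# Sketch: random-forest Gini-importance feature selection with the paper's 0.005 threshold.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 30))                          # 30 candidate CT image features (dummy)
y = (X[:, 0] - X[:, 3] + rng.normal(size=200) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
selected = np.where(rf.feature_importances_ > 0.005)[0]  # mean decrease in Gini impurity
print(f"kept {selected.size} of {X.shape[1]} features:", selected)
```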

Correlative hierarchical clustering-based low-rank dimensionality reduction of radiomics-driven phenotype in non-small cell lung cancer

  • Bardia Yousefi
  • Nariman Jahani
  • Michael J. LaRiviere
  • Eric Cohen
  • Meng-Kang Hsieh
  • José Marcio Luna
  • Rhea D. Chitalia
  • Jeffrey C. Thompson
  • Erica L. Carpenter
  • Sharyn I. Katz
  • Despina Kontos
2019 Conference Paper, cited 0 times
Website
Background: Lung cancer is one of the most common cancers in the United States and the most fatal, with 142,670 deaths in 2019. Accurately determining tumor response is critical to clinical treatment decisions, ultimately impacting patient survival. To better differentiate between non-small cell lung cancer (NSCLC) responders and non-responders to therapy, radiomic analysis is emerging as a promising approach to identify associated imaging features undetectable by the human eye. However, the plethora of variables extracted from an image may actually undermine the performance of computer-aided prognostic assessment, a phenomenon known as the curse of dimensionality. In the present study, we show that correlation-driven hierarchical clustering improves high-dimensional radiomics-based feature selection and dimensionality reduction, ultimately predicting overall survival in NSCLC patients. Methods: To select features from high-dimensional radiomics data, a correlation-incorporated hierarchical clustering algorithm automatically categorizes features into several groups. The truncation distance in the resulting dendrogram is used to control the categorization of the features, initiating low-rank dimensionality reduction in each cluster and providing descriptive features for Cox proportional hazards (CPH)-based survival analysis. Using a publicly available NSCLC radiogenomic dataset of 204 patients' CT images, 429 established radiomics features were extracted. Low-rank dimensionality reduction via principal component analysis (PCA) was employed (k=1, n<1) to find the representative components of each cluster of features, and cluster robustness was calculated using the relative weighted consistency metric. Results: Hierarchical clustering categorized the radiomic features into groups, without prior initialization of the cluster number, using the correlation distance metric to truncate the resulting dendrogram at different distances. The dimensionality was reduced from 429 to 67 features (for a truncation distance of 0.1). The robustness within the features in clusters varied from -1.12 to -30.02 for truncation distances of 0.1 to 1.8, respectively, indicating that robustness decreases with increasing truncation distance, when fewer feature classes (i.e., clusters) are selected. The best multivariate CPH survival model had a C-statistic of 0.71 for a truncation distance of 0.1, outperforming conventional PCA approaches by 0.04 even when the same number of principal components was considered. Conclusions: The truncation distance of the correlative hierarchical clustering algorithm is directly associated with the robustness of the selected feature clusters and can effectively reduce feature dimensionality while improving outcome prediction.
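A compact sketch of the described pipeline: correlation-distance hierarchical clustering of features, dendrogram truncation at 0.1, and one principal component per cluster. The data are synthetic, not the study's 429-feature set, and the linkage method is an assumption.

```python
# Sketch: correlation-based hierarchical clustering + per-cluster PCA (synthetic features).
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
X = rng.normal(size=(204, 50))                         # 204 patients x 50 radiomic features

corr_dist = 1 - np.abs(np.corrcoef(X, rowvar=False))   # correlation distance between features
Z = linkage(corr_dist[np.triu_indices(50, k=1)], method="average")  # condensed distances
labels = fcluster(Z, t=0.1, criterion="distance")      # truncation distance 0.1, as in the paper

reduced = np.column_stack([
    PCA(n_components=1).fit_transform(X[:, labels == c])[:, 0]
    for c in np.unique(labels)
])                                                     # one representative component per cluster
print("reduced from", X.shape[1], "to", reduced.shape[1], "features")
```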

Incremental Learning Meets Transfer Learning: Application to Multi-site Prostate MRI Segmentation

  • You, Chenyu
  • Xiang, Jinlin
  • Su, Kun
  • Zhang, Xiaoran
  • Dong, Siyuan
  • Onofrey, John
  • Staib, Lawrence
  • Duncan, James S.
2022 Conference Paper, cited 9 times
Website
Many medical datasets have recently been created for medical image segmentation tasks, and it is natural to ask whether we can use them to sequentially train a single model that (1) performs better on all these datasets, and (2) generalizes well and transfers better to an unknown target site domain. Prior works have achieved this goal by jointly training one model on multi-site datasets, which achieves competitive performance on average, but such methods rely on the assumption that all training data are available, limiting their effectiveness in practical deployment. In this paper, we propose a novel multi-site segmentation framework called incremental-transfer learning (ITL), which learns a model from multi-site datasets in an end-to-end sequential fashion. Specifically, "incremental" refers to training on sequentially constructed datasets, and "transfer" is achieved by leveraging useful information from the linear combination of embedding features on each dataset. First, we introduce our ITL framework, in which we train a network comprising a site-agnostic encoder with pretrained weights and at most two segmentation decoder heads, and we design a novel site-level incremental loss to generalize well to the target domain. Second, we show for the first time that our ITL training scheme can alleviate the challenging catastrophic forgetting problem in incremental learning. We conduct experiments using five challenging benchmark datasets to validate the effectiveness of our incremental-transfer learning approach. Our approach makes minimal assumptions about computational resources and domain-specific expertise, and hence constitutes a strong starting point for multi-site medical image segmentation.

A feasibility study to estimate optimal rigid-body registration using combinatorial rigid registration optimization (CORRO)

  • Yorke, A. A.
  • Solis, D., Jr.
  • Guerrero, T.
J Appl Clin Med Phys 2020 Journal Article, cited 0 times
Website
PURPOSE: Clinical image pairs provide the most realistic test data for image registration evaluation. However, the optimal registration is unknown. Using combinatorial rigid registration optimization (CORRO), we demonstrate a method to estimate the optimal alignment for rigid registration of clinical image pairs. METHODS: Expert-selected landmark pairs were chosen for each CT/CBCT image pair for six cases representing head and neck, thoracic, and pelvic anatomic regions. Combination subsets of k landmark pairs (k-combination sets) were generated without repetition to form a large collection of k-combination sets (k-sets) for k = 4, 8, and 12. The rigid transformation between the image pairs was calculated for each k-combination set, and the mean and standard deviation of these transformations were used to derive the final registration for each k-set. RESULTS: The standard deviation of the registration output decreased as k increased for all cases. The joint entropy evaluated for each k-set of each case was smaller than those from two commercially available registration programs, indicating a stronger correlation between the image pair after CORRO was used. A joint histogram plot of all three algorithms showed high correlation between them. As further proof of the efficacy of CORRO, the joint entropy of each member of 30,000 k-combination sets with k = 4 was calculated for one of the thoracic cases. The minimum joint entropy was found to exist at the estimated mean registration, indicating that CORRO converges to the optimal rigid registration. CONCLUSIONS: We have developed a methodology, CORRO, that estimates the optimal alignment for rigid registration of clinical image pairs using a large set of landmark points. The rigid-body registration results have been shown to be comparable to results from commercially available algorithms for all six cases. CORRO can serve as an excellent tool for testing and validating rigid registration algorithms.
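The combinatorial core of CORRO can be sketched as follows: for every k-combination of landmark pairs, estimate a rigid transform (here via the Kabsch algorithm) and summarise the ensemble. The landmark values are synthetic, only translations are summarised, and the joint-entropy validation step is omitted.

```python
# Sketch: Kabsch rigid fits over all k-combinations of landmark pairs (synthetic landmarks).
import itertools
import numpy as np

rng = np.random.default_rng(0)
fixed = rng.uniform(0, 100, size=(20, 3))                  # CT landmarks (dummy)
moving = fixed + np.array([2.0, -1.0, 0.5]) + 0.3 * rng.normal(size=(20, 3))  # CBCT landmarks

def kabsch(P, Q):
    """Least-squares rigid transform mapping P onto Q; returns rotation R and translation t."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))                 # guard against reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    return R, Q.mean(0) - R @ P.mean(0)

k = 4
translations = np.array([kabsch(moving[list(idx)], fixed[list(idx)])[1]
                         for idx in itertools.combinations(range(20), k)])
print("mean translation:", translations.mean(0).round(2),
      "std:", translations.std(0).round(2))                # the k-set summary registration
```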

Quality Assurance of Image Registration Using Combinatorial Rigid Registration Optimization (CORRO)

  • Yorke, Afua A.
  • McDonald, Gary C.
  • Solis, David
  • Guerrero, Thomas
2021 Journal Article, cited 0 times
Purpose: Expert-selected landmark points on clinical image pairs provide a basis for rigid registration validation. Using combinatorial rigid registration optimization (CORRO), we provide a statistically characterized reference dataset for image registration of the pelvis by estimating the optimal registration. Materials and Methods: Landmarks for each CT/CBCT image pair were identified for 58 cases. From the landmark pairs, combination subsets of k landmark pairs were generated without repetition, forming k-sets for k = 4, 8, and 12. A rigid registration between the image pairs was computed for each k-combination set (2,000-8,000,000 combinations). The mean and standard deviation of the registrations were used as the final registration for each image pair, and joint entropy was used to validate the output. Results: An average of 154 (range: 91-212) landmark pairs were selected for each CT/CBCT image pair. The standard deviation of the registration output decreased as k increased for all cases. In general, the joint entropy evaluated was lower than results from commercially available software. Of all 58 cases, 58.3% of the k = 4, 15% of the k = 8, and 18.3% of the k = 12 registrations were better using CORRO, as compared to 8.3% using a commercial registration software. The minimum joint entropy was determined for one case and found to exist at the estimated registration mean, in agreement with the CORRO algorithm. Conclusion: The results demonstrate that CORRO works even in the extreme case of pelvic anatomy, where the CBCT suffers from reduced quality due to increased noise levels. The estimated optimal registration from CORRO was better than that of commercially available software for all k-sets tested. Additionally, k = 4 produced the best overall outcomes, which is anticipated because k = 8 and 12 are more likely to include combinations that degrade registration accuracy.

Prognostic value of tumor metabolic imaging phenotype by FDG PET radiomics in HNSCC

  • Yoon, H.
  • Ha, S.
  • Kwon, S. J.
  • Park, S. Y.
  • Kim, J.
  • O, J. H.
  • Yoo, I. R.
Ann Nucl Med 2021 Journal Article, cited 1 times
Website
Objective: Tumor metabolic phenotype can be assessed with integrated image pattern analysis of 18F-fluoro-deoxy-glucose (FDG) Positron Emission Tomography/Computed Tomography (PET/CT), called radiomics. This study was performed to assess the prognostic value of radiomic PET parameters in head and neck squamous cell carcinoma (HNSCC) patients. Methods: 18F-FDG PET/CT data of 215 patients from the freely available HNSCC collection in The Cancer Imaging Archive (TCIA) and of 122 patients at Seoul St. Mary's Hospital with baseline FDG PET/CT for locally advanced HNSCC were reviewed. Data from the TCIA database were used as a training cohort, and data from Seoul St. Mary's Hospital as a validation cohort. In the training cohort, primary tumors were segmented by Nestle's adaptive thresholding method. Segmented tumors in PET images were preprocessed using relative resampling with 64 bins. Forty-two PET parameters, including conventional and texture parameters, were measured. Binary groups of homogeneous imaging phenotypes, clustered by the K-means method, were compared for overall survival (OS) and disease-free survival (DFS) by log-rank test. Selected individual radiomic parameters were tested along with clinical factors, including age and sex, by Cox regression for OS and DFS, and the significant parameters were tested with multivariate analysis. Parameters significant in the multivariate analysis were retested with multivariate analysis in the validation cohort. Results: A total of 119 patients, 70 from the training cohort and 49 from the validation cohort, were included in the study. The median follow-up period was 62 months for the training cohort and 52 months for the validation cohort. In the training cohort, binary groups with different metabolic radiomic phenotypes showed a significant difference in OS (p = 0.036) and a borderline difference in DFS (p = 0.086). Gray-level non-uniformity for zone (GLNUGLZLM) was the most significant prognostic factor for both OS (hazard ratio [HR] 3.1, 95% confidence interval [CI] 1.4–7.3, p = 0.008) and DFS (HR 4.5, CI 1.3–16, p = 0.020). Multivariate analysis revealed GLNUGLZLM as an independent prognostic factor for OS (HR 3.7, 95% CI 1.1–7.5, p = 0.032), and GLNUGLZLM remained an independent prognostic factor in the validation cohort (HR 14.8, 95% CI 3.3–66, p < 0.001). Conclusions: Baseline FDG PET radiomics contains risk information for survival prognosis in HNSCC patients. The metabolic heterogeneity parameter GLNUGLZLM may assist clinicians in patient risk assessment as a feasible prognostic factor.

MRI-Based Deep-Learning Method for Determining Glioma MGMT Promoter Methylation Status

  • Yogananda, C.G.B.
  • Shah, B.R.
  • Nalawade, S.S.
  • Murugesan, G.K.
  • Yu, F.F.
  • Pinho, M.C.
  • Wagner, B.C.
  • Mickey, B.
  • Patel, T.R.
  • Fei, B.
  • Madhuranthakam, A.J.
  • Maldjian, J.A.
American Journal of Neuroradiology 2021 Journal Article, cited 0 times
Website
BACKGROUND AND PURPOSE: O6-Methylguanine-DNA methyltransferase (MGMT) promoter methylation confers an improved prognosis and treatment response in gliomas. We developed a deep learning network for determining MGMT promoter methylation status using T2-weighted images (T2WI) only. MATERIALS AND METHODS: Brain MR imaging and corresponding genomic information were obtained for 247 subjects from The Cancer Imaging Archive and The Cancer Genome Atlas. One hundred sixty-three subjects had a methylated MGMT promoter. A T2WI-only network (MGMT-net) was developed to determine MGMT promoter methylation status and simultaneous single-label tumor segmentation. The network was trained using 3D-dense-UNets. Three-fold cross-validation was performed to generalize the performance of the networks. Dice scores were computed to determine tumor-segmentation accuracy. RESULTS: The MGMT-net demonstrated a mean cross-validation accuracy of 94.73% across the 3 folds (95.12%, 93.98%, and 95.12%, [SD, 0.66%]) in predicting MGMT methylation status, with a sensitivity and specificity of 96.31% [SD, 0.04%] and 91.66% [SD, 2.06%], respectively, and a mean area under the curve of 0.93 [SD, 0.01]. The whole tumor-segmentation mean Dice score was 0.82 [SD, 0.008]. CONCLUSIONS: We demonstrate high classification accuracy in predicting MGMT promoter methylation status using only T2WI. Our network surpasses the sensitivity, specificity, and accuracy of histologic and molecular methods. This result represents an important milestone toward using MR imaging to predict prognosis and treatment response. Abbreviations: IDH = isocitrate dehydrogenase; MGMT = O6-methylguanine-DNA methyltransferase; PCR = polymerase chain reaction; T2WI = T2-weighted images; TCGA = The Cancer Genome Atlas; TCIA = The Cancer Imaging Archive.
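
For reference, the Dice score used above to report segmentation accuracy is straightforward to compute on binary masks; a minimal version:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient for binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0
```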

Non-invasive Profiling of Molecular Markers in Brain Gliomas using Deep Learning and Magnetic Resonance Images

  • Yogananda, Chandan Ganesh Bangalore
2021 Thesis, cited 0 times
Website
Gliomas account for the most common malignant primary brain tumors in both pediatric and adult populations. They arise from glial cells and are divided into low-grade and high-grade gliomas with significant differences in patient survival. Patients with aggressive high-grade gliomas have life expectancies of less than 2 years. Glioblastomas (GBM) are aggressive brain tumors classified by the World Health Organization (WHO) as grade IV brain cancer. The overall survival for GBM patients is poor and is in the range of 12 to 15 months. These tumors are typically treated by surgery, followed by radiotherapy and chemotherapy. Gliomas often consist of active tumor tissue, necrotic tissue, and surrounding edema. Magnetic Resonance Imaging (MRI) is the most commonly used modality to assess brain tumors because of its superior soft tissue contrast. MRI tumor segmentation is used to identify the subcomponents as enhancing, necrotic, or edematous tissue. Due to the heterogeneity and tissue relaxation differences in these subcomponents, multi-parametric (or multi-contrast) MRI is often used for accurate segmentation. Manual brain tumor segmentation is a challenging and tedious task for human experts due to the variability of tumor appearance, unclear borders of the tumor, and the need to evaluate multiple MR images with different contrasts simultaneously. In addition, manual segmentation is often prone to significant intra- and inter-rater variability. To address these issues, Chapter 2 of my dissertation aims at designing and developing a highly accurate 3D Dense-Unet Convolutional Neural Network (CNN) for segmenting brain tumors into subcomponents that can easily be incorporated into a clinical workflow. Primary brain tumors demonstrate broad variations in imaging features, response to therapy, and prognosis. It has become evident that this heterogeneity is associated with specific molecular and genetic profiles. For example, isocitrate dehydrogenase 1 and 2 (IDH 1/2) mutated gliomas demonstrate increased survival compared to wild-type gliomas with the same histologic grade. Identification of the IDH mutation status as a marker for therapy and prognosis is considered one of the most important recent discoveries in brain glioma biology. Additionally, 1p/19q co-deletion and O6-methyl guanine-DNA methyltransferase (MGMT) promoter methylation are associated with differences in response to specific chemoradiation regimens. Currently, the only reliable way of determining a molecular marker is by obtaining glioma tissue either via an invasive brain biopsy or following open surgical resection. Although the molecular profiling of gliomas is now a routine part of the evaluation of specimens obtained at biopsy or tumor resection, it would be helpful to have this information prior to surgery. In some cases, the information would aid in planning the extent of tumor resection. In others, for tumors in locations where resection is not possible, and the risk of a biopsy is high, accurate delineation of the molecular and genetic profile of the tumor might be used to guide empiric treatment with radiation and/or chemotherapy. The ability to non-invasively profile these molecular markers using only T2w MRI has significant implications in determining therapy, predicting prognosis, and feasible clinical translation. Thus, Chapters 3, 4, and 5 of my dissertation focus on developing and evaluating deep learning algorithms for non-invasive profiling of molecular markers in brain gliomas using T2w MRI only.
This includes developing highly accurate, fully automated deep learning networks for (i) classification of IDH mutation status (Chapter 3), (ii) classification of 1p/19q co-deletion status (Chapter 4), and (iii) classification of MGMT promoter status in brain gliomas (Chapter 5). An important caveat of using MRI is the effect of degradation on the images, such as motion artifact, and in turn on the performance of deep learning-based algorithms. Motion artifacts are an especially pervasive source of MR image quality degradation and can be due to gross patient movements, as well as cardiac and respiratory motion. In clinical practice, these artifacts can interfere with diagnostic interpretation, necessitating repeat imaging. The effect of motion artifacts on medical images and deep learning-based molecular profiling algorithms has not been studied systematically. It is likely that motion corruption will also lead to reduced performance of deep learning algorithms in classifying brain tumor images. Deep learning-based brain tumor segmentation and molecular profiling algorithms generally perform well only on specific datasets. Clinical translation of such algorithms has the potential to reduce interobserver variability, improve planning for radiation therapy, and improve the speed of and response to therapy. Although these algorithms perform very well on several publicly available datasets, their generalization to clinical datasets or tasks has been poor, preventing easy clinical translation. Thus, Chapter 6 of my dissertation focuses on evaluating the performance of the molecular profiling algorithms on motion-corrupted, motion-corrected, and clinical T2w MRI. This includes (i) evaluating the effect of motion corruption on the molecular profiling algorithms, (ii) determining if deep learning-based motion correction can recover the performance of these algorithms to levels similar to non-corrupted images, and (iii) evaluating the performance of these algorithms on clinical T2w MRI before and after motion correction. This chapter is an investigation of the effects of induced motion artifact on deep learning-based molecular classification, and of the relative importance of robust correction methods in recovering the accuracies for potential clinical applicability. Deep learning studies typically require a very large amount of data to achieve good performance. The number of subjects available from the TCIA database is relatively small when compared to the sample sizes typically required for deep learning. Despite this caveat, the data are representative of real-world clinical experience, with multiparametric MR images from multiple institutions, and represent one of the largest publicly available brain tumor databases. Additionally, the acquisition parameters and imaging vendor platforms are diverse across the imaging centers contributing data to TCIA. This study provides a framework for training, evaluating, and benchmarking any new artifact-correction architectures for potential insertion into a workflow. Although our results show promise for expeditious clinical translation, it will be essential to train and validate the algorithms using additional independent datasets. Thus, Chapter 7 of my dissertation discusses the limitations and possible future directions for this work.

Lung cancer deaths in the National Lung Screening Trial attributed to nonsolid nodules

  • Yip, Rowena
  • Yankelevitz, David F
  • Hu, Minxia
  • Li, Kunwei
  • Xu, Dong Ming
  • Jirapatnakul, Artit
  • Henschke, Claudia I
Radiology 2016 Journal Article, cited 0 times

Lung Cancers Manifesting as Part-Solid Nodules in the National Lung Screening Trial

  • Yip, Rowena
  • Henschke, Claudia I
  • Xu, Dong Ming
  • Li, Kunwei
  • Jirapatnakul, Artit
  • Yankelevitz, David F
American Journal of Roentgenology 2017 Journal Article, cited 13 times
Website

The Tumor Mix-Up in 3D Unet for Glioma Segmentation

  • Yin, Pengyu
  • Hu, Yingdong
  • Liu, Jing
  • Duan, Jiaming
  • Yang, Wei
  • Cheng, Kun
2020 Book Section, cited 0 times
Automated segmentation of glioma and its subregions has significant importance throughout the clinical workflow, including diagnosis, monitoring, and treatment planning of brain cancer. The automatic delineation of tumours has drawn much attention in the past few years, particularly for neural-network-based supervised learning methods. However, clinical data acquisition is expensive and time consuming, which is the key limitation of machine learning on medical data. We describe a solution for brain tumor segmentation in the context of the BRATS19 challenge. The major learning scheme is based on a 3D-Unet encoder and decoder with intense data augmentation followed by bias correction. At the moment of submitting this short paper, our solution achieved Dice scores of 76.84, 85.74, and 74.51 for the enhancing tumor, whole tumor, and tumor core, respectively, on the validation data.

The Effect of Heterogenous Subregions in Glioblastomas on Survival Stratification: A Radiomics Analysis Using the Multimodality MRI

  • Yin, L.
  • Liu, Y.
  • Zhang, X.
  • Lu, H.
  • Liu, Y.
Technol Cancer Res Treat 2021 Journal Article, cited 0 times
Website
Intratumor heterogeneity is partly responsible for the poor prognosis of glioblastoma (GBM) patients. In this study, we aimed to assess the effect of different heterogeneous subregions of GBM on overall survival (OS) stratification. A total of 105 GBM patients were retrospectively enrolled and divided into long-term and short-term OS groups. Four MRI sequences, i.e., contrast-enhanced T1-weighted imaging (T1C), T1, T2, and FLAIR, were collected for each patient. Then, 4 heterogeneous subregions, i.e., the region of entire abnormality (rEA) and the regions of contrast-enhanced tumor (rCET), necrosis (rNec), and edema/non-contrast-enhanced tumor (rE/nCET), were manually drawn from the 4 MRI sequences. For each subregion, 50 radiomics features were extracted. The stratification performance of the 4 heterogeneous subregions, as well as the performance of the 4 MRI sequences, was evaluated both alone and in combination. Our results showed that rEA was superior in stratifying long- and short-term OS. Of the 4 MRI sequences used in this study, the FLAIR sequence demonstrated the best survival stratification performance based on the manual delineation of heterogeneous subregions. Our results suggest that heterogeneous subregions of GBMs contain different prognostic information, which should be considered when investigating survival stratification in patients with GBM.

CA-Net: Collaborative Attention Network for Multi-modal Diagnosis of Gliomas

  • Yin, Baocai
  • Cheng, Hu
  • Wang, Fengyan
  • Wang, Zengfu
2022 Book Section, cited 0 times
Deep neural network methods have led to impressive breakthroughs in the medical image field. Most of them focus on single-modal data, while diagnoses in clinical practice are usually determined based on multi-modal data, especially for tumor diseases. In this paper, we intend to find a way to effectively fuse radiology images and pathology images for the diagnosis of gliomas. To this end, we propose a collaborative attention network (CA-Net), which consists of three attention-based feature fusion modules: multi-instance attention, cross attention, and attention fusion. We first use an individual network for each modality to extract the original features. Multi-instance attention combines different informative patches in the pathology image to form a holistic pathology feature. Cross attention interacts between the two modalities and enhances single-modality features by exploring complementary information from the other modality. The cross attention matrices imply the feature reliability, so they are further utilized to obtain a coefficient for each modality to linearly fuse the enhanced features as the final representation in the attention fusion module. The three attention modules work collaboratively to discover a comprehensive representation. Our result on the CPM-RadPath dataset outperforms other fusion methods by a large margin, which demonstrates the effectiveness of the proposed method.

Brain Tumor Classification Based on MRI Images and Noise Reduced Pathology Images

  • Yin, Baocai
  • Cheng, Hu
  • Wang, Fengyan
  • Wang, Zengfu
2021 Book Section, cited 0 times
Gliomas are the most common and severe malignant tumors of the brain. The diagnosis and grading of gliomas are typically based on MRI images and pathology images. To improve the diagnosis accuracy and efficiency, we intend to design a framework for computer-aided diagnosis combining the two modalities. Without loss of generality, we first use an individual network for each modality to obtain the features and fuse them to predict the subtype of gliomas. For MRI images, we directly use a 3D CNN to extract features, supervised by a cross-entropy loss function. There are too many normal regions in abnormal whole slide pathology images (WSI), which affect the training of pathology features. We call these normal regions noise regions and propose two ideas to reduce them. Firstly, we introduce a nucleus segmentation model trained on public datasets. Regions that have a small number of nuclei are excluded from the subsequent training of tumor classification. Secondly, we apply a noise-rank module to further suppress the noise regions. After the noise reduction, we train a glioma classification model based on the remaining regions and obtain the features of pathology images. Finally, we fuse the features of the two modalities with a linearly weighted module. We evaluate the proposed framework on CPM-RadPath2020 and achieve the first rank on the validation set.

DIAGNOSIS OF LUNG CANCER USING MULTISCALE CONVOLUTIONAL NEURAL NETWORK

  • Yektai, Homayoon
  • Manthouri, Mohammad
Biomedical Engineering: Applications, Basis and Communications 2020 Journal Article, cited 0 times
Website
Lung cancer is one of the most dangerous diseases and causes a huge number of cancer deaths worldwide. Early detection of lung cancer is the only possible way to improve a patient’s chance of survival. This study presents an innovative automated diagnosis classification method for computed tomography (CT) images of the lungs. In this paper, CT scans of lung images were analyzed with multiscale convolution. The entire lung is segmented from the CT images and the parameters are calculated from the segmented image. Using image processing techniques and identifying patterns in the detection of lung cancer from CT images reduces human error in detecting tumors and speeds up diagnosis. Artificial neural networks (ANNs) have been widely used to detect lung cancer and have significantly reduced the percentage of errors. Therefore, in this paper, the convolutional neural network (CNN), which is among the most effective methods, is used for the detection of various types of cancers. This study presents a Multiscale Convolutional Neural Network (MCNN) approach for the classification of tumors. Based on the structure of the MCNN, which presents the CT image to several deep convolutional neural networks at different sizes and resolutions, the classical handcrafted feature extraction step is avoided. The proposed approach gives better classification rates than the classical state-of-the-art methods, allowing a safer computer-aided diagnosis of pleural cancer. This study reaches a diagnostic accuracy of 93.7±0.3 using the multiscale convolution technique, which reveals the efficiency of the proposed method.

Effects of phase aberration on transabdominal focusing for a large aperture, low f-number histotripsy transducer

  • Yeats, Ellen
  • Gupta, Dinank
  • Xu, Zhen
  • Hall, Timothy L
Physics in Medicine & Biology 2022 Journal Article, cited 4 times
Website

Development and Validation of an Automated Image-Based Deep Learning Platform for Sarcopenia Assessment in Head and Neck Cancer

  • Ye, Zezhong
  • Saraf, Anurag
  • Ravipati, Yashwanth
  • Hoebers, Frank
  • Catalano, Paul J.
  • Zha, Yining
  • Zapaishchykova, Anna
  • Likitlersuang, Jirapat
  • Guthier, Christian
  • Tishler, Roy B.
  • Schoenfeld, Jonathan D.
  • Margalit, Danielle N.
  • Haddad, Robert I.
  • Mak, Raymond H.
  • Naser, Mohamed
  • Wahid, Kareem A.
  • Sahlsten, Jaakko
  • Jaskari, Joel
  • Kaski, Kimmo
  • Mäkitie, Antti A.
  • Fuller, Clifton D.
  • Aerts, Hugo J. W. L.
  • Kann, Benjamin H.
2023 Journal Article, cited 0 times
Website
Sarcopenia is an established prognostic factor in patients with head and neck squamous cell carcinoma (HNSCC); the quantification of sarcopenia assessed by imaging is typically achieved through the skeletal muscle index (SMI), which can be derived from cervical skeletal muscle segmentation and cross-sectional area. However, manual muscle segmentation is labor intensive, prone to interobserver variability, and impractical for large-scale clinical use. To develop and externally validate a fully automated image-based deep learning platform for cervical vertebral muscle segmentation and SMI calculation and evaluate associations with survival and treatment toxicity outcomes. For this prognostic study, a model development data set was curated from publicly available and deidentified data from patients with HNSCC treated at MD Anderson Cancer Center between January 1, 2003, and December 31, 2013. A total of 899 patients undergoing primary radiation for HNSCC with abdominal computed tomography scans and complete clinical information were selected. An external validation data set was retrospectively collected from patients undergoing primary radiation therapy between January 1, 1996, and December 31, 2013, at Brigham and Women’s Hospital. The data analysis was performed between May 1, 2022, and March 31, 2023. C3 vertebral skeletal muscle segmentation during radiation therapy for HNSCC. Overall survival and treatment toxicity outcomes of HNSCC. The total patient cohort comprised 899 patients with HNSCC (median [range] age, 58 [24-90] years; 140 female [15.6%] and 755 male [84.0%]). Dice similarity coefficients for the validation set (n = 96) and internal test set (n = 48) were 0.90 (95% CI, 0.90-0.91) and 0.90 (95% CI, 0.89-0.91), respectively, with a mean 96.2% acceptable rate between 2 reviewers on external clinical testing (n = 377). Estimated cross-sectional area and SMI values were associated with manually annotated values (Pearson r = 0.99; P < .001) across data sets. On multivariable Cox proportional hazards regression, SMI-derived sarcopenia was associated with worse overall survival (hazard ratio, 2.05; 95% CI, 1.04-4.04; P = .04) and longer feeding tube duration (median [range], 162 [6-1477] vs 134 [15-1255] days; hazard ratio, 0.66; 95% CI, 0.48-0.89; P = .006) than no sarcopenia. This prognostic study’s findings show external validation of a fully automated deep learning pipeline to accurately measure sarcopenia in HNSCC and an association with important disease outcomes. The pipeline could enable the integration of sarcopenia assessment into clinical decision making for individuals with HNSCC.

Optimizing interstitial photodynamic therapy with custom cylindrical diffusers

  • Yassine, Abdul-Amir
  • Lilge, Lothar
  • Betz, Vaughn
Journal of Biophotonics 2018 Journal Article, cited 0 times
Website

Machine learning for real-time optical property recovery in interstitial photodynamic therapy: a simulation-based study

  • Yassine, Abdul-Amir
  • Lilge, Lothar
  • Betz, Vaughn
Biomedical Optics Express 2021 Journal Article, cited 1 times
Website

Automatic interstitial photodynamic therapy planning via convex optimization

  • Yassine, Abdul-Amir
  • Kingsford, William
  • Xu, Yiwen
  • Cassidy, Jeffrey
  • Lilge, Lothar
  • Betz, Vaughn
Biomedical Optics Express 2018 Journal Article, cited 3 times
Website

A NOVEL COMPARATIVE STUDY FOR AUTOMATIC THREE-CLASS AND FOUR-CLASS COVID-19 CLASSIFICATION ON X-RAY IMAGES USING DEEP LEARNING

  • Yaşar, Hüseyin
  • Ceylan, Murat
2022 Journal Article, cited 0 times
Website
The contagiousness of the COVID-19 virus, which is thought to have been transmitted from an animal to a human during the last months of 2019, is higher than that of the MERS-CoV and SARS-CoV viruses from the same family. The high rate of contagion has caused the COVID-19 virus to spread rapidly to all countries of the world. Detecting cases quickly is of great importance for controlling the spread of the COVID-19 virus. Therefore, the development of systems that make automatic COVID-19 diagnoses using artificial intelligence approaches based on X-ray, CT, and ultrasound images is an urgent and indispensable requirement. To increase the number of X-ray images used within the study, a mixed data set was created by combining eight different data sets, thus maximizing the scope of the study. In total, 9,667 X-ray images were used, including 3,405 COVID-19 samples, 2,780 bacterial pneumonia samples, 1,493 viral pneumonia samples, and 1,989 healthy samples. In this study, which aims to diagnose COVID-19 disease using X-ray images, automatic classification was performed using two different classification structures: COVID-19 Pneumonia/Other Pneumonia/Healthy and COVID-19 Pneumonia/Bacterial Pneumonia/Viral Pneumonia/Healthy. Convolutional Neural Networks (CNNs), a successful deep learning method, were used as classifiers. A total of seven CNN architectures were used: Mobilenetv2, Resnet101, Googlenet, Xception, Densenet201, Efficientnetb0, and Inceptionv3. Classification results were obtained from the original X-ray images and from images derived from them using Local Binary Pattern and Local Entropy. New classification results were then calculated from these results using a pipeline algorithm. Detailed results were obtained to cover the scope of the study. According to the experiments, the three most successful CNN architectures for both three-class and four-class automatic classification were Densenet201, Xception, and Inceptionv3, respectively. The pipeline algorithm used in the study also proved very useful for improving the results. The study results show that improvements of up to 1.57% were achieved in some comparison parameters.

A novel study for automatic two-class COVID-19 diagnosis (between COVID-19 and Healthy, Pneumonia) on X-ray images using texture analysis and 2-D/3-D convolutional neural networks

  • Yasar, H.
  • Ceylan, M.
Multimed Syst 2022 Journal Article, cited 0 times
Website
The pandemic caused by the COVID-19 virus affects the world widely and heavily. When examining CT, X-ray, and ultrasound images, radiologists must first determine whether there are signs of COVID-19 in the images; that is, COVID-19/Healthy detection is made. The second determination is the separation of pneumonia caused by the COVID-19 virus from pneumonia caused by a bacterium or a virus other than COVID-19. This distinction is key in determining the treatment and isolation procedure to be applied to the patient. In this study, which aims to diagnose COVID-19 early using X-ray images, automatic two-class classification was carried out under four different headings: COVID-19/Healthy, COVID-19 Pneumonia/Bacterial Pneumonia, COVID-19 Pneumonia/Viral Pneumonia, and COVID-19 Pneumonia/Other Pneumonia. For this study, 3405 COVID-19, 2780 bacterial pneumonia, 1493 viral pneumonia, and 1989 healthy images, obtained by combining eight different open-access data sets, were used. Besides the original X-ray images alone, classification results were obtained from images derived using Local Binary Pattern (LBP) and Local Entropy (LE). The classification procedures were repeated with the original, LBP, and LE images combined in various combinations. 2-D CNN (two-dimensional convolutional neural network) and 3-D CNN (three-dimensional convolutional neural network) architectures were used as classifiers within the scope of the study. Mobilenetv2, Resnet101, and Googlenet architectures were used as 2-D CNNs. A 24-layer 3-D CNN architecture was also designed and used. Our study is the first to analyze the effect of diversifying the input data type on the classification results of 2-D/3-D CNN architectures. The results indicate that diversifying X-ray images with texture analysis methods in the diagnosis of COVID-19 and including them as CNN input provides significant improvements in the results. Also, the 3-D CNN architecture can be an important alternative for achieving a high classification result.

Deep Learning–Based Approaches to Improve Classification Parameters for Diagnosing COVID-19 from CT Images

  • Yasar, H.
  • Ceylan, M.
Cognit Comput 2021 Journal Article, cited 0 times
Website
Patients infected with the COVID-19 virus develop severe pneumonia, which generally leads to death. Radiological evidence has demonstrated that the disease causes interstitial involvement in the lungs and lung opacities, as well as bilateral ground-glass opacities and patchy opacities. In this study, new pipeline suggestions are presented, and their performance is tested to decrease the number of false-negative (FN), false-positive (FP), and total misclassified images (FN + FP) in the diagnosis of COVID-19 (COVID-19/non-COVID-19 and COVID-19 pneumonia/other pneumonia) from CT lung images. A total of 4320 CT lung images, of which 2554 were related to COVID-19 and 1766 to non-COVID-19, were used for the test procedures in COVID-19 and non-COVID-19 classifications. Similarly, a total of 3801 CT lung images, of which 2554 were related to COVID-19 pneumonia and 1247 to other pneumonia, were used for the test procedures in COVID-19 pneumonia and other pneumonia classifications. A 24-layer convolutional neural network (CNN) architecture was used for the classification processes. Within the scope of this study, the results of two experiments were obtained by using CT lung images with and without local binary pattern (LBP) application, and sub-band images were obtained by applying dual-tree complex wavelet transform (DT-CWT) to these images. Next, new classification results were calculated from these two results by using the five pipeline approaches presented in this study. For COVID-19 and non-COVID-19 classification, the highest sensitivity, specificity, accuracy, F-1, and AUC values obtained without using pipeline approaches were 0.9676, 0.9181, 0.9456, 0.9545, and 0.9890, respectively; using pipeline approaches, the values were 0.9832, 0.9622, 0.9577, 0.9642, and 0.9923, respectively. For COVID-19 pneumonia/other pneumonia classification, the highest sensitivity, specificity, accuracy, F-1, and AUC values obtained without using pipeline approaches were 0.9615, 0.7270, 0.8846, 0.9180, and 0.9370, respectively; using pipeline approaches, the values were 0.9915, 0.8140, 0.9071, 0.9327, and 0.9615, respectively. The results of this study show that classification success can be increased by using the proposed pipeline approaches while keeping the time needed to obtain per-image results low.

Rhinological Status of Patients with Nasolacrimal Duct Obstruction

  • Yartsev, Vasily D.
  • Atkova, Eugenia L.
  • Rozmanov, Eugeniy O.
  • Yartseva, Nina D.
International Archives of Otorhinolaryngology 2021 Journal Article, cited 0 times
Website
Introduction Studying the state of the nasal cavity and its sinuses and the morphometric parameters of the inferior nasal conchae, as well as a comparative analysis of the obtained values in patients with primary (PANDO) and secondary acquired nasolacrimal duct obstruction (SALDO), is relevant. Objective To study the rhinological status of patients with PANDO and SALDO. Methods The present study was based on the results of computed tomography (CT) dacryocystography in patients with PANDO (n = 45) and SALDO due to exposure to radioactive iodine (n = 14). The control group included CT images of paranasal sinuses in patients with no pathology (n = 49). Rhinological status according to the Newman and Lund-Mackay scales and volume of the inferior nasal conchae were assessed. Statistical processing included nonparametric methods, the Pearson χ2 test, and the Spearman rank correlation method. Results The difference in values of the Newman and Lund-Mackay scales for the tested groups was significant. A significant difference in Newman scale scores was revealed when comparing the results of patients with SALDO and PANDO. Comparing the Lund-Mackay scale scores, a significant difference was found between the results of patients with SALDO and PANDO and between the results of patients with PANDO and the control group. Conclusion It was demonstrated that the rhinological status of patients with PANDO was worse than that of patients with SALDO and of subjects in the control group. No connection was found between the volume of the inferior nasal conchae and the development of lacrimal duct obstruction. Keywords: nasolacrimal duct; sinus; computed tomography; dacryocystography; Newman scale; Lund-Mackay scale.

State-of-the-Art CNN Optimizer for Brain Tumor Segmentation in Magnetic Resonance Images

  • Yaqub, M.
  • Jinchao, F.
  • Zia, M. S.
  • Arshid, K.
  • Jia, K.
  • Rehman, Z. U.
  • Mehmood, A.
2020 Journal Article, cited 0 times
Website
Brain tumors have become a leading cause of death around the globe. The main reason for this epidemic is the difficulty of conducting a timely diagnosis of the tumor. Fortunately, magnetic resonance images (MRI) are utilized to diagnose tumors in most cases. The performance of a Convolutional Neural Network (CNN) depends on many factors (i.e., weight initialization, optimization, batches and epochs, learning rate, activation function, loss function, and network topology), data quality, and specific combinations of these model attributes. When we deal with a segmentation or classification problem, utilizing a single optimizer is considered weak evidence unless the selection of that optimizer is backed up by a strong argument. Therefore, an optimizer selection process is important to justify the use of a single optimizer for these decision problems. In this paper, we provide a comprehensive comparative analysis of popular CNN optimizers to benchmark segmentation for improvement. In detail, we perform a comparative analysis of 10 different state-of-the-art gradient-descent-based optimizers, namely Adaptive Gradient (Adagrad), Adaptive Delta (AdaDelta), Stochastic Gradient Descent (SGD), Adaptive Momentum (Adam), Cyclic Learning Rate (CLR), Adamax, Root Mean Square Propagation (RMSProp), Nesterov Adaptive Momentum (Nadam), and Nesterov accelerated gradient (NAG) for CNNs. The experiments were performed on the BraTS2015 data set. The Adam optimizer achieved the best accuracy, 99.2%, in enhancing the CNN's classification and segmentation ability.
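
As an illustration of the comparison protocol (not the authors' code), the sketch below trains the same model under several of the listed optimizers in PyTorch. `make_model` and `train_loader` are hypothetical placeholders; NAG is realized as SGD with Nesterov momentum, and CLR would be added as a learning-rate scheduler rather than a separate optimizer.

```python
# Sketch: benchmark gradient-descent optimizers on a fixed model and data.
# `make_model` and `train_loader` are assumed to exist (hypothetical names).
import torch

OPTIMIZERS = {
    "SGD":      lambda p: torch.optim.SGD(p, lr=1e-2, momentum=0.9),
    "NAG":      lambda p: torch.optim.SGD(p, lr=1e-2, momentum=0.9, nesterov=True),
    "Adagrad":  lambda p: torch.optim.Adagrad(p, lr=1e-2),
    "AdaDelta": lambda p: torch.optim.Adadelta(p),
    "Adam":     lambda p: torch.optim.Adam(p, lr=1e-3),
    "Adamax":   lambda p: torch.optim.Adamax(p, lr=2e-3),
    "RMSProp":  lambda p: torch.optim.RMSprop(p, lr=1e-3),
    "Nadam":    lambda p: torch.optim.NAdam(p, lr=2e-3),
}

def compare_optimizers(make_model, train_loader, device="cpu", epochs=1):
    loss_fn = torch.nn.CrossEntropyLoss()
    results = {}
    for name, build in OPTIMIZERS.items():
        model = make_model().to(device)        # fresh weights per optimizer
        opt = build(model.parameters())
        for _ in range(epochs):
            for x, y in train_loader:
                x, y = x.to(device), y.to(device)
                opt.zero_grad()
                loss = loss_fn(model(x), y)
                loss.backward()
                opt.step()
        results[name] = float(loss)            # last-batch loss as a crude proxy
    return results
```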

Clinically relevant modeling of tumor growth and treatment response

  • Yankeelov, Thomas E
  • Atuegwu, Nkiruka
  • Hormuth, David
  • Weis, Jared A
  • Barnes, Stephanie L
  • Miga, Michael I
  • Rericha, Erin C
  • Quaranta, Vito
Science Translational Medicine 2013 Journal Article, cited 70 times
Website
Current mathematical models of tumor growth are limited in their clinical application because they require input data that are nearly impossible to obtain with sufficient spatial resolution in patients even at a single time point--for example, extent of vascularization, immune infiltrate, ratio of tumor-to-normal cells, or extracellular matrix status. Here we propose the use of emerging, quantitative tumor imaging methods to initialize a new generation of predictive models. In the near future, these models could be able to forecast clinical outputs, such as overall response to treatment and time to progression, which will provide opportunities for guided intervention and improved patient care.

An Improvement of Survival Stratification in Glioblastoma Patients via Combining Subregional Radiomics Signatures

  • Yang, Y.
  • Han, Y.
  • Hu, X.
  • Wang, W.
  • Cui, G.
  • Guo, L.
  • Zhang, X.
Front Neurosci 2021 Journal Article, cited 0 times
Website
Purpose: To investigate whether combining multiple radiomics signatures derived from the subregions of glioblastoma (GBM) can improve survival prediction of patients with GBM. Methods: In total, 129 patients were included in this study and split into training (n = 99) and test (n = 30) cohorts. Radiomics features were extracted from each tumor region, and radiomics scores were then obtained separately using least absolute shrinkage and selection operator (LASSO) Cox regression. A clinical nomogram was also constructed using various clinical risk factors. Radiomics nomograms were constructed by combining a single radiomics signature from the whole tumor region with clinical risk factors, or by combining three radiomics signatures from three tumor subregions with clinical risk factors. The performance of these models was assessed by discrimination, calibration, and clinical usefulness metrics and was compared with that of the clinical nomogram. Results: Incorporating the three radiomics signatures, i.e., Radscores for ET, NET, and ED, into the radiomics-based nomogram improved the performance in estimating survival (C-index: training/test cohort: 0.717/0.655) compared with that of the clinical nomogram (C-index: training/test cohort: 0.633/0.560) and that of the radiomics nomogram based on single-region radiomics signatures (C-index: training/test cohort: 0.656/0.535). Conclusion: The multiregional radiomics nomogram exhibited a favorable survival stratification accuracy.
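
The per-region Radscore construction described above (a LASSO-penalized Cox fit whose linear predictor becomes the score) can be sketched with lifelines; the column names below are placeholders, and the penalty strength would normally be tuned by cross-validation rather than fixed.

```python
# Sketch: derive a radiomics score (Radscore) via LASSO-penalized Cox regression.
from lifelines import CoxPHFitter

def fit_radscore(df, duration_col="os_months", event_col="event", penalizer=0.1):
    """df: pandas DataFrame, one row per patient, with radiomics feature
    columns plus the survival columns named by duration_col and event_col."""
    cph = CoxPHFitter(penalizer=penalizer, l1_ratio=1.0)  # l1_ratio=1 -> LASSO
    cph.fit(df, duration_col=duration_col, event_col=event_col)
    feats = cph.params_.index                 # covariate names
    radscore = df[feats] @ cph.params_        # linear predictor as the Radscore
    return cph, radscore
```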

Cascaded Coarse-to-Fine Neural Network for Brain Tumor Segmentation

  • Yang, Shuojue
  • Guo, Dong
  • Wang, Lu
  • Wang, Guotai
2021 Book Section, cited 0 times
A cascaded framework of coarse-to-fine networks is proposed to segment brain tumors from multi-modality MR images into three subregions: enhancing tumor, whole tumor, and tumor core. The framework is designed to decompose this multi-class segmentation into two sequential tasks according to the hierarchical relationship among these regions. In the first task, a coarse-to-fine model based on Global Context Network predicts the segmentation of the whole tumor, which provides a bounding box of all three substructures to crop the input MR images. In the second task, the cropped multi-modality MR images are fed into another two coarse-to-fine models based on NvNet, trained on small patches, to generate segmentations of the tumor core and enhancing tumor, respectively. Experiments with the BraTS 2020 validation set show that the proposed method achieves average Dice scores of 0.8003, 0.9123, and 0.8630 for enhancing tumor, whole tumor, and tumor core, respectively. The corresponding values for the BraTS 2020 testing set were 0.81715, 0.88229, and 0.83085, respectively.

Snake-based interactive tooth segmentation for 3D mandibular meshes

  • Yang, Rui
  • Abdi, Amir H.
  • Eghbal, Atabak
  • Wang, Edward
  • Tran, Khanh Linh
  • Yang, David
  • Hodgson, Antony
  • Prisman, Eitan
  • Fels, Sidney
  • Linte, Cristian A.
  • Siewerdsen, Jeffrey H.
2021 Conference Paper, cited 0 times
Website
Mandibular meshes segmented from computed tomography (CT) images contain rich information about dentition conditions, which impairs the performance of shape completion algorithms relying on such data but can benefit virtual planning for oral reconstructive surgeries. To locate the alveolar process and remove the dentition area, we propose a semiautomatic method using non-rigid registration, an active contour model, and constructive solid geometry (CSG) operations. An easy-to-use interactive tool was developed, allowing users to adjust the tooth crown contour position. A validation study and a comparison study were conducted for method evaluation. In the validation study, we removed teeth for 28 models acquired from Vancouver General Hospital (VGH) and ran a shape completion test. Regarding the 95th percentile Hausdorff distance (HD95), using edentulous models produced significantly better predictions of the premorbid shapes of diseased mandibles than using models with inconsistent dentition conditions (Z = −2.484, p = 0.01). The volumetric Dice score (DSC) showed no significant difference. In the second study, we compared the proposed method to manual removal in terms of manual processing time, symmetric HD95, and symmetric root mean square deviation (RMSD). The results indicate that our method reduced the manual processing time by 40% on average and approached the accuracy of manual tooth segmentation. These results are promising and warrant further efforts toward clinical usage. This work forms the basis of a useful tool for coupling jaw reconstruction and restorative dentition in patient treatment planning.

Learning Dynamic Convolutions for Multi-modal 3D MRI Brain Tumor Segmentation

  • Yang, Qiushi
  • Yuan, Yixuan
2021 Book Section, cited 0 times
Accurate automated brain tumor segmentation with 3D magnetic resonance images (MRIs) liberates doctors from tedious annotation work and further supports monitoring and prompt treatment of the disease. Many recent deep convolutional neural networks (DCNNs) achieve tremendous success in medical image analysis, especially tumor segmentation, but they usually use static networks without considering the inherent diversity of multi-modal inputs. In this paper, we introduce a dynamic convolutional module into brain tumor segmentation that helps learn input-adaptive parameters for specific multi-modal images. To the best of our knowledge, this is the first work to adopt dynamic convolutional networks to segment brain tumors with 3D MRI data. In addition, we employ multiple branches to learn low-level features from multi-modal inputs in an end-to-end fashion. We further investigate boundary information and propose a boundary-aware module to encourage our model to pay more attention to important pixels. Experimental results on the testing dataset and a cross-validation dataset split from the training dataset of the BraTS 2020 Challenge demonstrate that our proposed framework obtains competitive Dice scores compared with state-of-the-art approaches.

A Novel Deep Learning Framework for Standardizing the Label of OARs in CT

  • Yang, Qiming
  • Chao, Hongyang
  • Nguyen, Dan
  • Jiang, Steve
2019 Conference Paper, cited 0 times
When organs at risk (OARs) are contoured in computed tomography (CT) images for radiotherapy treatment planning, the labels are often inconsistent, which severely hampers the collection and curation of clinical data for research purposes. Currently, data cleaning is mainly done manually, which is time-consuming. The existing methods for automatically relabeling OARs remain impractical with real patient data, due to inconsistent delineation and similar small-volume OARs. This paper proposes an improved data augmentation technique according to the characteristics of clinical data. In addition, a novel 3D non-local convolutional neural network is proposed, which includes a decision-making network with a voting strategy. The resulting model can automatically identify OARs and solve the problems in existing methods, achieving accurate OAR relabeling. We used partial data from a public head-and-neck dataset (HN_PETCT) for training, and then tested the model on datasets from three different medical institutions. We obtained state-of-the-art results for identifying 28 OARs in the head-and-neck region, and our model is capable of handling multi-center datasets, indicating strong generalization ability. Compared to the baseline, the final result of our model achieved a significant improvement in the average true positive rate (TPR) on the three test datasets (+8.27%, +2.39%, and +5.53%, respectively). More importantly, the F1 score for a small-volume OAR with only 9 training samples increased from 28.63% to 91.17%.

Development of a radiomics nomogram based on the 2D and 3D CT features to predict the survival of non-small cell lung cancer patients

  • Yang, Lifeng
  • Yang, Jingbo
  • Zhou, Xiaobo
  • Huang, Liyu
  • Zhao, Weiling
  • Wang, Tao
  • Zhuang, Jian
  • Tian, Jie
European Radiology 2018 Journal Article, cited 0 times
Website

CT images with expert manual contours of thoracic cancer for benchmarking auto-segmentation accuracy

  • Yang, J.
  • Veeraraghavan, H.
  • van Elmpt, W.
  • Dekker, A.
  • Gooding, M.
  • Sharp, G.
Med Phys 2020 Journal Article, cited 0 times
Website
PURPOSE: Automatic segmentation offers many benefits for radiotherapy treatment planning; however, the lack of publicly available benchmark datasets limits the clinical use of automatic segmentation. In this work, we present a well-curated computed tomography (CT) dataset of high-quality manually drawn contours from patients with thoracic cancer that can be used to evaluate the accuracy of thoracic normal tissue auto-segmentation systems. ACQUISITION AND VALIDATION METHODS: Computed tomography scans of 60 patients undergoing treatment simulation for thoracic radiotherapy were acquired from three institutions: MD Anderson Cancer Center, Memorial Sloan Kettering Cancer Center, and the MAASTRO clinic. Each institution provided CT scans from 20 patients, including mean intensity projection four-dimensional CT (4D CT), exhale phase (4D CT), or free-breathing CT scans depending on their clinical practice. All CT scans covered the entire thoracic region with a 50-cm field of view and slice spacing of 1, 2.5, or 3 mm. Manual contours of left/right lungs, esophagus, heart, and spinal cord were retrieved from the clinical treatment plans. These contours were checked for quality and edited if necessary to ensure adherence to RTOG 1106 contouring guidelines. DATA FORMAT AND USAGE NOTES: The CT images and RTSTRUCT files are available in DICOM format. The regions of interest were named according to the nomenclature recommended by American Association of Physicists in Medicine Task Group 263 as Lung_L, Lung_R, Esophagus, Heart, and SpinalCord. This dataset is available on The Cancer Imaging Archive (funded by the National Cancer Institute) under Lung CT Segmentation Challenge 2017 (http://doi.org/10.7937/K9/TCIA.2017.3r3fvz08). POTENTIAL APPLICATIONS: This dataset provides CT scans with well-delineated manually drawn contours from patients with thoracic cancer that can be used to evaluate auto-segmentation systems. Additional anatomies could be supplied in the future to enhance the existing library of contours.
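
Since the collection is distributed as DICOM with RTSTRUCT files using the TG-263 names listed above, a short pydicom check of the ROI labels might look like this (the file path is a placeholder):

```python
# Sketch: list ROI names in an RTSTRUCT file and flag any missing TG-263 names.
import pydicom

EXPECTED = {"Lung_L", "Lung_R", "Esophagus", "Heart", "SpinalCord"}

def check_roi_names(rtstruct_path):
    ds = pydicom.dcmread(rtstruct_path)
    names = {roi.ROIName for roi in ds.StructureSetROISequence}
    return names, EXPECTED - names   # names found, and any expected names missing

# found, missing = check_roi_names("path/to/rtstruct.dcm")  # placeholder path
```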

Autosegmentation for thoracic radiation treatment planning: A grand challenge at AAPM 2017

  • Yang, J.
  • Veeraraghavan, H.
  • Armato, S. G., 3rd
  • Farahani, K.
  • Kirby, J. S.
  • Kalpathy-Kramer, J.
  • van Elmpt, W.
  • Dekker, A.
  • Han, X.
  • Feng, X.
  • Aljabar, P.
  • Oliveira, B.
  • van der Heyden, B.
  • Zamdborg, L.
  • Lam, D.
  • Gooding, M.
  • Sharp, G. C.
Med Phys 2018 Journal Article, cited 172 times
Website
PURPOSE: This report presents the methods and results of the Thoracic Auto-Segmentation Challenge organized at the 2017 Annual Meeting of American Association of Physicists in Medicine. The purpose of the challenge was to provide a benchmark dataset and platform for evaluating performance of autosegmentation methods of organs at risk (OARs) in thoracic CT images. METHODS: Sixty thoracic CT scans provided by three different institutions were separated into 36 training, 12 offline testing, and 12 online testing scans. Eleven participants completed the offline challenge, and seven completed the online challenge. The OARs were left and right lungs, heart, esophagus, and spinal cord. Clinical contours used for treatment planning were quality checked and edited to adhere to the RTOG 1106 contouring guidelines. Algorithms were evaluated using the Dice coefficient, Hausdorff distance, and mean surface distance. A consolidated score was computed by normalizing the metrics against interrater variability and averaging over all patients and structures. RESULTS: The interrater study revealed highest variability in Dice for the esophagus and spinal cord, and in surface distances for lungs and heart. Five out of seven algorithms that participated in the online challenge employed deep-learning methods. Although the top three participants using deep learning produced the best segmentation for all structures, there was no significant difference in the performance among them. The fourth place participant used a multi-atlas-based approach. The highest Dice scores were produced for lungs, with averages ranging from 0.95 to 0.98, while the lowest Dice scores were produced for esophagus, with a range of 0.55-0.72. CONCLUSION: The results of the challenge showed that the lungs and heart can be segmented fairly accurately by various algorithms, while deep-learning methods performed better on the esophagus. Our dataset together with the manual contours for all training cases continues to be available publicly as an ongoing benchmarking resource.
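
Of the three evaluation metrics, the Dice coefficient is sketched earlier in this list; the two surface metrics can be computed on binary masks as follows. This is a generic implementation, not the challenge's evaluation code, and distances come out in voxel units (multiply by the voxel spacing for millimetres).

```python
# Sketch: Hausdorff and mean surface distance between two binary masks.
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial import cKDTree

def surface_points(mask):
    """Coordinates of surface voxels: the mask minus its erosion."""
    mask = mask.astype(bool)
    return np.argwhere(mask & ~binary_erosion(mask))

def surface_metrics(pred, truth):
    p, t = surface_points(pred), surface_points(truth)
    d_pt = cKDTree(t).query(p)[0]    # pred surface -> nearest truth surface
    d_tp = cKDTree(p).query(t)[0]    # truth surface -> nearest pred surface
    hausdorff = max(d_pt.max(), d_tp.max())
    mean_surface = (d_pt.mean() + d_tp.mean()) / 2.0
    return hausdorff, mean_surface
```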

Abdominal CT pancreas segmentation using multi-scale convolution with aggregated transformations

  • Yang, Jin
  • Marcus, Daniel S.
  • Sotiras, Aristeidis
  • Iftekharuddin, Khan M.
  • Chen, Weijie
2023 Conference Paper, cited 0 times
Convolutional neural networks (CNNs) are a popular choice for medical image segmentation. However, they may be challenged by the large inter-subject variation in organ shapes and sizes due to CNNs typically employing convolutions with fixed-sized local receptive fields. To address this limitation, we proposed multi-scale aggregated residual convolution (MARC) and iterative multi-scale aggregated residual convolution (iMARC) to capture finer and richer features at various scales. Our goal is to improve single convolutions’ representation capabilities. This is achieved by employing convolutions with varying-sized receptive fields, combining multiple convolutions into a deeper one, and dividing single convolutions into a set of channel-independent sub-convolutions. These implementations result in an increase in their depth, width, and cardinality. The proposed MARC and iMARC can be easily integrated into general CNN architectures and trained end-to-end. To evaluate the improvements of MARC and iMARC on CNNs’ segmentation capabilities, we integrated MARC and iMARC into a standard 2D U-Net architecture for pancreas segmentation on abdominal computed tomography (CT) images. The results showed that our proposed MARC and iMARC enhanced the representation capabilities of single convolutions, resulting in improved segmentation performance with lower computational complexity.
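
The abstract does not spell out the MARC block's internals; one plausible reading (parallel grouped convolutions at several kernel sizes, aggregated and wrapped in a residual connection, so that depth, width, and cardinality all increase) is sketched below as an assumption, not the authors' exact architecture.

```python
# Sketch of a multi-scale aggregated residual convolution block (one plausible
# reading of MARC, not the authors' design).
import torch
import torch.nn as nn

class MultiScaleAggregatedConv(nn.Module):
    def __init__(self, channels, kernel_sizes=(3, 5, 7), groups=8):
        super().__init__()
        # channels must be divisible by groups for grouped convolution
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, k, padding=k // 2, groups=groups)
            for k in kernel_sizes
        )
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        out = sum(branch(x) for branch in self.branches)  # aggregate the scales
        return self.act(self.bn(out) + x)                 # residual connection

# Usage: y = MultiScaleAggregatedConv(64)(torch.randn(1, 64, 128, 128))
```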

Combining Global Information with Topological Prior for Brain Tumor Segmentation

  • Yang, Hua
  • Shen, Zhiqiang
  • Li, Zhaopei
  • Liu, Jinqing
  • Xiao, Jinchao
2022 Book Section, cited 0 times
Gliomas are the most common and aggressive malignant primary brain tumors. Automatic brain tumor segmentation from multi-modality magnetic resonance images using deep learning methods is critical for glioma diagnosis. Deep learning segmentation architectures, especially those based on fully convolutional neural networks, have shown great performance on medical image segmentation. However, these approaches cannot explicitly model global information and overlook the topological structure of lesion regions, which leaves room for improvement. In this paper, we propose a convolution-and-transformer network (COTRNet) to explicitly capture global information and a topology-aware loss to constrain the network to learn topological information. Moreover, we exploit transfer learning by using parameters pretrained on ImageNet, and deep supervision by adding multi-level predictions, to further improve the segmentation performance. COTRNet achieved Dice scores of 78.08%, 76.18%, and 83.92% for the enhancing tumor, the tumor core, and the whole tumor segmentation in the Brain Tumor Segmentation Challenge 2021. Experimental results demonstrated the effectiveness of the proposed method.

Research on the Content-Based Classification of Medical Image

  • Yang, Hui
  • Liu, Feng
  • Wang, Zhiqi
  • Tang, Han
  • Sun, Shuyang
  • Sun, Shilei
2017 Journal Article, cited 1 times
Website
Medical images have increased tremendously in number and category in recent years as the devices generating them have become more and more advanced. In this paper, four classifiers for automatically identifying medical images of different body parts are explored and implemented. Classic and recognized image descriptors such as the wavelet transform and SIFT are utilized and combined with an SVM and a proposed modified KNN to verify the validity of traditional classification methods when applied to medical images. In the process, a novel wavelet feature representation is advanced in combination with a proposed tuned KNN. This wavelet feature is also applied with the SVM. SIFT and its variant, dense SIFT, are both employed to extract image features, which are formatted by the spatial pyramid model into a concatenated histogram. All these methods are compared with one another for accuracy and efficiency. Moreover, a convolutional neural network (CNN) is constructed to classify medical images. We show that, with regard to the various types and huge numbers of medical images, both traditional methods and deep learning approaches such as CNNs can achieve highly accurate results. The methods illustrated in this paper can all be reasonably applied to medical imaging applications, with variation in speed and accuracy.

Efficient diagnosis of hematologic malignancies using bone marrow microscopic images: A method based on MultiPathGAN and MobileViTv2

  • Yang, G.
  • Qin, Z.
  • Mu, J.
  • Mao, H.
  • Mao, H.
  • Han, M.
Comput Methods Programs Biomed 2023 Journal Article, cited 0 times
Website
BACKGROUND AND OBJECTIVES: Hematologic malignancies, including the associated multiple subtypes, are critically threatening to human health. The timely detection of malignancies is crucial for their effective treatment. In this regard, the examination of bone marrow smears constitutes a crucial step. Nonetheless, the conventional approach to cell identification and enumeration is laborious and time-intensive. Therefore, the present study aimed to develop a method for the efficient diagnosis of these malignancies directly from bone marrow microscopic images. METHODS: A deep learning-based framework was developed to facilitate the diagnosis of common hematologic malignancies. First, a total of 2033 microscopic images of bone marrow analysis, including the images for 6 disease types and 1 healthy control, were collected from two Chinese medical websites. Next, the collected images were classified into the training, validation, and test datasets in the ratio of 7:1:2. Subsequently, a method of stain normalization to multi-domains (stain domain augmentation) based on the MultiPathGAN model was developed to equalize the stain styles and expand the image datasets. Afterward, a lightweight hybrid model named MobileViTv2, which integrates the strengths of both CNNs and ViTs, was developed for disease classification. The resulting model was trained and utilized to diagnose patients based on multiple microscopic images of their bone marrow smears, obtained from a cohort of 61 individuals. RESULTS: MobileViTv2 exhibited an average accuracy of 94.28% when applied to the test set, with multiple myeloma, acute lymphocytic leukemia, and lymphoma revealed as the three diseases diagnosed with the highest accuracy values of 98%, 96%, and 96%, respectively. Regarding patient-level prediction, the average accuracy of MobileViTv2 was 96.72%. This model outperformed both CNN and ViT models in terms of accuracy, despite utilizing only 9.8 million parameters. When applied to two public datasets, MobileViTv2 exhibited accuracy values of 99.75% and 99.72%, respectively, and outperformed previous methods. CONCLUSIONS: The proposed framework could be applied directly to bone marrow microscopic images with different stain styles to efficiently establish the diagnosis of common hematologic malignancies.

MRI Brain Tumor Segmentation and Patient Survival Prediction Using Random Forests and Fully Convolutional Networks

  • Yang, Guang
  • Nigel Allinson
  • Xujiong Ye
2018 Book Section, cited 1 times
Website

Clinical application of mask region-based convolutional neural network for the automatic detection and segmentation of abnormal liver density based on hepatocellular carcinoma computed tomography datasets

  • Yang, C. J.
  • Wang, C. K.
  • Fang, Y. D.
  • Wang, J. Y.
  • Su, F. C.
  • Tsai, H. M.
  • Lin, Y. J.
  • Tsai, H. W.
  • Yeh, L. R.
PLoS One 2021 Journal Article, cited 0 times
Website
The aim of the study was to use a previously proposed mask region-based convolutional neural network (Mask R-CNN) for automatic abnormal liver density detection and segmentation based on hepatocellular carcinoma (HCC) computed tomography (CT) datasets from a radiological perspective. Training and testing datasets were acquired retrospectively from two hospitals in Taiwan. The training dataset contained 10,130 images of liver tumor densities with 11,258 regions of interest (ROIs). The positive testing dataset contained 1,833 images of liver tumor densities with 1,874 ROIs, and the negative testing data comprised 20,283 images without abnormal densities in the liver parenchyma. The Mask R-CNN was used to generate a medical model, and areas under the curve, true positive rates, false positive rates, and Dice coefficients were evaluated. For abnormal liver CT density detection in each image, the mean area under the curve, true positive rate, and false positive rate were 0.9490, 91.99%, and 13.68%, respectively. For segmentation ability, the highest mean Dice coefficient obtained was 0.8041. This study trained a Mask R-CNN on various HCC images to construct a medical model that serves as an auxiliary tool for alerting radiologists to abnormal CT density in liver scans; this model can simultaneously detect liver lesions and perform automatic instance segmentation.

Source free domain adaptation for medical image segmentation with fourier style mining

  • Yang, C.
  • Guo, X.
  • Chen, Z.
  • Yuan, Y.
Med Image Anal 2022 Journal Article, cited 0 times
Website
Unsupervised domain adaptation (UDA) aims to exploit the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled target domain. Existing UDA techniques typically assume that samples from source and target domains are freely accessible during the training. However, it may be impractical to access source images due to privacy concerns, especially in medical imaging scenarios with the patient information. To tackle this issue, we devise a novel source free domain adaptation framework with fourier style mining, where only a well-trained source segmentation model is available for the adaptation to the target domain. Our framework is composed of two stages: a generation stage and an adaptation stage. In the generation stage, we design a Fourier Style Mining (FSM) generator to inverse source-like images through statistic information of the pretrained source model and mutual Fourier Transform. These generated source-like images can provide source data distribution and benefit the domain alignment. In the adaptation stage, we design a Contrastive Domain Distillation (CDD) module to achieve feature-level adaptation, including a domain distillation loss to transfer relation knowledge and a domain contrastive loss to narrow down the domain gap by a self-supervised paradigm. Besides, a Compact-Aware Domain Consistency (CADC) module is proposed to enhance consistency learning by filtering out noisy pseudo labels with shape compactness metric, thus achieving output-level adaptation. Extensive experiments on cross-device and cross-centre datasets are conducted for polyp and prostate segmentation, and our method delivers impressive performance compared with state-of-the-art domain adaptation methods. The source code is available at https://github.com/CityU-AIM-Group/SFDA-FSM.
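
The paper's mutual Fourier Transform and style-mining generator are not detailed in the abstract. A common building block in this family of methods, and a reasonable mental model, swaps the low-frequency amplitude spectrum between two images while keeping the content phase; the sketch below illustrates that idea only, not the authors' FSM generator.

```python
# Sketch: Fourier-based style mixing -- replace the low-frequency amplitude
# of a content image with that of a style image, keeping the content phase.
import numpy as np

def fourier_style_transfer(content, style, beta=0.1):
    """content/style: 2D grayscale arrays of equal shape; beta sets the band size."""
    fc = np.fft.fftshift(np.fft.fft2(content))
    fs = np.fft.fftshift(np.fft.fft2(style))
    amp_c, phase_c = np.abs(fc), np.angle(fc)
    amp_s = np.abs(fs)
    h, w = content.shape
    bh, bw = int(h * beta), int(w * beta)
    ch, cw = h // 2, w // 2
    # swap the central (low-frequency) amplitude block
    amp_c[ch - bh:ch + bh, cw - bw:cw + bw] = amp_s[ch - bh:ch + bh, cw - bw:cw + bw]
    mixed = amp_c * np.exp(1j * phase_c)
    return np.real(np.fft.ifft2(np.fft.ifftshift(mixed)))
```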

Variation-Aware Federated Learning With Multi-Source Decentralized Medical Image Data

  • Yan, Z.
  • Wicaksana, J.
  • Wang, Z.
  • Yang, X.
  • Cheng, K. T.
IEEE J Biomed Health Inform 2021 Journal Article, cited 69 times
Website
Privacy concerns make it infeasible to construct a large medical image dataset by fusing small ones from different sources/institutions. Therefore, federated learning (FL) becomes a promising technique to learn from multi-source decentralized data with privacy preservation. However, the cross-client variation problem in medical image data would be the bottleneck in practice. In this paper, we propose a variation-aware federated learning (VAFL) framework, where the variations among clients are minimized by transforming the images of all clients onto a common image space. We first select one client with the lowest data complexity to define the target image space and synthesize a collection of images through a privacy-preserving generative adversarial network, called PPWGAN-GP. Then, a subset of those synthesized images, which effectively capture the characteristics of the raw images and are sufficiently distinct from any raw image, is automatically selected for sharing with other clients. For each client, a modified CycleGAN is applied to translate its raw images to the target image space defined by the shared synthesized images. In this way, the cross-client variation problem is addressed with privacy preservation. We apply the framework for automated classification of clinically significant prostate cancer and evaluate it using multi-source decentralized apparent diffusion coefficient (ADC) image data. Experimental results demonstrate that the proposed VAFL framework stably outperforms the current horizontal FL framework. As VAFL is independent of deep learning architectures for classification, we believe that the proposed framework is widely applicable to other medical image classification tasks.
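
For context, the horizontal FL baseline that VAFL is compared against rests on weight averaging; a minimal FedAvg sketch (generic, not the paper's code; assumes floating-point parameters):

    import copy
    import torch

    def federated_average(client_models, client_sizes):
        """Size-weighted averaging of client model parameters (FedAvg),
        the FL backbone on which image harmonization would be layered."""
        total = float(sum(client_sizes))
        avg_state = copy.deepcopy(client_models[0].state_dict())
        for key in avg_state:
            avg_state[key] = sum(
                m.state_dict()[key] * (n / total)
                for m, n in zip(client_models, client_sizes)
            )
        return avg_state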

Markerless Lung Tumor Localization From Intraoperative Stereo Color Fluoroscopic Images for Radiotherapy

  • Yan, Yongxuan
  • Fujii, Fumitake
  • Shiinoki, Takehiro
  • Liu, Shengping
IEEE Access 2024 Journal Article, cited 0 times
Website
Accurately determining tumor regions from stereo fluoroscopic images during radiotherapy is a challenging task. As a result, high-density fiducial markers are implanted around tumors in clinical practice as internal surrogates of the tumor, which carries associated surgical risks. This study was conducted to achieve lung tumor localization without the use of fiducial markers. We propose training a cascade U-net system to perform color-to-grayscale conversion, enhancement, bone suppression, and tumor detection to determine the precise tumor region. We generated digitally reconstructed radiographs (DRRs) and tumor labels from 4D planning CT images as training data. An improved maximum-projection algorithm and a novel color-to-gray conversion algorithm were proposed to improve the quality of the generated training data, and training the bone suppression model on bone-enhanced and bone-suppressed DRRs yields better bone suppression performance. The mean peak signal-to-noise ratios in the test sets of the trained translation and bone suppression models are 39.284 ± 0.034 dB and 37.713 ± 0.724 dB, respectively. The results indicate that our proposed markerless tumor localization method is applicable in seven out of ten cases; in applicable cases, the centroid position error of the tumor detection model is less than 1.13 mm; and the calculated tumor center motion trajectories using the proposed network coincide closely with the motion trajectories of implanted fiducial markers in over 60% of captured groups, providing a promising direction for markerless tumor localization and tracking methods.
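
The peak signal-to-noise ratios quoted above follow the standard definition; a minimal reference implementation (an illustrative sketch, not the authors' code):

    import numpy as np

    def psnr(reference, test, data_range=1.0):
        """Peak signal-to-noise ratio in dB for images scaled to `data_range`."""
        mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
        return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)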

Accelerating Brain DTI and GYN MRI Studies Using Neural Network

  • Yan, Yuhao
Medical Physics 2021 Thesis, cited 0 times
Website
There always exists a demand to accelerate the time-consuming MRI acquisition process. Many methods have been proposed to achieve this goal, including deep learning, which appears to be a robust tool compared with conventional methods. While much work has been done to evaluate the performance of neural networks on standard anatomical MR images, little attention has been paid to accelerating other, less conventional MR image acquisitions. This work aims to evaluate the feasibility of neural networks for accelerating brain DTI and gynecological brachytherapy MRI. Three neural networks, U-net, Cascade-net and PD-net, were evaluated. Brain DTI data were acquired from the public database RIDER NEURO MRI, while cervix gynecological MRI data were acquired from Duke University Hospital clinical data. A 25% Cartesian undersampling strategy was applied to all training and test data. Diffusion-weighted images and quantitative functional maps in brain DTI, and T1-spgr and T2 images in the GYN studies, were reconstructed. The performance of the neural networks was evaluated by quantitatively calculating the similarity between the reconstructed images and the reference images, using the metric Total Relative Error (TRE). Results showed that, with the architectures and parameters set in this work, all three neural networks could accelerate brain DTI and GYN T2 MR imaging. Generally, PD-net slightly outperformed Cascade-net, and both outperformed U-net with respect to image reconstruction performance. While this was also true for the reconstruction of quantitative functional diffusion-weighted maps and GYN T1-spgr images, the overall performance of the three neural networks on these two tasks needs further improvement. In conclusion, PD-net is very promising for accelerating T2-weighted-based MR imaging. Future work can focus on adjusting the parameters and architectures of the neural networks to improve performance on GYN T1-spgr MR imaging and on adopting more robust undersampling strategies, such as radial undersampling, to further improve overall acceleration performance.
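
A minimal sketch of retrospective Cartesian undersampling of the kind applied above (the thesis's exact 25% mask design is an assumption here):

    import numpy as np

    def cartesian_undersample(image, keep_fraction=0.25, center_fraction=0.08, seed=0):
        """Undersample k-space along the phase-encode axis, keeping a fully
        sampled low-frequency center band (a common Cartesian scheme)."""
        rng = np.random.default_rng(seed)
        kspace = np.fft.fftshift(np.fft.fft2(image))
        h = image.shape[0]
        mask = rng.random(h) < keep_fraction
        c, half = h // 2, int(center_fraction * h / 2)
        mask[c - half:c + half] = True           # always keep low frequencies
        kspace *= mask[:, None]
        return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))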

3D Deep Residual Encoder-Decoder CNNs with Squeeze-and-Excitation for Brain Tumor Segmentation

  • Yan, Kai
  • Sun, Qiuchang
  • Li, Ling
  • Li, Zhicheng
2020 Book Section, cited 0 times
Segmenting brain tumors from multimodal MR scans is thought to be highly beneficial for brain abnormality diagnosis, prognosis monitoring, and treatment evaluation. Due to their highly heterogeneous appearance and shape, segmentation of brain tumors in multimodal MRI scans is a challenging task in medical image analysis. In recent years, many segmentation algorithms based on neural network architectures have been proposed to address this task. Building on previous state-of-the-art algorithms, we explored multimodal brain tumor segmentation in 2D, 2.5D and 3D space and experimented extensively with attention blocks to improve the segmentation results. In this paper, we describe a 3D deep residual encoder-decoder CNN with a Squeeze-and-Excitation block for brain tumor segmentation. To learn more effective image features, we utilize an attention module after each Res-block to weight each channel, emphasizing useful features while suppressing invalid ones. To deal with class imbalance, we formulate a weighted Dice loss function. We find that a 3D segmentation network with attention blocks, which can enhance context features, significantly improves performance. In addition, the results of data preprocessing have a great impact on segmentation performance. Our method obtained Dice scores of 0.70, 0.85 and 0.80 for segmenting enhancing tumor, whole tumor and tumor core, respectively, on the testing data set.
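
As an illustration of the channel-attention idea named above, here is a generic 3D Squeeze-and-Excitation block (a sketch of the standard SE module, not the authors' exact implementation):

    import torch
    import torch.nn as nn

    class SEBlock3D(nn.Module):
        """Squeeze-and-Excitation for 3D feature maps: global-pool each
        channel, then rescale channels by a learned gate."""
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x):                    # x: (B, C, D, H, W)
            b, c = x.shape[:2]
            squeeze = x.mean(dim=(2, 3, 4))      # (B, C) global average pool
            gate = self.fc(squeeze).view(b, c, 1, 1, 1)
            return x * gate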

Predicting 1p/19q co-deletion status from magnetic resonance imaging using deep learning in adult-type diffuse lower-grade gliomas: a discovery and validation study

  • Yan, J.
  • Zhang, S.
  • Sun, Q.
  • Wang, W.
  • Duan, W.
  • Wang, L.
  • Ding, T.
  • Pei, D.
  • Sun, C.
  • Wang, W.
  • Liu, Z.
  • Hong, X.
  • Wang, X.
  • Guo, Y.
  • Li, W.
  • Cheng, J.
  • Liu, X.
  • Li, Z. C.
  • Zhang, Z.
Lab Invest 2022 Journal Article, cited 0 times
Website
Determination of 1p/19q co-deletion status is important for the classification, prognostication, and personalized therapy of diffuse lower-grade gliomas (LGG). We developed and validated a deep learning imaging signature (DLIS) from preoperative magnetic resonance imaging (MRI) for predicting the 1p/19q status in patients with LGG. The DLIS was constructed on a training dataset (n = 330) and validated on both an internal validation dataset (n = 123) and a public TCIA dataset (n = 102). Receiver operating characteristic (ROC) analysis and precision-recall curves (PRC) were used to measure classification performance. The area under the ROC curve (AUC) of the DLIS was 0.999 for the training dataset, 0.986 for the validation dataset, and 0.983 for the testing dataset. The F1-score of the prediction model was 0.992 for the training dataset, 0.940 for the validation dataset, and 0.925 for the testing dataset. Our data suggest that the DLIS could be used to predict 1p/19q status from preoperative imaging in patients with LGG. Imaging-based deep learning has the potential to be a noninvasive tool predictive of molecular markers in adult diffuse gliomas.

Classification of LGG/GBM Brain Tumor in MRI Using Deep-Learning Schemes: A Study

  • Yamuna, S.
  • Vijayakumar, K.
  • Valli, R.
2023 Conference Paper, cited 0 times
Website
Brain abnormalities require immediate medical attention, including diagnosis and treatment. One of the most severe brain disorders is the brain tumor, and magnetic resonance imaging (MRI) is frequently used for clinical-level screening of these illnesses. In this work, a deep learning strategy is implemented to categorize brain MRI images into low-grade glioma (LGG) and glioblastoma multiforme (GBM). The steps in this scheme are as follows: (i) gathering data and converting 3D to 2D; (ii) mining deep features with a selected scheme; (iii) binary classification using SoftMax; and (iv) comparative analysis of selected deep learning techniques to determine the best model for further refinement. The LGG/GBM images were gathered from The Cancer Imaging Archive (TCIA) database. The results of this study demonstrate that max-pooling offers higher accuracy than average-pooling-based models; the performance of the created scheme is validated using both average- and max-pooling. Among the chosen models, VGG16 performs best on the LGG/GBM detection task.

Deep learning method for brain tumor identification with multimodal 3D-MRI

  • Yakaiah, Potharaju
  • Srikar, D.
  • Kaushik, G.
  • Geetha, Y.
2023 Conference Paper, cited 0 times
Website
Among primary brain tumors, gliomas are the most frequent of all types. Accurate and detailed delineation of tumor borders is important for detection, treatment planning, and the discovery of risk factors. This paper presents a brain tumor segmentation system using a deep learning approach. U-Net, a deep learning network, is trained to segment the brain tumors; essentially, our architecture is a nested, deeply supervised encoder-decoder network with skip connections. We use the BraTS dataset as the training data for our model. On the validation dataset, the model attained scores of 0.757, 0.17, and 0.89 for the tumor subregions.
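
Since U-Net is the backbone named above, the following is a minimal runnable sketch of its structural core, one encoder-decoder level with a skip connection (illustrative only; the paper's nested, deeply supervised variant is not reproduced):

    import torch
    import torch.nn as nn

    def conv_block(cin, cout):
        return nn.Sequential(
            nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
            nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        )

    class TinyUNet(nn.Module):
        """One encoder-decoder level with a skip connection: the structural
        core of U-Net, reduced to its smallest runnable form."""
        def __init__(self, in_ch=4, out_ch=4, width=32):   # 4 MRI modalities in, 4 labels out
            super().__init__()
            self.enc = conv_block(in_ch, width)
            self.down = nn.MaxPool2d(2)
            self.bottleneck = conv_block(width, width * 2)
            self.up = nn.ConvTranspose2d(width * 2, width, 2, stride=2)
            self.dec = conv_block(width * 2, width)
            self.head = nn.Conv2d(width, out_ch, 1)

        def forward(self, x):
            e = self.enc(x)
            b = self.bottleneck(self.down(e))
            d = self.dec(torch.cat([self.up(b), e], dim=1))   # skip connection
            return self.head(d)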

Automatic 3D Mesh-Based Centerline Extraction from a Tubular Geometry Form

  • Yahya-Zoubir, Bahia
  • Hamami, Latifa
  • Saadaoui, Llies
  • Ouared, Rafik
Information Technology And Control 2016 Journal Article, cited 0 times
Website

Morphological diagnosis of hematologic malignancy using feature fusion-based deep convolutional neural network

  • Yadav, D. P.
  • Kumar, D.
  • Jalal, A. S.
  • Kumar, A.
  • Singh, K. U.
  • Shah, M. A.
2023 Journal Article, cited 0 times
Website
Leukemia is a cancer of white blood cells characterized by immature lymphocytes. Many people die of blood cancer every year, so early detection of these blast cells is necessary. A novel deep convolutional neural network (CNN), 3SNet, with depth-wise convolution blocks to reduce computation cost, has been developed to aid the diagnosis of leukemia cells. The proposed method feeds three inputs to the deep CNN model: grayscale images and their corresponding histogram of oriented gradients (HOG) and local binary pattern (LBP) images. The HOG image captures local shape, and the LBP image describes the leukemia cell's texture pattern. The suggested model was trained and tested with images from the AML-Cytomorphology_LMU dataset. The mean average precision (MAP) was 84% for cell types with fewer than 100 images in the dataset and 93.83% for cell types with more than 100 images. In addition, the area under the ROC curve for these cells is more than 98%. This confirms that the proposed model could be an adjunct tool providing a second opinion to a doctor.
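
A sketch of how the two hand-crafted input channels could be computed with standard scikit-image calls (parameter values are illustrative; 3SNet itself is not reproduced):

    import numpy as np
    from skimage.feature import hog, local_binary_pattern

    def three_stream_inputs(gray):          # gray: 2D float image in [0, 1]
        """Return the grayscale, HOG, and LBP images used as the three inputs."""
        _, hog_image = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                           cells_per_block=(2, 2), visualize=True)
        lbp_image = local_binary_pattern(gray, P=8, R=1.0, method="uniform")
        return gray, hog_image, lbp_image / lbp_image.max()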

A Multi-path Decoder Network for Brain Tumor Segmentation

  • Xue, Yunzhe
  • Xie, Meiyan
  • Farhat, Fadi G.
  • Boukrina, Olga
  • Barrett, A. M.
  • Binder, Jeffrey R.
  • Roshan, Usman W.
  • Graves, William W.
2020 Book Section, cited 0 times
The identification of brain tumor type, shape, and size from MRI images plays an important role in glioma diagnosis and treatment. Manually identifying the tumor is time-consuming and prone to error, and while information from different image modalities may help in principle, using these modalities for manual tumor segmentation may be even more time-consuming. Convolutional U-Net architectures with encoders and decoders are the state of the art in automated image segmentation. Often only a single encoder and decoder is used, where different modalities and regions of the tumor share the same model parameters, which may lead to incorrect segmentations. We propose a convolutional U-Net that has separate, independent encoders for each image modality. The outputs from each encoder are concatenated and given to separate fusion and decoder blocks for each region of the tumor. The features from each decoder block are then calibrated in a final feature-fusion block, after which the model gives its final predictions. Our network is an end-to-end model that simplifies training and reproducibility. On the BraTS 2019 validation dataset our model achieves average Dice values of 0.75, 0.90, and 0.83 for the enhancing tumor, whole tumor, and tumor core subregions, respectively.

Deep hybrid neural-like P systems for multiorgan segmentation in head and neck CT/MR images

  • Xue, Jie
  • Wang, Yuan
  • Kong, Deting
  • Wu, Feiyang
  • Yin, Anjie
  • Qu, Jianhua
  • Liu, Xiyu
Expert Systems with Applications 2021 Journal Article, cited 0 times
Website
Automatic segmentation of organs-at-risk (OARs) of the head and neck, such as the brainstem, the left and right parotid glands, the mandible, the optic chiasm, and the left and right optic nerves, is crucial when formulating radiotherapy plans. However, difficulties arise from (1) the small sizes of these organs (especially the optic chiasm and optic nerves) and (2) the varying positions and phenotypes of the OARs. In this paper, we propose a novel, automatic multiorgan segmentation algorithm based on a new hybrid neural-like P system, to alleviate the above challenges. The new P system possesses the joint advantages of cell-like and neural-like P systems and includes new structures and rules, allowing it to solve more real-world problems in parallel. In the new P system, effective ensemble convolutional neural networks (CNNs) are implemented with different initializations simultaneously to perform pixel-wise segmentation of OARs, which obtains more effective features and leverages the strength of ensemble learning. Evaluations on three public datasets show the effectiveness and robustness of the proposed algorithm for accurate OAR segmentation in various image modalities.

Lung cancer diagnosis in CT images based on Alexnet optimized by modified Bowerbird optimization algorithm

  • Xu, Yeguo
  • Wang, Yuhang
  • Razmjooy, Navid
Biomedical Signal Processing and Control 2022 Journal Article, cited 0 times
Objective: Cancer is the uncontrolled growth of abnormal cells that do not function as normal cells. Lung cancer is the leading cause of cancer death in the world, so early detection of lung disease has a major impact on the likelihood of a definitive cure. Computed tomography (CT) has been identified as one of the best imaging techniques. Tools available for medical image processing include data collection in the form of images and algorithms for image analysis and system testing. Methods: This study proposes a new diagnosis system for lung cancer based on image processing and artificial intelligence applied to CT-scan images. After noise reduction based on Wiener filtering, AlexNet is utilized to classify healthy and cancerous cases. The system also uses an optimal subset of different features, including the Gabor wavelet transform, GLCM, and GLRM, to replace the network's feature-extraction part. The study further uses a new modified version of the Satin Bowerbird Optimization algorithm for optimal design of the AlexNet architecture and optimal feature selection. Results: Simulation results on the RIDER Lung CT collection database and comparisons with other state-of-the-art methods show that the proposed method provides a satisfying tool for lung cancer diagnosis. With 95.96% accuracy, the proposed method achieves the highest value among the compared methods, along with the highest harmonic mean (F1-score). In addition, the highest test recall (98.06%) indicates a higher rate of relevant instances retrieved from the images. Conclusion: The proposed method provides an efficient tool for diagnosis of lung cancer from CT images. Significance: As a new deep-learning-based methodology, the proposed method provides higher accuracy and addresses the difficult problem of optimal hyperparameter selection for deep learning techniques in this setting.
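
For illustration, GLCM texture descriptors of the kind such a CAD pipeline mixes with network features can be computed with scikit-image (the paper's exact feature set and parameters are not reproduced):

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(gray_uint8):
        """A few gray-level co-occurrence matrix (GLCM) texture descriptors,
        averaged over two angles at distance 1."""
        glcm = graycomatrix(gray_uint8, distances=[1],
                            angles=[0, np.pi / 2], levels=256,
                            symmetric=True, normed=True)
        return {prop: graycoprops(glcm, prop).mean()
                for prop in ("contrast", "homogeneity", "energy", "correlation")}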

Brain Tumor Segmentation Using Attention-Based Network in 3D MRI Images

  • Xu, Xiaowei
  • Zhao, Wangyuan
  • Zhao, Jun
2020 Book Section, cited 0 times
Gliomas are the most common primary brain malignancies. Identifying the sub-regions of gliomas before surgery is meaningful and may extend the survival of patients. However, due to the heterogeneous appearance and shape of gliomas, it is a challenge to accurately segment the enhancing tumor, the necrotic and non-enhancing tumor core, and the peritumoral edema. In this study, an attention-based network was used to segment the glioma sub-regions in multi-modality MRI scans. Attention U-Net was employed as the basic architecture of the proposed network; its attention gates help the network focus on task-relevant regions in the image. Besides the spatial-wise attention gates, the channel-wise attention gates proposed in SE-Net were also embedded into the segmentation network. This attention mechanism in the feature dimension prompts the network to focus on useful feature maps. Furthermore, in order to reduce false positives, a training strategy combined with a sampling strategy was proposed in our study. The segmentation performance of the proposed network was evaluated on the BraTS 2019 validation and testing datasets. On the validation dataset, the Dice similarity coefficients of enhancing tumor, tumor core and whole tumor were 0.759, 0.807 and 0.893, respectively; on the testing dataset, they were 0.794, 0.814 and 0.866, respectively.
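
A simplified sketch of the spatial attention-gate idea used in Attention U-Net (assuming the gating signal and skip features already share spatial size; the original design uses strided convolutions and resampling):

    import torch
    import torch.nn as nn

    class AttentionGate(nn.Module):
        """The decoder's gating signal g reweights encoder skip features x."""
        def __init__(self, in_ch, gate_ch, inter_ch):
            super().__init__()
            self.theta_x = nn.Conv2d(in_ch, inter_ch, kernel_size=1)
            self.phi_g = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
            self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)

        def forward(self, x, g):             # x, g: same spatial size here
            attn = torch.sigmoid(self.psi(torch.relu(self.theta_x(x) + self.phi_g(g))))
            return x * attn                  # (B, 1, H, W) map broadcast over channels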

CARes‐UNet: Content‐Aware residual UNet for lesion segmentation of COVID‐19 from chest CT images

  • Xu, Xinhua
  • Wen, Yuhang
  • Zhao, Lu
  • Zhang, Yi
  • Zhao, Youjun
  • Tang, Zixuan
  • Yang, Ziduo
  • Chen, Calvin Yu‐Chian
Medical Physics 2021 Journal Article, cited 0 times
Website

Computational Models for Automated Histopathological Assessment of Colorectal Liver Metastasis Progression

  • Xu, Xiaoyang
2019 Thesis, cited 0 times
Website
Histopathology imaging is a type of microscopy imaging commonly used for the micro-level clinical examination of a patient's pathology. Due to the extremely large size of histopathology images, especially whole slide images (WSIs), it is difficult for pathologists to make a quantitative assessment by inspecting the details of a WSI. Hence, a computer-aided system is necessary to provide an objective and consistent assessment of the WSI for personalised treatment decisions. In this thesis, a deep learning framework for the automatic analysis of whole slide histopathology images is presented for the first time, which aims to address the challenging task of assessing and grading colorectal liver metastasis (CRLM). Quantitative evaluations of a patient's condition with CRLM are conducted by quantifying different tissue components in resected tumorous specimens. This study mimics the visual examination process of human experts by focusing on three levels of information, the tissue level, cell level and pixel level, to achieve step-by-step segmentation of histopathology images. At the tissue level, patches with category information are utilised to analyse the WSIs. Both classification-based and segmentation-based approaches are investigated to locate the metastasis region and quantify different components of the WSI. For the classification-based method, different factors that might affect the classification accuracy are explored using state-of-the-art deep convolutional neural networks (DCNNs). Furthermore, a novel network is proposed to merge the information from different magnification levels to include contextual information to support the final decision. For the segmentation-based method, edge information from the image is integrated with the proposed fully convolutional neural network to further enhance the segmentation results. At the cell level, nuclei-related information is examined to tackle the challenge of inadequate annotations. The problem is approached from two aspects: a weakly supervised nuclei detection and classification method is presented to model the nuclei in the CRLM by integrating a traditional image processing method and a variational auto-encoder (VAE), and a novel nuclei instance segmentation framework is proposed to boost the accuracy of nuclei detection and segmentation using the idea of transfer learning. Afterwards, a fusion framework is proposed to enhance the tissue-level segmentation results by leveraging the statistical and spatial properties of the cells. At the pixel level, the segmentation problem is tackled by introducing information from immunohistochemistry (IHC) stained images. Firstly, two data augmentation approaches, synthesis-based and transfer-based, are proposed to address the problem of insufficient pixel-level annotations. With paired images and masks obtained, an end-to-end model is trained to achieve pixel-level segmentation. Secondly, another novel weakly supervised approach based on the generative adversarial network (GAN) is proposed to explore the feasibility of transforming unpaired haematoxylin and eosin (HE) images into IHC-stained images. Extensive experiments reveal that the virtually stained images can also be used for pixel-level segmentation.

Dual-stream EfficientNet with adversarial sample augmentation for COVID-19 computer aided diagnosis

  • Xu, Weijie
  • Nie, Lina
  • Chen, Beijing
  • Ding, Weiping
2023 Journal Article, cited 0 times
Though a series of computer-aided measures have been taken for the rapid and definite diagnosis of 2019 coronavirus disease (COVID-19), they generally fail to achieve high enough accuracy, including the recently popular deep learning-based methods. The main reasons are that (a) they generally focus on improving the model structures while ignoring important information contained in the medical image itself, and (b) the existing small-scale datasets have difficulty meeting the training requirements of deep learning. In this paper, a dual-stream network based on EfficientNet is proposed for COVID-19 diagnosis from CT scans. The dual-stream network takes into account the important information in both the spatial and frequency domains of CT scans. Besides, Adversarial Propagation (AdvProp) technology is used to address the insufficient training data usually faced by deep learning-based computer-aided diagnosis, as well as the overfitting issue. A Feature Pyramid Network (FPN) is utilized to fuse the dual-stream features. Experimental results on the public dataset COVIDx CT-2A demonstrate that the proposed method outperforms the existing 12 deep learning-based methods for COVID-19 diagnosis, achieving an accuracy of 0.9870 for multi-class classification, and 0.9958 for binary classification. The source code is available at https://github.com/imagecbj/covid-efficientnet.

Radiomics-based survival risk stratification of glioblastoma is associated with different genome alteration

  • Xu, P. F.
  • Li, C.
  • Chen, Y. S.
  • Li, D. P.
  • Xi, S. Y.
  • Chen, F. R.
  • Li, X.
  • Chen, Z. P.
Comput Biol Med 2023 Journal Article, cited 0 times
Website
BACKGROUND: Glioblastoma (GBM) is a remarkable heterogeneous tumor with few non-invasive, repeatable, and cost-effective prognostic biomarkers reported. In this study, we aim to explore the association between radiomic features and prognosis and genomic alterations in GBM. METHODS: A total of 180 GBM patients (training cohort: n = 119; validation cohort 1: n = 37; validation cohort 2: n = 24) were enrolled and underwent preoperative MRI scans. From the multiparametric (T1, T1-Gd, T2, and T2-FLAIR) MR images, the radscore was developed to predict overall survival (OS) in a multistep postprocessing workflow and validated in two external validation cohorts. The prognostic accuracy of the radscore was assessed with concordance index (C-index) and Brier scores. Furthermore, we used hierarchical clustering and enrichment analysis to explore the association between image features and genomic alterations. RESULTS: The MRI-based radscore was significantly correlated with OS in the training cohort (C-index: 0.70), validation cohort 1 (C-index: 0.66), and validation cohort 2 (C-index: 0.74). Multivariate analysis revealed that the radscore was an independent prognostic factor. Cluster analysis and enrichment analysis revealed that two distinct phenotypic clusters involved in distinct biological processes and pathways, including the VEGFA-VEGFR2 signaling pathway (q-value = 0.033), JAK-STAT signaling pathway (q-value = 0.049), and regulation of MAPK cascade (q-value = 0.0015/0.025). CONCLUSIONS: Radiomic features and radiomics-derived radscores provided important phenotypic and prognostic information with great potential for risk stratification in GBM.

Development and acceptability validation of a deep learning-based tool for whole-prostate segmentation on multiparametric MRI: a multicenter study

  • Xu, L.
  • Zhang, G.
  • Zhang, D.
  • Zhang, J.
  • Zhang, X.
  • Bai, X.
  • Chen, L.
  • Jin, R.
  • Mao, L.
  • Li, X.
  • Sun, H.
  • Jin, Z.
Quant Imaging Med Surg 2023 Journal Article, cited 0 times
Website
BACKGROUND: Accurate whole prostate segmentation on magnetic resonance imaging (MRI) is important in the management of prostatic diseases. In this multicenter study, we aimed to develop and evaluate a clinically applicable deep learning-based tool for automatic whole prostate segmentation on T2-weighted imaging (T2WI) and diffusion-weighted imaging (DWI). METHODS: In this retrospective study, 3-dimensional (3D) U-Net-based models in the segmentation tool were trained with 223 patients who underwent prostate MRI and subsequent biopsy from 1 hospital and validated in 1 internal testing cohort (n=95) and 3 external testing cohorts: PROSTATEx Challenge for T2WI and DWI (n=141), Tongji Hospital (n=30), and Beijing Hospital for T2WI (n=29). Patients from the latter 2 centers were diagnosed with advanced prostate cancer. The DWI model was further fine-tuned to compensate for the scanner variety in external testing. A quantitative evaluation, including Dice similarity coefficients (DSCs), 95% Hausdorff distance (95HD), and average boundary distance (ABD), and a qualitative analysis were used to evaluate the clinical usefulness. RESULTS: The segmentation tool showed good performance in the testing cohorts on T2WI (DSC: 0.922 for internal testing and 0.897-0.947 for external testing) and DWI (DSC: 0.914 for internal testing and 0.815 for external testing with fine-tuning). The fine-tuning process significantly improved the DWI model's performance in the external testing dataset (DSC: 0.275 vs. 0.815; P<0.01). Across all testing cohorts, the 95HD was <8 mm, and the ABD was <3 mm. The DSCs in the prostate midgland (T2WI: 0.949-0.976; DWI: 0.843-0.942) were significantly higher than those in the apex (T2WI: 0.833-0.926; DWI: 0.755-0.821) and base (T2WI: 0.851-0.922; DWI: 0.810-0.929) (all P values <0.01). The qualitative analysis showed that 98.6% of T2WI and 72.3% of DWI autosegmentation results in the external testing cohort were clinically acceptable. CONCLUSIONS: The 3D U-Net-based segmentation tool can automatically segment the prostate on T2WI with good and robust performance, especially in the prostate midgland. Segmentation on DWI was feasible, but fine-tuning might be needed for different scanners.

A Deep Supervised U-Attention Net for Pixel-Wise Brain Tumor Segmentation

  • Xu, Jia Hua
  • Teng, Wai Po Kevin
  • Wang, Xiong Jun
  • Nürnberger, Andreas
2021 Book Section, cited 0 times
Glioblastoma (GBM) is one of the leading causes of cancer death. Imaging diagnostics are critical for all phases of brain tumor treatment. However, manually checked output by a radiologist has several limitations, such as tedious annotation, time consumption and subjective bias, which influence the delineation of the tumor-affected region. Therefore, the development of an automatic segmentation framework has attracted much attention from both clinical and academic researchers. Recently, most state-of-the-art algorithms are derived from deep learning methodologies such as U-net and attention networks. In this paper, we propose a deeply supervised U-Attention Net framework for pixel-wise brain tumor segmentation, which combines the U-net, an attention network and a deeply supervised multistage layer. Consequently, we are able to obtain both low- and high-resolution feature representations, even for small tumor regions. Preliminary results of our method on training data show mean Dice coefficients of about 0.75, 0.88, and 0.80; validation data achieve mean Dice coefficients of 0.67, 0.86, and 0.70, for enhancing tumor (ET), whole tumor (WT), and tumor core (TC), respectively.
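
The Dice coefficients reported above derive from the standard overlap measure; a minimal soft-Dice loss sketch (an illustration, not the authors' code):

    import torch

    def soft_dice_loss(probs, target, weights=None, eps=1e-6):
        """Multi-class soft Dice loss; `probs` and `target` are one-hot
        (B, C, ...) tensors. Class weights let rare subregions count more."""
        dims = tuple(range(2, probs.ndim))
        intersect = (probs * target).sum(dims)
        denom = probs.sum(dims) + target.sum(dims)
        dice = (2 * intersect + eps) / (denom + eps)      # (B, C) per-class Dice
        if weights is not None:
            dice = dice * weights
        return 1.0 - dice.mean()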

Prostate cancer detection using residual networks

  • Xu, Helen
  • Baxter, John S H
  • Akin, Oguz
  • Cantor-Rivera, Diego
Int J Comput Assist Radiol Surg 2019 Journal Article, cited 0 times
Website
PURPOSE: To automatically identify regions where prostate cancer is suspected on multi-parametric magnetic resonance images (mp-MRI). METHODS: A residual network was implemented based on segmentations from an expert radiologist on T2-weighted, apparent diffusion coefficient map, and high-b-value diffusion-weighted images. Mp-MRIs from 346 patients were used in this study. RESULTS: The residual network achieved a hit-or-miss accuracy of 93% for lesion detection, with an average Jaccard score of 71% measuring the agreement between network and radiologist segmentations. CONCLUSION: This paper demonstrated the ability of residual networks to learn features for prostate lesion segmentation.

Deep Generative Adversarial Reinforcement Learning for Semi-Supervised Segmentation of Low-Contrast and Small Objects in Medical Images

  • Xu, C.
  • Zhang, T.
  • Zhang, D.
  • Zhang, D.
  • Han, J.
IEEE Trans Med Imaging 2024 Journal Article, cited 0 times
Website
Deep reinforcement learning (DRL) has demonstrated impressive performance in medical image segmentation, particularly for low-contrast and small medical objects. However, current DRL-based segmentation methods face limitations due to the optimization of error propagation in two separate stages and the need for a significant amount of labeled data. In this paper, we propose a novel deep generative adversarial reinforcement learning (DGARL) approach that, for the first time, enables end-to-end semi-supervised medical image segmentation in the DRL domain. DGARL ingeniously establishes a pipeline that integrates DRL and generative adversarial networks (GANs) to optimize both detection and segmentation tasks holistically while mutually enhancing each other. Specifically, DGARL introduces two innovative components to facilitate this integration in semi-supervised settings. First, a task-joint GAN with two discriminators links the detection results to the GAN's segmentation performance evaluation, allowing simultaneous joint evaluation and feedback. This ensures that DRL and GAN can be directly optimized based on each other's results. Second, a bidirectional exploration DRL integrates backward exploration and forward exploration to ensure the DRL agent explores the correct direction when forward exploration is disabled due to lack of explicit rewards. This mitigates the issue of unlabeled data being unable to provide rewards and rendering DRL unexplorable. Comprehensive experiments on three generalization datasets, comprising a total of 640 patients, demonstrate that our novel DGARL achieves 85.02% Dice and improves at least 1.91% for brain tumors, achieves 73.18% Dice and improves at least 4.28% for liver tumors, and achieves 70.85% Dice and improves at least 2.73% for pancreas, compared to the ten most recent advanced methods; these results attest to the superiority of DGARL. Code is available at GitHub.

Optical breast atlas as a testbed for image reconstruction in optical mammography

  • Xing, Y.
  • Duan, Y.
  • Indurkar, P. P.
  • Qiu, A.
  • Chen, N.
Sci Data 2021 Journal Article, cited 0 times
Website
We present two optical breast atlases for optical mammography, aiming to advance the image reconstruction research by providing a common platform to test advanced image reconstruction algorithms. Each atlas consists of five individual breast models. The first atlas provides breast vasculature surface models, which are derived from human breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data using image segmentation. A finite element-based method is used to deform the breast vasculature models from their natural shapes to generate the second atlas, compressed breast models. Breast compression is typically done in X-ray mammography but also necessary for some optical mammography systems. Technical validation is presented to demonstrate how the atlases can be used to study the image reconstruction algorithms. Optical measurements are generated numerically with compressed breast models and a predefined configuration of light sources and photodetectors. The simulated data is fed into three standard image reconstruction algorithms to reconstruct optical images of the vasculature, which can then be compared with the ground truth to evaluate their performance.

UniMiSS: Universal Medical Self-supervised Learning via Breaking Dimensionality Barrier

  • Xie, Yutong
  • Zhang, Jianpeng
  • Xia, Yong
  • Wu, Qi
2022 Conference Proceedings, cited 1 times
Website
Self-supervised learning (SSL) opens up huge opportunities for medical image analysis that is well known for its lack of annotations. However, aggregating massive (unlabeled) 3D medical images like computerized tomography (CT) remains challenging due to its high imaging cost and privacy restrictions. In this paper, we advocate bringing a wealth of 2D images like chest X-rays as compensation for the lack of 3D data, aiming to build a universal medical self-supervised representation learning framework, called UniMiSS. The key problem is how to break the dimensionality barrier, i.e., how to make it possible to perform SSL with both 2D and 3D images. To achieve this, we design a pyramid U-like medical Transformer (MiT). It is composed of the switchable patch embedding (SPE) module and Transformers. The SPE module adaptively switches to either 2D or 3D patch embedding, depending on the input dimension. The embedded patches are converted into a sequence regardless of their original dimensions. The Transformers model the long-term dependencies in a sequence-to-sequence manner, thus enabling UniMiSS to learn representations from both 2D and 3D images. With the MiT as the backbone, we perform the UniMiSS in a self-distillation manner. We conduct extensive experiments on six 3D/2D medical image analysis tasks, including segmentation and classification. The results show that the proposed UniMiSS achieves promising performance on various downstream tasks, outperforming the ImageNet pre-training and other advanced SSL counterparts substantially. Code is available at https://github.com/YtongXie/UniMiSS-code.
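
A sketch of the switchable-patch-embedding idea (an interpretation, not the released code): route 2D or 3D inputs through dimension-matched convolutions, then flatten both to one token sequence so a shared Transformer can consume either:

    import torch
    import torch.nn as nn

    class SwitchablePatchEmbed(nn.Module):
        """Embed 2D (B, C, H, W) or 3D (B, C, D, H, W) inputs into tokens."""
        def __init__(self, in_ch=1, dim=96, patch=4):
            super().__init__()
            self.embed2d = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
            self.embed3d = nn.Conv3d(in_ch, dim, kernel_size=patch, stride=patch)

        def forward(self, x):
            emb = self.embed3d(x) if x.ndim == 5 else self.embed2d(x)
            return emb.flatten(2).transpose(1, 2)   # (B, N_tokens, dim)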

Fusing texture, shape and deep model-learned information at decision level for automated classification of lung nodules on chest CT

  • Xie, Yutong
  • Zhang, Jianpeng
  • Xia, Yong
  • Fulham, Michael
  • Zhang, Yanning
Information Fusion 2018 Journal Article, cited 13 times
Website

Semi-supervised Adversarial Model for Benign-Malignant Lung Nodule Classification on Chest CT

  • Xie, Yutong
  • Zhang, Jianpeng
  • Xia, Yong
Medical Image Analysis 2019 Journal Article, cited 0 times
Classification of benign-malignant lung nodules on chest CT is the most critical step in the early detection of lung cancer and the prolongation of patient survival. Despite their success in image classification, deep convolutional neural networks (DCNNs) always require a large number of labeled training data, which are not available for most medical image analysis applications due to the work required in image acquisition and particularly image annotation. In this paper, we propose a semi-supervised adversarial classification (SSAC) model that can be trained using both labeled and unlabeled data for benign-malignant lung nodule classification. This model consists of an adversarial autoencoder-based unsupervised reconstruction network R, a supervised classification network C, and learnable transition layers that enable the adaptation of the image representation ability learned by R to C. The SSAC model has been extended to multi-view knowledge-based collaborative learning (MK-SSAC), which employs three SSACs to characterize each nodule's overall appearance and its heterogeneity in shape and texture, respectively, and performs such characterization on nine planar views. The MK-SSAC model has been evaluated on the benchmark LIDC-IDRI dataset and achieves an accuracy of 92.53% and an AUC of 95.81%, which are superior to the performance of other lung nodule classification and semi-supervised learning approaches.

Low-complexity atlas-based prostate segmentation by combining global, regional, and local metrics

  • Xie, Qiuliang
  • Ruan, Dan
Medical Physics 2014 Journal Article, cited 15 times
Website
PURPOSE: To improve the efficiency of atlas-based segmentation without compromising accuracy, and to demonstrate the validity of the proposed method on MRI-based prostate segmentation. METHODS: Accurate and efficient automatic structure segmentation is an important task in medical image processing. Atlas-based methods, as the state of the art, provide good segmentation at the cost of a large number of computationally intensive nonrigid registrations, for anatomical sites/structures that are subject to deformation. In this study, the authors propose to utilize a combination of global, regional, and local metrics to improve accuracy yet significantly reduce the number of required nonrigid registrations. The authors first perform an affine registration to minimize the global mean squared error (gMSE) to coarsely align each atlas image to the target. Subsequently, a target-specific regional MSE (rMSE), demonstrated to be a good surrogate for the Dice similarity coefficient (DSC), is used to select a relevant subset from the training atlas. Only within this subset are nonrigid registrations performed between the training images and the target image, to minimize a weighted combination of gMSE and rMSE. Finally, structure labels are propagated from the selected training samples to the target via the estimated deformation fields, and label fusion is performed based on a weighted combination of rMSE and local MSE (lMSE) discrepancy, with proper total-variation-based spatial regularization. RESULTS: The proposed method was applied to a public database of 30 prostate MR images with expert-segmented structures. The authors' method, utilizing only eight nonrigid registrations, achieved a median/mean DSC of over 0.87/0.86, outperforming the state-of-the-art full-fledged atlas-based segmentation approach, whose median/mean DSC was 0.84/0.82 when applied to the same data set. CONCLUSIONS: The proposed method requires a fixed number of nonrigid registrations, independent of atlas size, providing desirable scalability that is especially important for a large or growing atlas. When applied to prostate segmentation, the method achieved better performance than state-of-the-art atlas-based approaches, with a significant improvement in computational efficiency. The proposed rationale of jointly utilizing global, regional, and local metrics, based on the information characteristics and surrogate behavior of the registration and fusion subtasks, can be extended naturally to similarity metrics beyond MSE, such as correlation or mutual information.

An Automated Segmentation Method for Lung Parenchyma Image Sequences Based on Fractal Geometry and Convex Hull Algorithm

  • Xiao, Xiaojiao
  • Zhao, Juanjuan
  • Qiang, Yan
  • Wang, Hua
  • Xiao, Yingze
  • Zhang, Xiaolong
  • Zhang, Yudong
Applied Sciences 2018 Journal Article, cited 1 times
Website

CateNorm: Categorical Normalization for Robust Medical Image Segmentation

  • Xiao, Junfei
  • Yu, Lequan
  • Zhou, Zongwei
  • Bai, Yutong
  • Xing, Lei
  • Yuille, Alan
  • Zhou, Yuyin
2022 Conference Proceedings, cited 0 times
Website

Efficient copyright protection for three CT images based on quaternion polar harmonic Fourier moments

  • Xia, Zhiqiu
  • Wang, Xingyuan
  • Li, Xiaoxiao
  • Wang, Chunpeng
  • Unar, Salahuddin
  • Wang, Mingxu
  • Zhao, Tingting
Signal Processing 2019 Journal Article, cited 0 times

Volume fractions of DCE-MRI parameter as early predictor of histologic response in soft tissue sarcoma: A feasibility study

  • Xia, Wei
  • Yan, Zhuangzhi
  • Gao, Xin
European Journal of Radiology 2017 Journal Article, cited 2 times
Website

Predicting Microvascular Invasion in Hepatocellular Carcinoma Using CT-based Radiomics Model

  • Xia, T. Y.
  • Zhou, Z. H.
  • Meng, X. P.
  • Zha, J. H.
  • Yu, Q.
  • Wang, W. L.
  • Song, Y.
  • Wang, Y. C.
  • Tang, T. Y.
  • Xu, J.
  • Zhang, T.
  • Long, X. Y.
  • Liang, Y.
  • Xiao, W. B.
  • Ju, S. H.
Radiology 2023 Journal Article, cited 3 times
Website
Background Prediction of microvascular invasion (MVI) may help determine treatment strategies for hepatocellular carcinoma (HCC). Purpose To develop a radiomics approach for predicting MVI status based on preoperative multiphase CT images and to identify MVI-associated differentially expressed genes. Materials and Methods Patients with pathologically proven HCC from May 2012 to September 2020 were retrospectively included from four medical centers. Radiomics features were extracted from tumors and peritumor regions on preoperative registration or subtraction CT images. In the training set, these features were used to build five radiomics models via logistic regression after feature reduction. The models were tested using internal and external test sets against a pathologic reference standard to calculate area under the receiver operating characteristic curve (AUC). The optimal AUC radiomics model and clinical-radiologic characteristics were combined to build the hybrid model. The log-rank test was used in the outcome cohort (Kunming center) to analyze early recurrence-free survival and overall survival based on high versus low model-derived score. RNA sequencing data from The Cancer Imaging Archive were used for gene expression analysis. Results A total of 773 patients (median age, 59 years; IQR, 49-64 years; 633 men) were divided into the training set (n = 334), internal test set (n = 142), external test set (n = 141), outcome cohort (n = 121), and RNA sequencing analysis set (n = 35). The AUCs from the radiomics and hybrid models, respectively, were 0.76 and 0.86 for the internal test set and 0.72 and 0.84 for the external test set. Early recurrence-free survival (P < .01) and overall survival (P < .007) could be stratified using the hybrid model. Differentially expressed genes in patients with findings positive for MVI were involved in glucose metabolism. Conclusion The hybrid model showed the best performance in prediction of MVI.

Deep Domain Adaptation Learning Framework for Associating Image Features to Tumour Gene Profile

  • Xia, Tian
2018 Thesis, cited 0 times
Website

Research of Multimodal Medical Image Fusion Based on Parameter-Adaptive Pulse-Coupled Neural Network and Convolutional Sparse Representation

  • Xia, J.
  • Lu, Y.
  • Tan, L.
Comput Math Methods Med 2020 Journal Article, cited 0 times
Website
The visual quality of medical images has a great impact on clinical assistant diagnosis. At present, medical image fusion has become a powerful means of clinical application. Traditional medical image fusion methods suffer from poor fusion results due to the loss of detailed feature information during fusion. To deal with this, this paper proposes a new multimodal medical image fusion method based on the imaging characteristics of medical images. In the proposed method, non-subsampled shearlet transform (NSST) decomposition is first performed on the source images to obtain high-frequency and low-frequency coefficients. The high-frequency coefficients are fused by a parameter-adaptive pulse-coupled neural network (PAPCNN) model, which adaptively sets its parameters and optimizes the connection strength beta to improve performance. The low-frequency coefficients are merged by the convolutional sparse representation (CSR) model. The experimental results show that the proposed method solves the problems of difficult parameter setting and poor detail preservation of sparse representation during image fusion in traditional PCNN algorithms, and it has significant advantages in visual effect and objective indices compared with the existing mainstream fusion algorithms.

Automatic glioma segmentation based on adaptive superpixel

  • Wu, Yaping
  • Zhao, Zhe
  • Wu, Weiguo
  • Lin, Yusong
  • Wang, Meiyun
BMC Med Imaging 2019 Journal Article, cited 0 times
Website
BACKGROUND: Automatic glioma segmentation is of great significance for clinical practice. This study aims to propose an automatic method based on superpixels for glioma segmentation from T2-weighted magnetic resonance imaging. METHODS: The proposed method mainly includes three steps. First, we propose an adaptive superpixel generation algorithm based on simple linear iterative clustering version with 0 parameter (ASLIC0). This algorithm can acquire a superpixel image with fewer superpixels that better fits the boundary of the region of interest (ROI) by automatically selecting the optimal number of superpixels. Second, we compose a training set by calculating the statistical, texture, curvature and fractal features for each superpixel. Third, a Support Vector Machine (SVM) is used to train a classification model based on the features of the second step. RESULTS: The experimental results on the Multimodal Brain Tumor Image Segmentation Benchmark 2017 (BraTS2017) show that the proposed method has good segmentation performance. The average Dice, Hausdorff distance, sensitivity, and specificity for the segmented tumor against the ground truth are 0.8492, 3.4697 pixels, 81.47%, and 99.64%, respectively. The proposed method shows good stability on high- and low-grade glioma samples. Comparative experimental results show that the proposed method has superior performance. CONCLUSIONS: This provides a close match to expert delineation across all grades of glioma, leading to a fast and reproducible method of glioma segmentation.
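
The SLIC0 building block is available in scikit-image (>= 0.19); a minimal sketch (the paper's adaptive selection of the superpixel count is elided):

    import numpy as np
    from skimage.segmentation import slic

    def slic0_superpixels(t2_slice, n_segments=400):
        """Zero-parameter SLIC superpixels on a grayscale T2 slice;
        n_segments is illustrative, not the adaptively chosen value."""
        img = (t2_slice - t2_slice.min()) / (np.ptp(t2_slice) + 1e-8)
        return slic(img, n_segments=n_segments, slic_zero=True, channel_axis=None)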

Joint model- and immunohistochemistry-driven few-shot learning scheme for breast cancer segmentation on 4D DCE-MRI

  • Wu, Youqing
  • Wang, Yihang
  • Sun, Heng
  • Jiang, Chunjuan
  • Li, Bo
  • Li, Lihua
  • Pan, Xiang
Applied Intelligence 2022 Journal Article, cited 0 times
Website
Automatic segmentation of breast cancer on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), which reveals both temporal and spatial profiles of the foundational anatomy, plays a crucial role in the clinical diagnosis and treatment of breast cancer. Recently, deep learning has witnessed great advances in tumour segmentation tasks. However, most of those high-performing models require a large number of annotated gold-standard samples, which remains a challenge in the accurate segmentation of 4D DCE-MRI breast cancer with high heterogeneity. To address this problem, we propose a joint immunohistochemistry- (IHC) and model-driven few-shot learning scheme for 4D DCE-MRI breast cancer segmentation. Specifically, a unique bidirectional convolutional recurrent graph attention autoencoder (BiCRGADer) is developed to exploit the spatiotemporal pharmacokinetic characteristics contained in 4D DCE-MRI sequences. Moreover, the IHC-driven strategy that employs a few-shot learning scenario optimizes BiCRGADer by learning the features of MR imaging phenotypes of specific molecular subtypes during training. In particular, a parameter-free module (PFM) is designed to adaptively enrich query features with support features and masks. The combined model- and IHC-driven scheme boosts performance with only a small training sample size. We conduct methodological analyses and empirical evaluations on datasets from The Cancer Imaging Archive (TCIA) to justify the effectiveness and adaptability of our scheme. Extensive experiments show that the proposed scheme outperforms state-of-the-art segmentation models and provides a potential and powerful noninvasive approach for the artificial intelligence community dealing with oncological applications.

DeepMMSA: A Novel Multimodal Deep Learning Method for Non-small Cell Lung Cancer Survival Analysis

  • Wu, Yujiao
  • Ma, Jie
  • Huang, Xiaoshui
  • Ling, Sai Ho
  • Weidong Su, Steven
2021 Conference Paper, cited 18 times
Website
Lung cancer is the leading cause of cancer death worldwide. The critical reasons for these deaths are delayed diagnosis and poor prognosis. With their accelerated development, deep learning techniques have been applied successfully in many real-world applications, including health sectors such as medical image interpretation and disease diagnosis. By engaging more modalities in the processing of information, multimodal learning can extract better features and improve predictive ability. Conventional methods for lung cancer survival analysis normally utilize clinical data and only provide a statistical probability. To improve survival prediction accuracy and help prognostic decision-making in clinical practice for medical experts, we for the first time propose a multimodal deep learning framework for non-small cell lung cancer (NSCLC) survival analysis, named DeepMMSA. This framework leverages CT images in combination with clinical data, enabling the abundant information held within medical images to be associated with lung cancer survival information. We validate our model on the data of 422 NSCLC patients from The Cancer Imaging Archive (TCIA). Experimental results support our hypothesis that there is an underlying relationship between prognostic information and radiomic images. Moreover, quantitative results show that our method surpasses the state-of-the-art methods by 4% on concordance.
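
The concordance metric referenced above can be computed with lifelines; a toy sketch (values are illustrative):

    from lifelines.utils import concordance_index

    event_times = [5.0, 12.0, 30.0, 42.0]       # survival times in months (toy)
    predicted_risk = [0.9, 0.7, 0.3, 0.1]       # higher = shorter predicted survival
    event_observed = [1, 1, 0, 1]               # 0 = censored observation

    # concordance_index expects scores where higher means longer survival,
    # so the risk scores are negated.
    c_index = concordance_index(event_times, [-r for r in predicted_risk], event_observed)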

Mutual consistency learning for semi-supervised medical image segmentation

  • Wu, Yicheng
  • Ge, Zongyuan
  • Zhang, Donghao
  • Xu, Minfeng
  • Zhang, Lei
  • Xia, Yong
  • Cai, Jianfei
Medical Image Analysis 2022 Journal Article, cited 1 times
Website

Three-Plane–assembled Deep Learning Segmentation of Gliomas

  • Wu, Shaocheng
  • Li, Hongyang
  • Quang, Daniel
  • Guan, Yuanfang
Radiology: Artificial Intelligence 2020 Journal Article, cited 0 times
Website
An accurate and fast deep learning approach developed for automatic segmentation of brain glioma on multimodal MRI scans achieved Sørensen–Dice scores of 0.80, 0.83, and 0.91 for enhancing tumor, tumor core, and whole tumor, respectively. Purpose To design a computational method for automatic brain glioma segmentation of multimodal MRI scans with high efficiency and accuracy. Materials and Methods The 2018 Multimodal Brain Tumor Segmentation Challenge (BraTS) dataset was used in this study, consisting of routine clinically acquired preoperative multimodal MRI scans. Three subregions of glioma—the necrotic and nonenhancing tumor core, the peritumoral edema, and the contrast-enhancing tumor—were manually labeled by experienced radiologists. Two-dimensional U-Net models were built using a three-plane–assembled approach to segment three subregions individually (three-region model) or to segment only the whole tumor (WT) region (WT-only model). The term three-plane–assembled means that coronal and sagittal images were generated by reformatting the original axial images. The model performance for each case was evaluated in three classes: enhancing tumor (ET), tumor core (TC), and WT. Results On the internal unseen testing dataset split from the 2018 BraTS training dataset, the proposed models achieved mean Sørensen–Dice scores of 0.80, 0.84, and 0.91, respectively, for ET, TC, and WT. On the BraTS validation dataset, the proposed models achieved mean 95% Hausdorff distances of 3.1 mm, 7.0 mm, and 5.0 mm, respectively, for ET, TC, and WT and mean Sørensen–Dice scores of 0.80, 0.83, and 0.91, respectively, for ET, TC, and WT. On the BraTS testing dataset, the proposed models ranked fourth out of 61 teams. The source code is available at https://github.com/GuanLab/Brain_Glioma. Conclusion This deep learning method consistently segmented subregions of brain glioma with high accuracy, efficiency, reliability, and generalization ability on screening images from a large population, and it can be efficiently implemented in clinical practice to assist neuro-oncologists or radiologists. Supplemental material is available for this article.

Whole Mammography Diagnosis via Multi-instance Supervised Discriminative Localization and Classification

  • Wu, Qingxia
  • Tan, Hongna
  • Wu, Yaping
  • Dong, Pei
  • Che, Jifei
  • Li, Zheren
  • Lei, Chenjin
  • Shen, Dinggang
  • Xue, Zhong
  • Wang, Meiyun
2022 Conference Proceedings, cited 0 times
Precise mammography diagnosis plays a vital role in breast cancer management, especially in identifying malignancy with computer assistance. Due to the high resolution, large image size, and small lesion regions, it is challenging to localize lesions while classifying the whole mammogram, which also makes it difficult to annotate mammography datasets and to balance tumor and normal background regions during training. To fully use local lesion information and macroscopic malignancy information, we propose a two-step mammography classification method based on multi-instance learning. In step one, a multi-task encoder-decoder architecture (mt-ConvNext-Unet) is employed for instance-level lesion localization and lesion-type classification. To enhance feature extraction, we adopt ConvNext as the encoder and add a normalization layer and scSE attention blocks in the decoder to strengthen the localization of small lesions. A classification branch after the encoder jointly trains lesion classification and segmentation. The instance-based outputs are merged into image-level maps for both segmentation and classification (SegMap and ClsMap). In step two, a whole-mammography classification model is applied for breast-level cancer diagnosis by combining the results of the CC and MLO views with EfficientNet. Experimental results on the open dataset show that our method not only accurately classifies breast cancer on mammography but also highlights the suspicious regions.

Correlation coefficient based supervised locally linear embedding for pulmonary nodule recognition

  • Wu, Panpan
  • Xia, Kewen
  • Yu, Hengyong
Computer Methods and Programs in Biomedicine 2016 Journal Article, cited 5 times
Website
BACKGROUND AND OBJECTIVE: Dimensionality reduction techniques are developed to suppress the negative effects of the high-dimensional feature space of lung CT images on classification performance in computer aided detection (CAD) systems for pulmonary nodule detection. METHODS: An improved supervised locally linear embedding (SLLE) algorithm is proposed based on the concept of the correlation coefficient. The Spearman's rank correlation coefficient is introduced to adjust the distance metric in the SLLE algorithm to ensure that more suitable neighborhood points are identified, and thus to enhance the discriminating power of the embedded data. The proposed Spearman's rank correlation coefficient based SLLE (SC²SLLE) is implemented and validated in our pilot CAD system using a clinical dataset collected from the publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). In particular, a representative CAD system for solitary pulmonary nodule detection is designed and implemented. After a sequence of medical image processing steps, 64 nodules and 140 non-nodules are extracted, and 34 representative features are calculated. SC²SLLE, as well as the SLLE and LLE algorithms, is applied to reduce the dimensionality. Several quantitative measurements are also used to evaluate and compare the performances. RESULTS: Using a 5-fold cross-validation methodology, the proposed algorithm achieves 87.65% accuracy, 79.23% sensitivity, 91.43% specificity, and an 8.57% false positive rate, on average. Experimental results indicate that the proposed algorithm outperforms the original locally linear embedding and SLLE coupled with the support vector machine (SVM) classifier. CONCLUSIONS: Based on the preliminary results from a limited number of nodules in our dataset, this study demonstrates great potential to improve the performance of a CAD system for nodule detection using the proposed SC²SLLE.
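A loose sketch of the supervised, correlation-adjusted distance idea follows; the blending weight and the same-class shrinkage factor are our illustrative choices, not the paper's exact formulation.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

def supervised_distances(X, y, alpha=0.5):
    """Sketch: blend Euclidean distance with (1 - Spearman rho) so that
    rank-correlated samples count as close, then shrink same-class
    distances so neighbourhood selection favours them."""
    d = squareform(pdist(X))                   # Euclidean distances
    rho = spearmanr(X, axis=1).correlation     # sample-by-sample rank correlation
    d = alpha * d + (1 - alpha) * (1 - rho)
    same = y[:, None] == y[None, :]
    return np.where(same, 0.5 * d, d)          # pull same-class points closer
```

The adjusted matrix would then drive the neighbour search that precedes the locally linear embedding step.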

Classification of Lung Nodules Based on Deep Residual Networks and Migration Learning

  • Wu, Panpan
  • Sun, Xuanchao
  • Zhao, Ziping
  • Wang, Haishuai
  • Pan, Shirui
  • Schuller, Bjorn
Comput Intell Neurosci 2020 Journal Article, cited 0 times
Website
The classification process of lung nodule detection in a traditional computer-aided detection (CAD) system is complex, and the classification result depends heavily on the performance of each step in lung nodule detection, causing low classification accuracy and a high false positive rate. To alleviate these issues, a lung nodule classification method based on a deep residual network is proposed. Abandoning traditional image processing methods and taking the 50-layer ResNet network structure as the initial model, the deep residual network is constructed by combining residual learning and migration (transfer) learning. The proposed approach is verified by conducting experiments on lung computed tomography (CT) images from the publicly available LIDC-IDRI database. An average accuracy of 98.23% and a false positive rate of 1.65% are obtained based on ten-fold cross-validation. Compared with a conventional support vector machine (SVM)-based CAD system, accuracy improved by 9.96% and the false positive rate decreased by 6.95%; compared with the VGG19 and InceptionV3 convolutional neural networks, accuracy improved by 1.75% and 2.42% and the false positive rate decreased by 2.07% and 2.22%, respectively. The experimental results demonstrate the effectiveness of the proposed method in lung nodule classification for CT images.
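A hedged sketch of this kind of transfer-learning setup (an ImageNet-pretrained ResNet-50 with a replaced classification head); the freezing policy and two-class head are our assumptions, not the paper's exact configuration.

```python
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained 50-layer ResNet (torchvision >= 0.13 API)
# and replace the head for a binary nodule / non-nodule task.
model = models.resnet50(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)

# Optionally freeze early layers so only the last block and the head
# are fine-tuned on the CT patches (an assumption, not the paper's choice).
for name, param in model.named_parameters():
    if not name.startswith(("layer4", "fc")):
        param.requires_grad = False
```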

Development and Validation of Pre- and Post-Operative Models to Predict Recurrence After Resection of Solitary Hepatocellular Carcinoma: A Multi-Institutional Study

  • Wu, Ming-Yu
  • Qiao, Qian
  • Wang, Ke
  • Ji, Gu-Wei
  • Cai, Bing
  • Li, Xiang-Cheng
Cancer Manag Res 2020 Journal Article, cited 1 times
Website
Background: The ideal candidates for resection are patients with solitary hepatocellular carcinoma (HCC); however, the postoperative recurrence rate remains high. We aimed to establish prognostic models to predict HCC recurrence based on readily accessible clinical parameters and multi-institutional databases. Patients and Methods: A total of 485 patients undergoing curative resection for solitary HCC were recruited from two independent institutions and the Cancer Imaging Archive database. We randomly divided the patients into training (n=323) and validation (n=162) cohorts. Two models were developed: one using pre-operative and one using pre- and post-operative parameters. Performance of the models was compared with staging systems. Results: On multivariable analysis, albumin-bilirubin grade, serum alpha-fetoprotein, and tumor size were selected into the pre-operative model; albumin-bilirubin grade, serum alpha-fetoprotein, tumor size, microvascular invasion, and cirrhosis were selected into the post-operative model. The two models exhibited better discriminative ability (concordance index: 0.673-0.728) and lower prediction error (integrated Brier score: 0.169-0.188) than currently used staging systems for predicting recurrence in both cohorts. Both models stratified patients into low- and high-risk subgroups of recurrence with distinct recurrence patterns. Conclusion: The two models, with corresponding user-friendly calculators, are useful tools to predict recurrence before and after resection that may facilitate individualized management of solitary HCC.
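The concordance index used to compare such models comes directly out of a standard Cox fit. A minimal sketch with the lifelines library; the file name and column names are hypothetical stand-ins for the post-operative predictors named above.

```python
import pandas as pd
from lifelines import CoxPHFitter

# One row per patient: recurrence-free time, event indicator, predictors.
df = pd.read_csv("solitary_hcc_cohort.csv")  # hypothetical path

cph = CoxPHFitter()
cph.fit(df[["time", "event", "albi_grade", "afp", "tumor_size",
            "microvascular_invasion", "cirrhosis"]],
        duration_col="time", event_col="event")

print(cph.concordance_index_)  # discriminative ability (c-index)
```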

A comprehensive texture feature analysis framework of renal cell carcinoma: pathological, prognostic, and genomic evaluation based on CT images

  • Wu, K.
  • Wu, P.
  • Yang, K.
  • Li, Z.
  • Kong, S.
  • Yu, L.
  • Zhang, E.
  • Liu, H.
  • Guo, Q.
  • Wu, S.
Eur Radiol 2022 Journal Article, cited 14 times
Website
OBJECTIVES: To determine whether CT texture features can enable accurate pathological classification, assessment of prognosis, and genomic molecular typing of renal cell carcinoma. METHODS: Patients with renal cell carcinoma from five open-source cohorts were analyzed retrospectively in this study. These data were randomly split to train and test machine learning algorithms to segment the lesion and predict the histological subtype, tumor stage, and pathological grade. Dice coefficient and performance metrics such as accuracy and AUC were calculated to evaluate the segmentation and classification models. Quantitative decomposition of the predictive model was conducted to explore the contribution of each feature. Besides, survival analysis and the statistical correlation between CT texture features, pathological, and genomic signatures were investigated. RESULTS: A total of 569 enhanced CT images of 443 patients (mean age 59.4 years, 278 males) were included in the analysis. In the segmentation task, the mean Dice coefficient was 0.96 for the kidney and 0.88 for the cancer region. For classification of histologic subtype, tumor stage, and pathological grade, the model was on a par with radiologists, with AUCs of 0.83 +/- 0.1, 0.80 +/- 0.1, and 0.77 +/- 0.1 (95% confidence intervals), respectively. Moreover, specific quantitative CT features related to clinical prognosis were identified. A strong statistical correlation (R² = 0.83) between the feature crosses and genomic characteristics was shown. Structural equation modeling confirmed significant associations between CT features, pathological (beta = -0.75), and molecular subtype (beta = -0.30). CONCLUSIONS: The framework illustrates high performance in the pathological classification of renal cell carcinoma. Prognosis and genomic characteristics can be inferred by quantitative image analysis. KEY POINTS: * The analytical framework exhibits high-performance pathological classification of renal cell carcinoma and is on a par with human radiologists. * Quantitative decomposition of the predictive model shows that specific texture features contribute to histologic subtype and tumor stage classification. * Structural equation modeling shows the associations of genomic characteristics to CT texture features. Overall survival and molecular characteristics can be inferred by quantitative CT texture analysis in renal cell carcinoma.

Identifying relations between imaging phenotypes and molecular subtypes of breast cancer: Model discovery and external validation

  • Wu, Jia
  • Sun, Xiaoli
  • Wang, Jeff
  • Cui, Yi
  • Kato, Fumi
  • Shirato, Hiroki
  • Ikeda, Debra M
  • Li, Ruijiang
Journal of Magnetic Resonance Imaging 2017 Journal Article, cited 17 times
Website
Purpose: To determine whether dynamic contrast enhancement magnetic resonance imaging (DCE-MRI) characteristics of the breast tumor and background parenchyma can distinguish molecular subtypes (ie, luminal A/B or basal) of breast cancer. Materials and methods: In all, 84 patients from one institution and 126 patients from The Cancer Genome Atlas (TCGA) were used for discovery and external validation, respectively. Thirty-five quantitative image features were extracted from DCE-MRI (1.5 or 3T) including morphology, texture, and volumetric features, which capture both tumor and background parenchymal enhancement (BPE) characteristics. Multiple testing was corrected using the Benjamini-Hochberg method to control the false-discovery rate (FDR). Sparse logistic regression models were built using the discovery cohort to distinguish each of the three studied molecular subtypes versus the rest, and the models were evaluated in the validation cohort. Results: On univariate analysis in discovery and validation cohorts, two features characterizing tumor and two characterizing BPE were statistically significant in separating luminal A versus nonluminal A cancers; two features characterizing tumor were statistically significant for separating luminal B; one feature characterizing tumor and one characterizing BPE reached statistical significance for distinguishing basal (Wilcoxon P < 0.05, FDR < 0.25). In discovery and validation cohorts, multivariate logistic regression models achieved an area under the receiver operator characteristic curve (AUC) of 0.71 and 0.73 for luminal A cancer, 0.67 and 0.69 for luminal B cancer, and 0.66 and 0.79 for basal cancer, respectively. Conclusion: DCE-MRI characteristics of breast cancer and BPE may potentially be used to distinguish among molecular subtypes of breast cancer. Level of evidence: 3 Technical Efficacy: Stage 3 J. Magn. Reson. Imaging 2017;46:1017-1027. Keywords: breast cancer; classification; dynamic contrast enhanced MRI; imaging genomics; molecular subtype.
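The screening-then-modeling pipeline described here (per-feature Wilcoxon tests with Benjamini-Hochberg correction, followed by sparse logistic regression) can be sketched as below; the FDR threshold and solver settings are assumptions for illustration.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests
from sklearn.linear_model import LogisticRegression

def screen_and_fit(X, y, fdr=0.25):
    """X: (n_patients, n_features) image features; y: 1 = subtype of interest.
    Wilcoxon rank-sum per feature, BH correction, then L1-penalised logistic."""
    pvals = np.array([
        mannwhitneyu(X[y == 1, j], X[y == 0, j], alternative="two-sided").pvalue
        for j in range(X.shape[1])
    ])
    keep = multipletests(pvals, alpha=fdr, method="fdr_bh")[0]
    model = LogisticRegression(penalty="l1", solver="liblinear")
    model.fit(X[:, keep], y)
    return model, keep
```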

Magnetic resonance imaging and molecular features associated with tumor-infiltrating lymphocytes in breast cancer

  • Wu, Jia
  • Li, Xuejie
  • Teng, Xiaodong
  • Rubin, Daniel L
  • Napel, Sandy
  • Daniel, Bruce L
  • Li, Ruijiang
Breast Cancer Research 2018 Journal Article, cited 0 times
Website

Heterogeneous Enhancement Patterns of Tumor-adjacent Parenchyma at MR Imaging Are Associated with Dysregulated Signaling Pathways and Poor Survival in Breast Cancer

  • Wu, Jia
  • Li, Bailiang
  • Sun, Xiaoli
  • Cao, Guohong
  • Rubin, Daniel L
  • Napel, Sandy
  • Ikeda, Debra M
  • Kurian, Allison W
  • Li, Ruijiang
Radiology 2017 Journal Article, cited 9 times
Website

Intratumor partitioning and texture analysis of dynamic contrast‐enhanced (DCE)‐MRI identifies relevant tumor subregions to predict pathological response of breast cancer to neoadjuvant chemotherapy

  • Wu, Jia
  • Gong, Guanghua
  • Cui, Yi
  • Li, Ruijiang
Journal of Magnetic Resonance Imaging 2016 Journal Article, cited 43 times
Website
PURPOSE: To predict pathological response of breast cancer to neoadjuvant chemotherapy (NAC) based on quantitative, multiregion analysis of dynamic contrast enhancement magnetic resonance imaging (DCE-MRI). MATERIALS AND METHODS: In this Institutional Review Board-approved study, 35 patients diagnosed with stage II/III breast cancer were retrospectively investigated using 3T DCE-MR images acquired before and after the first cycle of NAC. First, principal component analysis (PCA) was used to reduce the dimensionality of the DCE-MRI data with high temporal resolution. We then partitioned the whole tumor into multiple subregions using k-means clustering based on the PCA-defined eigenmaps. Within each tumor subregion, we extracted four quantitative Haralick texture features based on the gray-level co-occurrence matrix (GLCM). The change in texture features in each tumor subregion between pre- and during-NAC was used to predict pathological complete response after NAC. RESULTS: Three tumor subregions were identified through clustering, each with distinct enhancement characteristics. In univariate analysis, all imaging predictors except one extracted from the tumor subregion associated with fast washout were statistically significant (P < 0.05) after correcting for multiple testing, with areas under the receiver operating characteristic (ROC) curve (AUCs) between 0.75 and 0.80. In multivariate analysis, the proposed imaging predictors achieved an AUC of 0.79 (P = 0.002) in leave-one-out cross-validation. This improved upon conventional imaging predictors such as tumor volume (AUC = 0.53) and texture features based on whole-tumor analysis (AUC = 0.65). CONCLUSION: The heterogeneity of the tumor subregion associated with fast washout on DCE-MRI predicted pathological response to NAC in breast cancer. J. Magn. Reson. Imaging 2016;44:1107-1115.
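A rough sketch of the intratumor partitioning and texture step (PCA on voxel enhancement curves, k-means subregions, then GLCM features); component counts, cluster counts, and grey-level quantisation are assumed parameter choices.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from skimage.feature import graycomatrix, graycoprops

# dce: (n_voxels, n_timepoints) enhancement curves inside the tumor mask.
eigenmaps = PCA(n_components=3).fit_transform(dce)          # hypothetical input
labels = KMeans(n_clusters=3, n_init=10).fit_predict(eigenmaps)

# Haralick texture of one subregion on a quantised 2D slice
# (integer values strictly below `levels`).
glcm = graycomatrix(subregion_slice, distances=[1],
                    angles=[0, np.pi / 2], levels=64,
                    symmetric=True, normed=True)
contrast = graycoprops(glcm, "contrast").mean()
```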

Unsupervised clustering of quantitative image phenotypes reveals breast cancer subtypes with distinct prognoses and molecular pathways

  • Wu, Jia
  • Cui, Yi
  • Sun, Xiaoli
  • Cao, Guohong
  • Li, Bailiang
  • Ikeda, Debra M
  • Kurian, Allison W
  • Li, Ruijiang
Clinical Cancer Research 2017 Journal Article, cited 14 times
Website

HarDNet-BTS: A Harmonic Shortcut Network for Brain Tumor Segmentation

  • Wu, Hung-Yu
  • Lin, Youn-Long
2022 Book Section, cited 0 times
Tumor segmentation of brain MRI images is an important and challenging computer vision task. With well-curated multi-institutional multi-parametric MRI (mpMRI) data, the RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge 2021 is a great benchmarking venue for worldwide researchers to contribute to the advancement of the state of the art. HarDNet is a memory-efficient neural network backbone that has demonstrated excellent performance and efficiency in image classification, object detection, real-time semantic segmentation, and colonoscopy polyp segmentation. In this paper, we propose HarDNet-BTS, a U-Net-like encoder-decoder architecture with a HarDNet backbone, for brain tumor segmentation. We train it with the BraTS 2021 dataset using three training strategies and ensemble the resulting models to improve prediction quality. Assessment reports from the BraTS 2021 validation server show that HarDNet-BTS delivers state-of-the-art performance (Dice_ET = 0.8442, Dice_TC = 0.8793, Dice_WT = 0.9260, HD95_ET = 12.592, HD95_TC = 7.073, HD95_WT = 3.884). It was ranked 8th in the validation phase. Its performance on the final testing dataset is consistent with that of the validation phase (Dice_ET = 0.8727, Dice_TC = 0.8665, Dice_WT = 0.9286, HD95_ET = 8.496, HD95_TC = 18.606, HD95_WT = 4.059). Inference on an MRI case takes only 16 s of GPU time and 6 GB of GPU memory.

Optimal batch determination for improved harmonization and prognostication of multi-center PET/CT radiomics feature in head and neck cancer

  • Wu, Huiqin
  • Liu, Xiaohui
  • Peng, Lihong
  • Yang, Yuling
  • Zhou, Zidong
  • Du, Dongyang
  • Xu, Hui
  • Lv, Wenbing
  • Lu, Lijun
Phys Med Biol 2023 Journal Article, cited 0 times
Website
Objective. To determine the optimal approach for identifying and mitigating batch effects in PET/CT radiomics features, and to further improve the prognosis of patients with head and neck cancer (HNC), this study investigated the performance of three batch harmonization methods. Approach. Unsupervised harmonization identified the batch labels by K-means clustering. Supervised harmonization regarded the image acquisition factors (center, manufacturer, scanner, filter kernel) as known/given batch labels, and ComBat harmonization was then implemented separately and sequentially based on the batch labels, i.e. harmonizing features among batches determined by each factor individually or harmonizing features among batches determined by multiple factors successively. Extensive experiments were conducted to predict overall survival (OS) on public PET/CT datasets that contain 800 patients from 9 centers. Main results. In the external validation cohort, results show that compared to original models without harmonization, ComBat harmonization would be beneficial in OS prediction with C-index of 0.687-0.740 versus 0.684-0.767. Supervised harmonization slightly outperformed unsupervised harmonization in all models (C-index: 0.692-0.767 versus 0.684-0.750). Separate harmonization outperformed sequential harmonization in the CT_m+clinic and CT_cm+clinic models with C-index of 0.752 and 0.722, respectively, while sequential harmonization involving clinical features in the PET_rs+clinic model further improved the performance, achieving the highest C-index of 0.767. Significance. Optimal batch determination, especially sequential harmonization, for ComBat holds the potential to improve the prognostic power of radiomics models in multi-center HNC datasets with PET/CT imaging.
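ComBat has reference implementations elsewhere; as a hedged illustration of the batch logic only, the sketch below applies a simple per-batch location-scale adjustment, with K-means supplying unsupervised batch labels as in the unsupervised arm of the study.

```python
import numpy as np
from sklearn.cluster import KMeans

def harmonize(features, batch):
    """Per-batch location-scale adjustment (a simplified stand-in for ComBat)."""
    out = features.astype(float).copy()
    grand_mean = features.mean(axis=0)
    grand_std = features.std(axis=0)
    for b in np.unique(batch):
        idx = batch == b
        mu, sd = features[idx].mean(axis=0), features[idx].std(axis=0)
        out[idx] = (features[idx] - mu) / np.where(sd > 0, sd, 1) * grand_std + grand_mean
    return out

# Unsupervised variant: derive batch labels from the features themselves.
batch_labels = KMeans(n_clusters=4, n_init=10).fit_predict(features)
harmonized = harmonize(features, batch_labels)
```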

Evaluating long-term outcomes via computed tomography in lung cancer screening

  • Wu, D
  • Liu, R
  • Levitt, B
  • Riley, T
  • Baumgartner, KB
J Biom Biostat 2016 Journal Article, cited 0 times

Predicting Genotype and Survival in Glioma Using Standard Clinical MR Imaging Apparent Diffusion Coefficient Images: A Pilot Study from The Cancer Genome Atlas

  • Wu, C-C
  • Jain, R
  • Radmanesh, A
  • Poisson, LM
  • Guo, W-Y
  • Zagzag, D
  • Snuderl, M
  • Placantonakis, DG
  • Golfinos, J
  • Chi, AS
American Journal of Neuroradiology 2018 Journal Article, cited 1 times
Website

Dosiomics improves prediction of locoregional recurrence for intensity modulated radiotherapy treated head and neck cancer cases

  • Wu, A.
  • Li, Y.
  • Qi, M.
  • Lu, X.
  • Jia, Q.
  • Guo, F.
  • Dai, Z.
  • Liu, Y.
  • Chen, C.
  • Zhou, L.
  • Song, T.
Oral Oncol 2020 Journal Article, cited 0 times
Website
OBJECTIVES: To investigate whether dosiomics can improve locoregional recurrence (LR) prediction for IMRT-treated patients, through a comparative study of prediction performance between radiomics-only models and models integrating dosiomics in head and neck cancer cases. MATERIALS AND METHODS: A cohort of 237 patients with head and neck cancer from four different institutions was obtained from The Cancer Imaging Archive and utilized to train and validate the radiomics-only prognostic model and the dosiomics-integrated prognostic model. For radiomics, features were initially extracted from images, including CTs and PETs, selected on the basis of their concordance index (CI) values, and then condensed via principal component analysis. Lastly, multivariate Cox proportional hazards regression models were constructed with class-imbalance adjustment as the LR prediction models by inputting those condensed features. For the dosiomics-integrated model, the initial features were similar but additionally included the 3-dimensional dose distribution from the radiation treatment plans. The CI and Kaplan-Meier curves with log-rank analysis were used to assess and compare these models. RESULTS: On the independent validation dataset, the CI of the dosiomics-integrated model (0.66) was significantly different from that of the radiomics-only model (0.59) (Wilcoxon test, p = 5.9 × 10^-31). The integrated model successfully classified the patients into high- and low-risk groups (log-rank test, p = 2.5 × 10^-2), whereas the radiomics model was not able to provide such classification (log-rank test, p = 0.37). CONCLUSION: Dosiomics can benefit the prediction of LR in IMRT-treated patients and should not be neglected in related investigations.
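The risk-group separation reported by the log-rank test can be checked with lifelines; a minimal sketch with hypothetical arrays of survival times and event indicators for the two predicted risk groups.

```python
from lifelines.statistics import logrank_test

# times_*/events_* are hypothetical arrays, split by predicted risk group.
res = logrank_test(times_high, times_low,
                   event_observed_A=events_high,
                   event_observed_B=events_low)
print(res.p_value)  # small p-value indicates distinct recurrence curves
```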

Determining patient abdomen thickness from a single digital radiograph with a computational model: clinical results from a proof of concept study

  • Worrall, M.
  • Vinnicombe, S.
  • Sutton, D.
Br J Radiol 2020 Journal Article, cited 0 times
Website
OBJECTIVE: A computational model has been created to estimate the abdominal thickness of a patient following an X-ray examination; its intended application is assisting with patient dose audit of paediatric X-ray examinations. This work evaluates the accuracy of the computational model in a clinical setting for adult patients undergoing anteroposterior (AP) abdomen X-ray examinations. METHODS: The model estimates patient thickness using the radiographic image, the exposure factors with which the image was acquired, a priori knowledge of the characteristics of the X-ray unit and detector and the results of extensive Monte Carlo simulation of patient examinations. For 20 patients undergoing AP abdominal X-ray examinations, the model was used to estimate the patient thickness; these estimates were compared against a direct measurement made at the time of the examination. RESULTS: Estimates of patient thickness made using the model were on average within +/-5.8% of the measured thickness. CONCLUSION: The model can be used to accurately estimate the thickness of a patient undergoing an AP abdominal X-ray examination where the patient's size falls within the range of the size of patients used to create the computational model. ADVANCES IN KNOWLEDGE: This work demonstrates that it is possible to accurately estimate the AP abdominal thickness of an adult patient using the digital X-ray image and a computational model.

Development of a method for automating effective patient diameter estimation for digital radiography

  • Worrall, Mark
2019 Thesis, cited 0 times
Website
National patient dose audit of paediatric radiographic examinations is complicated by a lack of data containing a direct measurement of the patient diameter in the examination orientation or height and weight. This has meant that National Diagnostic Reference Levels (NDRLs) for paediatric radiographic examinations have not been updated in the UK since 2000, despite significant changes in imaging technology over that period. This work is the first step in the development of a computational model intended to automate an estimate of paediatric patient diameter. Whilst the application is intended for a paediatric population, its development within this thesis uses an adult cohort. The computational model uses the radiographic image, the examination exposure factors and a priori information relating to the x-ray system and the digital detector. The computational model uses the Beer-Lambert law. A hypothesis was developed that this would work for clinical exposures despite its single energy photon basis. Values of initial air kerma are estimated from the examination exposure factors and measurements made on the x-ray system. Values of kerma at the image receptor are estimated from a measurement of pixel value made at the centre of the radiograph and the measured calibration between pixel value and kerma for the image receptor. Values of effective linear attenuation coefficient are estimated from Monte Carlo simulations. Monte Carlo simulations were created for two x-ray systems. The simulations were optimised and thoroughly validated to ensure that any result obtained is accurate. The validation process compared simulation results with measurements made on the x-ray units themselves, producing values for effective linear attenuation coefficient that were demonstrated to be accurate. Estimates of attenuator thickness can be made using the estimated values for each variable. The computational model was demonstrated to accurately estimate the thickness of single composition attenuators across a range of thicknesses and exposure factors on three different x-ray systems. The computational model was used in a clinical validation study of 20 adult patients undergoing AP abdominal x-ray examinations. For 19 of these examinations, it estimated the true patient thickness to within ±9%. This work presents a feasible computational model that could be used to automate the estimation of paediatric patient thickness during radiographic examinations allowing for automation of paediatric radiographic dose audit.
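The core of the computational model is an inversion of the Beer-Lambert law; a minimal sketch of the thickness estimate (function and variable names are ours).

```python
import numpy as np

def estimate_thickness(k_in: float, k_out: float, mu_eff: float) -> float:
    """Attenuator thickness t from Beer-Lambert: K_out = K_in * exp(-mu_eff * t).

    k_in:   incident air kerma, estimated from the exposure factors
    k_out:  kerma at the receptor, from the calibrated central pixel value
    mu_eff: effective linear attenuation coefficient, from Monte Carlo simulation
    """
    return np.log(k_in / k_out) / mu_eff
```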

Quantifying the reproducibility of lung ventilation images between 4-Dimensional Cone Beam CT and 4-Dimensional CT

  • Woodruff, Henry C.
  • Shieh, Chun-Chien
  • Hegi-Johnson, Fiona
  • Keall, Paul J.
  • Kipritidis, John
Medical Physics 2017 Journal Article, cited 2 times
Website

Deep learning for semi-automated unidirectional measurement of lung tumor size in CT

  • Woo, M.
  • Devane, A. M.
  • Lowe, S. C.
  • Lowther, E. L.
  • Gimbel, R. W.
Cancer Imaging 2021 Journal Article, cited 0 times
Website
BACKGROUND: Performing Response Evaluation Criteria in Solid Tumors (RECIST) measurement is a non-trivial task requiring much expertise and time. A deep learning-based algorithm has the potential to assist with rapid and consistent lesion measurement. PURPOSE: The aim of this study is to develop and evaluate a deep learning (DL) algorithm for semi-automated unidirectional CT measurement of lung lesions. METHODS: This retrospective study included 1617 lung CT images from 8 publicly open datasets. A convolutional neural network was trained using 1373 training and validation images annotated by two radiologists. Performance of the DL algorithm was evaluated on 244 test images annotated by one radiologist. The DL algorithm's measurement consistency with the human radiologist was evaluated using the Intraclass Correlation Coefficient (ICC) and Bland-Altman plotting. Bonferroni's method was used to analyze differences in their diagnostic behavior attributable to tumor characteristics. Statistical significance was set at p < 0.05. RESULTS: The DL algorithm yielded an ICC score of 0.959 with the human radiologist. Bland-Altman plotting showed that 240 (98.4%) measurements fell within the upper and lower limits of agreement (LOA). Some measurements outside the LOA revealed differences in clinical reasoning between the DL algorithm and the human radiologist. Overall, the algorithm marginally overestimated the lesion size by 2.97% compared to the human radiologist. Further investigation indicated that tumor characteristics may be associated with the DL algorithm's tendency to over- or underestimate the lesion size compared to the human radiologist. CONCLUSIONS: The DL algorithm for unidirectional measurement of lung tumor size demonstrated excellent agreement with the human radiologist.
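The Bland-Altman limits of agreement used above reduce to a few lines; a sketch with hypothetical measurement arrays.

```python
import numpy as np

def bland_altman_limits(algo, human):
    """Bias and 95% limits of agreement between two sets of measurements."""
    diff = np.asarray(algo) - np.asarray(human)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Measurements outside (lower, upper) flag disagreements worth reviewing.
```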

Deep Learning Frameworks to Improve Inter-Observer Variability in CT Measurement of Solid Tumor

  • Woo, MinJae
2021 Thesis, cited 0 times
Website

Training and Validation of Deep Learning-Based Auto-Segmentation Models for Lung Stereotactic Ablative Radiotherapy Using Retrospective Radiotherapy Planning Contours

  • Wong, J.
  • Huang, V.
  • Giambattista, J. A.
  • Teke, T.
  • Kolbeck, C.
  • Giambattista, J.
  • Atrchian, S.
Front Oncol 2021 Journal Article, cited 0 times
Website
Purpose: Deep learning-based auto-segmented contour (DC) models require high quality data for their development, and previous studies have typically used prospectively produced contours, which can be resource intensive and time consuming to obtain. The aim of this study was to investigate the feasibility of using retrospective peer-reviewed radiotherapy planning contours in the training and evaluation of DC models for lung stereotactic ablative radiotherapy (SABR). Methods: Using commercial deep learning-based auto-segmentation software, DC models for lung SABR organs at risk (OAR) and gross tumor volume (GTV) were trained using a deep convolutional neural network and a median of 105 contours per structure model obtained from 160 publicly available CT scans and 50 peer-reviewed SABR planning 4D-CT scans from center A. DCs were generated for 50 additional planning CT scans from center A and 50 from center B, and compared with the clinical contours (CC) using the Dice Similarity Coefficient (DSC) and 95% Hausdorff distance (HD). Results: Comparing DCs to CCs, the mean DSC and 95% HD were 0.93 and 2.85mm for aorta, 0.81 and 3.32mm for esophagus, 0.95 and 5.09mm for heart, 0.98 and 2.99mm for bilateral lung, 0.52 and 7.08mm for bilateral brachial plexus, 0.82 and 4.23mm for proximal bronchial tree, 0.90 and 1.62mm for spinal cord, 0.91 and 2.27mm for trachea, and 0.71 and 5.23mm for GTV. DC to CC comparisons of center A and center B were similar for all OAR structures. Conclusions: The DCs developed with retrospective peer-reviewed treatment contours approximated CCs for the majority of OARs, including on an external dataset. DCs for structures with more variability tended to be less accurate and likely require using a larger number of training cases or novel training approaches to improve performance. Developing DC models from existing radiotherapy planning contours appears feasible and warrants further clinical workflow testing.
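The 95% Hausdorff distance used to compare DCs and CCs can be sketched as below; results are in voxel units (multiply by the image spacing for millimetres), and the brute-force pairwise distances suit only modestly sized surfaces.

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import cdist

def hd95(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """95th-percentile symmetric surface distance between two binary masks."""
    surface = lambda m: np.argwhere(m ^ binary_erosion(m))  # boundary voxels
    a, b = surface(mask_a.astype(bool)), surface(mask_b.astype(bool))
    d = cdist(a, b)
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))
```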

Comparing deep learning-based auto-segmentation of organs at risk and clinical target volumes to expert inter-observer variability in radiotherapy planning

  • Wong, Jordan
  • Fong, Allan
  • McVicar, Nevin
  • Smith, Sally
  • Giambattista, Joshua
  • Wells, Derek
  • Kolbeck, Carter
  • Giambattista, Jonathan
  • Gondara, Lovedeep
  • Alexander, Abraham
Radiother Oncol 2019 Journal Article, cited 0 times
Website
BACKGROUND: Deep learning-based auto-segmented contours (DC) aim to alleviate labour intensive contouring of organs at risk (OAR) and clinical target volumes (CTV). Most previous DC validation studies have a limited number of expert observers for comparison and/or use a validation dataset related to the training dataset. We determine if DC models are comparable to Radiation Oncologist (RO) inter-observer variability on an independent dataset. METHODS: Expert contours (EC) were created by multiple ROs for central nervous system (CNS), head and neck (H&N), and prostate radiotherapy (RT) OARs and CTVs. DCs were generated using deep learning-based auto-segmentation software trained by a single RO on publicly available data. Contours were compared using Dice Similarity Coefficient (DSC) and 95% Hausdorff distance (HD). RESULTS: Sixty planning CT scans had 2-4 ECs, for a total of 60 CNS, 53 H&N, and 50 prostate RT contour sets. The mean DC and EC contouring times were 0.4 vs 7.7 min for CNS, 0.6 vs 26.6 min for H&N, and 0.4 vs 21.3 min for prostate RT contours. There were minimal differences in DSC and 95% HD involving DCs for OAR comparisons, but more noticeable differences for CTV comparisons. CONCLUSIONS: The accuracy of DCs trained by a single RO is comparable to expert inter-observer variability for the RT planning contours in this study. Use of deep learning-based auto-segmentation in clinical practice will likely lead to significant benefits to RT planning workflow and resources.

Small Lesion Segmentation in Brain MRIs with Subpixel Embedding

  • Wong, Alex
  • Chen, Allison
  • Wu, Yangchao
  • Cicek, Safa
  • Tiard, Alexandre
  • Hong, Byung-Woo
  • Soatto, Stefano
2022 Book Section, cited 0 times
We present a method to segment MRI scans of the human brain into ischemic stroke lesion and normal tissues. We propose a neural network architecture in the form of a standard encoder-decoder where predictions are guided by a spatial expansion embedding network. Our embedding network learns features that can resolve detailed structures in the brain without the need for high-resolution training images, which are often unavailable and expensive to acquire. Alternatively, the encoder-decoder learns global structures by means of striding and max pooling. Our embedding network complements the encoder-decoder architecture by guiding the decoder with fine-grained details lost to spatial downsampling during the encoder stage. Unlike previous works, our decoder outputs at 2× the input resolution, where a single pixel in the input resolution is predicted by four neighboring subpixels in our output. To obtain the output at the original scale, we propose a learnable downsampler (as opposed to hand-crafted ones e.g. bilinear) that combines subpixel predictions. Our approach improves the baseline architecture by ≈ 11.7% and achieves the state of the art on the ATLAS public benchmark dataset with a smaller memory footprint and faster runtime than the best competing method. Our source code has been made available at: https://github.com/alexklwong/subpixel-embedding-segmentation.
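The 2x-resolution output and the learnable downsampler can be illustrated with standard PyTorch pieces; this is our sketch of the idea, not the authors' code (which is linked above).

```python
import torch
import torch.nn as nn

# Features with 4*C channels are rearranged so four neighbouring subpixels
# predict each input pixel, then a small learned convolution (rather than a
# hand-crafted bilinear kernel) maps the 2x prediction back to input scale.
to_subpixel = nn.PixelShuffle(upscale_factor=2)      # (B, 4C, H, W) -> (B, C, 2H, 2W)
learnable_downsampler = nn.Conv2d(1, 1, kernel_size=2, stride=2)

logits_2x = to_subpixel(torch.randn(1, 4, 64, 64))   # toy decoder output
logits_1x = learnable_downsampler(logits_2x)         # back to input resolution
```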

Improving breast cancer diagnostics with deep learning for MRI

  • Witowski, Jan
  • Heacock, Laura
  • Reig, Beatriu
  • Kang, Stella K
  • Lewin, Alana
  • Pysarenko, Kristine
  • Patel, Shalin
  • Samreen, Naziya
  • Rudnicki, Wojciech
  • Łuczyńska, Elżbieta
Science Translational Medicine 2022 Journal Article, cited 0 times
Website

Introducing the Medical Physics Dataset Article

  • Williamson, Jeffrey F
  • Das, Shiva K
  • Goodsitt, Mitchell S
  • Deasy, Joseph O
Medical Physics 2017 Journal Article, cited 7 times
Website

Effect of patient inhalation profile and airway structure on drug deposition in image-based models with particle-particle interactions

  • Williams, J.
  • Kolehmainen, J.
  • Cunningham, S.
  • Ozel, A.
  • Wolfram, U.
Int J Pharm 2022 Journal Article, cited 0 times
Website
For many of the one billion sufferers of respiratory diseases worldwide, managing their disease with inhalers improves their ability to breathe. Poor disease management and rising pollution can trigger exacerbations that require urgent relief. Higher drug deposition in the throat instead of the lungs limits the impact on patient symptoms. To optimise delivery to the lung, patient-specific computational studies of aerosol inhalation can be used. However, in many studies inhalation modelling does not represent situations when breathing is impaired, such as in recovery from an exacerbation, where the patient's inhalation is much faster and shorter. Here we compare differences in deposition of inhaler particles (10 and 4 μm) in the airways of three patients. We aimed to evaluate deposition differences between healthy and impaired breathing with image-based healthy and diseased patient models. We found that the ratio of drug in the lower to upper lobes was 35% larger with a healthy inhalation. For smaller particles the upper airway deposition was similar in all patients, but local deposition hotspots differed in size, location and intensity. Our results indicate that image-based airways must be used in respiratory modelling. Various inhalation profiles should be tested for optimal prediction of inhaler deposition.

Deep-Learning-based Segmentation of Organs-at-Risk in the Head for MR-assisted Radiation Therapy Planning

  • Wiesinger, Florian
  • Petit, Steven
  • Hideghéty, Katalin
  • Hernandez Tamames, Juan
  • McCallum, Hazel
  • Maxwell, Ross
  • Pearson, Rachel
  • Verduijn, Gerda
  • Darázs, Barbara
  • Kaushik, Sandeep
  • Cozzini, Cristina
  • Bobb, Chad
  • Fodor, Emese
  • Paczona, Viktor
  • Kószó, Renáta
  • Együd, Zsófia
  • Borzasi, Emőke
  • Végváry, Zoltán
  • Tan, Tao
  • Gyalai, Bence
  • Czabány, Renáta
  • Deák-Karancsi, Borbála
  • Kolozsvári, Bernadett
  • Czipczer, Vanda
  • Capala, Marta
  • Ruskó, László
2021 Journal Article, cited 0 times
Website
Segmentation of organs-at-risk (OAR) in MR images has several clinical applications; including radiation therapy (RT) planning. This paper presents a deep-learning-based method to segment 15 structures in the head region. The proposed method first applies 2D U-Net models to each of the three planes (axial, coronal, sagittal) to roughly segment the structure. Then, the results of the 2D models are combined into a fused prediction to localize the 3D bounding box of the structure. Finally, a 3D U-Net is applied to the volume of the bounding box to determine the precise contour of the structure. The model was trained on a public dataset and evaluated on both public and private datasets that contain T2-weighted MR scans of the head-and-neck region. For all cases the contour of each structure was defined by operators trained by expert clinical delineators. The evaluation demonstrated that various structures can be accurately and efficiently localized and segmented using the presented framework. The contours generated by the proposed method were also qualitatively evaluated. The majority (92%) of the segmented OARs was rated as clinically useful for radiation therapy.

Supervised Machine Learning Approach Utilizing Artificial Neural Networks for Automated Prostate Zone Segmentation in Abdominal MR images

  • Wieser, Hans-Peter
2013 Thesis, cited 0 times
Website

Proton radiotherapy spot order optimization to maximize the FLASH effect

  • Widenfalk, Oscar
2023 Thesis, cited 0 times
Website
Cancer is a group of deadly diseases, for which one treatment method is radiotherapy. Recent studies indicate advantages of delivering so-called FLASH treatments using ultra-high dose rates (> 40 Gy/s), with a normal-tissue-sparing FLASH effect. Delivering a high dose in a short time imposes requirements on both the treatment machine and the treatment plan. To see as much of the FLASH effect as possible, the delivery pattern should be optimized, which is the focus of this thesis. The optimization method was applied to 17 lung plans, and the results show that a local-search-based optimization achieves overall good results, with a mean FLASH coverage of 31.7% outside the CTV after a mean optimization time of 8.75 s. This is faster than published results using a genetic algorithm.
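A minimal sketch of the local-search idea: a random pairwise-swap hill climb over the spot order, with a hypothetical objective function standing in for FLASH coverage outside the CTV.

```python
import random

def local_search(order, objective, iters=10_000):
    """Keep a random pairwise swap of spots only if it improves the objective."""
    best = objective(order)
    for _ in range(iters):
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
        score = objective(order)
        if score > best:
            best = score
        else:
            order[i], order[j] = order[j], order[i]  # revert the swap
    return order, best
```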

Customized Federated Learning for Multi-Source Decentralized Medical Image Classification

  • Wicaksana, J.
  • Yan, Z.
  • Yang, X.
  • Liu, Y.
  • Fan, L.
  • Cheng, K. T.
IEEE J Biomed Health Inform 2022 Journal Article, cited 4 times
Website
The performance of deep networks for medical image analysis is often constrained by limited medical data, which is privacy-sensitive. Federated learning (FL) alleviates the constraint by allowing different institutions to collaboratively train a federated model without sharing data. However, the federated model is often suboptimal with respect to the characteristics of each client's local data. Instead of training a single global model, we propose Customized FL (CusFL), for which each client iteratively trains a client-specific/private model based on a federated global model aggregated from all private models trained in the immediate previous iteration. Two overarching strategies employed by CusFL lead to its superior performance: 1) the federated model is mainly for feature alignment and thus only consists of feature extraction layers; 2) the federated feature extractor is used to guide the training of each private model. In that way, CusFL allows each client to selectively learn useful knowledge from the federated model to improve its personalized model. We evaluated CusFL on multi-source medical image datasets for the identification of clinically significant prostate cancer and the classification of skin lesions.

The Image Biomarker Standardization Initiative: Standardized Convolutional Filters for Reproducible Radiomics and Enhanced Clinical Insights

  • Whybra, P.
  • Zwanenburg, A.
  • Andrearczyk, V.
  • Schaer, R.
  • Apte, A. P.
  • Ayotte, A.
  • Baheti, B.
  • Bakas, S.
  • Bettinelli, A.
  • Boellaard, R.
  • Boldrini, L.
  • Buvat, I.
  • Cook, G. J. R.
  • Dietsche, F.
  • Dinapoli, N.
  • Gabrys, H. S.
  • Goh, V.
  • Guckenberger, M.
  • Hatt, M.
  • Hosseinzadeh, M.
  • Iyer, A.
  • Lenkowicz, J.
  • Loutfi, M. A. L.
  • Lock, S.
  • Marturano, F.
  • Morin, O.
  • Nioche, C.
  • Orlhac, F.
  • Pati, S.
  • Rahmim, A.
  • Rezaeijo, S. M.
  • Rookyard, C. G.
  • Salmanpour, M. R.
  • Schindele, A.
  • Shiri, I.
  • Spezi, E.
  • Tanadini-Lang, S.
  • Tixier, F.
  • Upadhaya, T.
  • Valentini, V.
  • van Griethuysen, J. J. M.
  • Yousefirizi, F.
  • Zaidi, H.
  • Muller, H.
  • Vallieres, M.
  • Depeursinge, A.
Radiology 2024 Journal Article, cited 1 times
Website
Filters are commonly used to enhance specific structures and patterns in images, such as vessels or peritumoral regions, to enable clinical insights beyond the visible image using radiomics. However, their lack of standardization restricts reproducibility and clinical translation of radiomics decision support tools. In this special report, teams of researchers who developed radiomics software participated in a three-phase study (September 2020 to December 2022) to establish a standardized set of filters. The first two phases focused on finding reference filtered images and reference feature values for commonly used convolutional filters: mean, Laplacian of Gaussian, Laws and Gabor kernels, separable and nonseparable wavelets (including decomposed forms), and Riesz transformations. In the first phase, 15 teams used digital phantoms to establish 33 reference filtered images of 36 filter configurations. In phase 2, 11 teams used a chest CT image to derive reference values for 323 of 396 features computed from filtered images using 22 filter and image processing configurations. Reference filtered images and feature values for Riesz transformations were not established. Reproducibility of standardized convolutional filters was validated on a public data set of multimodal imaging (CT, fluorodeoxyglucose PET, and T1-weighted MRI) in 51 patients with soft-tissue sarcoma. At validation, reproducibility of 486 features computed from filtered images using nine configurations x three imaging modalities was assessed using the lower bounds of 95% CIs of intraclass correlation coefficients. Out of 486 features, 458 were found to be reproducible across nine teams with lower bounds of 95% CIs of intraclass correlation coefficients greater than 0.75. In conclusion, eight filter types were standardized with reference filtered images and reference feature values for verifying and calibrating radiomics software packages. A web-based tool is available for compliance checking.
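As an illustration of one standardized filter type, a Laplacian-of-Gaussian response map can be computed with SciPy; the input path and sigma value are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Laplacian-of-Gaussian response of a CT volume; sigma is the kernel scale
# in voxels (convert from mm using the image spacing).
ct = np.load("ct_volume.npy")                 # hypothetical input
log_response = gaussian_laplace(ct, sigma=1.5)
# Radiomic features would then be recomputed on `log_response`
# inside the region-of-interest mask.
```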

Sensitivity of standardised radiomics algorithms to mask generation across different software platforms

  • Whybra, Philip
  • Spezi, Emiliano
Scientific Reports 2023 Journal Article, cited 0 times
Website
The field of radiomics continues to converge on a standardised approach to image processing and feature extraction. Conventional radiomics requires a segmentation, and certain features can be sensitive to small contour variations. The industry standard for medical image communication stores contours as coordinate points that must be converted to a binary mask before image processing can take place. This study investigates the impact that the process of converting contours to masks can have on radiomic feature calculation. To this end we used a popular open dataset for radiomics standardisation and compared the impact of masks generated by importing the dataset into four medical imaging software packages. We interfaced our previously standardised radiomics platform with these packages using their published application programming interfaces to access the image volume, masks, and other data needed to calculate features. Additionally, we used super-sampling strategies to systematically evaluate the impact of contour-data pre-processing methods on radiomic feature calculation. Finally, we evaluated the effect that using different mask generation approaches could have on patient clustering in a multi-centre radiomics study. The study shows that even when working on the same dataset, mask and feature discrepancies occur depending on the contour-to-mask conversion technique implemented in various medical imaging software. We show that this also affects patient clustering and potentially radiomics-based modelling in multi-centre studies where a mix of mask generation software is used. We provide recommendations to negate this issue and facilitate reproducible and reliable radiomics.
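The contour-to-mask step at issue can be reproduced with, for example, scikit-image; boundary-pixel handling is exactly where converters tend to disagree. The contour array below is a hypothetical input already converted from patient to pixel coordinates.

```python
import numpy as np
from skimage.draw import polygon2mask

# contour: (N, 2) array of (row, col) points for one slice of an RTSTRUCT.
mask_slice = polygon2mask(image_shape=(512, 512), polygon=contour)

# Different packages disagree on whether boundary pixels fall inside the
# polygon; comparing masks produced by several converters on the same
# contour exposes the feature discrepancies the study describes.
```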

CT-based radiomic analysis of hepatocellular carcinoma patients to predict key genomic information

  • West, Derek L
  • Kotrotsou, Aikaterini
  • Niekamp, Andrew Scott
  • Idris, Tagwa
  • Giniebra Camejo, Dunia
  • Mazal, Nicolas James
  • Cardenas, Nicolas James
  • Goldberg, Jackson L
  • Colen, Rivka R
Journal of Clinical Oncology 2017 Journal Article, cited 1 times
Website

Deep learning in CT colonography: differentiating premalignant from benign colorectal polyps

  • Wesp, P.
  • Grosu, S.
  • Graser, A.
  • Maurus, S.
  • Schulz, C.
  • Knosel, T.
  • Fabritius, M. P.
  • Schachtner, B.
  • Yeh, B. M.
  • Cyran, C. C.
  • Ricke, J.
  • Kazmierczak, P. M.
  • Ingrisch, M.
Eur Radiol 2022 Journal Article, cited 0 times
Website
OBJECTIVES: To investigate the differentiation of premalignant from benign colorectal polyps detected by CT colonography using deep learning. METHODS: In this retrospective analysis of an average risk colorectal cancer screening sample, polyps of all size categories and morphologies were manually segmented on supine and prone CT colonography images and classified as premalignant (adenoma) or benign (hyperplastic polyp or regular mucosa) according to histopathology. Two deep learning models SEG and noSEG were trained on 3D CT colonography image subvolumes to predict polyp class, and model SEG was additionally trained with polyp segmentation masks. Diagnostic performance was validated in an independent external multicentre test sample. Predictions were analysed with the visualisation technique Grad-CAM++. RESULTS: The training set consisted of 107 colorectal polyps in 63 patients (mean age: 63 +/- 8 years, 40 men) comprising 169 polyp segmentations. The external test set included 77 polyps in 59 patients comprising 118 polyp segmentations. Model SEG achieved a ROC-AUC of 0.83 and 80% sensitivity at 69% specificity for differentiating premalignant from benign polyps. Model noSEG yielded a ROC-AUC of 0.75, 80% sensitivity at 44% specificity, and an average Grad-CAM++ heatmap score of >/= 0.25 in 90% of polyp tissue. CONCLUSIONS: In this proof-of-concept study, deep learning enabled the differentiation of premalignant from benign colorectal polyps detected with CT colonography and the visualisation of image regions important for predictions. The approach did not require polyp segmentation and thus has the potential to facilitate the identification of high-risk polyps as an automated second reader. KEY POINTS: * Non-invasive deep learning image analysis may differentiate premalignant from benign colorectal polyps found in CT colonography scans. * Deep learning autonomously learned to focus on polyp tissue for predictions without the need for prior polyp segmentation by experts. * Deep learning potentially improves the diagnostic accuracy of CT colonography in colorectal cancer screening by allowing for a more precise selection of patients who would benefit from endoscopic polypectomy, especially for patients with polyps of 6-9 mm size.

Multi-task Learning for Brain Tumor Segmentation

  • Weninger, Leon
  • Liu, Qianyu
  • Merhof, Dorit
2020 Book Section, cited 0 times
Accurate and reproducible detection of a brain tumor and segmentation of its sub-regions has high relevance in clinical trials and practice. Numerous recent publications have shown that deep learning algorithms are well suited for this application. However, fully supervised methods require a large amount of annotated training data. To obtain such data, time-consuming expert annotations are necessary. Furthermore, the enhancing core appears to be the most challenging to segment among the different sub-regions. Therefore, we propose a novel and straightforward method to improve brain tumor segmentation by joint learning of three related tasks with a partly shared architecture. Next to the tumor segmentation, image reconstruction and detection of enhancing tumor are learned simultaneously using a shared encoder. Meanwhile, different decoders are used for the different tasks, allowing for arbitrary switching of the loss function. In effect, this means that the architecture can partly learn on data without annotations by using only the autoencoder part. This makes it possible to train on bigger, but unannotated datasets, as only the segmenting decoder needs to be fine-tuned solely on annotated images. The second auxiliary task, detecting the presence of enhancing tumor tissue, is intended to provide a focus of the network on this area, and provides further information for postprocessing. The final prediction on the BraTS validation data using our method gives Dice scores of 0.89, 0.79 and 0.75 for the whole tumor, tumor core and the enhancing tumor region, respectively.
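The partly shared architecture can be outlined in a few lines of PyTorch; this is our minimal rendering of the idea (shared encoder, task-specific decoders, and a presence classifier), not the authors' implementation.

```python
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Shared encoder feeding a segmentation decoder, a reconstruction
    decoder, and an enhancing-tumor presence head. Unannotated scans can
    still train the encoder via the reconstruction path alone."""
    def __init__(self, enc, seg_dec, rec_dec, n_feat):
        super().__init__()
        self.enc, self.seg_dec, self.rec_dec = enc, seg_dec, rec_dec
        self.cls_head = nn.Sequential(          # assumes 5D encoder output
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(n_feat, 1))

    def forward(self, x):
        z = self.enc(x)
        return self.seg_dec(z), self.rec_dec(z), self.cls_head(z)
```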

Automatic Segmentation of Brain Tumor from 3D MR Images Using SegNet, U-Net, and PSP-Net

  • Weng, Yan-Ting
  • Chan, Hsiang-Wei
  • Huang, Teng-Yi
2020 Book Section, cited 0 times
In this study, we used three two-dimensional convolutional neural networks (SegNet, U-Net, and PSP-Net) for automatic segmentation of brain tumors from three-dimensional MR datasets. We extracted 2D slices from three slice orientations as the input tensors of the networks in the training stage. In the prediction stage, we predict a volume several times, slicing along different orientations. Based on the results, we found that volumes predicted more times have better outcomes than those predicted fewer times. We also implemented two ensemble methods to combine the results of the three networks. According to the results, the above strategies all contributed to improving segmentation accuracy.

General purpose radiomics for multi-modal clinical research

  • Wels, Michael G.
  • Suehling, Michael
  • Muehlberg, Alexander
  • Lades, Félix
2019 Conference Proceedings, cited 0 times
Website
In this paper we present an integrated software solution targeting clinical researchers for discovering relevant radiomic biomarkers, covering the entire value chain of clinical radiomics research. Its intention is to make this kind of research possible even for less experienced scientists. The solution provides means to create, collect, manage, and statistically analyze patient cohorts consisting of potentially multimodal 3D medical imaging data, associated volume-of-interest annotations, and radiomic features. Volumes of interest can be created by an extensive set of semi-automatic segmentation tools. Radiomic feature computation relies on the de facto standard library PyRadiomics and ensures comparability and reproducibility of carried-out studies. Tabular cohort studies containing the radiomics of the volumes of interest can be managed directly within the software solution. The integrated statistical analysis capabilities introduce an additional layer of abstraction allowing non-experts to benefit from radiomics research as well. There are ready-to-use methods for clustering, uni- and multivariate statistics, and machine learning to be applied to the collected cohorts. They are validated in two case studies: first, on a subset of the publicly available NSCLC-Radiomics data collection containing pretreatment CT scans of 317 non-small cell lung cancer (NSCLC) patients; and second, on the Lung Image Database Consortium imaging study with diagnostic and lung cancer screening CT scans including 2,753 distinct lesions from 870 patients. Integrated software solutions with optimized workflows like the one presented, and further developments thereof, may play an important role in making precision medicine come to life in clinical environments.
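Since the solution names PyRadiomics as its feature-computation backend, the extraction step can be sketched with that library's public API; the file paths are placeholders.

```python
from radiomics import featureextractor

# Default settings extract the standard IBSI-aligned feature classes;
# individual classes can be toggled explicitly.
extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.enableFeatureClassByName("glcm")

features = extractor.execute("ct_image.nrrd", "tumor_mask.nrrd")
for name, value in features.items():
    if not name.startswith("diagnostics"):
        print(name, value)
```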

Predicting Isocitrate Dehydrogenase Mutation Status in Glioma Using Structural Brain Networks and Graph Neural Networks

  • Wei, Yiran
  • Li, Yonghao
  • Chen, Xi
  • Schönlieb, Carola-Bibiane
  • Li, Chao
  • Price, Stephen J.
2022 Book Section, cited 0 times
Glioma is a common malignant brain tumor with distinct survival among patients. The isocitrate dehydrogenase (IDH) gene mutation provides critical diagnostic and prognostic value for glioma. It is of crucial significance to non-invasively predict IDH mutation based on pre-treatment MRI. Machine learning/deep learning models show reasonable performance in predicting IDH mutation using MRI. However, most models neglect the systematic brain alterations caused by tumor invasion, where widespread infiltration along white matter tracts is a hallmark of glioma. Structural brain network provides an effective tool to characterize brain organisation, which could be captured by the graph neural networks (GNN) to more accurately predict IDH mutation. Here we propose a method to predict IDH mutation using GNN, based on the structural brain network of patients. Specifically, we firstly construct a network template of healthy subjects, consisting of atlases of edges (white matter tracts) and nodes (cortical/subcortical brain regions) to provide regions of interest (ROIs). Next, we employ autoencoders to extract the latent multi-modal MRI features from the ROIs of edges and nodes in patients, to train a GNN architecture for predicting IDH mutation. The results show that the proposed method outperforms the baseline models using the 3D-CNN and 3D-DenseNet. In addition, model interpretation suggests its ability to identify the tracts infiltrated by tumor, corresponding to clinical prior knowledge. In conclusion, integrating brain networks with GNN offers a new avenue to study brain lesions using computational neuroscience and computer vision approaches.

A Gaussian Mixture Model based Level Set Method for Volume Segmentation in Medical Images

  • Webb, Grayson
2018 Thesis, cited 0 times
Website
This thesis proposes a probabilistic level set method for segmentation of tumors with heterogeneous intensities. It models the intensities of the tumor and surrounding tissue using Gaussian mixture models. Through a contour-based initialization procedure, samples are gathered to be used in expectation maximization of the mixture-model parameters. The proposed method is compared against a threshold-based segmentation method using MRI images retrieved from The Cancer Imaging Archive. The cases are manually segmented and an automated testing procedure is used to find optimal parameters for the proposed method, which is then tested against the threshold-based method. Segmentation times, Dice coefficients, and volume errors are compared. The evaluation reveals that the proposed method has a mean segmentation time comparable to the threshold-based method, and performs faster in cases where the volume error does not exceed 40%. The mean Dice coefficient and volume error are also improved, while achieving lower deviation.
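The mixture-model step reduces to standard EM fits; a hedged sketch of the likelihood-ratio term that would drive the level set (variable names and component counts are ours).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Intensities sampled inside and around the initial contour (hypothetical).
gmm_tumor = GaussianMixture(n_components=3).fit(tumor_samples.reshape(-1, 1))
gmm_background = GaussianMixture(n_components=3).fit(bg_samples.reshape(-1, 1))

# Per-voxel log-likelihood ratio: positive values push the contour outward
# (voxel looks like tumor), negative values pull it inward.
llr = (gmm_tumor.score_samples(intensities.reshape(-1, 1))
       - gmm_background.score_samples(intensities.reshape(-1, 1)))
```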

Cox models with time‐varying covariates and partly‐interval censoring–A maximum penalised likelihood approach

  • Webb, Annabel
  • Ma, Jun
2022 Journal Article, cited 0 times
Time-varying covariates can be important predictors when model-based predictions are considered. A Cox model that includes time-varying covariates is usually referred to as an extended Cox model. When only right censoring is present in the observed survival times, the conventional partial likelihood method is still applicable for estimating the regression coefficients of an extended Cox model. However, if there are interval-censored survival times, then the partial likelihood method is not directly available unless an imputation, such as middle-point imputation, is used to replace the left- and interval-censored data. Such imputation methods are well known to introduce bias. This paper considers fitting extended Cox models using the maximum penalised likelihood method, allowing observed survival times to be partly interval-censored, where a penalty function is used to regularise the baseline hazard estimate. We present simulation studies to demonstrate the performance of our proposed method, and illustrate our method with applications to two real datasets from medical research.
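For contrast with the paper's penalised-likelihood approach, the conventional right-censored extended Cox fit looks like this in lifelines (column names hypothetical); note that lifelines' partial-likelihood fitter handles only right censoring, which is precisely the limitation the paper addresses.

```python
from lifelines import CoxTimeVaryingFitter

# Long-format data: one row per (id, start, stop] interval with the
# covariate values that held over that interval.
ctv = CoxTimeVaryingFitter(penalizer=0.1)  # ridge penalty, cf. penalised likelihood
ctv.fit(long_df, id_col="id", event_col="event",
        start_col="start", stop_col="stop")
ctv.print_summary()
```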

EfficientNetV2 based for MRI brain tumor image classification

  • Waskita, A. A.
  • Amda, Julfa Muhammad
  • Sihono, Dwi Seno Kuncoro
  • Prasetio, Heru
2023 Conference Paper, cited 1 times
Website
An accurate and timely diagnosis is of utmost importance when it comes to treating brain tumors effectively. To facilitate this process, we developed a brain tumor classification approach that employs transfer learning with a pre-trained EfficientNet V2 model. Our dataset comprises brain tumor images categorized into four labels: tumor (glioma, meningioma, pituitary) and normal. We used the EfficientNet V2 model with the B0, B1, B2, and B3 variants as base models for our experiments. To adapt the model to our label categories, we modified the final layer and retrained it on our dataset. Optimization used the Adam algorithm and the categorical cross-entropy loss function. Experiments proceeded in multiple stages: randomizing the dataset, pre-processing, training the model, and evaluating the results. During evaluation, we used appropriate metrics to assess accuracy and loss on the test data, and we analyzed model performance by visualizing the loss and accuracy curves throughout training. This procedure yielded strong accuracy and loss rates on the test data and led to successful classification of brain tumors with the EfficientNet V2 B0, B1, B2, and B3 variants. Additionally, a confusion matrix allowed us to assess the classification ability for each tumor category. This research has the potential to enhance medical diagnosis by utilizing transfer learning techniques and pre-trained models; we hope the approach can help detect and treat brain tumors in their early stages, ultimately leading to better patient outcomes.
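
A minimal sketch of this transfer-learning setup with the Keras EfficientNetV2-B0 backbone; the input size and head layers are assumptions, since the abstract does not specify them.

```python
# Sketch: pre-trained EfficientNetV2-B0 with a new 4-class softmax head,
# Adam optimizer and categorical cross-entropy, as the abstract describes.
import tensorflow as tf

base = tf.keras.applications.EfficientNetV2B0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # freeze the backbone; optionally fine-tune later

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),  # glioma/meningioma/pituitary/normal
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```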

Quantifying the incremental value of deep learning: Application to lung nodule detection

  • Warsavage, Theodore Jr
  • Xing, Fuyong
  • Baron, Anna E
  • Feser, William J
  • Hirsch, Erin
  • Miller, York E
  • Malkoski, Stephen
  • Wolf, Holly J
  • Wilson, David O
  • Ghosh, Debashis
PLoS One 2020 Journal Article, cited 0 times
Website
We present a case study of implementing a machine learning algorithm within an incremental value framework in the domain of lung cancer research. Machine learning methods have often been shown to be competitive with prediction models in some domains; however, implementation of these methods is in early development. Often these methods are only directly compared to existing methods; here we present a framework for assessing the value of a machine learning model by assessing its incremental value. We developed a machine learning model to identify and classify lung nodules and assessed the incremental value added to existing risk prediction models. Multiple external datasets were used for validation. We found that our image model, trained on a dataset from The Cancer Imaging Archive (TCIA), improves upon existing models that are restricted to patient characteristics, but results were inconclusive as to whether it improves on models that consider nodule features. Another interesting finding is the variable performance on different datasets, suggesting that population generalization with machine learning models may be more challenging than is often assumed.

Survival analysis of pre-operative GBM patients by using quantitative image features

  • Wangaryattawanich, Pattana
  • Wang, Jixin
  • Thomas, Ginu A
  • Chaddad, Ahmad
  • Zinn, Pascal O
  • Colen, Rivka R
2014 Conference Proceedings, cited 1 times
Website
This paper presents a preliminary study of the relationship between survival time, both overall and progression-free, and multiple imaging features of patients with glioblastoma. The results showed that specific imaging features have significant prognostic value for predicting survival time in glioblastoma patients.

Multicenter imaging outcomes study of The Cancer Genome Atlas glioblastoma patient cohort: imaging predictors of overall and progression-free survival

  • Wangaryattawanich, Pattana
  • Hatami, Masumeh
  • Wang, Jixin
  • Thomas, Ginu
  • Flanders, Adam
  • Kirby, Justin
  • Wintermark, Max
  • Huang, Erich S.
  • Bakhtiari, Ali Shojaee
  • Luedi, Markus M.
  • Hashmi, Syed S.
  • Rubin, Daniel L.
  • Chen, James Y.
  • Hwang, Scott N.
  • Freymann, John
  • Holder, Chad A.
  • Zinn, Pascal O.
  • Colen, Rivka R.
2015 Journal Article, cited 40 times
Website
Despite an aggressive therapeutic approach, the prognosis for most patients with glioblastoma (GBM) remains poor. The aim of this study was to determine the significance of preoperative MRI variables, both quantitative and qualitative, with regard to overall and progression-free survival in GBM. We retrospectively identified 94 untreated GBM patients from the Cancer Imaging Archive who had pretreatment MRI and corresponding patient outcomes and clinical information in The Cancer Genome Atlas. Qualitative imaging assessments were based on the Visually Accessible Rembrandt Images feature-set criteria. Volumetric parameters were obtained for the specific tumor components: contrast enhancement, necrosis, and edema/invasion. Cox regression was used to assess the prognostic and survival significance of each imaging variable. Univariable Cox regression analysis demonstrated 10 imaging features and 2 clinical variables to be significantly associated with overall survival. Multivariable Cox regression analysis showed that tumor-enhancing volume (P = .03) and eloquent brain involvement (P < .001) were independent prognostic indicators of overall survival. In the multivariable Cox analysis of the volumetric features, an edema/invasion volume of more than 85 000 mm3 and the proportion of enhancing tumor were significantly correlated with higher mortality (Ps = .004 and .003, respectively). Preoperative MRI parameters have a significant prognostic role in predicting survival in patients with GBM, thus making them useful for patient stratification and endpoint biomarkers in clinical trials.

Quantifying lung cancer heterogeneity using novel CT features: a cross-institute study

  • Wang, Z.
  • Yang, C.
  • Han, W.
  • Sui, X.
  • Zheng, F.
  • Xue, F.
  • Xu, X.
  • Wu, P.
  • Chen, Y.
  • Gu, W.
  • Song, W.
  • Jiang, J.
Insights Imaging 2022 Journal Article, cited 0 times
Website
BACKGROUND: Radiomics-based image metrics are not used in the clinic despite the rapidly growing literature. We selected eight promising radiomic features and validated their value in decoding lung cancer heterogeneity. METHODS: CT images of 236 lung cancer patients were obtained from three different institutes, whereupon radiomic features were extracted according to a standardized procedure. The predictive value for patient long-term prognosis and association with routinely used semantic, genetic (e.g., epidermal growth factor receptor (EGFR)), and histopathological cancer profiles were validated. Feature measurement reproducibility was assessed. RESULTS: All eight selected features were robust across repeat scans (intraclass coefficient range: 0.81-0.99), and were associated with at least one of the cancer profiles: prognostic, semantic, genetic, and histopathological. For instance, "kurtosis" had high predictive value for early death (AUC at first year: 0.70-0.75 in two independent cohorts), a negative association with histopathological grade (Spearman's r: -0.30), and altered expression levels with respect to EGFR mutation and semantic characteristics (solid intensity, spiculated shape, juxtapleural location, and pleura tag; all p < 0.05). Combined as a radiomic score, the features had a higher area under the curve for predicting 5-year survival (train: 0.855, test: 0.780, external validation: 0.760) than routine characteristics (0.733, 0.622, 0.613, respectively), and a better capability for patient death risk stratification (hazard ratio: 5.828, 95% confidence interval: 2.915-11.561) than histopathological staging and grading. CONCLUSIONS: We highlighted the clinical value of radiomic features. Following confirmation, these features may change the way in which we approach CT imaging and improve the individualized care of lung cancer patients.

Single NMR image super-resolution based on extreme learning machine

  • Wang, Zhiqiong
  • Xin, Junchang
  • Wang, Zhongyang
  • Tian, Shuo
  • Qiu, Xuejun
Physica Medica 2016 Journal Article, cited 0 times
Website
Introduction: There is a sharp contrast between the performance limitations of MRI equipment and radiologists' demand for higher-resolution NMR images. It is therefore important to study super-resolution algorithms suitable for NMR images, using low-cost software in place of expensive equipment upgrades. Methods and materials: Firstly, a series of NMR images is generated, ranging from the original images with original noise to the lowest-resolution images with the highest noise. Then, based on the extreme learning machine, a mapping model is constructed from lower-resolution NMR images with higher noise to higher-resolution NMR images with lower noise for each pair of adjacent images in the obtained sequence. Finally, an optimal mapping model is established by ensembling, to reconstruct higher-resolution NMR images with lower noise from the original-resolution NMR images with original noise. Experiments are carried out on 990111 NMR brain images from the NITRC, REMBRANDT, RIDER NEURO MRI, TCGA-GBM and TCGA-LGG datasets. Results: The performance of the proposed method is compared with three approaches using 7 indexes, and the experimental results show a significant improvement. Discussion: Since our method accounts for noise, it achieves a 20% higher peak signal-to-noise ratio. Because it is sensitive to details and retains image characteristics better, it achieves a 15% higher image-quality gain in the additional evaluation. Finally, since the extreme learning machine learns rapidly, our method is 46.1% faster. Keywords: Extreme learning machine; NMR; Single image; Super-resolution.
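
The core of an extreme learning machine is a fixed random hidden layer whose output weights are solved in closed form; here is a minimal NumPy sketch of such a patch-wise super-resolution regressor, with synthetic patch sizes as stand-ins.

```python
# Minimal sketch of an extreme learning machine for patch-wise super-resolution:
# random hidden weights (never trained), closed-form least-squares output weights.
import numpy as np

def elm_fit(X, Y, n_hidden=512, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # fixed random projection
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                           # hidden activations
    beta = np.linalg.pinv(H) @ Y                     # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = np.random.rand(1000, 64)    # e.g. 8x8 low-resolution patch vectors (synthetic)
Y = np.random.rand(1000, 256)   # e.g. 16x16 high-resolution target patches
W, b, beta = elm_fit(X, Y)
Y_hat = elm_predict(X, W, b, beta)
```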

Radiomics features based on T2-weighted fluid-attenuated inversion recovery MRI predict the expression levels of CD44 and CD133 in lower-grade gliomas

  • Wang, Z.
  • Tang, X.
  • Wu, J.
  • Zhang, Z.
  • He, K.
  • Wu, D.
  • Chen, S.
  • Xiao, X.
Future Oncol 2021 Journal Article, cited 0 times
Website
Objective: To verify the association between CD44 and CD133 expression levels and the prognosis of patients with lower-grade gliomas (LGGs), and to construct radiomic models to predict those two genes' expression levels before surgery. Materials & methods: Genomic data of patients with LGG and the corresponding T2-weighted fluid-attenuated inversion recovery images were downloaded from The Cancer Genome Atlas and The Cancer Imaging Archive, and were used for prognosis analysis, radiomic feature extraction and model construction. Results & conclusion: CD44 and CD133 expression levels in LGG significantly affect the prognosis of patients with LGG. Based on the T2-weighted fluid-attenuated inversion recovery images, the radiomic features can effectively predict the expression levels of CD44 and CD133 before surgery.

Using a deep learning prior for accelerating hyperpolarized (13) C MRSI on synthetic cancer datasets

  • Wang, Z.
  • Luo, G.
  • Li, Y.
  • Cao, P.
Magn Reson Med 2024 Journal Article, cited 0 times
Website
PURPOSE: We aimed to incorporate a deep learning prior with k-space data fidelity for accelerating hyperpolarized carbon-13 MRSI, demonstrated on synthetic cancer datasets. METHODS: A two-site exchange model, derived from the Bloch equation of MR signal evolution, was first used to simulate training and testing data, that is, synthetic phantom datasets. Five singular maps generated from each simulated dataset were used to train a deep learning prior, which was then employed with the fidelity term to reconstruct the undersampled MRI k-space data. The proposed method was assessed on synthetic human brain tumor images (N = 33), prostate cancer images (N = 72), and mouse tumor images (N = 58) for three undersampling factors and 2.5% additive Gaussian noise. Furthermore, varied levels of Gaussian noise with SDs of 2.5%, 5%, and 10% were added to the synthetic prostate cancer data, and the corresponding reconstruction results were evaluated. RESULTS: For quantitative evaluation, peak SNRs were approximately 32 dB, and accuracy was generally improved by 5 to 8 dB compared with compressed sensing with L1-norm regularization or total variation regularization. Reasonable normalized RMS errors were obtained. Our method also worked robustly against noise, even on data with a noise SD of 10%. CONCLUSION: The proposed singular value decomposition + iterative deep learning model can be considered a general framework that extends the application of deep learning MRI reconstruction to metabolic imaging. The morphology of tumors and metabolic images could be measured robustly at six-fold acceleration using our method.
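
A minimal sketch of the general pattern of combining a learned prior with k-space data fidelity, in plug-and-play style; the `denoiser` callable stands in for the trained network, and the simple gradient-step form is an assumption rather than the paper's exact SVD-based scheme.

```python
# Sketch: alternate a gradient step on the k-space fidelity term with a
# projection onto a learned prior (plug-and-play style reconstruction).
import numpy as np

def reconstruct(y, mask, denoiser, n_iter=20, step=1.0):
    x = np.fft.ifft2(y)                           # zero-filled initial estimate
    for _ in range(n_iter):
        # Gradient step on ||M F x - y||^2 (F = 2D FFT, M = sampling mask)
        resid = mask * np.fft.fft2(x) - y
        x = x - step * np.fft.ifft2(mask * resid)
        x = denoiser(x)                           # learned-prior projection
    return x

# Toy usage with an identity "denoiser" (a trained network would go here):
y = np.fft.fft2(np.random.rand(64, 64)) * (np.random.rand(64, 64) > 0.5)
mask = (np.abs(y) > 0).astype(float)
x_hat = reconstruct(y, mask, denoiser=lambda x: x)
```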

Automated Detection of Clinically Significant Prostate Cancer in mp-MRI Images Based on an End-to-End Deep Neural Network

  • Wang, Z.
  • Liu, C.
  • Cheng, D.
  • Wang, L.
  • Yang, X.
  • Cheng, K. T.
IEEE Trans Med Imaging 2018 Journal Article, cited 127 times
Website
Automated methods for detecting clinically significant (CS) prostate cancer (PCa) in multi-parameter magnetic resonance images (mp-MRI) are of high demand. Existing methods typically employ several separate steps, each of which is optimized individually without considering the error tolerance of other steps. As a result, they could either involve unnecessary computational cost or suffer from errors accumulated over steps. In this paper, we present an automated CS PCa detection system, where all steps are optimized jointly in an end-to-end trainable deep neural network. The proposed neural network consists of concatenated subnets: 1) a novel tissue deformation network (TDN) for automated prostate detection and multimodal registration and 2) a dual-path convolutional neural network (CNN) for CS PCa detection. Three types of loss functions, i.e., classification loss, inconsistency loss, and overlap loss, are employed for optimizing all parameters of the proposed TDN and CNN. In the training phase, the two nets mutually affect each other and effectively guide registration and extraction of representative CS PCa-relevant features to achieve results with sufficient accuracy. The entire network is trained in a weakly supervised manner by providing only image-level annotations (i.e., presence/absence of PCa) without exact priors of lesions' locations. Compared with most existing systems which require supervised labels, e.g., manual delineation of PCa lesions, it is much more convenient for clinical usage. Comprehensive evaluation based on fivefold cross validation using 360 patient data demonstrates that our system achieves a high accuracy for CS PCa detection, i.e., a sensitivity of 0.6374 and 0.8978 at 0.1 and 1 false positives per normal/benign patient.
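
To make the three-term objective concrete, here is a hedged sketch of how classification, inconsistency, and overlap losses might be combined; the specific loss forms and weights are illustrative, not the paper's definitions.

```python
# Sketch: one joint objective over the TDN and CNN subnets; the inconsistency
# term is shown as an MSE between subnet features and the overlap term as a
# Dice-style agreement (illustrative stand-ins for the paper's losses).
import torch
import torch.nn.functional as F

def total_loss(cls_logits, labels, feat_tdn, feat_cnn, seg_pred, seg_gt,
               w_incons=0.1, w_overlap=0.1):
    l_cls = F.cross_entropy(cls_logits, labels)          # image-level PCa labels
    l_incons = F.mse_loss(feat_tdn, feat_cnn)            # keep subnets consistent
    inter = (seg_pred * seg_gt).sum()
    l_overlap = 1 - (2 * inter + 1) / (seg_pred.sum() + seg_gt.sum() + 1)
    return l_cls + w_incons * l_incons + w_overlap * l_overlap
```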

Semi-supervised mp-MRI data synthesis with StitchLayer and auxiliary distance maximization

  • Wang, Zhiwei
  • Lin, Yi
  • Cheng, Kwang-Ting Tim
  • Yang, Xin
Medical Image Analysis 2020 Journal Article, cited 0 times

CLCU-Net: Cross-level connected U-shaped network with selective feature aggregation attention module for brain tumor segmentation

  • Wang, Y. L.
  • Zhao, Z. J.
  • Hu, S. Y.
  • Chang, F. L.
Comput Methods Programs Biomed 2021 Journal Article, cited 0 times
Website
BACKGROUND AND OBJECTIVE: Brain tumors are among the most deadly cancers worldwide. With the development of deep convolutional neural networks, many brain tumor segmentation methods now help clinicians diagnose and operate. However, most of these methods use multi-scale features insufficiently, reducing their ability to extract the features and details of brain tumors. To assist clinicians in the accurate automatic segmentation of brain tumors, we built a new deep learning network that makes full use of multi-scale features to improve brain tumor segmentation performance. METHODS: We propose a novel cross-level connected U-shaped network (CLCU-Net) that connects features of different scales to fully utilize multi-scale information. In addition, we propose a generic attention module (Segmented Attention Module, SAM) on the connections between different scale features for selectively aggregating features, which provides a more efficient connection of the different scale features. Moreover, we employ deep supervision and spatial pyramid pooling (SPP) to further improve the method's performance. RESULTS: We evaluated our method on the BRATS 2018 dataset using five indexes and achieved excellent performance, with a Dice score of 88.5%, precision of 91.98%, recall of 85.62%, 36.34M parameters, and an inference time of 8.89 ms for the whole tumor, outperforming six state-of-the-art methods. Moreover, an analysis of the heatmaps of different attention modules showed that the attention module proposed in this study is better suited for segmentation tasks than other existing popular attention modules. CONCLUSION: Both the qualitative and quantitative experimental results indicate that our cross-level connected U-shaped network with the selective feature aggregation attention module achieves accurate brain tumor segmentation and is considered quite instrumental for clinical practice.
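
Below is a generic attention gate on a cross-level skip connection, in the spirit of (but not identical to) the SAM module described above; the channel sizes are made up.

```python
# Sketch: per-pixel gating of encoder skip features conditioned on decoder
# features, a common pattern for selective feature aggregation.
import torch
import torch.nn as nn

class SkipAttention(nn.Module):
    def __init__(self, enc_ch, dec_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(enc_ch, inter_ch, 1)
        self.phi = nn.Conv2d(dec_ch, inter_ch, 1)
        self.psi = nn.Conv2d(inter_ch, 1, 1)

    def forward(self, enc_feat, dec_feat):
        # assumes both feature maps share the same spatial size
        a = torch.relu(self.theta(enc_feat) + self.phi(dec_feat))
        gate = torch.sigmoid(self.psi(a))   # per-pixel selection weights
        return enc_feat * gate              # suppress irrelevant skip features

x_enc = torch.randn(1, 64, 32, 32)
x_dec = torch.randn(1, 128, 32, 32)
gated = SkipAttention(64, 128, 32)(x_enc, x_dec)
```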

Modality-Pairing Learning for Brain Tumor Segmentation

  • Wang, Yixin
  • Zhang, Yao
  • Hou, Feng
  • Liu, Yang
  • Tian, Jiang
  • Zhong, Cheng
  • Zhang, Yang
  • He, Zhiqiang
2021 Book Section, cited 0 times
Automatic brain tumor segmentation from multi-modality Magnetic Resonance Images (MRI) using deep learning methods plays an important role in assisting the diagnosis and treatment of brain tumors. However, previous methods mostly ignore the latent relationships among different modalities. In this work, we propose a novel end-to-end Modality-Pairing learning method for brain tumor segmentation. Parallel branches are designed to exploit different modality features, and a series of layer connections is utilized to capture complex relationships and abundant information among modalities. We also use a consistency loss to minimize the prediction variance between the two branches. In addition, a learning-rate warmup strategy is adopted to address training instability and early over-fitting. Lastly, we use an average ensemble of multiple models and some post-processing techniques to obtain the final results. Our method was tested on the BraTS 2020 online testing dataset, obtaining promising segmentation performance, with average Dice scores of 0.891, 0.842 and 0.816 for the whole tumor, tumor core and enhancing tumor, respectively. We won second place in the BraTS 2020 Challenge for the tumor segmentation task.
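
A minimal sketch of two ingredients named above, the inter-branch consistency loss and the learning-rate warmup; the MSE form of the consistency term and the warmup length are assumptions.

```python
# Sketch: consistency loss between paired branches plus a linear LR warmup.
import torch

def consistency_loss(pred_a, pred_b):
    # minimise the prediction variance between the two modality branches
    return torch.nn.functional.mse_loss(pred_a, pred_b)

model = torch.nn.Linear(10, 2)  # stand-in for one branch's parameters
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
warmup_steps = 500              # assumed value
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lambda step: min(1.0, (step + 1) / warmup_steps))
```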

Novel convolutional neural network architecture for improved pulmonary nodule classification on computed tomography

  • Wang, Yi
  • Zhang, Hao
  • Chae, Kum Ju
  • Choi, Younhee
  • Jin, Gong Yong
  • Ko, Seok-Bum
Multidimensional Systems and Signal Processing 2020 Journal Article, cited 0 times
Website
Computed tomography (CT) is widely used to locate pulmonary nodules for preliminary diagnosis of lung cancer. However, due to the high visual similarity between malignant (cancer) and benign (non-cancer) nodules, distinguishing malignant from benign nodules is not an easy task for a thoracic radiologist. In this paper, a novel convolutional neural network (ConvNet) architecture is proposed to classify pulmonary nodules as either benign or malignant. Due to the high variance of nodule characteristics in CT scans, such as size and shape, a multi-path, multi-scale architecture is proposed and applied in the ConvNet to improve the classification performance. The multi-scale method utilizes filters with different sizes to more effectively extract nodule features from local regions, and the multi-path architecture combines features extracted from different ConvNet layers, thereby enhancing the nodule features with respect to global regions. The proposed ConvNet is trained and evaluated on the LUNGx Challenge database, and achieves a sensitivity of 0.887 and a specificity of 0.924 with an area under the curve (AUC) of 0.948. The proposed ConvNet achieves a 14% AUC improvement compared to the state-of-the-art unsupervised learning approach, and also outperforms other state-of-the-art ConvNets explicitly designed for pulmonary nodule classification. For clinical usage, the proposed ConvNet could potentially assist radiologists in making diagnostic decisions in CT screening.
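
A minimal multi-scale block sketch: parallel filters of different sizes over the same nodule patch with concatenated outputs; the kernel sizes and channel counts are illustrative, not the paper's configuration.

```python
# Sketch: parallel 3x3/5x5/7x7 convolutions over one CT nodule patch, with
# outputs concatenated along the channel dimension (the multi-scale idea).
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.paths = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (3, 5, 7)
        ])

    def forward(self, x):
        return torch.relu(torch.cat([p(x) for p in self.paths], dim=1))

patch = torch.randn(1, 1, 64, 64)       # single-channel CT nodule patch
feats = MultiScaleBlock(1, 16)(patch)   # -> shape (1, 48, 64, 64)
```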

IILS: Intelligent imaging layout system for automatic imaging report standardization and intra-interdisciplinary clinical workflow optimization

  • Wang, Yang
  • Yan, Fangrong
  • Lu, Xiaofan
  • Zheng, Guanming
  • Zhang, Xin
  • Wang, Chen
  • Zhou, Kefeng
  • Zhang, Yingwei
  • Li, Hui
  • Zhao, Qi
  • Zhu, Hu
  • Chen, Fei
  • Gao, Cailiang
  • Qing, Zhao
  • Ye, Jing
  • Li, Aijing
  • Xin, Xiaoyan
  • Li, Danyan
  • Wang, Han
  • Yu, Hongming
  • Cao, Lu
  • Zhao, Chaowei
  • Deng, Rui
  • Tan, Libo
  • Chen, Yong
  • Yuan, Lihua
  • Zhou, Zhuping
  • Yang, Wen
  • Shao, Mingran
  • Dou, Xin
  • Zhou, Nan
  • Zhou, Fei
  • Zhu, Yue
  • Lu, Guangming
  • Zhang, Bing
EBioMedicine 2019 Journal Article, cited 1 times
Website
BACKGROUND: To achieve imaging report standardization and improve the quality and efficiency of the intra- and interdisciplinary clinical workflow, we proposed an intelligent imaging layout system (IILS) for a clinical decision support system-based ubiquitous healthcare service, which is a lung nodule management system using medical images. METHODS: We created a lung IILS based on deep learning for imaging report standardization and workflow optimization for the identification of nodules. Our IILS utilized a deep learning plus adaptive auto layout tool, which trained and tested a neural network with imaging data from all the main CT manufacturers from 11,205 patients. Model performance was evaluated by the receiver operating characteristic curve (ROC) and by calculating the corresponding area under the curve (AUC). The clinical application value of our IILS was assessed by a comprehensive comparison of multiple aspects. FINDINGS: Our IILS is clinically applicable due to its consistency in nodule detection, with a highest consistency of 0.94 and an AUC of 90.6% for malignant versus benign pulmonary nodules, with a sensitivity of 76.5% and specificity of 89.1%. Applying this IILS to a dataset of chest CT images, we demonstrate performance comparable to that of human experts in providing a better layout and aiding diagnosis, with 100% valid images and nodule display. The IILS was superior to the traditional manual system in performance, reducing the number of clicks from 14.45+/-0.38 to 2, the time consumed from 16.87+/-0.38 s to 6.92+/-0.10 s, the number of invalid images from 7.06+/-0.24 to 0, and missed lung nodules from 46.8% to 0%. INTERPRETATION: This IILS might achieve imaging report standardization and improve the clinical workflow, thereby opening a new window for the clinical application of artificial intelligence. FUNDING: The National Natural Science Foundation of China.

SGPNet: A Three-Dimensional Multitask Residual Framework for Segmentation and IDH Genotype Prediction of Gliomas

  • Wang, Yao
  • Wang, Yan
  • Guo, Chunjie
  • Zhang, Shuangquan
  • Yang, Lili
  • Rakhshan, Vahid
Computational Intelligence and Neuroscience 2021 Journal Article, cited 0 times
Website
Glioma is the main type of malignant brain tumor in adults, and the status of isocitrate dehydrogenase (IDH) mutation highly affects the diagnosis, treatment, and prognosis of gliomas. Radiographic medical imaging provides a noninvasive platform for sampling both inter- and intralesion heterogeneity of gliomas, and previous research has shown that the IDH genotype can be predicted from the fusion of multimodality radiology images. Features of medical images and the IDH genotype are vital for medical treatment; however, a multitask framework for segmenting the lesion areas of gliomas and predicting the IDH genotype has been lacking. In this paper, we propose a novel three-dimensional (3D) multitask deep learning model for segmentation and genotype prediction (SGPNet). Residual units are also introduced into SGPNet, which allows the output blocks to extract hierarchical features for the different tasks and facilitates information propagation. Our model reduces classification error rates by 26.6% compared with previous models on the datasets of the Multimodal Brain Tumor Segmentation Challenge (BRATS) 2020 and The Cancer Genome Atlas (TCGA) glioma databases. Furthermore, we are the first to practically investigate the influence of lesion areas on the performance of IDH genotype prediction by setting different groups of learning targets. The experimental results indicate that the information of lesion areas is more important for IDH genotype prediction. Our framework is effective and generalizable, and can serve as a highly automated tool to be applied in clinical decision making.

Deep learning based time-to-event analysis with PET, CT and joint PET/CT for head and neck cancer prognosis

  • Wang, Y.
  • Lombardo, E.
  • Avanzo, M.
  • Zschaek, S.
  • Weingartner, J.
  • Holzgreve, A.
  • Albert, N. L.
  • Marschner, S.
  • Fanetti, G.
  • Franchin, G.
  • Stancanello, J.
  • Walter, F.
  • Corradini, S.
  • Niyazi, M.
  • Lang, J.
  • Belka, C.
  • Riboldi, M.
  • Kurz, C.
  • Landry, G.
Comput Methods Programs Biomed 2022 Journal Article, cited 0 times
Website
OBJECTIVES: Recent studies have shown that deep learning based on pre-treatment positron emission tomography (PET) or computed tomography (CT) is promising for distant metastasis (DM) and overall survival (OS) prognosis in head and neck cancer (HNC). However, lesion segmentation is typically required, resulting in a predictive power susceptible to variations in primary and lymph node gross tumor volume (GTV) segmentation. This study aimed at achieving prognosis without GTV segmentation, and at extending single-modality prognosis to joint PET/CT in order to investigate the predictive performance of combined versus single-modality inputs. METHODS: We employed a 3D-ResNet combined with a time-to-event outcome model to incorporate censoring information. We focused on the prognosis of DM and OS for HNC patients. For each clinical endpoint, five models with PET and/or CT images as input were compared: PET-GTV, PET-only, CT-GTV, CT-only, and PET/CT-GTV models, where -GTV indicates that the corresponding images were masked using the GTV contour. Publicly available delineated CT and PET scans from 4 different Canadian hospitals (293) and the MAASTRO clinic (74) were used for training by 3-fold cross-validation (CV). For independent testing, we used 110 patients from a collaborating institution. The predictive performance was evaluated via Harrell's Concordance Index (HCI) and Kaplan-Meier curves. RESULTS: In a 5-year time-to-event analysis, all models produced CV HCIs with median values around 0.8 for DM and 0.7 for OS. The best performance was obtained with the PET-only model, achieving a median testing HCI of 0.82 for DM and 0.69 for OS. Compared with the PET/CT-GTV model, the PET-only model still had advantages of up to 0.07 in terms of testing HCI. The Kaplan-Meier curves and corresponding log-rank test results also demonstrated the significant stratification capability of our models on the testing cohort. CONCLUSION: Deep learning-based DM and OS time-to-event models showed predictive capability and could provide indications for personalized RT. The best predictive performance, achieved by the PET-only model, suggests that GTV segmentation might be less relevant for PET-based prognosis.
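
For time-to-event modelling with right censoring, a common training objective is the negative log Cox partial likelihood; here is a minimal PyTorch sketch of that loss (the paper's exact outcome head may differ).

```python
# Sketch: negative log Cox partial likelihood for a network that outputs a
# scalar log-risk per patient (DeepSurv-style; ties handled Breslow-style).
import torch

def cox_ph_loss(risk, time, event):
    # risk: (N,) predicted log-risk; time: (N,) follow-up times;
    # event: (N,) float, 1. = event observed, 0. = censored
    order = torch.argsort(time, descending=True)   # risk sets become prefixes
    risk, event = risk[order], event[order]
    log_cum = torch.logcumsumexp(risk, dim=0)      # log sum over each risk set
    return -((risk - log_cum) * event).sum() / event.sum().clamp(min=1)

risk = torch.randn(16, requires_grad=True)
time = torch.rand(16) * 5.0
event = (torch.rand(16) > 0.4).float()
loss = cox_ph_loss(risk, time, event)
```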

Automatic Glioma Grading Based on Two-Stage Networks by Integrating Pathology and MRI Images

  • Wang, Xiyue
  • Yang, Sen
  • Wu, Xiyi
2021 Book Section, cited 0 times
Glioma, with its high incidence, is one of the most common brain cancers. In the clinic, pathologists diagnose the type of glioma by observing whole-slide images (WSIs) at different magnifications, which is time-consuming, laborious, and experience-dependent. Automatic grading of gliomas based on WSIs can provide aided diagnosis for clinicians. This paper proposes two fully convolutional networks, used respectively on WSIs and MRI images, to achieve automatic glioma grading (astrocytoma (lower-grade A), oligodendroglioma (middle-grade O), and glioblastoma (higher-grade G)). The final classification result is the probability average of the two networks. In the clinic, and also in our multi-modality image representation, grades A and O are difficult to distinguish. This work proposes a two-stage training strategy that excludes the distraction of grade G and focuses on the classification of grades A and O. The experimental results show that the proposed model achieves high glioma classification performance, with a balanced accuracy of 0.889, Cohen's Kappa of 0.903, and F1-score of 0.943 on the validation set.

Additional Value of PET/CT-Based Radiomics to Metabolic Parameters in Diagnosing Lynch Syndrome and Predicting PD1 Expression in Endometrial Carcinoma

  • Wang, X.
  • Wu, K.
  • Li, X.
  • Jin, J.
  • Yu, Y.
  • Sun, H.
Front Oncol 2021 Journal Article, cited 0 times
Website
Purpose: We aim to compare the radiomic features and parameters on 2-deoxy-2-[fluorine-18] fluoro-D-glucose (18F-FDG) positron emission tomography/computed tomography (PET/CT) between patients with endometrial cancer with Lynch syndrome and those with endometrial cancer without Lynch syndrome. We also hope to explore the biologic significance of selected radiomic features. Materials and Methods: We conducted a retrospective cohort study, first using the 18F-FDG PET/CT images and clinical data from 100 patients with endometrial cancer to construct a training group (70 patients) and a test group (30 patients). The metabolic parameters and radiomic features of each tumor were compared between patients with and without Lynch syndrome. An independent cohort of 23 patients with solid tumors was used to evaluate the value of selected radiomic features in predicting the expression of the programmed cell death 1 (PD1), using 18F-FDG PET/CT images and RNA-seq genomic data. Results: There was no statistically significant difference in the standardized uptake values on PET between patients with endometrial cancer with Lynch syndrome and those with endometrial cancer without Lynch syndrome. However, there were significant differences between the 2 groups in metabolic tumor volume and total lesion glycolysis (p < 0.005). There was a difference in the radiomic feature of gray level co-occurrence matrix entropy (GLCMEntropy; p < 0.001) between the groups: the area under the curve was 0.94 in the training group (sensitivity, 82.86%; specificity, 97.14%) and 0.893 in the test group (sensitivity, 80%; specificity, 93.33%). In the independent cohort of 23 patients, differences in GLCMEntropy were related to the expression of PD1 (rs = 0.577; p < 0.001). Conclusions: In patients with endometrial cancer, higher metabolic tumor volumes, total lesion glycolysis values, and GLCMEntropy values on 18F-FDG PET/CT could suggest a higher risk for Lynch syndrome. The radiomic feature of GLCMEntropy for tumors is a potential predictor of PD1 expression.
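
As a pointer to how the highlighted GLCMEntropy feature is computed, here is a minimal scikit-image sketch; the quantisation level, distance, and angle are assumed choices, not the study's settings.

```python
# Sketch: grey-level co-occurrence matrix entropy of a quantised image slice.
import numpy as np
from skimage.feature import graycomatrix

img = (np.random.rand(64, 64) * 63).astype(np.uint8)  # quantised slice stand-in
glcm = graycomatrix(img, distances=[1], angles=[0], levels=64, normed=True)
p = glcm[:, :, 0, 0]                                  # normalised co-occurrence probs
glcm_entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))  # GLCM entropy feature
```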

Multiple medical image encryption algorithm based on scrambling of region of interest and diffusion of odd-even interleaved points

  • Wang, Xingyuan
  • Wang, Yafei
Expert Systems with Applications 2023 Journal Article, cited 0 times
Website
Due to the security requirements brought about by the rapid development of electronic medicine, this paper proposes an encryption algorithm for multiple medical images. The algorithm can encrypt any number of grayscale medical images of any size simultaneously, and also performs well when applied to color images. Considering the characteristics of medical images, we design an encryption algorithm based on the region of interest (ROI). Firstly, the regions of interest of the plaintext images are extracted and their coordinates obtained, and the hash value of the large image composed of all plaintext images is calculated. The coordinates and hash value are set as the secret key. This ties the whole encryption algorithm closely to the plaintext images, which greatly enhances resistance to chosen-plaintext attacks and improves the security of the algorithm. During encryption, chaotic sequences generated by the Logistic-Tent chaotic system (LTS) are used to perform two scrambling operations and one diffusion operation: pixel swapping based on the region of interest, Fisher-Yates scrambling, and our newly proposed diffusion algorithm based on odd-even interleaved points. Testing and performance analysis show that the algorithm achieves a good encryption effect, resists various attacks, and offers a higher security level and faster encryption speed.
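
Below is a sketch of two building blocks named above: a Logistic-Tent chaotic sequence driving a Fisher-Yates scramble. The LTS formulation shown is one common version of the combined map and may differ from the authors' exact system; the key values are illustrative.

```python
# Sketch: chaotic sequence generation (one common Logistic-Tent formulation)
# and a Fisher-Yates shuffle whose indices come from the chaotic sequence.
import numpy as np

def lts_sequence(x, r, n):
    seq = np.empty(n)
    for i in range(n):
        if x < 0.5:
            x = (r * x * (1 - x) + (4 - r) * x / 2) % 1
        else:
            x = (r * x * (1 - x) + (4 - r) * (1 - x) / 2) % 1
        seq[i] = x
    return seq

def chaotic_fisher_yates(pixels, key_x=0.37, r=3.99):
    flat = pixels.flatten()                  # copy; original image untouched
    chaos = lts_sequence(key_x, r, flat.size)
    for i in range(flat.size - 1, 0, -1):
        j = int(chaos[i] * (i + 1))          # chaotic index in [0, i]
        flat[i], flat[j] = flat[j], flat[i]
    return flat.reshape(pixels.shape)

scrambled = chaotic_fisher_yates(np.arange(64, dtype=np.uint8).reshape(8, 8))
```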

An Appraisal of Lung Nodules Automatic Classification Algorithms for CT Images

  • Wang, Xinqi
  • Mao, Keming
  • Wang, Lizhe
  • Yang, Peiyi
  • Lu, Duo
  • He, Ping
Sensors (Basel) 2019 Journal Article, cited 0 times
Website
Lung cancer is one of the most deadly diseases around the world, representing about 26% of all cancers in 2017. The five-year cure rate is only 18% despite great progress in recent diagnosis and treatment. Before diagnosis, lung nodule classification is a key step, especially since automatic classification can help clinicians by providing a valuable opinion. Modern computer vision and machine learning technologies allow very fast and reliable CT image classification, and this research area has attracted strong interest for its high efficiency and labor savings. This paper aims to provide a systematic review of the state of the art in automatic classification of lung nodules. It covers published works selected from the Web of Science, IEEEXplore, and DBLP databases up to June 2018. Each paper is critically reviewed based on objective, methodology, research dataset, and performance evaluation. Mainstream algorithms are surveyed and generic structures are summarized. Our work reveals that lung nodule classification based on deep learning has become dominant due to its excellent performance. We conclude that consistency of research objectives and integration of data deserve more attention. Moreover, collaborative work among developers, clinicians, and other parties should be strengthened.

A deep learning approach to remove contrast from contrast-enhanced CT for proton dose calculation

  • Wang, X.
  • Hao, Y.
  • Duan, Y.
  • Yang, D.
J Appl Clin Med Phys 2024 Journal Article, cited 0 times
Website
PURPOSE: Non-Contrast Enhanced CT (NCECT) is normally required for proton dose calculation, while Contrast Enhanced CT (CECT) is often scanned for tumor and organ delineation. Possible tissue motion between these two CTs raises dosimetry uncertainties, especially for moving tumors in the thorax and abdomen. Here we report a deep-learning approach to generate NCECT directly from CECT. This method could be useful to avoid the NCECT scan, reduce CT simulation time and imaging dose, and decrease the uncertainties caused by tissue motion between otherwise two different CT scans. METHODS: A deep network was developed to convert CECT to NCECT. The network receives a 3D image patch from the CECT images as input and generates a corresponding contrast-removed NCECT image patch. Abdominal CECT and NCECT image pairs of 20 patients were deformably registered, and 8000 image patch pairs extracted from the registered image pairs were utilized to train and test the model. CTs of clinical proton patients and their treatment plans were employed to evaluate the dosimetric impact of using the generated NCECT for proton dose calculation. RESULTS: Our approach achieved a Cosine Similarity score of 0.988 and an MSE value of 0.002. A quantitative comparison of clinical proton dose plans computed on the CECT and the generated NCECT for five proton patients revealed significant dose differences at the distal ends of the beam paths. V100% of PTV and GTV changed by 3.5% and 5.5%, respectively. The mean HU difference for all five patients between the generated and the scanned NCECTs was approximately 4.72, whereas the difference between the CECT and the scanned NCECT was approximately 64.52, indicating an approximately 93% reduction in mean HU difference. CONCLUSIONS: A deep learning approach was developed to generate NCECTs from CECTs. This approach could be useful for proton dose calculation to reduce uncertainties caused by tissue motion between the CECT and NCECT.

A prognostic analysis method for non-small cell lung cancer based on the computed tomography radiomics

  • Wang, Xu
  • Duan, Huihong
  • Li, Xiaobing
  • Ye, Xiaodan
  • Huang, Gang
  • Nie, Shengdong
Phys Med Biol 2020 Journal Article, cited 0 times
Website
In order to assist doctors in arranging postoperative treatments and re-examinations for non-small cell lung cancer (NSCLC) patients, this study explores a prognostic analysis method for NSCLC based on computed tomography (CT) radiomics. The data of 173 NSCLC patients were collected retrospectively, and the clinically meaningful 3-year survival was used as the predictive cut-off for the patient's prognostic survival time range. Firstly, lung tumors were segmented and radiomics features were extracted. Secondly, a feature weighting algorithm was used to screen and optimize the extracted feature data. Then, the selected features, combined with the prognostic survival of patients, were used to train machine learning classification models. Finally, a prognostic survival prediction model and radiomics prognostic factors were obtained to predict the prognostic survival time range of NSCLC patients. The classification accuracy under cross-validation was up to 88.7%. When verified on an independent data set, the model also yielded a high prediction accuracy of up to 79.6%. Inverse difference moment, lobulation sign and angular second moment were NSCLC prognostic factors based on radiomics. This study showed that CT radiomics features can effectively assist doctors in making more accurate prognostic survival predictions for NSCLC patients, helping them optimize treatment and re-examination so as to extend patients' survival time.

Data Analysis of the Lung Imaging Database Consortium and Image Database Resource Initiative

  • Wang, Weisheng
  • Luo, Jiawei
  • Yang, Xuedong
  • Lin, Hongli
Academic Radiology 2015 Journal Article, cited 5 times
Website
RATIONALE AND OBJECTIVES: The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) is the largest publicly available computed tomography (CT) image reference data set of lung nodules. In this article, a comprehensive data analysis of the data set and a uniform data model are presented with the purpose of helping potential researchers gain an in-depth understanding of, and make efficient use of, the data set in their lung cancer-related investigations. MATERIALS AND METHODS: A uniform data model was designed for representation and organization of the various types of information contained in different source data files. A software tool was developed for processing and analysis of the database, which 1) automatically aligns and graphically displays the nodule outlines marked manually by radiologists onto the corresponding CT images; 2) extracts diagnostic nodule characteristics annotated by radiologists; 3) calculates a variety of nodule image features based on the outlines of nodules, including diameter, volume, degree of roundness, and so forth; 4) integrates all the extracted nodule information into the uniform data model and stores it in a common and easy-to-access data format; and 5) analyzes and summarizes various feature distributions of nodules in several different categories. Using this data processing and analysis tool, all 1018 CT scans from the data set were processed and analyzed for their statistical distribution. RESULTS: The information contained in different source data files with different formats was extracted and integrated into a new and uniform data model. Based on the new data model, the statistical distributions of nodules in terms of geometric features and diagnostic characteristics were summarized. In the LIDC/IDRI data set, 2655 nodules ≥3 mm, 5875 nodules <3 mm, and 7411 non-nodules are identified, respectively. Among the 2655 nodules, 1) 775, 488, 481, and 911 were marked by one, two, three, or four radiologists, respectively; 2) most nodules ≥3 mm (85.7%) have a diameter <10.0 mm, with a mean value of 6.72 mm; and 3) 10.87%, 31.4%, 38.8%, 16.4%, and 2.6% of nodules were assessed with a malignancy score of 1, 2, 3, 4, and 5, respectively. CONCLUSIONS: This study demonstrates the usefulness of the proposed software tool for giving potential users an in-depth understanding of the LIDC/IDRI data set, and is therefore likely to benefit their future investigations. The analysis results also demonstrate the diversity of nodule characteristic distributions, making them useful as a reference resource for assessing the performance of new and existing nodule detection and/or segmentation schemes.

Deep Learning for Automatic Identification of Nodule Morphology Features and Prediction of Lung Cancer

  • Wang, Weilun
  • Chakraborty, Goutam
2019 Conference Paper, cited 0 times
Website
Lung cancer is the most common and deadly cancer in the world, and correct prognosis affects the survival rate of patients. The most important finding for early diagnosis is nodule images in CT scans. Diagnosis performed in hospital is divided into 2 steps: (1) detect nodules in the CT scan; (2) evaluate the morphological features of the nodules and give the diagnostic results. In this work, we propose an automatic lung cancer prognosis system with 3 steps: (1) In the first step, we trained two models, one based on a convolutional neural network (CNN) and the other on a recurrent neural network (RNN), to detect nodules in CT scans. (2) In the second step, convolutional neural networks (CNNs) are trained to evaluate the values of nine morphological features of nodules. (3) In the final step, a logistic regression between feature values and cancer probability is trained using an XGBoost model. In addition, we analyze which features are important for cancer prediction. Overall, we achieved 82.39% accuracy for lung cancer prediction. By logistic regression analysis, we find that the diameter, spiculation and lobulation features are useful for reducing false positives.
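
A minimal sketch of the final step, fitting the feature-to-probability mapping with XGBoost; the feature names and hyperparameters below are illustrative, not the paper's.

```python
# Sketch: cancer-probability model over the nine predicted morphology features.
import numpy as np
from xgboost import XGBClassifier

X = np.random.rand(200, 9)        # diameter, spiculation, lobulation, ... (stand-ins)
y = np.random.randint(0, 2, 200)  # cancer / no cancer labels

clf = XGBClassifier(n_estimators=100, max_depth=3, learning_rate=0.1)
clf.fit(X, y)
cancer_prob = clf.predict_proba(X)[:, 1]
importance = clf.feature_importances_  # which morphology features matter most
```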

Evaluation of Malignancy of Lung Nodules from CT Image Using Recurrent Neural Network

  • Wang, Weilun
  • Chakraborty, Goutam
2019 Journal Article, cited 0 times
The efficacy of treatment of cancer depends largely on early detection and correct prognosis. It is more important in case of pulmonary cancer, where the detection is based on identifying malignant nodules in the Computed Tomography (CT) scans of the lung. There are two problems for making correct decision about malignancy: (1) At early stage, the nodule size is small (length 5 to 10 mm). As the CT scan covers a volume of 30cm.×30cm.×40cm., manually searching for nodules takes a very long time (approximately 10 minutes for an expert). (2) There are benign nodules and nodules due to other ailments like bronchitis, pneumonia, tuberculosis. To identify whether the nodule is carcinogenic needs long experience and expertise.In recent years, several works have been reported to classify lung cancer using not only the CT scan image, but also other features causing or related to cancer. In all recent works, for CT image analysis, 3-D Convolution Neural Network (CNN) is used to identify cancerous nodules. In spite of various preprocessing used to improve training efficiency, 3-D CNN is extremely slow. The aim of this work is to improve training efficiency by proposing a new deep NN model. It consists of a hierarchical (sliced) structure of recurrent neural network (RNN), where different layers of the hierarchy can be trained simultaneously, decreasing training time. In addition, selective attention (alignment) during training improves convergence rate. The result shows a 3-fold increase in training efficiency, compared to recent state-of-the-art work using 3-D CNN.

Correlation between CT based radiomics features and gene expression data in non-small cell lung cancer

  • Wang, Ting
  • Gong, Jing
  • Duan, Hui-Hong
  • Wang, Li-Jia
  • Ye, Xiao-Dan
  • Nie, Sheng-Dong
Journal of X-ray science and technology 2019 Journal Article, cited 0 times
Website

A multi-model based on radiogenomics and deep learning techniques associated with histological grade and survival in clear cell renal cell carcinoma

  • Wang, S.
  • Zhu, C.
  • Jin, Y.
  • Yu, H.
  • Wu, L.
  • Zhang, A.
  • Wang, B.
  • Zhai, J.
Insights Imaging 2023 Journal Article, cited 0 times
Website
OBJECTIVES: This study aims to evaluate the efficacy of a multi-model incorporating radiomics, deep learning, and transcriptomics features for predicting pathological grade and survival in patients with clear cell renal cell carcinoma (ccRCC). METHODS: In this study, data were collected from 177 ccRCC patients, including radiomics features, deep learning (DL) features, and RNA sequencing data. Diagnostic models were then created using these data through least absolute shrinkage and selection operator (LASSO) analysis. Additionally, a multi-model was developed by combining radiomics, DL, and transcriptomics features. The prognostic performance of the multi-model was evaluated based on progression-free survival (PFS) and overall survival (OS) outcomes, assessed using Harrell's concordance index (C-index). Furthermore, we conducted an analysis to investigate the relationship between the multi-model and immune cell infiltration. RESULTS: The multi-model demonstrated favorable performance in discriminating pathological grade, with area under the ROC curve (AUC) values of 0.946 (95% CI: 0.912-0.980) and 0.864 (95% CI: 0.734-0.994) in the training and testing cohorts, respectively. Additionally, it exhibited statistically significant prognostic performance for predicting PFS and OS. Furthermore, the high-grade group displayed a higher abundance of immune cells compared to the low-grade group. CONCLUSIONS: The multi-model incorporating radiomics, DL, and transcriptomics features demonstrated promising performance in predicting pathological grade and prognosis in patients with ccRCC. CRITICAL RELEVANCE STATEMENT: We developed a multi-model to predict the grade and survival in clear cell renal cell carcinoma and explored the molecular biological significance of the multi-model of different histological grades. KEY POINTS: 1. The multi-model achieved an AUC of 0.864 for assessing pathological grade. 2. The multi-model exhibited an association with survival in ccRCC patients. 3. The high-grade group demonstrated a greater abundance of immune cells.
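
A minimal sketch of LASSO-style selection over a pooled feature matrix, using L1-penalised logistic regression from scikit-learn; the column counts and the binary grade label are assumptions for illustration.

```python
# Sketch: L1-penalised selection of informative columns from concatenated
# radiomics / deep-learning / transcriptomics features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X = np.random.rand(177, 300)          # pooled multi-omics feature matrix (stand-in)
y = np.random.randint(0, 2, 177)      # low- vs. high-grade label

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso.fit(StandardScaler().fit_transform(X), y)
selected = np.flatnonzero(lasso.coef_[0])   # features with non-zero weights
```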

Radiomics Analysis Based on Magnetic Resonance Imaging for Preoperative Overall Survival Prediction in Isocitrate Dehydrogenase Wild-Type Glioblastoma

  • Wang, S.
  • Xiao, F.
  • Sun, W.
  • Yang, C.
  • Ma, C.
  • Huang, Y.
  • Xu, D.
  • Li, L.
  • Chen, J.
  • Li, H.
  • Xu, H.
Front Neurosci 2021 Journal Article, cited 1 times
Website
Purpose: This study aimed to develop a radiomics signature for the preoperative prognosis prediction of isocitrate dehydrogenase (IDH)-wild-type glioblastoma (GBM) patients and to provide personalized assistance in the clinical decision-making for different patients. Materials and Methods: A total of 142 IDH-wild-type GBM patients classified using the new classification criteria of WHO 2021 from two centers were included in the study and randomly divided into a training set and a test set. Firstly, their clinical characteristics were screened using univariate Cox regression. Then, the radiomics features were extracted from the tumor and peritumoral edema areas on their contrast-enhanced T1-weighted image (CE-T1WI), T2-weighted image (T2WI), and T2-weighted fluid-attenuated inversion recovery (T2-FLAIR) magnetic resonance imaging (MRI) images. Subsequently, inter- and intra-class correlation coefficient (ICC) analysis, Spearman's correlation analysis, univariate Cox, and the least absolute shrinkage and selection operator (LASSO) Cox regression were used step by step for feature selection and the construction of a radiomics signature. The combined model was established by integrating the selected clinical factors. Kaplan-Meier analysis was performed for the validation of the discrimination ability of the model, and the C-index was used to evaluate consistency in the prediction. Finally, a Radiomics + Clinical nomogram was generated for personalized prognosis analysis and then validated using the calibration curve. Results: Analysis of the clinical characteristics resulted in the screening of four risk factors. The combination of ICC, Spearman's correlation, and univariate and LASSO Cox resulted in the selection of eight radiomics features, which made up the radiomics signature. Both the radiomics and combined models can significantly stratify high- and low-risk patients (p < 0.001 and p < 0.05 for the training and test sets, respectively) and obtained good prediction consistency (C-index = 0.74-0.86). The calibration plots exhibited good agreement in both 1- and 2-year survival between the prediction of the model and the actual observation. Conclusion: Radiomics is an independent preoperative non-invasive prognostic tool for patients who were newly classified as having IDH-wild-type GBM. The constructed nomogram, which combined radiomics features with clinical factors, can predict the overall survival (OS) of IDH-wild-type GBM patients and could be a new supplement to treatment guidelines.

Integrating clinical access limitations into iPDT treatment planning with PDT-SPACE

  • Wang, Shuran
  • Saeidi, Tina
  • Lilge, Lothar
  • Betz, Vaughn
Biomedical Optics Express 2023 Journal Article, cited 0 times
PDT-SPACE is an open-source software tool that automates interstitial photodynamic therapy treatment planning by providing patient-specific placement of light sources to destroy a tumor while minimizing healthy tissue damage. This work extends PDT-SPACE in two ways. The first enhancement allows specification of clinical access constraints on light source insertion to avoid penetrating critical structures and to minimize surgical complexity. Constraining fiber access to a single burr hole of adequate size increases healthy tissue damage by 10%. The second enhancement generates an initial placement of light sources as a starting point for refinement, rather than requiring entry of a starting solution by the clinician. This feature improves productivity and also leads to solutions with 4.5% less healthy tissue damage. The two features are used in concert to perform simulations of various surgery options of virtual glioblastoma multiforme brain tumors.

AI-based MRI auto-segmentation of brain tumor in rodents, a multicenter study

  • Wang, Shuncong
  • Pang, Xin
  • de Keyzer, Frederik
  • Feng, Yuanbo
  • Swinnen, Johan V.
  • Yu, Jie
  • Ni, Yicheng
2023 Journal Article, cited 0 times
Website
Automatic segmentation of rodent brain tumors on magnetic resonance imaging (MRI) may facilitate biomedical research. The current study aims to demonstrate the feasibility of automatic segmentation by artificial intelligence (AI) and the practicability of AI-assisted segmentation. MRI images, including T2WI, T1WI and CE-T1WI, of brain tumors from 57 WAG/Rij rats at KU Leuven and 46 mice from The Cancer Imaging Archive (TCIA) were collected. A 3D U-Net architecture was adopted for segmentation of the tumor-bearing brain and the brain tumor. After training, these models were tested on both datasets after Gaussian noise addition. The reduction of inter-observer disparity by AI-assisted segmentation was also evaluated. The AI model segmented the tumor-bearing brain well for both the Leuven and TCIA datasets, with Dice similarity coefficients (DSCs) of 0.87 and 0.85, respectively. After noise addition, performance remained unchanged when the signal-to-noise ratio (SNR) was higher than two or eight, respectively. For the segmentation of tumor lesions, the AI-based model yielded DSCs of 0.70 and 0.61 for the Leuven and TCIA datasets, respectively. Similarly, performance was uncompromised when the SNR was over two or eight, respectively. AI-assisted segmentation significantly reduced inter-observer disparities and segmentation time in both rats and mice. Both AI models, for segmenting the brain or tumor lesions, improved inter-observer agreement and therefore contribute to the standardization of subsequent biomedical studies.
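
For reference, the Dice similarity coefficient used to score the segmentations can be computed as in this small NumPy sketch.

```python
# Sketch: Dice similarity coefficient between a predicted and a reference mask.
import numpy as np

def dice(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

mask_a = np.random.rand(64, 64, 64) > 0.5
mask_b = np.random.rand(64, 64, 64) > 0.5
print(dice(mask_a, mask_b))   # around 0.5 for independent random masks
```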

MC-Net: multi-scale Swin transformer and complementary self-attention fusion network for pancreas segmentation

  • Wang, Shunan
  • Fan, Jiancong
  • Batista, Paulo
  • Bilas Pachori, Ram
2023 Conference Paper, cited 0 times
Website
The pancreas is located deep in the abdominal cavity, and its structure and adjacent relationships are complex, making accurate treatment very difficult. To address the automatic segmentation of pancreatic tissue in CT images, we apply the multi-scale idea of convolutional neural networks to the Transformer and propose a Multi-Scale Swin Transformer and Complementary Self-Attention Fusion Network for pancreas segmentation. Specifically, the multi-scale Swin Transformer module constructs different receptive fields through different window sizes to obtain multi-scale information, and the different features of the encoder and decoder are effectively fused through a complementary self-attention fusion module. In experimental evaluations on the NIH-TCIA dataset, our method improves Dice, sensitivity, and IoU by 3.9%, 6.4%, and 5.3%, respectively, over the baseline, outperforming current state-of-the-art medical image segmentation methods.

Automatic Brain Tumour Segmentation and Biophysics-Guided Survival Prediction

  • Wang, Shuo
  • Dai, Chengliang
  • Mo, Yuanhan
  • Angelini, Elsa
  • Guo, Yike
  • Bai, Wenjia
2020 Book Section, cited 0 times
Gliomas are the most common malignant brain tumours with intrinsic heterogeneity. Accurate segmentation of gliomas and their sub-regions on multi-parametric magnetic resonance images (mpMRI) is of great clinical importance, which defines tumour size, shape and appearance and provides abundant information for preoperative diagnosis, treatment planning and survival prediction. Recent developments on deep learning have significantly improved the performance of automated medical image segmentation. In this paper, we compare several state-of-the-art convolutional neural network models for brain tumour image segmentation. Based on the ensembled segmentation, we present a biophysics-guided prognostic model for patient overall survival prediction which outperforms a data-driven radiomics approach. Our method won the second place of the MICCAI 2019 BraTS Challenge for the overall survival prediction.

Multi-Modality Automatic Lung Tumor Segmentation Method Using Deep Learning and Radiomics

  • Wang, Siqiu
Radiation Oncology 2022 Thesis, cited 0 times
Website
Delineation of the tumor volume is the initial and fundamental step in the radiotherapy planning process. The current clinical practice of manual delineation is time-consuming and suffers from observer variability. This work seeks to develop an effective automatic framework to produce clinically usable lung tumor segmentations. First, to facilitate the development and validation of our methodology, an expansive database of planning CTs, diagnostic PETs, and manual tumor segmentations was curated, and an image registration and preprocessing pipeline was established. Then a deep learning neural network was constructed and optimized to utilize dual-modality PET and CT images for lung tumor segmentation. The feasibility of incorporating radiomics and other mechanisms, such as a tumor volume-based stratification scheme for training/validation/testing, was investigated to improve segmentation performance. The proposed methodology was evaluated both quantitatively, with similarity metrics, and clinically, with physician reviews. In addition, external validation with an independent database was conducted. Our work addressed some of the major limitations that restricted the clinical applicability of existing approaches and produced automatic segmentations that were consistent with the manually contoured ground truth and highly clinically acceptable according to both the quantitative and clinical evaluations. Both novel approaches, implementing a tumor volume-based training/validation/testing stratification strategy and incorporating voxel-wise radiomics feature images, were shown to improve segmentation performance. The results showed that the proposed method is effective and robust, producing automatic lung tumor segmentations that could potentially improve both the quality and consistency of manual tumor delineation.

Direct three-dimensional segmentation of prostate glands with nnU-Net

  • Wang, R.
  • Chow, S. S. L.
  • Serafin, R. B.
  • Xie, W.
  • Han, Q.
  • Baraznenok, E.
  • Lan, L.
  • Bishop, K. W.
  • Liu, J. T. C.
2024 Journal Article, cited 0 times
Website
SIGNIFICANCE: In recent years, we and others have developed non-destructive methods to obtain three-dimensional (3D) pathology datasets of clinical biopsies and surgical specimens. For prostate cancer risk stratification (prognostication), standard-of-care Gleason grading is based on examining the morphology of prostate glands in thin 2D sections. This motivates us to perform 3D segmentation of prostate glands in our 3D pathology datasets for the purposes of computational analysis of 3D glandular features that could offer improved prognostic performance. AIM: To facilitate prostate cancer risk assessment, we developed a computationally efficient and accurate deep learning model for 3D gland segmentation based on open-top light-sheet microscopy datasets of human prostate biopsies stained with a fluorescent analog of hematoxylin and eosin (H&E). APPROACH: For 3D gland segmentation based on our H&E-analog 3D pathology datasets, we previously developed a hybrid deep learning and computer vision-based pipeline, called image translation-assisted segmentation in 3D (ITAS3D), which required a complex two-stage procedure and tedious manual optimization of parameters. To simplify this procedure, we use the 3D gland-segmentation masks previously generated by ITAS3D as training datasets for a direct end-to-end deep learning-based segmentation model, nnU-Net. The inputs to this model are 3D pathology datasets of prostate biopsies rapidly stained with an inexpensive fluorescent analog of H&E and the outputs are 3D semantic segmentation masks of the gland epithelium, gland lumen, and surrounding stromal compartments within the tissue. RESULTS: nnU-Net demonstrates remarkable accuracy in 3D gland segmentations even with limited training data. Moreover, compared with the previous ITAS3D pipeline, nnU-Net operation is simpler and faster, and it can maintain good accuracy even with lower-resolution inputs. CONCLUSIONS: Our trained DL-based 3D segmentation model will facilitate future studies to demonstrate the value of computational 3D pathology for guiding critical treatment decisions for patients with prostate cancer.

S2FLNet: Hepatic steatosis detection network with body shape

  • Wang, Q.
  • Xue, W.
  • Zhang, X.
  • Jin, F.
  • Hahn, J.
Comput Biol Med 2021 Journal Article, cited 0 times
Website
Fat accumulation in the liver cells can increase the risk of cardiac complications and cardiovascular disease mortality. Therefore, a way to quickly and accurately detect hepatic steatosis is critically important. However, current methods, e.g., liver biopsy, magnetic resonance imaging, and computerized tomography scan, are subject to high cost and/or medical complications. In this paper, we propose a deep neural network to estimate the degree of hepatic steatosis (low, mid, high) using only body shapes. The proposed network adopts dilated residual network blocks to extract refined features of input body shape maps by expanding the receptive field. Furthermore, to classify the degree of steatosis more accurately, we create a hybrid of the center loss and cross entropy loss to compact intra-class variations and separate inter-class differences. We performed extensive tests on the public medical dataset with various network parameters. Our experimental results show that the proposed network achieves a total accuracy of over 82% and offers an accurate and accessible assessment for hepatic steatosis.
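As a rough illustration of the hybrid objective described in this abstract, a cross-entropy term that separates classes plus a center loss that compacts intra-class variation, consider the PyTorch sketch below. The class count, feature dimension, and weighting factor are placeholders of ours, not the paper's values.

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Penalizes the squared distance of each embedding to its learnable class center."""
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, feats, labels):
        return ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()

ce_loss = nn.CrossEntropyLoss()
center_loss = CenterLoss(num_classes=3, feat_dim=128)  # e.g. low/mid/high steatosis

def hybrid_loss(logits, feats, labels, lam=0.01):
    # cross-entropy separates inter-class differences,
    # the weighted center term compacts intra-class variation
    return ce_loss(logits, labels) + lam * center_loss(feats, labels)
```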

Pixel-wise body composition prediction with a multi-task conditional generative adversarial network

  • Wang, Q.
  • Xue, W.
  • Zhang, X.
  • Jin, F.
  • Hahn, J.
J Biomed Inform 2021 Journal Article, cited 0 times
Website
The analysis of human body composition plays a critical role in health management and disease prevention. However, current medical technologies to accurately assess body composition such as dual energy X-ray absorptiometry, computed tomography, and magnetic resonance imaging have the disadvantages of prohibitive cost or ionizing radiation. Recently, body shape based techniques using body scanners and depth cameras, have brought new opportunities for improving body composition estimation by intelligently analyzing body shape descriptors. In this paper, we present a multi-task deep neural network method utilizing a conditional generative adversarial network to predict the pixel level body composition using only 3D body surfaces. The proposed method can predict 2D subcutaneous and visceral fat maps in a single network with a high accuracy. We further introduce an interpreted patch discriminator which optimizes the texture accuracy of the 2D fat maps. The validity and effectiveness of our new method are demonstrated experimentally on TCIA and LiTS datasets. Our proposed approach outperforms competitive methods by at least 41.3% for the whole body fat percentage, 33.1% for the subcutaneous and visceral fat percentage, and 4.1% for the regional fat predictions.

Simultaneous encryption and compression of medical images based on optimized tensor compressed sensing with 3D Lorenz

  • Wang, Qingzhu
  • Chen, Xiaoming
  • Wei, Mengying
  • Miao, Zhuang
BioMedical Engineering OnLine 2016 Journal Article, cited 1 times
Website

Simulated MRI Artifacts: Testing Machine Learning Failure Modes

  • Wang, Nicholas C.
  • Noll, Douglas C.
  • Srinivasan, Ashok
  • Gagnon-Bartsch, Johann
  • Kim, Michelle M.
  • Rao, Arvind
2022 Journal Article, cited 0 times
Website
Objective. Seven types of MRI artifacts, including acquisition and preprocessing errors, were simulated to test a machine learning brain tumor segmentation model for potential failure modes. Introduction. Real-world medical deployments of machine learning algorithms are less common than the number of medical research papers using machine learning. Part of the gap between the performance of models in research and deployment comes from a lack of hard test cases in the data used to train a model. Methods. These failure modes were simulated for a pretrained brain tumor segmentation model that utilizes standard MRI and used to evaluate the performance of the model under duress. These simulated MRI artifacts consisted of motion, susceptibility induced signal loss, aliasing, field inhomogeneity, sequence mislabeling, sequence misalignment, and skull stripping failures. Results. The artifact with the largest effect was the simplest, sequence mislabeling, though motion, field inhomogeneity, and sequence misalignment also caused significant performance decreases. The model was most susceptible to artifacts affecting the FLAIR (fluid attenuation inversion recovery) sequence. Conclusion. Overall, these simulated artifacts could be used to test other brain MRI models, but this approach could be used across medical imaging applications.
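Of the artifacts listed, motion is commonly emulated by corrupting k-space phase. Below is a minimal NumPy sketch of one such simulation, corrupting random k-space lines with the linear phase ramps produced by rigid in-plane translations; it is our own illustrative stand-in, not the authors' code, and the parameters are arbitrary.

```python
import numpy as np

def simulate_motion_artifact(img, n_lines=20, max_shift=3.0, seed=0):
    """Corrupt random k-space lines with the linear phase ramps that rigid
    in-plane translations produce, mimicking inter-shot patient motion."""
    rng = np.random.default_rng(seed)
    H, W = img.shape
    k = np.fft.fftshift(np.fft.fft2(img))
    ky = np.fft.fftshift(np.fft.fftfreq(H))  # cycles/pixel per row
    kx = np.fft.fftshift(np.fft.fftfreq(W))  # cycles/pixel per column
    for row in rng.choice(H, size=n_lines, replace=False):
        dy, dx = rng.uniform(-max_shift, max_shift, size=2)  # shift in pixels
        k[row, :] *= np.exp(-2j * np.pi * (ky[row] * dy + kx * dx))
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))
```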

Proteogenomic and metabolomic characterization of human glioblastoma

  • Wang, Liang-Bo
  • Karpova, Alla
  • Gritsenko, Marina A
  • Kyle, Jennifer E
  • Cao, Song
  • Li, Yize
  • Rykunov, Dmitry
  • Colaprico, Antonio
  • Rothstein, Joseph H
  • Hong, Runyu
Cancer Cell 2021 Journal Article, cited 0 times
Website

Weighted Schatten p-norm minimization for impulse noise removal with TV regularization and its application to medical images

  • Wang, Li
  • Xiao, Di
  • Hou, Wen S.
  • Wu, Xiao Y.
  • Chen, Lin
Biomedical Signal Processing and Control 2021 Journal Article, cited 1 times
Website
Impulse noise is common in medical images. In this paper, we model the impulse-noise denoising problem as Weighted Schatten p-norm minimization (WSNM) within a Robust Principal Component Analysis (RPCA) framework. Anisotropic Total Variation (TV) regularization is incorporated to preserve the edge information that is important for clinical detection and diagnosis. The alternating direction method of multipliers (ADMM) is adopted to solve the resulting nonconvex optimization problem. We tested the performance on both standard natural images and medical images with additive impulse noise at different levels. The experimental results show the method is competitive with state-of-the-art denoising algorithms: it restores images with better preservation of structural information and outperforms conventional techniques in visual appearance, while quantitative metrics (PSNR, SSIM and FSIM) further demonstrate its superiority for impulse noise removal.
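For intuition, the building block of WSNM-style low-rank recovery is a weighted shrinkage of singular values. The NumPy sketch below shows the closed-form proximal step for the special case p = 1 (the weighted nuclear norm, exact for non-descending weights); the general p case requires an inner iterative solver and is omitted here.

```python
import numpy as np

def weighted_svt(Y, weights):
    """Weighted singular value thresholding: prox of the weighted nuclear
    norm (Schatten p-norm with p = 1). `weights` should be non-descending
    for the closed-form solution to be exact."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)  # shrink each singular value by its weight
    return (U * s_shrunk) @ Vt
```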

A multi-objective radiomics model for the prediction of locoregional recurrence in head and neck squamous cell cancer

  • Wang, K.
  • Zhou, Z.
  • Wang, R.
  • Chen, L.
  • Zhang, Q.
  • Sher, D.
  • Wang, J.
Med Phys 2020 Journal Article, cited 0 times
Website
PURPOSE: Locoregional recurrence (LRR) is the predominant pattern of relapse after nonsurgical treatment of head and neck squamous cell cancer (HNSCC). Therefore, accurately identifying patients with HNSCC who are at high risk for LRR is important for optimizing personalized treatment plans. In this work, we developed a multi-classifier, multi-objective, and multi-modality (mCOM) radiomics-based outcome prediction model for HNSCC LRR. METHODS: In mCOM, we considered sensitivity and specificity simultaneously as the objectives to guide the model optimization. We used multiple classifiers, comprising support vector machine (SVM), discriminant analysis (DA), and logistic regression (LR), to build the model. We used features from multiple modalities as model inputs, comprising clinical parameters and radiomics feature extracted from X-ray computed tomography (CT) images and positron emission tomography (PET) images. We proposed a multi-task multi-objective immune algorithm (mTO) to train the mCOM model and used an evidential reasoning (ER)-based method to fuse the output probabilities from different classifiers and modalities in mCOM. We evaluated the effectiveness of the developed method using a retrospective public pretreatment HNSCC dataset downloaded from The Cancer Imaging Archive (TCIA). The input for our model included radiomics features extracted from pretreatment PET and CT using an open source radiomics software and clinical characteristics such as sex, age, stage, primary disease site, human papillomavirus (HPV) status, and treatment paradigm. In our experiment, 190 patients from two institutions were used for model training while the remaining 87 patients from the other two institutions were used for testing. RESULTS: When we built the predictive model using features from single modality, the multi-classifier (MC) models achieved better performance over the models built with the three base-classifiers individually. When we built the model using features from multiple modalities, the proposed method achieved area under the receiver operating characteristic curve (AUC) values of 0.76 for the radiomics-only model, and 0.77 for the model built with radiomics and clinical features, which is significantly higher than the AUCs of models built with single-modality features. The statistical analysis was performed using MATLAB software. CONCLUSIONS: Comparisons with other methods demonstrated the efficiency of the mTO algorithm and the superior performance of the proposed mCOM model for predicting HNSCC LRR.

Breast cancer cell-derived microRNA-155 suppresses tumor progression via enhancing immune cell recruitment and anti-tumor function

  • Wang, Junfeng
  • Wang, Quanyi
  • Guan, Yinan
  • Sun, Yulu
  • Wang, Xiaozhi
  • Lively, Kaylie
  • Wang, Yuzhen
  • Luo, Ming
  • Kim, Julian A
  • Murphy, E Angela
2022 Journal Article, cited 0 times
Website

Deep learning based image reconstruction algorithm for limited-angle translational computed tomography

  • Wang, Jiaxi
  • Liang, Jun
  • Cheng, Jingye
  • Guo, Yumeng
  • Zeng, Li
PLoS One 2020 Journal Article, cited 0 times
Website

A Novel Brain Tumor Segmentation Approach Based on Deep Convolutional Neural Network and Level Set

  • Wang, Jingjing
  • Gao, Jun
  • Ren, Jinwen
  • Zhao, Yanhua
  • Zhang, Liren
2020 Conference Paper, cited 0 times
In recent years, deep convolutional neural networks (DCNNs) have achieved great success in brain tumor segmentation, but DCNN segmentation results contain artifacts in the border region. To address this problem, we propose a hybrid model combining DCNNs with traditional segmentation methods. First, we use U-Net and ResU-Net networks for coarse segmentation; to deepen the network and improve its performance, we add residual modules to U-Net to compose the ResU-Net. Second, we use a level set for fine segmentation of the tumor boundary, taking the intersection of the coarse segmentation outputs of U-Net and ResU-Net as input to the level set module. The intersection provides better initialization for the level set algorithm and accelerates the evolution of the level set functions. The proposed approach is validated on the BraTS 2018 challenge dataset, using Dice, specificity, sensitivity, and Hausdorff distance (HD) as evaluation metrics. Comparisons with U-Net, ResU-Net and other methods indicate that our approach outperforms several other deep networks.
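A minimal sketch of this coarse-to-fine idea follows, using scikit-image's morphological geodesic active contour as a stand-in for the authors' level-set module; the function and parameter choices here are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

def refine_boundary(image, mask_unet, mask_resunet, n_iter=50):
    """Coarse-to-fine refinement: the consensus (intersection) of two coarse
    CNN masks initializes a level set that then evolves on image gradients."""
    init = np.logical_and(mask_unet, mask_resunet)  # consensus initialization
    gimg = inverse_gaussian_gradient(image)         # edge-stopping function
    return morphological_geodesic_active_contour(gimg, n_iter,
                                                 init_level_set=init)
```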

A diagnostic classification of lung nodules using multiple-scale residual network

  • Wang, H.
  • Zhu, H.
  • Ding, L.
  • Yang, K.
2023 Journal Article, cited 0 times
Website
Computed tomography (CT) scans have been shown to be an effective way of improving diagnostic efficacy and reducing lung cancer mortality. However, distinguishing benign from malignant nodules in CT imaging remains challenging. This study aims to develop a multiple-scale residual network (MResNet) to automatically and precisely extract general features of lung nodules, and to classify lung nodules based on deep learning. The MResNet aggregates the advantages of residual units and the pyramid pooling module (PPM) to learn key features and extract general features for lung nodule classification. Specifically, the MResNet uses ResNet as a backbone network to learn contextual information and discriminative feature representations. Meanwhile, the PPM is used to fuse features at four different scales, from coarse to fine-grained, to obtain more general lung features of the CT image. MResNet had an accuracy of 99.12%, a sensitivity of 98.64%, a specificity of 97.87%, a positive predictive value (PPV) of 99.92%, and a negative predictive value (NPV) of 97.87% in the training set. Additionally, its area under the receiver operating characteristic curve (AUC) was 0.9998 (0.99976-0.99991). MResNet's accuracy, sensitivity, specificity, PPV, NPV, and AUC in the testing set were 85.23%, 92.79%, 72.89%, 84.56%, 86.34%, and 0.9275 (0.91662-0.93833), respectively. The developed MResNet performed exceptionally well in estimating the malignancy risk of pulmonary nodules found on CT. The model has the potential to provide reliable and reproducible malignancy risk scores for clinicians and radiologists, thereby optimizing lung cancer screening management.
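The pyramid pooling module (PPM) named here is a standard component; a minimal PyTorch sketch of multi-scale pooling and fusion follows. The bin sizes and channel counts are illustrative, not those of MResNet.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PPM(nn.Module):
    """Pyramid pooling: pool features to several grid sizes, project with 1x1
    convs, upsample back, and concatenate with the input feature map."""
    def __init__(self, in_ch, out_ch, bins=(1, 2, 3, 6)):
        super().__init__()
        self.stages = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(b), nn.Conv2d(in_ch, out_ch, 1))
            for b in bins)

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [x] + [F.interpolate(stage(x), size=(h, w), mode='bilinear',
                                     align_corners=False)
                       for stage in self.stages]
        return torch.cat(feats, dim=1)  # in_ch + out_ch * len(bins) channels
```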

RECISTSup: Weakly-Supervised Lesion Volume Segmentation Using RECIST Measurement

  • Wang, H.
  • Yi, F.
  • Wang, J.
  • Yi, Z.
  • Zhang, H.
IEEE Trans Med Imaging 2022 Journal Article, cited 0 times
Website
Lesion volume segmentation in medical imaging is an effective tool for assessing lesion/tumor sizes and monitoring changes in growth. Since manual segmentation of lesion volume is not only time-consuming but also requires radiological experience, current practices rely on an imprecise surrogate called response evaluation criteria in solid tumors (RECIST). Although RECIST measurement is coarse compared with voxel-level annotation, it can reflect the lesion's location, length, and width, making it possible to segment lesion volume directly from RECIST measurement. In this study, a novel weakly-supervised method called RECISTSup is proposed to automatically segment lesion volume via RECIST measurement. Based on RECIST measurement, a new RECIST measurement propagation algorithm is proposed to generate pseudo masks, which are then used to train the segmentation networks. Owing to the spatial prior knowledge provided by RECIST measurement, two new losses are also designed to make full use of it. In addition, the automatically segmented lesion results are used to supervise the model training iteratively, further improving segmentation performance. A series of experiments is carried out on three datasets to evaluate the proposed method, including ablation experiments, comparison of various methods, annotation cost analyses, and visualization of results. Experimental results show that the proposed RECISTSup achieves state-of-the-art results compared with other weakly-supervised methods. The results also demonstrate that RECIST measurement can produce performance similar to voxel-level annotation while significantly reducing the annotation cost.

Global and Local Multi-scale Feature Fusion Enhancement for Brain Tumor Segmentation and Pancreas Segmentation

  • Wang, Huan
  • Wang, Guotai
  • Liu, Zijian
  • Zhang, Shaoting
2020 Book Section, cited 0 times
The fully convolutional networks (FCNs) have been widely applied in numerous medical image segmentation tasks. However, tissue regions usually have large variations of shape and scale, so the ability of neural networks to learn multi-scale features is important to the segmentation performance. In this paper, we improve the network for multi-scale feature fusion, in the medical image segmentation by introducing two feature fusion modules: i) global attention multi-scale feature fusion module (GMF); ii) local dense multi-scale feature fusion module (LMF). GMF aims to use global context information to guide the recalibration of low-level features from both spatial and channel aspects, so as to enhance the utilization of effective multi-scale features and suppress the noise of low-level features. LMF adopts bottom-up top-down structure to capture context information, to generate semantic features, and to fuse feature information at different scales. LMF can integrate local dense multi-scale context features layer by layer in the network, thus improving the ability of network to encode interdependent relationships among boundary pixels. Based on the above two modules, we propose a novel medical image segmentation framework (GLF-Net). We evaluated the proposed network and modules on challenging brain tumor segmentation and pancreas segmentation datasets, and very competitive performance has been achieved.

3D U-Net Based Brain Tumor Segmentation and Survival Days Prediction

  • Wang, Feifan
  • Jiang, Runzhou
  • Zheng, Liqin
  • Meng, Chun
  • Biswal, Bharat
2020 Book Section, cited 0 times
The past few years have witnessed the prevalence of deep learning in many application scenarios, among them medical image processing. Diagnosis and treatment of brain tumors require accurate and reliable segmentation as a prerequisite. However, such work conventionally costs brain surgeons a significant amount of time, and computer vision techniques could relieve surgeons of this tedious marking procedure. In this paper, a 3D U-Net based deep learning model has been trained with the help of brain-wise normalization and patching strategies for the brain tumor segmentation task in the BraTS 2019 competition. Dice coefficients for enhancing tumor, tumor core, and the whole tumor are 0.737, 0.807 and 0.894 respectively on the validation dataset. These three values on the test dataset are 0.778, 0.798 and 0.852. Furthermore, numerical features including the ratio of tumor size to brain size and the area of the tumor surface, as well as the age of subjects, are extracted from the predicted tumor labels and used for the overall survival days prediction task. The prediction accuracy was 0.448 on the validation dataset and 0.551 on the final test dataset.

Robust High-dimensional Bioinformatics Data Streams Mining by ODR-ioVFDT

  • Wang, Dantong
  • Fong, Simon
  • Wong, Raymond K
  • Mohammed, Sabah
  • Fiaidhi, Jinan
  • Wong, Kelvin KL
Scientific Reports 2017 Journal Article, cited 3 times
Website

Improving Generalizability in Limited-Angle CT Reconstruction with Sinogram Extrapolation

  • Wang, Ce
  • Zhang, Haimiao
  • Li, Qian
  • Shang, Kun
  • Lyu, Yuanyuan
  • Dong, Bin
  • Zhou, S. Kevin
2021 Conference Paper, cited 1 times
Website
Computed tomography (CT) reconstruction from X-ray projections acquired within a limited angle range is challenging, especially when the angle range is extremely small. Both analytical and iterative models need more projections for effective modeling. Deep learning methods have gained prevalence due to their excellent reconstruction performance, but such success is mainly limited to the same dataset and does not generalize across datasets with different distributions. Hereby we propose ExtraPolationNetwork for limited-angle CT reconstruction via the introduction of a sinogram extrapolation module, which is theoretically justified. The module complements extra sinogram information and boosts model generalizability. Extensive experimental results show that our reconstruction model achieves state-of-the-art performance on the NIH-AAPM dataset, similar to existing approaches. More importantly, we show that using such a sinogram extrapolation module significantly improves the generalization capability of the model on unseen datasets (e.g., COVID-19 and LIDC datasets) compared to existing approaches. Keywords: limited-angle CT reconstruction; sinogram extrapolation; model generalizability.

An effective deep network for automatic segmentation of complex lung tumors in CT images

  • Wang, B.
  • Chen, K.
  • Tian, X.
  • Yang, Y.
  • Zhang, X.
Med Phys 2021 Journal Article, cited 0 times
Website
PURPOSE: Accurate segmentation of complex tumors in lung computed tomography (CT) images is essential to improve the effectiveness and safety of lung cancer treatment. However, the characteristics of heterogeneity, blurred boundaries, and large-area adhesion to tissues with similar gray-scale features always make the segmentation of complex tumors difficult. METHODS: This study proposes an effective deep network for the automatic segmentation of complex lung tumors (CLT-Net). The network architecture uses an encoder-decoder model that combines long and short skip connections and a global attention unit to identify target regions using multiscale semantic information. A boundary-aware loss function integrating Tversky loss and boundary loss based on the level-set calculation is designed to improve the network's ability to perceive boundary positions of difficult-to-segment (DTS) tumors. We use a dynamic weighting strategy to balance the contributions of the two parts of the loss function. RESULTS: The proposed method was verified on a dataset consisting of 502 lung CT images containing DTS tumors. The experiments show that the Dice similarity coefficient and Hausdorff distance metric of the proposed method are improved by 13.2% and 8.5% on average, respectively, compared with state-of-the-art segmentation models. Furthermore, we selected three additional medical image datasets with different modalities to evaluate the proposed model. Compared with mainstream architectures, the Dice similarity coefficient is also improved to a certain extent, which demonstrates the effectiveness of our method for segmenting medical images. CONCLUSIONS: Quantitative and qualitative results show that our method outperforms current mainstream lung tumor segmentation networks in terms of Dice similarity coefficient and Hausdorff distance. Note that the proposed method is not limited to the segmentation of complex lung tumors but also performs in different modalities of medical image segmentation.
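A rough PyTorch sketch of the boundary-aware objective described above, a Tversky term plus a level-set-based boundary term under a dynamic weight, is given below. The exact weighting schedule and hyperparameters of CLT-Net are not given in the abstract, so these are placeholders.

```python
import torch

def tversky_loss(probs, target, alpha=0.7, beta=0.3, eps=1e-6):
    # Tversky index: TP / (TP + alpha*FN + beta*FP); alpha > beta favors recall
    tp = (probs * target).sum()
    fn = ((1 - probs) * target).sum()
    fp = (probs * (1 - target)).sum()
    return 1 - (tp + eps) / (tp + alpha * fn + beta * fp + eps)

def boundary_loss(probs, dist_map):
    # dist_map: signed distance to the ground-truth boundary, precomputed
    # e.g. with scipy.ndimage.distance_transform_edt on the mask and its inverse
    return (probs * dist_map).mean()

def combined_loss(probs, target, dist_map, w):
    # dynamic weighting: w typically ramps up from 0 during training so the
    # boundary term gradually takes over from the region (Tversky) term
    return (1 - w) * tversky_loss(probs, target) + w * boundary_loss(probs, dist_map)
```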

Methylation of L1RE1, RARB, and RASSF1 function as possible biomarkers for the differential diagnosis of lung cancer

  • Walter, RFH
  • Rozynek, P
  • Casjens, S
  • Werner, R
  • Mairinger, FD
  • Speel, EJM
  • Zur Hausen, A
  • Meier, S
  • Wohlschlaeger, J
  • Theegarten, D
PLoS One 2018 Journal Article, cited 1 times
Website

Segmentation of 71 Anatomical Structures Necessary for the Evaluation of Guideline-Conforming Clinical Target Volumes in Head and Neck Cancers

  • Walter, A.
  • Hoegen-Sassmannshausen, P.
  • Stanic, G.
  • Rodrigues, J. P.
  • Adeberg, S.
  • Jakel, O.
  • Frank, M.
  • Giske, K.
Cancers (Basel) 2024 Journal Article, cited 0 times
Website
The delineation of the clinical target volumes (CTVs) for radiation therapy is time-consuming, requires intensive training and shows high inter-observer variability. Supervised deep-learning methods depend heavily on consistent training data; thus, state-of-the-art research focuses on making CTV labels more homogeneous and strictly bounding them to current standards. International consensus expert guidelines standardize CTV delineation by conditioning the extension of the clinical target volume on the surrounding anatomical structures. Training strategies that directly follow the construction rules given in the expert guidelines, or the possibility of quantifying the conformance of manually drawn contours to the guidelines, are still missing. Seventy-one anatomical structures that are relevant to CTV delineation in head- and neck-cancer patients, according to the expert guidelines, were segmented on 104 computed tomography scans, to assess the possibility of automating their segmentation by state-of-the-art deep learning methods. All 71 anatomical structures were subdivided into three subsets of non-overlapping structures, and a 3D nnU-Net model with five-fold cross-validation was trained for each subset, to automatically segment the structures on planning computed tomography scans. We report the Dice, Hausdorff distance and surface Dice (sDice) for 71 + 5 anatomical structures, for most of which no previous segmentation accuracies have been reported. For those structures for which prediction values have been reported, our segmentation accuracy matched or exceeded the reported values. The predictions from our models were always better than those of TotalSegmentator. The sDice with a 2 mm margin was larger than 80% for almost all the structures. Individual structures with decreased segmentation accuracy are analyzed and discussed with respect to their impact on the CTV delineation following the expert guidelines. No deviation is expected to affect the rule-based automation of the CTV delineation.

Muscle and adipose tissue segmentations at the third cervical vertebral level in patients with head and neck cancer

  • Wahid, K. A.
  • Olson, B.
  • Jain, R.
  • Grossberg, A. J.
  • El-Habashy, D.
  • Dede, C.
  • Salama, V.
  • Abobakr, M.
  • Mohamed, A. S. R.
  • He, R.
  • Jaskari, J.
  • Sahlsten, J.
  • Kaski, K.
  • Fuller, C. D.
  • Naser, M. A.
Sci Data 2022 Journal Article, cited 0 times
Website
The accurate determination of sarcopenia is critical for disease management in patients with head and neck cancer (HNC). Quantitative determination of sarcopenia is currently dependent on manually-generated segmentations of skeletal muscle derived from computed tomography (CT) cross-sectional imaging. This has prompted the increasing utilization of machine learning models for automated sarcopenia determination. However, extant datasets currently do not provide the necessary manually-generated skeletal muscle segmentations at the C3 vertebral level needed for building these models. In this data descriptor, a set of 394 HNC patients were selected from The Cancer Imaging Archive, and their skeletal muscle and adipose tissue was manually segmented at the C3 vertebral level using sliceOmatic. Subsequently, using publicly disseminated Python scripts, we generated corresponding segmentations files in Neuroimaging Informatics Technology Initiative format. In addition to segmentation data, additional clinical demographic data germane to body composition analysis have been retrospectively collected for these patients. These data are a valuable resource for studying sarcopenia and body composition analysis in patients with HNC.

Ultralow-parameter denoising: Trainable bilateral filter layers in computed tomography

  • Wagner, F.
  • Thies, M.
  • Gu, M.
  • Huang, Y.
  • Pechmann, S.
  • Patwari, M.
  • Ploner, S.
  • Aust, O.
  • Uderhardt, S.
  • Schett, G.
  • Christiansen, S.
  • Maier, A.
Med Phys 2022 Journal Article, cited 1 times
Website
BACKGROUND: Computed tomography (CT) is widely used as an imaging tool to visualize three-dimensional structures with expressive bone-soft tissue contrast. However, CT resolution can be severely degraded through low-dose acquisitions, highlighting the importance of effective denoising algorithms. PURPOSE: Most data-driven denoising techniques are based on deep neural networks, and therefore, contain hundreds of thousands of trainable parameters, making them incomprehensible and prone to prediction failures. Developing understandable and robust denoising algorithms achieving state-of-the-art performance helps to minimize radiation dose while maintaining data integrity. METHODS: This work presents an open-source CT denoising framework based on the idea of bilateral filtering. We propose a bilateral filter that can be incorporated into any deep learning pipeline and optimized in a purely data-driven way by calculating the gradient flow toward its hyperparameters and its input. Denoising in pure image-to-image pipelines and across different domains such as raw detector data and reconstructed volume, using a differentiable backprojection layer, is demonstrated. In contrast to other models, our bilateral filter layer consists of only four trainable parameters and constrains the applied operation to follow the traditional bilateral filter algorithm by design. RESULTS: Although only using three spatial parameters and one intensity range parameter per filter layer, the proposed denoising pipelines can compete with deep state-of-the-art denoising architectures with several hundred thousand parameters. Competitive denoising performance is achieved on x-ray microscope bone data and the 2016 Low Dose CT Grand Challenge data set. We report structural similarity index measures of 0.7094 and 0.9674 and peak signal-to-noise ratio values of 33.17 and 43.07 on the respective data sets. CONCLUSIONS: Due to the extremely low number of trainable parameters with well-defined effect, prediction reliance and data integrity is guaranteed at any time in the proposed pipelines, in contrast to most other deep learning-based denoising architectures.
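To make the four-parameter idea concrete, here is a plain (non-trainable) NumPy bilateral filter with exactly three spatial widths and one intensity range width; the paper's trainable layer additionally backpropagates through these parameters, which this sketch does not do. Edge handling via np.roll (wrap-around) is a simplification of ours.

```python
import numpy as np

def bilateral_filter_3d(vol, sx, sy, sz, sr, radius=2):
    """Brute-force 3D bilateral filter with four parameters: spatial sigmas
    (sx, sy, sz) and intensity range sigma (sr)."""
    num = np.zeros(vol.shape, dtype=np.float64)
    den = np.zeros(vol.shape, dtype=np.float64)
    for dz in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                w_s = np.exp(-(dx**2 / (2 * sx**2) + dy**2 / (2 * sy**2)
                               + dz**2 / (2 * sz**2)))       # spatial weight
                nb = np.roll(vol, (dz, dy, dx), axis=(0, 1, 2))
                w_r = np.exp(-((vol - nb) ** 2) / (2 * sr**2))  # range weight
                num += w_s * w_r * nb
                den += w_s * w_r
    return num / den
```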

Transfer Learning for Brain Tumor Segmentation

  • Wacker, Jonas
  • Ladeira, Marcelo
  • Nascimento, Jose Eduardo Vaz
2021 Book Section, cited 0 times
Gliomas are the most common malignant brain tumors that are treated with chemoradiotherapy and surgery. Magnetic Resonance Imaging (MRI) is used by radiotherapists to manually segment brain lesions and to observe their development throughout the therapy. The manual image segmentation process is time-consuming and results tend to vary among different human raters. Therefore, there is a substantial demand for automatic image segmentation algorithms that produce a reliable and accurate segmentation of various brain tissue types. Recent advances in deep learning have led to convolutional neural network architectures that excel at various visual recognition tasks. They have been successfully applied to the medical context including medical image segmentation. In particular, fully convolutional networks (FCNs) such as the U-Net produce state-of-the-art results in the automatic segmentation of brain tumors. MRI brain scans are volumetric and exist in various co-registered modalities that serve as input channels for these FCN architectures. Training algorithms for brain tumor segmentation on this complex input requires large amounts of computational resources and is prone to overfitting. In this work, we construct FCNs with pretrained convolutional encoders. We show that we can stabilize the training process this way and achieve an improvement with respect to dice scores and Hausdorff distances. We also test our method on a privately obtained clinical dataset.

Deep Learning Based Approach for Multiple Myeloma Detection

  • Vyshnav, M.T.
  • Sowmya, V.
  • Gopalakrishnan, E.A.
  • Variyar V.V., Sajith
  • Krishna Menon, Vijay
  • Soman, K.P.
2020 Conference Paper, cited 2 times
Website
Multiple myeloma is caused by the abnormal growth of plasma cells in the bone marrow. The most commonly used method for diagnosing multiple myeloma is bone marrow aspiration, where the aspirate slide images are either inspected visually or passed to digital image processing software for the detection of myeloma cells. The current work explores the effectiveness of deep learning based object detection/segmentation algorithms, such as Mask-RCNN and U-Net, for the detection of multiple myeloma. Manual polygon annotation of the dataset was performed using the VGG image annotation software. The deep learning models were trained by monitoring the training and validation loss per epoch, and the best model was selected based on the minimal validation loss. Comparison of the two models shows that Mask-RCNN achieves results that compare favourably with U-Net and addresses most of the challenges in multiple myeloma segmentation.

Quantification of the spatial distribution of primary tumors in the lung to develop new prognostic biomarkers for locally advanced NSCLC

  • Vuong, Diem
  • Bogowicz, Marta
  • Wee, Leonard
  • Riesterer, Oliver
  • Vlaskou Badra, Eugenia
  • D’Cruz, Louisa Abigail
  • Balermpas, Panagiotis
  • van Timmeren, Janita E.
  • Burgermeister, Simon
  • Dekker, André
  • De Ruysscher, Dirk
  • Unkelbach, Jan
  • Thierstein, Sandra
  • Eboulet, Eric I.
  • Peters, Solange
  • Pless, Miklos
  • Guckenberger, Matthias
  • Tanadini-Lang, Stephanie
Scientific Reports 2021 Journal Article, cited 0 times
Website
The anatomical location and extent of primary lung tumors have shown prognostic value for overall survival (OS). However, their manual assessment is prone to interobserver variability. This study aims to use data-driven identification of image characteristics for OS in locally advanced non-small cell lung cancer (NSCLC) patients. Five stage IIIA/IIIB NSCLC patient cohorts were retrospectively collected. Patients were treated either with radiochemotherapy (RCT): RCT1* (n = 107), RCT2 (n = 95), RCT3 (n = 37) or with surgery combined with radiotherapy or chemotherapy: S1* (n = 135), S2 (n = 55). Based on a deformable image registration (MIM Vista, 6.9.2.), in-house developed software transferred each primary tumor to the CT scan of a reference patient while maintaining the original tumor shape. A frequency-weighted cumulative status map was created for both exploratory cohorts (indicated with an asterisk), where the spatial extent of the tumor was uniformly labeled with the 2-year OS status. For the exploratory cohorts, a permutation test with random assignment of patient status was performed to identify regions with statistically significant worse OS, referred to as decreased survival areas (DSA). The minimal Euclidean distance between the primary tumor and the DSA was extracted from the independent cohorts (negative distance in case of overlap). To account for the tumor volume, the distance was scaled with the radius of the volume-equivalent sphere. For the S1 cohort, DSA were located at the right main bronchus, whereas for the RCT1 cohort they further extended in the cranio-caudal direction. In the independent cohorts, the model based on distance to DSA achieved performance: AUC_RCT2 [95% CI] = 0.67 [0.55–0.78] and AUC_RCT3 = 0.59 [0.39–0.79] for RCT patients, but showed poor performance for the surgery cohort (AUC_S2 = 0.52 [0.30–0.74]). Shorter distance to DSA was associated with worse outcome (p = 0.0074). In conclusion, this explanatory analysis quantifies the value of primary tumor location for OS prediction based on cumulative status maps. Shorter distance of the primary tumor to a high-risk region was associated with worse prognosis in the RCT cohort.

Multi-decoder Networks with Multi-denoising Inputs for Tumor Segmentation

  • Vu, Minh H.
  • Nyholm, Tufve
  • Löfstedt, Tommy
2021 Book Section, cited 0 times
Automatic segmentation of brain glioma from multimodal MRI scans plays a key role in clinical trials and practice. Unfortunately, manual segmentation is very challenging, time-consuming, costly, and often inaccurate despite human expertise due to the high variance and high uncertainty in the human annotations. In the present work, we develop an end-to-end deep-learning-based segmentation method using a multi-decoder architecture by jointly learning three separate sub-problems using a partly shared encoder. We also propose to apply smoothing methods to the input images to generate denoised versions as additional inputs to the network. The validation performance indicates an improvement when using the proposed method. The proposed method was ranked 2nd in the task of Quantification of Uncertainty in Segmentation in the Brain Tumors in Multimodal Magnetic Resonance Imaging Challenge 2020.

TuNet: End-to-End Hierarchical Brain Tumor Segmentation Using Cascaded Networks

  • Vu, Minh H.
  • Nyholm, Tufve
  • Löfstedt, Tommy
2020 Book Section, cited 0 times
Glioma is one of the most common types of brain tumors; it arises in the glial cells in the human brain and in the spinal cord. In addition to having a high mortality rate, glioma treatment is also very expensive. Hence, automatic and accurate segmentation and measurement from the early stages are critical in order to prolong the survival rates of the patients and to reduce the costs of the treatment. In the present work, we propose a novel end-to-end cascaded network for semantic segmentation in the Brain Tumors in Multimodal Magnetic Resonance Imaging Challenge 2019 that utilizes the hierarchical structure of the tumor sub-regions with ResNet-like blocks and Squeeze-and-Excitation modules after each convolution and concatenation block. By utilizing cross-validation, an average ensemble technique, and a simple post-processing technique, we obtained dice scores of 88.06, 80.84, and 80.29, and Hausdorff Distances (95th percentile) of 6.10, 5.17, and 2.21 for the whole tumor, tumor core, and enhancing tumor, respectively, on the online test set. The proposed method was ranked among the top in the task of Quantification of Uncertainty in Segmentation.

Auto‐segmentation of organs at risk for head and neck radiotherapy planning: from atlas‐based to deep learning methods

  • Vrtovec, Tomaž
  • Močnik, Domen
  • Strojan, Primož
  • Pernuš, Franjo
  • Ibragimov, Bulat
Medical Physics 2020 Journal Article, cited 2 times
Website

Mobile-based Application for COVID-19 Detection from Lung X-Ray Scans with Artificial Neural Networks (ANN)

  • Vong, Chanvichet
  • Chanchotisatien, Passara
2022 Conference Paper, cited 0 times
Website
In early 2020, the World Health Organization (WHO) identified a novel coronavirus referred to as SARS-CoV-2, which is associated with the now commonly known COVID-19 disease. COVID-19 was characterized as a pandemic shortly after. All countries around the globe have been severely affected, and the disease has accumulated a total of over 200 million cases and more than five million deaths over the past two years. Symptoms associated with COVID-19 vary greatly in severity: some of those infected are asymptomatic, while others experience critical disease with life-threatening complications. In this paper, a mobile-based application has been created to help classify lungs as COVID-19 or non-COVID-19 when given a chest X-ray (CXR) image. A variety of different artificial neural networks (ANN), including our baseline model, InceptionV3, MobileNetV2, MobileNetV3, VGG16, and VGG19, were tested to see which would provide the optimal results. We conclude that MobileNetV3 gives the best test accuracy of 95.49% and is a lightweight model suitable for a mobile-based application.

Iron commensalism of mesenchymal glioblastoma promotes ferroptosis susceptibility upon dopamine treatment

  • Vo, Vu T. A.
  • Kim, Sohyun
  • Hua, Tuyen N. M.
  • Oh, Jiwoong
  • Jeong, Yangsik
Communications Biology 2022 Journal Article, cited 0 times
The heterogeneity of glioblastoma multiforme (GBM) leads to poor patient prognosis. Here, we aim to investigate the mechanism through which GBM heterogeneity is coordinated to promote tumor progression. We find that proneural (PN)-GBM stem cells (GSCs) secrete dopamine (DA) and transferrin (TF), inducing the proliferation of mesenchymal (MES)-GSCs and enhancing their susceptibility toward ferroptosis. PN-GSC-derived TF stimulates MES-GSC proliferation in an iron-dependent manner. DA acts in an autocrine fashion on PN-GSC growth in a DA receptor D1-dependent manner, while in a paracrine fashion it induces TF receptor 1 expression in MES-GSCs to assist iron uptake and thus enhance ferroptotic vulnerability. Analysis of public datasets reveals a worse prognosis for patients with heterogeneous GBM with high iron uptake than for those with other GBM subtypes. Collectively, the findings here provide evidence of a commensal symbiosis that causes MES-GSCs to become iron-addicted, which in turn provides a rationale for targeting ferroptosis to treat resistant MES GBM.

Inter-rater agreement in glioma segmentations on longitudinal MRI

  • Visser, M.
  • Muller, D. M. J.
  • van Duijn, R. J. M.
  • Smits, M.
  • Verburg, N.
  • Hendriks, E. J.
  • Nabuurs, R. J. A.
  • Bot, J. C. J.
  • Eijgelaar, R. S.
  • Witte, M.
  • van Herk, M. B.
  • Barkhof, F.
  • de Witt Hamer, P. C.
  • de Munck, J. C.
Neuroimage Clin 2019 Journal Article, cited 0 times
Website
BACKGROUND: Tumor segmentation of glioma on MRI is a technique to monitor, quantify and report disease progression. Manual MRI segmentation is the gold standard but very labor intensive. At present the quality of this gold standard is not known for different stages of the disease, and prior work has mainly focused on treatment-naive glioblastoma. In this paper we studied the inter-rater agreement of manual MRI segmentation of glioblastoma and WHO grade II-III glioma for novices and experts at three stages of disease. We also studied the impact of inter-observer variation on extent of resection and growth rate. METHODS: In 20 patients with WHO grade IV glioblastoma and 20 patients with WHO grade II-III glioma (defined as non-glioblastoma) both the enhancing and non-enhancing tumor elements were segmented on MRI, using specialized software, by four novices and four experts before surgery, after surgery and at time of tumor progression. We used the generalized conformity index (GCI) and the intra-class correlation coefficient (ICC) of tumor volume as main outcome measures for inter-rater agreement. RESULTS: For glioblastoma, segmentations by experts and novices were comparable. The inter-rater agreement of enhancing tumor elements was excellent before surgery (GCI 0.79, ICC 0.99) poor after surgery (GCI 0.32, ICC 0.92), and good at progression (GCI 0.65, ICC 0.91). For non-glioblastoma, the inter-rater agreement was generally higher between experts than between novices. The inter-rater agreement was excellent between experts before surgery (GCI 0.77, ICC 0.92), was reasonable after surgery (GCI 0.48, ICC 0.84), and good at progression (GCI 0.60, ICC 0.80). The inter-rater agreement was good between novices before surgery (GCI 0.66, ICC 0.73), was poor after surgery (GCI 0.33, ICC 0.55), and poor at progression (GCI 0.36, ICC 0.73). Further analysis showed that the lower inter-rater agreement of segmentation on postoperative MRI could only partly be explained by the smaller volumes and fragmentation of residual tumor. The median interquartile range of extent of resection between raters was 8.3% and of growth rate was 0.22mm/year. CONCLUSION: Manual tumor segmentations on MRI have reasonable agreement for use in spatial and volumetric analysis. Agreement in spatial overlap is of concern with segmentation after surgery for glioblastoma and with segmentation of non-glioblastoma by non-experts.
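The generalized conformity index (GCI) used as the main agreement measure here is, in its common formulation (Kouwenhoven et al.), the ratio of summed pairwise intersections to summed pairwise unions over all rater pairs. A small NumPy sketch under that assumption:

```python
import numpy as np
from itertools import combinations

def generalized_conformity_index(masks):
    """GCI over multiple raters' binary masks: sum of pairwise intersection
    volumes divided by the sum of pairwise union volumes."""
    inter = sum(np.logical_and(a, b).sum() for a, b in combinations(masks, 2))
    union = sum(np.logical_or(a, b).sum() for a, b in combinations(masks, 2))
    return inter / union

# e.g. generalized_conformity_index([m1, m2, m3, m4]) for four raters' boolean arrays
```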

3D Deep Learning for Anatomical Structure Segmentation in Multiple Imaging Modalities

  • Villarini, B.
  • Asaturyan, H.
  • Kurugol, S.
  • Afacan, O.
  • Bell, J. D.
  • Thomas, E. L.
2021 Journal Article, cited 3 times
Website
Accurate, quantitative segmentation of anatomical structures in radiological scans, such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT), can produce significant biomarkers and can be integrated into computer-aided diagnosis (CADx) systems to support the interpretation of medical images from multi-protocol scanners. However, there are serious challenges in developing robust automated segmentation techniques, including high variations in anatomical structure and size, the presence of edge-based artefacts, and heavy uncontrolled breathing that can produce blurred motion-based artefacts. This paper presents a novel computing approach for automatic organ and muscle segmentation in medical images from multiple modalities by harnessing the advantages of deep learning techniques in a two-part process: (1) a 3D encoder-decoder, Rb-UNet, builds a localisation model and a 3D Tiramisu network generates a boundary-preserving segmentation model for each target structure; (2) the fully trained Rb-UNet predicts a 3D bounding box encapsulating the target structure of interest, after which the fully trained Tiramisu model performs segmentation to reveal detailed organ or muscle boundaries. The proposed approach is evaluated on six different datasets, including MRI, Dynamic Contrast Enhanced (DCE) MRI and CT scans targeting the pancreas, liver, kidneys and psoas muscle, and achieves mean Dice similarity coefficients (DSC) that surpass or are comparable with the state-of-the-art. A qualitative evaluation performed by two independent radiologists verified the preservation of detailed organ and muscle boundaries.

Multi-label Classification in the Automatic Annotation of Solitary Pulmonary Nodules [Classificação Multirrótulo na Anotação Automática de Nódulo Pulmonar Solitário]

  • Villani, Leonardo
  • Prati, Ronaldo Cristiano
2012 Conference Proceedings, cited 0 times

An intelligent lung tumor diagnosis system using whale optimization algorithm and support vector machine

  • Vijh, Surbhi
  • Gaur, Deepak
  • Kumar, Sushil
International Journal of System Assurance Engineering and Management 2019 Journal Article, cited 0 times
Medical image processing techniques are widely used for tumor detection to increase the survival rate of patients. The development of computer-aided diagnosis systems improves the reading of medical images and the determination of treatment stages. Earlier detection of tumors reduces lung cancer mortality by increasing the probability of successful treatment. In this paper, an intelligent lung tumor diagnosis system is developed using various image processing techniques. The pipeline involves image enhancement, image segmentation, post-processing, feature extraction, feature selection and classification using support vector machine (SVM) kernels. The gray-level co-occurrence matrix method is used to extract 19 texture and statistical features from lung computed tomography (CT) images, and the whale optimization algorithm (WOA) is used to select the most prominent feature subset. The contribution of this paper is the development of WOA_SVM to automate the aided diagnosis system for determining whether a lung CT image is normal or abnormal. An improved whale-optimization-based technique for optimal feature selection is developed to obtain accurate results and construct a robust model. The performance of the proposed methodology is evaluated using accuracy, sensitivity and specificity, obtaining 95%, 100% and 92%, respectively, with the radial basis function support vector kernel.
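As a flavor of the GLCM feature-extraction step (before WOA selection and SVM classification), a short sketch with recent scikit-image follows; the 19 features used in the paper are not enumerated in the abstract, so only a few standard GLCM properties are shown.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi_u8):
    """A few standard GLCM texture features for a 2D uint8 ROI,
    averaged over four in-plane directions."""
    glcm = graycomatrix(roi_u8, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return {p: graycoprops(glcm, p).mean()
            for p in ('contrast', 'homogeneity', 'energy', 'correlation')}
```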

Hyperthermia by Low Intensity Focused Ultrasound

  • Vielma, Manuel
  • Wahl, David
  • Wahl, François
2023 Conference Proceedings, cited 0 times
Website
We present the results of simulations of heating by low-intensity (non-ablating) focused ultrasound. The simulations are aimed at modelling hyperthermia treatment of organs affected by cancer [1] – particularly the prostate. The studies were carried out with the objective of developing low-cost medical devices for use in low- and middle-income countries (LMIC). Our innovation has been to favor the use of free and open-source tools, combining them so as to achieve realistic representations of the relevant tissue layers, regarding their geometric as well as their acoustic and thermal properties. The combination of tools we have selected are available to researchers in LMIC, to favor the emergence local research initiatives. To achieve precision in the shapes and locations of the models, we performed segmentation of Computed Tomography scan images obtained from public databases. The 3D representations thus generated were then inputted as voxelized matrix regions in a calculation grid of pressure field and heat simulations - using open source MATLAB® packages. We report on the results of simulations using this combination of software tools.

Detecting pulmonary diseases using deep features in X-ray images

  • Vieira, P.
  • Sousa, O.
  • Magalhaes, D.
  • Rabelo, R.
  • Silva, R.
Pattern Recognition 2021 Journal Article, cited 0 times
Website
COVID-19 leads to radiological evidence of lower respiratory tract lesions, which support analysis to screen this disease using chest X-ray. In this scenario, deep learning techniques are applied to detect COVID-19 pneumonia in X-ray images, aiding a fast and precise diagnosis. Here, we investigate seven deep learning architectures associated with data augmentation and transfer learning techniques to detect different pneumonia types. We also propose an image resizing method with the maximum window function that preserves anatomical structures of the chest. The results are promising, reaching an accuracy of 99.8% considering COVID-19, normal, and viral and bacterial pneumonia classes. The differentiation between viral pneumonia and COVID-19 achieved an accuracy of 99.8%, and 99.9% between COVID-19 and bacterial pneumonia. We also evaluated the impact of the proposed image resizing method on classification performance compared with bilinear interpolation; this pre-processing increased the classification rate regardless of the deep learning architectures used. We compared our results with ten related works in the state-of-the-art using eight sets of experiments, which showed that the proposed method outperformed them in most cases. Therefore, we demonstrate that deep learning models trained with pre-processed X-ray images could precisely assist the specialist in COVID-19 detection.

Novel Framework for Breast Cancer Classification for Retaining Computational Efficiency and Precise Diagnosis

  • Vidya, K
  • Kurian, MZ
Communications Applied Electronics 2018 Journal Article, cited 0 times
Website
Classification of breast cancer is still an open challenge in medical image processing. A review of the existing literature shows that existing solutions focus more on classification accuracy and less on the computational efficiency of the classification process. Therefore, this paper presents a novel classification approach that addresses the trade-off between the computational performance of the classifier and its final response to disease criticality. An analytical framework is built that takes breast Magnetic Resonance Imaging (MRI) as input and applies a non-linear map-based filter as an enhanced pre-processing operation. The algorithm also offers a novel integral transformation scheme that transforms the filtered image, followed by precise extraction of foreground and background to assist reliable classification. A statistical approach is used for feature extraction, followed by classification with an unsupervised learning algorithm. The study outcome shows superior performance compared to existing classification schemes.

Using Radiomics to improve the 2-year survival of Non-Small Cell Lung Cancer Patients

  • Vial, Alanna Heather Therese
2022 Thesis, cited 0 times
Website
This thesis both exploits and further contributes enhancements to the utilization of radiomics (quantitative features extracted from radiological imaging data) for improving cancer survival prediction. Several machine learning methods were compared in this analysis, including but not limited to support vector machines, convolutional neural networks and logistic regression. A technique is developed for analysing prognostic image characteristics of non-small cell lung cancer based on the edge regions of, and the tissues immediately surrounding, visible tumours. Regions external to and neighbouring a tumour were shown to also have prognostic value. By using the additional texture features, an increase in accuracy of 3% is shown over previous approaches for predicting two-year survival, determined by examining the outer rind tissue including the tumour compared to the volume without the rind. This indicates that while the centre of the tumour is currently the main clinical target for radiotherapy treatment, the tissue immediately around the tumour is also clinically important for survival analysis. Further, it was found that improved prediction resulted up to some 6 pixels outside the tumour volume, a distance of approximately 5 mm outside the original gross tumour volume (GTV), when applying a support vector machine, which achieved the highest accuracy of 71.18%. This research indicates that the periphery of the tumour is highly predictive of survival. To our knowledge, this is the first study to concentrically expand and analyse the NSCLC rind for radiomic analysis.

Assessing the prognostic impact of 3D CT image tumour rind texture features on lung cancer survival modelling

  • Vial, A.
  • Stirling, D.
  • Field, M.
  • Ros, M.
  • Ritz, C.
  • Carolan, M
  • Holloway, L.
  • Miller, A. A.
2017 Conference Paper, cited 1 times
Website
In this paper we examine a technique for developing prognostic image characteristics, termed radiomics, for non-small cell lung cancer based on an analysis of the tumour edge region. Texture features were extracted from the rind of the tumour in a publicly available 3D CT data set to predict two-year survival. The derived models were compared against previous methods that train radiomic signatures descriptive of the whole tumour volume. Radiomic features derived solely from regions external to, but neighbouring, the tumour were shown to also have prognostic value. By using the additional texture features, an increase in accuracy of 3% is shown over previous approaches for predicting two-year survival when the outer rind together with the tumour volume is examined, compared to the volume without the rind. This indicates that while the centre of the tumour is currently the main clinical target for radiotherapy treatment, the tissue immediately around the tumour is also clinically important.

Learning Shape Distributions from Large Databases of Healthy Organs: Applications to Zero-Shot and Few-Shot Abnormal Pancreas Detection

  • Vétil, Rebeca
  • Abi-Nader, Clément
  • Bône, Alexandre
  • Vullierme, Marie-Pierre
  • Rohé, Marc-Michel
  • Gori, Pietro
  • Bloch, Isabelle
2022 Conference Proceedings, cited 0 times
Website

Domain Generalization for Prostate Segmentation in Transrectal Ultrasound Images: A Multi-center Study

  • Vesal, S.
  • Gayo, I.
  • Bhattacharya, I.
  • Natarajan, S.
  • Marks, L. S.
  • Barratt, D. C.
  • Fan, R. E.
  • Hu, Y.
  • Sonn, G. A.
  • Rusu, M.
Med Image Anal 2022 Journal Article, cited 0 times
Website
Prostate biopsy and image-guided treatment procedures are often performed under the guidance of ultrasound fused with magnetic resonance images (MRI). Accurate image fusion relies on accurate segmentation of the prostate on ultrasound images. Yet, the reduced signal-to-noise ratio and artifacts (e.g., speckle and shadowing) in ultrasound images limit the performance of automated prostate segmentation techniques and generalizing these methods to new image domains is inherently difficult. In this study, we address these challenges by introducing a novel 2.5D deep neural network for prostate segmentation on ultrasound images. Our approach addresses the limitations of transfer learning and finetuning methods (i.e., drop in performance on the original training data when the model weights are updated) by combining a supervised domain adaptation technique and a knowledge distillation loss. The knowledge distillation loss allows the preservation of previously learned knowledge and reduces the performance drop after model finetuning on new datasets. Furthermore, our approach relies on an attention module that considers model feature positioning information to improve the segmentation accuracy. We trained our model on 764 subjects from one institution and finetuned our model using only ten subjects from subsequent institutions. We analyzed the performance of our method on three large datasets encompassing 2067 subjects from three different institutions. Our method achieved an average Dice Similarity Coefficient (Dice) of 94.0+/-0.03 and Hausdorff Distance (HD95) of 2.28 mm in an independent set of subjects from the first institution. Moreover, our model generalized well in the studies from the other two institutions (Dice: 91.0+/-0.03; HD95: 3.7 mm and Dice: 82.0+/-0.03; HD95: 7.1 mm). We introduced an approach that successfully segmented the prostate on ultrasound images in a multi-center study, suggesting its clinical potential to facilitate the accurate fusion of ultrasound and MRI images to drive biopsy and image-guided treatments.
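The knowledge-distillation idea used here to limit forgetting during finetuning can be sketched as a pixel-wise KL term between the frozen source-domain model (the "teacher") and the model being finetuned (the "student"). The following PyTorch snippet is a generic formulation of distillation for segmentation logits, not the authors' exact loss; the temperature is a placeholder.

```python
import torch
import torch.nn.functional as F

def kd_seg_loss(student_logits, teacher_logits, T=2.0):
    """Pixel-wise knowledge distillation: KL divergence between softened
    teacher and student class maps, scaled by T^2 as is conventional.
    Logits have shape (B, num_classes, H, W)."""
    s = F.log_softmax(student_logits / T, dim=1)
    t = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(s, t, reduction='batchmean') * T * T

# total finetuning objective would add this to the supervised segmentation loss,
# e.g. loss = dice_ce(student_logits, target) + lam * kd_seg_loss(student_logits, teacher_logits)
```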

Stable and Discriminatory Radiomic Features from the Tumor and Its Habitat Associated with Progression-Free Survival in Glioblastoma: A Multi-Institutional Study

  • Verma, R.
  • Hill, V. B.
  • Statsevych, V.
  • Bera, K.
  • Correa, R.
  • Leo, P.
  • Ahluwalia, M.
  • Madabhushi, A.
  • Tiwari, P.
American Journal of Neuroradiology 2022 Journal Article, cited 0 times
Website
BACKGROUND AND PURPOSE: Glioblastoma is an aggressive brain tumor, with no validated prognostic biomarkers for survival before surgical resection. Although recent approaches have demonstrated the prognostic ability of tumor habitat (constituting necrotic core, enhancing lesion, T2/FLAIR hyperintensity subcompartments) derived radiomic features for glioblastoma survival on treatment-naive MR imaging scans, radiomic features are known to be sensitive to MR imaging acquisitions across sites and scanners. In this study, we sought to identify the radiomic features that are both stable across sites and discriminatory of poor and improved progression-free survival in glioblastoma tumors. MATERIALS AND METHODS: We used 150 treatment-naive glioblastoma MR imaging scans (Gadolinium-T1w, T2w, FLAIR) obtained from 5 sites. For every tumor subcompartment (enhancing tumor, peritumoral FLAIR-hyperintensities, necrosis), a total of 316 three-dimensional radiomic features were extracted. The training cohort constituted studies from 4 sites (n = 93) to select the most stable and discriminatory radiomic features for every tumor subcompartment. These features were used on a hold-out cohort (n = 57) to evaluate their ability to discriminate patients with poor survival from those with improved survival. RESULTS: Incorporating the most stable and discriminatory features within a linear discriminant analysis classifier yielded areas under the curve of 0.71, 0.73, and 0.76 on the test set for distinguishing poor and improved survival compared with discriminatory features alone (areas under the curve of 0.65, 0.54, 0.62) from the necrotic core, enhancing tumor, and peritumoral T2/FLAIR hyperintensity, respectively. CONCLUSIONS: Incorporating stable and discriminatory radiomic features extracted from tumors and associated habitats across multisite MR imaging sequences may yield robust prognostic classifiers of patient survival in glioblastoma tumors.
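
One plausible way to operationalise the "stable across sites, discriminatory of survival" filter is sketched below: a Kruskal-Wallis test screens out features with a detectable site effect, and the survivors are ranked by univariate AUC. The authors' exact stability criterion may differ:

    import numpy as np
    from scipy.stats import kruskal
    from sklearn.metrics import roc_auc_score

    def stable_discriminatory_features(X, y, site, alpha=0.05):
        """X: samples x features; y: binary survival label; site: site ids."""
        ranked = []
        for j in range(X.shape[1]):
            groups = [X[site == s, j] for s in np.unique(site)]
            _, p = kruskal(*groups)
            if p > alpha:  # no detectable site effect, treat as stable
                ranked.append((roc_auc_score(y, X[:, j]), j))
        return [j for _, j in sorted(ranked, reverse=True)]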

Medical image thresholding using WQPSO and maximum entropy

  • Venkatesan, Anusuya
  • Parthiban, Latha
2012 Conference Proceedings, cited 1 times
Website
Image thresholding is an important method of image segmentation for finding the objects of interest. Maximum entropy is an image thresholding method that exploits the entropy of the image's grey-level distribution. Its performance can be improved by using swarm intelligence techniques such as Particle Swarm Optimization (PSO) and Quantum PSO (QPSO). QPSO has attracted the research community due to its simplicity, easy implementation and fast convergence. The convergence of QPSO is faster than PSO, and global convergence is guaranteed. In this paper, we propose a new combination of mean-updated QPSO, referred to as weighted QPSO (WQPSO), with maximum entropy to find optimal thresholds for magnetic resonance images (MRI). The proposed method outperforms existing methods in the literature in terms of convergence speed and accuracy.
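
The maximum-entropy (Kapur) objective that WQPSO optimises can be written down directly. The exhaustive search below is a reference implementation of the objective only; the swarm search is replaced by enumeration over all grey levels:

    import numpy as np

    def kapur_threshold(img: np.ndarray) -> int:
        """Threshold maximising the summed entropies of the two classes."""
        hist, _ = np.histogram(img, bins=256, range=(0, 256))
        p = hist / hist.sum()
        best_t, best_h = 0, -np.inf
        for t in range(1, 256):
            p0, p1 = p[:t].sum(), p[t:].sum()
            if p0 == 0 or p1 == 0:
                continue
            q0 = p[:t][p[:t] > 0] / p0
            q1 = p[t:][p[t:] > 0] / p1
            h = -np.sum(q0 * np.log(q0)) - np.sum(q1 * np.log(q1))
            if h > best_h:
                best_t, best_h = t, h
        return best_t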

Fully automatic GBM segmentation in the TCGA-GBM dataset: Prognosis and correlation with VASARI features

  • Velazquez, Emmanuel Rios
  • Meier, Raphael
  • Dunn Jr, William D
  • Alexander, Brian
  • Wiest, Roland
  • Bauer, Stefan
  • Gutman, David A
  • Reyes, Mauricio
  • Aerts, Hugo JWL
Scientific Reports 2015 Journal Article, cited 42 times
Website
Reproducible definition and quantification of imaging biomarkers is essential. We evaluated a fully automatic MR-based segmentation method by comparing it to manually defined sub-volumes by experienced radiologists in the TCGA-GBM dataset, in terms of sub-volume prognosis and association with VASARI features. MRI sets of 109 GBM patients were downloaded from the Cancer Imaging archive. GBM sub-compartments were defined manually and automatically using the Brain Tumor Image Analysis (BraTumIA). Spearman's correlation was used to evaluate the agreement with VASARI features. Prognostic significance was assessed using the C-index. Auto-segmented sub-volumes showed moderate to high agreement with manually delineated volumes (range (r): 0.4 - 0.86). Also, the auto and manual volumes showed similar correlation with VASARI features (auto r = 0.35, 0.43 and 0.36; manual r = 0.17, 0.67, 0.41, for contrast-enhancing, necrosis and edema, respectively). The auto-segmented contrast-enhancing volume and post-contrast abnormal volume showed the highest AUC (0.66, CI: 0.55-0.77 and 0.65, CI: 0.54-0.76), comparable to manually defined volumes (0.64, CI: 0.53-0.75 and 0.63, CI: 0.52-0.74, respectively). BraTumIA and manual tumor sub-compartments showed comparable performance in terms of prognosis and correlation with VASARI features. This method can enable more reproducible definition and quantification of imaging based biomarkers and has potential in high-throughput medical imaging research.

Robustness of Deep Networks for Mammography: Replication Across Public Datasets

  • Velarde, Osvaldo M.
  • Lin, Clarissa
  • Eskreis-Winkler, Sarah
  • Parra, Lucas C.
2024 Journal Article, cited 0 times
Deep neural networks have demonstrated promising performance in screening mammography with recent studies reporting performance at or above the level of trained radiologists on internal datasets. However, it remains unclear whether the performance of these trained models is robust and replicates across external datasets. In this study, we evaluate four state-of-the-art publicly available models using four publicly available mammography datasets (CBIS-DDSM, INbreast, CMMD, OMI-DB). Where test data was available, published results were replicated. The best-performing model, which achieved an area under the ROC curve (AUC) of 0.88 on internal data from NYU, achieved here an AUC of 0.9 on the external CMMD dataset (N = 826 exams). On the larger OMI-DB dataset (N = 11,440 exams), it achieved an AUC of 0.84 but did not match the performance of individual radiologists (at a specificity of 0.92, the sensitivity was 0.97 for the radiologist and 0.53 for the network for a 1-year follow-up). The network showed higher performance for in situ cancers, as opposed to invasive cancers. Among invasive cancers, it was relatively weaker at identifying asymmetries and was relatively stronger at identifying masses. The three other trained models that we evaluated all performed poorly on external datasets. Independent validation of trained models is an essential step to ensure safe and reliable use. Future progress in AI for mammography may depend on a concerted effort to make larger datasets publicly available that span multiple clinical sites.

Una metodología para el análisis y selección de características extraídas mediante Deep Learning de imágenes de Tomografía Computerizada de pulmón.

  • Vega Gonzalo, María
2018 Thesis, cited 0 times
Website
This project is part of the European research project IASIS, in which the Medical Data Analysis Laboratory (MEDAL) of the UPM Centre for Biomedical Technology participates. The IASIS project aims to structure medical information related to lung cancer and Alzheimer's disease, with the objective of analysing it and, based on the knowledge extracted, improving the diagnosis and treatment of these diseases. The objective of this bachelor's thesis is to establish a methodology for reducing the dimensionality of features extracted by Deep Learning from Computed Axial Tomography images. The motivation for reducing the number of variables is that the extracted features are intended to be fed to a classifier that labels the nodules present in the images; however, the high dimensionality of the data can impair classification accuracy and entails a high computational cost.

Addressing architectural distortion in mammogram using AlexNet and support vector machine

  • Vedalankar, Aditi V.
  • Gupta, Shankar S.
  • Manthalkar, Ramchandra R.
Informatics in Medicine Unlocked 2021 Journal Article, cited 0 times
Website
Objective: To address architectural distortion (AD), an irregularity in the parenchymal pattern of the breast. The nature of AD is extremely complex, yet its study is essential because AD is viewed as an early sign of breast cancer. In this study, a new convolutional neural network (CNN) based system is developed that classifies AD-distorted mammograms against other mammograms. Methods: In the first part, mammograms undergo pre-processing and image augmentation. In the second, learned and handcrafted features are retrieved: a pretrained AlexNet CNN is used to extract learned features, and a support vector machine (SVM) validates the presence of AD. For improved classification, the scheme is tested under various conditions. Results: The system achieved maximum accuracy, sensitivity and specificity of 92%, 81.50% and 90.83%, respectively, outperforming conventional methods. Conclusion: Based on the overall study, a combination of a pretrained CNN and a support vector machine is a good option for identifying AD; the study should motivate researchers to pursue higher-performing methods and will also assist radiologists. Significance: AD can develop up to two years before the growth of any visible anomaly, so the proposed system can play an essential role in detecting early manifestations of breast cancer, supporting better treatment options for women worldwide and curtailing mortality.
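
The learned-feature pathway described above, a pretrained AlexNet used as a feature extractor feeding an SVM, can be sketched with torchvision and scikit-learn. The tensors "patches" and "labels" are assumptions standing in for preprocessed mammogram ROIs and their AD labels:

    import torch
    from torchvision import models
    from sklearn.svm import SVC

    alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
    alexnet.classifier = alexnet.classifier[:-1]  # drop final FC: 4096-d features
    alexnet.eval()

    with torch.no_grad():
        feats = alexnet(patches).numpy()  # patches: N x 3 x 224 x 224
    svm = SVC(kernel="rbf").fit(feats, labels)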

Identification and classification of DICOM files with burned-in text content

  • Vcelak, Petr
  • Kryl, Martin
  • Kratochvil, Michal
  • Kleckova, Jana
International Journal of Medical Informatics 2019 Journal Article, cited 0 times
Website
Background: Protected health information burned into pixel data is not indicated for various reasons in DICOM. It complicates the secondary use of such data. In recent years, there have been several attempts to anonymize or de-identify DICOM files. Existing approaches have different constraints, and no completely reliable solution exists. Especially for large datasets, it is necessary to quickly analyse and identify files potentially violating privacy. Methods: Classification is based on an adaptive-iterative algorithm designed to identify one of three classes. Several image transformations, optical character recognition, and filters are applied; then a local decision is made. A confirmed local decision is the final one. The classifier was trained on a dataset composed of 15,334 images of various modalities. Results: The false positive rates are in all cases below 4.00%, and 1.81% in the mission-critical problem of detecting protected health information. The classifier's weighted average recall was 94.85%, the weighted average inverse recall was 97.42% and Cohen's Kappa coefficient was 0.920. Conclusion: The proposed novel approach for classification of burned-in text is highly configurable and able to analyse images from different modalities with a noisy background. The solution was validated and is intended to identify DICOM files that need to have restricted access or be thoroughly de-identified due to privacy issues. Unlike existing tools, the recognised text, including its coordinates, can be further used for de-identification.
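
A toy version of the detection step, OCR over rescaled pixel data with files flagged when legible characters are found, is sketched below. The paper's adaptive-iterative algorithm adds image transformations and filters before each local decision, and min_chars is an assumed cut-off:

    import numpy as np
    import pydicom
    import pytesseract
    from PIL import Image

    def has_burned_in_text(path: str, min_chars: int = 4) -> bool:
        """Crude OCR check for burned-in text in a DICOM file."""
        ds = pydicom.dcmread(path)
        px = ds.pixel_array.astype(float)
        px = (255 * (px - px.min()) / max(np.ptp(px), 1.0)).astype(np.uint8)
        text = pytesseract.image_to_string(Image.fromarray(px))
        return sum(ch.isalnum() for ch in text) >= min_chars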

A repository of grade 1 and 2 meningioma MRIs in a public dataset for radiomics reproducibility tests

  • Vassantachart, April
  • Cao, Yufeng
  • Shen, Zhilei
  • Cheng, Karen
  • Gribble, Michael
  • Ye, Jason C.
  • Zada, Gabriel
  • Hurth, Kyle
  • Mathew, Anna
  • Guzman, Samuel
  • Yang, Wensha
Medical Physics 2023 Journal Article, cited 0 times
Purpose: Meningiomas are the most common primary brain tumors in adults with management varying widely based on World Health Organization (WHO) grade. However, there are limited datasets available for researchers to develop and validate radiomic models. The purpose of our manuscript is to report on the first dataset of meningiomas in The Cancer Imaging Archive (TCIA). Acquisition and validation methods: The dataset consists of pre-operative MRIs from 96 patients with meningiomas who underwent resection from 2010–2019 and include axial T1post and T2-FLAIR sequences—55 grade 1 and 41 grade 2. Meningioma grade was confirmed based on the 2016 WHO Bluebook classification guideline by two neuropathologists and one neuropathology fellow. The hyperintense T1post tumor and hyperintense T2-FLAIR regions were manually contoured on both sequences and resampled to an isotropic resolution of 1 × 1 × 1 mm3. The entire dataset was reviewed by a certified medical physicist. Data format and usage notes: The data was imported into TCIA for storage and can be accessed at https://doi.org/10.7937/0TKV-1A36. The total size of the dataset is 8.8GB, with 47 519 individual Digital Imaging and Communications in Medicine (DICOM) files consisting of 384 image series, and 192 structures. Potential applications: Grade 1 and 2 meningiomas have different treatment paradigms and are often treated based on radiologic diagnosis alone. Therefore, predicting grade prior to treatment is essential in clinical decision-making. This dataset will allow researchers to create models to auto-differentiate grade 1 and 2 meningiomas as well as evaluate for other pathologic features including mitotic index, brain invasion, and atypical features. Limitations of this study are the small sample size and inclusion of only two MRI sequences. However, there are no meningioma datasets on TCIA and limited datasets elsewhere although meningiomas are the most common intracranial tumor in adults.

Classification of benign and malignant lung nodules using image processing techniques

  • Vas, Moffy Crispin
  • Dessai, Amita
International Research Journal of Engineering and Technology 2017 Journal Article, cited 0 times
Website
Cancer is the second leading cause of death worldwide after heart disease, and lung cancer is the leading cause of death among all cancer types. Lung cancer is therefore a global concern, and this work deals with the detection of malignant lung cancer nodules, distinguishing them from benign nodules by processing computed tomography (CT) images with Haar wavelet decomposition and Haralick feature extraction, followed by artificial neural networks (ANN).

Multi-centre radiomics for prediction of recurrence following radical radiotherapy for head and neck cancers: Consequences of feature selection, machine learning classifiers and batch-effect harmonization

  • Varghese, Amal Joseph
  • Gouthamchand, Varsha
  • Sasidharan, Balu Krishna
  • Wee, Leonard
  • Sidhique, Sharief K
  • Rao, Julia Priyadarshini
  • Dekker, Andre
  • Hoebers, Frank
  • Devakumar, Devadhas
  • Irodi, Aparna
  • Balasingh, Timothy Peace
  • Godson, Henry Finlay
  • Joel, T
  • Mathew, Manu
  • Gunasingam Isiah, Rajesh
  • Pavamani, Simon Pradeep
  • Thomas, Hannah Mary T
Phys Imaging Radiat Oncol 2023 Journal Article, cited 1 times
Website
BACKGROUND AND PURPOSE: Radiomics models trained with limited single institution data are often not reproducible and generalisable. We developed radiomics models that predict loco-regional recurrence within two years of radiotherapy with private and public datasets and their combinations, to simulate small and multi-institutional studies and study the responsiveness of the models to feature selection, machine learning algorithms, centre-effect harmonization and increased dataset sizes. MATERIALS AND METHODS: 562 patients histologically confirmed and treated for locally advanced head-and-neck cancer (LA-HNC) from two public and two private datasets; one private dataset exclusively reserved for validation. Clinical contours of primary tumours were not recontoured and were used for Pyradiomics based feature extraction. ComBat harmonization was applied, and LASSO-Logistic Regression (LR) and Support Vector Machine (SVM) models were built. 95% confidence interval (CI) of 1000 bootstrapped area-under-the-Receiver-operating-curves (AUC) provided predictive performance. Responsiveness of the models' performance to the choice of feature selection methods, ComBat harmonization, machine learning classifier, single and pooled data was evaluated. RESULTS: LASSO and SelectKBest selected 14 and 16 features, respectively; three were overlapping. Without ComBat, the LR and SVM models for three institutional data showed AUCs (CI) of 0.513 (0.481-0.559) and 0.632 (0.586-0.665), respectively. Performances following ComBat revealed AUCs of 0.559 (0.536-0.590) and 0.662 (0.606-0.690), respectively. Compared to single cohort AUCs (0.562-0.629), SVM models from pooled data performed significantly better at AUC = 0.680. CONCLUSIONS: Multi-institutional retrospective data accentuates the existing variabilities that affect radiomics. Carefully designed prospective, multi-institutional studies and data sharing are necessary for clinically relevant head-and-neck cancer prognostication models.

Radiogenomics of High-Grade Serous Ovarian Cancer: Multireader Multi-Institutional Study from the Cancer Genome Atlas Ovarian Cancer Imaging Research Group

  • Vargas, Hebert Alberto
  • Huang, Erich P
  • Lakhman, Yulia
  • Ippolito, Joseph E
  • Bhosale, Priya
  • Mellnick, Vincent
  • Shinagare, Atul B
  • Anello, Maria
  • Kirby, Justin
  • Fevrier-Sullivan, Brenda
Radiology 2017 Journal Article, cited 3 times
Website

An Optimized Deep Learning Technique for Detecting Lung Cancer from CT Images

  • Vanitha, M.
  • Mangayarkarasi, R.
  • Angulakshmi, M.
  • Deepa, M.
2023 Book Section, cited 0 times
Of all cancers, lung cancer is a leading contributor to human deaths, and the number of people affected is increasing rapidly; India reports 70,000 cases per year. Technological improvements in the medical domain now help physicians detect disease symptoms precisely and cost-effectively, but the asymptomatic nature of lung cancer makes early-stage detection difficult, and for any chronic disease early prediction is essential for saving lives. In this chapter, a novel optimized CNN-based classifier is presented to alleviate practical hindrances in existing techniques, such as overfitting. Pre-processing, data augmentation, and detection of lung cancer from CT images using the CNN are performed on the LIDC-IDRI dataset. Test results show that the presented CNN-based classifier compares favourably with machine learning techniques in terms of quantitative metrics, achieving an accuracy of 98% for lung cancer detection.

Brain Tumor Classification using Support Vector Machine

  • Vani, N
  • Sowmya, A
  • Jayamma, N
International Research Journal of Engineering and Technology 2017 Journal Article, cited 0 times
Website

Evaluating Glioma Growth Predictions as a Forward Ranking Problem

  • van Garderen, Karin A.
  • van der Voort, Sebastian R.
  • Wijnenga, Maarten M. J.
  • Incekara, Fatih
  • Kapsas, Georgios
  • Gahrmann, Renske
  • Alafandi, Ahmad
  • Smits, Marion
  • Klein, Stefan
2022 Book Section, cited 0 times
The problem of tumor growth prediction is challenging, but promising results have been achieved with both model-driven and statistical methods. In this work, we present a framework for the evaluation of growth predictions that focuses on the spatial infiltration patterns, and specifically evaluating a prediction of future growth. We propose to frame the problem as a ranking problem rather than a segmentation problem. Using the average precision as a metric, we can evaluate the results with segmentations while using the full spatiotemporal prediction. Furthermore, by applying a biophysical tumor growth model to 21 patient cases we compare two schemes for fitting and evaluating predictions. By carefully designing a scheme that separates the prediction from the observations used for fitting the model, we show that a better fit of model parameters does not guarantee a better predictive power.
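
The proposed ranking evaluation can be reproduced in a few lines: voxels are ranked by the model's predicted infiltration and scored against the follow-up segmentation with average precision. The arrays below are random placeholders standing in for a real prediction and a real follow-up mask:

    import numpy as np
    from sklearn.metrics import average_precision_score

    pred = np.random.rand(64, 64, 64)                # predicted infiltration map
    future_seg = np.random.rand(64, 64, 64) > 0.95   # follow-up tumour segmentation

    ap = average_precision_score(future_seg.ravel(), pred.ravel())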

Combined molecular subtyping, grading, and segmentation of glioma using multi-task deep learning

  • van der Voort, S. R.
  • Incekara, F.
  • Wijnenga, M. M. J.
  • Kapsas, G.
  • Gahrmann, R.
  • Schouten, J. W.
  • Nandoe Tewarie, R.
  • Lycklama, G. J.
  • De Witt Hamer, P. C.
  • Eijgelaar, R. S.
  • French, P. J.
  • Dubbink, H. J.
  • Vincent, A. J. P. E.
  • Niessen, W. J.
  • van den Bent, M. J.
  • Smits, M.
  • Klein, S.
2022 Journal Article, cited 0 times
Website
BACKGROUND: Accurate characterization of glioma is crucial for clinical decision making. A delineation of the tumor is also desirable in the initial decision stages but is time-consuming. Previously, deep learning methods have been developed that can either non-invasively predict the genetic or histological features of glioma, or that can automatically delineate the tumor, but not both tasks at the same time. Here, we present our method that can predict the molecular subtype and grade, while simultaneously providing a delineation of the tumor. METHODS: We developed a single multi-task convolutional neural network that uses the full 3D, structural, pre-operative MRI scans to predict the IDH mutation status, the 1p/19q co-deletion status, and the grade of a tumor, while simultaneously segmenting the tumor. We trained our method using a patient cohort containing 1508 glioma patients from 16 institutes. We tested our method on an independent dataset of 240 patients from 13 different institutes. RESULTS: In the independent test set we achieved an IDH-AUC of 0.90, an 1p/19q co-deletion AUC of 0.85, and a grade AUC of 0.81 (grade II/III/IV). For the tumor delineation, we achieved a mean whole tumor DICE score of 0.84. CONCLUSIONS: We developed a method that non-invasively predicts multiple, clinically relevant features of glioma. Evaluation in an independent dataset shows that the method achieves a high performance and that it generalizes well to the broader clinical population. This first of its kind method opens the door to more generalizable, instead of hyper-specialized, AI methods.

Predicting the 1p/19q co-deletion status of presumed low grade glioma with an externally validated machine learning algorithm

  • van der Voort, Sebastian R
  • Incekara, Fatih
  • Wijnenga, Maarten MJ
  • Kapsas, Georgios
  • Gardeniers, Mayke
  • Schouten, Joost W
  • Starmans, Martijn PA
  • Tewarie, Rishie Nandoe
  • Lycklama, Geert J
  • French, Pim J
Clinical Cancer Research 2019 Journal Article, cited 0 times

Generating Artificial Artifacts for Motion Artifact Detection in Chest CT

  • van der Ham, Guus
  • Latisenko, Rudolfs
  • Tsiaousis, Michail
  • van Tulder, Gijs
2022 Conference Proceedings, cited 0 times
Website

Eliminating biasing signals in lung cancer images for prognosis predictions with deep learning.

  • van Amsterdam, W. A. C.
  • Verhoeff, J. J. C.
  • de Jong, P. A.
  • Leiner, T.
  • Eijkemans, M. J. C.
NPJ Digit Med 2019 Journal Article, cited 0 times
Website
Deep learning has shown remarkable results for image analysis and is expected to aid individual treatment decisions in health care. Treatment recommendations are predictions with an inherently causal interpretation. To use deep learning for these applications in the setting of observational data, deep learning methods must be made compatible with the required causal assumptions. We present a scenario with real-world medical images (CT-scans of lung cancer) and simulated outcome data. Through the data simulation scheme, the images contain two distinct factors of variation that are associated with survival, but represent a collider (tumor size) and a prognostic factor (tumor heterogeneity), respectively. When a deep network would use all the information available in the image to predict survival, it would condition on the collider and thereby introduce bias in the estimation of the treatment effect. We show that when this collider can be quantified, unbiased individual prognosis predictions are attainable with deep learning. This is achieved by (1) setting a dual task for the network to predict both the outcome and the collider and (2) enforcing a form of linear independence of the activation distributions of the last layer. Our method provides an example of combining deep learning and structural causal models to achieve unbiased individual prognosis predictions. Extensions of machine learning methods for applications to causal questions are required to attain the long-standing goal of personalized medicine supported by artificial intelligence.
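
A minimal sketch of the two ingredients, a dual task (outcome plus collider) and a penalty approximating linear independence of the last-layer activations, might look as follows in PyTorch; the backbone, the penalty form, and its weighting are assumptions:

    import torch
    import torch.nn as nn

    class DualHeadNet(nn.Module):
        def __init__(self, backbone: nn.Module, feat_dim: int):
            super().__init__()
            self.backbone = backbone
            self.outcome_head = nn.Linear(feat_dim, 1)   # survival
            self.collider_head = nn.Linear(feat_dim, 1)  # e.g. tumour size

        def forward(self, x):
            z = self.backbone(x)
            return self.outcome_head(z), self.collider_head(z), z

    def decorrelation_penalty(z: torch.Tensor) -> torch.Tensor:
        """Penalise off-diagonal covariance of the last-layer activations."""
        zc = z - z.mean(dim=0, keepdim=True)
        cov = zc.T @ zc / max(z.shape[0] - 1, 1)
        off_diag = cov - torch.diag(torch.diag(cov))
        return (off_diag ** 2).mean()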

Enhancement of multimodality texture-based prediction models via optimization of PET and MR image acquisition protocols: a proof of concept

  • Vallières, Martin
  • Laberge, Sébastien
  • Diamant, André
  • El Naqa, Issam
Physics in Medicine & Biology 2017 Journal Article, cited 3 times
Website
Texture-based radiomic models constructed from medical images have the potential to support cancer treatment management via personalized assessment of tumour aggressiveness. While the identification of stable texture features under varying imaging settings is crucial for the translation of radiomics analysis into routine clinical practice, we hypothesize in this work that a complementary optimization of image acquisition parameters prior to texture feature extraction could enhance the predictive performance of texture-based radiomic models. As a proof of concept, we evaluated the possibility of enhancing a model constructed for the early prediction of lung metastases in soft-tissue sarcomas by optimizing PET and MR image acquisition protocols via computerized simulations of image acquisitions with varying parameters. Simulated PET images from 30 STS patients were acquired by varying the extent of axial data combined per slice ('span'). Simulated T 1-weighted and T 2-weighted MR images were acquired by varying the repetition time and echo time in a spin-echo pulse sequence, respectively. We analyzed the impact of the variations of PET and MR image acquisition parameters on individual textures, and we investigated how these variations could enhance the global response and the predictive properties of a texture-based model. Our results suggest that it is feasible to identify an optimal set of image acquisition parameters to improve prediction performance. The model constructed with textures extracted from simulated images acquired with a standard clinical set of acquisition parameters reached an average AUC of [Formula: see text] in bootstrap testing experiments. In comparison, the model performance significantly increased using an optimal set of image acquisition parameters ([Formula: see text]), with an average AUC of [Formula: see text]. Ultimately, specific acquisition protocols optimized to generate superior radiomics measurements for a given clinical problem could be developed and standardized via dedicated computer simulations and thereafter validated using clinical scanners.

Radiomics strategies for risk assessment of tumour failure in head-and-neck cancer

  • Vallières, Martin
  • Kay-Rivest, Emily
  • Perrin, Léo Jean
  • Liem, Xavier
  • Furstoss, Christophe
  • Aerts, Hugo JWL
  • Khaouam, Nader
  • Nguyen-Tan, Phuc Felix
  • Wang, Chang-Shu
  • Sultanem, Khalil
Scientific Reports 2017 Journal Article, cited 32 times
Website
Quantitative extraction of high-dimensional mineable data from medical images is a process known as radiomics. Radiomics is foreseen as an essential prognostic tool for cancer risk assessment and the quantification of intratumoural heterogeneity. In this work, 1615 radiomic features (quantifying tumour image intensity, shape, texture) extracted from pre-treatment FDG-PET and CT images of 300 patients from four different cohorts were analyzed for the risk assessment of locoregional recurrences (LR) and distant metastases (DM) in head-and-neck cancer. Prediction models combining radiomic and clinical variables were constructed via random forests and imbalance-adjustment strategies using two of the four cohorts. Independent validation of the prediction and prognostic performance of the models was carried out on the other two cohorts (LR: AUC = 0.69 and CI = 0.67; DM: AUC = 0.86 and CI = 0.88). Furthermore, the results obtained via Kaplan-Meier analysis demonstrated the potential of radiomics for assessing the risk of specific tumour outcomes using multiple stratification groups. This could have important clinical impact, notably by allowing for a better personalization of chemo-radiation treatments for head-and-neck cancer patients from different risk groups.
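
A simple stand-in for the model-building step, a random forest with a basic class-imbalance adjustment, is shown below. The paper uses dedicated imbalance-adjustment strategies during training, so class_weight="balanced" is only an illustrative substitute, and X_train, y_train, X_test stand for the radiomic-plus-clinical feature matrices and outcome labels:

    from sklearn.ensemble import RandomForestClassifier

    clf = RandomForestClassifier(n_estimators=500,
                                 class_weight="balanced",  # imbalance adjustment
                                 random_state=0)
    clf.fit(X_train, y_train)
    risk = clf.predict_proba(X_test)[:, 1]  # predicted LR/DM risk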

A radiomics model from joint FDG-PET and MRI texture features for the prediction of lung metastases in soft-tissue sarcomas of the extremities

  • Vallières, Martin
  • Freeman, CR
  • Skamene, SR
  • El Naqa, I
2015 Journal Article, cited 199 times
Website
This study aims at developing a joint FDG-PET and MRI texture-based model for the early evaluation of lung metastasis risk in soft-tissue sarcomas (STSs). We investigate if the creation of new composite textures from the combination of FDG-PET and MR imaging information could better identify aggressive tumours. Towards this goal, a cohort of 51 patients with histologically proven STSs of the extremities was retrospectively evaluated. All patients had pre-treatment FDG-PET and MRI scans comprised of T1-weighted and T2-weighted fat-suppression sequences (T2FS). Nine non-texture features (SUV metrics and shape features) and forty-one texture features were extracted from the tumour region of separate (FDG-PET, T1 and T2FS) and fused (FDG-PET/T1 and FDG-PET/T2FS) scans. Volume fusion of the FDG-PET and MRI scans was implemented using the wavelet transform. The influence of six different extraction parameters on the predictive value of textures was investigated. The incorporation of features into multivariable models was performed using logistic regression. The multivariable modeling strategy involved imbalance-adjusted bootstrap resampling in the following four steps leading to final prediction model construction: (1) feature set reduction; (2) feature selection; (3) prediction performance estimation; and (4) computation of model coefficients. Univariate analysis showed that the isotropic voxel size at which texture features were extracted had the most impact on predictive value. In multivariable analysis, texture features extracted from fused scans significantly outperformed those from separate scans in terms of lung metastases prediction estimates. The best performance was obtained using a combination of four texture features extracted from FDG-PET/T1 and FDG-PET/T2FS scans. This model reached an area under the receiver-operating characteristic curve of 0.984 +/- 0.002, a sensitivity of 0.955 +/- 0.006, and a specificity of 0.926 +/- 0.004 in bootstrapping evaluations. Ultimately, lung metastasis risk assessment at diagnosis of STSs could improve patient outcomes by allowing better treatment adaptation.
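
Volume fusion via the wavelet transform, as used above for the FDG-PET/MRI composites, can be sketched with PyWavelets for two co-registered volumes of equal shape. The average/max-magnitude fusion rule is a common choice and an assumption, not necessarily the authors' exact weights:

    import numpy as np
    import pywt

    def wavelet_fuse(pet: np.ndarray, mri: np.ndarray, wavelet: str = "db1"):
        """Single-level 3D wavelet fusion of two registered volumes."""
        cp, cm = pywt.dwtn(pet, wavelet), pywt.dwtn(mri, wavelet)
        fused = {}
        for band in cp:
            if band == "aaa":  # approximation band: average
                fused[band] = 0.5 * (cp[band] + cm[band])
            else:              # detail bands: keep larger-magnitude coefficients
                fused[band] = np.where(np.abs(cp[band]) >= np.abs(cm[band]),
                                       cp[band], cm[band])
        return pywt.idwtn(fused, wavelet)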

Efficient CT Image Reconstruction in a GPU Parallel Environment

  • Valencia Pérez, Tomas A
  • Hernández López, Javier M
  • Moreno-Barbosa, Eduardo
  • de Celis Alonso, Benito
  • Palomino Merino, Martin R
  • Castaño Meneses, Victor M
Tomography 2020 Journal Article, cited 0 times
Website
Computed tomography is nowadays an indispensable tool in medicine used to diagnose multiple diseases. In clinical and emergency room environments, the speed of acquisition and information processing are crucial. CUDA is a software architecture used to work with NVIDIA graphics processing units. In this paper a methodology to accelerate tomographic image reconstruction based on maximum likelihood expectation maximization iterative algorithm and combined with the use of graphics processing units programmed in CUDA framework is presented. Implementations developed here are used to reconstruct images with clinical use. Timewise, parallel versions showed improvement with respect to serial implementations. These differences reached, in some cases, 2 orders of magnitude in time while preserving image quality. The image quality and reconstruction times were not affected significantly by the addition of Poisson noise to projections. Furthermore, our implementations showed good performance when compared with reconstruction methods provided by commercial software. One of the goals of this work was to provide a fast, portable, simple, and cheap image reconstruction system, and our results support the statement that the goal was achieved.

Implementación de algoritmos de reconstrucción tomográfica mediante programación paralela (CUDA)

  • VALENCIA PéREZ, Tomás Antonio
2020 Thesis, cited 0 times
Website
Medical image reconstruction is key to a wide range of technologies. For classical computed tomography systems, the number of signals measured per second has increased exponentially over the last four decades, while the computational complexity of most of the algorithms used has not changed significantly. A major interest and challenge is to provide optimal image quality with the lowest possible radiation dose to the patient. One solution, and an active field of research addressing this problem, is iterative methods for medical image reconstruction. Their complexity is many times that of the classical analytical methods used in almost all commercially available systems. This thesis investigates the use of graphics cards in the field of iterative medical image reconstruction. Different approaches to GPU (Graphics Processing Unit) accelerated image reconstruction algorithms are presented and evaluated.

Novel approaches for glioblastoma treatment: Focus on tumor heterogeneity, treatment resistance, and computational tools

  • Valdebenito, Silvana
  • D'Amico, Daniela
  • Eugenin, Eliseo
Cancer Reports 2019 Journal Article, cited 0 times
Background: Glioblastoma (GBM) is a highly aggressive primary brain tumor. Currently, the suggested line of action is surgical resection followed by radiotherapy and treatment with the adjuvant temozolomide, a DNA alkylating agent. However, the ability of tumor cells to deeply infiltrate the surrounding tissue makes complete resection practically impossible; in consequence, the probability of tumor recurrence is high and the prognosis is poor. GBM is highly heterogeneous and adapts to treatment in most individuals, yet these mechanisms of adaptation are unknown. Recent findings: In this review, we discuss the recent discoveries in molecular and cellular heterogeneity, mechanisms of therapeutic resistance, and new technological approaches to identify new treatments for GBM. The combination of biology and computational resources allows the use of algorithms to apply artificial intelligence and machine learning approaches to identify potential therapeutic pathways and new drug candidates. Conclusion: These new approaches will generate a better understanding of GBM pathogenesis and will result in novel treatments to reduce or block the devastating consequences of brain cancers.

Cancer Risk Assessment Using Quantitative Imaging Features from Solid Tumors and Surrounding Structures

  • Uthoff, Johanna Mariah.
2019 Thesis, cited 0 times
Website
Medical imaging is a powerful tool for clinical practice allowing in-vivo insight into a patient’s disease state. Many modalities exist, allowing for the collection of diverse information about the underlying tissue structure and/or function. Traditionally, medical professionals use visual assessment of scans to search for disease, assess relevant disease predictors and propose clinical intervention steps. However, the imaging data contain potentially useful information beyond visual assessment by trained professional. To better use the full depth of information contained in the image sets, quantitative imaging characteristics (QICs), can be extracted using mathematical and statistical operations on regions or volumes of interests. The process of using QICs is a pipeline typically involving image acquisition, segmentation, feature extraction, set qualification and analysis of informatics. These descriptors can be integrated into classification methods focused on differentiating between disease states. Lung cancer, a leading cause of death worldwide, is a clear application for advanced in-vivo imaging based classification methods. We hypothesize that QICs extracted from spatially-linked and size-standardized regions of surrounding lung tissue can improve risk assessment quality over features extracted from only the lung tumor, or nodule, regions. We require a robust and flexible pipeline for the extraction and selection of disease QICs in computed tomography (CT). This includes creating an optimized method for feature extraction, reduction, selection, and predictive analysis which could be applied to a multitude of disease imaging problems. This thesis expanded a developmental pipeline for machine learning using a large multicenter controlled CT dataset of lung nodules to extract CT QICs from the nodule, surrounding parenchyma, and greater lung volume and explore CT feature interconnectivity. Furthermore, it created a validated pipeline that is more computationally and time efficient and with stability of performance. The modularity of the optimized pipeline facilitates broader application of the tool for applications beyond CT identified pulmonary nodules. We have developed a flexible and robust pipeline for the extraction and selection of Quantitative Imaging Characteristics for Risk Assessment from the Tumor and its Environment (QIC-RATE). The results presented in this thesis support our hypothesis, showing that classification of lung and breast tumors is improved through inclusion of peritumoral signal. Optimal performance in the lung application achieved with the QIC-RATE tool incorporating 75% of the nodule diameter equivalent in perinodular parenchyma with a development performance of 100% accuracy. The stability of performance was reflected in the maintained high accuracy (98%) in the independent validation dataset of 100 CT from a separate institution. In the breast QIC-RATE application, optimal performance was achieved using 25% of the tumor diameter in breast tissue with 90% accuracy in development, 82% in validation. We address the need for more complex assessments of medically imaged tumors through the QIC-RATE pipeline; a modular, scalable, transferrable pipeline for extracting, reducing and selecting, and training a classification tool based on QICs. Altogether, this research has resulted in a risk assessment methodology that is validated, stable, high performing, adaptable, and transparent.

Machine learning approach for distinguishing malignant and benign lung nodules utilizing standardized perinodular parenchymal features from CT

  • Uthoff, J.
  • Stephens, M. J.
  • Newell, J. D., Jr.
  • Hoffman, E. A.
  • Larson, J.
  • Koehn, N.
  • De Stefano, F. A.
  • Lusk, C. M.
  • Wenzlaff, A. S.
  • Watza, D.
  • Neslund-Dudas, C.
  • Carr, L. L.
  • Lynch, D. A.
  • Schwartz, A. G.
  • Sieren, J. C.
Med Phys 2019 Journal Article, cited 62 times
Website
PURPOSE: Computed tomography (CT) is an effective method for detecting and characterizing lung nodules in vivo. With the growing use of chest CT, the detection frequency of lung nodules is increasing. Noninvasive methods to distinguish malignant from benign nodules have the potential to decrease the clinical burden, risk, and cost involved in follow-up procedures on the large number of false-positive lesions detected. This study examined the benefit of including perinodular parenchymal features in machine learning (ML) tools for pulmonary nodule assessment. METHODS: Lung nodule cases with pathology confirmed diagnosis (74 malignant, 289 benign) were used to extract quantitative imaging characteristics from computed tomography scans of the nodule and perinodular parenchyma tissue. A ML tool development pipeline was employed using k-medoids clustering and information theory to determine efficient predictor sets for different amounts of parenchyma inclusion and build an artificial neural network classifier. The resulting ML tool was validated using an independent cohort (50 malignant, 50 benign). RESULTS: The inclusion of parenchymal imaging features improved the performance of the ML tool over exclusively nodular features (P < 0.01). The best performing ML tool included features derived from nodule diameter-based surrounding parenchyma tissue quartile bands. We demonstrate similar high-performance values on the independent validation cohort (AUC-ROC = 0.965). A comparison using the independent validation cohort with the Fleischner pulmonary nodule follow-up guidelines demonstrated a theoretical reduction in recommended follow-up imaging and procedures. CONCLUSIONS: Radiomic features extracted from the parenchyma surrounding lung nodules contain valid signals with spatial relevance for the task of lung cancer risk classification. Through standardization of feature extraction regions from the parenchyma, ML tool validation performance of 100% sensitivity and 96% specificity was achieved.
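
The diameter-scaled parenchyma bands can be built from a distance transform of the nodule mask, with each band one quarter of the nodule's equivalent diameter wide. This is a sketch of the banding idea; the paper's exact band definition may differ:

    import numpy as np
    from scipy import ndimage

    def quartile_band_masks(nodule_mask, spacing_mm, n_bands=4):
        """Concentric parenchyma bands outside the nodule."""
        vox_vol = float(np.prod(spacing_mm))
        diam = 2 * (3 * nodule_mask.sum() * vox_vol / (4 * np.pi)) ** (1 / 3)
        dist = ndimage.distance_transform_edt(~nodule_mask.astype(bool),
                                              sampling=spacing_mm)
        width = diam / n_bands
        return [(dist > i * width) & (dist <= (i + 1) * width)
                for i in range(n_bands)]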

Information theory optimization based feature selection in breast mammography lesion classification

  • Uthoff, Johanna
  • Sieren, Jessica C.
2018 Conference Paper, cited 0 times
Quantitative imaging features of intensity, texture, and shape were extracted from breast lesions and surrounding tissue in 287 mammograms (150 malignant, 137 benign). A feature set reduction method to remove highly intra-correlated features was devised using k-medoids clustering and k-fold cross validation. A novel feature selection method using information theory was introduced, which builds a feature set for classification by determining a group of class-informative features with low set co-information. An artificial neural network was built from the selected feature set using 10 hidden-layer nodes and the tanh activation function. The resulting computer-aided diagnosis tool achieved a training accuracy of 96.2%, sensitivity of 97.6%, specificity of 95.2%, and area-under-the-curve of 0.971, along with 97.1% sensitivity and 94.9% specificity on a blinded validation set.
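
An mRMR-style stand-in for the described selection, maximising class-informativeness while penalising redundancy with already-chosen features, is sketched below; the paper's set co-information criterion is richer than the pairwise correlation penalty used here:

    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    def greedy_info_select(X, y, n_keep=10):
        """Greedy relevance-minus-redundancy feature selection."""
        relevance = mutual_info_classif(X, y, random_state=0)
        chosen = [int(np.argmax(relevance))]
        while len(chosen) < n_keep:
            scores = np.full(X.shape[1], -np.inf)
            for j in range(X.shape[1]):
                if j in chosen:
                    continue
                redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, c])[0, 1])
                                      for c in chosen])
                scores[j] = relevance[j] - redundancy
            chosen.append(int(np.argmax(scores)))
        return chosen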

Efficacy of exponentiation method with a convolutional neural network for classifying lung nodules on CT images by malignancy level

  • Usuzaki, Takuma
  • Takahashi, Kengo
  • Takagi, Hidenobu
  • Ishikuro, Mami
  • Obara, Taku
  • Yamaura, Takumi
  • Kamimoto, Masahiro
  • Majima, Kazuhiro
European Radiology 2023 Journal Article, cited 0 times
Website
Objectives: The aim of this study was to examine the performance of a convolutional neural network (CNN) combined with exponentiating each pixel value in classifying benign and malignant lung nodules on computed tomography (CT) images. Materials and methods: Images in the Lung Image Database Consortium-Image Database Resource Initiative (LIDC-IDRI) were analyzed. Four CNN models were then constructed to classify the lung nodules by malignancy level (malignancy level 1 vs. 2, malignancy level 1 vs. 3, malignancy level 1 vs. 4, and malignancy level 1 vs. 5). The exponentiation method was applied for exponent values of 1.0 to 10.0 in increments of 0.5. Accuracy, sensitivity, specificity, and area under the curve of receiver operating characteristics (AUC-ROC) were calculated. These statistics were compared between an exponent value of 1.0 and all other exponent values in each model by the Mann–Whitney U-test. Results: In malignancy 1 vs. 4, maximum test accuracy (MTA; exponent value = 2.0, 3.0, 3.5, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, and 10.0) and specificity (6.5, 7.0, and 9.0) were improved by up to 0.012 and 0.037, respectively. In malignancy 1 vs. 5, MTA (6.5 and 7.0) and sensitivity (1.5) were improved by up to 0.030 and 0.0040, respectively. Conclusions: The exponentiation method improved the performance of the CNN in the task of classifying lung nodules on CT images as benign or malignant. The exponentiation method demonstrated two advantages: improved accuracy, and the ability to adjust sensitivity and specificity by selecting an appropriate exponent value. Clinical relevance statement: Adjustment of sensitivity and specificity by selecting an exponent value enables the construction of proper CNN models for screening, diagnosis, and treatment processes among patients with lung nodules. Key Points: • The exponentiation method improved the performance of the convolutional neural network. • Contrast accentuation by the exponentiation method may derive features of lung nodules. • Sensitivity and specificity can be adjusted by selecting an exponent value.
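
The exponentiation method itself is a one-line intensity transform. A sketch, with min-max normalisation assumed so that the exponent changes only contrast, not range:

    import numpy as np

    def exponentiate(img: np.ndarray, k: float) -> np.ndarray:
        """Raise normalised intensities to the power k (1.0-10.0 in the study)."""
        x = (img - img.min()) / max(np.ptp(img), 1e-8)
        return x ** k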

Identifying key factors for predicting O6-Methylguanine-DNA methyltransferase status in adult patients with diffuse glioma: a multimodal analysis of demographics, radiomics, and MRI by variable Vision Transformer

  • Usuzaki, T.
  • Takahashi, K.
  • Inamori, R.
  • Morishita, Y.
  • Shizukuishi, T.
  • Takagi, H.
  • Ishikuro, M.
  • Obara, T.
  • Takase, K.
Neuroradiology 2024 Journal Article, cited 0 times
Website
PURPOSE: This study aimed to perform multimodal analysis by vision transformer (vViT) in predicting O6-methylguanine-DNA methyl transferase (MGMT) promoter status among adult patients with diffuse glioma using demographics (sex and age), radiomic features, and MRI. METHODS: The training and test datasets contained 122 patients with 1,570 images and 30 patients with 484 images, respectively. The radiomic features were extracted from enhancing tumors (ET), necrotic tumor cores (NCR), and the peritumoral edematous/infiltrated tissues (ED) using contrast-enhanced T1-weighted images (CE-T1WI) and T2-weighted images (T2WI). The vViT had 9 sectors; 1 demographic sector, 6 radiomic sectors (CE-T1WI ET, CE-T1WI NCR, CE-T1WI ED, T2WI ET, T2WI NCR, and T2WI ED), 2 image sectors (CE-T1WI, and T2WI). Accuracy and area under the curve of receiver-operating characteristics (AUC-ROC) were calculated for the test dataset. The performance of vViT was compared with AlexNet, GoogleNet, VGG16, and ResNet by McNemar and Delong test. Permutation importance (PI) analysis with the Mann-Whitney U test was performed. RESULTS: The accuracy was 0.833 (95% confidence interval [95%CI]: 0.714-0.877) and the area under the curve of receiver-operating characteristics was 0.840 (0.650-0.995) in the patient-based analysis. The vViT had higher accuracy than VGG16 and ResNet, and had higher AUC-ROC than GoogleNet (p<0.05). The ED radiomic features extracted from the T2-weighted image demonstrated the highest importance (PI=0.239, 95%CI: 0.237-0.240) among all other sectors (p<0.0001). CONCLUSION: The vViT is a competent deep learning model in predicting MGMT status. The ED radiomic features of the T2-weighted image demonstrated the most dominant contribution.

Brain tumor classification from multi-modality MRI using wavelets and machine learning

  • Usman, Khalid
  • Rajpoot, Kashif
Pattern Analysis and Applications 2017 Journal Article, cited 17 times
Website

Prognostic value of multimodal MRI tumor features in Glioblastoma multiforme using textural features analysis

  • Upadhaya, Taman
  • Morvan, Yannick
  • Stindel, Eric
  • Le Reste, Pierre-Jean
  • Hatt, Mathieu
2015 Conference Proceedings, cited 12 times
Website
Image-derived features (“radiomics”) are increasingly being considered for patient management in (neuro)oncology and radiotherapy. In Glioblastoma multiforme (GBM), simple features are often used by clinicians in clinical practice, such as the size of the tumor or the relative sizes of the necrosis and active tumor. First-order statistics provide limited characterization power because they do not incorporate spatial information and thus cannot differentiate patterns. In this work, we present the methodological framework for building a prognostic model based on heterogeneity textural features of multimodal MRI sequences (T1, T1-contrast, T2 and FLAIR) in GBM. The proposed workflow consists of i) registering the available 3D multimodal MR images and segmenting the tumor volume, ii) extracting image features such as heterogeneity metrics and shape indices, iii) building a prognostic model using a Support Vector Machine by selecting, ranking and combining optimal features. We present preliminary results obtained for the classification of 40 patients into short (≤ 15 months) or long (> 15 months) overall survival, validated using leave-one-out cross-validation. Our results suggest that several textural features in each MR sequence have prognostic value in GBM, a classification accuracy of 90% (sensitivity 85%, specificity 95%) being obtained by combining both T1 sequences. Future work will consist of i) adding more patients for validation using training and testing groups, ii) considering additional features, iii) building a fully multimodal MRI model by combining features from more than two sequences, iv) considering survival as a continuous variable and v) combining image-derived features with clinical and histopathological data to build an even more accurate model.

Prognosis classification in glioblastoma multiforme using multimodal MRI derived heterogeneity textural features: impact of pre-processing choices

  • Upadhaya, Taman
  • Morvan, Yannick
  • Stindel, Eric
  • Le Reste, Pierre-Jean
  • Hatt, Mathieu
2016 Conference Paper, cited 4 times
Website

A framework for multimodal imaging-based prognostic model building: Preliminary study on multimodal MRI in Glioblastoma Multiforme

  • Upadhaya, T
  • Morvan, Y
  • Stindel, E
  • Le Reste, P-J
  • Hatt, M
IRBM 2015 Journal Article, cited 11 times
Website
In Glioblastoma Multiforme (GBM), image-derived features ("radiomics") could help in individualizing patient management. Simple geometric features of tumors (necrosis, edema, active tumor) and first-order statistics in Magnetic Resonance Imaging (MRI) are used in clinical practice. However, these features provide limited characterization power because they do not incorporate spatial information and thus cannot differentiate patterns. The aim of this work is to develop and evaluate a methodological framework dedicated to building a prognostic model based on heterogeneity textural features of multimodal MRI sequences (T1, T1-contrast, T2 and FLAIR) in GBM. The proposed workflow consists of i) registering the available 3D multimodal MR images and segmenting the tumor volume, ii) extracting image features such as heterogeneity metrics and iii) building a prognostic model by selecting, ranking and combining optimal features through machine learning (Support Vector Machine). This framework was applied to 40 histologically proven GBM patients with the endpoint being overall survival (OS) classified as above or below the median survival (15 months). The models combining features from a maximum of two modalities were evaluated using leave-one-out cross-validation (LOOCV). A classification accuracy of 90% (sensitivity 85%, specificity 95%) was obtained by combining features from T1 pre-contrast and T1 post-contrast sequences. Our results suggest that several textural features in each MR sequence have prognostic value in GBM.
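
The evaluation scheme, an SVM over selected textural features scored by leave-one-out cross-validation, can be sketched with scikit-learn. The kernel, C, and the random placeholder data are assumptions:

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    X = np.random.rand(40, 8)        # placeholder: 40 patients x selected features
    y = np.random.randint(0, 2, 40)  # placeholder: OS above/below median

    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    accuracy = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()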

Enabling machine learning in X-ray-based procedures via realistic simulation of image formation

  • Unberath, Mathias
  • Zaech, Jan-Nico
  • Gao, Cong
  • Bier, Bastian
  • Goldmann, Florian
  • Lee, Sing Chun
  • Fotouhi, Javad
  • Taylor, Russell
  • Armand, Mehran
  • Navab, Nassir
International Journal of Computer Assisted Radiology and Surgery 2019 Journal Article, cited 0 times

Super-Resolution Imaging of Mammograms Based on the Super-Resolution Convolutional Neural Network

  • Umehara, Kensuke
  • Ota, Junko
  • Ishida, Takayuki
Open Journal of Medical Imaging 2017 Journal Article, cited 0 times
Website

Impact of image preprocessing on the scanner dependence of multi-parametric MRI radiomic features and covariate shift in multi-institutional glioblastoma datasets

  • Um, Hyemin
  • Tixier, Florent
  • Bermudez, Dalton
  • Deasy, Joseph O
  • Young, Robert J
  • Veeraraghavan, Harini
Physics in Medicine & Biology 2019 Journal Article, cited 0 times
Website
Recent advances in radiomics have enhanced the value of medical imaging in various aspects of clinical practice, but a crucial component that remains to be investigated further is the robustness of quantitative features to imaging variations and across multiple institutions. In the case of MRI, signal intensity values vary according to the acquisition parameters used, yet no consensus exists on which preprocessing techniques are favorable in reducing scanner-dependent variability of image-based features. Hence, the purpose of this study was to assess the impact of common image preprocessing methods on the scanner dependence of MRI radiomic features in multi-institutional glioblastoma multiforme (GBM) datasets. Two independent GBM cohorts were analyzed: 50 cases from the TCGA-GBM dataset and 111 cases acquired in our institution, and each case consisted of 3 MRI sequences viz. FLAIR, T1-weighted, and T1-weighted post-contrast. Five image preprocessing techniques were examined: 8-bit global rescaling, 8-bit local rescaling, bias field correction, histogram standardization, and isotropic resampling. A total of 420 features divided into 8 categories representing texture, shape, edge, and intensity histogram were extracted. Two distinct imaging parameters were considered: scanner manufacturer and scanner magnetic field strength. Wilcoxon tests identified features robust to the considered acquisition parameters under the selected image preprocessing techniques. A machine learning-based strategy was implemented to measure the covariate shift between the analyzed datasets using features computed with the aforementioned preprocessing methods. Finally, radiomic scores (rad-scores) were constructed by identifying features relevant to patients' overall survival after eliminating those impacted by scanner variability. These were then evaluated for their prognostic significance through Kaplan-Meier and Cox hazards regression analyses. Our results demonstrate that, overall, histogram standardization contributes the most to reducing radiomic feature variability: it reduced the covariate shift for 3 feature categories and successfully discriminated patients into groups of different survival risk.
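
Histogram standardisation, the preprocessing step found most effective above, is commonly implemented as histogram matching to a reference scan. A sketch with SimpleITK, where the level and match-point counts are assumptions:

    import SimpleITK as sitk

    def histogram_standardise(moving_path: str, reference_path: str):
        """Match a scan's intensity histogram to a reference scan."""
        moving = sitk.ReadImage(moving_path, sitk.sitkFloat32)
        reference = sitk.ReadImage(reference_path, sitk.sitkFloat32)
        f = sitk.HistogramMatchingImageFilter()
        f.SetNumberOfHistogramLevels(256)
        f.SetNumberOfMatchPoints(15)
        f.ThresholdAtMeanIntensityOn()
        return f.Execute(moving, reference)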

Brain MR Image Enhancement for Tumor Segmentation Using 3D U-Net

  • Ullah, F.
  • Ansari, S. U.
  • Hanif, M.
  • Ayari, M. A.
  • Chowdhury, M. E. H.
  • Khandakar, A. A.
  • Khan, M. S.
Sensors (Basel) 2021 Journal Article, cited 0 times
MRI images are visually inspected by domain experts for the analysis and quantification of the tumorous tissues. Due to the large volumetric data, manual reporting on the images is subjective, cumbersome, and error prone. To address these problems, automatic image analysis tools are employed for tumor segmentation and other subsequent statistical analysis. However, prior to the tumor analysis and quantification, an important challenge lies in the pre-processing. In the present study, permutations of different pre-processing methods are comprehensively investigated. In particular, the study focused on Gibbs ringing artifact removal, bias field correction, intensity normalization, and adaptive histogram equalization (AHE). The pre-processed MRI data are then passed to a 3D U-Net for automatic segmentation of brain tumors. The segmentation results demonstrated the best performance with the combination of two techniques, i.e., Gibbs ringing artifact removal and bias-field correction. The proposed technique achieved mean Dice scores of 0.91, 0.86, and 0.70 for the whole tumor, tumor core, and enhancing tumor, respectively. The testing mean Dice scores achieved by the system are 0.90, 0.83, and 0.71 for the whole tumor, core tumor, and enhancing tumor, respectively. The novelty of this work concerns a robust pre-processing sequence for improving the segmentation accuracy of MR images. The proposed method surpassed the test Dice scores of the state-of-the-art methods. The results are benchmarked against the existing techniques used in the Brain Tumor Segmentation (BraTS) 2018 challenge.
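
Of the winning combination (Gibbs-ringing artifact removal plus bias-field correction), the bias-field half is readily sketched with SimpleITK's N4 filter; the Otsu foreground mask is a common convenience, not necessarily the authors' choice:

    import SimpleITK as sitk

    def n4_bias_correct(img_path: str) -> sitk.Image:
        """N4 bias-field correction with an Otsu foreground mask."""
        img = sitk.ReadImage(img_path, sitk.sitkFloat32)
        mask = sitk.OtsuThreshold(img, 0, 1, 200)
        corrector = sitk.N4BiasFieldCorrectionImageFilter()
        return corrector.Execute(img, mask)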

Towards survival prediction of cancer patients using medical images

  • Ul Haq, Nazeef
  • Tahir, Bilal
  • Firdous, Samar
  • Amir Mehmood, Muhammad
PeerJ Computer Science 2022 Journal Article, cited 0 times
Website
Survival prediction of a patient is a critical task in clinical medicine for physicians and patients to make an informed decision. Several survival and risk scoring methods have been developed to estimate the survival score of patients using clinical information. For instance, the Global Registry of Acute Coronary Events (GRACE) and Thrombolysis in Myocardial Infarction (TIMI) risk scores were developed for the survival prediction of heart patients. Recently, state-of-the-art medical imaging and analysis techniques have paved the way for survival prediction of cancer patients by understanding key features extracted from Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) scanned images with the help of image processing and machine learning techniques. However, survival prediction remains challenging because of the complexity of benchmarking image features, feature selection methods, and machine learning models. In this article, we benchmark the performance of 156 visual features from radiomic and hand-crafted feature classes, six feature selection methods, and 10 machine learning models. MRI-scanned Brain Tumor Segmentation (BraTS) and CT-scanned non-small cell lung cancer (NSCLC) datasets are used to train the classification and regression models. Our results highlight that logistic regression performs best for classification, with 66% and 54% accuracy for the BraTS and NSCLC datasets, respectively. Moreover, our analysis of the best-performing features shows that age is a common and significant feature for survival prediction, and that gray-level and shape-based features play a vital role in regression. We believe the study can help oncologists, radiologists, and medical imaging researchers understand and automate the procedure of decision-making and prognosis of cancer patients.
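One cell of the benchmark grid described above, pairing a feature selection method with a classifier, might look like the following scikit-learn sketch (synthetic stand-in data with the study's 156-feature dimensionality, not the BraTS or NSCLC datasets):

    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Stand-in data: 200 patients, 156 visual features, binary survival class.
    X, y = make_classification(n_samples=200, n_features=156, random_state=0)

    pipeline = make_pipeline(
        StandardScaler(),
        SelectKBest(mutual_info_classif, k=20),  # one of six selection methods
        LogisticRegression(max_iter=1000),       # one of ten models
    )
    accuracy = cross_val_score(pipeline, X, y, cv=5, scoring="accuracy").mean()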

Deep learning for end-to-end kidney cancer diagnosis on multi-phase abdominal computed tomography

  • Uhm, K. H.
  • Jung, S. W.
  • Choi, M. H.
  • Shin, H. K.
  • Yoo, J. I.
  • Oh, S. W.
  • Kim, J. Y.
  • Kim, H. G.
  • Lee, Y. J.
  • Youn, S. Y.
  • Hong, S. H.
  • Ko, S. J.
NPJ Precis Oncol 2021 Journal Article, cited 0 times
Website
In 2020, an estimated 73,750 kidney cancer cases were diagnosed, and 14,830 people died from the disease in the United States. Preoperative multi-phase abdominal computed tomography (CT) is often used for detecting lesions and classifying histologic subtypes of renal tumors to avoid unnecessary biopsy or surgery. However, there exists inter-observer variability due to subtle differences in the imaging features of tumor subtypes, which makes decisions on treatment challenging. While deep learning has recently been applied to the automated diagnosis of renal tumors, classification of a wide range of subtype classes has not yet been sufficiently studied. In this paper, we propose an end-to-end deep learning model for the differential diagnosis of five major histologic subtypes of renal tumors, including both benign and malignant tumors, on multi-phase CT. Our model is a unified framework that simultaneously identifies lesions and classifies subtypes for diagnosis without manual intervention. We trained and tested the model using CT data from 308 patients who underwent nephrectomy for renal tumors. The model achieved an area under the curve (AUC) of 0.889 and outperformed radiologists for most subtypes. We further validated the model on an independent dataset of 184 patients from The Cancer Imaging Archive (TCIA). The AUC for this dataset was 0.855, and the model performed comparably to the radiologists. These results indicate that our model can achieve similar or better diagnostic performance than radiologists in differentiating a wide range of renal tumors on multi-phase CT.

Development and validation of a deep learning model for detection of breast cancers in mammography from multi-institutional datasets

  • Ueda, D.
  • Yamamoto, A.
  • Onoda, N.
  • Takashima, T.
  • Noda, S.
  • Kashiwagi, S.
  • Morisaki, T.
  • Fukumoto, S.
  • Shiba, M.
  • Morimura, M.
  • Shimono, T.
  • Kageyama, K.
  • Tatekawa, H.
  • Murai, K.
  • Honjo, T.
  • Shimazaki, A.
  • Kabata, D.
  • Miki, Y.
PLoS One 2022 Journal Article, cited 0 times
Website
OBJECTIVES: The objective of this study was to develop and validate a state-of-the-art, deep learning (DL)-based model for detecting breast cancers on mammography. METHODS: Mammograms in a hospital development dataset, a hospital test dataset, and a clinic test dataset were retrospectively collected from January 2006 through December 2017 at Osaka City University Hospital and Medcity21 Clinic. The hospital development dataset and a publicly available digital database for screening mammography (DDSM) dataset were used to train and validate the RetinaNet, a DL-based detection model, with five-fold cross-validation. The model's sensitivity, mean false positive indications per image (mFPI), and partial area under the curve (AUC) up to 1.0 mFPI were assessed externally on both test datasets. RESULTS: The hospital development dataset, hospital test dataset, clinic test dataset, and DDSM development dataset included a total of 3179 images (1448 malignant images), 491 images (225 malignant images), 2821 images (37 malignant images), and 1457 malignant images, respectively. The proposed model detected all cancers with a 0.45-0.47 mFPI and had partial AUCs of 0.93 in both test datasets. CONCLUSIONS: The DL-based model developed for this study was able to detect all breast cancers with a very low mFPI. Our DL-based model achieved the highest performance to date, which might lead to improved diagnosis of breast cancer.

Deriving quantitative information from multiparametric MRI via Radiomics: Evaluation of the robustness and predictive value of radiomic features in the discrimination of low-grade versus high-grade gliomas with machine learning

  • Ubaldi, Leonardo
  • Saponaro, Sara
  • Giuliano, Alessia
  • Talamonti, Cinzia
  • Retico, Alessandra
Phys Med 2023 Journal Article, cited 0 times
Website
PURPOSE: Analysis pipelines based on the computation of radiomic features on medical images are widely used exploration tools across a large variety of image modalities. This study aims to define a robust processing pipeline based on Radiomics and Machine Learning (ML) to analyze multiparametric Magnetic Resonance Imaging (MRI) data to discriminate between high-grade (HGG) and low-grade (LGG) gliomas. METHODS: The dataset consists of 158 multiparametric MRI scans of patients with brain tumor, publicly available on The Cancer Imaging Archive and preprocessed by the BraTS organization committee. Three different types of image intensity normalization algorithms were applied, and 107 features were extracted for each tumor region, setting the intensity values according to different discretization levels. The predictive power of radiomic features in the LGG versus HGG categorization was evaluated using random forest classifiers. The impact of the normalization techniques and of the different image discretization settings was studied in terms of classification performance. A set of MRI-reliable features was defined by selecting the features extracted according to the most appropriate normalization and discretization settings. RESULTS: The results show that using MRI-reliable features improves the performance in glioma grade classification (AUC=0.93+/-0.05) with respect to the use of raw (AUC=0.88+/-0.08) and robust features (AUC=0.83+/-0.08), defined as those not depending on image normalization and intensity discretization. CONCLUSIONS: These results confirm that image normalization and intensity discretization strongly impact the performance of ML classifiers based on radiomic features. Thus, special attention should be paid to the image preprocessing step before typical radiomic and ML analyses are carried out.
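Because the conclusions hinge on intensity discretization settings, a minimal NumPy sketch of the two common schemes (fixed bin number versus fixed bin size) may clarify what is being varied; the bin counts and widths here are illustrative:

    import numpy as np

    def discretize_fixed_bin_number(roi, n_bins=32):
        # Rescale ROI intensities into n_bins equal-width bins (1..n_bins).
        lo, hi = roi.min(), roi.max()
        bins = np.floor((roi - lo) / (hi - lo) * n_bins).astype(int) + 1
        return np.clip(bins, 1, n_bins)

    def discretize_fixed_bin_size(roi, bin_width=25.0):
        # Bin index relative to the ROI minimum, with a fixed bin width.
        return np.floor((roi - roi.min()) / bin_width).astype(int) + 1

Texture features computed from these two discretizations of the same ROI can differ substantially, which is why the study treats the setting as a tunable parameter.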

Lung Tumor Segmentation Using a 3D Densely Connected Convolutional Neural Network

  • Tyagi, Shweta
  • Talbar, Sanjay N.
2022 Book Section, cited 0 times
Website
Lung cancer, one of the most fatal diseases across the globe today, poses a great threat to human beings. Early diagnosis is significant for better treatment planning but is very challenging, and treatment in later stages becomes even more difficult. Owing to the increasing number of cancer cases, radiologists are overburdened. Lung cancer diagnosis depends largely on the accurate detection and segmentation of lung tumor regions. To assist medical experts with a second opinion and to perform the lung tumor segmentation task in lung computed tomography (CT) scan images, the authors propose an approach based on a densely connected convolutional neural network. In this approach, a 3D densely connected convolutional neural network is used in which dense connections between convolutional layers help reuse features across layers and also mitigate the vanishing gradient problem. The proposed network consists of an encoder that captures the features in the CT image and a decoder that reconstructs the desired segmentation masks. The approach is evaluated on the openly available non-small-cell lung cancer dataset (NSCLC-Radiomics), achieving a Dice similarity coefficient of 67.34%. The proposed approach will assist radiologists in marking lung cancer regions more efficiently, and it can be utilized in an automatic computer-aided diagnosis system for lung cancer detection.

Effectiveness of synthetic data generation for capsule endoscopy images

  • Turan, Mehmet
Medicine Science | International Medical Journal 2021 Journal Article, cited 0 times
Website
With advances in digital healthcare technologies, optional therapeutic modules and tasks such as depth estimation, visual localization, active control, automatic navigation, and targeted drug delivery are desirable for the next generation of capsule endoscopy devices to diagnose and treat gastrointestinal diseases. Although deep learning applications promise many advanced functions for capsule endoscopes, some limitations and challenges are encountered during the implementation of data-driven algorithms, with the difficulty of obtaining real endoscopy images and the limited availability of annotated data being the most common problems. In addition, some artefacts in endoscopy images due to lighting conditions, reflections as well as camera view can significantly affect the performance of artificial intelligence methods, making it difficult to develop a robust model. Realistic simulations that generate synthetic data have emerged as a solution to develop data-driven algorithms by addressing these problems. In this study, synthetic data for different organs of the GI tract are generated using a simulation environment to investigate the utility and generalizability of the synthetic data for various medical image analysis tasks using the state-of-the-art Endo-SfMLearner model, and the performance of the models is evaluated with both real and synthetic images. The extensive qualitative and quantitative results demonstrate that the use of synthetic data in training improves the performance of pose and depth estimation and that the model can be accurately generalized to real medical data.

Extraction of Tumor in Brain MRI using Support Vector Machine and Performance Evaluation

  • Tunga, Prakash
Visvesvaraya Technological University Journal of Engineering Sciences and Management 2019 Journal Article, cited 0 times
Website
In this article, we discuss the extraction of tumor in brain MRI (Magnetic Resonance Imaging) images based on the Support Vector Machine (SVM) technique. The work provides computer-assisted demarcation of tumor from brain MRI and aims to be part of a routine that would otherwise be performed manually by specialists. Here we focus on one of the common types of brain tumors, the gliomas, which have proved to be life-threatening in advanced stages. MRI, being a non-invasive procedure, provides very good soft-tissue contrast and so forms a suitable imaging method for processing that leads to brain tumor detection and description. First, we preprocess the given MRI image using the anisotropic diffusion method; then the SVM technique is applied, classifying the image into tumorous and non-tumorous regions. Next, we extract the tumor, referred to as the Region of Interest (ROI), and describe it by calculating its size and position in the image. The remaining part, i.e., the brain region with no tumor present, is the Non-Region of Interest (NROI). Separation of the ROI and NROI parts aids further processing, such as ROI-based compression. We also calculate parameters that reflect the performance of the approach.
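A minimal scikit-learn sketch of the voxel-wise SVM classification step described above; the per-voxel features and labels are random stand-ins, since the paper's exact feature design is not reproduced here:

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Stand-in per-voxel features and labels from annotated training slices.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 4))
    y_train = (X_train[:, 0] > 0.5).astype(int)  # 1 = tumorous voxel

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
    clf.fit(X_train, y_train)

    # Classify every voxel of a new pre-processed slice, then reshape the
    # predictions into an image to obtain the binary tumor mask (ROI).
    slice_shape = (64, 64)
    X_new = rng.normal(size=(64 * 64, 4))
    roi_mask = clf.predict(X_new).reshape(slice_shape)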

Implementing multiphysics models in FEniCS: Viscoelastic flows, poroelasticity, and tumor growth

  • Tunç, Birkan
  • Rodin, Gregory J.
  • Yankeelov, Thomas E.
2023 Journal Article, cited 1 times
Website
The open-source finite element code FEniCS is considered as an alternative to commercial finite element codes for evaluating complex constitutive models of multiphysics phenomena. FEniCS deserves this consideration because it is well-suited for encoding weak forms corresponding to partial differential equations arising from the fundamental balance laws and constitutive equations. It is shown how FEniCS can be adopted for solving boundary-value problems describing viscoelastic flows, poroelasticity, and tumor growth. Those problems span a wide range of models of continuum mechanics, and involve Eulerian, Lagrangian, and combined Eulerian-Lagrangian descriptions. Thus it is demonstrated that FEniCS is a viable computational tool capable of transcending traditional barriers between computational fluid and solid mechanics. Furthermore, it is shown that FEniCS implementations are straightforward, and do not require advanced knowledge of finite element methods and/or coding skills.
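As an illustration of the weak-form encoding the authors highlight (not one of the paper's multiphysics models), a minimal legacy-FEniCS sketch of a Poisson boundary-value problem:

    from fenics import (Constant, DirichletBC, Function, FunctionSpace,
                        TestFunction, TrialFunction, UnitSquareMesh,
                        dot, dx, grad, solve)

    mesh = UnitSquareMesh(32, 32)            # unit square, 32x32 cells
    V = FunctionSpace(mesh, "P", 1)          # piecewise-linear elements
    u, v = TrialFunction(V), TestFunction(V)
    bc = DirichletBC(V, Constant(0.0), "on_boundary")

    a = dot(grad(u), grad(v)) * dx           # bilinear form of the weak problem
    L = Constant(1.0) * v * dx               # linear form (source term f = 1)

    u_h = Function(V)
    solve(a == L, u_h, bc)                   # assemble and solve

The forms `a` and `L` transcribe the weak statement almost symbol for symbol, which is the property that makes FEniCS attractive for the more elaborate viscoelastic, poroelastic, and tumor-growth models discussed in the paper.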

Stability and reproducibility of computed tomography radiomic features extracted from peritumoral regions of lung cancer lesions

  • Tunali, Ilke
  • Hall, Lawrence O
  • Napel, Sandy
  • Cherezov, Dmitry
  • Guvenis, Albert
  • Gillies, Robert J
  • Schabath, Matthew B
Med Phys 2019 Journal Article, cited 0 times
Website
PURPOSE: Recent efforts have demonstrated that radiomic features extracted from the peritumoral region, the area surrounding the tumor parenchyma, have clinical utility in various cancer types. However, like any radiomic features, peritumoral features could be unstable and/or nonreproducible. Hence, the purpose of this study was to assess the stability and reproducibility of computed tomography (CT) radiomic features extracted from the peritumoral regions of lung lesions, where stability was defined as the consistency of a feature across different segmentations, and reproducibility was defined as the consistency of a feature across different image acquisitions. METHODS: Stability was measured utilizing the "moist run" dataset and reproducibility was measured utilizing the Reference Image Database to Evaluate Therapy Response test-retest dataset. Peritumoral radiomic features were extracted from incremental distances of 3-12 mm outside the tumor segmentation. A total of 264 statistical, histogram, and texture radiomic features were assessed from the selected peritumoral regions of interest (ROIs). All features (except wavelet texture features) were extracted using standardized algorithms defined by the Image Biomarker Standardisation Initiative. Stability and reproducibility of features were assessed using the concordance correlation coefficient. The clinical utility of stable and reproducible peritumoral features was tested in three previously published lung cancer datasets using overall survival as the endpoint. RESULTS: Features found to be stable and reproducible, regardless of the peritumoral distances, included statistical, histogram, and a subset of texture features, suggesting that these features are less affected by changes (e.g., size or shape) of the peritumoral region due to different segmentations and image acquisitions. The stability and reproducibility of Laws and wavelet texture features were inconsistent across all peritumoral distances. The analyses also revealed that a subset of features were consistently stable irrespective of the initial parameters (e.g., seed point) for a given segmentation algorithm. No significant differences were found in stability for features extracted from ROIs bounded by a lung parenchyma mask versus ROIs that were not so bounded (i.e., peritumoral regions that extended outside the lung parenchyma). After testing the clinical utility of peritumoral features, stable and reproducible features were shown to be more likely to create repeatable models than unstable and nonreproducible features. CONCLUSIONS: This study identified a subset of stable and reproducible CT radiomic features extracted from the peritumoral region of lung lesions. The stable and reproducible features identified in this study could be applied to a feature selection pipeline for CT radiomic analyses. According to our findings, top-performing features in survival models were more likely to be stable and reproducible; hence, it may be best practice to utilize them to achieve repeatable studies and reduce the chance of overfitting.
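Stability and reproducibility here are quantified with the concordance correlation coefficient; a minimal NumPy sketch of Lin's CCC for a feature measured on paired (e.g., test-retest) scans:

    import numpy as np

    def ccc(x, y):
        # Lin's concordance correlation coefficient between paired vectors,
        # e.g. one radiomic feature computed on test vs. retest scans.
        mx, my = x.mean(), y.mean()
        cov = np.mean((x - mx) * (y - my))
        return 2.0 * cov / (x.var() + y.var() + (mx - my) ** 2)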

Federated Learning Using Variable Local Training for Brain Tumor Segmentation

  • Tuladhar, Anup
  • Tyagi, Lakshay
  • Souza, Raissa
  • Forkert, Nils D.
2022 Book Section, cited 0 times
Website
The potential for deep learning to improve medical image analysis is often stymied by the difficulty of acquiring and collecting sufficient data to train models. One major barrier to data acquisition is the private and sensitive nature of the data in question, as concerns about patient privacy, among others, make data sharing between institutions difficult. Distributed learning avoids the need to share data centrally by training models locally. One approach to distributed learning is federated learning, where models are trained in parallel at local institutions and aggregated into a global model. The 2021 Federated Tumor Segmentation (FeTS) challenge focuses on federated learning for brain tumor segmentation using magnetic resonance imaging scans collected from a real-world federation of collaborating institutions. We developed a federated training algorithm that uses a combination of variable local epochs in each federated round, a decaying learning rate, and an ensemble weight aggregation function. When tested on unseen validation data, our model trained with federated learning achieves very similar performance (average DSC score of 0.674) to a central model trained on pooled data (average DSC score of 0.685). When our federated learning algorithm was evaluated on unseen training and testing data, it achieved similar performances on the FeTS challenge leaderboards 1 and 2 (average DSC scores of 0.623 and 0.608, respectively). This federated learning algorithm offers an approach to training deep learning models without the need to share private and sensitive patient data.
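The aggregation step of such a federated round can be sketched as a weighted average of local model weights; the sketch below shows plain FedAvg-style averaging by sample count and is not the paper's exact ensemble aggregation function:

    import numpy as np

    def aggregate(local_weights, sample_counts):
        # local_weights: one list of per-layer weight arrays per institution.
        # Each institution is weighted by its share of the training samples.
        w = np.asarray(sample_counts, dtype=float)
        w = w / w.sum()
        n_layers = len(local_weights[0])
        return [sum(wi * lw[layer] for wi, lw in zip(w, local_weights))
                for layer in range(n_layers)]

The aggregated per-layer arrays are then broadcast back to the institutions as the new global model for the next round.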

Automatic fissure detection in CT images based on the genetic algorithm

  • Tseng, Lin-Yu
  • Huang, Li-Chin
2010 Conference Proceedings, cited 5 times
Website
Lung cancer is one of the most frequently occurring cancers and has a very low five-year survival rate. Computer-aided diagnosis (CAD) helps reduce the burden on radiologists and improve the accuracy of abnormality detection during CT image interpretation. Owing to the rapid development of scanner technology, the volume of medical imaging data is growing ever larger. Automated segmentation of the target organ region is always required by CAD systems. Although the analysis of lung fissures provides important information for treatment, it is still a challenge to extract fissures automatically based on CT values because the appearance of lung fissures is fuzzy and indefinite. Since the oblique fissures can be visualized more easily than other fissures on chest CT images, they are used to check the exact localization of lesions. In this paper, we propose a fully automatic fissure detection method based on the genetic algorithm to identify the oblique fissures. When the method was tested on 87 slices, the accuracy rates for identifying the oblique fissures in the right lung and the left lung were 97% and 86%, respectively.

RadGenNets: Deep learning-based radiogenomics model for gene mutation prediction in lung cancer

  • Tripathi, Satvik
  • Moyer, Ethan Jacob
  • Augustin, Alisha Isabelle
  • Zavalny, Alex
  • Dheer, Suhani
  • Sukumaran, Rithvik
  • Schwartz, Daniel
  • Gorski, Brandon
  • Dako, Farouk
  • Kim, Edward
Informatics in Medicine Unlocked 2022 Journal Article, cited 0 times
In this paper, we present a methodology for predicting gene mutation in patients with non-small cell lung cancer (NSCLC). There are three major gene mutations that can occur in NSCLC patients: epidermal growth factor receptor (EGFR), Kirsten rat sarcoma virus (KRAS), and anaplastic lymphoma kinase (ALK). We worked with clinical and genomics data for each of the 130 patients as well as their corresponding PET/CT scans. We preprocessed all of the data and then built a novel pipeline, based on a fusion of Convolutional Neural Networks and Dense Neural Networks, to integrate the image and tabular data. In addition, using a search approach, we picked an ensemble of deep learning models, including EfficientNets, SENet, and ResNeXt WSL, to classify the separate gene mutations. Our model achieved a high area under the curve (AUC) score of 94% in predicting gene mutation.
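A minimal PyTorch sketch of the image-plus-tabular fusion idea; the module sizes and toy backbone are hypothetical, not the actual RadGenNets architecture:

    import torch
    import torch.nn as nn

    class FusionNet(nn.Module):
        # Concatenate CNN image features with clinical/genomic tabular
        # features before a shared classification head (hypothetical sizes).
        def __init__(self, backbone, img_dim, tab_dim, n_classes):
            super().__init__()
            self.backbone = backbone              # e.g. an EfficientNet trunk
            self.tab = nn.Sequential(nn.Linear(tab_dim, 32), nn.ReLU())
            self.head = nn.Linear(img_dim + 32, n_classes)

        def forward(self, image, tabular):
            z = torch.cat([self.backbone(image), self.tab(tabular)], dim=1)
            return self.head(z)

    # Toy usage: a backbone that flattens 1x8x8 images into 64 features.
    backbone = nn.Sequential(nn.Flatten())
    model = FusionNet(backbone, img_dim=64, tab_dim=10, n_classes=3)
    logits = model(torch.randn(4, 1, 8, 8), torch.randn(4, 10))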

EfficientNet for Brain-Lesion Classification

  • Trinh, Quoc-Huy
  • Mau, Trong-Hieu Nguyen
  • Zosimov, Radmir
  • Nguyen, Minh-Van
2022 Conference Paper, cited 0 times
Website
With the development of technology, cases of brain disease are increasing, and more treatments have been proposed and have achieved positive results. However, for brain lesions, early diagnosis can improve the chances of successful treatment and can help patients recuperate better. For this reason, brain-lesion analysis is one of the most discussed topics in medical image analysis today. With improvements in network architectures, a variety of methods have been proposed that achieve competitive scores. In this paper, we propose a technique that uses EfficientNet for 3D images, specifically EfficientNet-B0, for the brain-lesion classification task, and achieve a competitive score. Moreover, we also propose a Multiscale-EfficientNet method to classify the slices of the MRI data.

The Efficacy of Shape Radiomics and Deep Features for Glioblastoma Survival Prediction by Deep Learning

  • Trinh, D. L.
  • Kim, S. H.
  • Yang, H. J.
  • Lee, G. S.
2022 Journal Article, cited 0 times
Glioblastoma (also known as glioblastoma multiforme) is one of the most aggressive brain malignancies, accounting for 48% of all primary brain tumors. For that reason, overall survival prediction plays a vital role in diagnosis and treatment planning for glioblastoma patients. The main target of our research is to demonstrate the effectiveness of features extracted from the combination of the whole tumor and the enhancing tumor for overall survival prediction. In the proposed method, two kinds of features, shape radiomics and deep features, are utilized for this task. First, optimal shape radiomics features, consisting of sphericity, maximum 3D diameter, and surface area, are selected using the Cox proportional hazards model. Second, deep features are extracted by a ResNet18 directly from magnetic resonance images. Finally, the combination of selected shape features, deep features, and clinical information is fit to a regression model for overall survival prediction. The proposed method achieves promising results: 57.1% accuracy and a mean squared error of 97,531.8. Furthermore, using the selected features, the result on the mean squared error metric is slightly better than that of competing methods. The experiments were conducted on the Brain Tumor Segmentation Challenge (BraTS) 2018 validation dataset.
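The Cox-based shape-feature selection step can be sketched with the lifelines package; the data frame below is toy stand-in data with hypothetical column names:

    import pandas as pd
    from lifelines import CoxPHFitter

    # Toy stand-in data; in the paper each row would be one patient.
    df = pd.DataFrame({
        "sphericity":      [0.71, 0.55, 0.62, 0.48, 0.66, 0.52],
        "max_3d_diameter": [41.0, 63.5, 52.2, 70.1, 45.9, 68.0],
        "surface_area":    [5200., 9100., 7300., 9800., 6100., 9400.],
        "survival_days":   [610, 240, 410, 150, 520, 200],
        "event":           [1, 1, 0, 1, 0, 1],
    })

    # Small ridge penalty stabilizes the fit on tiny toy data.
    cph = CoxPHFitter(penalizer=0.1)
    cph.fit(df, duration_col="survival_days", event_col="event")
    print(cph.summary[["coef", "p"]])  # keep covariates with significant effects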

Molecular physiology of contrast enhancement in glioblastomas: An analysis of The Cancer Imaging Archive (TCIA)

  • Treiber, Jeffrey M
  • Steed, Tyler C
  • Brandel, Michael G
  • Patel, Kunal S
  • Dale, Anders M
  • Carter, Bob S
  • Chen, Clark C
J Clin Neurosci 2018 Journal Article, cited 2 times
Website
The physiologic processes underlying MRI contrast enhancement in glioblastoma patients remain poorly understood. MRIs of 148 glioblastoma subjects from The Cancer Imaging Archive were segmented using Iterative Probabilistic Voxel Labeling (IPVL). Three aspects of contrast enhancement (CE) were parametrized: the mean intensity of all CE voxels (CEi), the intensity heterogeneity in CE (CEh), and volumetric ratio of CE to necrosis (CEr). Associations between these parameters and patterns of gene expression were analyzed using DAVID functional enrichment analysis. Glioma CpG island methylator phenotype (G-CIMP) glioblastomas were poorly enhancing. Otherwise, no differences in CE parameters were found between proneural, neural, mesenchymal, and classical glioblastomas. High CEi was associated with expression of genes that mediate inflammatory responses. High CEh was associated with increased expression of genes that regulate remodeling of extracellular matrix (ECM) and endothelial permeability. High CEr was associated with increased expression of genes that mediate cellular response to stressful metabolic states, including hypoxia and starvation. Our results indicate that CE in glioblastoma is associated with distinct biological processes involved in inflammatory response and tissue hypoxia. Integrative analysis of these CE parameters may yield meaningful information pertaining to the biologic state of glioblastomas and guide future therapeutic paradigms.

Development of a Prognostic AI-Monitor for Metastatic Urothelial Cancer Patients Receiving Immunotherapy

  • Trebeschi, S.
  • Bodalal, Z.
  • van Dijk, N.
  • Boellaard, T. N.
  • Apfaltrer, P.
  • Tareco Bucho, T. M.
  • Nguyen-Kim, T. D. L.
  • van der Heijden, M. S.
  • Aerts, H. J. W. L.
  • Beets-Tan, R. G. H.
Front Oncol 2021 Journal Article, cited 0 times
Website
Background: Immune checkpoint inhibitor efficacy in advanced cancer patients remains difficult to predict. Imaging is the only technique available that can non-invasively provide whole-body information of a patient's response to treatment. We hypothesize that quantitative whole-body prognostic information can be extracted by leveraging artificial intelligence (AI) for treatment monitoring, superior and complementary to the current response evaluation methods. Methods: To test this, a cohort of 74 stage-IV urothelial cancer patients (37 in the discovery set, 37 in the independent test set, 1087 CTs) who received anti-PD1 or anti-PDL1 was retrospectively collected. We designed an AI system [named prognostic AI-monitor (PAM)] able to identify morphological changes in chest and abdominal CT scans acquired during follow-up, and link them to survival. Results: Our findings showed significant performance of PAM in the independent test set in predicting 1-year overall survival from the date of image acquisition, with an average area under the curve (AUC) of 0.73 (p < 0.001) for abdominal imaging, and 0.67 AUC (p < 0.001) for chest imaging. Subanalysis revealed higher accuracy of abdominal imaging around and within the first 6 months of treatment, reaching an AUC of 0.82 (p < 0.001). Similar accuracy was found for chest imaging 5-11 months after the start of treatment. Univariate comparison with current monitoring methods (laboratory results and radiological assessments) revealed higher or similar prognostic performance. In multivariate analysis, PAM remained significant against all other methods (p < 0.001), suggesting its complementary value in current clinical settings. Conclusions: Our study demonstrates that a comprehensive AI-based method such as PAM can provide prognostic information in advanced urothelial cancer patients receiving immunotherapy, leveraging morphological changes not only in tumor lesions, but also in tumor spread and side-effects. Further investigations should extend beyond anatomical imaging. Prospective studies are warranted to test and validate our findings.

Number of Useful Components in Gaussian Mixture Models for Patch-Based Image Denoising

  • Tran, Dai-Viet
  • Li-Thiao-Té, Sébastien
  • Luong, Marie
  • Le-Tien, Thuong
  • Dibos, Françoise
2018 Conference Proceedings, cited 0 times
Website

3D/2D model-to-image registration by imitation learning for cardiac procedures

  • Toth, Daniel
  • Miao, Shun
  • Kurzendorfer, Tanja
  • Rinaldi, Christopher A
  • Liao, Rui
  • Mansi, Tommaso
  • Rhode, Kawal
  • Mountney, Peter
International Journal of Computer Assisted Radiology and Surgery 2018 Journal Article, cited 1 times
Website

End-to-End Non-Small-Cell Lung Cancer Prognostication Using Deep Learning Applied to Pretreatment Computed Tomography

  • Torres, Felipe Soares
  • Akbar, Shazia
  • Raman, Srinivas
  • Yasufuku, Kazuhiro
  • Schmidt, Carola
  • Hosny, Ahmed
  • Baldauf-Lenschen, Felix
  • Leighl, Natasha B
JCO Clin Cancer Inform 2021 Journal Article, cited 0 times
Website
PURPOSE: Clinical TNM staging is a key prognostic factor for patients with lung cancer and is used to inform treatment and monitoring. Computed tomography (CT) plays a central role in defining the stage of disease. Deep learning applied to pretreatment CTs may offer additional, individualized prognostic information to facilitate more precise mortality risk prediction and stratification. METHODS: We developed a fully automated imaging-based prognostication technique (IPRO) using deep learning to predict 1-year, 2-year, and 5-year mortality from pretreatment CTs of patients with stage I-IV lung cancer. Using six publicly available data sets from The Cancer Imaging Archive, we performed a retrospective five-fold cross-validation using pretreatment CTs of 1,689 patients, of whom 1,110 were diagnosed with non-small-cell lung cancer and had available TNM staging information. We compared the association of IPRO and TNM staging with patients' survival status and assessed an Ensemble risk score that combines IPRO and TNM staging. Finally, we evaluated IPRO's ability to stratify patients within TNM stages using hazard ratios (HRs) and Kaplan-Meier curves. RESULTS: IPRO showed similar prognostic power (concordance index [C-index] 1-year: 0.72, 2-year: 0.70, 5-year: 0.68) compared with that of TNM staging (C-index 1-year: 0.71, 2-year: 0.71, 5-year: 0.70) in predicting 1-year, 2-year, and 5-year mortality. The Ensemble risk score yielded superior performance across all time points (C-index 1-year: 0.77, 2-year: 0.77, 5-year: 0.76). IPRO stratified patients within TNM stages, discriminating between highest- and lowest-risk quintiles in stages I (HR: 8.60), II (HR: 5.03), III (HR: 3.18), and IV (HR: 1.91). CONCLUSION: Deep learning applied to pretreatment CT combined with TNM staging enhances prognostication and risk stratification in patients with lung cancer.
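The concordance index reported above can be computed over all comparable patient pairs; a minimal NumPy sketch of Harrell's C for right-censored data:

    import numpy as np

    def c_index(risk, time, event):
        # Harrell's C: fraction of comparable pairs in which the patient who
        # dies earlier was assigned the higher predicted risk.
        concordant, comparable = 0.0, 0
        n = len(time)
        for i in range(n):
            for j in range(n):
                if event[i] and time[i] < time[j]:   # pair is comparable
                    comparable += 1
                    if risk[i] > risk[j]:
                        concordant += 1.0
                    elif risk[i] == risk[j]:
                        concordant += 0.5
        return concordant / comparable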

Open-source curation of a pancreatic ductal adenocarcinoma gene expression analysis platform (pdacR) supports a two-subtype model

  • Torre-Healy, L. A.
  • Kawalerski, R. R.
  • Oh, K.
  • Chrastecka, L.
  • Peng, X. L.
  • Aguirre, A. J.
  • Rashid, N. U.
  • Yeh, J. J.
  • Moffitt, R. A.
Commun Biol 2023 Journal Article, cited 0 times
Website
Pancreatic ductal adenocarcinoma (PDAC) is an aggressive disease for which potent therapies have limited efficacy. Several studies have described the transcriptomic landscape of PDAC tumors to provide insight into potentially actionable gene expression signatures to improve patient outcomes. Despite centralization efforts from multiple organizations and increased transparency requirements from funding agencies and publishers, analysis of public PDAC data remains difficult. Bioinformatic pitfalls litter public transcriptomic data, such as subtle inclusion of low-purity and non-adenocarcinoma cases. These pitfalls can introduce non-specificity to gene signatures without appropriate data curation, which can negatively impact findings. To reduce barriers to analysis, we have created pdacR (http://pdacR.bmi.stonybrook.edu, https://github.com/rmoffitt/pdacR), an open-source software package and web-tool with annotated datasets from landmark studies and an interface for user-friendly analysis in clustering, differential expression, survival, and dimensionality reduction. Using this tool, we present a multi-dataset analysis of PDAC transcriptomics that confirms the basal-like/classical model over alternatives.

Domain Transform Network for Photoacoustic Tomography from Limited-view and Sparsely Sampled Data

  • Tong, Tong
  • Huang, Wenhui
  • Wang, Kun
  • He, Zicong
  • Yin, Lin
  • Yang, Xin
  • Zhang, Shuixing
  • Tian, Jie
Photoacoustics 2020 Journal Article, cited 7 times
Website
Medical image reconstruction methods based on deep learning have recently demonstrated powerful performance in photoacoustic tomography (PAT) from limited-view and sparse data. However, because most of these methods must utilize conventional linear reconstruction methods to implement signal-to-image transformations, their performance is restricted. In this paper, we propose a novel deep learning reconstruction approach that integrates appropriate data pre-processing and training strategies. The Feature Projection Network (FPnet) presented herein is designed to learn this signal-to-image transformation through data-driven learning rather than through direct use of linear reconstruction. To further improve reconstruction results, our method integrates an image post-processing network (U-net). Experiments show that the proposed method can achieve high reconstruction quality from limited-view data with sparse measurements. When employing GPU acceleration, this method can achieve a reconstruction speed of 15 frames per second.

On-cloud decision-support system for non-small cell lung cancer histology characterization from thorax computed tomography scans

  • Tomassini, S.
  • Falcionelli, N.
  • Bruschi, G.
  • Sbrollini, A.
  • Marini, N.
  • Sernani, P.
  • Morettini, M.
  • Muller, H.
  • Dragoni, A. F.
  • Burattini, L.
Comput Med Imaging Graph 2023 Journal Article, cited 0 times
Website
Non-Small Cell Lung Cancer (NSCLC) accounts for about 85% of all lung cancers. Developing non-invasive techniques for NSCLC histology characterization may not only help clinicians to make targeted therapeutic treatments but also prevent subjects from undergoing lung biopsy, which is challenging and could lead to clinical implications. The motivation behind the study presented here is to develop an advanced on-cloud decision-support system, named LUCY, for non-small cell LUng Cancer histologY characterization directly from thorax Computed Tomography (CT) scans. This aim was pursued by selecting thorax CT scans of 182 LUng ADenocarcinoma (LUAD) and 186 LUng Squamous Cell carcinoma (LUSC) subjects from four openly accessible data collections (NSCLC-Radiomics, NSCLC-Radiogenomics, NSCLC-Radiomics-Genomics and TCGA-LUAD), in addition to the implementation and comparison of two end-to-end neural networks (whose core layer is a convolutional long short-term memory layer), the performance evaluation on the test dataset (NSCLC-Radiomics-Genomics) from a subject-level perspective in relation to NSCLC histological subtype location and grade, and the dynamic visual interpretation of the achieved results by producing and analyzing one heatmap video for each scan. LUCY reached test Area Under the receiver operating characteristic Curve (AUC) values above 77% in all NSCLC histological subtype location and grade groups, and a best AUC value of 97% on the entire dataset reserved for testing, proving high generalizability to heterogeneous data and robustness. Thus, LUCY is a clinically-useful decision-support system able to timely, non-invasively and reliably provide visually-understandable predictions on LUAD and LUSC subjects in relation to clinically-relevant information.

Detection of lung cancer on chest CT images using minimum redundancy maximum relevance feature selection method with convolutional neural networks

  • Toğaçar, Mesut
  • Ergen, Burhan
  • Cömert, Zafer
Biocybernetics and Biomedical Engineering 2019 Journal Article, cited 0 times
Lung cancer is a disease caused by the involuntary increase of cells in the lung tissue. Early detection of cancerous cells is of vital importance, as the lungs provide oxygen to the human body and excrete carbon dioxide produced by vital activities. In this study, the detection of lung cancers is realized using the LeNet, AlexNet and VGG-16 deep learning models. The experiments were carried out on an open dataset composed of Computed Tomography (CT) images. In the experiment, convolutional neural networks (CNNs) were used for feature extraction and classification purposes. In order to increase the success rate of the classification, image augmentation techniques, such as cutting, zooming, horizontal turning and filling, were applied to the dataset during the training of the models. Because of the outstanding success of the AlexNet model, the features obtained from the last fully-connected layer of the model were separately applied as the input to linear regression (LR), linear discriminant analysis (LDA), decision tree (DT), support vector machine (SVM), k-nearest neighbor (kNN) and softmax classifiers. A combination of the AlexNet model and the kNN classifier achieved the most efficient classification accuracy of 98.74%. Then, the minimum redundancy maximum relevance (mRMR) feature selection method was applied to the deep feature set to choose the most efficient features. Consequently, a success rate of 99.51% was achieved by reclassifying the dataset with the selected features and the kNN model. The proposed model is a consistent diagnosis model for lung cancer detection using chest CT images.
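A minimal sketch of greedy mRMR-style selection over a deep feature set, using mutual information for relevance and absolute correlation as a redundancy proxy (an illustrative variant, not necessarily the paper's exact formulation):

    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    def mrmr(X, y, k=10):
        # Greedily add the feature maximizing relevance minus the mean
        # redundancy with the features already selected.
        relevance = mutual_info_classif(X, y, random_state=0)
        selected = []
        remaining = list(range(X.shape[1]))
        for _ in range(k):
            scores = []
            for f in remaining:
                if selected:
                    red = np.mean([abs(np.corrcoef(X[:, f], X[:, s])[0, 1])
                                   for s in selected])
                else:
                    red = 0.0
                scores.append(relevance[f] - red)
            best = remaining[int(np.argmax(scores))]
            selected.append(best)
            remaining.remove(best)
        return selected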

Reliability of tumor segmentation in glioblastoma: impact on the robustness of MRI‐radiomic features

  • Tixier, Florent
  • Um, Hyemin
  • Young, Robert J
  • Veeraraghavan, Harini
Med Phys 2019 Journal Article, cited 0 times
Website
Purpose The use of radiomic features as biomarkers of treatment response and outcome or as correlates to genomic variations requires that the computed features are robust and reproducible. Segmentation, a crucial step in radiomic analysis, is a major source of variability in the computed radiomic features. Therefore, we studied the impact of tumor segmentation variability on the robustness of MRI radiomic features. Method Fluid‐attenuated inversion recovery (FLAIR) and contrast‐enhanced T1‐weighted (T1WICE) MRI of 90 patients diagnosed with glioblastoma were segmented using a semi‐automatic algorithm and an interactive segmentation with two different raters. We analyzed the robustness of 108 radiomic features from 5 categories (intensity histogram, gray‐level co‐occurrence matrix, gray‐level size‐zone matrix (GLSZM), edge maps and shape) using intra‐class correlation coefficient (ICC) and Bland and Altman analysis. Results Our results show that both segmentation methods are reliable with ICC ≥ 0.96 and standard deviation (SD) of mean differences between the two raters (SDdiffs) ≤ 30%. Features computed from the histogram and co‐occurrence matrices were found to be the most robust (ICC ≥ 0.8 and SDdiffs ≤ 30% for most features in these groups). Features from GLSZM were shown to have mixed robustness. Edge, shape and GLSZM features were the most impacted by the choice of segmentation method with the interactive method resulting in more robust features than the semi‐automatic method. Finally, features computed from T1WICE and FLAIR images were found to have similar robustness when computed with the interactive segmentation method. Conclusion Semi‐automatic and interactive segmentation methods using two raters are both reliable. The interactive method produced more robust features than the semi‐automatic method. We also found that the robustness of radiomic features varied by categories. Therefore, this study could help motivate segmentation methods and feature selection in MRI radiomic studies.
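The robustness criterion used here is the ICC(1,1); a minimal NumPy sketch of the one-way random-effects estimate from a subjects-by-raters (or subjects-by-segmentations) matrix:

    import numpy as np

    def icc_1_1(ratings):
        # ratings: (n_subjects, n_raters) matrix of a single feature.
        n, k = ratings.shape
        grand = ratings.mean()
        subj_means = ratings.mean(axis=1)
        ms_between = k * np.sum((subj_means - grand) ** 2) / (n - 1)
        ms_within = np.sum((ratings - subj_means[:, None]) ** 2) / (n * (k - 1))
        return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)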

DCE-MRI based Breast Intratumor Heterogeneity Analysis via Dual Attention Deep Clustering Network and its Application in Molecular Typing

  • Lv, Tianxu
  • Pan, Xiang
  • Li, Lihua
2020 Conference Paper, cited 0 times
Website
More attention has been paid to the precision and personalized treatment of breast cancer, a primary cancer threatening women's lives. Studying breast intratumor heterogeneity is momentous for the diagnosis, analysis, and therapy of tumors. In this paper, we propose a DCE-MRI dynamic-mode-based, self-supervised dual attention deep clustering network (DADCN) to achieve precise individual segmentation of the breast intratumor heterogeneity region. The specific representations learned by a graph attention network are combined with the deep abstract features extracted by a deep convolutional neural network, and the structural information of the voxels in the breast tumor is mined by propagation on the graph. The model is self-supervised by a dual relative loss and a residual loss, and the clustering graph is measured by a graph cut loss. We also employ Pearson, Spearman, and Kendall analyses to evaluate the degree of correlation between clustering results and intratumor heterogeneity as represented by molecular typing. We ultimately show that the degree of intratumor heterogeneity can be determined automatically via segmentation of the heterogeneity region, accomplishing noninvasive, individual molecular-typing prediction of breast cancer. The number of clusters in the breast intratumor heterogeneity region is an independent biomarker for the diagnosis of benign and malignant tumors and the prediction of basal-like molecular typing.

Axial Attention Convolutional Neural Network for Brain Tumor Segmentation with Multi-Modality MRI Scans

  • Tian, Weiwei
  • Li, Dengwang
  • Lv, Mengyu
  • Huang, Pu
Brain Sciences 2023 Journal Article, cited 0 times
Website
Accurately identifying tumors from MRI scans is of the utmost importance for clinical diagnostics and for planning brain tumor treatment. However, manual segmentation is a challenging and time-consuming process in practice and exhibits a high degree of inter-rater variability. Therefore, an axial attention brain tumor segmentation network is established in this paper, automatically segmenting tumor subregions from multi-modality MRIs. The axial attention mechanism is employed to capture richer semantic information, making it easier for the model to provide local-global contextual information by incorporating local and global feature representations while simplifying the computational complexity. A deep supervision mechanism is employed to avoid vanishing gradients and guide the AABTS-Net to generate better feature representations, and a hybrid loss handles the class imbalance of the dataset. Furthermore, we conduct comprehensive experiments on the BraTS 2019 and 2020 datasets. The proposed AABTS-Net shows greater robustness and accuracy, signifying that the model can be employed in clinical practice and provides a new avenue for medical image segmentation systems.

The Immune Landscape of Cancer

  • Thorsson, Vésteinn
  • Gibbs, David L.
  • Brown, Scott D.
  • Wolf, Denise
  • Bortone, Dante S.
  • Ou Yang, Tai-Hsien
  • Porta-Pardo, Eduard
  • Gao, Galen F.
  • Plaisier, Christopher L.
  • Eddy, James A.
  • Ziv, Elad
  • Culhane, Aedin C.
  • Paull, Evan O.
  • Sivakumar, I.K. Ashok
  • Gentles, Andrew J.
  • Malhotra, Raunaq
  • Farshidfar, Farshad
  • Colaprico, Antonio
  • Parker, Joel S.
  • Mose, Lisle E.
  • Vo, Nam Sy
  • Liu, Jianfang
  • Liu, Yuexin
  • Rader, Janet
  • Dhankani, Varsha
  • Reynolds, Sheila M.
  • Bowlby, Reanne
  • Califano, Andrea
  • Cherniack, Andrew D.
  • Anastassiou, Dimitris
  • Bedognetti, Davide
  • Rao, Arvind
  • Chen, Ken
  • Krasnitz, Alexander
  • Hu, Hai
  • Malta, Tathiane M.
  • Noushmehr, Houtan
  • Pedamallu, Chandra Sekhar
  • Bullman, Susan
  • Ojesina, Akinyemi I.
  • Lamb, Andrew
  • Zhou, Wanding
  • Shen, Hui
  • Choueiri, Toni K.
  • Weinstein, John N.
  • Guinney, Justin
  • Saltz, Joel
  • Holt, Robert A.
  • Rabkin, Charles E.
  • Caesar-Johnson, Samantha J.
  • Demchok, John A.
  • Felau, Ina
  • Kasapi, Melpomeni
  • Ferguson, Martin L.
  • Hutter, Carolyn M.
  • Sofia, Heidi J.
  • Tarnuzzer, Roy
  • Wang, Zhining
  • Yang, Liming
  • Zenklusen, Jean C.
  • Zhang, Jiashan (Julia)
  • Chudamani, Sudha
  • Liu, Jia
  • Lolla, Laxmi
  • Naresh, Rashi
  • Pihl, Todd
  • Sun, Qiang
  • Wan, Yunhu
  • Wu, Ye
  • Cho, Juok
  • DeFreitas, Timothy
  • Frazer, Scott
  • Gehlenborg, Nils
  • Getz, Gad
  • Heiman, David I.
  • Kim, Jaegil
  • Lawrence, Michael S.
  • Lin, Pei
  • Meier, Sam
  • Noble, Michael S.
  • Saksena, Gordon
  • Voet, Doug
  • Zhang, Hailei
  • Bernard, Brady
  • Chambwe, Nyasha
  • Dhankani, Varsha
  • Knijnenburg, Theo
  • Kramer, Roger
  • Leinonen, Kalle
  • Liu, Yuexin
  • Miller, Michael
  • Reynolds, Sheila
  • Shmulevich, Ilya
  • Thorsson, Vesteinn
  • Zhang, Wei
  • Akbani, Rehan
  • Broom, Bradley M.
  • Hegde, Apurva M.
  • Ju, Zhenlin
  • Kanchi, Rupa S.
  • Korkut, Anil
  • Li, Jun
  • Liang, Han
  • Ling, Shiyun
  • Liu, Wenbin
  • Lu, Yiling
  • Mills, Gordon B.
  • Ng, Kwok-Shing
  • Rao, Arvind
  • Ryan, Michael
  • Wang, Jing
  • Weinstein, John N.
  • Zhang, Jiexin
  • Abeshouse, Adam
  • Armenia, Joshua
  • Chakravarty, Debyani
  • Chatila, Walid K.
  • de Bruijn, Ino
  • Gao, Jianjiong
  • Gross, Benjamin E.
  • Heins, Zachary J.
  • Kundra, Ritika
  • La, Konnor
  • Ladanyi, Marc
  • Luna, Augustin
  • Nissan, Moriah G.
  • Ochoa, Angelica
  • Phillips, Sarah M.
  • Reznik, Ed
  • Sanchez-Vega, Francisco
  • Sander, Chris
  • Schultz, Nikolaus
  • Sheridan, Robert
  • Sumer, S. Onur
  • Sun, Yichao
  • Taylor, Barry S.
  • Wang, Jioajiao
  • Zhang, Hongxin
  • Anur, Pavana
  • Peto, Myron
  • Spellman, Paul
  • Benz, Christopher
  • Stuart, Joshua M.
  • Wong, Christopher K.
  • Yau, Christina
  • Hayes, D. Neil
  • Parker, Joel S.
  • Wilkerson, Matthew D.
  • Ally, Adrian
  • Balasundaram, Miruna
  • Bowlby, Reanne
  • Brooks, Denise
  • Carlsen, Rebecca
  • Chuah, Eric
  • Dhalla, Noreen
  • Holt, Robert
  • Jones, Steven J.M.
  • Kasaian, Katayoon
  • Lee, Darlene
  • Ma, Yussanne
  • Marra, Marco A.
  • Mayo, Michael
  • Moore, Richard A.
  • Mungall, Andrew J.
  • Mungall, Karen
  • Robertson, A. Gordon
  • Sadeghi, Sara
  • Schein, Jacqueline E.
  • Sipahimalani, Payal
  • Tam, Angela
  • Thiessen, Nina
  • Tse, Kane
  • Wong, Tina
  • Berger, Ashton C.
  • Beroukhim, Rameen
  • Cherniack, Andrew D.
  • Cibulskis, Carrie
  • Gabriel, Stacey B.
  • Gao, Galen F.
  • Ha, Gavin
  • Meyerson, Matthew
  • Schumacher, Steven E.
  • Shih, Juliann
  • Kucherlapati, Melanie H.
  • Kucherlapati, Raju S.
  • Baylin, Stephen
  • Cope, Leslie
  • Danilova, Ludmila
  • Bootwalla, Moiz S.
  • Lai, Phillip H.
  • Maglinte, Dennis T.
  • Van Den Berg, David J.
  • Weisenberger, Daniel J.
  • Auman, J. Todd
  • Balu, Saianand
  • Bodenheimer, Tom
  • Fan, Cheng
  • Hoadley, Katherine A.
  • Hoyle, Alan P.
  • Jefferys, Stuart R.
  • Jones, Corbin D.
  • Meng, Shaowu
  • Mieczkowski, Piotr A.
  • Mose, Lisle E.
  • Perou, Amy H.
  • Perou, Charles M.
  • Roach, Jeffrey
  • Shi, Yan
  • Simons, Janae V.
  • Skelly, Tara
  • Soloway, Matthew G.
  • Tan, Donghui
  • Veluvolu, Umadevi
  • Fan, Huihui
  • Hinoue, Toshinori
  • Laird, Peter W.
  • Shen, Hui
  • Zhou, Wanding
  • Bellair, Michelle
  • Chang, Kyle
  • Covington, Kyle
  • Creighton, Chad J.
  • Dinh, Huyen
  • Doddapaneni, HarshaVardhan
  • Donehower, Lawrence A.
  • Drummond, Jennifer
  • Gibbs, Richard A.
  • Glenn, Robert
  • Hale, Walker
  • Han, Yi
  • Hu, Jianhong
  • Korchina, Viktoriya
  • Lee, Sandra
  • Lewis, Lora
  • Li, Wei
  • Liu, Xiuping
  • Morgan, Margaret
  • Morton, Donna
  • Muzny, Donna
  • Santibanez, Jireh
  • Sheth, Margi
  • Shinbrot, Eve
  • Wang, Linghua
  • Wang, Min
  • Wheeler, David A.
  • Xi, Liu
  • Zhao, Fengmei
  • Hess, Julian
  • Appelbaum, Elizabeth L.
  • Bailey, Matthew
  • Cordes, Matthew G.
  • Ding, Li
  • Fronick, Catrina C.
  • Fulton, Lucinda A.
  • Fulton, Robert S.
  • Kandoth, Cyriac
  • Mardis, Elaine R.
  • McLellan, Michael D.
  • Miller, Christopher A.
  • Schmidt, Heather K.
  • Wilson, Richard K.
  • Crain, Daniel
  • Curley, Erin
  • Gardner, Johanna
  • Lau, Kevin
  • Mallery, David
  • Morris, Scott
  • Paulauskis, Joseph
  • Penny, Robert
  • Shelton, Candace
  • Shelton, Troy
  • Sherman, Mark
  • Thompson, Eric
  • Yena, Peggy
  • Bowen, Jay
  • Gastier-Foster, Julie M.
  • Gerken, Mark
  • Leraas, Kristen M.
  • Lichtenberg, Tara M.
  • Ramirez, Nilsa C.
  • Wise, Lisa
  • Zmuda, Erik
  • Corcoran, Niall
  • Costello, Tony
  • Hovens, Christopher
  • Carvalho, Andre L.
  • de Carvalho, Ana C.
  • Fregnani, José H.
  • Longatto-Filho, Adhemar
  • Reis, Rui M.
  • Scapulatempo-Neto, Cristovam
  • Silveira, Henrique C.S.
  • Vidal, Daniel O.
  • Burnette, Andrew
  • Eschbacher, Jennifer
  • Hermes, Beth
  • Noss, Ardene
  • Singh, Rosy
  • Anderson, Matthew L.
  • Castro, Patricia D.
  • Ittmann, Michael
  • Huntsman, David
  • Kohl, Bernard
  • Le, Xuan
  • Thorp, Richard
  • Andry, Chris
  • Duffy, Elizabeth R.
  • Lyadov, Vladimir
  • Paklina, Oxana
  • Setdikova, Galiya
  • Shabunin, Alexey
  • Tavobilov, Mikhail
  • McPherson, Christopher
  • Warnick, Ronald
  • Berkowitz, Ross
  • Cramer, Daniel
  • Feltmate, Colleen
  • Horowitz, Neil
  • Kibel, Adam
  • Muto, Michael
  • Raut, Chandrajit P.
  • Malykh, Andrei
  • Barnholtz-Sloan, Jill S.
  • Barrett, Wendi
  • Devine, Karen
  • Fulop, Jordonna
  • Ostrom, Quinn T.
  • Shimmel, Kristen
  • Wolinsky, Yingli
  • Sloan, Andrew E.
  • De Rose, Agostino
  • Giuliante, Felice
  • Goodman, Marc
  • Karlan, Beth Y.
  • Hagedorn, Curt H.
  • Eckman, John
  • Harr, Jodi
  • Myers, Jerome
  • Tucker, Kelinda
  • Zach, Leigh Anne
  • Deyarmin, Brenda
  • Hu, Hai
  • Kvecher, Leonid
  • Larson, Caroline
  • Mural, Richard J.
  • Somiari, Stella
  • Vicha, Ales
  • Zelinka, Tomas
  • Bennett, Joseph
  • Iacocca, Mary
  • Rabeno, Brenda
  • Swanson, Patricia
  • Latour, Mathieu
  • Lacombe, Louis
  • Têtu, Bernard
  • Bergeron, Alain
  • McGraw, Mary
  • Staugaitis, Susan M.
  • Chabot, John
  • Hibshoosh, Hanina
  • Sepulveda, Antonia
  • Su, Tao
  • Wang, Timothy
  • Potapova, Olga
  • Voronina, Olga
  • Desjardins, Laurence
  • Mariani, Odette
  • Roman-Roman, Sergio
  • Sastre, Xavier
  • Stern, Marc-Henri
  • Cheng, Feixiong
  • Signoretti, Sabina
  • Berchuck, Andrew
  • Bigner, Darell
  • Lipp, Eric
  • Marks, Jeffrey
  • McCall, Shannon
  • McLendon, Roger
  • Secord, Angeles
  • Sharp, Alexis
  • Behera, Madhusmita
  • Brat, Daniel J.
  • Chen, Amy
  • Delman, Keith
  • Force, Seth
  • Khuri, Fadlo
  • Magliocca, Kelly
  • Maithel, Shishir
  • Olson, Jeffrey J.
  • Owonikoko, Taofeek
  • Pickens, Alan
  • Ramalingam, Suresh
  • Shin, Dong M.
  • Sica, Gabriel
  • Van Meir, Erwin G.
  • Zhang, Hongzheng
  • Eijckenboom, Wil
  • Gillis, Ad
  • Korpershoek, Esther
  • Looijenga, Leendert
  • Oosterhuis, Wolter
  • Stoop, Hans
  • van Kessel, Kim E.
  • Zwarthoff, Ellen C.
  • Calatozzolo, Chiara
  • Cuppini, Lucia
  • Cuzzubbo, Stefania
  • DiMeco, Francesco
  • Finocchiaro, Gaetano
  • Mattei, Luca
  • Perin, Alessandro
  • Pollo, Bianca
  • Chen, Chu
  • Houck, John
  • Lohavanichbutr, Pawadee
  • Hartmann, Arndt
  • Stoehr, Christine
  • Stoehr, Robert
  • Taubert, Helge
  • Wach, Sven
  • Wullich, Bernd
  • Kycler, Witold
  • Murawa, Dawid
  • Wiznerowicz, Maciej
  • Chung, Ki
  • Edenfield, W. Jeffrey
  • Martin, Julie
  • Baudin, Eric
  • Bubley, Glenn
  • Bueno, Raphael
  • De Rienzo, Assunta
  • Richards, William G.
  • Kalkanis, Steven
  • Mikkelsen, Tom
  • Noushmehr, Houtan
  • Scarpace, Lisa
  • Girard, Nicolas
  • Aymerich, Marta
  • Campo, Elias
  • Giné, Eva
  • Guillermo, Armando López
  • Van Bang, Nguyen
  • Hanh, Phan Thi
  • Phu, Bui Duc
  • Tang, Yufang
  • Colman, Howard
  • Evason, Kimberley
  • Dottino, Peter R.
  • Martignetti, John A.
  • Gabra, Hani
  • Juhl, Hartmut
  • Akeredolu, Teniola
  • Stepa, Serghei
  • Hoon, Dave
  • Ahn, Keunsoo
  • Kang, Koo Jeong
  • Beuschlein, Felix
  • Breggia, Anne
  • Birrer, Michael
  • Bell, Debra
  • Borad, Mitesh
  • Bryce, Alan H.
  • Castle, Erik
  • Chandan, Vishal
  • Cheville, John
  • Copland, John A.
  • Farnell, Michael
  • Flotte, Thomas
  • Giama, Nasra
  • Ho, Thai
  • Kendrick, Michael
  • Kocher, Jean-Pierre
  • Kopp, Karla
  • Moser, Catherine
  • Nagorney, David
  • O’Brien, Daniel
  • O’Neill, Brian Patrick
  • Patel, Tushar
  • Petersen, Gloria
  • Que, Florencia
  • Rivera, Michael
  • Roberts, Lewis
  • Smallridge, Robert
  • Smyrk, Thomas
  • Stanton, Melissa
  • Thompson, R. Houston
  • Torbenson, Michael
  • Yang, Ju Dong
  • Zhang, Lizhi
  • Brimo, Fadi
  • Ajani, Jaffer A.
  • Gonzalez, Ana Maria Angulo
  • Behrens, Carmen
  • Bondaruk, Jolanta
  • Broaddus, Russell
  • Czerniak, Bogdan
  • Esmaeli, Bita
  • Fujimoto, Junya
  • Gershenwald, Jeffrey
  • Guo, Charles
  • Lazar, Alexander J.
  • Logothetis, Christopher
  • Meric-Bernstam, Funda
  • Moran, Cesar
  • Ramondetta, Lois
  • Rice, David
  • Sood, Anil
  • Tamboli, Pheroze
  • Thompson, Timothy
  • Troncoso, Patricia
  • Tsao, Anne
  • Wistuba, Ignacio
  • Carter, Candace
  • Haydu, Lauren
  • Hersey, Peter
  • Jakrot, Valerie
  • Kakavand, Hojabr
  • Kefford, Richard
  • Lee, Kenneth
  • Long, Georgina
  • Mann, Graham
  • Quinn, Michael
  • Saw, Robyn
  • Scolyer, Richard
  • Shannon, Kerwin
  • Spillane, Andrew
  • Stretch, Jonathan
  • Synott, Maria
  • Thompson, John
  • Wilmott, James
  • Al-Ahmadie, Hikmat
  • Chan, Timothy A.
  • Ghossein, Ronald
  • Gopalan, Anuradha
  • Levine, Douglas A.
  • Reuter, Victor
  • Singer, Samuel
  • Singh, Bhuvanesh
  • Tien, Nguyen Viet
  • Broudy, Thomas
  • Mirsaidi, Cyrus
  • Nair, Praveen
  • Drwiega, Paul
  • Miller, Judy
  • Smith, Jennifer
  • Zaren, Howard
  • Park, Joong-Won
  • Hung, Nguyen Phi
  • Kebebew, Electron
  • Linehan, W. Marston
  • Metwalli, Adam R.
  • Pacak, Karel
  • Pinto, Peter A.
  • Schiffman, Mark
  • Schmidt, Laura S.
  • Vocke, Cathy D.
  • Wentzensen, Nicolas
  • Worrell, Robert
  • Yang, Hannah
  • Moncrieff, Marc
  • Goparaju, Chandra
  • Melamed, Jonathan
  • Pass, Harvey
  • Botnariuc, Natalia
  • Caraman, Irina
  • Cernat, Mircea
  • Chemencedji, Inga
  • Clipca, Adrian
  • Doruc, Serghei
  • Gorincioi, Ghenadie
  • Mura, Sergiu
  • Pirtac, Maria
  • Stancul, Irina
  • Tcaciuc, Diana
  • Albert, Monique
  • Alexopoulou, Iakovina
  • Arnaout, Angel
  • Bartlett, John
  • Engel, Jay
  • Gilbert, Sebastien
  • Parfitt, Jeremy
  • Sekhon, Harman
  • Thomas, George
  • Rassl, Doris M.
  • Rintoul, Robert C.
  • Bifulco, Carlo
  • Tamakawa, Raina
  • Urba, Walter
  • Hayward, Nicholas
  • Timmers, Henri
  • Antenucci, Anna
  • Facciolo, Francesco
  • Grazi, Gianluca
  • Marino, Mirella
  • Merola, Roberta
  • de Krijger, Ronald
  • Gimenez-Roqueplo, Anne-Paule
  • Piché, Alain
  • Chevalier, Simone
  • McKercher, Ginette
  • Birsoy, Kivanc
  • Barnett, Gene
  • Brewer, Cathy
  • Farver, Carol
  • Naska, Theresa
  • Pennell, Nathan A.
  • Raymond, Daniel
  • Schilero, Cathy
  • Smolenski, Kathy
  • Williams, Felicia
  • Morrison, Carl
  • Borgia, Jeffrey A.
  • Liptay, Michael J.
  • Pool, Mark
  • Seder, Christopher W.
  • Junker, Kerstin
  • Omberg, Larsson
  • Dinkin, Mikhail
  • Manikhas, George
  • Alvaro, Domenico
  • Bragazzi, Maria Consiglia
  • Cardinale, Vincenzo
  • Carpino, Guido
  • Gaudio, Eugenio
  • Chesla, David
  • Cottingham, Sandra
  • Dubina, Michael
  • Moiseenko, Fedor
  • Dhanasekaran, Renumathy
  • Becker, Karl-Friedrich
  • Janssen, Klaus-Peter
  • Slotta-Huspenina, Julia
  • Abdel-Rahman, Mohamed H.
  • Aziz, Dina
  • Bell, Sue
  • Cebulla, Colleen M.
  • Davis, Amy
  • Duell, Rebecca
  • Elder, J. Bradley
  • Hilty, Joe
  • Kumar, Bhavna
  • Lang, James
  • Lehman, Norman L.
  • Mandt, Randy
  • Nguyen, Phuong
  • Pilarski, Robert
  • Rai, Karan
  • Schoenfield, Lynn
  • Senecal, Kelly
  • Wakely, Paul
  • Hansen, Paul
  • Lechan, Ronald
  • Powers, James
  • Tischler, Arthur
  • Grizzle, William E.
  • Sexton, Katherine C.
  • Kastl, Alison
  • Henderson, Joel
  • Porten, Sima
  • Waldmann, Jens
  • Fassnacht, Martin
  • Asa, Sylvia L.
  • Schadendorf, Dirk
  • Couce, Marta
  • Graefen, Markus
  • Huland, Hartwig
  • Sauter, Guido
  • Schlomm, Thorsten
  • Simon, Ronald
  • Tennstedt, Pierre
  • Olabode, Oluwole
  • Nelson, Mark
  • Bathe, Oliver
  • Carroll, Peter R.
  • Chan, June M.
  • Disaia, Philip
  • Glenn, Pat
  • Kelley, Robin K.
  • Landen, Charles N.
  • Phillips, Joanna
  • Prados, Michael
  • Simko, Jeffry
  • Smith-McCune, Karen
  • VandenBerg, Scott
  • Roggin, Kevin
  • Fehrenbach, Ashley
  • Kendler, Ady
  • Sifri, Suzanne
  • Steele, Ruth
  • Jimeno, Antonio
  • Carey, Francis
  • Forgie, Ian
  • Mannelli, Massimo
  • Carney, Michael
  • Hernandez, Brenda
  • Campos, Benito
  • Herold-Mende, Christel
  • Jungk, Christin
  • Unterberg, Andreas
  • von Deimling, Andreas
  • Bossler, Aaron
  • Galbraith, Joseph
  • Jacobus, Laura
  • Knudson, Michael
  • Knutson, Tina
  • Ma, Deqin
  • Milhem, Mohammed
  • Sigmund, Rita
  • Godwin, Andrew K.
  • Madan, Rashna
  • Rosenthal, Howard G.
  • Adebamowo, Clement
  • Adebamowo, Sally N.
  • Boussioutas, Alex
  • Beer, David
  • Giordano, Thomas
  • Mes-Masson, Anne-Marie
  • Saad, Fred
  • Bocklage, Therese
  • Landrum, Lisa
  • Mannel, Robert
  • Moore, Kathleen
  • Moxley, Katherine
  • Postier, Russel
  • Walker, Joan
  • Zuna, Rosemary
  • Feldman, Michael
  • Valdivieso, Federico
  • Dhir, Rajiv
  • Luketich, James
  • Pinero, Edna M. Mora
  • Quintero-Aguilo, Mario
  • Carlotti, Carlos Gilberto, Jr.
  • Dos Santos, Jose Sebastião
  • Kemp, Rafael
  • Sankarankuty, Ajith
  • Tirapelli, Daniela
  • Catto, James
  • Agnew, Kathy
  • Swisher, Elizabeth
  • Creaney, Jenette
  • Robinson, Bruce
  • Shelley, Carl Simon
  • Godwin, Eryn M.
  • Kendall, Sara
  • Shipman, Cassaundra
  • Bradford, Carol
  • Carey, Thomas
  • Haddad, Andrea
  • Moyer, Jeffrey
  • Peterson, Lisa
  • Prince, Mark
  • Rozek, Laura
  • Wolf, Gregory
  • Bowman, Rayleen
  • Fong, Kwun M.
  • Yang, Ian
  • Korst, Robert
  • Rathmell, W. Kimryn
  • Fantacone-Campbell, J. Leigh
  • Hooke, Jeffrey A.
  • Kovatich, Albert J.
  • Shriver, Craig D.
  • DiPersio, John
  • Drake, Bettina
  • Govindan, Ramaswamy
  • Heath, Sharon
  • Ley, Timothy
  • Van Tine, Brian
  • Westervelt, Peter
  • Rubin, Mark A.
  • Lee, Jung Il
  • Aredes, Natália D.
  • Mariamidze, Armaz
  • Lazar, Alexander J.
  • Serody, Jonathan S.
  • Demicco, Elizabeth G.
  • Disis, Mary L.
  • Vincent, Benjamin G.
  • Shmulevich, Ilya
Immunity 2018 Journal Article, cited 84 times
Website
We performed an extensive immunogenomic analysis of more than 10,000 tumors comprising 33 diverse cancer types by utilizing data compiled by TCGA. Across cancer types, we identified six immune subtypes—wound healing, IFN-γ dominant, inflammatory, lymphocyte depleted, immunologically quiet, and TGF-β dominant—characterized by differences in macrophage or lymphocyte signatures, Th1:Th2 cell ratio, extent of intratumoral heterogeneity, aneuploidy, extent of neoantigen load, overall cell proliferation, expression of immunomodulatory genes, and prognosis. Specific driver mutations correlated with lower (CTNNB1, NRAS, or IDH1) or higher (BRAF, TP53, or CASP8) leukocyte levels across all cancers. Multiple control modalities of the intracellular and extracellular networks (transcription, microRNAs, copy number, and epigenetic processes) were involved in tumor-immune cell interactions, both across and within immune subtypes. Our immunogenomics pipeline to characterize these heterogeneous tumors and the resulting data are intended to serve as a resource for future targeted studies to further advance the field.

2Be3-Net: Combining 2D and 3D Convolutional Neural Networks for 3D PET Scans Predictions

  • Thomas, Ronan
  • Schalck, Elsa
  • Fourure, Damien
  • Bonnefoy, Antoine
  • Cervera-Marzal, Inaki
2021 Conference Paper, cited 0 times
Website
Radiomics - high-dimensional features extracted from clinical images - is the main approach used to develop predictive models based on 3D Positron Emission Tomography (PET) scans of patients suffering from cancer. Radiomics extraction relies on an accurate segmentation of the tumoral region, which is a time-consuming task subject to inter-observer variability. On the other hand, data-driven approaches such as deep convolutional neural networks (CNN) struggle to achieve strong performance on PET images due to the absence of large available PET datasets combined with the size of 3D networks. In this paper, we assemble several public datasets to create a PET dataset of 2800 scans and propose a deep learning architecture named “2Be3-Net” associating a 2D feature extractor with a 3D CNN predictor. First, we take advantage of a 2D pre-trained model to extract feature maps out of 2D PET slices. Then we apply a 3D CNN on top of the concatenation of the previously extracted feature maps to compute patient-wise predictions. Experiments suggest that 2Be3-Net has an improved ability to exploit spatial information compared to 2D-only or 3D-only CNN solutions. We also evaluate our network on the prediction of clinical outcomes of head-and-neck cancer. The proposed pipeline outperforms PET radiomics approaches on the prediction of loco-regional recurrences and overall survival. Innovative deep learning architectures combining a pre-trained network with a 3D CNN could therefore be a great alternative to traditional CNN and radiomics approaches while empowering small and medium-sized datasets.
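
As a rough PyTorch sketch of the 2D-encoder-plus-3D-CNN idea described above (not the authors' exact 2Be3-Net), the following assumes a ResNet-18 backbone as the pre-trained 2D feature extractor and a small illustrative 3D head; all sizes are placeholder assumptions.

```python
# Minimal sketch of a 2D-encoder + 3D-CNN hybrid for PET volumes.
# Backbone choice and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

class SliceFeatureExtractor(nn.Module):
    """Apply a 2D CNN slice-by-slice to a 3D volume."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)  # use pre-trained weights in practice
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # drop pool + fc

    def forward(self, volume):                     # volume: (B, 1, D, H, W)
        b, _, d, h, w = volume.shape
        slices = volume.permute(0, 2, 1, 3, 4).reshape(b * d, 1, h, w)
        slices = slices.repeat(1, 3, 1, 1)         # grey -> 3 channels for the backbone
        maps = self.features(slices)               # (B*D, C, h', w')
        c, hp, wp = maps.shape[1:]
        return maps.reshape(b, d, c, hp, wp).permute(0, 2, 1, 3, 4)  # (B, C, D, h', w')

class Head3D(nn.Module):
    """Small 3D CNN over the stacked per-slice feature maps."""
    def __init__(self, in_ch=512, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(64, n_classes))

    def forward(self, x):
        return self.net(x)

model = nn.Sequential(SliceFeatureExtractor(), Head3D())
logits = model(torch.randn(2, 1, 16, 128, 128))    # two toy PET volumes
print(logits.shape)                                # torch.Size([2, 2])
```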

Reproducibility in Radiomics: A Comparison of Feature Extraction Methods and Two Independent Datasets

  • Thomas, Hannah Mary T.
  • Wang, Helen Y. C.
  • Varghese, Amal Joseph
  • Donovan, Ellen M.
  • South, Chris P.
  • Saxby, Helen
  • Nisbet, Andrew
  • Prakash, Vineet
  • Sasidharan, Balu Krishna
  • Pavamani, Simon Pradeep
  • Devadhas, Devakumar
  • Mathew, Manu
  • Isiah, Rajesh Gunasingam
  • Evans, Philip M.
Applied Sciences 2023 Journal Article, cited 0 times
Website
Radiomics involves the extraction of information from medical images that is not visible to the human eye. There is evidence that these features can be used for treatment stratification and outcome prediction. However, there is much discussion about the reproducibility of results between different studies. This paper studies the reproducibility of CT texture features used in radiomics, comparing two feature extraction implementations, namely the MATLAB toolkit and Pyradiomics, when applied to independent datasets of CT scans of patients: (i) the open access RIDER dataset containing a set of repeat CT scans taken 15 min apart for 31 patients (RIDER Scan 1 and Scan 2, respectively) treated for lung cancer; and (ii) the open access HN1 dataset containing 137 patients treated for head and neck cancer. The gross tumor volume (GTV), manually outlined by an experienced observer and available for both datasets, was used. The 43 common radiomics features available in MATLAB and Pyradiomics were calculated using two intensity-level quantization methods, with and without an intensity threshold. Cases were ranked for each feature for all combinations of quantization parameters, and the Spearman’s rank coefficient, rs, was calculated. Reproducibility was defined when a highly correlated feature in the RIDER dataset also correlated highly in the HN1 dataset, and vice versa. A total of 29 out of the 43 reported stable features were found to be highly reproducible between the MATLAB and Pyradiomics implementations, having a consistently high correlation in rank ordering for RIDER Scan 1 and RIDER Scan 2 (rs > 0.8). 18/43 reported features were common in the RIDER and HN1 datasets, suggesting they may be agnostic to disease site. Useful radiomics features should be selected based on reproducibility. This study identified a set of features that meet this requirement and validated the methodology for evaluating reproducibility between datasets.
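
The ranking-based stability check described above is straightforward to reproduce. A minimal sketch, assuming two feature matrices of shape (cases, features) as stand-ins for the two extractions, flags features whose case rankings agree with Spearman's rs > 0.8, the study's threshold:

```python
# Rank-correlation reproducibility check on two hypothetical extractions.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
features_a = rng.normal(size=(31, 43))                      # e.g., MATLAB, 31 cases
features_b = features_a + 0.05 * rng.normal(size=(31, 43))  # e.g., Pyradiomics

stable = [j for j in range(features_a.shape[1])
          if spearmanr(features_a[:, j], features_b[:, j])[0] > 0.8]
print(f"{len(stable)} / {features_a.shape[1]} features reproducible (rs > 0.8)")
```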

Automated detection of glioblastoma tumor in brain magnetic imaging using ANFIS classifier

  • Thirumurugan, P
  • Ramkumar, D
  • Batri, K
  • Sundhara Raja, D
International Journal of Imaging Systems and Technology 2016 Journal Article, cited 3 times
Website
This article proposes a novel and efficient methodology for the detection of glioblastoma tumors in brain MRI images. The proposed method consists of the following stages: preprocessing, non-subsampled contourlet transform (NSCT), feature extraction, and adaptive neuro-fuzzy inference system (ANFIS) classification. A Euclidean direction algorithm is used to remove impulse noise from the brain image during the image acquisition process. NSCT decomposes the denoised brain image into approximation bands and high-frequency bands. The features mean, standard deviation, and energy are computed for the extracted coefficients and given as input to the classifier. The classifier classifies the brain MRI image as normal or as a glioblastoma tumor image based on the feature set. The proposed system achieves 99.8% sensitivity, 99.7% specificity, and 99.8% accuracy with respect to the ground truth images available in the dataset.

Lung cancer classification using exponential mean saturation linear unit activation function in various generative adversarial network models

  • Thirumagal, Egambaram
  • Saruladha, Krishnamurthy
International Journal of Imaging Systems and Technology 2022 Journal Article, cited 0 times
Website
Nowadays, the mortality rate due to lung cancer increases rapidly worldwide, as the disease is often classified only at later stages. Early classification of lung cancer will help patients to take treatment and decrease the death rate. The limited dataset size and the diversity of data samples are the bottlenecks for early classification. In this paper, robust deep learning generative adversarial network (GAN) models are employed to enhance the dataset and to increase classification accuracy. The activation function plays an important feature-learning role in neural networks. Since existing activation functions suffer from various drawbacks such as vanishing gradients, dead neurons, and output offset, this paper proposes a novel activation function, the exponential mean saturation linear unit (EMSLU), which aims to speed up training, reduce network running time, and improve classification accuracy. The experiments were conducted using the vanilla GAN, the Wasserstein generative adversarial network, the Wasserstein generative adversarial network with gradient penalty, the conditional generative adversarial network, and the deep convolutional generative adversarial network. Each GAN was tested with the rectified linear unit, the exponential linear unit, and the proposed EMSLU activation function. The results show that all the GANs with EMSLU yield improved precision, recall, F1-score, and accuracy.

Lung Nodules Detection Using Inverse Surface Adaptive Thresholding (ISAT) and Artificial Neural Network

  • Gunasegaran, Thasarathan
  • Yazid, Haniza
  • Basaruddin, Khairul Salleh
  • Wan Ab Rahman, Wan Irnawati
2022 Conference Paper, cited 0 times
Website
Early detection of lung nodules is important since it increases the probability of survival for lung cancer patients. Conventionally, radiologists manually examine lung Computed Tomography (CT) scan images and determine the possibility of malignant (cancerous) nodules. This process consumes a lot of time since they have to examine each of the CT images and mark the lesions (nodules) manually. In addition, the radiologist may experience fatigue due to the large number of images to be analysed. Therefore, automated detection is proposed to assist the radiologist in detecting the nodules. In this paper, the main novelty is the implementation of image processing methods to segment and classify lung nodules. Several image processing methods are utilized, namely the median filter, histogram adjustment, and Inverse Surface Adaptive Thresholding (ISAT), to segment the nodules in CT scan images. Then, 13 features are extracted and given as input to a Back Propagation Neural Network (BPNN) to classify the image as either benign or malignant. Lung nodules less than 3 mm in size are considered benign (non-cancerous), and nodules larger than 3 mm are considered malignant (cancerous). Based on the results obtained, ISAT segmentation achieved 99.9% accuracy, and the proposed classification methods obtained 90.30% accuracy.
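
A generic version of this segment-then-classify pipeline can be sketched with off-the-shelf tools. In the sketch below, skimage's local adaptive threshold merely stands in for the authors' ISAT step (whose details are not reproduced here), and the five region descriptors stand in for the paper's 13 features:

```python
# Generic segment-then-classify sketch; ISAT itself is the authors' method,
# so an off-the-shelf adaptive threshold is used here as a stand-in.
import numpy as np
from scipy import ndimage
from skimage import exposure, filters, measure
from sklearn.neural_network import MLPClassifier

def candidate_regions(ct_slice):
    smoothed = ndimage.median_filter(ct_slice, size=3)       # noise removal
    adjusted = exposure.equalize_hist(smoothed)              # histogram adjustment
    mask = adjusted > filters.threshold_local(adjusted, 35)  # adaptive threshold
    return measure.regionprops(measure.label(mask), intensity_image=ct_slice)

def region_features(region):
    # A few shape/intensity descriptors; the paper extracts 13 such features.
    return [region.area, region.eccentricity, region.solidity,
            region.equivalent_diameter, region.mean_intensity]

rois = candidate_regions(np.random.rand(128, 128))           # stand-in CT slice
X_demo = np.array([region_features(r) for r in rois])

# Back-propagation neural network (an MLP) on hypothetical labelled features.
X = np.random.rand(100, 5)
y = np.random.randint(0, 2, 100)
bpnn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X, y)
```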

Building a X-ray Database for Mammography on Vietnamese Patients and automatic Detecting ROI Using Mask-RCNN

  • Thang, Nguyen Duc
  • Dung, Nguyen Viet
  • Duc, Tran Vinh
  • Nguyen, Anh
  • Nguyen, Quang H.
  • Anh, Nguyen Tu
  • Cuong, Nguyen Ngoc
  • Linh, Le Tuan
  • Hanh, Bui My
  • Phu, Phan Huy
  • Phuong, Nguyen Hoang
2021 Book Section, cited 0 times
This paper describes the method of building an X-ray database for mammography on Vietnamese patients, collected at Hanoi Medical University Hospital. The dataset has 4664 DICOM images corresponding to 1161 standard patients, with a uniform distribution across BIRAD categories 0 to 5. This paper also presents a method of detecting the Region of Interest (ROI) in mammograms based on the Mask R-CNN architecture. The ROI detection achieves mAP@0.5 = 0.8109, and the accuracy of BIRAD-level classification is 58.44%.
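
Adapting torchvision's pre-trained Mask R-CNN to a new label set, as this kind of ROI detection typically requires, comes down to swapping the box and mask heads. A minimal sketch, assuming BIRAD 0-5 plus background as the classes (the authors' exact setup is not reproduced here):

```python
# Sketch of adapting torchvision's Mask R-CNN to mammogram ROI detection.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 7  # background + BIRAD 0-5 (assumed label set)
model = maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box and mask heads for the new label set.
in_feat = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)
in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256, num_classes)

model.eval()
with torch.no_grad():
    detections = model([torch.rand(3, 512, 512)])  # one toy mammogram
print(detections[0].keys())                        # boxes, labels, scores, masks
```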

Optimization of Deep Learning Based Brain Extraction in MRI for Low Resource Environments

  • Thakur, Siddhesh P.
  • Pati, Sarthak
  • Panchumarthy, Ravi
  • Karkada, Deepthi
  • Wu, Junwen
  • Kurtaev, Dmitry
  • Sako, Chiharu
  • Shah, Prashant
  • Bakas, Spyridon
2022 Conference Paper, cited 0 times
Website
Brain extraction is an indispensable step in neuro-imaging with a direct impact on downstream analyses. Most such methods have been developed for non-pathologically affected brains, and hence tend to suffer in performance when applied on brains with pathologies, e.g., gliomas, multiple sclerosis, traumatic brain injuries. Deep Learning (DL) methodologies for healthcare have shown promising results, but their clinical translation has been limited, primarily due to these methods suffering from i) high computational cost, and ii) specific hardware requirements, e.g., DL acceleration cards. In this study, we explore the potential of mathematical optimizations, towards making DL methods amenable to application in low resource environments. We focus on both the qualitative and quantitative evaluation of such optimizations on an existing DL brain extraction method, designed for pathologically-affected brains and agnostic to the input modality. We conduct direct optimizations and quantization of the trained model (i.e., prior to inference on new data). Our results yield substantial gains, in terms of speedup, latency, throughput, and reduction in memory usage, while the segmentation performance of the initial and the optimized models remains stable, i.e., as quantified by both the Dice Similarity Coefficient and the Hausdorff Distance. These findings support post-training optimizations as a promising approach for enabling the execution of advanced DL methodologies on plain commercial-grade CPUs, and hence contributing to their translation in limited- and low- resource clinical environments.
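
As a minimal illustration of post-training quantization of the kind evaluated above (the study itself optimizes a brain-extraction network and also uses vendor-specific toolchains), PyTorch's dynamic quantization converts weights to int8 without retraining; the toy model below is only a stand-in:

```python
# Post-training dynamic quantization on a toy model.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 2)).eval()

# Quantize Linear-layer weights to int8; activations stay float and are
# quantized dynamically at inference time. No retraining is required.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 256)
print(model(x), quantized(x))  # outputs should be close; model size shrinks
```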

Skull-Stripping of Glioblastoma MRI Scans Using 3D Deep Learning

  • Thakur, S. P.
  • Doshi, J.
  • Pati, S.
  • Ha, S. M.
  • Sako, C.
  • Talbar, S.
  • Kulkarni, U.
  • Davatzikos, C.
  • Erus, G.
  • Bakas, S.
Brainlesion 2019 Journal Article, cited 0 times
Website
Skull-stripping is an essential pre-processing step in computational neuro-imaging directly impacting subsequent analyses. Existing skull-stripping methods have primarily targeted non-pathologicallyaffected brains. Accordingly, they may perform suboptimally when applied on brain Magnetic Resonance Imaging (MRI) scans that have clearly discernible pathologies, such as brain tumors. Furthermore, existing methods focus on using only T1-weighted MRI scans, even though multi-parametric MRI (mpMRI) scans are routinely acquired for patients with suspected brain tumors. Here we present a performance evaluation of publicly available implementations of established 3D Deep Learning architectures for semantic segmentation (namely DeepMedic, 3D U-Net, FCN), with a particular focus on identifying a skull-stripping approach that performs well on brain tumor scans, and also has a low computational footprint. We have identified a retrospective dataset of 1,796 mpMRI brain tumor scans, with corresponding manually-inspected and verified gold-standard brain tissue segmentations, acquired during standard clinical practice under varying acquisition protocols at the Hospital of the University of Pennsylvania. Our quantitative evaluation identified DeepMedic as the best performing method (Dice = 97.9, Hausdorf f (95) = 2.68). We release this pre-trained model through the Cancer Imaging Phenomics Toolkit (CaPTk) platform.

Brain extraction on MRI scans in presence of diffuse glioma: Multi-institutional performance evaluation of deep learning methods and robust modality-agnostic training

  • Thakur, S.
  • Doshi, J.
  • Pati, S.
  • Rathore, S.
  • Sako, C.
  • Bilello, M.
  • Ha, S. M.
  • Shukla, G.
  • Flanders, A.
  • Kotrotsou, A.
  • Milchenko, M.
  • Liem, S.
  • Alexander, G. S.
  • Lombardo, J.
  • Palmer, J. D.
  • LaMontagne, P.
  • Nazeri, A.
  • Talbar, S.
  • Kulkarni, U.
  • Marcus, D.
  • Colen, R.
  • Davatzikos, C.
  • Erus, G.
  • Bakas, S.
Neuroimage 2020 Journal Article, cited 0 times
Website
Brain extraction, or skull-stripping, is an essential pre-processing step in neuro-imaging that has a direct impact on the quality of all subsequent processing and analysis steps. It is also a key requirement in multi-institutional collaborations to comply with privacy-preserving regulations. Existing automated methods, including Deep Learning (DL) based methods that have obtained state-of-the-art results in recent years, have primarily targeted brain extraction without considering pathologically-affected brains. Accordingly, they perform sub-optimally when applied on magnetic resonance imaging (MRI) brain scans with apparent pathologies such as brain tumors. Furthermore, existing methods focus on using only T1-weighted MRI scans, even though multi-parametric MRI (mpMRI) scans are routinely acquired for patients with suspected brain tumors. In this study, we present a comprehensive performance evaluation of recent deep learning architectures for brain extraction, training models on mpMRI scans of pathologically-affected brains, with a particular focus on seeking a practically-applicable, low computational footprint approach, generalizable across multiple institutions, further facilitating collaborations. We identified a large retrospective multi-institutional dataset of n=3340 mpMRI brain tumor scans, with manually-inspected and approved gold-standard segmentations, acquired during standard clinical practice under varying acquisition protocols, both from private institutional data and public (TCIA) collections. To facilitate optimal utilization of rich mpMRI data, we further introduce and evaluate a novel "modality-agnostic training" technique that can be applied using any available modality, without need for model retraining. Our results indicate that the modality-agnostic approach obtains accurate results, providing a generic and practical tool for brain extraction on scans with brain tumors.

AI Based Classification Framework For Cancer Detection Using Brain MRI Images

  • Thachayani, M.
  • Kurian, Sneha
2021 Conference Paper, cited 0 times
Website
Brain imaging technologies play an important role in medical diagnosis by providing new views of brain anatomy, giving greater insight into brain condition and function. Image processing is used in medical science to assist the early detection and treatment of life-critical illness. In this paper, cancer detection based on brain magnetic resonance imaging (MRI) images using a combination of a convolutional neural network (CNN) and a sparse stacked autoencoder is presented. This combination is found to significantly improve the accuracy and effectiveness of the classification process. The proposed method is coded in MATLAB and verified with a dataset consisting of 120 MRI images. The results obtained show that the proposed classifier is effective in classifying and grading brain tumor MRI images.

Handling images of patient postures in arms up and arms down position using a biomechanical skeleton model

  • Teske, Hendrik
  • Bartelheimer, Kathrin
  • Bendl, Rolf
  • Stoiber, Eva M
  • Giske, Kristina
Current Directions in Biomedical Engineering 2017 Journal Article, cited 0 times
Website

A multi-encoder variational autoencoder controls multiple transformational features in single-cell image analysis

  • Ternes, L.
  • Dane, M.
  • Gross, S.
  • Labrie, M.
  • Mills, G.
  • Gray, J.
  • Heiser, L.
  • Chang, Y. H.
Commun Biol 2022 Journal Article, cited 0 times
Website
Image-based cell phenotyping relies on quantitative measurements as encoded representations of cells; however, defining suitable representations that capture complex imaging features is challenged by the lack of robust methods to segment cells, identify subcellular compartments, and extract relevant features. Variational autoencoder (VAE) approaches produce encouraging results by mapping an image to a representative descriptor, and outperform classical hand-crafted features for morphology, intensity, and texture at differentiating data. Although VAEs show promising results for capturing morphological and organizational features in tissue, single cell image analyses based on VAEs often fail to identify biologically informative features due to uninformative technical variation. Here we propose a multi-encoder VAE (ME-VAE) in single cell image analysis using transformed images as a self-supervised signal to extract transform-invariant biologically meaningful features, including emergent features not obvious from prior knowledge. We show that the proposed architecture improves analysis by making distinct cell populations more separable compared to traditional and recent extensions of VAE architectures and intensity measurements by enhancing phenotypic differences between cells and by improving correlations to other analytic modalities. Better feature extraction and image analysis methods enabled by the ME-VAE will advance our understanding of complex cell biology and enable discoveries previously hidden behind image complexity ultimately improving medical outcomes and drug discovery.
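
A minimal multi-encoder VAE in the spirit described above can be sketched as several encoders, each fed a differently transformed view of the same cell image, feeding one shared decoder. The sizes, number of encoders, and loss weighting below are illustrative assumptions rather than the authors' configuration:

```python
# Minimal multi-encoder VAE sketch: per-view encoders, one shared decoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, latent=32):
        super().__init__()
        self.body = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent)
        self.logvar = nn.Linear(256, latent)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.logvar(h)

class MEVAE(nn.Module):
    def __init__(self, n_encoders=2, latent=32):
        super().__init__()
        self.encoders = nn.ModuleList(Encoder(latent) for _ in range(n_encoders))
        self.decoder = nn.Sequential(
            nn.Linear(latent * n_encoders, 256), nn.ReLU(),
            nn.Linear(256, 64 * 64), nn.Sigmoid())

    def forward(self, views):                      # one transformed view per encoder
        mus, logvars, zs = [], [], []
        for enc, v in zip(self.encoders, views):
            mu, logvar = enc(v)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
            mus.append(mu); logvars.append(logvar); zs.append(z)
        recon = self.decoder(torch.cat(zs, dim=1)).view(-1, 1, 64, 64)
        return recon, mus, logvars

def loss_fn(recon, target, mus, logvars):
    rec = F.binary_cross_entropy(recon, target, reduction="sum")
    kld = sum(-0.5 * torch.sum(1 + lv - m.pow(2) - lv.exp())
              for m, lv in zip(mus, logvars))
    return rec + kld

model = MEVAE()
views = [torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)]  # e.g., two transforms
recon, mus, logvars = model(views)
print(loss_fn(recon, torch.rand(4, 1, 64, 64), mus, logvars))  # target: untransformed image
```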

Is an analytical dose engine sufficient for intensity modulated proton therapy in lung cancer?

  • Teoh, Suliana
  • Fiorini, Francesca
  • George, Ben
  • Vallis, Katherine A
  • Van den Heuvel, Frank
Br J Radiol 2020 Journal Article, cited 0 times
Website
OBJECTIVE: To identify a subgroup of lung cancer plans where the analytical dose calculation (ADC) algorithm may be clinically acceptable compared to Monte Carlo (MC) dose calculation in intensity modulated proton therapy (IMPT). METHODS: Robust-optimised IMPT plans were generated for 20 patients to a dose of 70 Gy (relative biological effectiveness) in 35 fractions in Raystation. For each case, four plans were generated: three with ADC optimisation using the pencil beam (PB) algorithm followed by a final dose calculation with the following algorithms: PB (PB-PB), MC (PB-MC) and MC normalised to prescription dose (PB-MC scaled). A fourth plan was generated where MC optimisation and final dose calculation was performed (MC-MC). Dose comparison and gamma analysis (PB-PB vs PB-MC) at two dose thresholds were performed: 20% (D20) and 99% (D99) with PB-PB plans as reference. RESULTS: Overestimation of the dose to 99% and mean dose of the clinical target volume was observed in all PB-MC compared to PB-PB plans (median: 3.7 Gy(RBE) (5%) (range: 2.3 to 6.9 Gy(RBE)) and 1.8 Gy(RBE) (3%) (0.5 to 4.6 Gy(RBE))). PB-MC scaled plans resulted in significantly higher CTVD2 compared to PB-PB (median difference: -4 Gy(RBE) (-6%) (-5.3 to -2.4 Gy(RBE)), p </= .001). The overall median gamma pass rates (3%-3 mm) at D20 and D99 were 93.2% (range:62.2-97.5%) and 71.3 (15.4-92.0%). On multivariate analysis, presence of mediastinal disease and absence of range shifters were significantly associated with high gamma pass rates. Median D20 and D99 pass rates with these predictors were 96.0% (95.3-97.5%) and 85.4% (75.1-92.0%). MC-MC achieved similar target coverage and doses to OAR compared to PB-PB plans. CONCLUSION: In the presence of mediastinal involvement and absence of range shifters Raystation ADC may be clinically acceptable in lung IMPT. Otherwise, MC algorithm would be recommended to ensure accuracy of treatment plans. ADVANCES IN KNOWLEDGE: Although MC algorithm is more accurate compared to ADC in lung IMPT, ADC may be clinically acceptable where there is mediastinal involvement and absence of range shifters.
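
The 3%/3 mm gamma comparison reported above can be reproduced with the open-source pymedphys library; in this sketch the dose grids are random stand-ins, and the D20/D99 analyses are assumed to correspond to pymedphys' lower_percent_dose_cutoff argument:

```python
# 3%/3 mm gamma analysis sketch on toy 3D dose grids.
import numpy as np
import pymedphys

axes = (np.arange(0, 50, 2.0),) * 3                  # z, y, x coordinates in mm
dose_pb = np.random.uniform(60, 70, (25, 25, 25))    # stand-in PB dose (Gy)
dose_mc = dose_pb * np.random.normal(1.0, 0.02, dose_pb.shape)  # stand-in MC dose

gamma = pymedphys.gamma(
    axes, dose_pb,                 # reference: analytical (PB) plan
    axes, dose_mc,                 # evaluation: Monte Carlo recalculation
    dose_percent_threshold=3,
    distance_mm_threshold=3,
    lower_percent_dose_cutoff=20)  # i.e., the D20 analysis; use 99 for D99

valid = ~np.isnan(gamma)
pass_rate = 100 * np.mean(gamma[valid] <= 1)
print(f"gamma pass rate: {pass_rate:.1f}%")
```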

Proton vs photon: A model-based approach to patient selection for reduction of cardiac toxicity in locally advanced lung cancer

  • Teoh, S.
  • Fiorini, F.
  • George, B.
  • Vallis, K. A.
  • Van den Heuvel, F.
Radiother Oncol 2019 Journal Article, cited 0 times
Website
PURPOSE/OBJECTIVE: To use a model-based approach to identify a sub-group of patients with locally advanced lung cancer who would benefit from proton therapy compared to photon therapy for reduction of cardiac toxicity. MATERIAL/METHODS: Volumetric modulated arc photon therapy (VMAT) and robust-optimised intensity modulated proton therapy (IMPT) plans were generated for twenty patients with locally advanced lung cancer to give a dose of 70Gy (relative biological effectiveness (RBE)) in 35 fractions. Cases were selected to represent a range of anatomical locations of disease. Contouring, treatment planning and organs-at-risk constraints followed RTOG-1308 protocol. Whole heart and ub-structure doses were compared. Risk estimates of grade3 cardiac toxicity were calculated based on normal tissue complication probability (NTCP) models which incorporated dose metrics and patients baseline risk-factors (pre-existing heart disease (HD)). RESULTS: There was no statistically significant difference in target coverage between VMAT and IMPT. IMPT delivered lower doses to the heart and cardiac substructures (mean, heart V5 and V30, P<.05). In VMAT plans, there were statistically significant positive correlations between heart dose and the thoracic vertebral level that corresponded to the most inferior limit of the disease. The median level at which the superior aspect of the heart contour began was the T7 vertebrae. There was a statistically significant difference in dose (mean, V5 and V30) to the heart and all substructures (except mean dose to left coronary artery and V30 to sino-atrial node) when disease overlapped with or was inferior to the T7 vertebrae. In the presence of pre-existing HD and disease overlapping with or inferior to the T7 vertebrae, the mean estimated relative risk reduction of grade3 toxicities was 24-59%. CONCLUSION: IMPT is expected to reduce cardiac toxicity compared to VMAT by reducing dose to the heart and substructures. Patients with both pre-existing heart disease and tumour and nodal spread overlapping with or inferior to the T7 vertebrae are likely to benefit most from proton over photon therapy.

Automated, fast, robust brain extraction on contrast-enhanced T1-weighted MRI in presence of brain tumors: an optimized model based on multi-center datasets

  • Teng, Y.
  • Chen, C.
  • Shu, X.
  • Zhao, F.
  • Zhang, L.
  • Xu, J.
Eur Radiol 2023 Journal Article, cited 0 times
Website
OBJECTIVES: Existing brain extraction models should be further optimized to provide more information for oncological analysis. We aimed to develop an nnU-Net-based deep learning model for automated brain extraction on contrast-enhanced T1-weighted (T1CE) images in the presence of brain tumors. METHODS: This is a multi-center, retrospective study involving 920 patients. A total of 720 cases with four types of intracranial tumors from private institutions were collected and set as the training group and the internal test group. The Mann-Whitney U test (U test) was used to investigate if the model performance was associated with pathological types and tumor characteristics. Then, the generalization of the model was independently tested on public datasets consisting of 100 glioma and 100 vestibular schwannoma cases. RESULTS: In the internal test, the model achieved promising performance with median Dice similarity coefficient (DSC) of 0.989 (interquartile range (IQR), 0.988-0.991), and Hausdorff distance (HD) of 6.403 mm (IQR, 5.099-8.426 mm). The U test suggested a slightly descending performance in the meningioma and vestibular schwannoma groups. The results of the U test also suggested that there was a significant difference in the peritumoral edema group, with median DSC of 0.990 (IQR, 0.989-0.991, p = 0.002), and median HD of 5.916 mm (IQR, 5.000-8.000 mm, p = 0.049). In the external test, our model also showed robust performance, with median DSC of 0.991 (IQR, 0.983-0.998) and HD of 8.972 mm (IQR, 6.164-13.710 mm). CONCLUSIONS: For automated processing of MRI neuroimaging data in the presence of brain tumors, the proposed model can perform brain extraction including important superficial structures for oncological analysis. CLINICAL RELEVANCE STATEMENT: The proposed model serves as a radiological tool for image preprocessing in tumor cases, focusing on superficial brain structures, which could streamline the workflow and enhance the efficiency of subsequent radiological assessments. KEY POINTS: * The nnU-Net-based model is capable of segmenting significant superficial structures in brain extraction. * The proposed model showed feasible performance, regardless of pathological types or tumor characteristics. * The model showed generalization in the public datasets.
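
The two metrics reported above can be computed self-containedly; a minimal numpy/scipy sketch of the Dice similarity coefficient and a percentile Hausdorff distance on toy masks (the paper quotes HD in mm, so voxel spacing is passed through):

```python
# Self-contained Dice and 95th-percentile Hausdorff distance on binary masks.
import numpy as np
from scipy import ndimage

def dice(a, b):
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum())

def surface_distances(a, b, spacing=(1.0, 1.0, 1.0)):
    # distances from the surface voxels of a to the surface of b
    surf_a = a ^ ndimage.binary_erosion(a)
    dist_to_b = ndimage.distance_transform_edt(~(b ^ ndimage.binary_erosion(b)),
                                               sampling=spacing)
    return dist_to_b[surf_a]

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    d = np.concatenate([surface_distances(a, b, spacing),
                        surface_distances(b, a, spacing)])
    return np.percentile(d, 95)

pred = np.zeros((64, 64, 64), bool); pred[20:44, 20:44, 20:44] = True
ref = np.zeros_like(pred);           ref[22:46, 21:45, 20:44] = True
print(f"DSC = {dice(pred, ref):.3f}, HD95 = {hd95(pred, ref):.2f} mm")
```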

Noninvasive imaging signatures of HER2 and HR using ADC in invasive breast cancer: repeatability, reproducibility, and association with pathological complete response to neoadjuvant chemotherapy

  • Teng, X.
  • Zhang, J.
  • Zhang, X.
  • Fan, X.
  • Zhou, T.
  • Huang, Y. H.
  • Wang, L.
  • Lee, E. Y. P.
  • Yang, R.
  • Cai, J.
Breast Cancer Res 2023 Journal Article, cited 0 times
Website
BACKGROUND: The immunohistochemical test (IHC) of HER2 and HR can provide prognostic information and treatment guidance for invasive breast cancer patients. We aimed to develop noninvasive image signatures IS(HER2) and IS(HR) of HER2 and HR, respectively. We independently evaluate their repeatability, reproducibility, and association with pathological complete response (pCR) to neoadjuvant chemotherapy. METHODS: Pre-treatment DWI, IHC receptor status HER2/HR, and pCR to neoadjuvant chemotherapy of 222 patients from the multi-institutional ACRIN 6698 trial were retrospectively collected. They were pre-separated for development, independent validation, and test-retest. 1316 image features were extracted from DWI-derived ADC maps within manual tumor segmentations. IS(HER2) and IS(HR) were developed by RIDGE logistic regression using non-redundant and test-retest reproducible features relevant to IHC receptor status. We evaluated their association with pCR using area under receiver operating curve (AUC) and odds ratio (OR) after binarization. Their reproducibility was further evaluated using the test-retest set with intra-class coefficient of correlation (ICC). RESULTS: A 5-feature IS(HER2) targeting HER2 was developed (AUC = 0.70, 95% CI 0.59 to 0.82) and validated (AUC = 0.72, 95% CI 0.58 to 0.86) with high perturbation repeatability (ICC = 0.92) and test-retest reproducibility (ICC = 0.83). IS(HR) was developed using 5 features with higher association with HR during development (AUC = 0.75, 95% CI 0.66 to 0.84) and validation (AUC = 0.74, 95% CI 0.61 to 0.86) and similar repeatability (ICC = 0.91) and reproducibility (ICC = 0.82). Both image signatures showed significant associations with pCR with AUC of 0.65 (95% CI 0.50 to 0.80) for IS(HER2) and 0.64 (95% CI 0.50 to 0.78) for IS(HER2) in the validation cohort. Patients with high IS(HER2) were more likely to achieve pCR to neoadjuvant chemotherapy with validation OR of 4.73 (95% CI 1.64 to 13.65, P value = 0.006). Low IS(HR) patients had higher pCR with OR = 0.29 (95% CI 0.10 to 0.81, P value = 0.021). Molecular subtypes derived from the image signatures showed comparable pCR prediction values to IHC-based molecular subtypes (P value > 0.05). CONCLUSION: Robust ADC-based image signatures were developed and validated for noninvasive evaluation of IHC receptors HER2 and HR. We also confirmed their value in predicting treatment response to neoadjuvant chemotherapy. Further evaluations in treatment guidance are warranted to fully validate their potential as IHC surrogates.
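
Two building blocks of this study can be sketched compactly: an L2-penalized (RIDGE) logistic signature over selected features, and an ICC to quantify prediction consistency across repeats. The data below are hypothetical, and the ICC shown is a one-way random-effects ICC(1,1), one common variant:

```python
# Ridge logistic signature plus ICC(1,1) consistency check on toy data.
import numpy as np
from sklearn.linear_model import LogisticRegression

def icc_1_1(scores):
    """scores: (n_subjects, k_repeats), e.g., model outputs per perturbation."""
    n, k = scores.shape
    grand = scores.mean()
    msb = k * np.sum((scores.mean(axis=1) - grand) ** 2) / (n - 1)
    msw = np.sum((scores - scores.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical data: 5 selected ADC features for 100 patients, binary receptor status.
X = np.random.randn(100, 5)
y = np.random.randint(0, 2, 100)
signature = LogisticRegression(penalty="l2", C=1.0).fit(X, y)

# Consistency of the signature across (say) 10 repeated extractions per patient.
repeats = signature.predict_proba(X)[:, 1][:, None] + np.random.normal(0, 0.02, (100, 10))
print(f"ICC(1,1) = {icc_1_1(repeats):.2f}")
```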

Improving radiomic model reliability and generalizability using perturbations in head and neck carcinoma

  • Teng, Xinzhi
2023 Thesis, cited 0 times
Website
Background: Radiomic models for clinical applications need to be reliable. However, model reliability is conventionally established in prospective settings, requiring the proposal and special design of a separate study. As prospective studies are rare, the reliability of most proposed models is unknown. Facilitating the assessment of radiomic model reliability during development would help to identify the most promising models for prospective studies. Purpose: This thesis aims to propose a framework to build reliable radiomic models using a perturbation method. The aim was separated into three studies: 1) develop a perturbation-based assessment method to quantitatively evaluate the reliability of radiomic models, 2) evaluate the perturbation-based method against the test-retest method for developing reliable radiomic models, and 3) evaluate radiomic model reliability and generalizability after removing low-reliability radiomics features. Methods and Materials: Four publicly available head-and-neck carcinoma (HNC) datasets and one breast cancer dataset, totalling 1,641 patients, were retrospectively recruited from The Cancer Imaging Archive (TCIA). The computed tomography (CT) images, their gross tumor volume (GTV) segmentations, and distant metastasis (DM) and local-/regional-recurrence (LR) after definitive treatment were collected from the HNC datasets. Multi-parametric diffusion-weighted images (DWI), test-retest DWI scans, and pathological complete response (pCR) were collected from the breast cancer dataset. For the development of the reliability assessment method, one dataset with DM outcome as the clinical task was used to build the survival model. Sixty perturbed datasets were simulated by randomly translating, rotating, and adding noise to the original image and randomizing the GTV segmentation. The perturbed features were subsequently extracted from the perturbed datasets. The radiomic survival model was developed for DM risk prediction, and its reliability was quantified with the intra-class coefficient of correlation (ICC) to evaluate the model prediction consistency on perturbed features. In addition, a sensitivity analysis was performed to verify the variation between input feature reliability and output prediction reliability. Then, a new radiomic model to predict pCR with the DWI-derived apparent diffusion coefficient (ADC) map was developed, and its reliability was quantified with ICC on perturbed image features and test-retest image features, respectively. Following the establishment of the perturbation-based model reliability assessment (ICC), the model reliability and generalizability after removing low-reliability features (ICC thresholds of 0, 0.75 and 0.95) were evaluated under a repeated stratified cross-validation with the HNC datasets. Model reliability was evaluated with the perturbation-based ICC, and model generalizability was evaluated by the average train-test area under the receiver operating characteristic curve (AUC) difference in cross-validation. The experiment was conducted on all four HNC datasets, two clinical outcomes, and five classification algorithms. Results: In the development of the model reliability assessment method, the reliability index ICC was used to quantify the model output consistency on features extracted from the perturbed images and segmentations. In a six-feature radiomic model, the concordance indexes (C-indexes) of the survival model were 0.742 and 0.769 for the training and testing cohorts, respectively. For the perturbed training and testing datasets, the respective mean C-indexes were 0.686 and 0.678. This yielded ICC values of 0.565 (0.518–0.615) and 0.596 (0.527–0.670) for the perturbed training and testing datasets, respectively. When only highly reliable features were used for radiomic modeling, the model’s ICC increased to 0.782 (0.759–0.815) and 0.825 (0.782–0.867) and its C-index decreased to 0.712 and 0.642 for the training and testing data, respectively. This shows that our assessment method is sensitive to the reliability of the input. In the comparison experiment between the perturbation-based and test-retest methods, the perturbation method achieved a radiomic model with comparable reliability (ICC: 0.90 vs. 0.91, P-value > 0.05) and classification performance (AUC: 0.76 vs. 0.77, P-value > 0.05) to the test-retest method. For the model reliability and generalizability evaluation after removing low-reliability features, the average model reliability ICC showed significant improvements from 0.65 to 0.78 (ICC threshold 0 vs 0.75, P-value < 0.01) and 0.91 (ICC threshold 0 vs. 0.95, P-value < 0.01) under the increasing reliability thresholds. Additionally, model generalizability increased substantially, as the mean train-test AUC difference was reduced from 0.21 to 0.18 (P-value < 0.01) and 0.12 (P-value < 0.01), and the testing AUCs were maintained at the same level (P-value > 0.05). Conclusions: We proposed a perturbation-based framework to evaluate radiomic model reliability and to develop more reliable and generalizable radiomic models. The perturbation-based method is a practical alternative to test-retest scans in assessing radiomic model reliability. Our results also suggest that pre-screening of low-reliability radiomics features prior to modeling is a necessary step to improve final model reliability and generalizability to unseen datasets.
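
The perturbation simulation at the core of the thesis can be sketched with scipy.ndimage: random translation, rotation, and noise applied to a volume before features are re-extracted. The magnitudes below are illustrative assumptions; only the count of 60 perturbed copies follows the text:

```python
# Simulate re-scans by perturbing a volume (translate, rotate, add noise).
import numpy as np
from scipy import ndimage

def perturb(volume, rng):
    shift = rng.uniform(-2, 2, size=3)            # voxel translation
    angle = rng.uniform(-10, 10)                  # in-plane rotation (degrees)
    out = ndimage.shift(volume, shift, order=1)
    out = ndimage.rotate(out, angle, axes=(1, 2), reshape=False, order=1)
    return out + rng.normal(0, 0.01 * volume.std(), volume.shape)  # noise

rng = np.random.default_rng(42)
ct = np.random.randn(32, 64, 64)                   # stand-in CT volume
perturbed = [perturb(ct, rng) for _ in range(60)]  # 60 perturbed copies, as above
print(len(perturbed), perturbed[0].shape)
```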

Scalable and flexible management of medical image big data

  • Teng, Dejun
  • Kong, Jun
  • Wang, Fusheng
Distributed and Parallel Databases 2018 Journal Article, cited 0 times
Website

Lung Nodule Detection and Classification using Machine Learning Techniques

  • Tekade, Ruchita
ASIAN JOURNAL FOR CONVERGENCE IN TECHNOLOGY (AJCT)-UGC LISTED 2018 Journal Article, cited 0 times
Website
As lung cancer is the second most common cause of death, early detection of lung cancer has become necessary in many computer-aided diagnosis (CAD) systems. Recently, many CAD systems have been implemented to detect lung nodules using Computed Tomography (CT) scan images [2]. In this paper, some image pre-processing methods such as thresholding, clearing borders, and morphological operations (viz., erosion, closing, opening) are discussed to detect lung nodule regions, i.e., Regions of Interest (ROI), in patient lung CT scan images. Also, machine learning techniques such as the Support Vector Machine (SVM) and Convolutional Neural Network (CNN) are discussed for classifying lung nodule and non-nodule objects in patient lung CT scan images using the sets of lung nodule regions. In this study, the Lung Image Database Consortium image collection (LIDC-IDRI) dataset of patient CT scan images has been used to detect and classify lung nodules. The lung nodule classification accuracy of the SVM is 90% and that of the CNN is 91.66%.

DESIGNING AND TESTING A MOLECULARLY TARGETED GLIOBLASTOMA THERANOSTIC: EXPERIMENTAL AND COMPUTATIONAL STUDIES

  • Tedla, Getachew Ebuy
2018 Thesis, cited 0 times
Website
With an extremely poor patient prognosis, glioblastoma multiforme (GBM) is one of the most aggressive forms of brain tumor, with a median patient survival of less than 15 months. While new diagnostic and therapeutic approaches continue to emerge, progress in reducing the mortality associated with the disease is insufficient. Thus, developing new methods with the potential to overcome problems that limit effective imaging and therapeutic efficacy in GBM is still a critical need. The overall goal of this research was therefore to develop targeted glioblastoma theranostics capable of imaging disease progression and simultaneously killing cancer cells. To achieve this, the state of the art of liposome-based cancer theranostics is reviewed in detail, and potential glioblastoma biomarkers for theranostic delivery are identified by querying different databases and by reviewing the literature. Then tumor-targeting liposomes loaded with Gd3N@C80 and doxorubicin (DXR) are developed and tested in vitro. Finally, the stability of these formulations in different physiological salt solutions is evaluated using computational techniques including area per lipid, lipid interdigitation, carbon-deuterium order parameter, radial distribution of ions, as well as steered molecular dynamics simulations. In conclusion, the experimental and computational studies of this dissertation demonstrated that DXR- and Gd3N@C80-OH-loaded, lactoferrin and transferrin dual-tagged, PEGylated liposomes might be potential drug and imaging agent delivery systems for GBM treatment.

Reduced lung-cancer mortality with low-dose computed tomographic screening

  • The National Lung Screening Trial Research Team
  • Aberle, D. R.
  • Adams, A. M.
  • Berg, C. D.
  • Black, W. C.
  • Clapp, J. D.
  • Fagerstrom, R. M.
  • Gareen, I. F.
  • Gatsonis, C.
  • Marcus, P. M.
  • Sicks, J. D.
New England Journal of Medicine 2011 Journal Article, cited 4992 times
Website
BACKGROUND The aggressive and heterogeneous nature of lung cancer has thwarted efforts to reduce mortality from this cancer through the use of screening. The advent of low-dose helical computed tomography (CT) altered the landscape of lung-cancer screening, with studies indicating that low-dose CT detects many tumors at early stages. The National Lung Screening Trial (NLST) was conducted to determine whether screening with low-dose CT could reduce mortality from lung cancer. METHODS From August 2002 through April 2004, we enrolled 53,454 persons at high risk for lung cancer at 33 U.S. medical centers. Participants were randomly assigned to undergo three annual screenings with either low-dose CT (26,722 participants) or single-view posteroanterior chest radiography (26,732). Data were collected on cases of lung cancer and deaths from lung cancer that occurred through December 31, 2009. RESULTS The rate of adherence to screening was more than 90%. The rate of positive screening tests was 24.2% with low-dose CT and 6.9% with radiography over all three rounds. A total of 96.4% of the positive screening results in the low-dose CT group and 94.5% in the radiography group were false positive results. The incidence of lung cancer was 645 cases per 100,000 person-years (1060 cancers) in the low-dose CT group, as compared with 572 cases per 100,000 person-years (941 cancers) in the radiography group (rate ratio, 1.13; 95% confidence interval [CI], 1.03 to 1.23). There were 247 deaths from lung cancer per 100,000 person-years in the low-dose CT group and 309 deaths per 100,000 person-years in the radiography group, representing a relative reduction in mortality from lung cancer with low-dose CT screening of 20.0% (95% CI, 6.8 to 26.7; P=0.004). The rate of death from any cause was reduced in the low-dose CT group, as compared with the radiography group, by 6.7% (95% CI, 1.2 to 13.6; P=0.02). CONCLUSIONS Screening with the use of low-dose CT reduces mortality from lung cancer. (Funded by the National Cancer Institute; National Lung Screening Trial ClinicalTrials.gov number, NCT00047385.)
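
The headline NLST numbers can be checked directly from the rates quoted above; note the paper's 20.0% mortality reduction is computed from unrounded rates, so the rounded inputs here give 20.1%:

```python
# Reproduce the reported incidence rate ratio and mortality reduction.
ct_incidence, xray_incidence = 645, 572      # cancers per 100,000 person-years
ct_deaths, xray_deaths = 247, 309            # deaths per 100,000 person-years

rate_ratio = ct_incidence / xray_incidence
mortality_reduction = (xray_deaths - ct_deaths) / xray_deaths
print(f"rate ratio = {rate_ratio:.2f}")                    # 1.13, as reported
print(f"mortality reduction = {mortality_reduction:.1%}")  # 20.1% from rounded rates
```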

Association between tumor architecture derived from generalized Q-space MRI and survival in glioblastoma

  • Taylor, Erik N
  • Ding, Yao
  • Zhu, Shan
  • Cheah, Eric
  • Alexander, Phillip
  • Lin, Leon
  • Aninwene II, George E
  • Hoffman, Matthew P
  • Mahajan, Anita
  • Mohamed, Abdallah SR
Oncotarget 2017 Journal Article, cited 0 times
Website
While it is recognized that the overall resistance of glioblastoma to treatment may be related to intra-tumor patterns of structural heterogeneity, imaging methods to assess such patterns remain rudimentary. Methods: We utilized a generalized Q-space imaging (GQI) algorithm to analyze magnetic resonance imaging (MRI) derived from a rodent model of glioblastoma and 2 clinical datasets to correlate GQI, histology, and survival. Results: In a rodent glioblastoma model, GQI demonstrated a poorly coherent core region, consisting of diffusion tracts < 5 mm, surrounded by a shell of highly coherent diffusion tracts, 6-25 mm. Histologically, the core region possessed a high degree of necrosis, whereas the shell consisted of organized sheets of anaplastic cells with elevated mitotic index. These attributes define tumor architecture as the macroscopic organization of variably aligned tumor cells. Applied to MRI data from The Cancer Imaging Atlas (TCGA), the core-shell diffusion tract-length ratio (c/s ratio) correlated linearly with necrosis, which, in turn, was inversely associated with survival (p = 0.00002). We confirmed in an independent cohort of patients (n = 62) that the c/s ratio correlated inversely with survival (p = 0.0004). Conclusions: The analysis of MR images by GQI affords insight into tumor architectural patterns in glioblastoma that correlate with biological heterogeneity and clinical outcome.

A One-Class Variational Autoencoder (OCVAE) Cascade for Classifying Atypical Bone Marrow Cell Sub-types

  • Tarquino, Jonathan
  • Rodriguez, Jhonathan
  • Alvarez-Jimenez, Charlems
  • Romero, Eduardo
2023 Conference Paper, cited 0 times
Atypical bone marrow (BM) cell-subtype characterization defines the diagnosis and follow up of different hematologic disorders. However, this process is basically a visual task, which is prone to inter- and intra-observer variability. The presented work introduces a new application of one-class variational autoencoders (OCVAE) for automatically classifying the 4 most common pathological atypical BM cell-subtypes, namely myelocytes, blasts, promyelocytes, and erythroblasts, regardless the disease they are associated with. The presented OCVAE-based representation is obtained by concatenating the bottleneck of 4 separated OCVAEs, specifically set to capture one-cell-sub-type pattern at a time. In addition, this strategy provides a complete validation scheme in a subset of an open access image dataset, demonstrating low requirements in terms of number of training images. Each particular OCVAE is trained to provide specific latent space parameters (64 means and 64 variances) for the corresponding atypical cell class. Afterwards, the obtained concatenated representation space feeds different classifiers which discriminate the proposed classes. Evaluation is done by using a subset (n=26,000) of a public single-cell BM image database, including two independent partitions, one for setting the VAEs to extract features ( n=20,800), and one for training and testing a set classifiers (n = 5200). Reported performance metrics show the concatenated-OCVAE characterization successfully differentiates the proposed atypical BM cell classes with accuracy = 0.938, precision = 0.935, recall = 0.935, f1-score = 0.932, outperforming previously published strategies for the same task (handcrafted features, ResNext, ResNet-50, XCeption, CoAtnet), while a more thorough experimental validation is included.

Automated Detection of Early Pulmonary Nodule in Computed Tomography Images

  • Tariq, Ahmed Usama
2019 Thesis, cited 0 times
Website
Classification of lung cancer in CT scans has two major steps: detect all suspicious lesions, also known as pulmonary nodules, and estimate their malignancy. Currently, many studies address nodule detection, but few evaluate nodule malignancy. Since the presence of a nodule does not unquestionably indicate lung cancer, and the morphology of a nodule has a complex association with malignancy, the diagnosis of lung cancer demands careful examination of each suspicious nodule and integration of information across all nodules. We propose a 3D CNN CAD system to solve this problem. The system consists of two modules: a 3D CNN for nodule detection, which outputs all suspicious nodules for a subject, and a second module trained with an XGBoost classifier on selective data to obtain the probability of lung malignancy for the subject.

Lightweight U-Nets for Brain Tumor Segmentation

  • Tarasiewicz, Tomasz
  • Kawulok, Michal
  • Nalepa, Jakub
2021 Book Section, cited 0 times
Automated brain tumor segmentation is a vital topic due to its clinical applications. We propose to exploit a lightweight U-Net-based deep architecture called Skinny for this task—it was originally employed for skin detection from color images, and benefits from a wider spatial context. We train multiple Skinny networks over all image planes (axial, coronal, and sagittal), and form an ensemble containing such models. The experiments showed that our approach allows us to obtain accurate brain tumor delineation from multi-modal magnetic resonance images.

Improved automated tumor segmentation in whole-body 3D scans using multi-directional 2D projection-based priors

  • Tarai, S.
  • Lundstrom, E.
  • Sjoholm, T.
  • Jonsson, H.
  • Korenyushkin, A.
  • Ahmad, N.
  • Pedersen, M. A.
  • Molin, D.
  • Enblad, G.
  • Strand, R.
  • Ahlstrom, H.
  • Kullberg, J.
2024 Journal Article, cited 0 times
Website
Early cancer detection, guided by whole-body imaging, is important for the overall survival and well-being of the patients. While various computer-assisted systems have been developed to expedite and enhance cancer diagnostics and longitudinal monitoring, the detection and segmentation of tumors, especially from whole-body scans, remain challenging. To address this, we propose a novel end-to-end automated framework that first generates a tumor probability distribution map (TPDM), incorporating prior information about the tumor characteristics (e.g. size, shape, location). Subsequently, the TPDM is integrated with a state-of-the-art 3D segmentation network along with the original PET/CT or PET/MR images. This aims to produce more meaningful tumor segmentation masks compared to using the baseline 3D segmentation network alone. The proposed method was evaluated on three independent cohorts (autoPET, CAR-T, cHL) of images containing different cancer forms, obtained with different imaging modalities, and acquisition parameters and lesions annotated by different experts. The evaluation demonstrated the superiority of our proposed method over the baseline model by significant margins in terms of Dice coefficient, and lesion-wise sensitivity and precision. Many of the extremely small tumor lesions (i.e. the most difficult to segment) were missed by the baseline model but detected by the proposed model without additional false positives, resulting in clinically more relevant assessments. On average, an improvement of 0.0251 (autoPET), 0.144 (CAR-T), and 0.0528 (cHL) in overall Dice was observed. In conclusion, the proposed TPDM-based approach can be integrated with any state-of-the-art 3D UNET with potentially more accurate and robust segmentation results.

Pancreas CT Segmentation by Predictive Phenotyping

  • Tang, Yucheng
  • Gao, Riqiang
  • Lee, Hohin
  • Yang, Qi
  • Yu, Xin
  • Zhou, Yuyin
  • Bao, Shunxing
  • Huo, Yuankai
  • Spraggins, Jeffrey
  • Virostko, Jack
  • Xu, Zhoubing
  • Landman, Bennett A.
2021 Conference Paper, cited 0 times
Website
Pancreas CT segmentation offers promise at understanding the structural manifestation of metabolic conditions. To date, the primary medical record of conditions that impact the pancreas is in the electronic health record (EHR) in terms of diagnostic phenotype data (e.g., ICD-10 codes). We posit that similar structural phenotypes could be revealed by studying subjects with similar medical outcomes. Segmentation is mainly driven by imaging data, but this direct approach may not consider differing canonical appearances with different underlying conditions (e.g., pancreatic atrophy versus pancreatic cysts). To this end, we exploit clinical features from EHR data to complement image features for enhancing the pancreas segmentation, especially in high-risk outcomes. Specifically, we propose, to the best of our knowledge, the first phenotype embedding model for pancreas segmentation by predicting representatives that share similar comorbidities. Such an embedding strategy can adaptively refine the segmentation outcome based on the discriminative contexts distilled from clinical features. Experiments with 2000 patients’ EHR data and 300 CT images with healthy pancreas, type II diabetes, and pancreatitis subjects show that segmentation by predictive phenotyping significantly improves performance over the state of the art (Dice score 0.775 to 0.791, p < 0.05, Wilcoxon signed-rank test). The proposed method additionally achieves superior performance on two public testing datasets, BTCV MICCAI Challenge 2015 and TCIA pancreas CT. Our approach provides a promising direction of advancing segmentation with phenotype features while not requiring EHR data as input during testing.

The Prognostic Value of Radiomics Features Extracted From Computed Tomography in Patients With Localized Clear Cell Renal Cell Carcinoma After Nephrectomy

  • Tang, Xin
  • Pang, Tong
  • Yan, Wei-Feng
  • Qian, Wen-Lei
  • Gong, You-Ling
  • Yang, Zhi-Gang
Front Oncol 2021 Journal Article, cited 0 times
Website
Background and purpose: Radiomics is an emerging field of quantitative imaging. The prognostic value of radiomics analysis in patients with localized clear cell renal cell carcinoma (ccRCC) after nephrectomy remains unknown. Methods: Computed tomography images of 167 eligible cases were obtained from the Cancer Imaging Archive database. Radiomics features were extracted from the region of interest contoured manually for each patient. Hierarchical clustering was performed to divide patients into distinct groups. Prognostic assessments were performed by Kaplan-Meier curves, Cox regression, and least absolute shrinkage and selection operator Cox regression. Transcriptome mRNA data were also included in the prognostic analyses. Endpoints were overall survival (OS) and disease-free survival (DFS). Concordance index (C-index), decision curve analysis, and calibration curves with 1,000 bootstrapping replications were used for model validation. Results: Hierarchical clustering groups from nephrographic features and mRNA could divide patients into different prognostic groups, while clustering groups from the corticomedullary or unenhanced phase could not distinguish patients' prognosis. In multivariate analyses, 11 OS-predicting and eight DFS-predicting features were identified in the nephrographic phase. Similarly, seven OS-predictors and seven DFS-predictors were confirmed in the mRNA data. In contrast, limited prognostic features were found in the corticomedullary (two OS-predictors and two DFS-predictors) and unenhanced phases (one OS-predictor and two DFS-predictors). Prognostic models combining both nephrographic features and mRNA showed a higher C-index than either model alone (C-index: 0.927 and 0.879 for OS- and DFS-prediction, respectively). In addition, decision curves and calibration curves confirmed the strong performance of the novel models. Conclusion: We investigated for the first time the prognostic significance of preoperative radiomics signatures in ccRCC patients. Radiomics features obtained from the nephrographic phase had stronger predictive ability than features from the corticomedullary or unenhanced phase. Multi-omics models combining radiomics and transcriptome data could further increase the predictive accuracy.
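
A minimal sketch of this analysis pattern (not the authors' exact pipeline), assuming the scipy and lifelines libraries; the feature matrix, survival columns, and two-cluster cut are illustrative assumptions rather than the study's actual data or settings.

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
features = rng.normal(size=(167, 20))            # radiomic feature matrix
Z = linkage(features, method="ward")             # hierarchical clustering
groups = fcluster(Z, t=2, criterion="maxclust")  # two prognostic groups

df = pd.DataFrame({
    "os_months": rng.exponential(40, size=167),  # overall survival time
    "event": rng.integers(0, 2, size=167),       # 1 if death observed
    "cluster": groups,
})
cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="event")
print(cph.summary[["coef", "p"]])                # cluster group as covariate
```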

Variational-Autoencoder Regularized 3D MultiResUNet for the BraTS 2020 Brain Tumor Segmentation

  • Tang, Jiarui
  • Li, Tengfei
  • Shu, Hai
  • Zhu, Hongtu
2021 Book Section, cited 0 times
Tumor segmentation is an important research topic in medical image segmentation. With the fast development of deep learning in computer vision, automated segmentation of brain tumors using deep neural networks has become increasingly popular. U-Net is the most widely used network for automated image segmentation, and many well-performing models are built on it. In this paper, we devise a model that combines the variational-autoencoder regularized 3D U-Net model [10] and the MultiResUNet model [7]. The model is trained on the 2020 Multimodal Brain Tumor Segmentation Challenge (BraTS) dataset and evaluated on the validation set. Our results show that the modified 3D MultiResUNet performs better than the previous 3D U-Net.
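
The combined objective of such VAE-regularized segmentation models can be written as a Dice term plus reconstruction and KL terms from the variational branch. Below is a minimal sketch of that loss structure; the 0.1 weights follow common practice and are assumptions, not the paper's exact settings.

```python
import torch

def vae_regularized_loss(seg_logits, target, recon, image, mu, logvar):
    probs = torch.sigmoid(seg_logits)
    # soft Dice loss on the segmentation branch
    inter = (probs * target).sum()
    dice = 1 - (2 * inter + 1) / (probs.sum() + target.sum() + 1)
    # L2 reconstruction loss on the VAE decoder branch
    l2 = torch.mean((recon - image) ** 2)
    # KL divergence between the latent posterior and a unit Gaussian
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return dice + 0.1 * l2 + 0.1 * kl
```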

Clinically applicable deep learning framework for organs at risk delineation in CT images

  • Tang, Hao
  • Chen, Xuming
  • Liu, Yang
  • Lu, Zhipeng
  • You, Junhua
  • Yang, Mingzhou
  • Yao, Shengyu
  • Zhao, Guoqi
  • Xu, Yi
  • Chen, Tingfeng
  • Liu, Yong
  • Xie, Xiaohui
Nature Machine Intelligence 2019 Journal Article, cited 0 times
Radiation therapy is one of the most widely used therapies for cancer treatment. A critical step in radiation therapy planning is to accurately delineate all organs at risk (OARs) to minimize potential adverse effects to healthy surrounding organs. However, manually delineating OARs based on computed tomography images is time-consuming and error-prone. Here, we present a deep learning model to automatically delineate OARs in the head and neck, trained on a dataset of 215 computed tomography scans with 28 OARs manually delineated by experienced radiation oncologists. On a hold-out dataset of 100 computed tomography scans, our model achieves an average Dice similarity coefficient of 78.34% across the 28 OARs, significantly outperforming human experts and the previous state-of-the-art method by 10.05% and 5.18%, respectively. Our model takes only a few seconds to delineate an entire scan, compared to over half an hour by human experts. These findings demonstrate the potential for deep learning to improve the quality and reduce the treatment planning time of radiation therapy.

Radiomics from Various Tumour Volume Sizes for Prognosis Prediction of Head and Neck Squamous Cell Carcinoma: A Voted Ensemble Machine Learning Approach

  • Tang, F. H.
  • Cheung, E. Y.
  • Wong, H. L.
  • Yuen, C. M.
  • Yu, M. H.
  • Ho, P. C.
2022 Journal Article, cited 0 times
Website
BACKGROUND: Traditionally, cancer prognosis was determined by tumour size, lymph node spread and presence of metastasis (TNM staging). Radiomics of tumour volume has recently been used for prognosis prediction. In the present study, we evaluated the effect of various sizes of tumour volume. A voted ensemble approach with a combination of multiple machine learning algorithms is proposed for prognosis prediction for head and neck squamous cell carcinoma (HNSCC). METHODS: A total of 215 HNSCC CT image sets with radiotherapy structure sets were acquired from The Cancer Imaging Archive (TCIA). Six tumour volumes, including gross tumour volume (GTV), diminished GTV, extended GTV, planning target volume (PTV), diminished PTV and extended PTV, were delineated. The extracted radiomics features were analysed by decision tree, random forest, extreme boost, support vector machine and generalized linear algorithms. A voted ensemble machine learning (VEML) model that optimizes the above algorithms was used. The receiver operating characteristic area under the curve (ROC-AUC), accuracy, sensitivity and specificity were used to compare the performance of the machine learning methods. RESULTS: The VEML model demonstrated good prognosis prediction ability for all sizes of tumour volumes with reference to GTV and PTV, with accuracy of up to 88.3%, sensitivity of up to 79.9% and specificity of up to 96.6%. There was no significant difference between the various target volumes for the prognostic prediction of HNSCC patients (chi-square test, p > 0.05). CONCLUSIONS: Our study demonstrates that the proposed VEML model can accurately predict the prognosis of HNSCC patients using radiomics features from various tumour volumes.
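
A hedged sketch of the voting step over the classifier families named in the abstract, using scikit-learn; the hyperparameters are illustrative, GradientBoostingClassifier stands in for "extreme boost", and LogisticRegression stands in for the generalized linear algorithm.

```python
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

ensemble = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=5)),
        ("rf", RandomForestClassifier(n_estimators=200)),
        ("boost", GradientBoostingClassifier()),     # stand-in for extreme boost
        ("svm", SVC(probability=True)),              # probability=True enables soft voting
        ("glm", LogisticRegression(max_iter=1000)),  # generalized linear model
    ],
    voting="soft",  # average predicted probabilities across models
)
# ensemble.fit(X_train, y_train)
# y_pred = ensemble.predict(X_test)
```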

Five Classifications of Mammography Images Based on Deep Cooperation Convolutional Neural Network

  • Tang, Chun-ming
  • Cui, Xiao-Mei
  • Yu, Xiang
  • Yang, Fan
American Scientific Research Journal for Engineering, Technology, and Sciences (ASRJETS) 2019 Journal Article, cited 0 times
Website
Mammography is currently the preferred imaging method for breast cancer screening. Masses and calcification are the main positive signs of mammography. Due to the variable appearance of masses and calcification, a significant number of breast cancer cases are missed or misdiagnosed when diagnosis depends only on the radiologists' subjective judgement. At present, most studies are based on classical Convolutional Neural Networks (CNN), which use transfer learning to classify benign and malignant masses in mammography images. However, CNNs were designed for natural images, which are substantially different from medical images. Therefore, we propose a Deep Cooperation CNN (DCCNN) to classify mammography images of a data set into five categories: benign calcification, benign mass, malignant calcification, malignant mass and normal breast. The data set consists of 695 normal cases from DDSM, and 753 calcification cases and 891 mass cases from CBIS-DDSM. Finally, DCCNN achieves 91% accuracy and 0.98 AUC on the test set, performance superior to the VGG16, GoogLeNet and InceptionV3 models. Therefore, DCCNN can aid radiologists in making more accurate judgments, greatly reducing the rate of missed and misdiagnosed cases.

Performance optimisation of deep learning models using majority voting algorithm for brain tumour classification

  • Tandel, G. S.
  • Tiwari, A.
  • Kakde, O. G.
Comput Biol Med 2021 Journal Article, cited 0 times
Website
BACKGROUND: Although biopsy is the gold standard for tumour grading, being invasive, this procedure also proves fatal to the brain. Thus, non-invasive methods for brain tumour grading are urgently needed. Here, a magnetic resonance imaging (MRI)-based non-invasive brain tumour grading method has been proposed using deep learning (DL) and machine learning (ML) techniques. METHOD: Four clinically applicable datasets were designed. The four datasets were trained and tested on five DL-based models (convolutional neural networks), AlexNet, VGG16, ResNet18, GoogleNet, and ResNet50, and five ML-based models, Support Vector Machine, K-Nearest Neighbours, Naive Bayes, Decision Tree, and Linear Discrimination using five-fold cross-validation. A majority voting (MajVot)-based ensemble algorithm has been proposed to optimise the overall classification performance of five DL and five ML-based models. RESULTS: The average accuracy improvement of four datasets using the DL-based MajVot algorithm against AlexNet, VGG16, ResNet18, GoogleNet, and ResNet50 models was 2.02%, 1.11%, 1.04%, 2.67%, and 1.65%, respectively. Further, a 10.12% improvement was seen in the average accuracy of four datasets using the DL method against ML. Furthermore, the proposed DL-based MajVot algorithm was validated on synthetic face data and improved the male versus female face image classification accuracy by 2.88%, 0.71%, 1.90%, 2.24%, and 0.35% against AlexNet, VGG16, ResNet18, GoogleNet, and ResNet50, respectively. CONCLUSION: The proposed MajVot algorithm achieved promising results for brain tumour classification and is able to utilise the combined potential of multiple models.
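
For already-trained heterogeneous models (deep or shallow), this style of majority voting reduces to a mode over the per-model class predictions. A minimal sketch, assuming integer class labels:

```python
import numpy as np

def majority_vote(predictions):
    """predictions: (n_models, n_samples) integer class labels.
    Ties resolve to the lowest class label."""
    counts = np.apply_along_axis(np.bincount, 0, predictions,
                                 minlength=predictions.max() + 1)
    return counts.argmax(axis=0)

preds = np.array([[0, 1, 2],
                  [0, 1, 1],
                  [1, 1, 2]])        # three models, three samples
print(majority_vote(preds))          # -> [0 1 2]
```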

Multiclass magnetic resonance imaging brain tumor classification using artificial intelligence paradigm

  • Tandel, G. S.
  • Balestrieri, A.
  • Jujaray, T.
  • Khanna, N. N.
  • Saba, L.
  • Suri, J. S.
Comput Biol Med 2020 Journal Article, cited 157 times
Website
MOTIVATION: Brain or central nervous system cancer is the tenth leading cause of death in men and women. Even though brain tumour is not considered as the primary cause of mortality worldwide, 40% of other types of cancer (such as lung or breast cancers) are transformed into brain tumours due to metastasis. Although the biopsy is considered the gold standard for cancer diagnosis, it poses several challenges such as low sensitivity/specificity, risk during the biopsy procedure, and relatively long waiting times for the biopsy results. Due to an increase in the sheer volume of patients with brain tumours, there is a need for a non-invasive, automatic computer-aided diagnosis tool that can automatically diagnose and estimate the grade of a tumour accurately within a few seconds. METHOD: Five clinically relevant multiclass datasets (two-, three-, four-, five-, and six-class) were designed. A transfer-learning-based Artificial Intelligence paradigm using a Convolutional Neural Network (CNN) was proposed and led to higher performance in brain tumour grading/classification using magnetic resonance imaging (MRI) data. We benchmarked the transfer-learning-based CNN model against six different machine learning (ML) classification methods, namely Decision Tree, Linear Discrimination, Naive Bayes, Support Vector Machine, K-nearest neighbour, and Ensemble. RESULTS: The CNN-based deep learning (DL) model outperforms the six types of ML models when considering the five types of multiclass tumour datasets (two-, three-, four-, five-, and six-class). The CNN-based AlexNet transfer learning system yielded mean accuracies derived from three kinds of cross-validation protocols (K2, K5, and K10) of 100, 95.97, 96.65, 87.14, and 93.74%, respectively. The mean areas under the curve of DL and ML were found to be 0.99 and 0.87, respectively, for p < 0.0001, and DL showed a 12.12% improvement over ML. Multiclass datasets were benchmarked against the TT protocol (where training and testing samples are the same). The optimal model was validated using a statistical method of a tumour separation index and verified on synthetic data consisting of eight classes. CONCLUSION: The transfer-learning-based AI system is useful in multiclass brain tumour grading and shows better performance than ML systems.

Investigation of thoracic four-dimensional CT-based dimension reduction technique for extracting the robust radiomic features

  • Tanaka, S.
  • Kadoya, N.
  • Kajikawa, T.
  • Matsuda, S.
  • Dobashi, S.
  • Takeda, K.
  • Jingu, K.
Phys Med 2019 Journal Article, cited 0 times
Website
Robust feature selection in radiomic analysis is often implemented using the RIDER test-retest datasets. However, the CT protocols of the local facility and the test-retest datasets differ. We therefore investigated the possibility of selecting robust features using thoracic four-dimensional CT (4D-CT) scans that are available from patients receiving radiation therapy. In 4D-CT datasets of 14 lung cancer patients who underwent stereotactic body radiotherapy (SBRT) and 14 test-retest datasets of non-small cell lung cancer (NSCLC), 1170 radiomic features (shape: n = 16, statistics: n = 32, texture: n = 1122) were extracted. A concordance correlation coefficient (CCC) > 0.85 was used to select robust features. We compared the robust features in various 4D-CT groups with those in test-retest. The total number of robust features ranged between 846/1170 (72%) and 970/1170 (83%) in all 4D-CT groups with three breathing phases (40%-60%), but between 44/1170 (4%) and 476/1170 (41%) in all 4D-CT groups with 10 breathing phases. In test-retest, the total number of robust features was 967/1170 (83%); thus, the number of robust features in 4D-CT was almost equal to that in test-retest when using the 40-60% breathing phases. In 4D-CT, respiratory motion is a factor that greatly affects the robustness of features; thus, using only the 40-60% breathing phases can prevent excessive dimension reduction in any 4D-CT dataset and select robust features suited to the CT protocol of one's own facility.
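
The CCC-based selection step can be sketched directly from Lin's definition; the paired feature matrices below (e.g. features from two breathing phases, or test and retest scans) and the 0.85 threshold mirror the abstract, while everything else is illustrative.

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient for two 1-D arrays."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

def robust_features(phase_a, phase_b, threshold=0.85):
    """phase_a, phase_b: (n_patients, n_features) paired measurements.
    Returns indices of features whose CCC exceeds the threshold."""
    return [j for j in range(phase_a.shape[1])
            if ccc(phase_a[:, j], phase_b[:, j]) > threshold]
```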

Analysis of a feature-deselective neuroevolution classifier (FD-NEAT) in a computer-aided lung nodule detection system for CT images

  • Tan, Maxine
  • Deklerck, Rudi
  • Jansen, Bart
  • Cornelis, Jan
2012 Conference Proceedings, cited 9 times
Website
Systems for Computer-Aided Detection (CAD), specifically for lung nodule detection, have received increasing attention in recent years. This is in tandem with the observation that patients who are diagnosed with early stage lung cancer and who undergo curative resection have a much better prognosis. In this paper, we analyze the performance of a novel feature-deselective neuroevolution method called FD-NEAT to retain relevant features derived from CT images and evolve neural networks that perform well for combined feature selection and classification. Network performance is analyzed based on radiologists' ratings of various lung nodule characteristics defined in the LIDC database. The analysis shows that the FD-NEAT classifier relates well with the radiologists' perception in almost all the defined nodule characteristics, and that FD-NEAT evolves networks that are less complex than the fixed-topology ANN in terms of the number of connections.

msFormer: Adaptive Multi-Modality 3D Transformer for Medical Image Segmentation

  • Tan, Jiaxin
  • Jiang, Chuangbo
  • Li, Laquan
  • Li, Haoyuan
  • Li, Weisheng
  • Zheng, Shenhai
2022 Book Section, cited 0 times
Over the past years, Convolutional Neural Networks (CNNs) have dominated the field of medical image segmentation, but they have difficulty representing long-range dependencies. Recently, the Transformer has been applied to medical image segmentation. Transformer-based architectures that utilize the self-attention mechanism (the core of the Transformer) can encode long-range dependencies on images with highly expressive learning capabilities. In this paper, we introduce an adaptive multi-modality 3D medical image segmentation network based on the Transformer (called msFormer), which is also a powerful 3D fusion network, and extend the application of the Transformer to multi-modality medical image segmentation. This fusion network is modeled in the U-shaped structure to exploit complementary features of different modalities at multiple scales, which enriches the volumetric representations. We conducted a comprehensive experimental analysis on the Prostate and BraTS2021 datasets. The results show that our method achieves an average DSC of 0.905 and 0.851 on these two datasets, respectively, outperforming existing state-of-the-art methods and providing significant improvements.

Does Anatomical Contextual Information Improve 3D U-Net-Based Brain Tumor Segmentation?

  • Tampu, Iulian Emil
  • Haj-Hosseini, Neda
  • Eklund, Anders
Diagnostics 2021 Journal Article, cited 0 times
Effective, robust, and automatic tools for brain tumor segmentation are needed for the extraction of information useful in treatment planning. Recently, convolutional neural networks have shown remarkable performance in the identification of tumor regions in magnetic resonance (MR) images. Context-aware artificial intelligence is an emerging concept for the development of deep learning applications for computer-aided medical image analysis. A large portion of the current research is devoted to the development of new network architectures to improve segmentation accuracy by using context-aware mechanisms. In this work, it is investigated whether or not the addition of contextual information from the brain anatomy in the form of white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) masks and probability maps improves U-Net-based brain tumor segmentation. The BraTS2020 dataset was used to train and test two standard 3D U-Net (nnU-Net) models that, in addition to the conventional MR image modalities, used the anatomical contextual information as extra channels in the form of binary masks (CIM) or probability maps (CIP). For comparison, a baseline model (BLM) that only used the conventional MR image modalities was also trained. The impact of adding contextual information was investigated in terms of overall segmentation accuracy, model training time, domain generalization, and compensation for fewer MR modalities available for each subject. Median (mean) Dice scores of 90.2 (81.9), 90.2 (81.9), and 90.0 (82.1) were obtained on the official BraTS2020 validation dataset (125 subjects) for BLM, CIM, and CIP, respectively. Results show that there is no statistically significant difference when comparing Dice scores between the baseline model and the contextual information models (p > 0.05), even when comparing performances for high and low grade tumors independently. In a few low grade cases where improvement was seen, the number of false positives was reduced. Moreover, no improvements were found when considering model training time or domain generalization. Only in the case of compensation for fewer MR modalities available for each subject did the addition of anatomical contextual information significantly improve (p < 0.05) the segmentation of the whole tumor. In conclusion, there is no overall significant improvement in segmentation performance when using anatomical contextual information in the form of either binary WM, GM, and CSF masks or probability maps as extra channels.

Inception Architecture for Brain Image Classification

  • Tamilarasi, R.
  • Gopinathan, S.
Journal of Physics: Conference Series 2021 Journal Article, cited 0 times
Website
A non-invasive diagnostic support system for brain cancer diagnosis is presented in this study. Recently, very deep convolutional neural networks have been designed for computerized tasks such as image classification and natural language processing. One of the standard architecture designs is the Visual Geometry Group (VGG) family of models, which uses a large number of small convolution filters (3x3) connected serially: before applying max pooling, convolution filters are stacked up to four layers deep to extract feature abstractions. The main drawbacks of going deeper are overfitting and the difficulty of updating gradient weights. These limitations are overcome by the inception module, which is wider rather than deeper. It has parallel convolution layers with 3x3, 5x5, and 1x1 filters whose outputs are concatenated, reducing the computational complexity caused by stacking. This study's experimental results show the usefulness of the inception architecture for aiding brain image classification on the Repository of Molecular Brain Neoplasia DaTa (REMBRANDT) Magnetic Resonance Imaging (MRI) images, with an average accuracy of 95.1%, sensitivity of 96.2%, and specificity of 94%.
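
A compact inception-style module matching this description, parallel 1x1, 3x3 and 5x5 convolutions whose outputs are concatenated, can be sketched as follows; the channel counts are illustrative, and the pooling branch and 1x1 reductions of the full architecture are omitted for brevity.

```python
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        # three parallel branches with different receptive fields
        self.b1 = nn.Conv2d(in_ch, 16, kernel_size=1)
        self.b3 = nn.Conv2d(in_ch, 16, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, 16, kernel_size=5, padding=2)

    def forward(self, x):
        # widen rather than deepen: concatenate branch outputs channel-wise
        return torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)

x = torch.rand(1, 1, 224, 224)     # e.g. a single-channel MRI slice
print(InceptionBlock(1)(x).shape)  # torch.Size([1, 48, 224, 224])
```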

A2M-LEUK: attention-augmented algorithm for blood cancer detection in children

  • Talaat, Fatma M.
  • Gamel, Samah A.
Neural Computing and Applications 2023 Journal Article, cited 0 times
Website
Leukemia is a malignancy that affects the blood and bone marrow. Its detection and classification are conventionally done through labor-intensive and specialized methods. The diagnosis of blood cancer in children is a critical task that requires high precision and accuracy. This study proposes a novel approach utilizing attention mechanism-based machine learning in conjunction with image processing techniques for the precise detection and classification of leukemia cells. The proposed attention-augmented algorithm for blood cancer detection in children (A2M-LEUK) leverages attention mechanisms to improve the detection of blood cancer in children. A2M-LEUK was evaluated on a dataset of blood cell images and achieved remarkable performance metrics: Precision = 99.97%, Recall = 100.00%, F1-score = 99.98%, and Accuracy = 99.98%. These results indicate the high accuracy and sensitivity of the proposed approach in identifying and categorizing leukemia. Overall, A2M-LEUK provides a promising approach for accurate and efficient detection and classification of leukemia cells, potentially improving the diagnosis and treatment of leukemia in children while reducing the workload of medical professionals.

Staging of clear cell renal cell carcinoma using random forest and support vector machine

  • Talaat, D.
  • Zada, F.
  • Kadry, R.
2020 Conference Paper, cited 0 times
Website
Kidney cancer is one of the deadliest types of cancer affecting the human body. It is regarded as the seventh most common type of cancer affecting men and the ninth affecting women. Early diagnosis of kidney cancer can improve the survival rates of many patients. Clear cell renal cell carcinoma (ccRCC) accounts for 90% of renal cancers. Although the exact cause of kidney cancer is still unknown, early diagnosis can help patients get the proper treatment at the proper time. In this paper, a novel semi-automated model is proposed for early detection and staging of clear cell renal cell carcinoma. The proposed model consists of three phases: segmentation, feature extraction, and classification. The first phase is image segmentation, where images were masked to segment the kidney lobes and the masked images were fed into a watershed algorithm to extract the tumor from the kidney. The second phase is feature extraction, where the gray level co-occurrence matrix (GLCM) method was combined with standard statistical measures to extract feature vectors from the segmented images. The last phase is classification, where the resulting feature vectors were introduced to random forest (RF) and support vector machine (SVM) classifiers. Experiments were carried out to validate the effectiveness of the proposed model using the TCGA-KIRC dataset, which contains 228 CT scans of ccRCC patients, of which 150 scans were used for learning and 78 for validation. The proposed model showed an improvement in accuracy of 15.12% over previous work.
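
The GLCM feature-extraction phase can be sketched with scikit-image and scikit-learn; tumor_roi is an assumed 2-D uint8 patch produced by the watershed step, and the chosen distances, angles and properties are illustrative, not the paper's exact configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

def glcm_features(tumor_roi):
    """tumor_roi: 2-D uint8 array segmented from the kidney."""
    glcm = graycomatrix(tumor_roi, distances=[1],
                        angles=[0, np.pi / 2], levels=256, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# X = np.vstack([glcm_features(roi) for roi in rois])  # plus statistical features
# RandomForestClassifier().fit(X, stages)
# SVC().fit(X, stages)
```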

Fine-Tuning Approach for Segmentation of Gliomas in Brain Magnetic Resonance Images with a Machine Learning Method to Normalize Image Differences among Facilities

  • Takahashi, S.
  • Takahashi, M.
  • Kinoshita, M.
  • Miyake, M.
  • Kawaguchi, R.
  • Shinojima, N.
  • Mukasa, A.
  • Saito, K.
  • Nagane, M.
  • Otani, R.
  • Higuchi, F.
  • Tanaka, S.
  • Hata, N.
  • Tamura, K.
  • Tateishi, K.
  • Nishikawa, R.
  • Arita, H.
  • Nonaka, M.
  • Uda, T.
  • Fukai, J.
  • Okita, Y.
  • Tsuyuguchi, N.
  • Kanemura, Y.
  • Kobayashi, K.
  • Sese, J.
  • Ichimura, K.
  • Narita, Y.
  • Hamamoto, R.
Cancers (Basel) 2021 Journal Article, cited 0 times
Website
Machine learning models for automated magnetic resonance image segmentation may be useful in aiding glioma detection. However, image differences among facilities cause performance degradation and impede detection. This study proposes a method to solve this issue. We used data from the Multimodal Brain Tumor Image Segmentation Benchmark (BraTS) and the Japanese cohort (JC) datasets. Three models for tumor segmentation were developed. In our methodology, the BraTS and JC models are trained on the BraTS and JC datasets, respectively, whereas the fine-tuning models are developed from the BraTS model and fine-tuned using the JC dataset. Our results show that the Dice coefficient score of the JC model for the test portion of the JC dataset was 0.779 +/- 0.137, whereas that of the BraTS model was lower (0.717 +/- 0.207). The mean Dice coefficient score of the fine-tuning model was 0.769 +/- 0.138. There was a significant difference between the BraTS and JC models (p < 0.0001) and between the BraTS and fine-tuning models (p = 0.002), but no significant difference between the JC and fine-tuning models (p = 0.673). As our fine-tuning method requires fewer than 20 cases, it is useful even in facilities where the number of glioma cases is small.

Computational Complexity Reduction of Neural Networks of Brain Tumor Image Segmentation by Introducing Fermi–Dirac Correction Functions

  • Tai, Yen-Ling
  • Huang, Shin-Jhe
  • Chen, Chien-Chang
  • Lu, Henry Horng-Shing
Entropy 2021 Journal Article, cited 0 times
Website
Nowadays, deep learning methods with high structural complexity and flexibility inevitably lean on the computational capability of the hardware. A platform with high-performance GPUs and large amounts of memory can support neural networks with large numbers of layers and kernels. However, naively pursuing high-cost hardware would likely hinder the technical development of deep learning methods. In this article, we therefore establish a new preprocessing method to reduce the computational complexity of neural networks. Inspired by the band theory of solids in physics, we map the image space isomorphically onto a non-interacting physical system and then treat image voxels as particle-like clusters. We then reconstruct the Fermi-Dirac distribution to serve as a correction function for the normalization of the voxel intensity and as a filter of insignificant cluster components. The filtered clusters can then delineate the morphological heterogeneity of the image voxels. We used the BraTS 2019 datasets and the dimensional fusion U-net for the algorithmic validation, and the proposed Fermi-Dirac correction function exhibited performance comparable to other employed preprocessing methods. Compared with the conventional z-score normalization function and the Gamma correction function, the proposed algorithm can save at least 38% of the computational time cost on a low-cost hardware architecture. Even though the global histogram equalization correction function has the lowest computational time among the employed correction functions, the proposed Fermi-Dirac correction function exhibits better capabilities for image augmentation and segmentation.
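
The correction function itself is just the Fermi-Dirac distribution applied voxel-wise. A minimal sketch under stated assumptions: mu and kT play the roles of a threshold intensity and a transition width, and the mean/std defaults here are illustrative, not the paper's fitted parameters.

```python
import numpy as np

def fermi_dirac_correction(volume, mu=None, kT=None):
    """Voxel-wise Fermi-Dirac mapping 1 / (exp((I - mu) / kT) + 1)."""
    v = volume.astype(float)
    mu = v.mean() if mu is None else mu  # chemical-potential-like threshold
    kT = v.std() if kT is None else kT   # controls transition sharpness
    return 1.0 / (np.exp((v - mu) / kT) + 1.0)
```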

Enhancing Clinical Support for Breast Cancer with Deep Learning Models Using Synthetic Correlated Diffusion Imaging

  • Tai, Chi-en Amy
  • Gunraj, Hayden
  • Hodzic, Nedim
  • Flanagan, Nic
  • Sabri, Ali
  • Wong, Alexander
2024 Book Section, cited 0 times
Breast cancer is the second most common type of cancer in women in Canada and the United States, representing over 25% of all new female cancer cases. As such, there has been immense research and progress on improving screening and clinical support for breast cancer. In this paper, we investigate enhancing clinical support for breast cancer with deep learning models using a newly introduced magnetic resonance imaging (MRI) modality called synthetic correlated diffusion imaging (CDIs). More specifically, we leverage a volumetric convolutional neural network to learn volumetric deep radiomic features from a pre-treatment cohort and construct a predictor based on the learnt features for grade and post-treatment response prediction. As the first study to learn CDIs-centric radiomic sequences from a deep learning perspective for clinical decision support, we evaluated the proposed approach on the ACRIN-6698 study against predictors learnt using gold-standard imaging modalities. We find that the proposed approach achieves better performance for both grade and post-treatment response prediction and thus may be a useful tool to aid oncologists in improving treatment recommendations for patients. The approach of leveraging volumetric deep radiomic features for breast cancer can subsequently be extended to other applications of CDIs in the cancer domain to further improve clinical support.

Automatic estimation of the aortic lumen geometry by ellipse tracking

  • Tahoces, Pablo G
  • Alvarez, Luis
  • González, Esther
  • Cuenca, Carmelo
  • Trujillo, Agustín
  • Santana-Cedrés, Daniel
  • Esclarín, Julio
  • Gomez, Luis
  • Mazorra, Luis
  • Alemán-Flores, Miguel
International Journal of Computer Assisted Radiology and Surgery 2019 Journal Article, cited 0 times

Analyzing the Reliability of Different Machine Radiomics Features Considering Various Segmentation Approaches in Lung Cancer CT Images

  • Tahmooresi, Maryam
  • Abdel-Nasser, Mohamed
  • Puig, Domenec
2022 Book Section, cited 0 times
Cancer is generally defined as an uncontrolled increase in the number of cells in the body. These cells might form anywhere in the body and spread to other parts of it. Although the mortality rate of cancer is high, it is possible to decrease cancer cases by 30% to 50% through a healthy lifestyle and the avoidance of unhealthy habits. Imaging is one of the powerful technologies used for detecting and treating cancer at its early stages. Nowadays, scientists recognize that medical images hold more information than is used for diagnosis, which is the premise of the radiomics approach. Radiomics demonstrates that images comprise numerous quantitative features that are useful in predicting, detecting, and treating cancers in a personalized manner. While radiomics can extract numerous features, not all of them are useful, and the outcome of data analysis is highly dependent on the selected features. There are different ways of finding the most reliable features. One possible way is to select all extracted features, analyze them, and find the most reproducible and reliable ones. Several statistical metrics can be used to analyze the features. To discover and introduce the most accurate metrics, in this paper we investigate different statistical metrics used for measuring the stability and reproducibility of radiomic features.

Combo loss: Handling input and output imbalance in multi-organ segmentation

  • Taghanaki, S. A.
  • Zheng, Y.
  • Kevin Zhou, S.
  • Georgescu, B.
  • Sharma, P.
  • Xu, D.
  • Comaniciu, D.
  • Hamarneh, G.
Comput Med Imaging Graph 2019 Journal Article, cited 219 times
Website
Simultaneous segmentation of multiple organs from different medical imaging modalities is a crucial task as it can be utilized for computer-aided diagnosis, computer-assisted surgery, and therapy planning. Thanks to the recent advances in deep learning, several deep neural networks for medical image segmentation have been introduced successfully for this purpose. In this paper, we focus on learning a deep multi-organ segmentation network that labels voxels. In particular, we examine the critical choice of a loss function in order to handle the notorious imbalance problem that plagues both the input and output of a learning model. The input imbalance refers to the class-imbalance in the input training samples (i.e., small foreground objects embedded in an abundance of background voxels, as well as organs of varying sizes). The output imbalance refers to the imbalance between the false positives and false negatives of the inference model. In order to tackle both types of imbalance during training and inference, we introduce a new curriculum learning based loss function. Specifically, we leverage the Dice similarity coefficient to deter model parameters from being held at bad local minima and at the same time gradually learn better model parameters by penalizing for false positives/negatives using a cross-entropy term. We evaluated the proposed loss function on three datasets: whole body positron emission tomography (PET) scans with 5 target organs, magnetic resonance imaging (MRI) prostate scans, and ultrasound echocardiography images with a single target organ, i.e., the left ventricle. We show that a simple network architecture with the proposed integrative loss function can outperform state-of-the-art methods and that the results of the competing methods can be improved when our proposed loss is used.
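
A sketch of a Combo-style loss in this spirit: a beta-weighted cross-entropy term that trades false negatives against false positives, blended with a Dice term by alpha. The defaults below are illustrative, not the paper's tuned values.

```python
import torch

def combo_loss(logits, target, alpha=0.5, beta=0.5, eps=1e-7):
    """alpha balances CE vs. Dice; beta balances FN vs. FP penalties."""
    p = torch.sigmoid(logits).clamp(eps, 1 - eps)
    weighted_ce = -(beta * target * torch.log(p)
                    + (1 - beta) * (1 - target) * torch.log(1 - p)).mean()
    dice = (2 * (p * target).sum() + eps) / (p.sum() + target.sum() + eps)
    return alpha * weighted_ce - (1 - alpha) * dice  # minimizing raises Dice
```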

Segmentation-free direct tumor volume and metabolic activity estimation from PET scans

  • Taghanaki, S. A.
  • Duggan, N.
  • Ma, H.
  • Hou, X.
  • Celler, A.
  • Benard, F.
  • Hamarneh, G.
Comput Med Imaging Graph 2018 Journal Article, cited 2 times
Website
Tumor volume and metabolic activity are two robust imaging biomarkers for predicting early therapy response in (18)F-fluorodeoxyglucose (FDG) positron emission tomography (PET), a modality that images the distribution of radiotracers and thereby observes functional processes in the body. To date, estimation of these two biomarkers requires a lesion segmentation step. While segmentation methods requiring extensive user interaction have obvious limitations in terms of time and reproducibility, automatically estimating activity from segmentation, which involves integrating intensity values over the volume, is also suboptimal, since PET is an inherently noisy modality. Although many semi-automatic segmentation based methods have been developed, in this paper we introduce a method which completely eliminates the segmentation step and directly estimates the volume and activity of the lesions. We trained two parallel ensemble models using locally extracted 3D patches from phantom images to estimate the activity and volume, which are derivatives of other important quantification metrics such as standardized uptake value (SUV) and total lesion glycolysis (TLG). For validation, we used 54 clinical images from the QIN Head and Neck collection on The Cancer Imaging Archive, as well as a set of 55 PET scans of the Elliptical Lung-Spine Body Phantom with different levels of noise, four different reconstruction methods, and three different background activities, namely air, water, and hot background. In the validation on phantom images, we achieved relative absolute errors (RAE) of 5.11%+/-3.5% and 5.7%+/-5.25% for volume and activity estimation, respectively, which represents improvements of over 20% and 6%, respectively, compared with the best competing methods. From the validation performed using clinical images, we found that the proposed method is capable of obtaining almost the same level of agreement with a group of trained experts as a single trained expert is, indicating that the method has the potential to be a useful tool in clinical practice.

Machine learning with textural analysis of longitudinal multiparametric MRI and molecular subtypes accurately predicts pathologic complete response in patients with invasive breast cancer

  • Syed, A.
  • Adam, R.
  • Ren, T.
  • Lu, J.
  • Maldjian, T.
  • Duong, T. Q.
PLoS One 2023 Journal Article, cited 9 times
Website
PURPOSE: To predict pathological complete response (pCR) after neoadjuvant chemotherapy using extreme gradient boosting (XGBoost) with MRI and non-imaging data at multiple treatment timepoints. MATERIAL AND METHODS: This retrospective study included breast cancer patients (n = 117) who underwent neoadjuvant chemotherapy. Data types used included tumor ADC values, diffusion-weighted and dynamic-contrast-enhanced MRI at three treatment timepoints, and patient demographics and tumor data. GLCM textural analysis was performed on the MRI data. An extreme gradient boosting machine learning algorithm was used to predict pCR. Prediction performance was evaluated using the area under the curve (AUC) of the receiver operating characteristic curve along with precision and recall. RESULTS: Prediction using texture features of DWI and DCE images at multiple treatment time points (AUC = 0.871; 95% CI: 0.768, 0.974; p<0.001 and AUC = 0.903; 95% CI: 0.854, 0.952; p<0.001, respectively) outperformed that using mean tumor ADC (AUC = 0.850; 95% CI: 0.764, 0.936; p<0.001). The AUC using all MRI data was 0.933 (95% CI: 0.836, 1.03; p<0.001). The AUC using non-MRI data was 0.919 (95% CI: 0.848, 0.99; p<0.001). The highest AUC of 0.951 (95% CI: 0.909, 0.993; p<0.001) was achieved with all MRI and all non-MRI data at all time points as inputs. CONCLUSION: Using XGBoost on extracted GLCM features and non-imaging data accurately predicts pCR. This early prediction of response can minimize exposure to toxic chemotherapy, allowing regimen modification mid-treatment and ultimately achieving better outcomes.
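
The prediction step can be sketched with the xgboost package; the feature matrix, labels and hyperparameters below are placeholders, not the study's actual data or tuned settings.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(117, 50))    # GLCM texture + clinical features
y = rng.integers(0, 2, size=117)  # pCR (1) vs non-pCR (0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC = {auc:.3f}")
```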

A Virtual Spine Construction Algorithm for a Patient-Specific Pedicle Screw Surgical Simulators

  • Syamlan, Adlina
  • Mampaey, Tuur
  • Fathurachman,
  • Denis, Kathleen
  • Poorten, Emmanuel Vander
  • Tjahjowidodo, Tegoeh
2022 Conference Paper, cited 0 times
This paper presents an underlying study of virtual spine construction as part of a surgical simulator. The goal is to create a patient-specific segmentation and rendering algorithm covering two aspects, namely geometric modelling and material property estimation. The spines are isolated from the CT scan data using an in-house segmentation algorithm based on the U-Net architecture, and are then rendered using the marching cubes algorithm. Two rendering parameters (step size and voxel size) are tuned to give the best visual result. The material properties are extracted from the grayscale values of the CT scan. The developed algorithm is benchmarked against an open-source segmentation software.
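
The rendering stage can be sketched with scikit-image's marching cubes, which exposes the two tuned parameters mentioned above; the mask and spacing values here are illustrative assumptions.

```python
import numpy as np
from skimage import measure

mask = np.zeros((64, 64, 64), dtype=np.uint8)  # stand-in segmentation output
mask[20:44, 20:44, 20:44] = 1

verts, faces, normals, values = measure.marching_cubes(
    mask.astype(float),
    level=0.5,
    spacing=(1.0, 1.0, 2.5),  # voxel size, e.g. from the CT header (assumed)
    step_size=2,              # larger step -> coarser, faster mesh
)
print(verts.shape, faces.shape)
```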

Advancing Semantic Interoperability of Image Annotations: Automated Conversion of Non-standard Image Annotations in a Commercial PACS to the Annotation and Image Markup

  • Swinburne, Nathaniel C
  • Mendelson, David
  • Rubin, Daniel L
J Digit Imaging 2019 Journal Article, cited 0 times
Website
Sharing radiologic image annotations among multiple institutions is important in many clinical scenarios; however, interoperability is prevented because different vendors’ PACS store annotations in non-standardized formats that lack semantic interoperability. Our goal was to develop software to automate the conversion of image annotations in a commercial PACS to the Annotation and Image Markup (AIM) standardized format and demonstrate the utility of this conversion for automated matching of lesion measurements across time points for cancer lesion tracking. We created a software module in Java to parse the DICOM presentation state (DICOM-PS) objects (that contain the image annotations) for imaging studies exported from a commercial PACS (GE Centricity v3.x). Our software identifies line annotations encoded within the DICOM-PS objects and exports the annotations in the AIM format. A separate Python script processes the AIM annotation files to match line measurements (on lesions) across time points by tracking the 3D coordinates of annotated lesions. To validate the interoperability of our approach, we exported annotations from Centricity PACS into ePAD (http://epad.stanford.edu) (Rubin et al., Transl Oncol 7(1):23–35, 2014), a freely available AIM-compliant workstation, and the lesion measurement annotations were correctly linked by ePAD across sequential imaging studies. As quantitative imaging becomes more prevalent in radiology, interoperability of image annotations gains increasing importance. Our work demonstrates that image annotations in a vendor system lacking standard semantics can be automatically converted to a standardized metadata format such as AIM, enabling interoperability and potentially facilitating large-scale analysis of image annotations and the generation of high-quality labels for deep learning initiatives. This effort could be extended for use with other vendors’ PACS.
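
The cross-timepoint matching enabled by the standardized 3D coordinates can be reduced to a nearest-neighbour search; the coordinates and the 10 mm tolerance below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.spatial.distance import cdist

baseline = np.array([[12.0, 40.2, -55.1],
                     [80.3, 22.9, -10.4]])  # lesion midpoints, mm
followup = np.array([[13.1, 39.5, -54.0],
                     [15.0, 90.0, 30.0]])

d = cdist(followup, baseline)               # pairwise 3D distances
nearest = d.argmin(axis=1)
matched = [(i, int(nearest[i])) for i in range(len(followup))
           if d[i, nearest[i]] < 10.0]      # tolerance in mm
print(matched)  # [(0, 0)] -> follow-up lesion 0 tracks baseline lesion 0
```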

Image Correction in Emission Tomography Using Deep Convolution Neural Network

  • Suzuki, T
  • Kudo, H
2019 Conference Proceedings, cited 0 times
We propose a new approach using a Deep Convolutional Neural Network (DCNN) to correct for image degradations due to statistical noise and photon attenuation in Emission Tomography (ET). The proposed approach first reconstructs an image with standard Filtered Backprojection (FBP) without correcting for the degradations, and then inputs the degraded image into the DCNN to obtain an improved image. We consider two different scenarios. The first scenario inputs only an ET image into the DCNN, whereas the second scenario inputs a pair of degraded ET image and CT/MRI image to improve the accuracy of the correction. The simulation results demonstrate that both scenarios can improve image quality compared to FBP without correction and, in particular, that the accuracy of the second scenario is comparable to that of standard iterative reconstructions such as the Maximum Likelihood Expectation Maximization (MLEM) and Ordered-Subsets EM (OSEM) methods. The proposed method is able to output an image in a very short time, because it does not rely on iterative computations.

Development and Validation of a Modified Three-Dimensional U-Net Deep-Learning Model for Automated Detection of Lung Nodules on Chest CT Images From the Lung Image Database Consortium and Japanese Datasets

  • Suzuki, K.
  • Otsuka, Y.
  • Nomura, Y.
  • Kumamaru, K. K.
  • Kuwatsuru, R.
  • Aoki, S.
Acad Radiol 2020 Journal Article, cited 0 times
Website
RATIONALE AND OBJECTIVES: A more accurate lung nodule detection algorithm is needed. We developed a modified three-dimensional (3D) U-net deep-learning model for the automated detection of lung nodules on chest CT images. The purpose of this study was to evaluate the accuracy of the developed modified 3D U-net deep-learning model. MATERIALS AND METHODS: In this Health Insurance Portability and Accountability Act-compliant, Institutional Review Board-approved retrospective study, the 3D U-net based deep-learning model was trained using the Lung Image Database Consortium and Image Database Resource Initiative dataset. For internal model validation, we used 89 chest CT scans that were not used for model training. For external model validation, we used 450 chest CT scans taken at an urban university hospital in Japan. Each case included at least one nodule of >5 mm identified by an experienced radiologist. We evaluated model accuracy using the competition performance metric (CPM) (average sensitivity at 1/8, 1/4, 1/2, 1, 2, 4, and 8 false-positives per scan). The 95% confidence interval (CI) was computed by bootstrapping 1000 times. RESULTS: In the internal validation, the CPM was 94.7% (95% CI: 89.1%-98.6%). In the external validation, the CPM was 83.3% (95% CI: 79.4%-86.1%). CONCLUSION: The modified 3D U-net deep-learning model showed high performance in both internal and external validation.
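
The CPM itself is straightforward to compute from a FROC curve: the mean sensitivity at the seven predefined false-positive rates. A minimal sketch with linear interpolation, on invented FROC samples:

```python
import numpy as np

def cpm(fp_per_scan, sensitivity):
    """fp_per_scan must be increasing; sensitivities are interpolated."""
    targets = [1/8, 1/4, 1/2, 1, 2, 4, 8]
    return float(np.mean(np.interp(targets, fp_per_scan, sensitivity)))

fp_per_scan = np.array([0.1, 0.5, 1.0, 2.0, 4.0, 8.0])  # illustrative FROC
sensitivity = np.array([0.70, 0.82, 0.88, 0.92, 0.95, 0.97])
print(f"CPM = {cpm(fp_per_scan, sensitivity):.3f}")
```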

Classification of Benign and Malignant Tumors of Lung Using Bag of Features

  • Suzan, A Melody
  • Prathibha, G
Journal of Scientific & Engineering Research 2017 Journal Article, cited 0 times
Website
This paper presents a novel approach for feature extraction and classification of lung cancer, i.e., benign or malignant. Classification of lung cancer is based on a codebook generated using the bag-of-features algorithm. In this paper, 300 regions of interest (ROIs) from lung cancer images in The Cancer Imaging Archive (TCIA) collection sponsored by SPIE are used. In this approach, the Scale-Invariant Feature Transform (SIFT) is used for feature extraction, and the resulting descriptors are quantized with a bag of features into a predefined codebook. This codebook is given as input to a KNN classifier. The overall performance of the system in classifying lung tumors is evaluated using the receiver operating characteristic (ROC) curve, with an area under the curve (AUC) of Az=0.95.
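
A sketch of the bag-of-features pipeline with OpenCV and scikit-learn; the codebook size and neighbour count are assumptions, and the helper assumes SIFT finds keypoints in each ROI.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

sift = cv2.SIFT_create()

def bof_histogram(roi_u8, codebook):
    """Quantize a ROI's SIFT descriptors against a k-means codebook."""
    _, desc = sift.detectAndCompute(roi_u8, None)
    words = codebook.predict(desc.astype(np.float32))
    return np.bincount(words, minlength=codebook.n_clusters)

# all_desc = np.vstack([sift.detectAndCompute(r, None)[1] for r in rois])
# codebook = KMeans(n_clusters=100, random_state=0).fit(all_desc)
# X = np.vstack([bof_histogram(r, codebook) for r in rois])
# KNeighborsClassifier(n_neighbors=5).fit(X, labels)
```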

Personalized Medicine, Biomarkers of Risk and Breast MRI

  • Sutton, Elizabeth J
  • Purvis, Nina
  • Pinker-Domenig, Katja
  • Morris, Elizabeth A
2017 Book Section, cited 0 times
Website
Breast cancer is a heterogeneous disease with inter- and intra-tumor genetic variation impacting predictive and prognostic risk. This chapter discusses the use of breast MRI, the most sensitive imaging modality for high-risk screening and pre-operative assessment, to predict breast cancer risk, to define extent of disease and to monitor neoadjuvant chemotherapeutic response at the level of the individual patient. In the current clinical landscape, immunohistochemical surrogates are used to define molecular subtypes and personalized cancer treatment and care. Radiogenomics involves the correlation of genomic information with imaging features. Feature extraction from breast MRI is being pursued on a large scale as a potential non-invasive means of defining molecular subtypes and/or developing phenotypic biomarkers that can be clinically analogous to commercially available genomic assays. Neoadjuvant chemotherapy, treatment administered in operable cancers before surgery, is increasingly used, allowing for breast conservation in women who would traditionally require mastectomy. As breast cancer genetic molecular subtypes are predictive of recurrence-free and overall survival, treatment based on breast cancer molecular subtype and breast MRI is critical in evaluating response, through improvement in its sensitivity for pathologic complete response. Breast MRI in the neoadjuvant cohort has provided biomarkers of response and insight into the biologic basis of disease. MRI is at the forefront of technology, providing prognostic indicators as well as a crucial tool in personalizing medicine.

Breast MRI radiomics: comparison of computer- and human-extracted imaging phenotypes

  • Sutton, Elizabeth J
  • Huang, Erich P
  • Drukker, Karen
  • Burnside, Elizabeth S
  • Li, Hui
  • Net, Jose M
  • Rao, Arvind
  • Whitman, Gary J
  • Zuley, Margarita
  • Ganott, Marie
  • Bonaccio, Ermelinda
  • Giger, Maryellen L
  • Morris, Elizabeth A
European Radiology Experimental 2017 Journal Article, cited 17 times
Website
Background: In this study, we sought to investigate if computer-extracted magnetic resonance imaging (MRI) phenotypes of breast cancer could replicate human-extracted size and Breast Imaging-Reporting and Data System (BI-RADS) imaging phenotypes using MRI data from The Cancer Genome Atlas (TCGA) project of the National Cancer Institute. Methods: Our retrospective interpretation study involved analysis of Health Insurance Portability and Accountability Act-compliant breast MRI data from The Cancer Imaging Archive, an open-source database from the TCGA project. This study was exempt from institutional review board approval at Memorial Sloan Kettering Cancer Center and the need for informed consent was waived. Ninety-one pre-operative breast MRIs with verified invasive breast cancers were analysed. Three fellowship-trained breast radiologists evaluated the index cancer in each case according to size and the BI-RADS lexicon for shape, margin, and enhancement (human-extracted image phenotypes [HEIP]). Human inter-observer agreement was analysed by the intra-class correlation coefficient (ICC) for size and Krippendorff's alpha for other measurements. Quantitative MRI radiomics of computerised three-dimensional segmentations of each cancer generated computer-extracted image phenotypes (CEIP). Spearman's rank correlation coefficients were used to compare HEIP and CEIP. Results: Inter-observer agreement for HEIP varied, with the highest agreement seen for size (ICC 0.679) and shape (ICC 0.527). The computer-extracted maximum linear size replicated the human measurement with p < 10^-12. CEIP of shape, specifically sphericity and irregularity, replicated HEIP with both p values < 0.001. CEIP did not demonstrate agreement with HEIP of tumour margin or internal enhancement. Conclusions: Quantitative radiomics of breast cancer may replicate human-extracted tumour size and BI-RADS imaging phenotypes, thus enabling precision medicine.
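
The core agreement test, comparing a computer-extracted phenotype against its human-extracted counterpart, is a Spearman rank correlation. A minimal sketch on invented measurements:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
heip_size = rng.uniform(5, 60, size=91)            # radiologist size, mm
ceip_size = heip_size + rng.normal(0, 3, size=91)  # automated measurement

rho, p = spearmanr(heip_size, ceip_size)
print(f"rho = {rho:.3f}, p = {p:.2e}")
```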

Breast cancer molecular subtype classifier that incorporates MRI features

  • Sutton, Elizabeth J
  • Dashevsky, Brittany Z
  • Oh, Jung Hun
  • Veeraraghavan, Harini
  • Apte, Aditya P
  • Thakur, Sunitha B
  • Morris, Elizabeth A
  • Deasy, Joseph O
Journal of Magnetic Resonance Imaging 2016 Journal Article, cited 34 times
Website
Purpose: To use features extracted from magnetic resonance (MR) images and a machine-learning method to assist in differentiating breast cancer molecular subtypes. Materials and Methods: This retrospective Health Insurance Portability and Accountability Act (HIPAA)-compliant study received Institutional Review Board (IRB) approval. We identified 178 breast cancer patients between 2006-2011 with: 1) ERPR+ (n=95, 53.4%), ERPR-/HER2+ (n=35, 19.6%), or triple negative (TN, n=48, 27.0%) invasive ductal carcinoma (IDC), and 2) preoperative breast MRI at 1.5T or 3.0T. Shape, texture, and histogram-based features were extracted from each tumor contoured on pre- and three postcontrast MR images using in-house software. Clinical and pathologic features were also collected. Machine-learning-based (support vector machines) models were used to identify significant imaging features and to build models that predict IDC subtype. Leave-one-out cross-validation (LOOCV) was used to avoid model overfitting. Statistical significance was determined using the Kruskal-Wallis test. Results: Each support vector machine fit in the LOOCV process generated a model with varying features. Eleven out of the top 20 ranked features were significantly different between IDC subtypes with P < 0.05. When the top nine pathologic and imaging features were incorporated, the predictive model distinguished IDC subtypes with an overall accuracy on LOOCV of 83.4%. The combined pathologic and imaging model's accuracy for each subtype was 89.2% (ERPR+), 63.6% (ERPR-/HER2+), and 82.5% (TN). When only the top nine imaging features were incorporated, the predictive model distinguished IDC subtypes with an overall accuracy on LOOCV of 71.2%. The combined pathologic and imaging model's accuracy for each subtype was 69.9% (ERPR+), 62.9% (ERPR-/HER2+), and 81.0% (TN). Conclusion: We developed a machine-learning-based predictive model using features extracted from MRI that can distinguish IDC subtypes with significant predictive power.

Overall Survival Prediction for Glioblastoma on Pre-treatment MRI Using Robust Radiomics and Priors

  • Suter, Yannick
  • Knecht, Urspeter
  • Wiest, Roland
  • Reyes, Mauricio
2021 Book Section, cited 0 times
Patients with Glioblastoma multiforme (GBM) have a very low overall survival (OS) time, due to the rapid growth and invasiveness of this brain tumor. As a contribution to the overall survival prediction task within the Brain Tumor Segmentation Challenge (BraTS), we classify GBM patients into overall survival classes based on information derived from pre-treatment Magnetic Resonance Imaging (MRI). The top-ranked methods from past years almost exclusively used shape and position features. This is a remarkable contrast to the current advances in GBM radiomics, which show a benefit of intensity-based features. This discrepancy may be caused by the inconsistent acquisition parameters in a multi-center setting. In this contribution, we test whether normalizing the images based on the healthy tissue intensities enables the robust use of intensity features in this challenge. Based on these normalized images, we test the performance of 176 combinations of feature selection techniques and classifiers. Additionally, we test the incorporation of a sequence and robustness prior to limit the performance drop when models are applied to unseen data. The most robust performance on the training data (accuracy: 0.52±0.09) was achieved with random forest regression, but this accuracy could not be maintained on the test set.

Radiomics for glioblastoma survival analysis in pre-operative MRI: exploring feature robustness, class boundaries, and machine learning techniques

  • Suter, Y.
  • Knecht, U.
  • Alao, M.
  • Valenzuela, W.
  • Hewer, E.
  • Schucht, P.
  • Wiest, R.
  • Reyes, M.
Cancer Imaging 2020 Journal Article, cited 0 times
Website
BACKGROUND: This study aims to identify robust radiomic features for Magnetic Resonance Imaging (MRI), assess feature selection and machine learning methods for overall survival classification of Glioblastoma multiforme patients, and to robustify models trained on single-center data when applied to multi-center data. METHODS: Tumor regions were automatically segmented on MRI data, and 8327 radiomic features extracted from these regions. Single-center data was perturbed to assess radiomic feature robustness, with over 16 million tests of typical perturbations. Robust features were selected based on the Intraclass Correlation Coefficient to measure agreement across perturbations. Feature selectors and machine learning methods were compared to classify overall survival. Models trained on single-center data (63 patients) were tested on multi-center data (76 patients). Priors using feature robustness and clinical knowledge were evaluated. RESULTS: We observed a very large performance drop when applying models trained on single-center data to unseen multi-center data, e.g. a decrease of the area under the receiver operating characteristic curve (AUC) of 0.56 for the overall survival classification boundary at 1 year. By using robust features alongside priors for two overall survival classes, the AUC drop could be reduced by 21.2%. In contrast, sensitivity was 12.19% lower when applying a prior. CONCLUSIONS: Our experiments show that it is possible to attain improved levels of robustness and accuracy when models need to be applied to unseen multi-center data. The performance on multi-center data of models trained on single-center data can be increased by using robust features and introducing prior knowledge. For successful model robustification, tailoring perturbations for robustness testing to the target dataset is key.

ROI-based feature learning for efficient true positive prediction using convolutional neural network for lung cancer diagnosis

  • Suresh, Supriya
  • Mohan, Subaji
Neural Computing and Applications 2020 Journal Article, cited 0 times

Brain Tumour Segmentation Using a Triplanar Ensemble of U-Nets on MR Images

  • Sundaresan, Vaanathi
  • Griffanti, Ludovica
  • Jenkinson, Mark
2021 Book Section, cited 0 times
Gliomas appear with wide variation in their characteristics both in terms of their appearance and location on brain MR images, which makes robust tumour segmentation highly challenging, and leads to high inter-rater variability even in manual segmentations. In this work, we propose a triplanar ensemble network, with an independent tumour core prediction module, for accurate segmentation of these tumours and their sub-regions. On evaluating our method on the MICCAI Brain Tumor Segmentation (BraTS) challenge validation dataset, for tumour sub-regions, we achieved a Dice similarity coefficient of 0.77 for both enhancing tumour (ET) and tumour core (TC). In the case of the whole tumour (WT) region, we achieved a Dice value of 0.89, which is on par with the top-ranking methods from BraTS’17-19. Our method achieved an evaluation score that was the equal 5th highest value (with our method ranking in 10th place) in the BraTS’20 challenge, with mean Dice values of 0.81, 0.89 and 0.84 on ET, WT and TC regions respectively on the BraTS’20 unseen test dataset.

Self-supervised pre-training of an attention-based model for 3D medical image segmentation

  • Sund Aillet, Albert
2023 Thesis, cited 0 times
Website
Accurate segmentation of anatomical structures is crucial for radiation therapy in cancer treatment. Deep learning methods have been demonstrated effective for segmentation of 3D medical images, establishing the current standard. However, they require large amounts of labelled data and suffer from reduced performance on domain shift. A possible solution to these challenges is self-supervised learning, which uses unlabelled data to learn representations and could thus reduce the need for labelled data and produce more robust segmentation models. This thesis investigates the impact of self-supervised pre-training on an attention-based model for 3D medical image segmentation, focusing on single-organ semantic segmentation and exploring whether self-supervised pre-training enhances segmentation performance on CT scans with and without domain shift. The Swin UNETR is chosen as the deep learning model since it has been shown to be a successful attention-based architecture for semantic segmentation. During the pre-training stage, the contracting path is trained on three self-supervised pretext tasks using a large dataset of 5,465 unlabelled CT scans. The model is then fine-tuned using labelled datasets with 97, 142 and 288 segmentations of the stomach, the sternum and the pancreas. The results indicate that a substantial performance gain from self-supervised pre-training is not evident. Parameter freezing of the contracting path suggests that the representational power of the contracting path is not as critical for model performance as expected. Decreasing the amount of supervised training data shows that while pre-training improves model performance when the amount of training data is restricted, the improvements strongly decrease when more supervised training data is used.

AAWS-Net: Anatomy-aware weakly-supervised learning network for breast mass segmentation

  • Sun, Y.
  • Ji, Y.
PLoS One 2021 Journal Article, cited 0 times
Website
Accurate segmentation of breast masses is an essential step in computer aided diagnosis of breast cancer. The scarcity of annotated training data greatly hinders a model's generalization ability, especially for deep learning based methods. However, high-quality image-level annotations are time-consuming and cumbersome in medical image analysis scenarios. In addition, a large amount of weak annotations, which comprise common anatomy features, is under-utilized. To this end, inspired by teacher-student networks, we propose an Anatomy-Aware Weakly-Supervised learning Network (AAWS-Net) for extracting useful information from mammograms with weak annotations for efficient and accurate breast mass segmentation. Specifically, we adopt a weakly-supervised learning strategy in the Teacher to extract anatomy structure from mammograms with weak annotations by reconstructing the original image. Besides, knowledge distillation is used to suggest morphological differences between benign and malignant masses. Moreover, the prior knowledge learned from the Teacher is introduced to the Student in an end-to-end way, which improves the ability of the student network to locate and segment masses. Experiments on CBIS-DDSM have shown that our method yields promising performance compared with state-of-the-art alternative models for breast mass segmentation in terms of segmentation accuracy and IoU.

Effect of machine learning methods on predicting NSCLC overall survival time based on Radiomics analysis

  • Sun, Wenzheng
  • Jiang, Mingyan
  • Dang, Jun
  • Chang, Panchun
  • Yin, Fang-Fang
Radiation Oncology 2018 Journal Article, cited 0 times
Website

A radiomics approach to assess tumour-infiltrating CD8 cells and response to anti-PD-1 or anti-PD-L1 immunotherapy: an imaging biomarker, retrospective multicohort study

  • Sun, Roger
  • Limkin, Elaine Johanna
  • Vakalopoulou, Maria
  • Dercle, Laurent
  • Champiat, Stéphane
  • Han, Shan Rong
  • Verlingue, Loïc
  • Brandao, David
  • Lancia, Andrea
  • Ammari, Samy
The Lancet Oncology 2018 Journal Article, cited 4 times
Website

Segmentation of the Multimodal Brain Tumor Images Used Res-U-Net

  • Sun, Jindong
  • Peng, Yanjun
  • Li, Dapeng
  • Guo, Yanfei
2021 Book Section, cited 0 times
Gliomas are the most common brain tumors and have a high mortality. Magnetic resonance imaging (MRI) is useful for assessing gliomas, and segmentation of multimodal brain tissues in 3D medical images is of great significance for brain diagnosis. Because manual segmentation is time-consuming, an automated and accurate segmentation method is required; segmenting multimodal brain images accurately is still a challenging task. To address this problem, we employ residual neural blocks and a U-Net architecture to build a novel network. We evaluated the performance of different primary residual neural blocks in building the U-Net. Our proposed method was evaluated on the validation set of BraTS 2020, where our model achieved effective segmentation of the complete, core and enhancing tumor regions with Dice Similarity Coefficient (DSC) values of 0.89, 0.78 and 0.72, respectively. On the testing set, our model obtained DSC results of 0.87, 0.82 and 0.80. The residual convolutional block is especially useful for improving performance when building the model. Our proposed method is inherently general and is a powerful tool for studies of medical images of brain tumors.
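For orientation, a generic pre-activation 3D residual block of the kind commonly embedded in U-Net encoders and decoders is sketched below in PyTorch; the paper's exact block configuration is not reproduced here.

```python
import torch.nn as nn

class ResBlock3D(nn.Module):
    """Generic pre-activation 3D residual block (illustrative only)."""

    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.InstanceNorm3d(channels), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(channels), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # Identity shortcut: the output keeps the input's shape
        return x + self.body(x)
```

Comparing plain double-convolution blocks against such residual variants at each U-Net level is the kind of design study the abstract describes.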

Machine learning to predict lung nodule biopsy method using CT image features: A pilot study

  • Sumathipala, Yohan
  • Shafiq, Majid
  • Bongen, Erika
  • Brinton, Connor
  • Paik, David
Computerized Medical Imaging and Graphics 2019 Journal Article, cited 0 times
Website

MDFU-Net: Multiscale dilated features up-sampling network for accurate segmentation of tumor from heterogeneous brain data

  • Sultan, Haseeb
  • Owais, Muhammad
  • Nam, Se Hyun
  • Haider, Adnan
  • Akram, Rehan
  • Usman, Muhammad
  • Park, Kang Ryoung
Journal of King Saud University - Computer and Information Sciences 2023 Journal Article, cited 0 times
Website
Existing methods for accurate brain tumor (BT) segmentation based on homogeneous datasets show significant performance degradation in actual clinical applications and lack heterogeneous data analysis. To address these issues, we designed a deep-learning-based multiscale dilated features up-sampling network (MDFU-Net) for accurate BT segmentation from heterogeneous brain data. Our method primarily uses the strength of multiscale dilated features (MDF) inside the encoder module to improve segmentation performance. For the final segmentation, a simple yet effective decoder module is designed to process the dense spatial MDF. For the experiments, our MDFU-Net was trained on one dataset and tested on another dataset in a heterogeneous environment, showing quantitative results of a Dice similarity coefficient (DC) of 62.66%, intersection over union (IoU) of 56.96%, specificity (Spe) of 99.29%, and sensitivity (Sen) of 51.98%, which were higher than those of the state-of-the-art methods. There are several reasons for the lower values of the evaluation metrics on the heterogeneous dataset, including the change in characteristics of different MRI modalities, the presence of minor lesions, and a highly imbalanced dataset. Moreover, the experimental results for a homogeneous dataset showed that our MDFU-Net achieved a DC of 82.96%, IoU of 74.94%, Spe of 99.89%, and Sen of 68.05%, which were also higher than those of the state-of-the-art methods. Our system, which is based on heterogeneous as well as homogeneous brain data, can be advantageous to radiologists and medical experts.
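As a hedged sketch of the general idea of multiscale dilated feature extraction (not the MDFU-Net internals, which the paper defines), parallel dilated convolutions fused by a 1x1 convolution can be written in PyTorch as:

```python
import torch
import torch.nn as nn

class MultiScaleDilated(nn.Module):
    """Parallel 3x3 convolutions at several dilation rates, fused by a
    1x1 convolution; a generic multiscale dilated feature block."""

    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        # padding == dilation keeps the spatial size for 3x3 kernels
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d)
            for d in dilations
        )
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
```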

Automatic Lung Segmentation and Lung Nodule Type Identification over LIDC-IDRI dataset

  • Suji, R. Jenkin
  • Bhadauria, Sarita Singh
Indian Journal of Computer Science and Engineering 2021 Journal Article, cited 0 times
Website
Accurate segmentation of lung parenchyma is one of the basic steps for lung nodule detection and diagnosis. Using thresholding and morphology based methods for lung parenchyma segmentation is challenging due to the homogeneous intensities present in lung images. Further, datasets typically do not contain explicit labels of their nodule types, and there is little literature on how to typify nodules into different nodule types, even though identifying nodule types helps to understand and explain the progress and shortcomings of various steps in the computer aided diagnosis pipeline. Hence, this work also presents methods for identification of nodule types: juxta-vascular, juxta-pleural and isolated. This work presents thresholding and morphological operation based methods for both lung segmentation and lung nodule type identification. Thresholding and morphology based methods have been chosen over sophisticated approaches for reasons of simplicity and rapidity. Qualitative validation of the proposed lung segmentation method is provided as step-by-step output on a scan from the LIDC-IDRI dataset, and the lung nodule type identification method is validated with output volume images. Further, the lung segmentation method is validated by percentage of overlap, and the results on nodule type identification for various lung segmentation outputs have been analysed. The provided analysis offers a glimpse into the ability to analyse lung segmentation algorithms and nodule detection and segmentation algorithms in terms of nodule types, and motivates the need to also provide nodule-type ground-truth information for developing better nodule type classification/identification algorithms. Keywords: Lung Segmentation; Juxta-vascular nodules; Juxta-pleural nodules; Thresholding; Morphological operations.

Multi-stage AI analysis system to support prostate cancer diagnostic imaging

  • Suchanek, J
  • Rix, AW
  • Mehan, AH
  • Doran, CJL
  • Padhani, AR
  • Kastner, C
  • Barrett, T
  • Sala, E
2020 Conference Paper, cited 0 times
Website
An artificial intelligence (AI) system was developed to support interpretation of pre-biopsy prostate multiparametric MRI (mpMRI), aiming to improve patient selection for biopsy, biopsy target identification, and productivity of segmentation and reporting in the prostate cancer diagnostic pathway. For segmentation, the system achieved a 92% average Dice score for prostate gland segmentation on held-out test cases from the PROMISE12 dataset (10 patients). For biopsy assessment, the system identified patients with Gleason ≥3+4 clinically significant prostate cancer (csPCa) with sensitivity 93% (95% CI 82-100%), specificity 76% (64-87%), NPV 95% (88-100%), and AUC 0.92 (0.84-0.98), using biparametric MRI (bpMRI) data from the combined PROSTATEx development validation and test sets (80 patients). Performance on the held-out PROSTATEx test set (40 patients) was higher. Radiologists in major studies achieved 93% per-patient sensitivity at specificities from 18% to 73%. Equivalent sensitivity is reported for comparable AI/CAD systems at specificities from 6% to 42%. For biopsy targeting, the system identified lesions containing csPCa in the PROSTATEx blinded test set (208 lesions, 140 patients) with AUC 0.84/0.85 with bpMRI/mpMRI data, respectively. The AI system shows promising performance compared to radiologists and the literature. Further development and evaluation with larger, multi-centre datasets are now planned to support regulatory approval of the system.

Context Dependent Fuzzy Associated Statistical Model for Intensity Inhomogeneity Correction from Magnetic Resonance Images

  • Subudhi, BN
  • Veerakumar, T
  • Esakkirajan, S
  • Ghosh, A
IEEE Journal of Translational Engineering in Health and Medicine 2019 Journal Article, cited 0 times
Website
In this article, a novel context-dependent fuzzy-set-associated statistical-model-based intensity inhomogeneity correction technique for magnetic resonance images (MRI) is proposed. The observed MRI is considered to be affected by intensity inhomogeneity, which is assumed to be a multiplicative quantity. In the proposed scheme, intensity inhomogeneity correction and MRI segmentation are treated as a combined task. The maximum a posteriori probability (MAP) estimation principle is explored to solve this problem. A fuzzy-set-associated Gibbs' Markov random field (MRF) is considered to model the spatio-contextual information of an MRI. It is observed that the MAP estimate of the MRF model does not yield good results with any local searching strategy, as it gets trapped in a local optimum. Hence, we have exploited a variable neighborhood searching (VNS) based iterative global convergence criterion for MRF-MAP estimation. The effectiveness of the proposed scheme is established by testing it on different MRIs. Three performance evaluation measures are considered to evaluate the performance of the proposed scheme against existing state-of-the-art techniques. Simulation results establish the effectiveness of the proposed technique.

Breast cancer detection from mammograms using artificial intelligence

  • Subasi, Abdulhamit
  • Kandpal, Aayush Dinesh
  • Raj, Kolla Anant
  • Bagci, Ulas
2023 Book Section, cited 0 times
Breast cancer is one of the fastest-growing forms of cancer in the world today. It is primarily found in women, and its incidence has risen significantly in the last few years. The key to tackling the rising number of breast cancer cases is early detection, and many studies have shown that early detection significantly reduces the mortality rate of those affected. Machine learning and deep learning techniques have therefore been adopted to help detect breast cancer at an early stage. Deep learning models such as convolutional neural networks (CNNs) are explicitly suited to image data and overcome the drawbacks of machine learning models. To improve upon conventional approaches, we apply deep CNNs for automatic feature extraction and classifier building. In this chapter, we thoroughly demonstrate the use of deep learning models through transfer learning, deep feature extraction, and machine learning models. Computer-aided detection or diagnosis systems have recently been developed to help health-care professionals increase diagnosis accuracy. This chapter presents early breast cancer detection from mammograms using artificial intelligence (AI). Various models are presented, along with an in-depth comparative analysis of different state-of-the-art architectures, custom CNN networks, and classifiers trained on features extracted from pretrained networks. Our findings indicate that deep learning models can achieve training accuracies of up to 99%, with validation and test accuracies of up to 96%. We conclude by suggesting various improvements that could be made to existing architectures and how AI techniques could further improve and assist in the early detection of breast cancer.

Attention U-Net with Dimension-Hybridized Fast Data Density Functional Theory for Automatic Brain Tumor Image Segmentation

  • Su, Zi-Jun
  • Chang, Tang-Chen
  • Tai, Yen-Ling
  • Chang, Shu-Jung
  • Chen, Chien-Chang
2021 Book Section, cited 0 times
In this article, we propose a hybridized method for brain tumor image segmentation that fuses topological heterogeneities of images with the attention mechanism of neural networks. The three-dimensional image datasets were first pre-processed using histogram normalization to standardize pixel intensities. The normalized images were then fed in parallel into affine transformations and feature pre-extraction. Fast data density functional theory (fDDFT) was adopted for topological feature extraction. Under the fDDFT framework, 3-dimensional topological features were extracted and used for 2-dimensional tumor image segmentation; the significant 2-dimensional images were then reconstructed back into 3-dimensional intensity feature maps using physical perceptrons, filtering out undesired image components in the process. Thus, at the pre-processing stage, the proposed framework simultaneously provided dimension-hybridized intensity feature maps and affine-transformed image sets. The feature maps and the transformed images were concatenated and became the inputs of the attention U-Net. By employing gate control of the data flow, the encoder can act as a masked feature tracker that concatenates the features produced by the decoder. Under the proposed algorithmic scheme, we constructed a fast method of dimension-hybridized feature pre-extraction for the network training procedure; the model size as well as the computational complexity may therefore be safely reduced.

YOLO-LOGO: A transformer-based YOLO segmentation model for breast mass detection and segmentation in digital mammograms

  • Su, Y.
  • Liu, Q.
  • Xie, W.
  • Hu, P.
Comput Methods Programs Biomed 2022 Journal Article, cited 4 times
Website
BACKGROUND AND OBJECTIVE: Both mass detection and segmentation in digital mammograms play a crucial role in early breast cancer detection and treatment. Furthermore, clinical experience has shown that they are the upstream tasks of pathological classification of breast lesions. Recent advancements in deep learning have made the analyses faster and more accurate. This study aims to develop a deep learning model architecture for breast cancer mass detection and segmentation using mammography. METHODS: In this work we proposed a double-shot model for simultaneous mass detection and segmentation using a combination of YOLO (You Only Look Once) and LOGO (Local-Global) architectures. First, we adopted YoloV5L6, a state-of-the-art object detection model, to position and crop the breast mass in mammograms at high resolution. Second, to balance training efficiency and segmentation performance, we modified the LOGO training strategy to train on whole images and cropped images in the global and local transformer branches separately. The two branches were then merged to form the final segmentation decision. RESULTS: The proposed YOLO-LOGO model was tested on two independent mammography datasets (CBIS-DDSM and INbreast). The proposed model performs significantly better than previous works. It achieves a true positive rate of 95.7% and a mean average precision of 65.0% for mass detection on the CBIS-DDSM dataset. Its performance for mass segmentation on the CBIS-DDSM dataset is F1-score = 74.5% and IoU = 64.0%. A similar performance trend is observed on the independent INbreast dataset as well. CONCLUSIONS: The proposed model has higher efficiency and better performance, reduces computational requirements, and improves the versatility and accuracy of computer-aided breast cancer diagnosis. Hence it has the potential to provide more assistance to doctors in early breast cancer detection and treatment, thereby reducing mortality.
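The segmentation figures above (F1 and IoU) follow from the overlap of predicted and ground-truth masks; a minimal NumPy sketch:

```python
import numpy as np

def mask_metrics(pred, truth):
    """IoU and F1 (equivalently, Dice) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    iou = tp / union if union else 1.0
    denom = pred.sum() + truth.sum()
    f1 = 2.0 * tp / denom if denom else 1.0
    return iou, f1
```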

Radiogenomic-based multiomic analysis reveals imaging intratumor heterogeneity phenotypes and therapeutic targets

  • Su, G. H.
  • Xiao, Y.
  • You, C.
  • Zheng, R. C.
  • Zhao, S.
  • Sun, S. Y.
  • Zhou, J. Y.
  • Lin, L. Y.
  • Wang, H.
  • Shao, Z. M.
  • Gu, Y. J.
  • Jiang, Y. Z.
2023 Journal Article, cited 0 times
Website
Intratumor heterogeneity (ITH) profoundly affects therapeutic responses and clinical outcomes. However, the widespread methods for assessing ITH based on genomic sequencing or pathological slides, which rely on limited tissue samples, may lead to inaccuracies due to potential sampling biases. Using a newly established multicenter breast cancer radio-multiomic dataset (n = 1474) encompassing radiomic features extracted from dynamic contrast-enhanced magnetic resonance images, we formulated a noninvasive radiomics methodology to effectively investigate ITH. Imaging ITH (IITH) was associated with genomic and pathological ITH, predicting poor prognosis independently in breast cancer. Through multiomic analysis, we identified activated oncogenic pathways and metabolic dysregulation in high-IITH tumors. Integrated metabolomic and transcriptomic analyses highlighted ferroptosis as a vulnerability and potential therapeutic target of high-IITH tumors. Collectively, this work emphasizes the superiority of radiomics in capturing ITH. Furthermore, we provide insights into the biological basis of IITH and propose therapeutic targets for breast cancers with elevated IITH.

Predicting, Analyzing and Communicating Outcomes of COVID-19 Hospitalizations with Medical Images and Clinical Data

  • Stritzel, Oliver
  • Raidou, Renata Georgia
2022 Journal Article, cited 0 times
Website
We propose PACO, a visual analytics framework to support the prediction, analysis, and communication of COVID-19 hospitalization outcomes. Although several real-world data sets about COVID-19 are openly available, most current research focuses on the detection of the disease. To date, no previous work has combined insights from medical image data with knowledge extracted from clinical data to predict the likelihood of an intensive care unit (ICU) visit, ventilation, or decease. Moreover, the available literature has not yet focused on communicating such results to the broader society. To support the prediction, analysis and communication of the outcomes of COVID-19 hospitalizations on the basis of a publicly available data set comprising both electronic health data and medical image data [SSP∗21], we conduct the following three steps: (1) automated segmentation of the available X-ray images and processing of clinical data, (2) development of a model for the prediction of disease outcomes and a comparison to state-of-the-art prediction scores for both data sources, i.e., medical images and clinical data, and (3) communication of outcomes to two different groups (i.e., clinical experts and the general population) through interactive dashboards. Preliminary results indicate that the prediction, analysis and communication of hospitalization outcomes is a significant topic in the context of COVID-19 prevention.

Comparison of Safety Margin Generation Concepts in Image Guided Radiotherapy to Account for Daily Head and Neck Pose Variations

  • Stoll, Markus
  • Stoiber, Eva Maria
  • Grimm, Sarah
  • Debus, Jürgen
  • Bendl, Rolf
  • Giske, Kristina
PLoS One 2016 Journal Article, cited 2 times
Website
PURPOSE: Intensity modulated radiation therapy (IMRT) of head and neck tumors allows a precise conformation of the high-dose region to clinical target volumes (CTVs) while respecting dose limits to organs at risk (OARs). Accurate patient setup reduces translational and rotational deviations between therapy planning and therapy delivery days. However, uncertainties in the shape of the CTV and OARs, due to e.g. small pose variations in the highly deformable anatomy of the head and neck region, can still compromise the dose conformation. Routinely applied safety margins around the CTV cause higher dose deposition in adjacent healthy tissue and should be kept as small as possible. MATERIALS AND METHODS: In this work we evaluate and compare three approaches for margin generation: 1) a clinically used approach with a constant isotropic 3 mm margin, 2) a previously proposed approach adopting a spatial model of the patient, and 3) a newly developed approach adopting a biomechanical model of the patient. All approaches are retrospectively evaluated using a large patient cohort of over 500 fraction control CT images with heterogeneous pose changes. Automatic methods for finding landmark positions in the control CT images are combined with a patient-specific biomechanical finite element model to evaluate the CTV deformation. RESULTS: The applied methods for deformation modeling show that the pose changes cause deformations in the target region with a mean motion magnitude of 1.80 mm. We found that the CTV size can be reduced by the two variable margin approaches by 15.6% and 13.3%, respectively, while maintaining CTV coverage. With approach 3 an increase of target coverage was obtained. CONCLUSION: Variable margins increase target coverage, reduce risk to OARs and improve healthy tissue sparing at the same time.

Multimodal deep learning to predict prognosis in adult and pediatric brain tumors

  • Steyaert, S.
  • Qiu, Y. L.
  • Zheng, Y.
  • Mukherjee, P.
  • Vogel, H.
  • Gevaert, O.
2023 Journal Article, cited 0 times
Website
BACKGROUND: The introduction of deep learning in both imaging and genomics has significantly advanced the analysis of biomedical data. For complex diseases such as cancer, different data modalities may reveal different disease characteristics, and the integration of imaging with genomic data has the potential to unravel more information than when using these data sources in isolation. Here, we propose a DL framework that combines these two modalities with the aim of predicting brain tumor prognosis. METHODS: Using two separate glioma cohorts of 783 adults and 305 pediatric patients, we developed a DL framework that can fuse histopathology images with gene expression profiles. Three strategies for data fusion were implemented and compared: early, late, and joint fusion. Additional validation of the adult glioma models was done on an independent cohort of 97 adult patients. RESULTS: Here we show that the developed multimodal data models achieve better prediction results compared to the single data models, but also lead to the identification of more relevant biological pathways. When testing our adult models on a third brain tumor dataset, we show our multimodal framework is able to generalize and performs better on new data from different cohorts. Leveraging the concept of transfer learning, we demonstrate how our pediatric multimodal models can be used to predict prognosis for two rarer pediatric brain tumors with fewer available samples. CONCLUSIONS: Our study illustrates that a multimodal data fusion approach can be successfully implemented and customized to model clinical outcome of adult and pediatric brain tumors. An increasing amount of complex patient data is generated when treating patients with cancer, including histopathology data (where the appearance of a tumor is examined under a microscope) and molecular data (such as analysis of a tumor's genetic material). Computational methods to integrate these data types might help us to predict outcomes in patients with cancer. Here, we propose a deep learning method, which involves computer software learning from patterns in the data, to combine histopathology and molecular data to predict outcomes in patients with brain cancers. Using three cohorts of patients, we show that our method combining the different datasets performs better than models using one data type. Methods like ours might help clinicians to better inform patients about their prognosis and make decisions about their care.

Glioblastomas located in proximity to the subventricular zone (SVZ) exhibited enrichment of gene expression profiles associated with the cancer stem cell state

  • Steed, T. C.
  • Treiber, J. M.
  • Taha, B.
  • Engin, H. B.
  • Carter, H.
  • Patel, K. S.
  • Dale, A. M.
  • Carter, B. S.
  • Chen, C. C.
J Neurooncol 2020 Journal Article, cited 2 times
Website
INTRODUCTION: Conflicting results have been reported on the association between glioblastoma proximity to the subventricular zone (SVZ) and enrichment of cancer stem cell properties. Here, we examined this hypothesis using magnetic resonance (MR) images derived from 217 The Cancer Imaging Archive (TCIA) glioblastoma subjects. METHODS: Pre-operative MR images were segmented automatically into contrast enhancing (CE) tumor volumes using Iterative Probabilistic Voxel Labeling (IPVL). Distances were calculated from the centroid of CE tumor volumes to the SVZ and correlated with gene expression profiles of the corresponding glioblastomas. Correlative analyses were performed between SVZ distance, gene expression patterns, and clinical survival. RESULTS: Glioblastomas located in proximity to the SVZ showed increased mRNA expression patterns associated with the cancer stem-cell state, including CD133 (P = 0.006). Consistent with previous observations suggesting that glioblastoma stem cells exhibit increased DNA repair capacity, glioblastomas in proximity to the SVZ also showed increased expression of DNA repair genes, including MGMT (P = 0.018). Reflecting this enhanced DNA repair capacity, the genomes of glioblastomas in SVZ proximity harbored fewer single nucleotide polymorphisms relative to those located distant to the SVZ (P = 0.003). Concordant with the notion that glioblastoma stem cells are more aggressive and refractory to therapy, patients with glioblastoma in proximity to the SVZ exhibited poorer progression-free and overall survival (P < 0.01). CONCLUSION: An unbiased analysis of TCIA suggests that glioblastomas located in proximity to the SVZ exhibited mRNA expression profiles associated with stem cell properties and increased DNA repair capacity, and this location is associated with poor clinical survival.

Differential localization of glioblastoma subtype: implications on glioblastoma pathogenesis

  • Steed, Tyler C
  • Treiber, Jeffrey M
  • Patel, Kunal
  • Ramakrishnan, Valya
  • Merk, Alexander
  • Smith, Amanda R
  • Carter, Bob S
  • Dale, Anders M
  • Chow, LM
  • Chen, Clark C
Oncotarget 2016 Journal Article, cited 8 times
Website
INTRODUCTION: The subventricular zone (SVZ) has been implicated in the pathogenesis of glioblastoma. Whether molecular subtypes of glioblastoma arise from unique niches of the brain relative to the SVZ remains largely unknown. Here, we tested whether these subtypes of glioblastoma occupy distinct regions of the cerebrum and examined glioblastoma localization in relation to the SVZ. METHODS: Pre-operative MR images from 217 glioblastoma patients from The Cancer Imaging Archive were segmented automatically into contrast enhancing (CE) tumor volumes using Iterative Probabilistic Voxel Labeling (IPVL). Probabilistic maps of tumor location were generated for each subtype and distances were calculated from the centroid of CE tumor volumes to the SVZ. Glioblastomas that arose in a Genetically Modified Murine Model (GEMM) were also analyzed with regard to SVZ distance and molecular subtype. RESULTS: Classical and mesenchymal glioblastomas were more diffusely distributed and located farther from the SVZ. In contrast, proneural and neural glioblastomas were more likely to be located in closer proximity to the SVZ. Moreover, in a GFAP-CreER; Pten(loxP/loxP); Trp53(loxP/loxP); Rb1(loxP/loxP); Rbl1(-/-) GEMM of glioblastoma, where tumors can spontaneously arise in different regions of the cerebrum, tumors that arose near the SVZ were more likely to be of the proneural subtype (p < 0.0001). CONCLUSIONS: Glioblastoma subtypes occupy different regions of the brain and vary in proximity to the SVZ. These findings harbor implications pertaining to the pathogenesis of glioblastoma subtypes.

Quantification of glioblastoma mass effect by lateral ventricle displacement

  • Steed, Tyler C
  • Treiber, Jeffrey M
  • Brandel, Michael G
  • Patel, Kunal S
  • Dale, Anders M
  • Carter, Bob S
  • Chen, Clark C
2018 Journal Article, cited 1 times
Website
Mass effect has demonstrated prognostic significance for glioblastoma, but is poorly quantified. Here we define and characterize a novel neuroimaging parameter, lateral ventricle displacement (LVd), which quantifies mass effect in glioblastoma patients. LVd is defined as the magnitude of displacement from the center of mass of the lateral ventricle volume in glioblastoma patients relative to that of a normal reference brain. Pre-operative MR images from 214 glioblastoma patients from The Cancer Imaging Archive (TCIA) were segmented using iterative probabilistic voxel labeling (IPVL). LVd, contrast enhancing volumes (CEV) and FLAIR hyper-intensity volumes (FHV) were determined. Associations with patient survival and tumor genomics were investigated using data from The Cancer Genome Atlas (TCGA). Glioblastoma patients had significantly higher LVd relative to patients without brain tumors. The variance of LVd was not explained by tumor volume, as defined by CEV or FLAIR. LVd was robustly associated with glioblastoma survival in Cox models which accounted for both age and Karnofsky Performance Scale (KPS) (p = 0.006). Glioblastomas with higher LVd demonstrated increased expression of genes associated with tumor proliferation and decreased expression of genes associated with tumor invasion. Our results suggest LVd is a quantitative measure of glioblastoma mass effect and a prognostic imaging biomarker.
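The core measurement reduces to a centroid-distance computation; a minimal sketch, assuming the segmentation has already been registered to the reference space and that ref_centroid (a hypothetical precomputed input) holds the reference ventricle centroid in millimetres:

```python
import numpy as np

def lateral_ventricle_displacement(mask, ref_centroid, spacing):
    """Distance (mm) from the center of mass of a patient's lateral
    ventricle segmentation to a reference-brain ventricle centroid.

    mask: binary ventricle segmentation in the reference space.
    spacing: voxel size in mm along each axis.
    """
    coords = np.argwhere(mask) * np.asarray(spacing)  # voxel -> mm
    centroid = coords.mean(axis=0)
    return float(np.linalg.norm(centroid - np.asarray(ref_centroid)))
```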

Iterative Probabilistic Voxel Labeling: Automated Segmentation for Analysis of The Cancer Imaging Archive Glioblastoma Images

  • Steed, TC
  • Treiber, JM
  • Patel, KS
  • Taich, Z
  • White, NS
  • Treiber, ML
  • Farid, N
  • Carter, BS
  • Dale, AM
  • Chen, CC
American Journal of Neuroradiology 2015 Journal Article, cited 12 times
Website
BACKGROUND AND PURPOSE: Robust, automated segmentation algorithms are required for quantitative analysis of large imaging datasets. We developed an automated method that identifies and labels brain tumor-associated pathology using iterative probabilistic voxel labeling with k-nearest neighbor and Gaussian mixture model classification. Our purpose was to develop a segmentation method that could be applied to a variety of imaging from The Cancer Imaging Archive. MATERIALS AND METHODS: Images from 2 sets of 15 randomly selected subjects with glioblastoma from The Cancer Imaging Archive were processed by using the automated algorithm. The algorithm-defined tumor volumes were compared with those segmented by trained operators by using the Dice similarity coefficient. RESULTS: Compared with operator volumes, algorithm-generated segmentations yielded mean Dice similarities of 0.92 +/- 0.03 for contrast-enhancing volumes and 0.84 +/- 0.09 for FLAIR hyperintensity volumes. These values compared favorably with the means of Dice similarity coefficients between the operator-defined segmentations: 0.92 +/- 0.03 for contrast-enhancing volumes and 0.92 +/- 0.05 for FLAIR hyperintensity volumes. Robust segmentations can be achieved when only postcontrast T1WI and FLAIR images are available. CONCLUSIONS: Iterative probabilistic voxel labeling defined tumor volumes that were highly consistent with operator-defined volumes. Application of this algorithm could facilitate quantitative assessment of neuroimaging from patients with glioblastoma for both research and clinical indications.

An Integrative Analysis of Image Segmentation and Survival of Brain Tumour Patients

  • Starke, Sebastian
  • Eckert, Carlchristian
  • Zwanenburg, Alex
  • Speidel, Stefanie
  • Löck, Steffen
  • Leger, Stefan
2020 Book Section, cited 0 times
Our contribution to the BraTS 2019 challenge consisted of a deep learning based approach for segmentation of brain tumours from MR images using cross validation ensembles of 2D-UNet models. Furthermore, different approaches for the prediction of patient survival time using clinical as well as imaging features were investigated. A simple linear regression model using patient age and tumour volumes outperformed more elaborate approaches like convolutional neural networks or radiomics-based analysis with an accuracy of 0.55 on the validation cohort and 0.51 on the test cohort.

False Positive Reduction in Mammographic Mass Detection Using Image Representations for Textural Analysis

  • Srinivashini, N
  • Lavanya, R
2021 Conference Paper, cited 0 times
Website
Breast cancer is a prominent disease affecting women and is associated with a low survival rate. Mammography is a widely accepted and adopted modality for diagnosing breast cancer. The challenges faced in the early detection of breast cancer include the poor contrast of mammograms, the complex nature of abnormalities and the difficulty of interpreting dense tissues. Computer-Aided Diagnosis (CAD) schemes help radiologists improve sensitivity by rendering an objective diagnosis, in addition to reducing the time and cost involved. Conventional methods for automated diagnosis involve extracting handcrafted features from a Region of Interest (ROI) followed by classification using Machine Learning (ML) techniques. The main challenge faced in CAD is a high false positive rate, which adds to patient anxiety. This paper proposes a new CAD scheme for reducing the number of false positives in mammographic mass detection using a Deep Learning (DL) method. Convolutional Neural Networks (CNNs) can be considered a prospective candidate for efficiently eliminating false positives in mammographic mass detection. More specifically, image representations that include Hilbert's image representation and the forest fire model, which contain rich textural information, are given as input to a CNN for mammogram classification. The proposed system outperforms an ML approach based on handcrafted features extracted from the image representations considered. In particular, the forest fire-CNN combination achieves an accuracy as high as 96%.

A hybrid deep CNN model for brain tumor image multi-classification

  • Srinivasan, S.
  • Francis, D.
  • Mathivanan, S. K.
  • Rajadurai, H.
  • Shivahare, B. D.
  • Shah, M. A.
BMC Med Imaging 2024 Journal Article, cited 0 times
Website
The current approach to diagnosing and classifying brain tumors relies on the histological evaluation of biopsy samples, which is invasive, time-consuming, and susceptible to manual errors. These limitations underscore the pressing need for a fully automated, deep-learning-based multi-classification system for brain malignancies. This article aims to leverage a deep convolutional neural network (CNN) to enhance early detection and presents three distinct CNN models designed for different types of classification tasks. The first CNN model achieves an impressive detection accuracy of 99.53% for brain tumors. The second CNN model, with an accuracy of 93.81%, proficiently categorizes brain tumors into five distinct types: normal, glioma, meningioma, pituitary, and metastatic. Furthermore, the third CNN model demonstrates an accuracy of 98.56% in accurately classifying brain tumors into their different grades. To ensure optimal performance, a grid search optimization approach is employed to automatically fine-tune all the relevant hyperparameters of the CNN models. The utilization of large, publicly accessible clinical datasets results in robust and reliable classification outcomes. This article conducts a comprehensive comparison of the proposed models against classical models, such as AlexNet, DenseNet121, ResNet-101, VGG-19, and GoogleNet, reaffirming the superiority of the deep CNN-based approach in advancing the field of brain tumor classification and early detection.

An Image Processing Tool for Efficient Feature Extraction in Computer-Aided Detection Systems

  • Soysal, Omer M
  • Chen, P
  • Schneider, Helmut
2010 Conference Proceedings, cited 3 times
Website
In this paper, we present an image processing tool that supports efficient image feature extraction and pre-processing developed in the context of a computer-aided detection (CAD) system for lung cancer nodule detection from CT images. We outline the main functionalities of the tool, which implements a number of novel methods for handling image pre-processing and feature extraction tasks. In particular, we describe an efficient way to compute the run-length feature, a photometric feature describing the texture of an image.
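For a concrete sense of the run-length feature mentioned here, a simple (unoptimized) gray-level run-length matrix for horizontal runs can be computed as below; the paper's efficient algorithm is not reproduced.

```python
import numpy as np

def glrlm_horizontal(img, levels):
    """Gray-level run-length matrix for horizontal runs.

    img: 2D array of integer gray levels in [0, levels).
    Returns m where m[g, r - 1] counts runs of level g and length r.
    """
    m = np.zeros((levels, img.shape[1]), dtype=int)
    for row in img:
        run_val, run_len = row[0], 1
        for v in row[1:]:
            if v == run_val:
                run_len += 1
            else:
                m[run_val, run_len - 1] += 1
                run_val, run_len = v, 1
        m[run_val, run_len - 1] += 1  # close the final run
    return m
```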

ALTIS: A fast and automatic lung and trachea CT-image segmentation method

  • Sousa, A. M.
  • Martins, S. B.
  • Falcão, A. X.
  • Reis, F.
  • Bagatin, E.
  • Irion, K.
Med Phys 2019 Journal Article, cited 0 times
Website
PURPOSE: The automated segmentation of each lung and trachea in CT scans is commonly taken as a solved problem. Indeed, existing approaches may easily fail in the presence of some abnormalities caused by a disease, trauma, or previous surgery. For robustness, we present ALTIS (implementation available at http://lids.ic.unicamp.br/downloads), a fast automatic lung and trachea CT-image segmentation method that relies on image features and relative shape- and intensity-based characteristics less affected by most appearance variations of abnormal lungs and trachea. METHODS: ALTIS consists of a sequence of image foresting transforms (IFTs) organized in three main steps: (a) lung-and-trachea extraction, (b) seed estimation inside background, trachea, left lung, and right lung, and (c) their delineation such that each object is defined by an optimum-path forest rooted at its internal seeds. We compare ALTIS with two methods based on shape models (SOSM-S and MALF), and one algorithm based on seeded region growing (PTK). RESULTS: The experiments involve the highest number of scans found in the literature: 1255 scans from multiple public data sets containing many anomalous cases, with only 50 normal scans used for training and 1205 scans used for testing the methods. Quantitative experiments are based on two metrics, Dice and ASSD. Furthermore, we also demonstrate the robustness of ALTIS in seed estimation. Considering the test set, the proposed method achieves an average Dice of 0.987 for both lungs and 0.898 for the trachea, and an average ASSD of 0.938 for the right lung, 0.856 for the left lung, and 1.316 for the trachea. These results indicate that ALTIS is statistically more accurate and considerably faster than the compared methods, being able to complete segmentation in a few seconds on modern PCs. CONCLUSION: ALTIS is the most effective and efficient choice among the compared methods for segmenting the left lung, right lung, and trachea in anomalous CT scans for subsequent detection, segmentation, and quantitative analysis of abnormal structures in the lung parenchyma and pleural space.
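Both evaluation metrics are standard; Dice appears earlier in this list, and the average symmetric surface distance (ASSD) can be sketched with SciPy distance transforms as follows, assuming non-empty binary masks and known voxel spacing:

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def assd(a, b, spacing=(1.0, 1.0, 1.0)):
    """Average symmetric surface distance between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a & ~binary_erosion(a)
    surf_b = b & ~binary_erosion(b)
    # Distance of every voxel to the nearest surface voxel of the
    # *other* mask, respecting anisotropic voxel spacing
    d_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    d_to_a = distance_transform_edt(~surf_a, sampling=spacing)
    return float(np.concatenate([d_to_b[surf_a], d_to_a[surf_b]]).mean())
```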

Exploration of temporal stability and prognostic power of radiomic features based on electronic portal imaging device images

  • Soufi, M.
  • Arimura, H.
  • Nakamoto, T.
  • Hirose, T. A.
  • Ohga, S.
  • Umezu, Y.
  • Honda, H.
  • Sasaki, T.
Phys Med 2018 Journal Article, cited 7 times
Website
PURPOSE: We aimed to explore the temporal stability of radiomic features in the presence of tumor motion and the prognostic power of temporally stable features. METHODS: We selected single-fraction dynamic electronic portal imaging device (EPID) images (n=275 frames) and static digitally reconstructed radiographs (DRRs) of 11 lung cancer patients who received stereotactic body radiation therapy (SBRT) under free breathing. Forty-seven statistical radiomic features, consisting of 14 histogram-based features and 33 texture features derived from the gray-level co-occurrence and gray-level run-length matrices, were computed. The temporal stability was assessed by using a multiplication of the intra-class correlation coefficients (ICCs) between features derived from the EPID and DRR images at three quantization levels. The prognostic power of the features was investigated using a different database of lung cancer patients (n=221) based on a Kaplan-Meier survival analysis. RESULTS: Fifteen radiomic features were found to be temporally stable across the quantization levels. Among these features, seven have shown potential for prognostic prediction in lung cancer patients. CONCLUSIONS: This study suggests a novel approach to select temporally stable radiomic features, which could hold prognostic power in lung cancer patients.

Identification of optimal mother wavelets in survival prediction of lung cancer patients using wavelet decomposition‐based radiomic features

  • Soufi, Mazen
  • Arimura, Hidetaka
  • Nagami, Noriyuki
Medical Physics 2018 Journal Article, cited 1 times
Website

3D Registration of pre-surgical prostate MRI and histopathology images via super-resolution volume reconstruction

  • Sood, R. R.
  • Shao, W.
  • Kunder, C.
  • Teslovich, N. C.
  • Wang, J. B.
  • Soerensen, S. J. C.
  • Madhuripan, N.
  • Jawahar, A.
  • Brooks, J. D.
  • Ghanouni, P.
  • Fan, R. E.
  • Sonn, G. A.
  • Rusu, M.
Med Image Anal 2021 Journal Article, cited 0 times
Website
The use of MRI for prostate cancer diagnosis and treatment is increasing rapidly. However, identifying the presence and extent of cancer on MRI remains challenging, leading to high variability in detection even among expert radiologists. Improvement in cancer detection on MRI is essential to reducing this variability and maximizing the clinical utility of MRI. To date, such improvement has been limited by the lack of accurately labeled MRI datasets. Data from patients who underwent radical prostatectomy enables the spatial alignment of digitized histopathology images of the resected prostate with corresponding pre-surgical MRI. This alignment facilitates the delineation of detailed cancer labels on MRI via the projection of cancer from histopathology images onto MRI. We introduce a framework that performs 3D registration of whole-mount histopathology images to pre-surgical MRI in three steps. First, we developed a novel multi-image super-resolution generative adversarial network (miSRGAN), which learns information useful for 3D registration by producing a reconstructed 3D MRI. Second, we trained the network to learn information between histopathology slices to facilitate the application of 3D registration methods. Third, we registered the reconstructed 3D histopathology volumes to the reconstructed 3D MRI, mapping the extent of cancer from histopathology images onto MRI without the need for slice-to-slice correspondence. When compared to interpolation methods, our super-resolution reconstruction resulted in the highest PSNR relative to clinical 3D MRI (32.15 dB vs 30.16 dB for BSpline interpolation). Moreover, the registration of 3D volumes reconstructed via super-resolution for both MRI and histopathology images showed the best alignment of cancer regions when compared to (1) the state-of-the-art RAPSODI approach, (2) volumes that were not reconstructed, or (3) volumes that were reconstructed using nearest neighbor, linear, or BSpline interpolations. The improved 3D alignment of histopathology images and MRI facilitates the projection of accurate cancer labels on MRI, allowing for the development of improved MRI interpretation schemes and machine learning models to automatically detect cancer on MRI.
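The PSNR comparison cited above follows the usual definition; a minimal NumPy sketch, with the data range taken from the reference volume as an assumption:

```python
import numpy as np

def psnr(recon, ref, data_range=None):
    """Peak signal-to-noise ratio (dB) of a reconstruction vs. a
    reference volume."""
    recon = np.asarray(recon, dtype=float)
    ref = np.asarray(ref, dtype=float)
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((recon - ref) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```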

MR and mammographic imaging features of HER2-positive breast cancers according to hormone receptor status: a retrospective comparative study

  • Song, Sung Eun
  • Bae, Min Sun
  • Chang, Jung Min
  • Cho, Nariya
  • Ryu, Han Suk
  • Moon, Woo Kyung
Acta Radiologica 2016 Journal Article, cited 2 times
Website
Background: Human epidermal growth factor receptor 2-positive (HER2+) breast cancer has two distinct subtypes according to hormone receptor (HR) status. Survival, pattern of recurrence, and treatment response differ between HR-/HER2+ and HR+/HER2+ cancers. Purpose: To investigate imaging and clinicopathologic features of HER2+ cancers and their correlation with HR expression. Material and Methods: Between 2011 and 2013, 252 consecutive patients with 252 surgically confirmed HER2+ cancers (125 HR- and 127 HR+) were included. Two experienced breast radiologists blinded to the clinicopathologic findings reviewed the mammograms and magnetic resonance (MR) images using the BI-RADS lexicon. Tumor kinetic features were acquired by computer-aided detection (CAD). The imaging and clinicopathologic features of 125 HR-/HER2+ cancers were compared with those of 127 HR+/HER2+ cancers. Association between HR status and each feature was assessed. Results: Multiple logistic regression analysis showed that circumscribed mass margin (odds ratio [OR], 4.73; P < 0.001), associated non-mass enhancement (NME) on MR images (OR, 3.29; P = 0.001), high histologic grade (OR, 3.89; P = 0.002), high Ki-67 index (OR, 3.06; P = 0.003), and older age (OR, 2.43; P = 0.006) remained independent indicators associated with HR-/HER2+ cancers. Between the two HER2+ subtypes, there were no differences in mammographic imaging presentations, calcification features, or CAD-derived MR kinetic features. Conclusion: HER2+ breast cancers have different MR imaging (MRI) phenotypes and clinicopathologic features according to HR status. MRI features related to HR and HER2 status have the potential to be used for diagnosis and treatment decisions in HER2+ breast cancer patients.

Using Deep Learning for Classification of lung nodules on Computed Tomography Images

  • Song, QingZeng
  • Zhao, Lei
  • Luo, XingKe
  • Dou, XueChen
2017 Journal Article, cited 19 times
Website
Lung cancer is the most common cancer; it cannot be ignored and causes death when care comes too late. Currently, CT can be used to help doctors detect lung cancer in its early stages. In many cases, the diagnosis of lung cancer depends on the experience of doctors, which may miss some patients and cause problems. Deep learning has proved to be a popular and powerful method in many medical imaging diagnosis areas. In this paper, three types of deep neural networks (CNN, DNN, and SAE) are designed for lung cancer classification. These networks are applied to the CT image classification task of benign and malignant lung nodules, with some modification. The networks were evaluated on the LIDC-IDRI database. The experimental results show that the CNN achieved the best performance among the three networks, with an accuracy of 84.15%, sensitivity of 83.96%, and specificity of 84.32%.
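The reported accuracy, sensitivity and specificity come directly from the confusion matrix of the benign/malignant decision; a minimal sketch:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity and specificity from binary labels
    (1 = malignant, 0 = benign)."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    return {
        "accuracy": (tp + tn) / y_true.size,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```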

Non-small cell lung cancer: quantitative phenotypic analysis of CT images as a potential marker of prognosis

  • Song, Jiangdian
  • Liu, Zaiyi
  • Zhong, Wenzhao
  • Huang, Yanqi
  • Ma, Zelan
  • Dong, Di
  • Liang, Changhong
  • Tian, Jie
Scientific Reports 2016 Journal Article, cited 14 times
Website

Dynamic Co-occurrence of Local Anisotropic Gradient Orientations (DyCoLIAGe) Descriptors from Pre-treatment Perfusion DSC-MRI to Predict Overall Survival in Glioblastoma

  • Song, Bolin
2019 Thesis, cited 0 times
Website
A significant clinical challenge in glioblastoma is to risk-stratify patients for clinical trials, preferably using MRI scans. Radiomics involves mining sub-visual features from routine imaging that could serve as surrogate markers of tumor heterogeneity. Previously, our group developed a gradient-based radiomic descriptor, Co-occurrence of Local Anisotropic Gradient Orientations (CoLIAGe), to capture tumor heterogeneity on structural MRI. I present an extension of CoLIAGe to perfusion MRI, termed dynamic CoLIAGe (DyCoLIAGe), and demonstrate its application in predicting overall survival in glioblastoma. Following manual segmentation, 52 CoLIAGe features were extracted from edema and enhancing tumor at different time phases during contrast administration of perfusion MRI. Each feature was separately plotted across the different time points, and a 3rd-order polynomial was fit to each feature curve. The corresponding polynomial coefficients were evaluated in terms of their prognostic performance. My results suggest that DyCoLIAGe may be prognostic of overall survival in glioblastoma.
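The dynamic descriptor construction described above amounts to fitting a cubic polynomial to each feature's temporal curve and keeping the coefficients as new features; a minimal sketch (function and argument names are illustrative):

```python
import numpy as np

def dynamic_coefficients(feature_curve, timepoints):
    """Fit a 3rd-order polynomial to one radiomic feature's values
    across perfusion time phases; the four coefficients serve as the
    'dynamic' descriptors."""
    return np.polyfit(timepoints, feature_curve, deg=3)
```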

Efficient MRI Brain Tumor Segmentation Using Multi-resolution Encoder-Decoder Networks

  • Soltaninejad, Mohammadreza
  • Pridmore, Tony
  • Pound, Michael
2021 Book Section, cited 0 times
In this paper, we propose an automated three-dimensional (3D) deep learning approach for the segmentation of gliomas in pre-operative brain MRI scans. We introduce a state-of-the-art multi-resolution encoder-decoder architecture which comprises separate branches to incorporate local high-resolution image features and wider low-resolution contextual information. We also use a unified multi-task loss function to provide end-to-end segmentation training. For the task of survival prediction, we propose a regression algorithm based on random forests to predict the survival days of patients. Our proposed network is fully automated and takes patches as input, so it can operate on input images of arbitrary size. We trained our proposed network on the BraTS 2020 challenge dataset, which consists of 369 training cases, then validated it on 125 unseen validation cases and tested it on 166 unseen cases from the testing dataset using a blind testing approach. The quantitative and qualitative results demonstrate that our proposed network provides efficient segmentation of brain tumors. The mean Dice overlap measures for automatic brain tumor segmentation of the validation dataset against ground truth are 0.87, 0.80, and 0.66 for the whole tumor, core, and enhancing tumor, respectively. The corresponding results for the testing dataset are 0.78, 0.70, and 0.66, respectively. The accuracy measures of the proposed model for the survival prediction tasks are 0.45 and 0.505 for the validation and testing datasets, respectively.

Efficacy of Location-Based Features for Survival Prediction of Patients With Glioblastoma Depending on Resection Status

  • Soltani, Madjid
  • Bonakdar, Armin
  • Shakourifar, Nastaran
  • Babaei, Reza
  • Raahemifar, Kaamran
Front Oncol 2021 Journal Article, cited 0 times
Website
Cancer remains one of the most fatal diseases. Each year, countless people die because of late diagnosis or incorrect treatment. Glioma, one of the most common primary brain tumors, varies in aggressiveness and sub-regions, which can affect the risk of disease. Although prediction of overall survival based on multimodal magnetic resonance imaging (MRI) is challenging, in this study we assess if and how location-based features of tumors can affect overall survival prediction. This approach is evaluated independently and in combination with radiomic features. The process is carried out on a data set of MRI images of patients with glioblastoma. To assess the impact of resection status, the data set is divided into two groups: patients reported as having gross total resection and patients with unknown resection status. Different machine learning algorithms were then used to evaluate how location features are linked with overall survival. Results from regression models indicate that location-based features have considerable effects on patients' overall survival on their own. Additionally, classifier models show an improvement in prediction accuracy from the addition of location-based features to radiomic features.

Innovative Design Methodology for Patient-Specific Short Femoral Stems

  • Solorzano-Requejo, W.
  • Ojeda, C.
  • Diaz Lantada, A.
Materials (Basel) 2022 Journal Article, cited 0 times
Website
The biomechanical performance of hip prostheses is often suboptimal, which leads to problems such as strain shielding, bone resorption and implant loosening, affecting the long-term viability of these implants for articular repair. Different studies have highlighted the value of short stems for preserving bone stock and minimizing shielding, hence providing an alternative to conventional hip prostheses with long stems. Such short stems are especially valuable for younger patients, as they may require additional surgical interventions and replacements in the future, for which the preservation of bone stock is fundamental. Arguably, enhanced results may be achieved by combining the benefits of short stems with the possibilities of personalization, which are now empowered by a wise combination of medical images, computer-aided design and engineering resources and automated manufacturing tools. In this study, an innovative design methodology for custom-made short femoral stems is presented. The design process is enhanced through a novel app employing elliptical adjustment for the quasi-automated CAD modeling of personalized short femoral stems. The proposed methodology is validated by completely developing two personalized short femoral stems, which are evaluated by combining in silico studies (finite element method (FEM) simulations), for quantifying their biomechanical performance, and rapid prototyping, for evaluating implantability.

Deep variational clustering framework for self-labeling large-scale medical images

  • Soleymani, Farzin
  • Eslami, Mohammad
  • Elze, Tobias
  • Bischl, Bernd
  • Rezaei, Mina
  • Išgum, Ivana
  • Colliot, Olivier
2022 Conference Paper, cited 0 times
Website
One of the most promising approaches for unsupervised learning is combining deep representation learning and deep clustering. Recent studies propose to simultaneously learn representations using deep neural networks and perform clustering by defining a clustering loss on top of the embedded features. Unsupervised image clustering naturally requires good feature representations to capture the distribution of the data and subsequently differentiate data points from one another. Among existing deep learning models, the generative variational autoencoder explicitly learns the data-generating distribution in a latent space. We propose a Deep Variational Clustering (DVC) framework for unsupervised representation learning and clustering of large-scale medical images. DVC simultaneously learns the multivariate Gaussian posterior through the probabilistic convolutional encoder and the likelihood distribution through the probabilistic convolutional decoder, and optimizes cluster label assignment. Here, the learned multivariate Gaussian posterior captures the latent distribution of a large set of unlabeled images. Then, we perform unsupervised clustering on top of the variational latent space using a clustering loss. In this approach, the probabilistic decoder helps to prevent the distortion of data points in the latent space and to preserve the local structure of the data-generating distribution. The training process can be considered a self-training process that refines the latent space while iteratively optimizing cluster assignments. We evaluated our proposed framework on three public datasets representing different medical imaging modalities. Our experimental results show that our proposed framework generalizes better across different datasets and achieves compelling results on several medical imaging benchmarks. Thus, our approach offers potential advantages over conventional deep unsupervised learning in real-world applications. The source code of the method and of all the experiments is available publicly at: https://github.com/csfarzin/DVC
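
The clustering loss defined on top of the latent space is not spelled out in the abstract; a common formulation, shown here purely as an illustrative sketch, is the DEC-style Student's-t soft assignment with a KL divergence to a sharpened target distribution:

```python
import numpy as np

def soft_assignments(z, centroids, alpha=1.0):
    """Student's-t soft cluster assignments over latent codes z."""
    d2 = ((z[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)

def target_distribution(q):
    """Sharpened targets that emphasise confident assignments."""
    w = (q ** 2) / q.sum(axis=0)
    return w / w.sum(axis=1, keepdims=True)

def clustering_kl_loss(q, p, eps=1e-12):
    """KL(P || Q), minimised on top of the latent space."""
    return float((p * np.log((p + eps) / (q + eps))).sum(axis=1).mean())

# Toy usage: latent codes from an encoder plus 3 cluster centroids.
z = np.random.randn(100, 16)
centroids = np.random.randn(3, 16)
q = soft_assignments(z, centroids)
loss = clustering_kl_loss(q, target_distribution(q))
```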

Information Fusion of Magnetic Resonance Images and Mammographic Scans for Improved Diagnostic Management of Breast Cancer

  • Soleimani, Hossein
2021 Thesis, cited 0 times
Website
Medical imaging is critical to non-invasive diagnosis and treatment of a wide spectrum of medical conditions. However, different modalities of medical imaging employ different contrast mechanisms and, consequently, provide different depictions of bodily anatomy. As a result, there is a frequent problem where the same pathology can be detected by one type of medical imaging while being missed by others. This problem brings forward the importance of the development of image processing tools for integrating the information provided by different imaging modalities via the process of information fusion. One particularly important example of a clinical application of such tools is in the diagnostic management of breast cancer, which is a prevailing cause of cancer-related mortality in women. Currently, the diagnosis of breast cancer relies mainly on X-ray mammography and Magnetic Resonance Imaging (MRI), which are both important throughout different stages of detection, localization, and treatment of the disease. The sensitivity of mammography, however, is known to be limited in the case of relatively dense breasts, while contrast enhanced MRI tends to yield frequent 'false alarms' due to its high sensitivity. Given this situation, it is critical to find reliable ways of fusing the mammography and MRI scans in order to improve the sensitivity of the former while boosting the specificity of the latter. Unfortunately, fusing the above types of medical images is known to be a difficult computational problem. Indeed, while MRI scans are usually volumetric (i.e., 3-D), digital mammograms are always planar (2-D). Moreover, mammograms are invariably acquired under the force of compression paddles, thus making the breast anatomy undergo sizeable deformations. In the case of MRI, on the other hand, the breast is rarely constrained and imaged in a pendulous state. Finally, X-ray mammography and MRI exploit two completely different physical mechanisms, which produce distinct diagnostic contrasts that are related in a non-trivial way. Under such conditions, the success of information fusion depends on one's ability to establish spatial correspondences between mammograms and their related MRI volumes in a cross-modal cross-dimensional (CMCD) setting in the presence of spatial deformations (+SD). Solving the problem of information fusion in the CMCD+SD setting is a very challenging analytical/computational problem, still in need of efficient solutions. In the literature, there is a lack of a generic and consistent solution to the problem of fusing mammograms and breast MRIs and using their complementary information. Most of the existing MRI to mammogram registration techniques are based on a biomechanical approach which builds a specific model for each patient to simulate the effect of mammographic compression. The biomechanical model is not optimal as it ignores the common characteristics of breast deformation across different cases. Breast deformation is essentially the planarization of a 3-D volume between two paddles, which is common in all patients. Regardless of the size, shape, or internal configuration of the breast tissue, one can predict the major part of the deformation only by considering the geometry of the breast tissue. In contrast with complex standard methods relying on patient-specific biomechanical modeling, we developed a new and relatively simple approach to estimate the deformation and find the correspondences.
We consider the total deformation to consist of two components: a large-magnitude global deformation due to mammographic compression and a residual deformation of relatively smaller amplitude. We propose a much simpler way of predicting the global deformation which compares favorably to FEM in terms of its accuracy. The residual deformation, on the other hand, is recovered in a variational framework using an elastic transformation model. The proposed algorithm provides us with a computational pipeline that takes breast MRIs and mammograms as inputs and returns the spatial transformation which establishes the correspondences between them. This spatial transformation can be applied in different applications, e.g., producing 'MRI-enhanced' mammograms (which can improve the quality of surgical care) and correlating between different types of mammograms. We investigate the performance of our proposed pipeline on the application of enhancing mammograms by means of MRIs, and we have shown improvements over the state of the art.

Glioblastoma radiomics: can genomic and molecular characteristics correlate with imaging response patterns?

  • Soike, Michael H
  • McTyre, Emory R
  • Shah, Nameeta
  • Puchalski, Ralph B
  • Holmes, Jordan A
  • Paulsson, Anna K
  • Miller, Lance D
  • Cramer, Christina K
  • Lesser, Glenn J
  • Strowd, Roy E
Neuroradiology 2018 Journal Article, cited 1 times
Website

Automatic detection and segmentation of malignant lesions from [18F]FDG PET/CT images using machine learning techniques: application in lymphomas

  • Sobral, Miriam Norinha Gomes
2023 Thesis, cited 0 times
Website
New studies have arisen trying to automatically perform clinical tasks such as detection and segmentation in medical images. Manual and, sometimes, semi-automatic methods are very time-consuming and prone to inter-observer variability. This is especially significant when the lesions spread throughout the entire body, as happens with lymphomas. The main goal was to develop fully automatic deep learning-based models (U-Net and ResU-Net) for detecting and segmenting lymphoma lesions in [18F]FDG PET images. A secondary goal was to study the impact the training data have on the final performance, namely the impact of the patient's primary tumour type, the acquisition scanner, the number of images, and the use of transfer learning. The Dice similarity coefficient (DSC) and the lesion detection index (LDI) were used to study the models' performance. The training dataset contains 491 [18F]FDG PET images from the MICCAI AutoPET 2022 Challenge and 87 [18F]FDG PET images from the Champalimaud Clinical Centre (CCC). Primary tumours are lymphoma, melanoma, and lung cancer, among others. The test set contains 39 [18F]FDG PET images from lymphoma patients from the CCC. Regarding the results, using data from the lymphoma patients during training positively impacts the performance of both models on lymphoma lesion segmentation. The results also showed that when the training dataset increases in size and has images acquired on the same equipment as the images used in the test dataset, both the DSC and LDI increase. The best model using a U-Net achieved a DSC of 0.593 and an LDI of 0.186. When using a ResU-Net, the best model had a DSC of 0.524 and an LDI of 0.200. In conclusion, this study confirms the adequacy of the U-Net and ResU-Net architectures for lesion segmentation in [18F]FDG PET/CT images of patients with lymphoma. Moreover, it pointed out some clues for future training strategies.

MRI imaging texture features in prostate lesions classification

  • Sobecki, Piotr
  • Życka-Malesa, Dominika
  • Mykhalevych, Ihor
  • Sklinda, Katarzyna
  • Przelaskowski, Artur
2018 Book Section, cited 0 times
Prostate cancer (PCa) is the most commonly diagnosed cancer and a leading cause of cancer-related death among men. Computer Aided Diagnosis (CAD) systems are used to support radiologists in multiparametric Magnetic Resonance (mpMR) image-based analysis in order to avoid unnecessary biopsies and increase radiologists' specificity. CAD systems have been reported in many papers over the last decade, but the reported results have been obtained on small, private data sets, making it impossible to reproduce them or verify their conclusions. The PROSTATEx challenge organizers provided a database that contains approximately 350 MRI cases, each from a distinct patient, allowing benchmarking of various CAD systems. This paper describes a novel, deep learning-based PCa CAD system that uses statistical central moments and Haralick features extracted from MR images, integrated with anamnestic data. The developed system has been trained on a dataset consisting of 330 lesions and evaluated on the challenge dataset using the area under the receiver operating characteristic (ROC) curve (AUC). Two configurations of our method, based on statistical and Haralick features, achieved AUC values of 0.63 and 0.73. We draw conclusions from the challenge participation and discuss further improvements that could be made to the model to improve prostate lesion classification.
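
As an illustration of the Haralick feature extraction this kind of system relies on, the following sketch computes a grey-level co-occurrence matrix and a few Haralick-style properties with scikit-image (the patch is a random placeholder; older scikit-image releases spell the functions greycomatrix/greycoprops):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Hypothetical 8-bit lesion patch extracted from an MR image.
patch = (np.random.rand(64, 64) * 255).astype(np.uint8)

# Grey-level co-occurrence matrix at distance 1 in four directions.
glcm = graycomatrix(patch, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

# A few Haralick-style properties, averaged over the four directions.
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```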

Multisite Technical and Clinical Performance Evaluation of Quantitative Imaging Biomarkers from 3D FDG PET Segmentations of Head and Neck Cancer Images

  • Smith, Brian J
  • Buatti, John M
  • Bauer, Christian
  • Ulrich, Ethan J
  • Ahmadvand, Payam
  • Budzevich, Mikalai M
  • Gillies, Robert J
  • Goldgof, Dmitry
  • Grkovski, Milan
  • Hamarneh, Ghassan
  • Kinahan, Paul E
  • Muzi, John P
  • Muzi, Mark
  • Laymon, Charles M
  • Mountz, James M
  • Nehmeh, Sadek
  • Oborski, Matthew J
  • Zhao, Binsheng
  • Sunderland, John J
  • Beichel, Reinhard R
Tomography 2020 Journal Article, cited 1 times
Website
Quantitative imaging biomarkers (QIBs) provide medical image-derived intensity, texture, shape, and size features that may help characterize cancerous tumors and predict clinical outcomes. Successful clinical translation of QIBs depends on the robustness of their measurements. Biomarkers derived from positron emission tomography images are prone to measurement errors owing to differences in image processing factors such as the tumor segmentation method used to define volumes of interest over which to calculate QIBs. We illustrate a new Bayesian statistical approach to characterize the robustness of QIBs to different processing factors. Study data consist of 22 QIBs measured on 47 head and neck tumors in 10 positron emission tomography/computed tomography scans segmented manually and with semiautomated methods used by 7 institutional members of the NCI Quantitative Imaging Network. QIB performance is estimated and compared across institutions with respect to measurement errors and power to recover statistical associations with clinical outcomes. Analysis findings summarize the performance impact of different segmentation methods used by Quantitative Imaging Network members. Robustness of some advanced biomarkers was found to be similar to conventional markers, such as maximum standardized uptake value. Such similarities support current pursuits to better characterize disease and predict outcomes by developing QIBs that use more imaging information and are robust to different processing factors. Nevertheless, to ensure reproducibility of QIB measurements and measures of association with clinical outcomes, errors owing to segmentation methods need to be reduced.

A Novel Noise Removal Method for Lung CT SCAN Images Using Statistical Filtering Techniques

  • Sivakumar, S
  • Chandrasekar, C
International Journal of Algorithms Design and Analysis 2015 Journal Article, cited 0 times

A STUDY ON IMAGE DENOISING FOR LUNG CT SCAN IMAGES

  • Sivakumar, S
  • Chandrasekar, C
International Journal of Emerging Technologies in Computational and Applied Sciences 2014 Journal Article, cited 1 times
Website
Medical imaging is the technique and process used to create images of the human body for clinical purposes and diagnosis. Medical imaging is often perceived to designate the set of techniques that non-invasively produce images of the internal aspect of the body. The x-ray computed tomographic (CT) scanner has made it possible to detect the presence of lesions of very low contrast. The noise in the reconstructed CT images is significantly reduced through the use of efficient x-ray detectors and electronic processing. The CT reconstruction technique almost completely eliminates the superposition of anatomic structures, leading to a reduction of "structural" noise. It is the random noise in a CT image that ultimately limits the ability of the radiologist to discriminate between two regions of different density. Because of its unpredictable nature, such noise cannot be completely eliminated from the image and will always lead to some uncertainty in the interpretation of the image. The noise present in the images may appear as additive or multiplicative components, and the main purpose of denoising is to remove these noisy components while preserving the important signal as much as possible. In this paper, we analyze denoising filters such as the mean, median, midpoint and Wiener filters, together with three modified filter approaches, for lung CT scan images in order to remove the noise present in the images, and we compare them using image quality parameters.
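
The four classical filters compared in this paper are all available off the shelf in SciPy; a minimal sketch (the CT slice here is a random placeholder) might look like:

```python
import numpy as np
from scipy.ndimage import (median_filter, uniform_filter,
                           minimum_filter, maximum_filter)
from scipy.signal import wiener

ct_slice = np.random.rand(512, 512)   # placeholder for a lung CT slice

mean_out   = uniform_filter(ct_slice, size=3)          # mean filter
median_out = median_filter(ct_slice, size=3)           # median filter
mid_out    = 0.5 * (minimum_filter(ct_slice, size=3) +
                    maximum_filter(ct_slice, size=3))  # midpoint filter
wiener_out = wiener(ct_slice, mysize=3)                # adaptive Wiener
```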

Lung nodule detection using fuzzy clustering and support vector machines

  • Sivakumar, S
  • Chandrasekar, C
International Journal of Engineering and Technology 2013 Journal Article, cited 43 times
Website
Lung cancer is the primary cause of tumor deaths for both sexes in most countries. A lung nodule, an abnormality which may lead to lung cancer, is detected by various medical imaging techniques like X-ray, Computerized Tomography (CT), etc. Detection of lung nodules is a challenging task since the nodules are commonly attached to blood vessels. Many studies have shown that early diagnosis is the most efficient way to cure this disease. This paper aims to develop an efficient lung nodule detection scheme by performing nodule segmentation through fuzzy-based clustering models and classification using a machine learning technique called the Support Vector Machine (SVM). This methodology evaluates three different types of kernels; among these, the RBF kernel gives the best classification performance.

Lungs image segmentation through weighted FCM

  • Sivakumar, S
  • Chandrasekar, C
2012 Conference Proceedings, cited 8 times
Website

Brain tumor segmentation approach based on the extreme learning machine and significantly fast and robust fuzzy C-means clustering algorithms running on Raspberry Pi hardware

  • ŞİŞİK, Fatih
  • Sert, Eser
Medical Hypotheses 2020 Journal Article, cited 0 times
Automatic decision support systems have gained importance in the health sector in recent years. In parallel with recent developments in the fields of artificial intelligence and image processing, embedded systems are also used in decision support systems for tumor diagnosis. The extreme learning machine (ELM) is a recently developed, efficient algorithm which can quickly diagnose tumors using machine learning techniques. Similarly, the significantly fast and robust fuzzy C-means clustering algorithm (FRFCM) is a novel and fast algorithm which displays high performance. In the present study, a brain tumor segmentation approach is proposed based on the extreme learning machine and the significantly fast and robust fuzzy C-means clustering algorithm (BTS-ELM-FRFCM), running on Raspberry Pi (RPi) hardware. The present study mainly aims to introduce a new segmentation system hardware containing new algorithms and offering a high level of accuracy to the health sector. RPis are useful mobile devices due to their cost-effectiveness and capable hardware. 3200 training images were used to train the ELM in the present study, and 20 MRI images were used for the testing process. The figure of merit (FOM), Jaccard similarity coefficient (JSC) and Dice index were used in order to evaluate the performance of the proposed approach. In addition, the proposed method was compared with brain tumor segmentation based on support vector machines (BTS-SVM), brain tumor segmentation based on fuzzy C-means (BTS-FCM) and brain tumor segmentation based on self-organizing maps and k-means (BTS-SOM). The statistical analysis of the FOM, JSC and Dice results obtained using the four different approaches indicated that BTS-ELM-FRFCM displayed the highest performance. Thus, it can be concluded that the embedded system designed in the present study can perform brain tumor segmentation with a high accuracy rate.

Improving lung cancer detection using faster region‐based convolutional neural network aided with fuzzy butterfly optimization algorithm

  • Sinthia, P.
  • Malathi, M.
  • K, Anitha
  • Suresh Anand, M.
Concurrency and Computation: Practice and Experience 2022 Journal Article, cited 0 times
Website
Lung cancer is the deadliest type of cancer and is caused by genetic variations in lung tissues. Other causes of lung cancer are alcohol, smoking, and hazardous gas exposure. The diagnosis of lung cancer is an intricate task, and early detection can help patients get the right treatment in advance. The application of a computer-aided diagnosis process helps to predict lung cancer earlier; nonetheless, it does not provide better accuracy. The overfitting tendency of features and the dimensionality of lung cancer data can prevent it from obtaining maximum accuracy. Hence, we proposed a novel faster region convolutional neural network (RCNN)-based fuzzy butterfly optimization algorithm (FBOA) to achieve better prediction accuracy and effectiveness. The proposed Faster RCNN can locate lung cancer swiftly and effectively, and the FBOA approach can be used to perform two-stage classification. The fuzzy rules used in the FBOA can be utilized to find the severity of the lung cancer and effectively differentiate the benign and malignant stages. The experimental analyses are performed in MATLAB simulation. Preprocessing of the images is performed with different MATLAB tools to format the images as required. The Cancer Imaging Archive (TCIA) dataset is utilized to analyze the performance of the proposed method, which is compared with various state-of-the-art works. The performance of the proposed method is evaluated using different evaluation metrics, namely precision, recall, F-measure, and accuracy, attaining 99%, 98%, 99%, and 97%, respectively. Thus, our proposed method outperforms all the other approaches.
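
The four reported metrics can be reproduced for any set of predictions with scikit-learn; a minimal sketch with placeholder labels:

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

y_true = [1, 0, 1, 1, 0, 1, 0, 1]   # placeholder labels (1 = malignant)
y_pred = [1, 0, 1, 0, 0, 1, 0, 1]   # placeholder predictions

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("F-measure:", f1_score(y_true, y_pred))
print("accuracy: ", accuracy_score(y_true, y_pred))
```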

Simultaneous segmentation and correspondence improvement using statistical modes

  • Sinha, Ayushi
  • Reiter, Austin
  • Leonard, Simon
  • Ishii, Masaru
  • Hager, Gregory D
  • Taylor, Russell H
2017 Conference Proceedings, cited 3 times
Website

Recovering Physiological Changes in Nasal Anatomy with Confidence Estimates

  • Sinha, A.
  • Liu, X.
  • Ishii, M.
  • Hager, G. D.
  • Taylor, Russell H
2019 Conference Proceedings, cited 0 times
Purpose: Clinical examinations that involve endoscopic exploration of the nasal cavity and sinuses often do not have a reference preoperative image, like a computed tomography (CT) scan, to provide structural context to the clinician. The aim of this work is to provide structural context during clinical exploration without requiring additional CT acquisition. Methods: We present a method for registration during clinical endoscopy in the absence of CT scans by making use of shape statistics from past CT scans. Using a deformable registration algorithm that uses these shape statistics along with dense point clouds from video, we simultaneously achieve two goals: (1) register the statistically mean shape of the target anatomy with the video point cloud, and (2) estimate patient shape by deforming the mean shape to fit the video point cloud. Finally, we use statistical tests to assign confidence to the computed registration. Results: We are able to achieve submillimeter errors in registrations and patient shape reconstructions using simulated data. We establish and evaluate the confidence criteria for our registrations using simulated data. Finally, we evaluate our registration method on in vivo clinical data and assign confidence to these registrations using the criteria established in simulation. All registrations that are not rejected by our criteria produce submillimeter residual errors. Conclusion: Our deformable registration method can produce submillimeter registrations and reconstructions as well as statistical scores that can be used to assign confidence to the registrations.

Endoscopic navigation in the clinic: registration in the absence of preoperative imaging

  • Sinha, A.
  • Ishii, M.
  • Hager, G. D.
  • Taylor, R. H.
Int J Comput Assist Radiol Surg 2019 Journal Article, cited 0 times
Website
PURPOSE: Clinical examinations that involve endoscopic exploration of the nasal cavity and sinuses often do not have a reference preoperative image, like a computed tomography (CT) scan, to provide structural context to the clinician. The aim of this work is to provide structural context during clinical exploration without requiring additional CT acquisition. METHODS: We present a method for registration during clinical endoscopy in the absence of CT scans by making use of shape statistics from past CT scans. Using a deformable registration algorithm that uses these shape statistics along with dense point clouds from video, we simultaneously achieve two goals: (1) register the statistically mean shape of the target anatomy with the video point cloud, and (2) estimate patient shape by deforming the mean shape to fit the video point cloud. Finally, we use statistical tests to assign confidence to the computed registration. RESULTS: We are able to achieve submillimeter errors in registrations and patient shape reconstructions using simulated data. We establish and evaluate the confidence criteria for our registrations using simulated data. Finally, we evaluate our registration method on in vivo clinical data and assign confidence to these registrations using the criteria established in simulation. All registrations that are not rejected by our criteria produce submillimeter residual errors. CONCLUSION: Our deformable registration method can produce submillimeter registrations and reconstructions as well as statistical scores that can be used to assign confidence to the registrations.

The deformable most-likely-point paradigm

  • Sinha, A.
  • Billings, S. D.
  • Reiter, A.
  • Liu, X.
  • Ishii, M.
  • Hager, G. D.
  • Taylor, R. H.
Med Image Anal 2019 Journal Article, cited 1 times
Website
In this paper, we present three deformable registration algorithms designed within a paradigm that uses 3D statistical shape models to accomplish two tasks simultaneously: 1) register point features from previously unseen data to a statistically derived shape (e.g., mean shape), and 2) deform the statistically derived shape to estimate the shape represented by the point features. This paradigm, called the deformable most-likely-point paradigm, is motivated by the idea that generative shape models built from available data can be used to estimate previously unseen data. We developed three deformable registration algorithms within this paradigm using statistical shape models built from reliably segmented objects with correspondences. Results from several experiments show that our algorithms produce accurate registrations and reconstructions in a variety of applications with errors up to CT resolution on medical datasets. Our code is available at https://github.com/AyushiSinha/cisstICP.

Deformable registration using shape statistics with applications in sinus surgery

  • Sinha, Ayushi
2018 Thesis, cited 3 times
Website
Evaluating anatomical variations in structures like the nasal passage and sinuses is challenging because their complexity can often make it difficult to differentiate normal and abnormal anatomy. By statistically modeling these variations and estimating individual patient anatomy using these models, quantitative estimates of similarity or dissimilarity between the patient and the sample population can be made. In order to do this, a spatial alignment, or registration, between patient anatomy and the statistical model must first be computed. In this dissertation, a deformable most-likely-point paradigm is introduced that incorporates statistical variations into probabilistic feature-based registration algorithms. This paradigm is a variant of the most-likely-point paradigm, which incorporates feature uncertainty into the registration process. The deformable registration algorithms optimize the probability of feature alignment as well as the probability of model deformation, allowing statistical models of anatomy to estimate, for instance, structures seen in endoscopic video without the need for patient-specific computed tomography (CT) scans. The probabilistic framework also enables the algorithms to assess the quality of registrations produced, allowing users to know when an alignment can be trusted. This dissertation covers three algorithms built within this paradigm and evaluated in simulation and in-vivo experiments.

Brain Tumor Extraction from MRI Using Clustering Methods and Evaluation of Their Performance

  • Singh, Vipula
  • Tunga, P. Prakash
2019 Conference Paper, cited 0 times
Website
In this paper, we consider the extraction of brain tumors from MRI (Magnetic Resonance Imaging) images using K-means, fuzzy c-means and region-growing clustering methods. After extraction, various parameters related to the performance of the clustering methods, as well as parameters describing the tumor, are calculated. MRI is a non-invasive method which provides a view of the structural features of tissues in the body at very high resolution (typically on the 100 μm scale). Therefore, it is advantageous to base the detection and segmentation of brain tumors on MRI. This work is a step toward replacing the manual identification and separation of tumor structures from brain MRI with computer-aided techniques, which would add great value with respect to accuracy, reproducibility, diagnosis and treatment planning. The brain tumor separated from the original image is referred to as the Region of Interest (ROI), and the remaining portion of the original image is referred to as the Non-Region of Interest (NROI).
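
As a sketch of the K-means variant of this extraction step (the fuzzy c-means and region-growing variants follow the same pattern), one can cluster voxel intensities and take the brightest cluster as a crude ROI candidate; all data here are placeholders:

```python
import numpy as np
from sklearn.cluster import KMeans

slice_2d = np.random.rand(256, 256)      # placeholder for an MRI slice
intensities = slice_2d.reshape(-1, 1)    # one sample per voxel

km = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = km.fit_predict(intensities).reshape(slice_2d.shape)

# Treat the brightest cluster as a crude ROI candidate; the rest is NROI.
means = [slice_2d[labels == k].mean() for k in range(4)]
roi_mask = labels == int(np.argmax(means))
```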

A novel deep learning-based technique for detecting prostate cancer in MRI images

  • Singh, Sanjay Kumar
  • Sinha, Amit
  • Singh, Harikesh
  • Mahanti, Aniket
  • Patel, Abhishek
  • Mahajan, Shubham
  • Pandit, Amit Kant
  • Varadarajan, Vijayakumar
Multimedia Tools and Applications 2023 Journal Article, cited 0 times
Website
In the Western world, prostate cancer is a major cause of death in males. Magnetic Resonance Imaging (MRI) is widely used for the detection of prostate cancer, which makes it an active area of research. The proposed method uses a deep learning framework for the detection of prostate cancer based on Gleason grading of histological images. A 3D convolutional neural network is used to locate the affected region and predict it with the help of the epithelial and Gleason grading networks. The proposed model achieves state-of-the-art performance while detecting the epithelial and Gleason scores simultaneously. Performance was measured by considering all MRI slices, MRI volumes within the test fold, and prostate cancer segmentation, with an endorectal coil used to collect the prostate MRI images for the 3D CNN network. Experimentally, the proposed deep learning approach achieved an overall specificity of 85%, an accuracy of 87%, and a sensitivity of 89% at the patient level for the targeted MRI images of the SPIE-AAPM-NCI Prostate challenge dataset.

Evaluation of reader variability in the interpretation of follow-up CT scans at lung cancer screening

  • Singh, Satinder
  • Pinsky, Paul
  • Fineberg, Naomi S
  • Gierada, David S
  • Garg, Kavita
  • Sun, Yanhui
  • Nath, P Hrudaya
Radiology 2011 Journal Article, cited 47 times
Website

Reader variability in identifying pulmonary nodules on chest radiographs from the national lung screening trial

  • Singh, Satinder
  • Gierada, David S
  • Pinsky, Paul
  • Sanders, Colleen
  • Fineberg, Naomi
  • Sun, Yanhui
  • Lynch, David
  • Nath, Hrudaya
Journal of Thoracic Imaging 2012 Journal Article, cited 4 times
Website

Brain Tumor Segmentation Using Deep Learning Technique

  • Singh, Oyesh Mann
2017 Thesis, cited 0 times
Website

Performance analysis of various machine learning-based approaches for detection and classification of lung cancer in humans

  • Singh, Gur Amrit Pal
  • Gupta, PK
Neural Computing and Applications 2018 Journal Article, cited 0 times
Website
Lung cancer is one of the most common causes of death among all cancer-related diseases (Cancer Research UK in Cancer mortality for common cancers. http://www.cancerresearchuk.org/health-professional/cancer-statistics/mortality/common-cancers-compared, 2017). It is primarily diagnosed by performing a scan analysis of the patient's lungs. This scan analysis could be an X-ray, CT scan, or MRI. Automated classification of lung cancer is a difficult task, owing to the varying mechanisms used for imaging patients' lungs. Image processing and machine learning approaches have shown great potential for the detection and classification of lung cancer. In this paper, we demonstrate an effective approach for the detection and classification of lung cancer-related CT scan images into benign and malignant categories. The proposed approach first processes these images using image processing techniques, and then supervised learning algorithms are used for their classification. We extracted texture features along with statistical features and supplied the various extracted features to classifiers. We used seven different classifiers: the k-nearest neighbors classifier, support vector machine classifier, decision tree classifier, multinomial naive Bayes classifier, stochastic gradient descent classifier, random forest classifier, and multi-layer perceptron (MLP) classifier. We used a dataset of 15750 clinical images, consisting of 6910 benign and 8840 malignant lung cancer-related images, to train and test these classifiers. In the obtained results, the accuracy of the MLP classifier was highest, at 88.55%, in comparison with the other classifiers.
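
A minimal sketch of the seven-classifier comparison described above, using scikit-learn with a synthetic feature matrix standing in for the texture and statistical features (not the authors' pipeline):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import SGDClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# Placeholder feature matrix standing in for texture/statistical features.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X = X - X.min()                      # MultinomialNB needs non-negative input
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

classifiers = {
    "kNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Multinomial NB": MultinomialNB(),
    "SGD": SGDClassifier(random_state=0),
    "Random forest": RandomForestClassifier(random_state=0),
    "MLP": MLPClassifier(max_iter=1000, random_state=0),
}
for name, clf in classifiers.items():
    acc = accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
    print(f"{name}: {acc:.3f}")
```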

Multimodal Brain Tumor Segmentation Using Modified UNet Architecture

  • Singh, Gaurav
  • Phophalia, Ashish
2022 Book Section, cited 0 times
Segmentation of brain tumors is challenging due to the predominance of healthy or background regions compared to tumor regions, and because the tumor region itself is divided into edema, tumor core and non-enhancing regions, which makes it hard to segment. Given the scarcity of such data, it becomes more challenging. In this paper, we built a 3D-UNet-based architecture for the multimodal brain tumor segmentation task. We report results on the BraTS 2021 Validation and Test datasets. We achieved Dice values of 0.87, 0.76 and 0.73 on the whole tumor region, tumor core region and enhancing part, respectively, for the Validation data, and 0.73, 0.67 and 0.63 on the whole tumor region, tumor core region and enhancing part, respectively, for the Test data.

Image Segmentation and Pre-Processing for Lung Cancer Detection in Humans Based on Deep-Learning

  • Singh, Drishti
  • Singh, Jaspreet
2023 Conference Paper, cited 0 times
When it comes to cancer and its linked disorders, lung cancer is consistently ranked among the top causes of mortality. The primary method for making the diagnosis is to perform a scan analysis of the patient's lungs, which may involve an MRI, CT scan, or X-ray. Due to the wide variety of imaging techniques that can be applied to a patient's lungs, the automated classification of lung cancer is a challenging task. Methods involving machine learning, deep learning and image processing have demonstrated significant promise for the classification and identification of lung cancer. In this research, we demonstrate a successful strategy for detecting and classifying malignant and benign lung cancer-related CT scan images. The proposed method first processes the images with image processing techniques and then classifies them using supervised learning algorithms. We extracted statistical features as well as textural characteristics and fed the extracted features to multiple classifiers. We utilized a total of seven distinct classifiers: the KNN classifier, SVM classifier, multinomial naive Bayes classifier, decision tree, SGD (stochastic gradient descent), MLP (multi-layer perceptron) and random forest. When training and testing these classifiers, we employed a dataset that included both benign and malignant lung cancer-related images. In the findings that were collected, the MLP classifier achieved the highest accuracy, approximately 88 percent, compared to the other classifiers.

Segmentation of prostate zones using probabilistic atlas-based method with diffusion-weighted MR images

  • Singh, D.
  • Kumar, V.
  • Das, C. J.
  • Singh, A.
  • Mehndiratta, A.
Comput Methods Programs Biomed 2020 Journal Article, cited 10 times
Website
BACKGROUND AND OBJECTIVE: Accurate segmentation of the prostate and its zones constitutes an essential preprocessing step for computer-aided diagnosis and detection systems for prostate cancer (PCa) using diffusion-weighted imaging (DWI). However, the low signal-to-noise ratio and high variability of prostate anatomic structures make its segmentation using DWI challenging. We propose a semi-automated framework that segments the prostate gland and its zones simultaneously using DWI. METHODS: In this paper, the Chan-Vese active contour model along with a morphological opening operation was used for segmentation of the prostate gland. Segmentation of the prostate into the peripheral zone (PZ) and transition zone (TZ) was then carried out using an in-house developed probabilistic atlas with a partial volume (PV) correction algorithm. The study cohort included an MRI dataset of 18 patients (n = 18) as our dataset, and the methodology was also independently evaluated using 15 MRI scans (n = 15) from the QIN-PROSTATE-Repeatability dataset. The atlas for the zones of the prostate gland was constructed using the data of twelve patients from our cohort. Three-fold cross-validation was performed with 10 repetitions, thus a total of 30 instances of training and testing were performed on our dataset, followed by independent testing on the QIN-PROSTATE-Repeatability dataset. The Dice similarity coefficient (DSC), Jaccard coefficient (JC), and accuracy were used for quantitative assessment of the segmentation results with respect to boundaries delineated manually by an expert radiologist. A paired t-test was performed to evaluate the improvement in zonal segmentation performance with the proposed PV correction algorithm. RESULTS: For our dataset, the proposed segmentation methodology produced improved segmentation with a DSC of 90.76 +/- 3.68%, JC of 83.00 +/- 5.78%, and accuracy of 99.42 +/- 0.36% for the prostate gland; a DSC of 77.73 +/- 2.76%, JC of 64.46 +/- 3.43%, and accuracy of 82.47 +/- 2.22% for the PZ; and a DSC of 86.05 +/- 1.50%, JC of 75.80 +/- 2.10%, and accuracy of 91.67 +/- 1.56% for the TZ. The segmentation performance for the QIN-PROSTATE-Repeatability dataset was a DSC of 85.50 +/- 4.43%, JC of 75.00 +/- 6.34%, and accuracy of 81.52 +/- 5.55% for the prostate gland; a DSC of 74.40 +/- 1.79%, JC of 59.53 +/- 8.70%, and accuracy of 80.91 +/- 5.16% for the PZ; and a DSC of 85.80 +/- 5.55%, JC of 74.87 +/- 7.90%, and accuracy of 90.59 +/- 3.74% for the TZ. With the implementation of the PV correction algorithm, statistically significant (p<0.05) improvements were observed in all the metrics (DSC, JC, and accuracy) for both prostate zones, the PZ and TZ. CONCLUSIONS: The proposed segmentation methodology is stable, accurate, and easy to implement for segmentation of the prostate gland and its zones (PZ and TZ). The atlas-based segmentation framework with the PV correction algorithm can be incorporated into a computer-aided diagnostic system for PCa localization and treatment planning.
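
A minimal sketch of the gland-segmentation step (Chan-Vese followed by a morphological opening) using scikit-image; the slice is a placeholder, and the max_num_iter parameter name follows recent scikit-image releases (older ones use max_iter):

```python
import numpy as np
from skimage.segmentation import chan_vese
from skimage.morphology import binary_opening, disk

mri_slice = np.random.rand(128, 128)   # placeholder for a DWI slice (float)

# Chan-Vese active contour yields a binary gland mask...
mask = chan_vese(mri_slice, mu=0.25, lambda1=1.0, lambda2=1.0,
                 max_num_iter=200)

# ...which is then cleaned up with a morphological opening.
gland = binary_opening(mask, disk(3))
```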

Quantitative evaluation of denoising techniques of lung computed tomography images: an experimental investigation

  • Singh, Bikesh Kumar
  • Nair, Neeti
  • Falgun, Patle Ashwini
  • Jain, Pankaj
International Journal of Biomedical Engineering and Technology 2022 Journal Article, cited 0 times
Website
Appropriate selection of a denoising method is a critical component of lung computed tomography (CT)-based computer-aided diagnosis (CAD) systems, since noise and artefacts may deteriorate image quality significantly, thereby leading to incorrect diagnosis. This study presents a comparative investigation of various techniques used for denoising lung CT images. Current practices, evaluation measures, research gaps and future challenges in this area are also discussed. Experiments on 20 real-time lung CT images indicate that the Gaussian filter with a 3 × 3 window size outperformed the others, achieving peak signal-to-noise ratio (PSNR), Pratt's figure of merit (PFOM), signal-to-noise ratio (SNR) and root mean square error (RMSE) values of 45.476, 97.964, 32.811, 0.948 and 0.008, respectively. Further, this approach also demonstrates good edge retrieval efficiency. Future work is needed to evaluate various filters in clinical practice along with segmentation, feature extraction, and classification of lung nodules in CT images.
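
For reference, the PSNR and RMSE quality measures used in this comparison can be computed as follows (a sketch assuming NumPy arrays and an 8-bit intensity range):

```python
import numpy as np

def rmse(reference, denoised):
    return float(np.sqrt(np.mean((reference - denoised) ** 2)))

def psnr(reference, denoised, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 20 * log10(MAX / RMSE)."""
    err = rmse(reference, denoised)
    return float("inf") if err == 0 else 20.0 * np.log10(max_val / err)
```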

A Novel Imaging-Genomic Approach to Predict Outcomes of Radiation Therapy

  • Singh, Apurva
  • Goyal, Sharad
  • Rao, Yuan James
  • Loew, Murray
2019 Thesis, cited 0 times
Introduction: Tumor regions are populated by various cellular species. Intra-tumor radiogenomic heterogeneity can be attributed to factors including variations in the blood flow to the different parts of the tumor and variations in gene mutation frequencies. This heterogeneity is further propagated by cancer cells, which adopt an "evolutionarily enlightened" growth approach. This growth, which focuses on developing an adaptive mechanism to progressively develop a strong resistance to therapy, follows a unique pattern in each patient. This makes the development of a uniform treatment technique very challenging and makes the concept of "precision medicine," which is developed using information unique to each patient, crucial to the development of effective cancer treatment methods. Our study aims to determine whether information present in the heterogeneity of tumor regions in the pre-treatment PET scans of patients and in their gene mutation status can measure the efficacy of radiation therapy in their treatment. We wish to develop a scheme which could predict the effectiveness of therapy at the pre-treatment stage, reduce the unnecessary exposure of patients to radiation which would ultimately not be helpful in curing them, and thus help in choosing alternative cancer therapy measures for the patients under consideration. Materials and methods: Our radiomics analysis was developed using PET scans of 20 patients from the HNSCC database from TCIA (The Cancer Imaging Archive). Clinical data were used to divide the patients into two categories based on the recurrence status of the tumor. Radiation structures are overlain on the PET scans for tumor delineation. Texture features extracted from tumor regions are reduced using a correlation matrix-based technique and are classified by methods including weighted KNN, linear SVM and bagged trees. Slice-wise classification results are computed, treating each slice as a 2D image and treating the collection of slices as a 3D volume. Patient-wise results are computed by a voting scheme which assigns to each patient the class label possessed by more than half of its slices. After the voting is complete, the assigned labels are compared to the actual labels to compute the patient-wise classification accuracies. This workflow was tested on a group of 53 patients from a second TCIA database, Head-Neck-PET-CT. We further proceeded to develop a radiogenomic workflow by combining gene expression features with tumor texture features for a group of 11 patients from our third database, TCGA-HNSC. We developed a geometric transform-based database augmentation method and used it to generate PET scans from images in the existing dataset. To evaluate our analysis, we decided to test our workflow on patients with tumors at different sites, using scans of different modalities. We included PET scans of 24 lung cancer patients (15 from the TCGA-LUSC (Lung Squamous Cell Carcinoma) and 9 from the TCGA-LUAD (Lung Adenocarcinoma) databases). We used wavelet features along with the existing group of texture features to improve the classification scores. Further, we used non-rigid transform-based techniques for database augmentation. We also included MR scans of 54 cervical cancer patients (from the TCGA-CESC (Cervical Squamous Cell Carcinoma and Endocervical Carcinoma) database) in our study and employed a Fisher-based selection technique for the reduction of the high-dimensional feature space.
Results: The classification accuracy obtained by the 2D and 3D texture analysis is about 70% for slice-wise classification and 80% for patient-wise classification for the head and neck cancer patients (HNSCC and Head-Neck-PET-CT databases). The overall classification accuracies obtained from the transformed tumor slices are comparable to those from the original tumor slices; thus, geometric transformation is an effective method for database augmentation. The addition of binary genomic features to the texture features (TCGA-HNSC patients) increases the classification accuracies (from 80% to 100% for 2D and from 60% to 100% for 3D patient-wise classification). The classification accuracies increase from 58% to 84% (2D slice-wise) and from 58% to 70% (2D patient-wise) in the case of lung cancer patients with the inclusion of wavelet features in the existing texture feature group and by augmenting the database (non-rigid transformation) to include an equal number of patients and slices in the recurrent and non-recurrent categories. The accuracies are about 64% for 2D slice-wise and patient-wise classification for cervical cancer patients (using correlation matrix-based feature selection) and increase to about 72% using Fisher-based selection criteria. Conclusion: Our study has introduced the novel approach of fusing the information present in The Cancer Imaging Archive (TCIA) and TCGA to develop a combined imaging phenotype and genotype expression for therapy personalization. Texture measures provide a measure of tumor heterogeneity, which can be used to predict recurrence status. Information from gene expression patterns of the patients, when combined with texture measures, provides a unique radiogenomic feature which substantially improves therapy response prediction scores.
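
A minimal sketch of the correlation matrix-based feature reduction mentioned in the methods, greedily dropping one feature from every highly correlated pair (the threshold and data are placeholders):

```python
import numpy as np
import pandas as pd

def drop_correlated(features: pd.DataFrame, threshold: float = 0.9):
    """Greedily drop one feature from every pair with |r| > threshold."""
    corr = features.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [c for c in upper.columns if (upper[c] > threshold).any()]
    return features.drop(columns=to_drop)
```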

Tumor Heterogeneity and Genomics to Predict Radiation Therapy Outcome for Head-and-Neck Cancer: A Machine Learning Approach

  • Singh, A.
  • Goyal, S.
  • Rao, Y. J.
  • Loew, M.
International Journal of Radiation Oncology*Biology*Physics 2019 Journal Article, cited 0 times
Website
Head and Neck Squamous Cell Carcinoma (HNSCC) is usually treated with Radiation Therapy (RT). Recurrence of the tumor occurs in some patients. The purpose of this study was to determine whether information present in the heterogeneity of tumor regions in the pre-treatment PET scans of HNSCC patients can be used to predict recurrence. We then extended our study to include gene mutation information of a patient group to assess its value as an additional feature to determine treatment efficacy. Materials/Methods: Pre-treatment PET scans of 20 patients from the first database (HNSCC), included in The Cancer Imaging Archive (TCIA), were analyzed. The follow-up duration for those patients varied between two and ten years. Accompanying clinical data were used to divide the patients into two categories according to whether they had a recurrence of the tumor. Radiation structures included in the database were overlain on the PET scans to delineate the tumor, whose heterogeneity is measured by texture analysis. The classification is carried out in two ways: making a decision for each image slice, and treating the collection of slices as a 3D volume. This approach was tested on an independent set of 53 patients from a second TCIA database (Head-Neck-PET-CT [HNPC]). The Cancer Genome Atlas (TCGA) identified frequent mutations in the expression of PIK3CA, CDKN2A and TP53 genes in HNSCC patients. We combined gene expression features with texture features for 11 patients of the third database (TCGA-HNSC), and re-evaluated the classification accuracies.

Automatic Lung Segmentation for the Inclusion of Juxtapleural Nodules and Pulmonary Vessels using Curvature based Border Correction

  • Singadkar, Ganesh
  • Mahajan, Abhishek
  • Thakur, Meenakshi
  • Talbar, Sanjay
Journal of King Saud University-Computer and Information Sciences 2018 Journal Article, cited 1 times
Website

Preoperative CT and survival data for patients undergoing resection of colorectal liver metastases

  • Simpson, A. L.
  • Peoples, J.
  • Creasy, J. M.
  • Fichtinger, G.
  • Gangai, N.
  • Keshavamurthy, K. N.
  • Lasso, A.
  • Shia, J.
  • D'Angelica, M. I.
  • Do, R. K. G.
Sci Data 2024 Journal Article, cited 0 times
Website
The liver is a common site for the development of metastases in colorectal cancer. Treatment selection for patients with colorectal liver metastases (CRLM) is difficult; although hepatic resection will cure a minority of CRLM patients, recurrence is common. Reliable preoperative prediction of recurrence could therefore be a valuable tool for physicians in selecting the best candidates for hepatic resection in the treatment of CRLM. It has been hypothesized that evidence for recurrence could be found via quantitative image analysis on preoperative CT imaging of the future liver remnant before resection. To investigate this hypothesis, we have collected preoperative hepatic CT scans, clinicopathologic data, and recurrence/survival data, from a large, single-institution series of patients (n = 197) who underwent hepatic resection of CRLM. For each patient, we also created segmentations of the liver, vessels, tumors, and future liver remnant. The largest of its kind, this dataset is a resource that may aid in the development of quantitative imaging biomarkers and machine learning models for the prediction of post-resection hepatic recurrence of CRLM.

Open osteology: Medical imaging databases as skeletal collections

  • Simmons-Ehrhardt, Terrie
Forensic Imaging 2021 Journal Article, cited 1 times
Website
Highlights:
  • Medical imaging datasets can be used as skeletal reference collections
  • Computed tomography data from TCIA can be accessed via 3D Slicer
  • Many tools in 3D Slicer support skeletal analyses and dissemination products
  • 3D bone models can be used for education, research, training, web-based reference
  • Bone modeling in 3D Slicer will support common workflows and shareable datasets
Abstract: The increasing availability of de-identified medical image databases, especially of computed tomography (CT) scans, presents an opportunity for "open osteology," or the establishment of new skeletal reference collections. The number of free and/or open-source software packages for generating three-dimensional (3D) CT models, such as 3D Slicer, reduces financial obstacles to working with CT data and encourages the development of common workflows and datasets. The direct link to the Cancer Imaging Archive from 3D Slicer facilitates access to medical imaging datasets to support education and research with virtual skeletal data. Generation of 3D models enables computational methods for skeletal analyses and can also lead to the generation of virtual libraries representing large amounts of human skeletal variation. 3D printing of 3D CT models can supplement physical skeletal collections for the classroom and research beyond the standard commercially available specimens. Web-based technologies support 3D model and CT volume visualization, interaction, and measurement, increasing opportunities for dissemination and collaboration as well as the possible integration of 3D data as references for skeletal analysis tools. Increasing awareness and usage of pre-existing free and open-source resources applicable to forensic anthropology will facilitate method/workflow development, validation, and eventually standardization. This presentation will discuss online sources of skeletal data, outline methods for processing CT scans with free software into 3D digital models and discuss web-based technologies and repositories that allow interaction with 3D skeletal models. The demonstration of these methods will contribute to discussions on the expansion of virtual anthropology and open osteology.

Multi-stage Deep Layer Aggregation for Brain Tumor Segmentation

  • Silva, Carlos A.
  • Pinto, Adriano
  • Pereira, Sérgio
  • Lopes, Ana
2021 Book Section, cited 0 times
Gliomas are among the most aggressive and deadly brain tumors. This paper details the proposed Deep Neural Network architecture for brain tumor segmentation from Magnetic Resonance Images. The architecture consists of a cascade of three Deep Layer Aggregation neural networks, where each stage elaborates the response using the feature maps and the probabilities of the previous stage, together with the MRI channels, as inputs. The neuroimaging data are part of the publicly available Brain Tumor Segmentation (BraTS) 2020 challenge dataset, and we evaluated our proposal on the BraTS 2020 Validation and Test sets. On the Test set, the experimental results achieved a Dice score of 0.8858, 0.8297 and 0.7900, with a Hausdorff distance of 5.32 mm, 22.32 mm and 20.44 mm for the whole tumor, core tumor and enhanced tumor, respectively.

Learning a Metric for Multimodal Medical Image Registration without Supervision Based on Cycle Constraints

  • Siebert, Hanna
  • Hansen, Lasse
  • Heinrich, Mattias P.
Sensors 2022 Journal Article, cited 0 times
Website
Deep learning based medical image registration remains very difficult and often fails to improve over its classical counterparts where comprehensive supervision is not available, in particular for large transformations—including rigid alignment. The use of unsupervised, metric-based registration networks has become popular, but so far no universally applicable similarity metric is available for multimodal medical registration, requiring a trade-off between local contrast-invariant edge features or more global statistical metrics. In this work, we aim to improve over the use of handcrafted metric-based losses. We propose to use synthetic three-way (triangular) cycles that for each pair of images comprise two multimodal transformations to be estimated and one known synthetic monomodal transform. Additionally, we present a robust method for estimating large rigid transformations that is differentiable in end-to-end learning. By minimising the cycle discrepancy and adapting the synthetic transformation to be close to the real geometric difference of the image pairs during training, we successfully tackle intra-patient abdominal CT-MRI registration and reach performance on par with state-of-the-art metric-supervision and classic methods. Cyclic constraints enable the learning of cross-modality features that excel at accurate anatomical alignment of abdominal CT and MRI scans. Keywords: image registration; cycle constraint; multimodal features; self-supervision; rigid alignment
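
The triangular cycle constraint can be illustrated with 2-D rigid transforms in homogeneous coordinates: the two estimated multimodal legs composed with the known synthetic transform should return the identity, and the deviation from it is the cycle discrepancy (all transform values below are made up for illustration):

```python
import numpy as np

def rigid2d(theta, tx, ty):
    """2-D rigid transform in homogeneous coordinates."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]])

# Two (made-up) estimates for the multimodal legs A->B and B->C, plus
# the known synthetic monomodal transform C->A that closes the triangle.
T_ab    = rigid2d(0.05, 1.5,  0.5)
T_bc    = rigid2d(0.04, 2.4, -2.4)
T_synth = rigid2d(-0.09, -3.8, 1.9)

# Composing all three legs should give the identity; the deviation from
# it is the cycle discrepancy minimised during training.
cycle = T_synth @ T_bc @ T_ab
discrepancy = np.linalg.norm(cycle - np.eye(3))
```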

Breast Lesion Segmentation in DCE-MRI using Multi-Objective Clustering with NSGA-II

  • Si, Tapas
  • Dipak Kumar Patra
  • Sukumar Mondal
  • Prakash Mukherjee
2022 Conference Paper, cited 0 times
Website
Breast cancer causes the most deaths among all types of cancer in women. Early detection and diagnosis leading to early treatment can save lives. Computer-assisted methodologies for breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) segmentation can help radiologists/doctors in the diagnosis of the disease as well as in further treatment planning. In this article, we propose a breast DCE-MRI segmentation method using a hard-clustering technique with the Non-dominated Sorting Genetic Algorithm (NSGA-II). The well-known cluster validity metrics, namely the DB-index and Dunn-index, are utilized as objective functions in the NSGA-II algorithm. The noise and intensity inhomogeneities in MRI are removed in the preprocessing step, as these artifacts affect the segmentation process. After segmentation, the lesions are separated and, finally, localized in the MRI. The devised method is applied to segment 10 sagittal T2-weighted fat-suppressed DCE-MRI scans of the breast. A comparative study has been conducted with the K-means algorithm, and the devised method outperforms K-means both quantitatively and qualitatively.
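
For reference, the two objective functions can be evaluated with scikit-learn's Davies-Bouldin score and a simple Dunn index implementation (the data and labels below are placeholders). In a multi-objective setup such as NSGA-II, the DB-index would be minimised while the Dunn index is maximised, yielding a Pareto front of candidate segmentations:

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.metrics import davies_bouldin_score

def dunn_index(X, labels):
    """Smallest inter-cluster distance over largest intra-cluster diameter."""
    clusters = [X[labels == k] for k in np.unique(labels)]
    inter = min(cdist(a, b).min()
                for i, a in enumerate(clusters) for b in clusters[i + 1:])
    intra = max(cdist(c, c).max() for c in clusters)
    return inter / intra

X = np.random.rand(200, 2)               # placeholder voxel features
labels = (X[:, 0] > 0.5).astype(int)     # placeholder cluster labels
print(davies_bouldin_score(X, labels), dunn_index(X, labels))
```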

Kidney MRI Segmentation for Lesion Detection Using Clustering with Slime Mould Algorithm

  • Si, Tapas
  • Nayak, Somen
  • Sarkar, Achyuth
2021 Conference Paper, cited 0 times
Website
Both the incidence and mortality rates of kidney cancer are increasing worldwide. Imaging examinations followed by effective systemic therapies can reduce the mortality rate. In this article, a new method to segment the kidney MRI for lesion detection is developed using a hard-clustering technique with Slime Mould Algorithm (SMA). First, a new partitional or hard clustering technique is developed using SMA which searches the optimal cluster centers for segmentation. In the preprocessing steps of the proposed method, the noise and intensity inhomogeneities are removed from the MR images as these artifacts affect the segmentation process. Region of Interests (ROIs) are selected and the clustering process is carried out using the SMA-based clustering technique. After the clustering, i.e., segmentation, the lesions are separated from the segmented images and finally, localized in the MR images as the postprocessing steps. The quantitative results are measured in terms of a well-known cluster validity index named Dunn-index and compared with that of the K-means algorithm. Both the quantitative and qualitative (i.e., visual) results show that the proposed method performs better than K-means.

Breast DCE-MRI Segmentation for Lesion Detection Using Clustering with Fireworks Algorithm

  • Si, T.
  • Mukhopadhyay, A.
2020 Conference Proceedings, cited 0 times
Website

2D MRI registration using glowworm swarm optimization with partial opposition-based learning for brain tumor progression

  • Si, Tapas
Pattern Analysis and Applications 2023 Journal Article, cited 0 times
Magnetic resonance imaging (MRI) registration is important in detection, diagnosis, treatment planning, determining radiographic progression, functional studies, computer-guided surgeries, and computer-guided therapies. The registration process is the way to solve the correspondence problem between features on MRI scans acquired at different time-points to study the changes while analyzing brain tumor progression. A registration method generally requires a search strategy (optimizer) to search the transformation parameters of the registration so as to optimize some similarity metric between the images. Metaheuristic algorithms have recently become more popular for image registration. In this paper, at the outset, a metaheuristic algorithm, namely glowworm swarm optimization (GSO), is improved by incorporating a partial opposition-based learning (POBL) strategy. The improved GSO is applied to register the pre- and post-treatment MR images for brain tumor progression. A comparative study has been made with basic GSO, GSO with generalized opposition-based learning (GOBL-GSO), and an existing particle swarm optimizer (PSO)-based registration method. The experimental results demonstrate that the proposed method performs significantly better than the others in brain MRI registration.
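A minimal sketch of the partial opposition idea follows, assuming bound vectors lb/ub and a fraction frac of opposed dimensions (both our notation): only a random subset of a candidate's coordinates is reflected across the centre of the search range, and the optimizer would then keep the better of the original and opposed candidates.

```python
import numpy as np

rng = np.random.default_rng(0)

def partial_opposition(x, lb, ub, frac=0.5):
    """Reflect a random subset of the dimensions of candidate x across
    the centre of the bounds [lb, ub]; the remaining dimensions are left
    unchanged (this partial reflection is what distinguishes POBL from
    generalized opposition-based learning, which opposes all dimensions)."""
    x_opp = x.copy()
    mask = rng.random(x.shape) < frac
    x_opp[mask] = lb[mask] + ub[mask] - x[mask]
    return x_opp
```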

Kidney Lesion Segmentation in MRI Using Clustering with Salp Swarm Algorithm

  • Si, Tapas
2020 Conference Proceedings, cited 0 times
Website
In this paper, kidney lesion segmentation in MRI using clustering with salp swarm algorithm (SSA) is proposed. The segmentation results of kidney MRI are degraded by the noise and intensity inhomogeneities (IIHs) in MR images. Therefore, at the outset, the MR images are denoised using median filter. Then IIHs are corrected using the max filter-based method. A hard-clustering technique using SSA is developed to segment the MR images. Finally, the lesions are extracted from the segmented MR images. The proposed method is compared with the K-means algorithm using well-known clustering validity measure DB-index. The experimental results demonstrate that the proposed method performs better than the K-means algorithm in the segmentation of kidney lesions in MRI.

Total Lesion Glycolysis Estimated by a Radiomics Model From CT Image Alone

  • Si, H.
  • Hao, X.
  • Zhang, L.
  • Xu, X.
  • Cao, J.
  • Wu, P.
  • Li, L.
  • Wu, Z.
  • Zhang, S.
  • Li, S.
Front Oncol 2021 Journal Article, cited 0 times
Website
Purpose: In this study, total lesion glycolysis (TLG) on positron emission tomography images was estimated by a trained and validated CT radiomics model, and its prognostic ability was explored among lung cancer (LC) and esophageal cancer (EC) patients. Methods: Using the identical features between the combined and thin-section CT, the estimation model of SUVsum (summed standard uptake value) was trained from the lymph nodes (LNs) of LC patients (n = 1239). Besides LNs of LC patients from other centers, the validation cohorts also included LNs and primary tumors of LC/EC from the same center. After calculating TLG (accumulated SUVsum of each individual) based on the model, the prognostic ability of the estimated and measured values was compared and analyzed. Results: In the training cohort, a model of three features was trained using deep learning and linear regression. It performed well in all validation cohorts (n = 5), and a linear regression could correct the bias from different scanners. Additionally, the absolute biases of the model were not significantly affected by the evaluated factors, whether they included LN metastasis or not. Between the estimated natural logarithm of TLG (elnTLG) and the measured values (mlnTLG), significant differences existed among both LC (n = 137, bias = 0.510 +/- 0.519, r = 0.956, P < 0.001) and EC patients (n = 56, bias = 0.251 +/- 0.463, r = 0.934, P < 0.001). However, for both cancers, the overall shapes of the curves of hazard ratio (HR) against elnTLG or mlnTLG were quite alike. Conclusion: Total lesion glycolysis can be estimated from three CT features with particular coefficients for different scanners, and it is similar to the measured values in predicting the outcome of cancer patients.

Beyond Non-maximum Suppression - Detecting Lesions in Digital Breast Tomosynthesis Volumes

  • Shoshan, Yoel
  • Zlotnick, Aviad
  • Ratner, Vadim
  • Khapun, Daniel
  • Barkan, Ella
  • Gilboa-Solomon, Flora
2021 Conference Paper, cited 0 times
Website
Detecting the specific locations of malignancy signs in a medical image is a non-trivial and time-consuming task for radiologists. A complex, 3D version of this task was presented in the DBTex 2021 Grand Challenge on Digital Breast Tomosynthesis Lesion Detection. Teams from all over the world competed in an attempt to build AI models that predict the 3D locations that require biopsy. We describe a novel method to combine detection candidates from multiple models with minimum false positives. This method won second place in the DBTex competition, by a very small margin from first place and a clear standout from the rest. We performed an ablation study to show the contribution of each of the different new components in the proposed ensemble method, including additional performance improvements made after the competition.
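The paper's ensemble method is not spelled out in the abstract; as a point of reference, the sketch below implements the plain non-maximum suppression baseline that the title alludes to going beyond, merging overlapping candidates by IoU.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Plain NMS over [x1, y1, x2, y2] boxes: keep the highest-scoring
    candidate, drop every remaining box overlapping it above the IoU
    threshold, and repeat."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + areas - inter)
        order = rest[iou <= iou_thresh]
    return keep
```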

Deep Learning for Brain Tumor Segmentation in Radiosurgery: Prospective Clinical Evaluation

  • Shirokikh, Boris
  • Dalechina, Alexandra
  • Shevtsov, Alexey
  • Krivov, Egor
  • Kostjuchenko, Valery
  • Durgaryan, Amayak
  • Galkin, Mikhail
  • Osinov, Ivan
  • Golanov, Andrey
  • Belyaev, Mikhail
2020 Book Section, cited 0 times
Stereotactic radiosurgery is a minimally-invasive treatment option for a large number of patients with intracranial tumors. As part of the therapy treatment, accurate delineation of brain tumors is of great importance. However, slice-by-slice manual segmentation on T1c MRI could be time-consuming (especially for multiple metastases) and subjective. In our work, we compared several deep convolutional networks architectures and training procedures and evaluated the best model in a radiation therapy department for three types of brain tumors: meningiomas, schwannomas and multiple brain metastases. The developed semiautomatic segmentation system accelerates the contouring process by 2.2 times on average and increases inter-rater agreement from 92.0% to 96.5%.

Decentralized collaborative multi-institutional PET attenuation and scatter correction using federated deep learning

  • Shiri, Isaac
  • Vafaei Sadr, Alireza
  • Akhavan, Azadeh
  • Salimi, Yazdan
  • Sanaat, Amirhossein
  • Amini, Mehdi
  • Razeghi, Behrooz
  • Saberi, Abdollah
  • Arabi, Hossein
  • Ferdowsi, Sohrab
  • Voloshynovskiy, Slava
  • Gündüz, Deniz
  • Rahmim, Arman
  • Zaidi, Habib
European journal of nuclear medicine and molecular imaging 2023 Journal Article, cited 0 times
Website
Attenuation correction and scatter compensation (AC/SC) are two main steps toward quantitative PET imaging, which remain challenging in PET-only and PET/MRI systems. These can be effectively tackled via deep learning (DL) methods. However, trustworthy and generalizable DL models commonly require well-curated, heterogeneous, and large datasets from multiple clinical centers. At the same time, owing to legal/ethical issues and privacy concerns, forming a large collective, centralized dataset poses significant challenges. In this work, we aimed to develop a DL-based model in a multicenter setting without direct sharing of data, using federated learning (FL) for AC/SC of PET images.

Multi-institutional PET/CT image segmentation using federated deep transformer learning

  • Shiri, I.
  • Razeghi, B.
  • Vafaei Sadr, A.
  • Amini, M.
  • Salimi, Y.
  • Ferdowsi, S.
  • Boor, P.
  • Gunduz, D.
  • Voloshynovskiy, S.
  • Zaidi, H.
Comput Methods Programs Biomed 2023 Journal Article, cited 0 times
Website
BACKGROUND AND OBJECTIVE: Generalizable and trustworthy deep learning models for PET/CT image segmentation necessitate large, diverse, multi-institutional datasets. However, legal, ethical, and patient privacy issues challenge the sharing of datasets between different centers. To overcome these challenges, we developed a federated learning (FL) framework for multi-institutional PET/CT image segmentation. METHODS: A dataset consisting of 328 head and neck (HN) cancer patients who underwent clinical PET/CT examinations, gathered from six different centers, was enrolled. A pure transformer network was implemented as the core segmentation algorithm using dual-channel PET/CT images. We evaluated different frameworks (single-center-based, centralized baseline, as well as seven different FL algorithms) using 68 PET/CT images (20% of each center's data). In particular, the implemented FL algorithms include clipping with the quantile estimator (ClQu), zeroing with the quantile estimator (ZeQu), federated averaging (FedAvg), lossy compression (LoCo), robust aggregation (RoAg), secure aggregation (SeAg), and Gaussian differentially private FedAvg with adaptive quantile clipping (GDP-AQuCl). RESULTS: The Dice coefficient was 0.80+/-0.11 for both the centralized and SeAg FL algorithms. All FL approaches achieved centralized learning model performance with no statistically significant differences. Among the FL algorithms, SeAg and GDP-AQuCl performed better than the other techniques, although the difference was not statistically significant. All algorithms, except the center-based approach, resulted in relative errors less than 5% for SUV(max) and SUV(mean) for all FL and centralized methods. Centralized and FL algorithms significantly outperformed the single-center-based baseline. CONCLUSIONS: The developed FL-based algorithms (with centralized-method performance) exhibited promising performance for HN tumor segmentation from PET/CT images.
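Of the listed algorithms, FedAvg is the simplest aggregation rule; a minimal sketch of one aggregation round is shown below (the list-of-numpy-arrays parameter layout and size-weighted averaging are generic assumptions, not this paper's implementation).

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """One FedAvg round: average each model parameter across clients,
    weighted by local dataset size, so no raw images leave a site.
    `client_weights` is a list (one entry per centre) of lists of
    numpy arrays holding the model parameters."""
    total = float(sum(client_sizes))
    return [
        sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in range(len(client_weights[0]))
    ]
```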

Next-generation radiogenomics sequencing for prediction of EGFR and KRAS mutation status in NSCLC patients using multimodal imaging and machine learning algorithms

  • Shiri, Isaac
  • Maleki, Hasan
  • Hajianfar, Ghasem
  • Abdollahi, Hamid
  • Ashrafinia, Saeed
  • Hatt, Mathieu
  • Zaidi, Habib
  • Oveisi, Mehrdad
  • Rahmim, Arman
Molecular Imaging and Biology 2020 Journal Article, cited 60 times
Website

COLI‐Net: Deep learning‐assisted fully automated COVID‐19 lung and infection pneumonia lesion detection and segmentation from chest computed tomography images

  • Shiri, Isaac
  • Arabi, Hossein
  • Salimi, Yazdan
  • Sanaat, Amirhossein
  • Akhavanallaf, Azadeh
  • Hajianfar, Ghasem
  • Askari, Dariush
  • Moradi, Shakiba
  • Mansouri, Zahra
  • Pakbin, Masoumeh
  • Sandoughdaran, Saleh
  • Abdollahi, Hamid
  • Radmard, Amir Reza
  • Rezaei‐Kalantari, Kiara
  • Ghelich Oghli, Mostafa
  • Zaidi, Habib
International Journal of Imaging Systems and Technology 2021 Journal Article, cited 0 times
Website
We present a deep learning (DL)-based automated whole lung and COVID-19 pneumonia infectious lesions (COLI-Net) detection and segmentation from chest computed tomography (CT) images. This multicenter/multiscanner study involved 2368 (347,259 2D slices) and 190 (17,341 2D slices) volumetric CT exams along with their corresponding manual segmentation of lungs and lesions, respectively. All images were cropped, resized, and the intensity values clipped and normalized. A residual network with non-square Dice loss function built upon TensorFlow was employed. The accuracy of lung and COVID-19 lesions segmentation was evaluated on an external reverse transcription-polymerase chain reaction positive COVID-19 dataset (7,333 2D slices) collected at five different centers. To evaluate the segmentation performance, we calculated different quantitative metrics, including radiomic features. The mean Dice coefficients were 0.98 ± 0.011 (95% CI, 0.98–0.99) and 0.91 ± 0.038 (95% CI, 0.90–0.91) for lung and lesions segmentation, respectively. The mean relative Hounsfield unit differences were 0.03 ± 0.84% (95% CI, −0.12 to 0.18) and −0.18 ± 3.4% (95% CI, −0.8 to 0.44) for the lung and lesions, respectively. The relative volume differences for lung and lesions were 0.38 ± 1.2% (95% CI, 0.16–0.59) and 0.81 ± 6.6% (95% CI, −0.39 to 2), respectively. Most radiomic features had a mean relative error less than 5%, with the highest mean relative error achieved for the lung for the range first-order feature (−6.95%) and the least axis length shape feature (8.68%) for lesions. We developed an automated DL-guided three-dimensional whole lung and infected regions segmentation in COVID-19 patients to provide a fast, consistent, robust, and human-error-immune framework for lung and pneumonia lesion detection and quantification.
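As a reference for the loss named above, a soft Dice loss for binary masks can be sketched as follows; the "non-square" qualifier is taken to mean that the denominator sums probabilities rather than their squares, which is how the sketch is written.

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for one binary segmentation task: `pred` holds
    per-voxel foreground probabilities and `target` the binary ground
    truth. The denominator uses plain sums (no squaring)."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)
```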

Impact of feature harmonization on radiogenomics analysis: Prediction of EGFR and KRAS mutations from non-small cell lung cancer PET/CT images

  • Shiri, I.
  • Amini, M.
  • Nazari, M.
  • Hajianfar, G.
  • Haddadi Avval, A.
  • Abdollahi, H.
  • Oveisi, M.
  • Arabi, H.
  • Rahmim, A.
  • Zaidi, H.
Comput Biol Med 2022 Journal Article, cited 19 times
Website
OBJECTIVE: To investigate the impact of harmonization on the performance of CT, PET, and fused PET/CT radiomic features toward the prediction of mutations status, for epidermal growth factor receptor (EGFR) and Kirsten rat sarcoma viral oncogene (KRAS) genes in non-small cell lung cancer (NSCLC) patients. METHODS: Radiomic features were extracted from tumors delineated on CT, PET, and wavelet fused PET/CT images obtained from 136 histologically proven NSCLC patients. Univariate and multivariate predictive models were developed using radiomic features before and after ComBat harmonization to predict EGFR and KRAS mutation statuses. Multivariate models were built using minimum redundancy maximum relevance feature selection and random forest classifier. We utilized 70/30% splitting patient datasets for training/testing, respectively, and repeated the procedure 10 times. The area under the receiver operator characteristic curve (AUC), accuracy, sensitivity, and specificity were used to assess model performance. The performance of the models (univariate and multivariate), before and after ComBat harmonization was compared using statistical analyses. RESULTS: While the performance of most features in univariate modeling was significantly improved for EGFR prediction, most features did not show any significant difference in performance after harmonization in KRAS prediction. Average AUCs of all multivariate predictive models for both EGFR and KRAS were significantly improved (q-value < 0.05) following ComBat harmonization. The mean ranges of AUCs increased following harmonization from 0.87-0.90 to 0.92-0.94 for EGFR, and from 0.85-0.90 to 0.91-0.94 for KRAS. The highest performance was achieved by harmonized F_R0.66_W0.75 model with AUC of 0.94, and 0.93 for EGFR and KRAS, respectively. CONCLUSION: Our results demonstrated that regarding univariate modelling, while ComBat harmonization had generally a better impact on features for EGFR compared to KRAS status prediction, its effect is feature-dependent. Hence, no systematic effect was observed. Regarding the multivariate models, ComBat harmonization significantly improved the performance of all radiomics models toward more successful prediction of EGFR and KRAS mutation statuses in lung cancer patients. Thus, by eliminating the batch effect in multi-centric radiomic feature sets, harmonization is a promising tool for developing robust and reproducible radiomics using vast and variant datasets.
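ComBat itself uses empirical Bayes estimates of per-batch location and scale; the sketch below shows only the underlying location-scale idea (re-centring and re-scaling each scanner's features to the pooled statistics) and omits the Bayesian shrinkage and covariate preservation of the real method.

```python
import numpy as np

def location_scale_harmonize(features, batch):
    """Reduced stand-in for ComBat: align each batch's (scanner's)
    feature means and standard deviations to the pooled reference.
    `features` is (n_samples, n_features); `batch` labels the scanner."""
    out = features.astype(float).copy()
    mu, sd = features.mean(axis=0), features.std(axis=0)
    for b in np.unique(batch):
        idx = batch == b
        out[idx] = (features[idx] - features[idx].mean(axis=0)) \
            / features[idx].std(axis=0) * sd + mu
    return out
```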

Radiogenomics of clear cell renal cell carcinoma: preliminary findings of The Cancer Genome Atlas–Renal Cell Carcinoma (TCGA–RCC) Imaging Research Group

  • Shinagare, Atul B
  • Vikram, Raghu
  • Jaffe, Carl
  • Akin, Oguz
  • Kirby, Justin
  • Huang, Erich
  • Freymann, John
  • Sainani, Nisha I
  • Sadow, Cheryl A
  • Bathala, Tharakeswara K
  • Rubin, D. L.
  • Oto, A.
  • Heller, M. T.
  • Surabhi, V. R.
  • Katabathina, V.
  • Silverman, S. G.
Abdominal imaging 2015 Journal Article, cited 47 times
Website
PURPOSE: To investigate associations between imaging features and mutational status of clear cell renal cell carcinoma (ccRCC). MATERIALS AND METHODS: This multi-institutional, multi-reader study included 103 patients (77 men; median age 59 years, range 34-79) with ccRCC examined with CT in 81 patients, MRI in 19, and both CT and MRI in three; images were downloaded from The Cancer Imaging Archive, an NCI-funded project for genome-mapping and analyses. Imaging features [size (mm), margin (well-defined or ill-defined), composition (solid or cystic), necrosis (for solid tumors: 0%, 1%-33%, 34%-66% or >66%), growth pattern (endophytic, <50% exophytic, or >/=50% exophytic), and calcification (present, absent, or indeterminate)] were reviewed independently by three readers blinded to mutational data. The association of imaging features with mutational status (VHL, BAP1, PBRM1, SETD2, KDM5C, and MUC4) was assessed. RESULTS: Median tumor size was 49 mm (range 14-162 mm), 73 (71%) tumors had well-defined margins, 98 (95%) tumors were solid, 95 (92%) showed presence of necrosis, 46 (45%) had >/=50% exophytic component, and 18 (19.8%) had calcification. VHL (n = 52) and PBRM1 (n = 24) were the most common mutations. BAP1 mutation was associated with ill-defined margin and presence of calcification (p = 0.02 and 0.002, respectively, Pearson's chi (2) test); MUC4 mutation was associated with an exophytic growth pattern (p = 0.002, Mann-Whitney U test). CONCLUSIONS: BAP1 mutation was associated with ill-defined tumor margins and presence of calcification; MUC4 mutation was associated with exophytic growth. Given the known prognostic implications of BAP1 and MUC4 mutations, these results support using radiogenomics to aid in prognostication and management.
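The association tests reported above are standard; for illustration, a Pearson chi-square test of BAP1 mutation against margin status could be run as below (the counts in the table are made up for the example, not taken from the study).

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = BAP1 mutant / wild-type,
# columns = ill-defined / well-defined margin.
table = np.array([[9, 3],
                  [21, 70]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```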

Style transfer strategy for developing a generalizable deep learning application in digital pathology

  • Shin, Seo Jeong
  • You, Seng Chan
  • Jeon, Hokyun
  • Jung, Ji Won
  • An, Min Ho
  • Park, Rae Woong
  • Roh, Jin
Computer Methods and Programs in Biomedicine 2021 Journal Article, cited 1 times
Website

Combination of fuzzy c-means clustering and texture pattern matrix for brain MRI segmentation

  • Shijin Kumar, P.S.
  • Dharun, V.S.
Biomedical Research 2017 Journal Article, cited 0 times
The process of image segmentation can be defined as splitting an image into different regions. It is an important step in medical image analysis. We introduce a hybrid tumor tracking and segmentation algorithm for Magnetic Resonance Images (MRI). This method is based on Fuzzy C-means clustering algorithm (FCM) and Texture Pattern Matrix (TPM). The key idea is to use texture features along with intensity while performing segmentation. The performance parameters can be improved by using Texture Pattern Matrix (TPM). FCM is capable of predicting tumor cells with high accuracy. In FCM homogeneous regions in an image are obtained based on intensity. Texture Pattern Matrix (TPM) provides details about spatial distribution of pixels in an image. Experimental results obtained by applying proposed segmentation method for tracking tumors are presented. Various performance parameters are evaluated by comparing the outputs of proposed method and Fuzzy C-means algorithm. The computational complexity and computation time can be reduced by using this hybrid segmentation method.
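The standard FCM updates that the hybrid method builds on are compact enough to sketch; here X would stack intensity together with the texture-pattern-matrix features per pixel (the fuzzifier m = 2 is a conventional default, not the paper's stated choice).

```python
import numpy as np

def fcm(X, c=3, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: alternate the usual centre and membership
    updates. X is (n_pixels, n_features); U holds fuzzy memberships."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) + 1e-12
        w = d ** (-2.0 / (m - 1.0))
        U = w / w.sum(axis=1, keepdims=True)
    return centers, U
```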

Generating missing patient anatomy from partially acquired cone-beam computed tomography images using deep learning: a proof of concept

  • Shields, B.
  • Ramachandran, P.
2023 Journal Article, cited 0 times
Website
The patient setup technique currently in practice in most radiotherapy departments utilises on-couch cone-beam computed tomography (CBCT) imaging. Patients are positioned on the treatment couch using visual markers, followed by fine adjustments to the treatment couch position depending on the shift observed between the computed tomography (CT) image acquired for treatment planning and the CBCT image acquired immediately before commencing treatment. The field of view of CBCT images is limited to the size of the kV imager which leads to the acquisition of partial CBCT scans for lateralised tumors. The cone-beam geometry results in high amounts of streaking artifacts and in conjunction with limited anatomical information reduces the registration accuracy between planning CT and the CBCT image. This study proposes a methodology that can improve radiotherapy patient setup CBCT images by removing streaking artifacts and generating the missing patient anatomy with patient-specific precision. This research was split into two separate studies. In Study A, synthetic CBCT (sCBCT) data was created and used to train two machine learning models, one for removing streaking artifacts and the other for generating the missing patient anatomy. In Study B, planning CT and on-couch CBCT data from several patients was used to train a base model, from which a transfer of learning was performed using imagery from a single patient, producing a patient-specific model. The models developed for Study A performed well at removing streaking artifacts and generating the missing anatomy. The outputs yielded in Study B show that the model understands the individual patient and can generate the missing anatomy from partial CBCT datasets. The outputs generated demonstrate that there is utility in the proposed methodology which could improve the patient setup and ultimately lead to improving overall treatment quality.

MRI-based Quantification of Intratumoral Heterogeneity for Predicting Treatment Response to Neoadjuvant Chemotherapy in Breast Cancer

  • Shi, Z.
  • Huang, X.
  • Cheng, Z.
  • Xu, Z.
  • Lin, H.
  • Liu, C.
  • Chen, X.
  • Liu, C.
  • Liang, C.
  • Lu, C.
  • Cui, Y.
  • Han, C.
  • Qu, J.
  • Shen, J.
  • Liu, Z.
Radiology 2023 Journal Article, cited 0 times
Website
Background Breast cancer is highly heterogeneous, resulting in different treatment responses to neoadjuvant chemotherapy (NAC) among patients. A noninvasive quantitative measure of intratumoral heterogeneity (ITH) may be valuable for predicting treatment response. Purpose To develop a quantitative measure of ITH on pretreatment MRI scans and test its performance for predicting pathologic complete response (pCR) after NAC in patients with breast cancer. Materials and Methods Pretreatment MRI scans were retrospectively acquired in patients with breast cancer who received NAC followed by surgery at multiple centers from January 2000 to September 2020. Conventional radiomics (hereafter, C-radiomics) and intratumoral ecological diversity features were extracted from the MRI scans, and output probabilities of imaging-based decision tree models were used to generate a C-radiomics score and ITH index. Multivariable logistic regression analysis was used to identify variables associated with pCR, and significant variables, including clinicopathologic variables, C-radiomics score, and ITH index, were combined into a predictive model for which performance was assessed using the area under the receiver operating characteristic curve (AUC). Results The training data set was comprised of 335 patients (median age, 48 years [IQR, 42-54 years]) from centers A and B, and 590, 280, and 384 patients (median age, 48 years [IQR, 41-55 years]) were included in the three external test data sets. Molecular subtype (odds ratio [OR] range, 4.76-8.39 [95% CI: 1.79, 24.21]; all P < .01), ITH index (OR, 30.05 [95% CI: 8.43, 122.64]; P < .001), and C-radiomics score (OR, 29.90 [95% CI: 12.04, 81.70]; P < .001) were independently associated with the odds of achieving pCR. The combined model showed good performance for predicting pCR to NAC in the training data set (AUC, 0.90) and external test data sets (AUC range, 0.83-0.87). Conclusion A model that combined an index created from pretreatment MRI-based imaging features quantitating ITH, C-radiomics score, and clinicopathologic variables showed good performance for predicting pCR to NAC in patients with breast cancer. (c) RSNA, 2023 Supplemental material is available for this article. See also the editorial by Rauch in this issue.

Brain Tumor Segmentation Using Dense Channels 2D U-net and Multiple Feature Extraction Network

  • Shi, Wei
  • Pang, Enshuai
  • Wu, Qiang
  • Lin, Fengming
2020 Book Section, cited 0 times
Semantic segmentation plays an important role in the prevention, diagnosis and treatment of brain glioma. In this paper, we propose a dense channels 2D U-net segmentation model with residual unit and feature pyramid unit. The main difference compared with other U-net models is that the number of bottom feature components is increased, so that the network can learn more abundant patterns. We also develop a multiple feature extraction network model to extract rich and diverse features, which is conducive to segmentation. Finally, we employ decision tree regression model to predict patient overall survival by the different texture, shape and first-order features extracted from BraTS 2019 dataset.

Joint few-shot registration and segmentation self-training of 3D medical images

  • Shi, Huabang
  • Lu, Liyun
  • Yin, Mengxiao
  • Zhong, Cheng
  • Yang, Feng
Biomedical Signal Processing and Control 2023 Journal Article, cited 0 times
Website
Medical image segmentation and registration are very important and closely related steps in clinical medical diagnosis. In the past few years, deep learning techniques for joint segmentation and registration have achieved good results in both tasks through one-way assisted learning or mutual utilization. However, they often rely on large labeled datasets for supervised training or directly use pseudo-labels without quality estimation. We propose a joint registration and segmentation self-training framework (JRSS), which aims to use segmentation pseudo-labels to promote shared learning between segmentation and registration in scenarios with few manually labeled samples, while improving the performance of both tasks. JRSS combines weakly supervised registration and semi-supervised segmentation learning in a self-training framework. Segmentation self-training generates high-quality pseudo-labels for unlabeled data by injecting noise, screening pseudo-labels, and correcting uncertainty. Registration utilizes pseudo-labels to facilitate weakly supervised learning, and in turn provides input noise and data augmentation for segmentation self-training. Experiments on two public 3D medical image datasets, abdominal CT and brain MRI, demonstrate that our proposed method achieves simultaneous improvements in segmentation and registration accuracy under few-shot scenarios. It outperforms the single-task fully supervised state-of-the-art models in the Dice similarity coefficient and the standard deviation of the Jacobian determinant.

Deep learning empowered volume delineation of whole-body organs-at-risk for accelerated radiotherapy

  • Shi, Feng
  • Hu, Weigang
  • Wu, Jiaojiao
  • Han, Miaofei
  • Wang, Jiazhou
  • Zhang, Wei
  • Zhou, Qing
  • Zhou, Jingjie
  • Wei, Ying
  • Shao, Ying
  • Chen, Yanbo
  • Yu, Yue
  • Cao, Xiaohuan
  • Zhan, Yiqiang
  • Zhou, Xiang Sean
  • Gao, Yaozong
  • Shen, Dinggang
2022 Journal Article, cited 0 times
Website
In radiotherapy for cancer patients, an indispensable process is to delineate organs-at-risk (OARs) and tumors. However, it is the most time-consuming step, as manual delineation is always required from radiation oncologists. Herein, we propose a lightweight deep learning framework for radiotherapy treatment planning (RTP), named RTP-Net, to promote an automatic, rapid, and precise initialization of whole-body OARs and tumors. Briefly, the framework implements a cascaded coarse-to-fine segmentation, with an adaptive module for both small and large organs, and attention mechanisms for organs and boundaries. Our experiments show three merits: 1) extensive evaluation on 67 delineation tasks on a large-scale dataset of 28,581 cases; 2) comparable or superior accuracy with an average Dice of 0.95; 3) near real-time delineation in most tasks with <2 s. This framework could be utilized to accelerate the contouring process in the All-in-One radiotherapy scheme, and thus greatly shorten the turnaround time of patients.

Optimized Deformable Model-based Segmentation and Deep Learning for Lung Cancer Classification

  • Shetty, M. V.
  • D, J.
  • Tunga, S.
2022 Journal Article, cited 0 times
Website
Lung cancer is one of the most lethal diseases and causes many deaths worldwide. Early detection and treatment are necessary to save lives. It is very difficult for doctors to interpret and identify diseases using imaging modalities alone; therefore, computer-aided diagnosis can assist doctors in detecting cancer early and very accurately. In the proposed work, optimized deformable models and deep learning techniques are applied for the detection and classification of lung cancer. This method involves pre-processing, lung lobe segmentation, lung cancer segmentation, data augmentation and lung cancer classification. Median filtering is considered for pre-processing, and Bayesian fuzzy clustering is applied for segmenting the lung lobes. The lung cancer segmentation is carried out using a Water Cycle Sea Lion Optimization (WSLnO)-based deformable model. The data augmentation process is used to augment the size of the segmented region in order to perform better classification. The lung cancer classification is done effectively using a Shepard Convolutional Neural Network (ShCNN), which is trained by the WSLnO algorithm. The proposed WSLnO algorithm is designed by incorporating the Water Cycle Algorithm (WCA) and the Sea Lion Optimization (SLnO) algorithm. The performance of the proposed technique is analyzed with various performance metrics and attained better results in terms of accuracy, sensitivity, specificity and average segmentation accuracy of 0.9303, 0.9123, 0.9133 and 0.9091, respectively. J. Med. Invest. 69 : 244-255, August, 2022.

An interpretable classifier for high-resolution breast cancer screening images utilizing weakly supervised localization

  • Shen, Y.
  • Wu, N.
  • Phang, J.
  • Park, J.
  • Liu, K.
  • Tyagi, S.
  • Heacock, L.
  • Kim, S. G.
  • Moy, L.
  • Cho, K.
  • Geras, K. J.
Med Image Anal 2021 Journal Article, cited 0 times
Website
Medical images differ from natural images in significantly higher resolutions and smaller regions of interest. Because of these differences, neural network architectures that work well for natural images might not be applicable to medical image analysis. In this work, we propose a novel neural network model to address these unique properties of medical images. This model first uses a low-capacity, yet memory-efficient, network on the whole image to identify the most informative regions. It then applies another higher-capacity network to collect details from chosen regions. Finally, it employs a fusion module that aggregates global and local information to make a prediction. While existing methods often require lesion segmentation during training, our model is trained with only image-level labels and can generate pixel-level saliency maps indicating possible malignant findings. We apply the model to screening mammography interpretation: predicting the presence or absence of benign and malignant lesions. On the NYU Breast Cancer Screening Dataset, our model outperforms (AUC = 0.93) ResNet-34 and Faster R-CNN in classifying breasts with malignant findings. On the CBIS-DDSM dataset, our model achieves performance (AUC = 0.858) on par with state-of-the-art approaches. Compared to ResNet-34, our model is 4.1x faster for inference while using 78.4% less GPU memory. Furthermore, we demonstrate, in a reader study, that our model surpasses radiologist-level AUC by a margin of 0.11.

An automated lung segmentation approach using bidirectional chain codes to improve nodule detection accuracy

  • Shen, Shiwen
  • Bui, Alex AT
  • Cong, Jason
  • Hsu, William
2015 Journal Article, cited 31 times
Website
Computer-aided detection and diagnosis (CAD) has been widely investigated to improve radiologists' diagnostic accuracy in detecting and characterizing lung disease, as well as to assist with the processing of increasingly sizable volumes of imaging. Lung segmentation is a requisite preprocessing step for most CAD schemes. This paper proposes a parameter-free lung segmentation algorithm with the aim of improving lung nodule detection accuracy, focusing on juxtapleural nodules. A bidirectional chain coding method combined with a support vector machine (SVM) classifier is used to selectively smooth the lung border while minimizing the over-segmentation of adjacent regions. This automated method was tested on 233 computed tomography (CT) studies from the lung imaging database consortium (LIDC), representing 403 juxtapleural nodules. The approach obtained a 92.6% re-inclusion rate. Segmentation accuracy was further validated on 10 randomly selected CT series, finding a 0.3% average over-segmentation ratio and 2.4% under-segmentation rate when compared to manually segmented reference standards done by an expert.

Unsupervised domain adaptation with adversarial learning for mass detection in mammogram

  • Shen, Rongbo
  • Yao, Jianhua
  • Yan, Kezhou
  • Tian, Kuan
  • Jiang, Cheng
  • Zhou, Ke
Neurocomputing 2020 Journal Article, cited 0 times
Website
Many medical image datasets have been collected without proper annotations for deep learning training. In this paper, we propose a novel unsupervised domain adaptation framework with adversarial learning to minimize the annotation efforts. Our framework employs a task specific network, i.e., fully convolutional network (FCN), for spatial density prediction. Moreover, we employ a domain discriminator, in which adversarial learning is adopted to align the less-annotated target domain features with the well-annotated source domain features in the feature space. We further propose a novel training strategy for the adversarial learning by coupling data from source and target domains and alternating the subnet updates. We employ the public CBIS-DDSM dataset as the source domain, and perform two sets of experiments on two target domains (i.e., the public INbreast dataset and a self-collected dataset), respectively. Experimental results suggest consistent and comparable performance improvement over the state-of-the-art methods. Our proposed training strategy is also proved to converge much faster.

Noninvasive Evaluation of the Notch Signaling Pathway via Radiomic Signatures Based on Multiparametric MRI in Association With Biological Functions of Patients With Glioma: A Multi-institutional Study

  • Shen, N.
  • Lv, W.
  • Li, S.
  • Liu, D.
  • Xie, Y.
  • Zhang, J.
  • Zhang, J.
  • Jiang, J.
  • Jiang, R.
  • Zhu, W.
2022 Journal Article, cited 0 times
Website
BACKGROUND: Noninvasive determination of Notch signaling is important for prognostic evaluation and therapeutic intervention in glioma. PURPOSE: To predict Notch signaling using multiparametric (mp) MRI radiomics and correlate with biological characteristics in gliomas. STUDY TYPE: Retrospective. POPULATION: A total of 63 patients for model construction and 47 patients from two public databases for external testing. FIELD STRENGTH/SEQUENCE: A 1.5 T and 3.0 T, T1-weighted imaging (T1WI), T2WI, T2 fluid attenuated inversion recovery (FLAIR), contrast-enhanced (CE)-T1WI. ASSESSMENT: Radiomic features were extracted from CE-T1WI, T1WI, T2WI, and T2FLAIR and imaging signatures were selected using a least absolute shrinkage and selection operator. Diagnostic performance was compared between single modality and a combined mpMRI radiomics model. A radiomic-clinical nomogram was constructed incorporating the mpMRI radiomic signature and Karnofsky Performance score. The performance was validated in the test set. The radiomic signatures were correlated with immunohistochemistry (IHC) analysis of downstream Notch pathway components. STATISTICAL TESTS: Receiver operating characteristic curve, decision curve analysis (DCA), Pearson correlation, and Hosmer-Lemeshow test. A P value < 0.05 was considered statistically significant. RESULTS: The radiomic signature derived from the combination of all sequences numerically showed highest area under the curve (AUC) in both training and external test sets (AUCs of 0.857 and 0.823). The radiomics nomogram that incorporated the mpMRI radiomic signature and KPS status resulted in AUCs of 0.891 and 0.859 in the training and test sets. The calibration curves showed good agreement between prediction and observation in both sets (P= 0.279 and 0.170, respectively). DCA confirmed the clinical usefulness of the nomogram. IHC identified Notch pathway inactivation and the expression levels of Hes1 correlated with higher combined radiomic scores (r = -0.711) in Notch1 mutant tumors. DATA CONCLUSION: The mpMRI-based radiomics nomogram may reflect the intratumor heterogeneity associated with downstream biofunction that predicts Notch signaling in a noninvasive manner. EVIDENCE LEVEL: 3 TECHNICAL EFFICACY: Stage 2.

URO-GAN: An untrustworthy region optimization approach for adipose tissue segmentation based on adversarial learning

  • Shen, Kaifei
  • Quan, Hongyan
  • Han, Jun
  • Wu, Min
Applied Intelligence 2022 Journal Article, cited 0 times
Website
Automatic segmentation of adipose tissue from CT images is an essential module of computer-assisted diagnosis. A large set of abdominal cross-sectional CT images can be used to segment subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) with deep learning methods. However, the CT images still need to be professionally and accurately annotated to improve the segmentation quality. The paper proposes a semi-supervised segmentation network based on adversarial learning. The model is called URO-GAN and consists of two paths used to segment SAT and VAT, respectively. An SAT-to-VAT transmission mechanism is set up between these two paths, where several inverse-SAT excitation blocks are set to help the SAT segmentation network guide the VAT segmentation network. An untrustworthy region optimization mechanism is proposed to improve the segmentation quality and keep the adversarial learning stable. With the confidence map output from the discriminator network, an optimizer network is used to fix the errors in the masks predicted by the segmentation network. The URO-GAN achieves good results by training with 84 annotated images and 3969 unannotated images. Experimental results demonstrate the effectiveness of our approach on the segmentation of adipose tissue in medical images.

Anatomical attention can help to segment the dilated pancreatic duct in abdominal CT

  • Shen, C.
  • Roth, H. R.
  • Hayashi, Y.
  • Oda, M.
  • Sato, G.
  • Miyamoto, T.
  • Rueckert, D.
  • Mori, K.
Int J Comput Assist Radiol Surg 2024 Journal Article, cited 0 times
Website
PURPOSE: Pancreatic duct dilation is associated with an increased risk of pancreatic cancer, the most lethal malignancy with the lowest 5-year relative survival rate. Automatic segmentation of the dilated pancreatic duct from contrast-enhanced CT scans would facilitate early diagnosis. However, pancreatic duct segmentation poses challenges due to its small anatomical structure and poor contrast in abdominal CT. In this work, we investigate an anatomical attention strategy to address this issue. METHODS: Our proposed anatomical attention strategy consists of two steps: pancreas localization and pancreatic duct segmentation. The coarse pancreatic mask segmentation is used to guide the fully convolutional networks (FCNs) to concentrate on the pancreas' anatomy and disregard unnecessary features. We further apply a multi-scale aggregation scheme to leverage the information from different scales. Moreover, we integrate the tubular structure enhancement as an additional input channel of FCN. RESULTS: We performed extensive experiments on 30 cases of contrast-enhanced abdominal CT volumes. To evaluate the pancreatic duct segmentation performance, we employed four measurements, including the Dice similarity coefficient (DSC), sensitivity, normalized surface distance, and 95 percentile Hausdorff distance. The average DSC achieves 55.7%, surpassing other pancreatic duct segmentation methods on single-phase CT scans only. CONCLUSIONS: We proposed an anatomical attention-based strategy for the dilated pancreatic duct segmentation. Our proposed strategy significantly outperforms earlier approaches. The attention mechanism helps to focus on the pancreas region, while the enhancement of the tubular structure enables FCNs to capture the vessel-like structure. The proposed technique might be applied to other tube-like structure segmentation tasks within targeted anatomies.

A cascaded fully convolutional network framework for dilated pancreatic duct segmentation

  • Shen, C.
  • Roth, H. R.
  • Hayashi, Y.
  • Oda, M.
  • Miyamoto, T.
  • Sato, G.
  • Mori, K.
Int J Comput Assist Radiol Surg 2022 Journal Article, cited 1 times
Website
PURPOSE: Pancreatic duct dilation can be considered an early sign of pancreatic ductal adenocarcinoma (PDAC). However, there is little existing research focused on dilated pancreatic duct segmentation as a potential screening tool for people without PDAC. Dilated pancreatic duct segmentation is difficult due to the lack of readily available labeled data and strong voxel imbalance between the pancreatic duct region and other regions. To overcome these challenges, we propose a two-step approach for dilated pancreatic duct segmentation from abdominal computed tomography (CT) volumes using fully convolutional networks (FCNs). METHODS: Our framework segments the pancreatic duct in a cascaded manner. The pancreatic duct occupies a tiny portion of abdominal CT volumes. Therefore, to concentrate on the pancreas regions, we use a public pancreas dataset to train an FCN to generate an ROI covering the pancreas and use a 3D U-Net-like FCN for coarse pancreas segmentation. To further improve the dilated pancreatic duct segmentation, we deploy a skip connection on each corresponding resolution level and an attention mechanism in the bottleneck layer. Moreover, we introduce a combined loss function based on Dice loss and Focal loss. Random data augmentation is adopted throughout the experiments to improve the generalizability of the model. RESULTS: We manually created a dilated pancreatic duct dataset with semi-automated annotation tools. Experimental results showed that our proposed framework is practical for dilated pancreatic duct segmentation. The average Dice score and sensitivity were 49.9% and 51.9%, respectively. These results show the potential of our approach as a clinical screening tool. CONCLUSIONS: We investigate an automated framework for dilated pancreatic duct segmentation. The cascade strategy effectively improved the segmentation performance of the pancreatic duct. Our modifications to the FCNs together with random data augmentation and the proposed combined loss function facilitate automated segmentation.
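A minimal sketch of the combined loss family named above is given below; the equal weighting alpha and focusing parameter gamma = 2 are common defaults assumed here, not the paper's reported settings.

```python
import numpy as np

def dice_focal_loss(pred, target, alpha=0.5, gamma=2.0, eps=1e-6):
    """Dice loss addresses the extreme foreground/background voxel
    imbalance globally; focal loss down-weights easy voxels. `pred`
    holds foreground probabilities, `target` the binary duct mask."""
    dice = 1.0 - (2.0 * np.sum(pred * target) + eps) \
        / (np.sum(pred) + np.sum(target) + eps)
    p_t = np.where(target == 1, pred, 1.0 - pred)
    focal = np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t + eps))
    return alpha * dice + (1.0 - alpha) * focal
```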

2D and 3D CT Radiomics Features Prognostic Performance Comparison in Non-Small Cell Lung Cancer

  • Shen, Chen
  • Liu, Zhenyu
  • Guan, Min
  • Song, Jiangdian
  • Lian, Yucheng
  • Wang, Shuo
  • Tang, Zhenchao
  • Dong, Di
  • Kong, Lingfei
  • Wang, Meiyun
Translational Oncology 2017 Journal Article, cited 10 times
Website

Topological Data Analysis for Medical Imaging and RNA Data Analysis on Tree Spaces

  • Shen, Chen
2021 Thesis, cited 2 times
Website
Ideas from the algebraic topology of studying object data are used to introduce a framework for using persistence landscapes to vectorize objects. These methods are applied to analyze data from The Cancer Imaging Archive (TCIA), using a technique developed earlier for regular digital images. Our study aims at tumor differentiation from medical images, including brain images from CPTAC Glioblastoma patients. The result shows that persistence landscapes that capture topological features are distinguishing on average between tumor and normal brains. Besides topological object data analysis, asymptotics of sample means on stratified spaces are also introduced and developed in this dissertation. A stratified space is a metric space that admits a filtration by closed subspaces, such that the difference between the d-th indexed subspace and the (d − 1)-th indexed subspace is empty or is a d-dimensional manifold, called the d-th stratum. Examples of stratified sample spaces, which are not themselves manifolds, include similarity shape spaces, affine shape spaces, projective shape spaces, phylogenetic tree spaces, and graphs. The behavior of the Fréchet sample means is different around the singular Fréchet mean points in some cases of stratified spaces, such as on open books. The asymptotic results for the Fréchet sample mean are extended from data on spiders, which are open books, to a more general class of stratified spaces that are not open books. Phylogenetic tree spaces are typically stratified spaces, including genetic information from nucleotide data, such as DNA and RNA data. Coronavirus disease 2019 (Covid-19) is an infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The raw RNA sequences from SARS-CoV-2 are studied. The ideas from the phylogenetic tree and statistical analysis on stratified spaces are applied to study distributions on phylogenetic tree spaces. A framework is also presented for computing the mean and applying the Central Limit Theorem (CLT), to provide statistical inference on data. We apply these methods to analyze RNA sequences of SARS-CoV-2 from multiple sources. By building sample trees and applying the ensuing statistical analysis, we can compare evolutionary results for SARS-CoV-2 vs other coronaviruses.

Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data

  • Sheller, Micah J
  • Edwards, Brandon
  • Reina, G Anthony
  • Martin, Jason
  • Pati, Sarthak
  • Kotrotsou, Aikaterini
  • Milchenko, Mikhail
  • Xu, Weilin
  • Marcus, Daniel
  • Colen, Rivka R
  • Bakas, Spyridon
2020 Journal Article, cited 4 times
Website
Several studies underscore the potential of deep learning in identifying complex patterns, leading to diagnostic and prognostic biomarkers. Identifying sufficiently large and diverse datasets, required for training, is a significant challenge in medicine and can rarely be found in individual institutions. Multi-institutional collaborations based on centrally-shared patient data face privacy and ownership challenges. Federated learning is a novel paradigm for data-private multi-institutional collaborations, where model-learning leverages all available data without sharing data between institutions, by distributing the model-training to the data-owners and aggregating their results. We show that federated learning among 10 institutions results in models reaching 99% of the model quality achieved with centralized data, and evaluate generalizability on data from institutions outside the federation. We further investigate the effects of data distribution across collaborating institutions on model quality and learning patterns, indicating that increased access to data through data private multi-institutional collaborations can benefit model quality more than the errors introduced by the collaborative method. Finally, we compare with other collaborative-learning approaches demonstrating the superiority of federated learning, and discuss practical implementation considerations. Clinical adoption of federated learning is expected to lead to models trained on datasets of unprecedented size, hence have a catalytic impact towards precision/personalized medicine.

An efficient denoising of impulse noise from MRI using adaptive switching modified decision based unsymmetric trimmed median filter

  • Sheela, C. Jaspin Jeba
  • Suganthi, G.
Biomedical Signal Processing and Control 2020 Journal Article, cited 0 times

Joint Modeling of RNAseq and Radiomics Data for Glioma Molecular Characterization and Prediction

  • Shboul, Z. A.
  • Diawara, N.
  • Vossough, A.
  • Chen, J. Y.
  • Iftekharuddin, K. M.
Front Med (Lausanne) 2021 Journal Article, cited 0 times
Website
RNA sequencing (RNAseq) is a recent technology that profiles gene expression by measuring the relative frequency of the RNAseq reads. RNAseq read counts data are increasingly used in oncologic care, while radiology features (radiomics) have also been gaining utility in radiology practice, such as in disease diagnosis, monitoring, and treatment planning. However, contemporary literature lacks appropriate RNA-radiomics (henceforth, radiogenomics) joint modeling where the RNAseq distribution is adaptive and also preserves the nature of RNAseq read counts data for glioma grading and prediction. The Negative Binomial (NB) distribution may be useful to model RNAseq read counts data in a way that addresses these potential shortcomings. In this study, we propose a novel radiogenomics-NB model for glioma grading and prediction. Our radiogenomics-NB model is developed based on differentially expressed RNAseq and selected radiomics/volumetric features which characterize tumor volume and sub-regions. The NB distribution is fitted to RNAseq counts data, and a log-linear regression model is assumed to link the estimated NB mean and the radiomics. Three radiogenomics-NB molecular mutation models (e.g., IDH mutation, 1p/19q codeletion, and ATRX mutation) are investigated. Additionally, we explore gender-specific effects on the radiogenomics-NB models. Finally, we compare the performance of the proposed three mutation prediction radiogenomics-NB models with different well-known methods in the literature: Negative Binomial Linear Discriminant Analysis (NBLDA), differentially expressed RNAseq with Random Forest (RF-genomics), radiomics and differentially expressed RNAseq with Random Forest (RF-radiogenomics), and Voom-based count transformation combined with the nearest shrinkage classifier (VoomNSC). Our analysis shows that the proposed radiogenomics-NB model significantly outperforms the competing models (ANOVA test, p < 0.05) for prediction of IDH and ATRX mutations and offers similar performance for prediction of 1p/19q codeletion.
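The log-linear NB link described above can be fitted with off-the-shelf tools; the sketch below uses statsmodels' NB family on synthetic stand-in data (the random inputs and the fixed dispersion of the default family are assumptions for illustration).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
counts = rng.negative_binomial(5, 0.3, size=60)   # RNAseq reads, one gene
radiomics = rng.normal(size=(60, 3))              # selected radiomic features

# Log-linear NB regression: log E[counts] = X @ beta.
X = sm.add_constant(radiomics)
fit = sm.GLM(counts, X, family=sm.families.NegativeBinomial()).fit()
print(fit.params)
```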

Prediction of Molecular Mutations in Diffuse Low-Grade Gliomas using MR Imaging Features

  • Shboul, Zeina A
  • Chen, James
  • M Iftekharuddin, Khan
2020 Journal Article, cited 0 times
Website
Diffuse low-grade gliomas (LGG) have been reclassified based on molecular mutations, which require invasive tumor tissue sampling. Tissue sampling by biopsy may be limited by sampling error, whereas non-invasive imaging can evaluate the entirety of a tumor. This study presents a non-invasive analysis of low-grade gliomas using imaging features based on the updated classification. We introduce molecular (MGMT methylation, IDH mutation, 1p/19q co-deletion, ATRX mutation, and TERT mutations) prediction methods of low-grade gliomas with imaging. Imaging features are extracted from magnetic resonance imaging data and include texture features, fractal and multi-resolution fractal texture features, and volumetric features. Training models include nested leave-one-out cross-validation to select features, train the model, and estimate model performance. The prediction models of MGMT methylation, IDH mutations, 1p/19q co-deletion, ATRX mutation, and TERT mutations achieve a test performance AUC of 0.83 +/- 0.04, 0.84 +/- 0.03, 0.80 +/- 0.04, 0.70 +/- 0.09, and 0.82 +/- 0.04, respectively. Furthermore, our analysis shows that the fractal features have a significant effect on the predictive performance of MGMT methylation IDH mutations, 1p/19q co-deletion, and ATRX mutations. The performance of our prediction methods indicates the potential of correlating computed imaging features with LGG molecular mutations types and identifies candidates that may be considered potential predictive biomarkers of LGG molecular classification.
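A bare-bones version of the leave-one-out protocol, with feature selection refitted inside every training fold so the held-out case never leaks into feature choice, might look as follows (SelectKBest with k = 10 stands in for the paper's selection step; the data are random placeholders).

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X, y = rng.normal(size=(40, 60)), rng.integers(0, 2, size=40)

preds = []
for train, test in LeaveOneOut().split(X):
    clf = make_pipeline(SelectKBest(f_classif, k=10),
                        LogisticRegression(max_iter=1000))
    clf.fit(X[train], y[train])          # selection happens inside the fold
    preds.append(clf.predict(X[test])[0])
print("LOOCV accuracy:", np.mean(np.array(preds) == y))
```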

Predicting Lung Cancer Patients’ Survival Time via Logistic Regression-based Models in a Quantitative Radiomic Framework

  • Shayesteh, S. P.
  • Shiri, I.
  • Karami, A. H.
  • Hashemian, R.
  • Kooranifar, S.
  • Ghaznavi, H.
  • Shakeri-Zadeh, A.
Journal of Biomedical Physics and Engineering 2019 Journal Article, cited 0 times
Objectives: The aim of this study was to predict the survival time of lung cancer patients using the advantages of both radiomics and logistic regression-based classification models. Materials and Methods: Fifty-nine patients with primary lung adenocarcinoma were included in this retrospective study, and pre-treatment contrast-enhanced CT images were acquired. Patients who lived more than 2 years were classified as the 'Alive' class and otherwise as the 'Dead' class. In our proposed quantitative radiomic framework, we first extracted the associated regions of each lung lesion from pre-treatment CT images for each patient via the grow cut segmentation algorithm. Then, 40 radiomic features were extracted from the segmented lung lesions. In order to enhance the generalizability of the classification models, the mutual information-based feature selection method was applied to each feature vector. We investigated the performance of six logistic regression-based classification models with regard to acceptable evaluation measures such as F1 score and accuracy. Results: It was observed that the mutual information feature selection method can help the classifier achieve better predictive results. In our study, the Logistic Regression (LR) and Dual Coordinate Descent method for Logistic Regression (DCD-LR) models achieved the best results, indicating that these classification models have strong potential for classifying the more important class (i.e., the 'Alive' class). Conclusion: The proposed quantitative radiomic framework yielded promising results, which can guide physicians to make better and more precise decisions and increase the chance of treatment success.
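The selection-plus-classifier pairing described above maps directly onto a scikit-learn pipeline; the k = 10 cut-off below is illustrative, not the study's setting.

```python
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Mutual-information-based selection feeding a logistic regression
# classifier ('Alive' vs 'Dead' survival labels).
model = make_pipeline(
    SelectKBest(mutual_info_classif, k=10),
    LogisticRegression(max_iter=1000),
)
# model.fit(radiomic_features, survival_labels)
```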

Fully automatic and accurate detection of lung nodules in CT images using a hybrid feature set

  • Shaukat, Furqan
  • Raja, Gulistan
  • Gooya, Ali
  • Frangi, Alejandro F
Medical Physics 2017 Journal Article, cited 2 times
Website

A Block Adaptive Near-Lossless Compression Algorithm for Medical Image Sequences and Diagnostic Quality Assessment

  • Sharma, Urvashi
  • Sood, Meenakshi
  • Puthooran, Emjee
J Digit Imaging 2019 Journal Article, cited 0 times
Website
The near-lossless compression technique has a better compression ratio than lossless compression while maintaining a maximum error limit for each pixel. It takes advantage of both the lossy and lossless compression methods, providing a high compression ratio that can be used for medical images while preserving diagnostic information. The proposed algorithm uses a resolution- and modality-independent threshold-based predictor, an optimal quantization (q) level, and adaptive block size encoding. The proposed method employs a resolution independent gradient edge detector (RIGED) for removing inter-pixel redundancy, and block adaptive arithmetic encoding (BAAE) is used after quantization to remove coding redundancy. A quantizer with an optimum q level is used to implement the proposed method for high compression efficiency and better quality of the recovered images. The proposed method is implemented on volumetric 8-bit and 16-bit standard medical images and also validated on real-time 16-bit-depth images collected from government hospitals. The results show the proposed algorithm yields a high coding performance with a BPP of 1.37 and produces a high peak signal-to-noise ratio (PSNR) of 51.35 dB for the 8-bit-depth image dataset as compared with other near-lossless compression methods. Average BPP values of 3.411 and 2.609 are obtained by the proposed technique for the 16-bit standard medical image dataset and the real-time medical dataset, respectively, with maintained image quality. The improved near-lossless predictive coding technique achieves a high compression ratio without losing diagnostic information from the image.
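The near-lossless guarantee comes from uniformly quantizing the prediction residuals; the standard quantizer that bounds the per-pixel error by q (as in JPEG-LS-style coders) is sketched below as a reference, without implying it is the paper's exact formulation.

```python
import numpy as np

def quantize_residual(residual, q):
    """Map each integer prediction residual to a quantization index and
    back so that |reconstruction - residual| <= q for every pixel."""
    index = np.sign(residual) * ((np.abs(residual) + q) // (2 * q + 1))
    recon = index * (2 * q + 1)
    assert np.all(np.abs(recon - residual) <= q)
    return index.astype(int), recon
```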

Early detection of lung cancer from CT images: nodule segmentation and classification using deep learning

  • Sharma, Manu
  • Bhatt, Jignesh S
  • Joshi, Manjunath V
2018 Conference Proceedings, cited 0 times
Website
Lung cancer is one of the most common causes of cancer death worldwide. It has a low survival rate, mainly due to late diagnosis. With the hardware advancements in computed tomography (CT) technology, it is now possible to capture high-resolution images of the lung region. However, this must be augmented by efficient algorithms to detect lung cancer at earlier stages using the acquired CT images. To this end, we propose a two-step algorithm for early detection of lung cancer. Given the CT image, we first extract a patch centred on the nodule and segment the lung nodule region. We propose to use the Otsu method followed by morphological operations for the segmentation. This step enables accurate segmentation due to the use of a data-driven threshold. Unlike other methods, we perform the segmentation without using the complete contour information of the nodule. In the second step, a deep convolutional neural network (CNN) is used for better classification (malignant or benign) of the nodule present in the segmented patch. Accurate segmentation of even a tiny nodule followed by classification with a deep CNN enables the early detection of lung cancer. Experiments were conducted using 6306 CT images from the LIDC-IDRI database. We achieved a test accuracy of 84.13%, with sensitivity and specificity of 91.69% and 73.16%, respectively, clearly outperforming state-of-the-art algorithms.
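A minimal sketch of the first step as described, Otsu's data-driven threshold plus morphological clean-up on a nodule-centred 2D patch, using scikit-image; the patch handling and structuring-element sizes are assumptions rather than the paper's exact settings.

```python
# Minimal sketch (assumed parameters) of Otsu thresholding plus morphological
# clean-up on a nodule-centred 2D CT patch, using scikit-image.
import numpy as np
from skimage import filters, measure, morphology

def segment_nodule(patch):
    """Return a binary mask of the structure nearest the patch centre."""
    mask = patch > filters.threshold_otsu(patch)     # data-driven threshold
    mask = morphology.binary_opening(mask, morphology.disk(2))
    mask = morphology.binary_closing(mask, morphology.disk(2))
    labels = measure.label(mask)
    if labels.max() == 0:
        return mask                                  # nothing segmented
    centre = np.array(patch.shape) / 2.0
    regions = measure.regionprops(labels)
    best = min(regions, key=lambda r: np.linalg.norm(np.array(r.centroid) - centre))
    return labels == best.label
```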

Technical Note: In silico imaging tools from the VICTRE clinical trial

  • Sharma, Diksha
  • Graff, Christian G.
  • Badal, Andreu
  • Zeng, Rongping
  • Sawant, Purva
  • Sengupta, Aunnasha
  • Dahal, Eshan
  • Badano, Aldo
Medical Physics 2019 Journal Article, cited 0 times
Website
PURPOSE: In silico imaging clinical trials are emerging alternative sources of evidence for regulatory evaluation and are typically cheaper and faster than human trials. In this Note, we describe the set of in silico imaging software tools used in the VICTRE (Virtual Clinical Trial for Regulatory Evaluation) which replicated a traditional trial using a computational pipeline. MATERIALS AND METHODS: We describe a complete imaging clinical trial software package for comparing two breast imaging modalities (digital mammography and digital breast tomosynthesis). First, digital breast models were developed based on procedural generation techniques for normal anatomy. Second, lesions were inserted in a subset of breast models. The breasts were imaged using GPU-accelerated Monte Carlo transport methods and read using image interpretation models for the presence of lesions. All in silico components were assembled into a computational pipeline. The VICTRE images were made available in DICOM format for ease of use and visualization. RESULTS: We describe an open-source collection of in silico tools for running imaging clinical trials. All tools and source codes have been made freely available. CONCLUSION: The open-source tools distributed as part of the VICTRE project facilitate the design and execution of other in silico imaging clinical trials. The entire pipeline can be run as a complete imaging chain, modified to match needs of other trial designs, or used as independent components to build additional pipelines.

An integrated design of particle swarm optimization (PSO) with fusion of features for detection of brain tumor

  • Sharif, Muhammad
  • Amin, Javaria
  • Raza, Mudassar
  • Yasmin, Mussarat
  • Satapathy, Suresh Chandra
Pattern Recognition Letters 2020 Journal Article, cited 0 times
Brain tumors are a major cause of death in human beings. If not treated properly and in a timely manner, a tumor has a high chance of becoming malignant. Therefore, brain tumor detection at an initial stage is a significant requirement. In this work, the skull is first removed using the brain surface extraction (BSE) method. The skull-stripped image is then fed to particle swarm optimization (PSO) to achieve better segmentation. In the next step, local binary patterns (LBP) and deep features of the segmented images are extracted, and a genetic algorithm (GA) is applied for best-feature selection. Finally, an artificial neural network (ANN) and other classifiers are utilized to classify the tumor grades. The publicly available complex brain datasets RIDER and BRATS 2018 Challenge are utilized for evaluation of the method, attaining a maximum accuracy of 99%. The results are also compared with existing methods, showing that the presented technique provides improved outcomes, which is clear evidence of its effectiveness and novelty.

Segmentation of vestibular schwannoma from MRI, an open annotated dataset and baseline algorithm

  • Shapey, Jonathan
  • Kujawa, Aaron
  • Dorent, Reuben
  • Wang, Guotai
  • Dimitriadis, Alexis
  • Grishchuk, Diana
  • Paddick, Ian
  • Kitchen, Neil
  • Bradford, Robert
  • Saeed, Shakeel R.
  • Bisdas, Sotirios
  • Ourselin, Sébastien
  • Vercauteren, Tom
Scientific data 2021 Journal Article, cited 4 times
Website
Automatic segmentation of vestibular schwannomas (VS) from magnetic resonance imaging (MRI) could significantly improve clinical workflow and assist patient management. We have previously developed a novel artificial intelligence framework based on a 2.5D convolutional neural network achieving excellent results equivalent to those achieved by an independent human annotator. Here, we provide the first publicly-available annotated imaging dataset of VS by releasing the data and annotations used in our prior work. This collection contains a labelled dataset of 484 MR images collected on 242 consecutive patients with a VS undergoing Gamma Knife Stereotactic Radiosurgery at a single institution. Data includes all segmentations and contours used in treatment planning and details of the administered dose. Implementation of our automated segmentation algorithm uses MONAI, a freely-available open-source framework for deep learning in healthcare imaging. These data will facilitate the development and validation of automated segmentation frameworks for VS and may also be used to develop other multi-modal algorithmic models.

Content based medical image retrieval using topic and location model

  • Shamna, P.
  • Govindan, V. K.
  • Abdul Nazeer, K. A.
Journal of Biomedical Informatics 2019 Journal Article, cited 0 times
Website
Background and objective: Retrieval of medical images from an anatomically diverse dataset is a challenging task. The objective of our present study is to analyse an automated medical image retrieval system incorporating topic and location probabilities to enhance performance. Materials and methods: In this paper, we present an automated medical image retrieval system using a Topic and Location Model. The topic information is generated using the Guided Latent Dirichlet Allocation (GuidedLDA) method. A novel Location Model is proposed to incorporate the spatial information of visual words. We also introduce a new metric called position-weighted Precision (wPrecision) to measure the rank order of the retrieved images. Results: Experiments on two large medical image datasets - IRMA 2009 and a Multimodal dataset - revealed that the proposed method outperforms existing medical image retrieval systems in terms of Precision and Mean Average Precision. The proposed method achieved better Mean Average Precision (86.74%) compared to recent medical image retrieval systems using the Multimodal dataset with 7200 images. The proposed system achieved better Precision (97.5%) for the top ten images compared to recent medical image retrieval systems using the IRMA 2009 dataset with 14,410 images. Conclusion: Supplementing spatial details of visual words to the Topic Model enhances the retrieval efficiency of medical images from large repositories. Such automated medical image retrieval systems can be used to assist physicians in retrieving medical images with better precision compared to state-of-the-art retrieval systems.

Radiomics based likelihood functions for cancer diagnosis

  • Shakir, Hina
  • Deng, Yiming
  • Rasheed, Haroon
  • Khan, Tariq Mairaj Rasool
2019 Journal Article, cited 0 times
Website
Radiomic feature-based classifiers and neural networks have shown promising results in tumor classification. The classification performance can be further improved greatly by exploring and incorporating discriminative features towards cancer into mathematical models. In this research work, we developed two radiomics-driven likelihood models in Computed Tomography (CT) images to classify lung, colon, and head-and-neck cancer. Initially, two diagnostic radiomic signatures were derived by extracting 105 3-D features from 200 lung nodules and selecting the features with higher average scores from several supervised as well as unsupervised feature ranking algorithms. The signatures obtained from both ranking approaches were integrated into two mathematical likelihood functions for tumor classification. Validation of the likelihood functions was performed on 265 public data sets of lung, colon, and head-and-neck cancer with a high classification rate. The achieved results show the robustness of the models and suggest that diagnostic mathematical functions using general tumor phenotype can be successfully developed for cancer diagnosis.

A deep learning-based cancer survival time classifier for small datasets

  • Shakir, Hina
  • Aijaz, Bushra
  • Khan, Tariq Mairaj Rasool
  • Hussain, Muhammad
2023 Journal Article, cited 0 times
Website
Cancer survival time prediction using Deep Learning (DL) has been an emerging area of research. However, the non-availability of large annotated medical imaging databases affects the training performance of DL models, leading to their arguable usage in many clinical applications. In this research work, a neural network model is customized for a small sample space to avoid data over-fitting during DL training. A set of prognostic radiomic features is selected through an iterative process using the average of multiple dropouts, which results in back-propagated gradients with low variance, thus increasing the network's learning capability and yielding reliable feature selection and better training on a small database. The proposed classifier is further compared with the erasing feature selection method proposed in the literature for improved network training, and with other well-known classifiers, on a small sample size. The achieved results, which were statistically validated, show efficient and improved classification of cancer survival time into three intervals: up to 6 months, between 6 months and 2 years, and above 2 years. The approach has the potential to aid health care professionals in lung tumor evaluation for timely treatment and patient care.

A Deep Random Forest Approach for Multimodal Brain Tumor Segmentation

  • Shaikh, Sameer
  • Phophalia, Ashish
2021 Book Section, cited 0 times
Locating a brain tumor and its various sub-regions is crucial for treating tumors in humans. The challenge lies in taking cues for the identification of tumors having different sizes, shapes, and locations in the brain using multimodal data. Numerous works have been done in the recent past in the BRATS challenge [16]. In this work, an ensemble-based approach using Deep Random Forest [23] with an incremental learning mechanism is deployed. The proposed approach divides data and features into disjoint subsets and learns in chunks as a cascading architecture of multi-layer RFs. Each layer is also a combination of RFs that use samples of the data to learn the diversity present. Given the huge amount of data, the proposed approach is fast and parallelizable. In addition, we have proposed a new kind of Local Binary Pattern (LBP) feature with rotation. A few more handcrafted features are also designed, primarily texture-based, appearance-based, and statistics-based features. The experiments are performed only on the MICCAI BRATS 2020 dataset.
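For readers unfamiliar with the LBP features the chapter builds on, the sketch below computes a rotation-invariant uniform LBP histogram for a single slice with scikit-image; this standard variant merely stands in for the authors' custom rotated-LBP design.

```python
# Illustration only: a rotation-invariant uniform LBP histogram per slice via
# scikit-image, standing in for the chapter's custom rotated-LBP features.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(slice2d, P=8, R=1):
    """Normalized histogram of 'uniform' LBP codes (values 0..P+1).

    Assumes an integer-typed grayscale slice.
    """
    codes = local_binary_pattern(slice2d, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist
```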

Neural Network Based Brain Tumor Segmentation

  • Shah, Darshat
  • Biswas, Avishek
  • Sonpatki, Pranali
  • Chakravarty, Sunder
  • Shah, Nameeta
2022 Book Section, cited 0 times
Website
Glioblastoma is the most common and lethal primary brain tumor in adults. Magnetic resonance imaging (MRI) is a critical diagnostic tool for glioblastoma. Besides MRI, histopathology features and molecular subtypes like MGMT methylation, IDH mutation, 1p19q co-deletion, etc. are used for prognosis. Accurate tumor segmentation is a step towards fully utilizing the MRI data for radiogenomics, which will allow use of MRI to predict genomic features of glioblastoma. With accurate tumor segmentation, we can get precise quantitative information about 3D tumor volumetric features. We have developed an inference model for brain tumor segmentation using a neural network algorithm with ResNet50 as an encoding layer. A major feature of our algorithm is the use of a composite image generated from the T1, T2, T1ce and FLAIR series. We report average Dice scores of 0.88716 for the whole tumor, 0.79052 for the necrotic core, and 0.72760 for the contrast-enhancing tumor on the validation set of the BraTS 2021 Task 1 challenge. For the final unseen test data, we report average Dice scores of 0.89656 for the whole tumor, 0.83734 for the necrotic core, and 0.81162 for the contrast-enhancing tumor.

Efficient Brain Tumour Segmentation Using Co-registered Data and Ensembles of Specialised Learners

  • Shah, Beenitaben
  • Madabushi, Harish Tayyar
2021 Book Section, cited 0 times
Gliomas are the most common and aggressive form of all brain tumours, leading to a very short survival time at their highest grade. Hence, swift and accurate treatment planning is key. Magnetic resonance imaging (MRI) is a widely used imaging technique for the assessment of these tumours but the large amount of data generated by them prevents rapid manual segmentation, the task of dividing visual input into tumorous and non-tumorous regions. Hence, reliable automatic segmentation methods are required. This paper proposes, tests and validates two different approaches to achieving this. Firstly, it is hypothesised that co-registering multiple MRI modalities into a single volume will result in a more time and memory efficient approach which captures the same, if not more, information resulting in accurate segmentation. Secondly, it is hypothesised that training models independently on different MRI modalities allow models to specialise on certain labels or regions, which can then be ensembled to achieve improved predictions. These hypotheses were tested by training and evaluating 3D U-Net models on the BraTS 2020 data set. The experiments show that these hypotheses are indeed valid.

CT Evaluation of Lymph Nodes That Merge or Split during the Course of a Clinical Trial: Limitations of RECIST 1.1

  • Shafiei, A.
  • Bagheri, M.
  • Farhadi, F.
  • Apolo, A. B.
  • Biassou, N. M.
  • Folio, L. R.
  • Jones, E. C.
  • Summers, R. M.
Radiol Imaging Cancer 2021 Journal Article, cited 4 times
Website
Purpose: To compare Response Evaluation Criteria in Solid Tumors (RECIST) version 1.1 with volumetric measurement in the setting of target lymph nodes that split into two or more nodes or merge into one conglomerate node. Materials and Methods: In this retrospective study, target lymph nodes were evaluated on CT scans from 166 patients with different types of cancer; 158 of the scans came from The Cancer Imaging Archive. Each target node was measured using RECIST 1.1 criteria before and after merging or splitting, followed by volumetric segmentation. To compare RECIST 1.1 with volume, a single-dimension hypothetical diameter (HD) was determined from the nodal volume. The nodes were divided into three groups: (a) one-target merged (one target node merged with other nodes); (b) two-target merged (two neighboring target nodes merged); and (c) split node (a conglomerate node cleaved into smaller fragments). Bland-Altman analysis and t test were applied to compare RECIST 1.1 with HD. On the basis of the RECIST 1.1 concept, we compared response category changes between RECIST 1.1 and HD. Results: The data set consisted of 30 merged nodes (19 one-target merged and 11 two-target merged) and 20 split nodes (mean age for all 50 included patients, 50 years +/- 7 [standard deviation]; 38 men). RECIST 1.1, volumetric, and HD measurements indicated an increase in size in all one-target merged nodes. While volume and HD indicated an increase in size for nodes in the two-target merged group, RECIST 1.1 showed a decrease in size in all two-target merged nodes. Although volume and HD demonstrated a decrease in size of all split nodes, RECIST 1.1 indicated an increase in size in 60% (12 of 20) of the nodes. Discrepancy of the response categories between RECIST 1.1 and HD was observed in 5% (one of 19) in one-target merged, 82% (nine of 11) in two-target merged, and 55% (11 of 20) in split nodes. Conclusion: RECIST 1.1 does not optimally reflect size changes when lymph nodes merge or split. Keywords: CT, Lymphatic, Tumor Response. Supplemental material is available for this article. (c) RSNA, 2021.
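The study maps each segmented nodal volume to a single-dimension hypothetical diameter (HD). One plausible form, assuming a sphere-equivalent diameter is intended (the paper's exact definition may differ), is:

```python
# One plausible definition (an assumption) of the single-dimension
# "hypothetical diameter": the diameter of a sphere of equal volume.
import math

def hypothetical_diameter(volume_mm3: float) -> float:
    """d = 2 * (3V / (4*pi))^(1/3)."""
    return 2.0 * (3.0 * volume_mm3 / (4.0 * math.pi)) ** (1.0 / 3.0)

print(round(hypothetical_diameter(4188.79), 1))  # a ~4189 mm^3 node -> 20.0 mm
```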

Brain Tumor Detection Using ResNet Architectures

  • Sevak, Mayur
  • Dwivedi, Vedvyas
  • Shraddha Patel, Shraddha
  • Pandya, Rahul
  • Shah, Vatsalkumar Vipulkumar
2023 Conference Paper, cited 0 times
A tumor occurs in the body due to the uncontrolled and rapid growth of cells at a particular body part, which then also affects the healthy cells in the vicinity. If not treated at an early stage, it may prove fatal. Despite many important efforts and promising results, accurate predictive testing and classification of these tumors remain a daunting task. Important cues for the identification of this uncontrolled cell growth come from changes in the cancer's area, shape, and volume. The primary focus of this research is to convey a technique for early detection and recognition of tumor growth from MRIs, so that decisions about detection and the beginning of treatment can be made more promptly. This research aims at detecting tumors in the human brain using a dataset that includes MRI scans of about 110 patients with lower-grade gliomas. A deep learning approach based on the ResNet architecture is implemented for the detection of these tumors in the dataset. The model performed with significant accuracy following preprocessing and training with segmented tumor masks.

Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: The LUNA16 challenge

  • Setio, A. A. A.
  • Traverso, A.
  • de Bel, T.
  • Berens, M. S. N.
  • Bogaard, C. V. D.
  • Cerello, P.
  • Chen, H.
  • Dou, Q.
  • Fantacci, M. E.
  • Geurts, B.
  • Gugten, R. V.
  • Heng, P. A.
  • Jansen, B.
  • de Kaste, M. M. J.
  • Kotov, V.
  • Lin, J. Y.
  • Manders, Jtmc
  • Sonora-Mengana, A.
  • Garcia-Naranjo, J. C.
  • Papavasileiou, E.
  • Prokop, M.
  • Saletta, M.
  • Schaefer-Prokop, C. M.
  • Scholten, E. T.
  • Scholten, L.
  • Snoeren, M. M.
  • Torres, E. L.
  • Vandemeulebroucke, J.
  • Walasek, N.
  • Zuidhof, G. C. A.
  • Ginneken, B. V.
  • Jacobs, C.
Med Image Anal 2017 Journal Article, cited 87 times
Website
Automatic detection of pulmonary nodules in thoracic computed tomography (CT) scans has been an active area of research for the last two decades. However, there have been only a few studies that provide a comparative performance evaluation of different systems on a common database. We have therefore set up the LUNA16 challenge, an objective evaluation framework for automatic nodule detection algorithms using the largest publicly available reference database of chest CT scans, the LIDC-IDRI data set. In LUNA16, participants develop their algorithm and upload their predictions on 888 CT scans in one of two tracks: 1) the complete nodule detection track, where a complete CAD system should be developed, or 2) the false positive reduction track, where a provided set of nodule candidates should be classified. This paper describes the setup of LUNA16 and presents the results of the challenge so far. Moreover, the impact of combining individual systems on the detection performance was also investigated. It was observed that the leading solutions employed convolutional networks and used the provided set of nodule candidates. The combination of these solutions achieved an excellent sensitivity of over 95% at fewer than 1.0 false positives per scan. This highlights the potential of combining algorithms to improve the detection performance. Our observer study with four expert readers has shown that the best system detects nodules that were missed by expert readers who originally annotated the LIDC-IDRI data. We released this set of additional nodules for further development of CAD systems.

A new approach for brain tumor diagnosis system: Single image super resolution based maximum fuzzy entropy segmentation and convolutional neural network

  • Sert, Eser
  • Özyurt, Fatih
  • Doğantekin, Akif
Med Hypotheses 2019 Journal Article, cited 0 times
Website
Magnetic resonance imaging (MRI) images can be used to diagnose brain tumors. Thanks to these images, some methods have so far been proposed in order to distinguish between benign and malignant brain tumors. Many systems attempting to define these tumors are based on tissue analysis methods. However, various factors such as the quality of an MRI device, noisy images and low image resolution may decrease the quality of MRI images. To eliminate these problems, super resolution approaches are preferred as a complementary source for brain tumor images. The proposed method benefits from single image super resolution (SISR) and maximum fuzzy entropy segmentation (MFES) for brain tumor segmentation on an MRI image. Later, pre-trained ResNet architecture, which is a convolutional neural network (CNN) architecture, and support vector machine (SVM) are used to perform feature extraction and classification, respectively. It was observed in experimental studies that SISR displayed a higher performance in terms of brain tumor segmentation. Similarly, it displayed a higher performance in terms of classifying brain tumor regions as well as benign and malignant brain tumors. As a result, the present study indicated that SISR yielded an accuracy rate of 95% in the diagnosis of segmented brain tumors, which exceeds brain tumor segmentation using MFES without SISR by 7.5%.

Analysis and Application of clustering and visualization methods of computed tomography radiomic features to contribute to the characterization of patients with non-metastatic Non-small-cell lung cancer.

  • Serra, Maria Mercedes
2022 Thesis, cited 0 times
Website
Background: The lung is the most common site for cancer and has the highest worldwide cancer-related mortality. The routine study of patients with lung cancer usually includes at least one computed tomography (CT) study prior to the histopathological diagnosis. In the last decade, the development of tools that help extract quantitative measures from medical imaging, known as radiomic characteristics, has become increasingly relevant in this domain, including mathematically extracted measures of volume, shape, texture analysis, etc. Radiomics can quantify tumor phenotypic characteristics non-invasively and could potentially contribute objective elements to support the diagnosis, management and prognosis of these patients in routine clinical practice. Methodology: The LUNG1 dataset from Maastricht University, publicly available in The Cancer Imaging Archive, was obtained. Radiomic feature extraction was performed with the pyRadiomics package v3.0.1 using CT scans from 422 non-small cell lung cancer (NSCLC) patients, including manual segmentations of the gross tumor volume. A single data frame was constructed including clinical data, radiomic feature output, CT manufacturer and study date acquisition information. Exploratory data analysis, curation, feature selection, modeling and visualization were performed using R software. Model-based clustering was performed using the VarSelLCM library both with and without wrapper feature selection. Results: During exploratory data analysis, a lack of independence was found between histology and age and overall stage, and between survival curves and scanner manufacturer model. Features related to the manufacturer model were excluded from further analysis. Additional feature filtering was performed using the MRMR algorithm. When performing clustering analysis, both models, with and without variable selection, showed a significant association between the partitions generated and survival curves; the significance of this association was greater for the model with wrapper variable selection, which selected only radiomic variables. The original_shape_VoxelVolume feature showed the highest discriminative power for both models, along with log.sigma.5.0.mm.3D_glzm_LargeAreaLowGrayLevelEmphasis and wavelet_LHL_glzm_LargeAreaHighGrayLevelEmphasis. Clusters with significantly lower median survival were also related to higher clinical T stages, greater mean values of original_shape_VoxelVolume, log.sigma.5.0.mm.3D_glzm_LargeAreaLowGrayLevelEmphasis and wavelet_LHL_glzm_LargeAreaHighGrayLevelEmphasis, and lower mean wavelet.HHl_glcm_ClusterProminence. A weaker relationship was found between histology and the selected clusters. Conclusions: Potential sources of bias given by the relationship between different variables of interest and technical sources should be taken into account when analyzing this dataset. Aside from the original_shape_VoxelVolume feature, texture features applied to images with LoG and wavelet filters were found to be most significantly associated with different clinical characteristics in the present analysis. Value: This work highlights the relevance of analyzing clinical data and technical sources when performing radiomic analysis. It also goes through the different steps needed to extract, analyze and visualize a high-dimensional dataset of radiomic features, and describes associations between radiomic features and clinical variables, establishing a base for future work.

Deep Learning Architectures for Automated Image Segmentation

  • Sengupta, Debleena
2019 Thesis, cited 0 times
Website
Image segmentation is widely used in a variety of computer vision tasks, such as object localization and recognition, boundary detection, and medical imaging. This thesis proposes deep learning architectures to improve automatic object localization and boundary delineation for salient object segmentation in natural images and for 2D medical image segmentation. First, we propose and evaluate a novel dilated dense encoder-decoder architecture with a custom dilated spatial pyramid pooling block to accurately localize and delineate boundaries for salient object segmentation. The dilation offers better spatial understanding, and the dense connectivity preserves features learned at shallower levels of the network for better localization. Tested on three publicly available datasets, our architecture outperforms the state of the art on one and is very competitive on the other two. Second, we propose and evaluate a custom 2D dilated dense UNet architecture for accurate lesion localization and segmentation in medical images. This architecture can be utilized as a stand-alone segmentation framework or used as a rich feature-extracting backbone to aid other models in medical image segmentation. Our architecture outperforms all baseline models for accurate lesion localization and segmentation on a new dataset. We furthermore explore the main considerations that should be taken into account for 3D medical image segmentation, among them preprocessing techniques and specialized loss functions.

A Population-Based Digital Reference Object (DRO) for Optimizing Dynamic Susceptibility Contrast (DSC)-MRI Methods for Clinical Trials

  • Semmineh, Natenael B
  • Stokes, Ashley M
  • Bell, Laura C
  • Boxerman, Jerrold L
  • Quarles, C Chad
Tomography 2017 Journal Article, cited 5 times
Website
The standardization and broad-scale integration of dynamic susceptibility contrast (DSC)-magnetic resonance imaging (MRI) have been confounded by a lack of consensus on DSC-MRI methodology for preventing potential relative cerebral blood volume inaccuracies, including the choice of acquisition protocols and postprocessing algorithms. Therefore, we developed a digital reference object (DRO), using physiological and kinetic parameters derived from in vivo data, unique voxel-wise 3-dimensional tissue structures, and a validated MRI signal computational approach, aimed at validating image acquisition and analysis methods for accurately measuring relative cerebral blood volume in glioblastomas. To achieve DSC-MRI signals representative of the temporal characteristics, magnitude, and distribution of contrast agent-induced T1 and T2* changes observed across multiple glioblastomas, the DRO's input parameters were trained using DSC-MRI data from 23 glioblastomas (>40 000 voxels). The DRO's ability to produce reliable signals for combinations of pulse sequence parameters and contrast agent dosing schemes unlike those in the training data set was validated by comparison with in vivo dual-echo DSC-MRI data acquired in a separate cohort of patients with glioblastomas. Representative applications of the DRO are presented, including the selection of DSC-MRI acquisition and postprocessing methods that optimize CBV accuracy, determination of the impact of DSC-MRI methodology choices on sample size requirements, and the assessment of treatment response in clinical glioblastoma trials.

Dimension reduction and outlier detection of 3-D shapes derived from multi-organ CT images

  • Selle, M.
  • Kircher, M.
  • Schwennen, C.
  • Visscher, C.
  • Jung, K.
2024 Journal Article, cited 0 times
Website
BACKGROUND: Unsupervised clustering and outlier detection are important in medical research to understand the distributional composition of a collective of patients. A number of clustering methods exist, also for high-dimensional data after dimension reduction. Clustering and outlier detection may, however, become less robust or contradictory if multiple high-dimensional data sets per patient exist. Such a scenario is given when the focus is on 3-D data of multiple organs per patient, and a high-dimensional feature matrix per organ is extracted. METHODS: We use principal component analysis (PCA), t-distributed stochastic neighbor embedding (t-SNE) and multiple co-inertia analysis (MCIA) combined with bagplots to study the distribution of multi-organ 3-D data taken by computed tomography scans. After point-set registration of multiple organs from two public data sets, multiple hundred shape features are extracted per organ. While PCA and t-SNE can only be applied to each organ individually, MCIA can project the data of all organs into the same low-dimensional space. RESULTS: MCIA was the only approach here with which data of all organs could be projected into the same low-dimensional space. We studied how frequently (i.e., by how many organs) a patient was classified to belong to the inner or outer 50% of the population, or as an outlier. Outliers could only be detected with MCIA and PCA. MCIA and t-SNE were more robust in judging the distributional location of a patient in contrast to PCA. CONCLUSIONS: MCIA is more appropriate and robust in judging the distributional location of a patient in the case of multiple high-dimensional data sets per patient. It is still recommendable to apply PCA or t-SNE in parallel to MCIA to study the location of individual organs.

UDA-CT: A General Framework for CT Image Standardization

  • Selim, Md
  • Zhang, Jie
  • Fei, Baowei
  • Lewis, Matthew
  • Zhang, Guo-Qiang
  • Chen, Jin
2022 Conference Paper, cited 0 times
Large-scale CT image studies often suffer from a lack of homogeneity in radiomic characteristics because the images are acquired with scanners from different vendors or with different reconstruction algorithms. We propose a deep learning-based framework called UDA-CT to tackle this homogeneity issue by leveraging both paired and unpaired images. Using UDA-CT, CT images can be standardized both across different acquisition protocols on the same scanner and across scanners from different vendors using a similar protocol. UDA-CT incorporates recent advances in deep learning, including domain adaptation and adversarial augmentation. It includes a unique training-batch design that integrates nonstandard images and their adversarial variations to enhance model generalizability. The experimental results show that UDA-CT significantly improves the performance of cross-scanner image standardization by utilizing both paired and unpaired data.

2d view aggregation for lymph node detection using a shallow hierarchy of linear classifiers

  • Seff, Ari
  • Lu, Le
  • Cherry, Kevin M
  • Roth, Holger R
  • Liu, Jiamin
  • Wang, Shijun
  • Hoffman, Joanne
  • Turkbey, Evrim B
  • Summers, Ronald M
2014 Book Section, cited 21 times
Website
Enlarged lymph nodes (LNs) can provide important information for cancer diagnosis, staging, and measuring treatment reactions, making automated detection a highly sought goal. In this paper, we propose a new algorithmic representation that decomposes the LN detection problem into a set of 2D object detection subtasks on sampled CT slices, largely alleviating the curse of dimensionality. Our 2D detection can be effectively formulated as linear classification on a single image feature type, Histogram of Oriented Gradients (HOG), covering a moderate field of view of 45 by 45 voxels. We exploit both simple pooling and sparse linear fusion schemes to aggregate these 2D detection scores for the final 3D LN detection. In this manner, detection is more tractable and does not need to perform perfectly at the instance level (as weak hypotheses), since our aggregation process robustly harnesses collective information for LN detection. Two datasets (90 patients with 389 mediastinal LNs and 86 patients with 595 abdominal LNs) are used for validation. Cross-validation demonstrates 78.0% sensitivity at 6 false positives/volume (FP/vol.) (86.1% at 10 FP/vol.) and 73.1% sensitivity at 6 FP/vol. (87.2% at 10 FP/vol.) for the mediastinal and abdominal datasets, respectively. Our results compare favorably to previous state-of-the-art methods.
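A hedged sketch of the 2D core of this approach: HOG descriptors over a 45 x 45 field of view scored by a linear classifier, whose per-slice outputs would then be pooled for 3D detection. The HOG cell/block parameters and the toy data are assumptions, not the paper's settings.

```python
# Hedged sketch: HOG over a 45x45 patch scored by a linear classifier; the
# mean over per-slice scores stands in for the paper's aggregation schemes.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_descriptor(patch45):
    return hog(patch45, orientations=9, pixels_per_cell=(9, 9),
               cells_per_block=(2, 2))

rng = np.random.default_rng(0)
X = np.array([hog_descriptor(rng.normal(size=(45, 45))) for _ in range(40)])
y = rng.integers(0, 2, size=40)              # toy LN / non-LN labels
clf = LinearSVC().fit(X, y)

slice_scores = clf.decision_function(X[:5])  # scores from 5 sampled slices
print("pooled 3D candidate score:", slice_scores.mean())
```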

COMPUTER AIDED DETECTION OF LUNG CYSTS USING CONVOLUTIONAL NEURAL NETWORK (CNN)

  • Sebastian, Kishore
  • Devi, S.
Turkish Journal of Physiotherapy and Rehabilitation 2021 Journal Article, cited 0 times
Website
Lung cancer is one of the most lethal diseases. The survival rate is low if the diagnosis and treatment of a lung tumour are delayed, but the survival rate and the chance of saving lives can be enhanced with timely diagnosis and prompt treatment. The seriousness of the disease calls for a highly efficient system that can identify cancerous growth with a high accuracy level. A Computed Tomography (CT) scan is used to obtain a detailed picture of different body parts. However, it is difficult to scrutinize the presence and coverage of cancerous cells in the lungs using this scan, even for professionals. So a new model based on the Mumford-Shah model using convolutional neural network (CNN) classification is proposed in this paper. The proposed model provides output with higher efficiency and accuracy in less time. The seven assessment metrics used in this system are Classification Accuracy, Sensitivity, AUC, F-Measure, Specificity, Precision, Brier Score, and MCC. Finally, the results obtained using SVM are compared in terms of these seven metrics with the results obtained using Decision Tree, KNN, CNN, and Adaptive Boosting algorithms, which clearly shows the higher accuracy of the proposed system over the existing system.

Repeatability of Multiparametric Prostate MRI Radiomics Features

  • Schwier, Michael
  • van Griethuysen, Joost
  • Vangel, Mark G
  • Pieper, Steve
  • Peled, Sharon
  • Tempany, Clare
  • Aerts, Hugo J W L
  • Kikinis, Ron
  • Fennessy, Fiona M
  • Fedorov, Andriy
2019 Journal Article, cited 46 times
Website
In this study we assessed the repeatability of radiomics features on small prostate tumors using test-retest Multiparametric Magnetic Resonance Imaging (mpMRI). The premise of radiomics is that quantitative image-based features can serve as biomarkers for detecting and characterizing disease. For such biomarkers to be useful, repeatability is a basic requirement, meaning its value must remain stable between two scans, if the conditions remain stable. We investigated repeatability of radiomics features under various preprocessing and extraction configurations including various image normalization schemes, different image pre-filtering, and different bin widths for image discretization. Although we found many radiomics features and preprocessing combinations with high repeatability (Intraclass Correlation Coefficient > 0.85), our results indicate that overall the repeatability is highly sensitive to the processing parameters. Neither image normalization, using a variety of approaches, nor the use of pre-filtering options resulted in consistent improvements in repeatability. We urge caution when interpreting radiomics features and advise paying close attention to the processing configuration details of reported results. Furthermore, we advocate reporting all processing details in radiomics studies and strongly recommend the use of open source implementations.
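Repeatability in studies like this one is typically quantified with an intraclass correlation coefficient over test-retest feature values. The sketch below implements a one-way random-effects ICC(1,1) for paired measurements; whether this exact ICC form matches the paper's configuration is an assumption.

```python
# One-way random-effects ICC(1,1) for test-retest values of a single feature;
# whether this exact ICC variant matches the paper's setup is an assumption.
import numpy as np

def icc_1_1(test, retest):
    data = np.column_stack([test, retest]).astype(float)  # n subjects x k=2
    n, k = data.shape
    grand = data.mean()
    ms_between = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rng = np.random.default_rng(0)
feature = rng.normal(size=30)
print(icc_1_1(feature, feature + rng.normal(scale=0.1, size=30)))  # close to 1
```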

Wwox deficiency in human cancers: Role in treatment resistance

  • Schrock, Morgan S
2017 Thesis, cited 0 times
Website

Wwox–Brca1 interaction: role in DNA repair pathway choice

  • Schrock, MS
  • Batar, B
  • Lee, J
  • Druck, T
  • Ferguson, B
  • Cho, JH
  • Akakpo, K
  • Hagrass, H
  • Heerema, NA
  • Xia, F
Oncogene 2016 Journal Article, cited 12 times
Website

Classification of CT pulmonary opacities as perifissural nodules: reader variability

  • Schreuder, Anton
  • van Ginneken, Bram
  • Scholten, Ernst T
  • Jacobs, Colin
  • Prokop, Mathias
  • Sverzellati, Nicola
  • Desai, Sujal R
  • Devaraj, Anand
  • Schaefer-Prokop, Cornelia M
Radiology 2018 Journal Article, cited 3 times
Website

Predicting all-cause and lung cancer mortality using emphysema score progression rate between baseline and follow-up chest CT images: A comparison of risk model performances

  • Schreuder, Anton
  • Jacobs, Colin
  • Gallardo-Estrella, Leticia
  • Prokop, Mathias
  • Schaefer-Prokop, Cornelia M
  • van Ginneken, Bram
PLoS One 2019 Journal Article, cited 0 times
Website

Anatomical Segmentation of CT images for Radiation Therapy planning using Deep Learning

  • Schreier, Jan
2018 Thesis, cited 0 times
Website

Tens of images can suffice to train neural networks for malignant leukocyte detection

  • Schouten, J. P. E.
  • Matek, C.
  • Jacobs, L. F. P.
  • Buck, M. C.
  • Bosnacki, D.
  • Marr, C.
2021 Journal Article, cited 0 times
Website
Convolutional neural networks (CNNs) excel as powerful tools for biomedical image classification. It is commonly assumed that training CNNs requires large amounts of annotated data. This is a bottleneck in many medical applications where annotation relies on expert knowledge. Here, we analyze the binary classification performance of a CNN on two independent cytomorphology datasets as a function of training set size. Specifically, we train a sequential model to discriminate non-malignant leukocytes from blast cells, whose appearance in the peripheral blood is a hallmark of leukemia. We systematically vary training set size, finding that tens of training images suffice for a binary classification with an ROC-AUC over 90%. Saliency maps and layer-wise relevance propagation visualizations suggest that the network learns to increasingly focus on nuclear structures of leukocytes as the number of training images is increased. A low-dimensional tSNE representation reveals that while the two classes are separated already for a few training images, the distinction between the classes becomes clearer when more training images are used. To evaluate the performance in a multi-class problem, we annotated single-cell images from an acute lymphoblastic leukemia dataset into six different hematopoietic classes. Multi-class prediction suggests that here, too, few single-cell images suffice if differences between morphological classes are large enough. The incorporation of deep learning algorithms into clinical practice has the potential to reduce variability and cost, democratize usage of expertise, and allow for early detection of disease onset and relapse. Our approach evaluates the performance of a deep learning based cytology classifier with respect to size and complexity of the training data and the classification task.
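The core experiment, tracking classifier performance as the number of training images grows, can be mimicked in a few lines; here a logistic regression on synthetic flattened images stands in for the paper's CNN, so the numbers are illustrative only.

```python
# Illustrative only: performance vs. training-set size, with a logistic
# regression on synthetic flattened images standing in for the paper's CNN.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 28 * 28))    # placeholder single-cell images
y = rng.integers(0, 2, size=400)       # 1 = blast cell, 0 = non-malignant
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for n in (10, 20, 50, 100, 200):       # vary the number of training images
    clf = LogisticRegression(max_iter=2000).fit(X_tr[:n], y_tr[:n])
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"n={n:4d}  ROC-AUC={auc:.3f}")
```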

Dynamic susceptibility contrast MRI measures of relative cerebral blood volume as a prognostic marker for overall survival in recurrent glioblastoma: results from the ACRIN 6677/RTOG 0625 multicenter trial

  • Schmainda, K. M.
  • Zhang, Z.
  • Prah, M.
  • Snyder, B. S.
  • Gilbert, M. R.
  • Sorensen, A. G.
  • Barboriak, D. P.
  • Boxerman, J. L.
2015 Journal Article, cited 0 times
Website
Background. The study goal was to determine whether changes in relative cerebral blood volume (rCBV) derived from dynamic susceptibility contrast (DSC) MRI are predictive of overall survival (OS) in patients with recurrent glioblastoma multiforme (GBM) when measured 2, 8, and 16 weeks after treatment initiation. Methods. Patients with recurrent GBM (37/123) enrolled in ACRIN 6677/RTOG 0625, a multicenter, randomized, phase II trial of bevacizumab with irinotecan or temozolomide, consented to DSC-MRI plus conventional MRI, 21 with DSC-MRI at baseline and at least 1 postbaseline scan. Contrast-enhancing regions of interest were determined semi-automatically using pre- and postcontrast T1-weighted images. Mean tumor rCBV normalized to white matter (nRCBV) and standardized rCBV (sRCBV) were determined for these regions of interest. The OS rates for patients with positive versus negative changes from baseline in nRCBV and sRCBV were compared using Wilcoxon rank-sum and Kaplan-Meier survival estimates with log-rank tests. Results. Patients surviving at least 1 year (OS-1) had significantly larger decreases in nRCBV at week 2 (P=.0451) and sRCBV at week 16 (P=.014). Receiver operating characteristic analysis found the percent changes of nRCBV and sRCBV at week 2 and sRCBV at week 16, but not rCBV data at week 8, to be good prognostic markers for OS-1. Patients with positive change from baseline rCBV had significantly shorter OS than those with negative change at both week 2 and week 16 (P=.0015 and P=.0067 for nRCBV and P=.0251 and P=.0004 for sRCBV, respectively). Conclusions. Early decreases in rCBV are predictive of improved survival in patients with recurrent GBM treated with bevacizumab.

Quantitative Delta T1 (dT1) as a Replacement for Adjudicated Central Reader Analysis of Contrast-Enhancing Tumor Burden: A Subanalysis of the American College of Radiology Imaging Network 6677/Radiation Therapy Oncology Group 0625 Multicenter Brain Tumor Trial.

  • Schmainda, K M
  • Prah, M A
  • Zhang, Z
  • Snyder, B S
  • Rand, S D
  • Jensen, T R
  • Barboriak, D P
  • Boxerman, J L
AJNR Am J Neuroradiol 2019 Journal Article, cited 0 times
Website
BACKGROUND AND PURPOSE: Brain tumor clinical trials requiring solid tumor assessment typically rely on the 2D manual delineation of enhancing tumors by >/=2 expert readers, a time-consuming step with poor interreader agreement. As a solution, we developed quantitative dT1 maps for the delineation of enhancing lesions. This retrospective analysis compares dT1 with 2D manual delineation of enhancing tumors acquired at 2 time points during the post therapeutic surveillance period of the American College of Radiology Imaging Network 6677/Radiation Therapy Oncology Group 0625 (ACRIN 6677/RTOG 0625) clinical trial. MATERIALS AND METHODS: Patients enrolled in ACRIN 6677/RTOG 0625, a multicenter, randomized Phase II trial of bevacizumab in recurrent glioblastoma, underwent standard MR imaging before and after treatment initiation. For 123 patients from 23 institutions, both 2D manual delineation of enhancing tumors and dT1 datasets were evaluable at weeks 8 (n = 74) and 16 (n = 57). Using dT1, we assessed the radiologic response and progression at each time point. Percentage agreement with adjudicated 2D manual delineation of enhancing tumor reads and association between progression status and overall survival were determined. RESULTS: For identification of progression, dT1 and adjudicated 2D manual delineation of enhancing tumor reads were in perfect agreement at week 8, with 73.7% agreement at week 16. Both methods showed significant differences in overall survival at each time point. When nonprogressors were further divided into responders versus nonresponders/nonprogressors, the agreement decreased to 70.3% and 52.6%, yet dT1 showed a significant difference in overall survival at week 8 (P = .01), suggesting that dT1 may provide greater sensitivity for stratifying subpopulations. CONCLUSIONS: This study shows that dT1 can predict early progression comparable with the standard method but offers the potential for substantial time and cost savings for clinical trials.

Multisite Concordance of DSC-MRI Analysis for Brain Tumors: Results of a National Cancer Institute Quantitative Imaging Network Collaborative Project

  • Schmainda, KM
  • Prah, MA
  • Rand, SD
  • Liu, Y
  • Logan, B
  • Muzi, M
  • Rane, SD
  • Da, X
  • Yen, Y-F
  • Kalpathy-Cramer, J
American Journal of Neuroradiology 2018 Journal Article, cited 0 times
Website

Optimal transformations leading to normal distributions of positron emission tomography standardized uptake values

  • Scarpelli, M.
  • Eickhoff, J.
  • Cuna, E.
  • Perlman, S.
  • Jeraj, R.
Phys Med Biol 2018 Journal Article, cited 2 times
Website
The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits utilization of powerful parametric statistical models for analyzing SUV measurements. An ad-hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and to assess the effects of therapy on the optimal transformations. METHODS: The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (lambda) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients that underwent (18)F-fluorodeoxyglucose ((18)F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients that underwent (18)F-Fluorothymidine ((18)F-FLT) PET scans at our institution. RESULTS: After applying the optimal Box-Cox transformations, neither the pre nor the post treatment (18)F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for (18)F-FLT PET SUV distributions (P > 0.10). For both (18)F-FDG and (18)F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both (18)F-FDG and (18)F-FLT where a log transformation was not optimal for providing normal SUV distributions. CONCLUSION: Optimization of the Box-Cox transformation offers a solution for identifying normalizing SUV transformations when the log transformation is insufficient.
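The described optimization translates almost directly into code: sweep the Box-Cox parameter and keep the value that maximizes the Shapiro-Wilk p-value of the transformed SUVs. The grid range and the synthetic SUV sample below are assumptions.

```python
# Sweep the Box-Cox parameter and keep the lambda maximizing the Shapiro-Wilk
# p-value, as described; grid range and synthetic SUVs are assumptions.
import numpy as np
from scipy import stats

def optimal_boxcox(suv, lambdas=np.linspace(-2, 2, 81)):
    pvals = [stats.shapiro(stats.boxcox(suv, lmbda=lam)).pvalue for lam in lambdas]
    best = float(lambdas[int(np.argmax(pvals))])
    return best, stats.boxcox(suv, lmbda=best)

suv = np.random.default_rng(0).lognormal(mean=1.0, sigma=0.5, size=57)
lam, transformed = optimal_boxcox(suv)       # SUVs must be positive
print("optimal lambda:", round(lam, 2))
```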

Enhancing the REMBRANDT MRI collection with expert segmentation labels and quantitative radiomic features

  • Sayah, A.
  • Bencheqroun, C.
  • Bhuvaneshwar, K.
  • Belouali, A.
  • Bakas, S.
  • Sako, C.
  • Davatzikos, C.
  • Alaoui, A.
  • Madhavan, S.
  • Gusev, Y.
Sci Data 2022 Journal Article, cited 0 times
Website
Malignancy of the brain and CNS is unfortunately a common diagnosis. A large subset of these lesions tends to be high-grade tumors, which portend poor prognoses and low survival rates and are estimated to be the tenth leading cause of death worldwide. The complex nature of the brain tissue environment in which these lesions arise offers a rich opportunity for translational research. Magnetic Resonance Imaging (MRI) can provide a comprehensive view of the abnormal regions in the brain; therefore, its application in translational brain cancer research is considered essential for the diagnosis and monitoring of disease. Recent years have seen rapid growth in the field of radiogenomics, especially in cancer, and scientists have been able to successfully integrate the quantitative data extracted from medical images (also known as radiomics) with genomics to answer new and clinically relevant questions. In this paper, we took raw MRI scans from the REMBRANDT data collection from the public domain, and performed volumetric segmentation to identify subregions of the brain. Radiomic features were then extracted to represent the MRIs in a quantitative yet summarized format. This resulting dataset now enables further biomedical and integrative data analysis, and is being made public via the NeuroImaging Tools & Resources Collaboratory (NITRC) repository ( https://www.nitrc.org/projects/rembrandt_brain/ ).
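The radiomic feature extraction step the paper performs can be illustrated with pyradiomics; the file names and the enabled feature classes below are placeholders, not the paper's actual extraction configuration.

```python
# Illustrative pyradiomics extraction; the file names and enabled feature
# classes are placeholders, not the paper's actual configuration.
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("firstorder")
extractor.enableFeatureClassByName("glcm")

# image and tumor mask in any SimpleITK-readable format (e.g. NIfTI)
features = extractor.execute("patient01_flair.nii.gz", "patient01_mask.nii.gz")
for name, value in features.items():
    if not name.startswith("diagnostics"):
        print(name, value)
```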

Comparison of segmentation-free and segmentation-dependent computer-aided diagnosis of breast masses on a public mammography dataset

  • Sawyer Lee, Rebecca
  • Dunnmon, Jared A
  • He, Ann
  • Tang, Siyi
  • Re, Christopher
  • Rubin, Daniel L
J Biomed Inform 2021 Journal Article, cited 1 times
Website
PURPOSE: To compare machine learning methods for classifying mass lesions on mammography images that use predefined image features computed over lesion segmentations to those that leverage segmentation-free representation learning on a standard, public evaluation dataset. METHODS: We apply several classification algorithms to the public Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM), in which each image contains a mass lesion. Segmentation-free representation learning techniques for classifying lesions as benign or malignant include both a Bag-of-Visual-Words (BoVW) method and a Convolutional Neural Network (CNN). We compare classification performance of these techniques to that obtained using two different segmentation-dependent approaches from the literature that rely on specific combinations of end classifiers (e.g. linear discriminant analysis, neural networks) and predefined features computed over the lesion segmentation (e.g. spiculation measure, morphological characteristics, intensity metrics). RESULTS: We report area under the receiver operating characteristic curve (AZ) values for malignancy classification on CBIS-DDSM for each technique. We find average AZ values of 0.73 for a segmentation-free BoVW method, 0.86 for a segmentation-free CNN method, 0.75 for a segmentation-dependent linear discriminant analysis of Rubber-Band Straightening Transform features, and 0.58 for a hybrid rule-based neural network classification using a small number of hand-designed features. CONCLUSIONS: We find that malignancy classification performance on the CBIS-DDSM dataset using segmentation-free BoVW features is comparable to that of the best segmentation-dependent methods we study, but also observe that a common segmentation-free CNN model substantially and significantly outperforms each of these (p < 0.05). These results reinforce recent findings suggesting that representation learning techniques such as BoVW and CNNs are advantageous for mammogram analysis because they do not require lesion segmentation, the quality and specific characteristics of which can vary substantially across datasets. We further observe that segmentation-dependent methods achieve performance levels on CBIS-DDSM inferior to those achieved on the original evaluation datasets reported in the literature. Each of these findings reinforces the need for standardization of datasets, segmentation techniques, and model implementations in performance assessments of automated classifiers for medical imaging.
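A minimal sketch of the segmentation-free BoVW representation compared above: cluster local patch descriptors into a codebook, then describe each image by its histogram of codeword assignments. Patch size, stride, and codebook size are assumptions, and raw-pixel patches stand in for whatever local descriptor a given implementation uses.

```python
# Minimal BoVW sketch: k-means codebook over raw-pixel patches, image
# represented as a normalized histogram of codeword assignments. Patch size,
# stride and codebook size are assumptions.
import numpy as np
from sklearn.cluster import KMeans

def extract_patches(image, size=8, stride=8):
    h, w = image.shape
    return np.array([image[i:i + size, j:j + size].ravel()
                     for i in range(0, h - size + 1, stride)
                     for j in range(0, w - size + 1, stride)])

def bovw_histogram(image, codebook):
    words = codebook.predict(extract_patches(image))
    return np.bincount(words, minlength=codebook.n_clusters) / len(words)

rng = np.random.default_rng(0)
images = [rng.normal(size=(64, 64)) for _ in range(10)]   # placeholder images
codebook = KMeans(n_clusters=32, n_init=10).fit(
    np.vstack([extract_patches(im) for im in images]))
X = np.array([bovw_histogram(im, codebook) for im in images])  # -> any classifier
```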

BRAIN CANCER DETECTION FROM MRI: A MACHINE LEARNING APPROACH (TENSORFLOW)

  • Sawant, Aaswad
  • Bhandari, Mayur
  • Yadav, Ravikumar
  • Yele, Rohan
  • Bendale, Mrs Sneha
BRAIN 2018 Journal Article, cited 0 times
Website

Brain Tumour Segmentation Using Probabilistic U-Net

  • Savadikar, Chinmay
  • Kulhalli, Rahul
  • Garware, Bhushan
2021 Book Section, cited 0 times
We describe our approach towards the segmentation task of the BRATS 2020 challenge. We use the Probabilistic UNet to explore the effect of sampling different segmentation maps, which may be useful to experts when the opinions of different experts vary. We use 2D segmentation models and approach the problem in a slice-by-slice manner. To explore the possibility of designing robust models, we use self attention in the UNet, and the prior and posterior networks, and explore the effect of varying the number of attention blocks on the quality of the segmentation. Our model achieves Dice scores of 0.81898 on Whole Tumour, 0.71681 on Tumour Core, and 0.68893 on Enhancing Tumour on the Validation data, and 0.7988 on Whole Tumour, 0.7771 on Tumour Core, and 0.7249 on Enhancing Tumour on the Testing data. Our code is available at https://github.com/rahulkulhalli/BRATS2020.

Intensity-modulated irradiation for superficial tumors by overlapping irradiation fields using intensity modulators in accelerator-based BNCT

  • Sasaki, Akinori
  • Hu, Naonori
  • Takata, Takushi
  • Matsubayashi, Nishiki
  • Sakurai, Yoshinori
  • Suzuki, Minoru
  • Tanaka, Hiroki
Journal of Radiation Research 2022 Journal Article, cited 0 times
Website

Development of optimization method for uniform dose distribution on superficial tumor in an accelerator-based boron neutron capture therapy system

  • Sasaki, A.
  • Hu, N.
  • Matsubayashi, N.
  • Takata, T.
  • Sakurai, Y.
  • Suzuki, M.
  • Tanaka, H.
J Radiat Res 2023 Journal Article, cited 0 times
Website
To treat superficial tumors using accelerator-based boron neutron capture therapy (ABBNCT), a technique was investigated in which a single neutron intensity modulator was placed inside a collimator and irradiated with thermal neutrons. In large tumors, the dose was reduced at their edges. The objective was to generate a uniform dose distribution of therapeutic intensity. In this study, we developed a method for optimizing the shape of the intensity modulator and the irradiation time ratio to generate a uniform dose distribution for treating superficial tumors of various shapes. A computational tool was developed that performs Monte Carlo simulations using 424 different source combinations. We determined the shape of the intensity modulator with the highest minimum tumor dose. The homogeneity index (HI), which evaluates uniformity, was also derived. To evaluate the efficacy of this method, the dose distribution of a tumor with a diameter of 100 mm and thickness of 10 mm was evaluated. Furthermore, irradiation experiments were conducted using an ABBNCT system. The thermal neutron flux distributions, which have a considerable impact on the tumor dose, showed good agreement between experiments and calculations. Moreover, the minimum tumor dose and HI improved by 20 and 36%, respectively, compared with the irradiation case wherein a single neutron modulator was used. The proposed method improves the minimum tumor dose and uniformity. The results demonstrate the method's efficacy in ABBNCT for the treatment of superficial tumors.

An Accuracy vs. Complexity Comparison of Deep Learning Architectures for the Detection of COVID-19 Disease

  • Sarv Ahrabi, Sima
  • Scarpiniti, Michele
  • Baccarelli, Enzo
  • Momenzadeh, Alireza
Computation 2021 Journal Article, cited 0 times
Website

CCCD: Corner detection and curve reconstruction for improved 3D surface reconstruction from 2D medical images

  • Sarmah, Mriganka
  • Neelima, Arambam
2023 Journal Article, cited 0 times
The conventional approach to creating 3D surfaces from 2D medical images is the marching cubes algorithm, but it often results in rough surfaces. On the other hand, B-spline curves and nonuniform rational B-splines (NURBSs) offer a smoother alternative for 3D surface reconstruction. However, NURBSs use control points (CTPs) to define the object shape and corners play an important role in defining the boundary shape as well. Thus, in order to fill the research gap in applying corner detection (CD) methods to generate the most favorable CTPs, in this paper corner points are identified to predict organ shape. However, CTPs must be in ordered coordinate pairs. This ordering problem is resolved using curve reconstruction (CR) or chain code (CC) algorithms. Existing CR methods lead to issues like holes, while some chain codes have junction-induced errors that need preprocessing. To address the above issues, a new graph neural network (GNN)-based approach named curvature and chain code-based corner detection (CCCD) is introduced that not only orders the CTPs but also removes junction errors. The goal is to improve accuracy and reliability in generating smooth surfaces. The paper fuses well-known CD methods with a curve generation technique and compares these alternative fused methods with CCCD. CCCD is also compared against other curve reconstruction techniques to establish its superiority. For validation, CCCD's accuracy in predicting boundaries is compared with deep learning models like Polar U-Net, KiU-Net 3D, and HdenseUnet, achieving an impressive Dice score of 98.49%, even with only 39.13% boundary points.

Semi-automatic 3D lung nodule segmentation in CT using dynamic programming

  • Sargent, Dustin
  • Park, Sun Young
2017 Conference Proceedings, cited 0 times
Website

Multimodal Retrieval Framework for Brain Volumes in 3D MR Volumes

  • Sarathi, Mangipudi Partha
  • Ansari, Mohammad Ahmad
Journal of Medical and Biological Engineering 2017 Journal Article, cited 1 times
Website
The paper presents a retrieval framework for extracting similar 3D tumor volumes from magnetic resonance brain volumes in response to a query tumor volume. Similar volumes correspond to closeness in the spatial location of the brain structures. The query slice pertains to a new tumor volume of a patient, and the output slices belong to tumor volumes from previous case histories stored in the database. The framework could be of immense help to medical practitioners. It might prove to be a useful diagnostic aid for the medical expert and also serve as a teaching aid for researchers.

A scheme for patient study retrieval from 3D brain MR volumes

  • Sarathi, Mangipudi Partha
  • Ansari, MA
2015 Conference Proceedings, cited 1 times
Website
The paper presents a pipeline for case retrieval in magnetic resonance (MR) brain volumes acquired from biomedical image sensors. The framework proposed in this paper inputs a patient study consisting of MR brain image slices and outputs similar patient case studies present in the brain MR volume database. The query slice pertains to a new case, and the output slices belong to the previous case histories stored in the database. The framework could be of immense help to medical practitioners. It might prove to be a useful diagnostic aid for the medical expert and also serve as a teaching aid for students and researchers in the medical field. Apart from diagnosis, radiologists can use the tumor location to retrieve past case studies relevant to the present patient study, which can aid in the treatment of patients. The similarity distance employed in this work is the three-dimensional Hausdorff distance, which is significant as it takes into account the spatial location of the tumors. The preliminary results are encouraging and therefore the scheme could be adapted to various modalities and pathologies.

Weakly supervised temporal model for prediction of breast cancer distant recurrence

  • Sanyal, J.
  • Tariq, A.
  • Kurian, A. W.
  • Rubin, D.
  • Banerjee, I.
2021 Journal Article, cited 0 times
Website
Efficient prediction of cancer recurrence in advance may help to recruit high-risk breast cancer patients for clinical trials on time and can guide a proper treatment plan. Several machine learning approaches have been developed for recurrence prediction in previous studies, but most of them use only structured electronic health records and only a small training dataset, with limited success in clinical application. While free-text clinic notes may offer the greatest nuance and detail about a patient's clinical status, they are largely excluded in previous predictive models due to the increase in processing complexity and the need for a complex modeling framework. In this study, we developed a weak-supervision framework for breast cancer recurrence prediction in which we trained a deep learning model on a large sample of free-text clinic notes by utilizing a combination of manually curated labels and imperfect NLP-generated recurrence labels. The model was trained jointly on manually curated data from 670 patients and NLP-curated data of 8062 patients. It was validated on manually annotated data from 224 patients with recurrence and achieved 0.94 AUROC. This weak supervision approach allowed us to learn from a larger dataset using imperfect labels and ultimately provided greater accuracy compared to a smaller hand-curated dataset, with less manual effort invested in curation.
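
A minimal sketch of the weak-supervision idea: train jointly on a small hand-curated set and a much larger set of imperfect NLP-generated labels while down-weighting the latter. The random features, the 0.3 weight, and the logistic regression are placeholders, not the paper's deep learning pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Placeholder data: 670 manually curated + 8062 NLP-labelled patients.
X_manual, y_manual = rng.normal(size=(670, 20)), rng.integers(0, 2, 670)
X_nlp, y_nlp = rng.normal(size=(8062, 20)), rng.integers(0, 2, 8062)

X = np.vstack([X_manual, X_nlp])
y = np.concatenate([y_manual, y_nlp])
# Trust hand-curated labels fully; discount the noisy NLP-generated ones.
weights = np.concatenate([np.ones(670), np.full(8062, 0.3)])

clf = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=weights)
```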

Improving Generalizability to Out-of-Distribution Data in Radiogenomic Models to Predict IDH Mutation Status in Glioma Patients

  • Santinha, Joao
  • Matos, Celso
  • Papanikolaou, Nickolas
  • Figueiredo, Mario A. T.
2022 Conference Paper, cited 0 times
Website
Radiogenomics offers a potential virtual and non-invasive biopsy, being very promising in cases where genomic testing is not available or possible. However, radiogenomics models often lack generalizability, and a performance degradation on unseen data caused by differences in the MRI sequence parameters, MRI manufacturers, and scanners makes this issue worse. Therefore, selecting the radiomic features to be included in the model is of paramount importance, as a proper feature selection may lead to robustness and generalizability of the models on unseen data. This study developed and assessed a novel unsupervised, yet biologically based, feature selection method capable of improving the performance of radiogenomic models on unseen data. We assessed 63 low-grade glioma and glioblastoma multiforme patients acquired in 4 different institutions/centers and publicly available in The Cancer Genome Atlas (TCGA) and The Cancer Imaging Archive (TCIA). Radiomics features were extracted from multiparametric MRI images (pre-contrast T1-weighted - T1w, post-contrast T1-weighted - cT1w, T2-weighted - T2w, and FLAIR) and different regions-of-interest (enhancing tumor, non-enhancing tumor/necrosis, and edema). The proposed method was compared with an embedded feature selection approach commonly used in radiomics/radiogenomics studies by leaving data from a center as an independent held-out test set and tuning the model with the data from the remaining centers. The performance of the proposed method was consistently better in all test sets, showing that it improves robustness and generalizability to out-of-distribution data.

Improving performance and generalizability in radiogenomics: a pilot study for prediction of IDH1/2 mutation status in gliomas with multicentric data

  • Santinha, J.
  • Matos, C.
  • Figueiredo, M.
  • Papanikolaou, N.
J Med Imaging (Bellingham) 2021 Journal Article, cited 0 times
Website
Purpose: Radiogenomics offers a potential virtual and noninvasive biopsy. However, radiogenomics models often suffer from generalizability issues, which cause a performance degradation on unseen data. In MRI, differences in the sequence parameters, manufacturers, and scanners make this generalizability issue worse. Such image acquisition information may be used to define different environments and select robust and invariant radiomic features associated with the clinical outcome that should be included in radiomics/radiogenomics models. Approach: We assessed 77 low-grade glioma and glioblastoma multiforme patients publicly available in TCGA and TCIA. Radiomics features were extracted from multiparametric MRI images (T1-weighted, contrast-enhanced T1-weighted, T2-weighted, and fluid-attenuated inversion recovery) and different regions-of-interest (enhancing tumor, nonenhancing tumor/necrosis, and edema). A method developed to find variables that are part of causal structures was used for feature selection and compared with an embedded feature selection approach commonly used in radiomics/radiogenomics studies, across two different scenarios: (1) leaving data from a center as an independent held-out test set and tuning the model with the data from the remaining centers and (2) using stratified partitioning to obtain the training and the held-out test sets. Results: In scenario (1), the performance of the proposed methodology and the traditional embedded method was AUC: 0.75 [0.25; 1.00] versus 0.83 [0.50; 1.00], Sens.: 0.67 [0.20; 0.93] versus 0.67 [0.20; 0.93], Spec.: 0.75 [0.30; 0.95] versus 0.75 [0.30; 0.95], and MCC: 0.42 [0.19; 0.68] versus 0.42 [0.19; 0.68] for center 1 as the held-out test set. The performance of both methods for center 2 as the held-out test set was AUC: 0.64 [0.36; 0.91] versus 0.55 [0.27; 0.82], Sens.: 0.00 [0.00; 0.73] versus 0.00 [0.00; 0.73], Spec.: 0.82 [0.52; 0.94] versus 0.91 [0.62; 0.98], and MCC: -0.13 [-0.38; -0.04] versus -0.09 [-0.38; -0.02], whereas for center 3 it was AUC: 0.80 [0.62; 0.95] versus 0.89 [0.56; 0.96], Sens.: 0.86 [0.48; 0.97] versus 0.86 [0.48; 0.97], Spec.: 0.72 [0.54; 0.85] versus 0.79 [0.61; 0.90], and MCC: 0.47 [0.41; 0.53] versus 0.55 [0.48; 0.60]. For center 4, the performance of both methods was AUC: 0.77 [0.51; 1.00] versus 0.75 [0.47; 0.97], Sens.: 0.53 [0.30; 0.75] versus 0.00 [0.00; 0.15], Spec.: 0.71 [0.35; 0.91] versus 0.86 [0.48; 0.97], and MCC: 0.23 [0.16; 0.31] versus -0.32 [-0.46; -0.20]. In scenario (2), the performance of these methods was AUC: 0.89 [0.71; 1.00] versus 0.79 [0.58; 0.94], Sens.: 0.86 [0.80; 0.92] versus 0.43 [0.15; 0.74], Spec.: 0.87 [0.62; 0.96] versus 0.87 [0.62; 0.96], and MCC: 0.70 [0.60; 0.77] versus 0.33 [0.24; 0.42]. Conclusions: This proof-of-concept study demonstrated good performance by the proposed feature selection method in the majority of the studied scenarios, as it promotes the robustness of the features included in the models and the models' generalizability by making use of imaging data from different scanners or acquired with different sequence parameters.
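
The metrics reported above (AUC, sensitivity, specificity, MCC) can be reproduced from a model's predictions with standard scikit-learn calls; the toy labels below are purely illustrative.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, matthews_corrcoef, roc_auc_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                   # ground-truth status
y_prob = np.array([0.2, 0.4, 0.7, 0.9, 0.3, 0.1, 0.8, 0.6])   # predicted scores
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("AUC :", roc_auc_score(y_true, y_prob))
print("Sens:", tp / (tp + fn))   # sensitivity = recall on positives
print("Spec:", tn / (tn + fp))   # specificity = recall on negatives
print("MCC :", matthews_corrcoef(y_true, y_pred))
```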

Development of End-to-End AI–Based MRI Image Analysis System for Predicting IDH Mutation Status of Patients with Gliomas: Multicentric Validation

  • Santinha, João
  • Katsaros, Vasileios
  • Stranjalis, George
  • Liouta, Evangelia
  • Boskos, Christos
  • Matos, Celso
  • Viegas, Catarina
  • Papanikolaou, Nickolas
2024 Journal Article, cited 0 times
Radiogenomics has shown potential to predict genomic phenotypes from medical images. The development of models using standard-of-care pre-operative MRI images, as opposed to advanced MRI images, enables a broader reach of such models. In this work, a radiogenomics model for IDH mutation status prediction from standard-of-care MRIs in patients with glioma was developed and validated using multicentric data. A cohort of 142 (wild-type: 32.4%) patients with glioma retrieved from the TCIA/TCGA was used to train a logistic regression model to predict the IDH mutation status. The model was evaluated using retrospective data collected in two distinct hospitals, comprising 36 (wild-type: 63.9%) and 53 (wild-type: 75.5%) patients. Model development utilized ROC analysis. Model discrimination and calibration were used for validation. The model yielded an AUC of 0.741 vs. 0.716 vs. 0.938, a sensitivity of 0.784 vs. 0.739 vs. 0.875, and a specificity of 0.657 vs. 0.692 vs. 1.000 on the training, test cohort 1, and test cohort 2, respectively. The assessment of model fairness suggested an unbiased model for age and sex, and calibration tests showed a p < 0.05. These results indicate that the developed model allows the prediction of the IDH mutation status in gliomas using standard-of-care MRI images and does not appear to hold sex and age biases.

Hierarchical Compositionality in Hyperbolic Space for Robust Medical Image Segmentation

  • Santhirasekaram, Ainkaran
  • Winkler, Mathias
  • Rockall, Andrea
  • Glocker, Ben
2024 Book Section, cited 0 times
Deep learning based medical image segmentation models need to be robust to domain shifts and image distortion for the safe translation of these models into clinical practice. The most popular methods for improving robustness are centred around data augmentation and adversarial training. Many image segmentation tasks exhibit regular structures with only limited variability. We aim to exploit this notion by learning a set of base components in the latent space whose composition can account for the entire structural variability of a specific segmentation task. We enforce a hierarchical prior in the composition of the base components and consider the natural geometry in which to build our hierarchy. Specifically, we embed the base components on a hyperbolic manifold which we claim leads to a more natural composition. We demonstrate that our method improves model robustness under various perturbations and in the task of single domain generalisation.

A Sheaf Theoretic Perspective for Robust Prostate Segmentation

  • Santhirasekaram, Ainkaran
  • Pinto, Karen
  • Winkler, Mathias
  • Rockall, Andrea
  • Glocker, Ben
2023 Book Section, cited 0 times
Deep learning based methods have become the most popular approach for prostate segmentation in MRI. However, domain variations due to the complex acquisition process result in textural differences as well as imaging artefacts which significantly affects the robustness of deep learning models for prostate segmentation across multiple sites. We tackle this problem by using multiple MRI sequences to learn a set of low dimensional shape components whose combinatorially large learnt composition is capable of accounting for the entire distribution of segmentation outputs. We draw on the language of cellular sheaf theory to model compositionality driven by local and global topological correctness. In our experiments, our method significantly improves the domain generalisability of anatomical and tumour segmentation of the prostate. Code is available at https://github.com/AinkaranSanthi/A-Sheaf-Theoretic-Perspective-for-Robust-Segmentation.git.

A Modern Approach to Osteosarcoma Tumor Identification Through Integration of FP-Growth, Transfer Learning and Stacking Model

  • Sanmartín, John
  • Azuero, Paulina
  • Hurtado, Remigio
2024 Book Section, cited 0 times
Website
The early detection of cancer through radiographs is crucial for identifying indicative signs of its presence or status. However, the analysis of histological images of osteosarcoma faces significant challenges due to discrepancies among pathologists, intra-class variations, inter-class similarities, complex contexts, and data noise. In this article, we present a novel deep learning method that helps address these issues. The architecture of our model consists of the following phases: 1) Dataset construction: advanced image processing techniques such as dimensionality reduction, identification of frequent patterns through unsupervised learning (FP-Growth), and data augmentation are applied in this phase. 2) Stacking model: we apply a stacking model that combines the strengths of two models: convolutional neural networks (CNN) with transfer learning, allowing us to leverage pre-trained knowledge from related datasets, and a Random Forest (RF) model to enhance the classification and diagnosis of osteosarcoma images. The models were trained on a dataset of publicly available images from The Cancer Imaging Archive (TCIA) [12]. The accuracy of our models is evaluated using classification metrics such as Accuracy, F1 Score, Precision, and Recall. This work provides a solid foundation for ongoing innovation in histology and the potential to apply and adapt this approach to broader clinical challenges in the future.

Regression based overall survival prediction of glioblastoma multiforme patients using a single discovery cohort of multi-institutional multi-channel MR images

  • Sanghani, Parita
  • Ang, Beng Ti
  • King, Nicolas Kon Kam
  • Ren, Hongliang
2019 Journal Article, cited 0 times
Website
Glioblastoma multiforme (GBM) are malignant brain tumors, associated with poor overall survival (OS). This study aims to predict OS of GBM patients (in days) using a regression framework and assess the impact of tumor shape features on OS prediction. Multi-channel MR image derived texture features, tumor shape, and volumetric features, and patient age were obtained for 163 GBM patients. In order to assess the impact of tumor shape features on OS prediction, two feature sets, with and without tumor shape features, were created. For the feature set with tumor shape features, the mean prediction error (MPE) was 14.6 days and its 95% confidence interval (CI) was 195.8 days. For the feature set excluding shape features, the MPE was 17.1 days and its 95% CI was observed to be 212.7 days. The coefficient of determination (R2) value obtained for the feature set with shape features was 0.92, while it was 0.90 for the feature set excluding shape features. Although marginal, inclusion of shape features improves OS prediction in GBM patients. The proposed OS prediction method using regression provides good accuracy and overcomes the limitations of GBM OS classification, like choosing data-derived or pre-decided thresholds to define the OS groups.

4D-CBCT Registration with a FBCT-derived Plug-and-Play Feasibility Regularizer

  • Sang, Y.
  • Ruan, D.
2021 Conference Paper, cited 0 times
Website
Deformable registration of phase-resolved lung images is an important procedure to appreciate respiratory motion and enhance image quality. Compared to high-resolution fan-beam CTs (FBCTs), cone-beam CTs (CBCTs) are more readily available for on-table acquisition in companion with treatment. However, CBCT registration is challenging because classic regularization energies in conventional methods usually cannot overcome the strong artifacts and the lack of structural details. In this study, we propose to learn an implicit feasibility prior of respiratory motion and incorporate it in a plug-and-play (PnP) fashion into the training of an unsupervised image registration network to improve registration accuracy and robustness to noise and artifacts. In particular, we propose a novel approach to develop a feasibility descriptor from a set of deformation vector fields (DVFs) generated from FBCTs. Subsequently, this FBCT-derived feasibility descriptor was used as a spatially variant regularizer on the DVF Jacobian during the unsupervised training for 4D-CBCT registration. In doing so, the higher-quality, higher-confidence information from FBCT is transferred into the much more challenging problem of CBCT registration, without explicit FB-CB synthesis. The method was evaluated using manually identified landmarks on real CBCTs and automatically detected landmarks on simulated CBCTs. The method showed good robustness to noise and artifacts and generated physically more feasible DVFs. The target registration errors on the real and simulated data were (1.63 ± 0.98) and (2.16 ± 1.91) mm, respectively, significantly better than the classic bending energy regularization in both the conventional method in SimpleElastix and the unsupervised network. The average registration time was 0.04 s.

Real-time interactive holographic 3D display with a 360 degrees horizontal viewing zone

  • Sando, Yusuke
  • Satoh, Kazuo
  • Barada, Daisuke
  • Yatagai, Toyohiko
Appl Opt 2019 Journal Article, cited 0 times
Website
To realize a real-time interactive holographic three-dimensional (3D) display system, we synthesize a set of 24 full high-definition (HD) binary computer-generated holograms (CGHs) based on a 3D fast-Fourier-transform-based approach. These 24 CGHs are streamed into a digital micromirror device (DMD) as a single 24-bit image at 60 Hz: 1440 CGHs are synthesized in less than a second. Continual updates of the CGHs displayed on the DMD and synchronization with a rotating mirror enlarges the horizontal viewing zone to 360 degrees using a time-division approach. We successfully demonstrate interactive manipulation, such as object rotation, rendering mode switching, and threshold value alteration, for a medical dataset of a human head obtained by X-ray computed tomography.

Data augmentation using generative adversarial networks (CycleGAN) to improve generalizability in CT segmentation tasks

  • Sandfort, Veit
  • Yan, Ke
  • Pickhardt, Perry J
  • Summers, Ronald M
2019 Journal Article, cited 0 times
Website
Labeled medical imaging data is scarce and expensive to generate. To achieve generalizable deep learning models large amounts of data are needed. Standard data augmentation is a method to increase generalizability and is routinely performed. Generative adversarial networks offer a novel method for data augmentation. We evaluate the use of CycleGAN for data augmentation in CT segmentation tasks. Using a large image database we trained a CycleGAN to transform contrast CT images into non-contrast images. We then used the trained CycleGAN to augment our training using these synthetic non-contrast images. We compared the segmentation performance of a U-Net trained on the original dataset compared to a U-Net trained on the combined dataset of original data and synthetic non-contrast images. We further evaluated the U-Net segmentation performance on two separate datasets: The original contrast CT dataset on which segmentations were created and a second dataset from a different hospital containing only non-contrast CTs. We refer to these 2 separate datasets as the in-distribution and out-of-distribution datasets, respectively. We show that in several CT segmentation tasks performance is improved significantly, especially in out-of-distribution (noncontrast CT) data. For example, when training the model with standard augmentation techniques, performance of segmentation of the kidneys on out-of-distribution non-contrast images was dramatically lower than for in-distribution data (Dice score of 0.09 vs. 0.94 for out-of-distribution vs. in-distribution data, respectively, p < 0.001). When the kidney model was trained with CycleGAN augmentation techniques, the out-of-distribution (non-contrast) performance increased dramatically (from a Dice score of 0.09 to 0.66, p < 0.001). Improvements for the liver and spleen were smaller, from 0.86 to 0.89 and 0.65 to 0.69, respectively. We believe this method will be valuable to medical imaging researchers to reduce manual segmentation effort and cost in CT imaging.
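
Schematically, the augmentation amounts to pushing the contrast images through the trained CycleGAN generator and appending the resulting synthetic non-contrast volumes, with their original labels, to the training set. In this sketch `generator` is a hypothetical stand-in for the trained network.

```python
import numpy as np

def generator(contrast_batch):
    """Hypothetical stand-in for a trained CycleGAN contrast-to-non-contrast
    generator; in the paper this is a neural network."""
    return contrast_batch * 0.8 + 0.1  # placeholder intensity transform

contrast_imgs = np.random.rand(100, 256, 256)        # original contrast CTs
masks = np.random.randint(0, 2, (100, 256, 256))     # their segmentations

synthetic = generator(contrast_imgs)                 # synthetic non-contrast CTs
# Labels carry over unchanged: CycleGAN alters appearance, not anatomy.
X_train = np.concatenate([contrast_imgs, synthetic])
y_train = np.concatenate([masks, masks])
```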

Resolving the molecular complexity of brain tumors through machine learning approaches for precision medicine

  • Sandanaraj, Edwin
2019 Thesis, cited 0 times
Website
Glioblastoma (GBM) tumors are highly aggressive malignant brain tumors and are resistant to conventional therapies. The Cancer Genome Atlas (TCGA) efforts distinguished histologically similar GBM tumors into unique molecular subtypes. The World Health Organization (WHO) has also since incorporated key molecular indicators such as IDH mutations and 1p/19q co-deletions in the clinical classification scheme. The National Neuroscience Institute (NNI) Brain Tumor Resource distinguishes itself as the exclusive collection of patient tumors with corresponding live cells capable of re-creating the full spectrum of the original patient tumor molecular heterogeneity. These cells are thus important to re-create “mouse-patient tumor replicas” that can be prospectively tested with novel compounds, yet have retrospective clinical history, transcriptomic data and tissue paraffin blocks for data mining. My thesis aims to establish a computational framework for the molecular subtyping of brain tumors using machine learning approaches. The applicability of the empirical Bayes model has been demonstrated in the integration of various transcriptomic databases. We utilize predictive algorithms such as template-based, centroid-based, connectivity map (CMAP) and recursive feature elimination combined with random forest approaches to stratify primary tumors and GBM cells. These subtyping approaches serve as key factors for the development of predictive models and eventually, improving precision medicine strategies. We validate the robustness and clinical relevance of our Brain Tumor Resource by evaluating two critical pathways for GBM maintenance. We identify a sialyltransferase enzyme (ST3Gal1) transcriptomic program contributing to tumorigenicity and tumor cell invasiveness. Further, we generate a STAT3 functionally-tuned signature and demonstrate its pivotal role in patient prognosis and chemoresistance. We show that IGF1-R mediates resistance in non-responders to STAT3 inhibitors. Taken together, our studies demonstrate the application of machine learning approaches in revealing molecular insights into brain tumors and subsequently, the translation of these integrative analyses into more effective targeted therapies in the clinics.

Morphological and Fractal Properties of Brain Tumors

  • Sánchez, J.
  • Martin-Landrove, M.
2022 Journal Article, cited 0 times
Website
Tumor interface dynamics is a complex process determined by cell proliferation and invasion to neighboring tissues. Parameters extracted from the tumor interface fluctuations allow for the characterization of the particular growth model, which could be relevant for an appropriate diagnosis and the correspondent therapeutic strategy. Previous work, based on scaling analysis of the tumor interface, demonstrated that gliomas strictly behave as it is proposed by the Family-Vicsek ansatz, which corresponds to a proliferative-invasive growth model, while for meningiomas and acoustic schwannomas, a proliferative growth model is more suitable. In the present work, other morphological and dynamical descriptors are used as a complementary view, such as surface regularity, one-dimensional fluctuations represented as ordered series and bi-dimensional fluctuations of the tumor interface. These fluctuations were analyzed by Detrended Fluctuation Analysis to determine generalized fractal dimensions. Results indicate that tumor interface fractal dimension, local roughness exponent and surface regularity are parameters that discriminate between gliomas and meningiomas/schwannomas.

Customized Deep Learning Classifier for Detection of Acute Lymphoblastic Leukemia Using Blood Smear Images

  • Sampathila, Niranjana
  • Chadaga, Krishnaraj
  • Goswami, Neelankit
  • Chadaga, Rajagopala P
  • Pandya, Mayur
  • Prabhu, Srikanth
  • Bairy, Muralidhar G
  • Katta, Swathi S
  • Bhat, Devadas
  • Upadya, Sudhakara P
2022 Journal Article, cited 0 times
Website

Lung Cancer Detection on CT Scan Images Using Artificial Neural Network

  • SAMMEDH, MP
2021 Thesis, cited 0 times
Website
Image processing techniques are widely used in several clinical areas to enhance images for early detection and treatment, where the time factor is critical for finding abnormalities in target images, particularly in various cancerous tumors such as lung cancer. Image quality and accuracy are the core components of this work; image quality assessment and improvement depend on the enhancement stage, in which filter-based pre-processing techniques are applied. Following segmentation, an improved region of the object of interest is obtained and used as the foundation for feature extraction. A normality analysis is then made based on general features. In this work, the principal features identified for accurate image analysis are pixel percentages, which help to detect the cancerous nodules present in the CT scan images and to distinguish whether an image contains a benign or a malignant nodule.

Classification of Lung CT Images using BRISK Features

  • Sambasivarao, B.
  • Prathiba, G.
International Journal of Engineering and Advanced Technology (IJEAT) 2019 Journal Article, cited 0 times
Website
Lung cancer is a major cause of death in humans. To increase the survival rate, early detection of cancer is required. Lung nodules are mainly of two types: cancerous (malignant) and non-cancerous (benign). In this paper, work is done on lung images obtained from the Society of Photographic Instrumentation Engineers (SPIE) database, which contains normal, benign and malignant images. In this work, 300 images from the database are used, of which 150 are benign and 150 are malignant. Feature points of lung tumor images are extracted using Binary Robust Invariant Scalable Keypoints (BRISK). BRISK attains matching quality comparable to state-of-the-art algorithms at much lower computational cost. BRISK divides the pairs of pixels surrounding a keypoint into two subsets: short-distance and long-distance pairs. The orientation of the feature point is calculated from local intensity gradients of the long-distance pairs, and the short-distance pairs are then rotated according to this orientation. These BRISK features are used by a classifier to classify lung tumors as either benign or malignant. Performance is evaluated by calculating the accuracy.
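
BRISK keypoints and descriptors of the kind described above can be extracted with OpenCV; the random image and the mean-pooled feature vector below are placeholders for a real CT slice and the paper's classification features.

```python
import cv2
import numpy as np

# Placeholder input; a real pipeline would load a CT slice from the SPIE dataset.
img = (np.random.rand(256, 256) * 255).astype(np.uint8)

brisk = cv2.BRISK_create()
keypoints, descriptors = brisk.detectAndCompute(img, None)

# One simple way to turn the variable-length set of 64-byte binary descriptors
# into a fixed-length vector for a downstream classifier.
feature_vector = (descriptors.mean(axis=0)
                  if descriptors is not None else np.zeros(64))
```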

Identifying key radiogenomic associations between DCE-MRI and micro-RNA expressions for breast cancer

  • Samala, Ravi K
  • Chan, Heang-Ping
  • Hadjiiski, Lubomir
  • Helvie, Mark A
  • Kim, Renaid
2017 Conference Proceedings, cited 1 times
Website
Understanding the key radiogenomic associations for breast cancer between DCE-MRI and micro-RNA expressions is the foundation for the discovery of radiomic features as biomarkers for assessing tumor progression and prognosis. We conducted a study to analyze the radiogenomic associations for breast cancer using the TCGA-TCIA data set. The core idea that tumor etiology is a function of the behavior of miRNAs is used to build the regression models. The associations based on regression are analyzed for three study outcomes: diagnosis, prognosis, and treatment. The diagnosis group consists of miRNAs associated with clinicopathologic features of breast cancer and significant aberration of expression in breast cancer patients. The prognosis group consists of miRNAs which are closely associated with tumor suppression and regulation of cell proliferation and differentiation. The treatment group consists of miRNAs that contribute significantly to the regulation of metastasis thereby having the potential to be part of therapeutic mechanisms. As a first step, important miRNA expressions were identified and their ability to classify the clinical phenotypes based on the study outcomes was evaluated using the area under the ROC curve (AUC) as a figure-of-merit. The key mapping between the selected miRNAs and radiomic features were determined using least absolute shrinkage and selection operator (LASSO) regression analysis within a two-loop leave-one-out cross-validation strategy. These key associations indicated a number of radiomic features from DCE-MRI to be potential biomarkers for the three study outcomes.

Spatial Organization and Molecular Correlation of Tumor-Infiltrating Lymphocytes Using Deep Learning on Pathology Images

  • Saltz, J.
  • Gupta, R.
  • Hou, L.
  • Kurc, T.
  • Singh, P.
  • Nguyen, V.
  • Samaras, D.
  • Shroyer, K. R.
  • Zhao, T.
  • Batiste, R.
  • Van Arnam, J.
  • Cancer Genome Atlas Research, Network
  • Shmulevich, I.
  • Rao, A. U. K.
  • Lazar, A. J.
  • Sharma, A.
  • Thorsson, V.
Cell Rep 2018 Journal Article, cited 23 times
Website
Beyond sample curation and basic pathologic characterization, the digitized H&E-stained images of TCGA samples remain underutilized. To highlight this resource, we present mappings of tumor-infiltrating lymphocytes (TILs) based on H&E images from 13 TCGA tumor types. These TIL maps are derived through computational staining using a convolutional neural network trained to classify patches of images. Affinity propagation revealed local spatial structure in TIL patterns and correlation with overall survival. TIL map structural patterns were grouped using standard histopathological parameters. These patterns are enriched in particular T cell subpopulations derived from molecular measures. TIL densities and spatial structure were differentially enriched among tumor types, immune subtypes, and tumor molecular subtypes, implying that spatial infiltrate state could reflect particular tumor cell aberration states. Obtaining spatial lymphocytic patterns linked to the rich genomic characterization of TCGA samples demonstrates one use for the TCGA image archives with insights into the tumor-immune microenvironment.

Towards Generation, Management, and Exploration of Combined Radiomics and Pathomics Datasets for Cancer Research

  • Saltz, Joel
  • Almeida, Jonas
  • Gao, Yi
  • Sharma, Ashish
  • Bremer, Erich
  • DiPrima, Tammy
  • Saltz, Mary
  • Kalpathy-Cramer, Jayashree
  • Kurc, Tahsin
AMIA Summits on Translational Science Proceedings 2017 Journal Article, cited 4 times
Website
Cancer is a complex multifactorial disease state and the ability to anticipate and steer treatment results will require information synthesis across multiple scales from the host to the molecular level. Radiomics and Pathomics, where image features are extracted from routine diagnostic Radiology and Pathology studies, are also evolving as valuable diagnostic and prognostic indicators in cancer. This information explosion provides new opportunities for integrated, multi-scale investigation of cancer, but also mandates a need to build systematic and integrated approaches to manage, query and mine combined Radiomics and Pathomics data. In this paper, we describe a suite of tools and web-based applications towards building a comprehensive framework to support the generation, management and interrogation of large volumes of Radiomics and Pathomics feature sets and the investigation of correlations between image features, molecular data, and clinical outcome.

High Level Mammographic Information Fusion For Real World Ontology Population

  • Salem, Yosra Ben
  • Idodi, Rihab
  • Ettabaa, Karim Saheb
  • Hamrouni, Kamel
  • Solaiman, Basel
Journal of Digital Information Management 2017 Journal Article, cited 1 times
Website
In this paper, we propose a novel approach for ontology instantiation from real data related to the mammographic domain. In our study, we are interested in handling two modalities of mammographic images: mammography and breast MRI. First, we propose to model both images' content in ontological representations, since ontologies allow the description of the objects from a common perspective. In order to overcome the ambiguity problem of representing the image's entities, we propose to take advantage of possibility theory applied to the ontological representation. Second, both locally generated ontologies are merged into a unique formal representation with the use of two similarity measures: a syntactic measure and a possibilistic measure. The candidate instances are, finally, used for populating the global domain ontology in order to enrich the mammographic knowledge base. The approach was validated on a real-world domain and the results were evaluated in terms of precision and recall by an expert.

Lung Images Segmentation and Classification Based on Deep Learning: A New Automated CNN Approach

  • Salama, Wessam M.
  • Aly, Moustafa H.
  • Elbagoury, Azza M.
Journal of Physics: Conference Series 2021 Journal Article, cited 0 times
Website
Lung cancer became a significant health problem worldwide over the past decades. This paper introduces a new generalized framework for lung cancer detection where many different strategies are explored for the classification. The ResNet50 model is applied to classify CT lung images into benign or malignant. Also, the U-Net, which is one of the most used architectures in deep learning for image segmentation, is employed to segment CT images before classification to increase system performance. Moreover, Image Size Dependent Normalization Technique (ISDNT) and Wiener filter are utilized as the preprocessing phase to enhance the images and suppress the noise. Our proposed framework, which comprises preprocessing, segmentation and classification phases, is applied on two databases: Lung Nodule Analysis 2016 (Luna 16) and National Lung Screening Trial (NLST). Data augmentation is applied to solve the problem of lung CT image deficiency, and consequently, the overfitting of deep models is avoided. The classification results show that the preprocessing for the CT lung image as the input for the ResNet50-U-Net hybrid model achieves the best performance. The proposed model achieves 98.98% accuracy (ACC), 98.65% area under the ROC curve (AUC), 98.99% sensitivity (Se), 98.43% precision (Pr), 98.86% F1-score and 1.9876 s computational time.

Deep Learning Methods for Brain Tumor Segmentation

  • Sakli, Marwen
  • Essid, Chaker
  • Ben Salah, Bassem
  • Sakli, Hedi
2023 Book Section, cited 0 times
MRI, or magnetic resonance imaging, is one of the most recent medical imaging techniques. It allows one to visualize organs and soft tissues in different planes of space with great precision. A single person's brain is scanned using MRI in several slices through a 3D anatomical viewpoint. However, it is difficult and time-consuming to manually segment brain tumors from MRI images. Furthermore, automatic segmentation of brain tumors using these images is noninvasive, avoiding biopsy and improving the safety of the diagnosis procedure. This chapter enriches the body of knowledge in the field of neuroscience. It describes a highly automated technique for segmenting brain tumors in multimodal MRI based on deep neural networks. An experimental study was carried out using the Brain Tumor Segmentation (BraTS 2020) dataset as a proof of concept. The accuracy, precision, sensitivity, and specificity exceed 99.3%. In addition, the achieved intersection over union and loss are 85.69% and 0.0177. The obtained results based on the proposed method are validated by comparing them to real values found in the state of the art.

A novel beam stopper-based approach for scatter correction in digital planar radiography

  • Sakaltras, N.
  • Pena, A.
  • Martinez, C.
  • Desco, M.
  • Abella, M.
2023 Journal Article, cited 0 times
Website
X-ray scatter in planar radiography degrades the contrast resolution of the image, thus reducing its diagnostic utility. Antiscatter grids partially block scattered photons at the cost of increasing the dose delivered by two- to four-fold and posing geometrical restrictions that hinder their use for other acquisition settings, such as portable radiography. The few software-based approaches investigated for planar radiography mainly estimate the scatter map from a low-frequency version of the image. We present a novel method for scatter correction in planar imaging based on direct patient measurements. Samples from the shadowed regions of an additional partially obstructed projection acquired with a beam stopper placed between the X-ray source and the patient are used to estimate the scatter map. Evaluation with simulated and real data showed an increase in contrast resolution for both lung and spine and recovery of ground truth values superior to those of three recently proposed methods. Our method avoids the biases of post-processing methods and yields results similar to those for an antiscatter grid while removing geometrical restrictions at around half the radiation dose. It can be used in unconventional imaging techniques, such as portable radiography, where training datasets needed for deep-learning approaches would be very difficult to obtain.

Brain tumor detection and segmentation: Interactive framework with a visual interface and feedback facility for dynamically improved accuracy and trust

  • Sailunaz, K.
  • Bestepe, D.
  • Alhajj, S.
  • Ozyer, T.
  • Rokne, J.
  • Alhajj, R.
PLoS One 2023 Journal Article, cited 0 times
Website
Brain cancers caused by malignant brain tumors are one of the most fatal cancer types, with a low survival rate mostly due to the difficulties in early detection. Medical professionals therefore use various invasive and non-invasive methods for detecting brain tumors at earlier stages, enabling early treatment. The main non-invasive methods for brain tumor diagnosis and assessment are brain imaging like computed tomography (CT), positron emission tomography (PET) and magnetic resonance imaging (MRI) scans. In this paper, the focus is on detection and segmentation of brain tumors from 2D and 3D brain MRIs. For this purpose, a complete automated system with a web application user interface is described which detects and segments brain tumors with more than 90% accuracy and Dice scores. The user can upload brain MRIs or can access brain images from hospital databases to check for the presence or absence of a brain tumor from brain MRI features and to extract the tumor region precisely from the brain MRI using deep neural networks like CNN, U-Net and U-Net++. The web application also provides an option for entering feedback on the results of the detection and segmentation, allowing healthcare professionals to add more precise information that can be used to train the model for better future predictions and segmentations.

Spatial-channel attention-based stochastic neighboring embedding pooling and long-short-term memory for lung nodules classification

  • Saihood, Ahmed
  • Karshenas, Hossein
  • Nilchi, Ahmad Reza Naghsh
2022 Conference Paper, cited 0 times
Handling lesion size and location variance in lung nodules is one of the main shortcomings of traditional convolutional neural networks (CNNs). The pooling layer within CNNs reduces the resolution of the feature maps, causing loss of small local details that then need processing by the following layers. In this article, we propose a new pooling-based stochastic neighboring embedding method (SNE-pooling) that is able to handle the long-range dependency property of lung nodules. Further, an attention-based SNE-pooling model is proposed that can perform spatial and channel attention. The experimental results conducted on the LIDC and LUNGx datasets show that the attention-based SNE-pooling model significantly improves performance over the state of the art.

Total Variation for Image Denoising Based on a Novel Smart Edge Detector: An Application to Medical Images

  • Said, Ahmed Ben
  • Hadjidj, Rachid
  • Foufou, Sebti
Journal of Mathematical Imaging and Vision 2018 Journal Article, cited 0 times
Website

Improved pulmonary lung nodules risk stratification in computed tomography images by fusing shape and texture features in a machine-learning paradigm

  • Sahu, Satya Prakash
  • Londhe, Narendra D.
  • Verma, Shrish
  • Singh, Bikesh K.
  • Banchhor, Sumit Kumar
International Journal of Imaging Systems and Technology 2020 Journal Article, cited 0 times
Website
Lung cancer is one of the deadliest cancers in both men and women. Accurate and early diagnosis of pulmonary lung nodules is critical. This study presents an accurate computer-aided diagnosis (CADx) system for risk stratification of pulmonary nodules in computed tomography (CT) lung images by fusing shape and texture-based features in a machine-learning (ML) based paradigm. A database with 114 (28 high-risk) patients acquired from the Lung Image Database Consortium (LIDC) is used in this study. After nodule segmentation using K-means clustering, features based on shape and texture attributes are extracted. Seven different filter and wrapper-based feature selection techniques are used for dominant feature selection. Lastly, the classification of nodules is performed by a support vector machine using six different kernel functions. The classification results are evaluated using 10-fold cross-validation and hold-out data division protocols. The performance of the proposed system is evaluated using accuracy, sensitivity, specificity, and the area under receiver operating characteristics (AUC). Using 30 dominant features from the pool of shape and texture-based features, the proposed system achieves the highest classification accuracy and AUC of 89% and 0.92, respectively. The proposed ML-based system showed an improvement in risk stratification accuracy by fusing shape and texture-based features.
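
A compact sketch of the classification stage described above: an SVM evaluated with several kernels under 10-fold cross-validation. The random feature matrix stands in for the 30 selected shape and texture features.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(114, 30))      # placeholder for 30 shape+texture features
y = rng.integers(0, 2, size=114)    # 0 = low-risk, 1 = high-risk nodule

for kernel in ("linear", "poly", "rbf", "sigmoid"):
    acc = cross_val_score(SVC(kernel=kernel), X, y, cv=10, scoring="accuracy")
    print(f"{kernel:8s} mean accuracy: {acc.mean():.3f}")
```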

A Hybrid Approach for 3D Lung Segmentation in CT Images Using Active Contour and Morphological Operation

  • Sahu, Satya Praksh
  • Kamble, Bhawna
2020 Book Section, cited 0 times
Website
Lung segmentation is the initial step for the detection and diagnosis of lung-related abnormalities and disease. In a CAD system for lung cancer, this step traces the boundary of the pulmonary region from the thorax in CT images. It reduces the overhead of subsequent steps in the CAD system by narrowing the search space for ROIs. The major issue and challenging task for segmentation is the inclusion of juxtapleural nodules in the segmented lungs. The chapter attempts 3D lung segmentation of CT images using active contour and morphological operations. The major steps in the proposed approach are: preprocessing through various techniques; Otsu's thresholding for binarizing the image; morphological operations for eliminating undesired regions; and, finally, active contour for the segmentation of the lungs in 3D. For the experiment, 10 subjects were taken from the public LIDC-IDRI dataset. The proposed method achieved a Jaccard similarity index of 0.979, a Dice similarity coefficient of 0.989, and a volume overlap error of 0.073 when compared to ground truth.
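
A rough 2D sketch of the pipeline's main steps with scikit-image (Otsu thresholding, morphological clean-up, then an active contour seeded by the mask). The random slice is a placeholder, the chapter works on 3D volumes, and the morphological Chan-Vese contour here is one common active-contour variant, not necessarily the exact one used.

```python
import numpy as np
from skimage import filters, morphology, segmentation

ct_slice = np.random.rand(256, 256)  # placeholder for a preprocessed CT slice

# 1) Binarize with Otsu's threshold (lung parenchyma is dark on CT).
binary = ct_slice < filters.threshold_otsu(ct_slice)

# 2) Morphological operations to eliminate undesired regions.
binary = morphology.remove_small_objects(binary, min_size=100)
binary = morphology.binary_closing(binary, morphology.disk(5))

# 3) Refine the lung boundary with a morphological active contour
#    initialized from the cleaned mask (skimage >= 0.19 API).
refined = segmentation.morphological_chan_vese(
    ct_slice, num_iter=50, init_level_set=binary)
```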

Lung Cancer Nodule Detection by Using Selective Search Feature Extraction and Segmentation Approach of Deep Neural Network

  • Sahoo, Satyasangram
  • Borugadda, Prem Kumar
  • Lakhsmi, R.
International Transaction Journal of Engineering, Management, & Applied Sciences & Technologies 2022 Journal Article, cited 0 times
Website
The study addresses the implementation of selective search for the classification of cancer nodules in the lungs. The search process integrates the power of both segmentation and exhaustive search for detecting an object in an image. In addition, the features of the cancer stage classifier are used for cluster organization from the histogram to capture the difference in inter-class variance. The selective search makes use of class variance to trace out meta-similarities. A neural network is then implemented for the cancer stage classification.

Brain Tumour Segmentation on MRI Images by Voxel Classification Using Neural Networks, and Patient Survival Prediction

  • Sahayam, Subin
  • Krishna, Nanda H.
  • Jayaraman, Umarani
2020 Book Section, cited 0 times
In this paper, an algorithm for the segmentation of brain tumours and the prediction of patient survival in days is proposed. The delineation of brain tumours from magnetic resonance imaging (MRI) by experts is a time-consuming process and is susceptible to human error. Recently, most methods in the literature have used convolutional neural network architectures, their variants, and ensembles of several models to achieve state-of-the-art results. In this paper, we study a neural network architecture to classify voxels in 3D MRI brain images into their respective segment classes. The study focuses on class imbalance among tumour regions and on pre-processing. The method has been trained and tested on the BraTS2019 dataset. The average Dice scores for the segmentation task on the validation set are 0.47, 0.43, and 0.23 for enhancing, whole, and core tumour regions, respectively. For the second task, linear regression has been used to predict the survival of a patient in days. It achieved an accuracy of 0.465 on the online evaluation engine for the training dataset.

A multilevel self‐attention based segmentation and classification technique using Directional Hexagonal Mixed Pattern algorithm for lung nodule detection in thoracic CT image

  • Sahaya Jeniba, J.
  • Milton, A.
International Journal of Imaging Systems and Technology 2022 Journal Article, cited 0 times
Website
Pulmonary nodules are abnormal growths of tissue that originate in one or both lungs. They are small, round masses of soft tissue in the lung area. Pulmonary nodules are often indications of lung tumors, but they may be benign. When identified and treated early, the patient's life expectancy increases. The anatomy of the lung is highly interconnected, which makes it difficult to diagnose pulmonary nodules with diverse clinical imaging practices. A network model is presented in this paper for accurate classification of pulmonary nodules from computed tomography scan images. The lung images are subjected to semantic segmentation using Attention U-Net to isolate the pulmonary nodules. The proposed Directional Hexagonal Mixed Pattern is applied to generate a new texture pattern. Then, the nodules are classified by combining the proposed multilevel network model with a self-attention network. This paper also demonstrates an experimental arrangement using tenfold cross-validation without a segmentation mask, in which nodules marked as less than 3 mm by radiologists are discarded; this yields an improved result. The experimental results show that, with and without segmentation masks, the proposed classifier scores accuracies of 90.48% and 91.83%, respectively. In addition, it achieves an area under the curve of 98.08%.

Brain Tumour Segmentation with a Muti-Pathway ResNet Based UNet

  • Saha, Aheli
  • Zhang, Yu-Dong
  • Satapathy, Suresh Chandra
Journal of Grid Computing 2021 Journal Article, cited 0 times
Website
Automatic segmentation of brain tumour regions is essential in today’s scenario for proper diagnosis and treatment of the disease. Gliomas can appear in any region and can be of any shape and size, which makes automatic detection challenging. However, now, with the availability of high-quality MRI scans, various strides have been made in this field. In this paper, we propose a novel multi-pathway UNet incorporated with residual networks and skip connections to segment multimodal Magnetic Resonance images into three hierarchical glioma sub-regions. The multi-pathway serves as a medium to decompose the multiclass segmentation problem into subsequent binary segmentation tasks, where each pathway is responsible for segmenting one class from the background. Instead of a cascaded architecture for the hierarchical regions, we propose a shared encoder, followed by separate decoders for each category. Residual connections employed in the model facilitate increasing the performance. Experiments have been carried out on BraTS 2020 dataset and have achieved promising results.

DEMARCATE: Density-based Magnetic Resonance Image Clustering for Assessing Tumor Heterogeneity in Cancer

  • Saha, Abhijoy
  • Banerjee, Sayantan
  • Kurtek, Sebastian
  • Narang, Shivali
  • Lee, Joonsang
  • Rao, Ganesh
  • Martinez, Juan
  • Bharath, Karthik
  • Rao, Arvind UK
  • Baladandayuthapani, Veerabhadran
NeuroImage: Clinical 2016 Journal Article, cited 4 times
Website
Tumor heterogeneity is a crucial area of cancer research wherein inter- and intra-tumor differences are investigated to assess and monitor disease development and progression, especially in cancer. The proliferation of imaging and linked genomic data has enabled us to evaluate tumor heterogeneity on multiple levels. In this work, we examine magnetic resonance imaging (MRI) in patients with brain cancer to assess image-based tumor heterogeneity. Standard approaches to this problem use scalar summary measures (e.g., intensity-based histogram statistics) that do not adequately capture the complete and finer scale information in the voxel-level data. In this paper, we introduce a novel technique, DEMARCATE (DEnsity-based MAgnetic Resonance image Clustering for Assessing Tumor hEterogeneity) to explore the entire tumor heterogeneity density profiles (THDPs) obtained from the full tumor voxel space. THDPs are smoothed representations of the probability density function of the tumor images. We develop tools for analyzing such objects under the Fisher-Rao Riemannian framework that allows us to construct metrics for THDP comparisons across patients, which can be used in conjunction with standard clustering approaches. Our analyses of The Cancer Genome Atlas (TCGA) based Glioblastoma dataset reveal two significant clusters of patients with marked differences in tumor morphology, genomic characteristics and prognostic clinical outcomes. In addition, we see enrichment of image-based clusters with known molecular subtypes of glioblastoma multiforme, which further validates our representation of tumor heterogeneity and subsequent clustering techniques.
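
Under the square-root representation commonly used with the Fisher-Rao Riemannian framework (ψ = √p), the geodesic distance between two density profiles p and q has a closed form as an arc length on the unit Hilbert sphere; this is the kind of metric that enables THDP comparisons across patients before clustering.

```latex
d_{FR}(p, q) = \cos^{-1}\!\left( \int \sqrt{p(x)\, q(x)}\, dx \right)
```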

EMSViT: Efficient Multi Scale Vision Transformer for Biomedical Image Segmentation

  • Sagar, Abhinav
2022 Book Section, cited 0 times
In this paper, we propose a novel network named Efficient Multi Scale Vision Transformer for Biomedical Image Segmentation (EMSViT). Our network splits the input feature maps into three parts with 1×1, 3×3 and 5×5 convolutions in both encoder and decoder. Concat operator is used to merge the features before being fed to three consecutive transformer blocks with attention mechanism embedded inside it. Skip connections are used to connect encoder and decoder transformer blocks. Similarly, transformer blocks and multi scale architecture is used in decoder before being linearly projected to produce the output segmentation map. We test the performance of our network using Synapse multi-organ segmentation dataset, Automated cardiac diagnosis challenge dataset, Brain tumour MRI segmentation dataset and Spleen CT segmentation dataset. Without bells and whistles, our network outperforms most of the previous state of the art CNN and transformer based models using Dice score and the Hausdorff distance as the evaluation metrics.

RMU-Net: A Novel Residual Mobile U-Net Model for Brain Tumor Segmentation from MR Images

  • Saeed, M. U.
  • Ali, G.
  • Bin, W.
  • Almotiri, S. H.
  • AlGhamdi, M. A.
  • Nagra, A. A.
  • Masood, K.
  • ul Amin, R.
2021 Journal Article, cited 0 times
Website
The most aggressive form of brain tumor is the glioma, which, when high grade, leads to a short life expectancy. Early detection of glioma is important for saving patients' lives. MRI is a commonly used approach for brain tumor evaluation. However, the massive amount of data produced by MRI prevents manual segmentation in a reasonable time, restricting the use of accurate quantitative measurements in clinical practice. An automatic and reliable method that can segment tumors accurately is therefore required. To achieve end-to-end brain tumor segmentation, a hybrid deep learning model, RMU-Net, is proposed. The architecture of MobileNetV2 is modified by adding residual blocks to learn in-depth features. This modified MobileNetV2 is used as the encoder in the proposed network, and upsampling layers of U-Net are used as the decoder. The proposed model has been validated on the BraTS 2020, BraTS 2019, and BraTS 2018 datasets, achieving dice coefficient scores for WT, TC, and ET of 91.35%, 88.13%, and 83.26% on BraTS 2020; 91.76%, 91.23%, and 83.19% on BraTS 2019; and 90.80%, 86.75%, and 79.36% on BraTS 2018, respectively. The proposed method outperforms previous methods at lower computational cost and time.

Denoising on Low-Dose CT Image Using Deep CNN

  • Sadamatsu, Yuta
  • Murakami, Seiichi
  • Li, Guangxu
  • Kamiya, Tohru
2022 Conference Paper, cited 0 times
Computed Tomography (CT) scans are widely used in Japan, and they contribute to public health. On the other hand, there is also a risk of radiation exposure. To address this problem, attempts are being made to reduce the radiation dose during imaging. However, reducing the radiation dose introduces noise and degrades image quality. In this paper, we propose an image analysis method that efficiently removes noise by changing the activation function of a Deep Convolutional Neural Network (Deep CNN). Experimental tests using whole-body slice CT images of pigs and phantom lung CT images with Poisson noise, compared against normal-dose CT images and evaluated using the peak signal-to-noise ratio (PSNR), show that the proposed method is effective.

Identifying overall survival in 98 glioblastomas using VASARI features at 3T

  • Sacli-Bilmez, B.
  • Firat, Z.
  • Topcuoglu, O. M.
  • Yaltirik, K.
  • Ture, U.
  • Ozturk-Isik, E.
2023 Journal Article, cited 0 times
Website
PURPOSE: This study aims to evaluate qualitative and quantitative imaging metrics along with clinical features affecting overall survival in glioblastomas and to classify them into high survival and low survival groups based on 12-, 19-, and 24-month thresholds using machine learning. METHODS: The cohort consisted of 98 adult glioblastomas. A standard brain tumor magnetic resonance (MR) imaging protocol was performed on a 3T MR scanner. Visually Accessible REMBRANDT Images (VASARI) features were assessed. A Kaplan-Meier survival analysis followed by a log-rank test and multivariate Cox regression analysis were used to investigate the effects of VASARI features along with age, gender, the extent of resection, pre- and post-KPS, Ki-67 and P53 mutation status on overall survival. Supervised machine learning algorithms were employed to predict the survival of glioblastoma patients based on 12-, 19-, and 24-month thresholds. RESULTS: Tumor location (p<0.001), the proportion of non-enhancing component (p=0.0482), and the proportion of necrosis (p=0.02) were significantly associated with overall survival based on Kaplan-Meier analysis. Multivariate Cox regression analysis revealed that increases in the proportion of non-enhancing component (p=0.040) and the proportion of necrosis (p=0.039) were significantly associated with overall survival. Machine-learning models were successful in differentiating patients living longer than 12 months with 96.40% accuracy (sensitivity=97.22%, specificity=95.55%). The classification accuracies based on the 19- and 24-month survival thresholds were 70.87% (sensitivity=83.02%, specificity=60.11%) and 74.66% (sensitivity=67.58%, specificity=82.08%), respectively. CONCLUSION: Employing clinical and VASARI features together resulted in a successful classification of glioblastomas that would have a longer overall survival.

Automated delineation of non‐small cell lung cancer: A step toward quantitative reasoning in medical decision science

  • Saad, Maliazurina
  • Lee, Ik Hyun
  • Choi, Tae‐Sun
International Journal of Imaging Systems and Technology 2019 Journal Article, cited 0 times
Website
Quantitative reasoning in medical decision science relies on the delineation of pathological objects. For example, evidence‐based clinical decisions regarding lung diseases require the segmentation of nodules, tumors, or cancers. Non‐small cell lung cancer (NSCLC) tends to be large sized, irregularly shaped, and grows against surrounding structures imposing challenges in the segmentation, even for expert clinicians. An automated delineation tool based on spatial analysis was developed and studied on 25 sets of computed tomography scans of NSCLC. Manual and automated delineations were compared, and the proposed method exhibited robustness in terms of the tumor size (5.32–18.24 mm), shape (spherical or irregular), contouring (lobulated, spiculated, or cavitated), localization (solitary, pleural, mediastinal, endobronchial, or tagging), and laterality (left or right lobe) with accuracy between 80% and 99%. Small discrepancies observed between the manual and automated delineations may arise from the variability in the practitioners' definitions of region of interest or imaging artifacts that reduced the tissue resolution.

Are shape morphologies associated with survival? A potential shape-based biomarker predicting survival in lung cancer

  • Saad, Maliazurina
  • Lee, Ik Hyun
  • Choi, Tae-Sun
J Cancer Res Clin Oncol 2019 Journal Article, cited 0 times
Website
PURPOSE: Imaging biomarkers (IBMs) are increasingly investigated as prognostic indicators. IBMs might be capable of assisting treatment selection by providing useful insights into tumor-specific factors in a non-invasive manner. METHODS: We investigated six three-dimensional shape-based IBMs: eccentricities between (I) intermediate-major axis (Eimaj), (II) intermediate-minor axis (Eimin), (III) major-minor axis (Emj-mn) and volumetric index of (I) sphericity (VioS), (II) flattening (VioF), (III) elongating (VioE). Additionally, we investigated previously established two-dimensional shape IBMs: eccentricity (E), index of sphericity (IoS), and minor-to-major axis length (Mn_Mj). IBMs were compared in terms of their predictive performance for 5-year overall survival in two independent cohorts of patients with lung cancer. Cohort 1 received surgical excision, while cohort 2 received radiation therapy alone or chemo-radiation therapy. Univariate and multivariate survival analyses were performed. Correlations with clinical parameters were evaluated using analysis of variance. IBM reproducibility was assessed using concordance correlation coefficients (CCCs). RESULTS: E was associated with reduced survival in cohort 1 (hazard ratio [HR]: 0.664). Eimin and VioF were associated with reduced survival in cohort 2 (HR 1.477 and 1.701). VioS was associated with reduced survival in cohorts 1 and 2 (HR 1.758 and 1.472). Spherical tumors correlated with shorter survival durations than did irregular tumors (median survival difference: 1.21 and 0.35 years in cohorts 1 and 2, respectively). VioS was a significant predictor of survival in multivariate analyses of both cohorts. All IBMs showed good reproducibility (CCCs ranged between 0.86 and 0.98). CONCLUSIONS: In both investigated cohorts, VioS successfully linked shape morphology to patient survival.
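
For intuition, a sphericity-type index can be computed from a binary tumour mask as below; this uses the classical sphericity formula as a stand-in, since the paper's exact VioS definition is not reproduced here (function name and voxel spacing are illustrative):

```python
# Illustrative computation of a sphericity-type shape biomarker, using the
# classical sphericity psi = pi^(1/3) * (6V)^(2/3) / A as a stand-in.
import numpy as np
from skimage import measure

def sphericity(mask, spacing=(1.0, 1.0, 1.0)):
    """mask: 3D boolean tumour segmentation; spacing: voxel size in mm."""
    volume = mask.sum() * np.prod(spacing)               # V in mm^3
    verts, faces, _, _ = measure.marching_cubes(mask.astype(np.uint8), 0.5,
                                                spacing=spacing)
    area = measure.mesh_surface_area(verts, faces)       # A in mm^2
    return np.pi ** (1 / 3) * (6 * volume) ** (2 / 3) / area

# A perfect ball approaches 1; irregular, spiculated tumours fall below it.
```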

Computer-assisted subtyping and prognosis for non-small cell lung cancer patients with unresectable tumor

  • Saad, Maliazurina
  • Choi, Tae-Sun
Computerized Medical Imaging and Graphics 2018 Journal Article, cited 0 times
Website
BACKGROUND: The histological classification or subtyping of non-small cell lung cancer is essential for systematic therapy decisions. Differentiating between the two main subtypes of pulmonary adenocarcinoma and squamous cell carcinoma highlights the considerable differences that exist in the prognosis of patient outcomes. Physicians rely on a pathological analysis to reveal these phenotypic variations that requires invasive methods, such as biopsy and resection sample, but almost 70% of tumors are unresectable at the point of diagnosis. METHOD: A computational method that fuses two frameworks of computerized subtyping and prognosis was proposed, and it was validated against a publicly available dataset in The Cancer Imaging Archive that consisted of 82 curated patients with CT scans. The accuracy of the proposed method was compared with the gold standard of pathological analysis, as defined by the International Classification of Disease for Oncology (ICD-O). A series of survival outcome test cases were evaluated using the Kaplan-Meier estimator and log-rank test (p-value) between the computational method and ICD-O. RESULTS: The computational method demonstrated high accuracy in subtyping (96.2%) and good consistency in the statistical significance of overall survival prediction for adenocarcinoma and squamous cell carcinoma patients (p<0.03) with respect to its counterpart pathological subtyping (p<0.02). The degree of reproducibility between prognoses taken on computational and pathological subtyping was substantial, with an averaged concordance correlation coefficient (CCC) of 0.9910. CONCLUSION: The findings in this study support the idea that quantitative analysis is capable of representing tissue characteristics, as offered by a qualitative analysis.

Deciphering unclassified tumors of non-small-cell lung cancer through radiomics

  • Saad, Maliazurina
  • Choi, Tae-Sun
2017 Journal Article, cited 8 times
Website

Automatic Removal of Mechanical Fixations from CT Imagery with Particle Swarm Optimisation

  • Ryalat, Mohammad Hashem
  • Laycock, Stephen
  • Fisher, Mark
2017 Conference Proceedings, cited 0 times
Website
Fixation devices are used in radiotherapy treatment of head and neck cancers to ensure successive treatment fractions are accurately targeted. Typical fixations usually take the form of a custom made mask that is clamped to the treatment couch and these are evident in many CT data sets as radiotherapy treatment is normally planned with the mask in place. But the fixations can make planning more difficult for certain tumor sites and are often unwanted by third parties wishing to reuse the data. Manually editing the CT images to remove the fixations is time consuming and error prone. This paper presents a fast and automatic approach that removes artifacts due to fixations in CT images without affecting pixel values representing tissue. The algorithm uses particle swarm optimisation to speed up the execution time and presents results from five CT data sets that show it achieves an average specificity of 92.01% and sensitivity of 99.39%.
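
The optimisation step can be illustrated with a generic particle swarm minimiser; this is a textbook PSO sketch, not the authors' implementation (bounds, weights, and the toy objective are assumptions):

```python
# Minimal particle swarm optimisation sketch (the paper applies PSO to speed
# up its mask-removal search; this generic version just minimises f).
import numpy as np

def pso(f, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))        # positions
    v = np.zeros_like(x)                              # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

print(pso(lambda p: float(np.sum(p ** 2)), dim=3))    # converges toward the origin
```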

Impact of Spherical Coordinates Transformation Pre-processing in Deep Convolution Neural Networks for Brain Tumor Segmentation and Survival Prediction

  • Russo, Carlo
  • Liu, Sidong
  • Di Ieva, Antonio
2021 Book Section, cited 0 times
Pre-processing and Data Augmentation play an important role in Deep Convolutional Neural Networks (DCNN). Whereby several methods aim for standardization and augmentation of the dataset, we here propose a novel method aimed to feed DCNN with spherical space transformed input data that could better facilitate feature learning compared to standard Cartesian space images and volumes. In this work, the spherical coordinates transformation has been applied as a preprocessing method that, used in conjunction with normal MRI volumes, improves the accuracy of brain tumor segmentation and patient overall survival (OS) prediction on Brain Tumor Segmentation (BraTS) Challenge 2020 dataset. The LesionEncoder framework has been then applied to automatically extract features from DCNN models, achieving 0.586 accuracy of OS prediction on the validation data set, which is one of the best results according to BraTS 2020 leaderboard.

TCIApathfinder: an R client for The Cancer Imaging Archive REST API

  • Russell, Pamela
  • Fountain, Kelly
  • Wolverton, Dulcy
  • Ghosh, Debashis
Cancer research 2018 Journal Article, cited 1 times
Website
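
For readers outside R, the same REST API can be queried directly; the sketch below assumes TCIA's public v4 query endpoint and response format, which may have changed since publication:

```python
# Raw Python call against the TCIA REST API that TCIApathfinder wraps in R
# (endpoint path and JSON shape per TCIA's public v4 API; treat as an
# assumption if the service has since changed).
import requests

BASE = "https://services.cancerimagingarchive.net/services/v4/TCIA/query"

def get_collections():
    resp = requests.get(f"{BASE}/getCollectionValues", params={"format": "json"})
    resp.raise_for_status()
    return [row["Collection"] for row in resp.json()]

print(get_collections()[:5])
```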

Multi-Disease Segmentation of Gliomas and White Matter Hyperintensities in the BraTS Data Using a 3D Convolutional Neural Network

  • Rudie, Jeffrey D.
  • Weiss, David A.
  • Saluja, Rachit
  • Rauschecker, Andreas M.
  • Wang, Jiancong
  • Sugrue, Leo
  • Bakas, Spyridon
  • Colby, John B.
Frontiers in Computational Neuroscience 2019 Journal Article, cited 0 times
An important challenge in segmenting real-world biomedical imaging data is the presence of multiple disease processes within individual subjects. Most adults above age 60 exhibit a variable degree of small vessel ischemic disease, as well as chronic infarcts, which will manifest as white matter hyperintensities (WMH) on brain MRIs. Subjects diagnosed with gliomas will also typically exhibit some degree of abnormal T2 signal due to WMH, rather than just due to tumor. We sought to develop a fully automated algorithm to distinguish and quantify these distinct disease processes within individual subjects’ brain MRIs. To address this multi-disease problem, we trained a 3D U-Net to distinguish between abnormal signal arising from tumors vs. WMH in the 3D multi-parametric MRI (mpMRI, i.e., native T1-weighted, T1-post-contrast, T2, T2-FLAIR) scans of the International Brain Tumor Segmentation (BraTS) 2018 dataset (ntraining = 285, nvalidation = 66). Our trained neuroradiologist manually annotated WMH on the BraTS training subjects, finding that 69% of subjects had WMH. Our 3D U-Net model had a 4-channel 3D input patch (80 × 80 × 80) from mpMRI, four encoding and decoding layers, and an output of either four [background, active tumor (AT), necrotic core (NCR), peritumoral edematous/infiltrated tissue (ED)] or five classes (adding WMH as the fifth class). For both the four- and five-class output models, the median Dice for whole tumor (WT) extent (i.e., union of AT, ED, NCR) was 0.92 in both training and validation sets. Notably, the five-class model achieved significantly (p = 0.002) lower/better Hausdorff distances for WT extent in the training subjects. There was strong positive correlation between manually segmented and predicted volumes for WT (r = 0.96) and WMH (r = 0.89). Larger lesion volumes were positively correlated with higher/better Dice scores for WT (r = 0.33), WMH (r = 0.34), and across all lesions (r = 0.89) on a log(10) transformed scale. While the median Dice for WMH was 0.42 across training subjects with WMH, the median Dice was 0.62 for those with at least 5 cm3 of WMH. We anticipate the development of computational algorithms that are able to model multiple diseases within a single subject will be a critical step toward translating and integrating artificial intelligence systems into the heterogeneous real-world clinical workflow.

Body composition radiomic features as a predictor of survival in patients with non-small cellular lung carcinoma: A multicenter retrospective study

  • Rozynek, M.
  • Tabor, Z.
  • Klek, S.
  • Wojciechowski, W.
2024 Journal Article, cited 0 times
Website
OBJECTIVES: This study combined two novel approaches to oncology patient outcome prediction: body composition and radiomic feature analysis. The aim of this study was to validate whether automatically extracted muscle and adipose tissue radiomic features could be used as a predictor of survival in patients with non-small cell lung cancer. METHODS: The study included 178 patients with non-small cell lung cancer receiving concurrent platinum-based chemoradiotherapy. Abdominal imaging was conducted as a part of whole-body positron emission tomography/computed tomography performed before therapy. Methods used included automated assessment of the volume of interest using a densely connected convolutional network classification model (DenseNet121), automated muscle and adipose tissue segmentation using a U-net architecture implemented in the nnUnet framework, and radiomic feature extraction. The acquired body composition radiomic features and clinical data were used for overall and 1-y survival prediction using machine learning classification algorithms. RESULTS: The volume of interest detection model achieved the following metric scores: 0.98 accuracy, 0.89 precision, 0.96 recall, and 0.92 F1 score. Automated segmentation achieved a median dice coefficient >0.99 in all segmented regions. We extracted 330 body composition radiomic features for every patient. For overall survival prediction using clinical and radiomic data, the best-performing feature selection and prediction method achieved an area under the receiver operating characteristic curve (AUC-ROC) of 0.73 (P < 0.05); for 1-y survival prediction the AUC-ROC was 0.74 (P < 0.05). CONCLUSION: Automatically extracted muscle and adipose tissue radiomic features could be used as a predictor of survival in patients with non-small cell lung cancer.

Fully automated 3D body composition analysis and its association with overall survival in head and neck squamous cell carcinoma patients

  • Rozynek, Miłosz
  • Gut, Daniel
  • Kucybała, Iwona
  • Strzałkowska-Kominiak, Ewa
  • Tabor, Zbisław
  • Urbanik, Andrzej
  • Kłęk, Stanisław
  • Wojciechowski, Wadim
Frontiers in Oncology 2023 Journal Article, cited 0 times
Objectives: We developed a method for fully automated deep-learning segmentation of tissues to investigate whether 3D body composition measurements are significant for the survival of Head and Neck Squamous Cell Carcinoma (HNSCC) patients. Methods: 3D segmentation of tissues, including the spine, spine muscles, abdominal muscles, subcutaneous adipose tissue (SAT), visceral adipose tissue (VAT), and internal organs within the volumetric region limited by the L1 and L5 levels, was accomplished using a deep convolutional segmentation architecture, U-net, implemented in the nnUnet framework. It was trained on a separate dataset of 560 single-channel CT slices and used for 3D segmentation of pre-radiotherapy (pre-RT) and post-radiotherapy (post-RT) whole-body PET/CT or abdominal CT scans of 215 HNSCC patients. Tissue percentages were used for overall survival analysis with a Cox proportional hazards (PH) model. Results: Our deep learning model successfully segmented all mentioned tissues with a Dice coefficient exceeding 0.95. The 3D measurements (the difference between pre-RT and post-RT abdomen and spine muscle percentage, the difference between pre-RT and post-RT VAT percentage, and the sum of pre-RT abdomen and spine muscle percentages), together with BMI and cancer site, were selected and significant at the 5% level for overall survival. Aside from cancer site, the lowest hazard ratio (HR) value (HR, 0.7527; 95% CI, 0.6487-0.8735; p = 0.000183) was observed for the difference between pre-RT and post-RT abdomen and spine muscle percentage. Conclusion: Fully automated 3D quantitative measurements of body composition are significant for overall survival in Head and Neck Squamous Cell Carcinoma patients.
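
The survival modelling step can be reproduced with a standard Cox PH fit; a minimal sketch using the lifelines library (column names are hypothetical):

```python
# Minimal Cox proportional hazards fit of the kind described above,
# using the lifelines library; column names here are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("body_composition.csv")   # hypothetical file with one row
# per patient: tissue-percentage covariates plus follow-up time and event

cph = CoxPHFitter()
cph.fit(
    df[["muscle_pct_diff", "vat_pct_diff", "bmi", "os_days", "event"]],
    duration_col="os_days",   # follow-up time
    event_col="event",        # 1 = death observed, 0 = censored
)
cph.print_summary()           # hazard ratios, CIs, p-values per covariate
```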

Visual Interpretation with Three-Dimensional Annotations (VITA): Three-Dimensional Image Interpretation Tool for Radiological Reporting

  • Roy, Sharmili
  • Brown, Michael S
  • Shih, George L
2014 Journal Article, cited 5 times
Website
This paper introduces a software framework called Visual Interpretation with Three-Dimensional Annotations (VITA) that is able to automatically generate three-dimensional (3D) visual summaries based on radiological annotations made during routine exam reporting. VITA summaries are in the form of rotating 3D volumes where radiological annotations are highlighted to place important clinical observations into a 3D context. The rendered volume is produced as a Digital Imaging and Communications in Medicine (DICOM) object and is automatically added to the study for archival in Picture Archiving and Communication System (PACS). In addition, a video summary (e.g., MPEG4) can be generated for sharing with patients and for situations where DICOM viewers are not readily available to referring physicians. The current version of VITA is compatible with ClearCanvas; however, VITA can work with any PACS workstation that has a structured annotation implementation (e.g., Extendible Markup Language, Health Level 7, Annotation and Image Markup) and is able to seamlessly integrate into the existing reporting workflow. In a survey with referring physicians, the vast majority strongly agreed that 3D visual summaries improve the communication of the radiologists' reports and aid communication with patients.

Multi-plane UNet++ Ensemble for Glioblastoma Segmentation

  • Roth, Johannes
  • Keller, Johannes
  • Franke, Stefan
  • Neumuth, Thomas
  • Schneider, Daniel
2022 Conference Paper, cited 0 times
Website
Glioblastoma multiforme (grade four glioma, GBM) is the most aggressive malignant tumor in the brain and is usually treated by combined surgery, chemo- and radiotherapy. The O-6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status was shown to be predictive of GBM sensitivity to alkylating agent chemotherapy and is a promising marker for personalized treatment. In this paper we propose to use a multi-plane ensemble of UNet++ models for the segmentation of gliomas in MRI scans, using a combination of Dice loss and boundary loss for training. For the prediction of MGMT promoter methylation, we use an ensemble of 3D EfficientNet (one per MRI modality). Both the UNet++ ensemble and EfficientNet are trained and validated on data provided in the context of the Brain Tumor Segmentation Challenge (BraTS) 2021, containing 2,000 fully annotated glioma samples with four different MRI modalities. We achieve Dice scores of 0.792, 0.835, and 0.906 as well as Hausdorff distances of 16.61, 10.11, and 4.54 for enhancing tumor, tumor core and whole tumor, respectively. For MGMT promoter methylation status prediction, an AUROC of 0.577 is obtained.
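
The combined segmentation loss named above can be sketched as soft Dice plus a Kervadec-style boundary term; the weighting and the use of a precomputed signed distance map are assumptions:

```python
# Sketch of a Dice-plus-boundary training loss; the boundary term follows
# Kervadec et al.'s formulation, not necessarily the authors' exact variant.
import torch

def dice_loss(probs, target, eps=1e-6):
    inter = (probs * target).sum()
    return 1 - (2 * inter + eps) / (probs.sum() + target.sum() + eps)

def boundary_loss(probs, signed_dist):
    # Mean of predicted probabilities weighted by the signed distance to the
    # ground-truth boundary (negative inside the label, positive outside).
    return (probs * signed_dist).mean()

def combined_loss(probs, target, signed_dist, alpha=0.01):
    return dice_loss(probs, target) + alpha * boundary_loss(probs, signed_dist)
```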

Rapid artificial intelligence solutions in a pandemic—The COVID-19-20 Lung CT Lesion Segmentation Challenge

  • Roth, Holger R.
  • Xu, Ziyue
  • Tor-Díez, Carlos
  • Sanchez Jacob, Ramon
  • Zember, Jonathan
  • Molto, Jose
  • Li, Wenqi
  • Xu, Sheng
  • Turkbey, Baris
  • Turkbey, Evrim
  • Yang, Dong
  • Harouni, Ahmed
  • Rieke, Nicola
  • Hu, Shishuai
  • Isensee, Fabian
  • Tang, Claire
  • Yu, Qinji
  • Sölter, Jan
  • Zheng, Tong
  • Liauchuk, Vitali
  • Zhou, Ziqi
  • Moltz, Jan Hendrik
  • Oliveira, Bruno
  • Xia, Yong
  • Maier-Hein, Klaus H.
  • Li, Qikai
  • Husch, Andreas
  • Zhang, Luyang
  • Kovalev, Vassili
  • Kang, Li
  • Hering, Alessa
  • Vilaça, João L.
  • Flores, Mona
  • Xu, Daguang
  • Wood, Bradford
  • Linguraru, Marius George
Medical Image Analysis 2022 Journal Article, cited 18 times
Website
Artificial intelligence (AI) methods for the automatic detection and quantification of COVID-19 lesions in chest computed tomography (CT) might play an important role in the monitoring and management of the disease. We organized an international challenge and competition for the development and comparison of AI algorithms for this task, which we supported with public data and state-of-the-art benchmark methods. Board Certified Radiologists annotated 295 public images from two sources (A and B) for algorithms training (n=199, source A), validation (n=50, source A) and testing (n=23, source A; n=23, source B). There were 1,096 registered teams of which 225 and 98 completed the validation and testing phases, respectively. The challenge showed that AI models could be rapidly designed by diverse teams with the potential to measure disease or facilitate timely and patient-specific interventions. This paper provides an overview and the major outcomes of the COVID-19 Lung CT Lesion Segmentation Challenge — 2020.

A new 2.5D representation for lymph node detection using random sets of deep convolutional neural network observations

  • Roth, Holger R
  • Lu, Le
  • Seff, Ari
  • Cherry, Kevin M
  • Hoffman, Joanne
  • Wang, Shijun
  • Liu, Jiamin
  • Turkbey, Evrim
  • Summers, Ronald M
2014 Conference Proceedings, cited 192 times
Website
Automated Lymph Node (LN) detection is an important clinical diagnostic task but very challenging due to the low contrast of surrounding structures in Computed Tomography (CT) and to their varying sizes, poses, shapes and sparsely distributed locations. State-of-the-art studies show the performance range of 52.9% sensitivity at 3.1 false-positives per volume (FP/vol.), or 60.9% at 6.1 FP/vol. for mediastinal LN, by one-shot boosting on 3D HAAR features. In this paper, we first operate a preliminary candidate generation stage, towards ~100% sensitivity at the cost of high FP levels (~40 per patient), to harvest volumes of interest (VOI). Our 2.5D approach consequently decomposes any 3D VOI by resampling 2D reformatted orthogonal views N times, via scale, random translations, and rotations with respect to the VOI centroid coordinates. These random views are then used to train a deep Convolutional Neural Network (CNN) classifier. In testing, the CNN is employed to assign LN probabilities for all N random views that can be simply averaged (as a set) to compute the final classification probability per VOI. We validate the approach on two datasets: 90 CT volumes with 388 mediastinal LNs and 86 patients with 595 abdominal LNs. We achieve sensitivities of 70%/83% at 3 FP/vol. and 84%/90% at 6 FP/vol. in mediastinum and abdomen respectively, which drastically improves over the previous state-of-the-art work.
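
The set-averaging idea is simple to sketch: sample N randomly rotated and translated orthogonal views of each VOI, score each view with the 2D CNN, and average (jitter ranges and the classifier interface below are assumptions):

```python
# Sketch of the 2.5D idea: sample N randomly rotated/translated orthogonal
# views of a 3D VOI, classify each 2D view stack, and average the probabilities.
import numpy as np
from scipy.ndimage import rotate, shift

def random_views(voi, n_views=20, seed=0):
    rng = np.random.default_rng(seed)
    cz, cy, cx = (s // 2 for s in voi.shape)
    for _ in range(n_views):
        v = rotate(voi, angle=rng.uniform(0, 360), axes=(1, 2),
                   reshape=False, order=1)          # random in-plane rotation
        v = shift(v, rng.uniform(-2, 2, size=3), order=1)  # random translation
        # three orthogonal planes through the (jittered) centroid
        yield np.stack([v[cz, :, :], v[:, cy, :], v[:, :, cx]])

def voi_probability(voi, cnn_predict):
    """cnn_predict: any callable mapping a 2.5D view stack -> LN probability."""
    probs = [cnn_predict(view) for view in random_views(voi)]
    return float(np.mean(probs))          # set-level average, as described
```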

External clinical validation of prone and supine CT colonography registration

  • Roth, Holger R
  • Boone, Darren J
  • Halligan, Steve
  • Hampshire, Thomas E
  • McClelland, Jamie R
  • Hu, Mingxing
  • Punwani, Shonit
  • Taylor, Stuart
  • Hawkes, David J
2012 Book Section, cited 2 times
Website

Comparison of Supervised and Unsupervised Approaches for the Generation of Synthetic CT from Cone-Beam CT

  • Rossi, M.
  • Cerveri, P.
Diagnostics (Basel) 2021 Journal Article, cited 0 times
Website
Due to major artifacts and uncalibrated Hounsfield units (HU), cone-beam computed tomography (CBCT) cannot be used readily for diagnostics and therapy planning purposes. This study addresses image-to-image translation by convolutional neural networks (CNNs) to convert CBCT to CT-like scans, comparing supervised to unsupervised training techniques, exploiting a publicly available pelvic CT/CBCT dataset. Interestingly, quantitative results favored the supervised over the unsupervised approach, showing improvements in HU accuracy (62% vs. 50%), structural similarity index (2.5% vs. 1.1%) and peak signal-to-noise ratio (15% vs. 8%). Qualitative results conversely showcased higher anatomical artifacts in the synthetic CBCT generated by the supervised techniques. This was motivated by the higher sensitivity of the supervised training technique to the pixel-wise correspondence contained in the loss function. The unsupervised technique does not require correspondence and mitigates this drawback as it combines adversarial, cycle consistency, and identity loss functions. Overall, two main impacts qualify the paper: (a) the feasibility of CNNs to generate accurate synthetic CT from CBCT images, which is fast and easy to use compared to traditional techniques applied in clinics; (b) the proposal of guidelines to drive the selection of the better training technique, which can be shifted to more general image-to-image translation.
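
The unsupervised objective described above combines adversarial, cycle-consistency, and identity terms; a hedged PyTorch sketch with illustrative weights:

```python
# Sketch of the unsupervised (CycleGAN-style) objective described above:
# adversarial loss plus cycle-consistency and identity terms.
# Weights and the least-squares adversarial form are illustrative choices.
import torch
import torch.nn.functional as F

def unsupervised_loss(G_ct, G_cbct, D_ct, cbct, ct, lam_cyc=10.0, lam_id=5.0):
    fake_ct = G_ct(cbct)
    # adversarial: fool the CT discriminator (least-squares GAN form)
    score = D_ct(fake_ct)
    adv = F.mse_loss(score, torch.ones_like(score))
    # cycle consistency: CBCT -> CT -> CBCT should return the input
    cyc = F.l1_loss(G_cbct(fake_ct), cbct)
    # identity: a real CT fed to the CT generator should pass unchanged
    idt = F.l1_loss(G_ct(ct), ct)
    return adv + lam_cyc * cyc + lam_id * idt
```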

3D Automatic Brain Tumor Segmentation Using a Multiscale Input U-Net Network

  • Rosas González, S.
  • Birgui Sekou, T.
  • Hidane, M.
  • Tauber, C.
2020 Book Section, cited 0 times
Quantitative analysis of brain tumors is crucial for surgery planning, follow-up and subsequent radiation treatment of glioma. Finding an automatic and reproducible solution may save time for physicians and contribute to improving the overall poor prognosis of glioma patients. In this paper, we present our current BraTS contribution on developing an accurate and robust tumor segmentation algorithm. Our network architecture implements a multiscale input module designed to maximize the extraction of features associated with the multiple image modalities before they are merged in a modified U-Net network, avoiding the loss of specific information provided by each modality and improving brain tumor segmentation performance. Our method's current performance on the BraTS 2019 test set is dice scores of 0.775 ± 0.212, 0.865 ± 0.133 and 0.789 ± 0.266 for enhancing tumor, whole tumor and tumor core, respectively, with an overall dice of 0.81.

Malignant nodule detection on lung CT scan images with kernel RX-algorithm

  • Roozgard, A.
  • Cheng, S.
  • Hong, Liu
2012 Conference Proceedings, cited 24 times
Website
In this paper, we present a nonlinear anomaly detector called kernel RX-algorithm and apply it to CT images for malignant nodule detection. Malignant nodule detection is very similar to anomaly detection in military imaging applications where the RX-algorithm has been successfully applied. We modified the original RX-algorithm so that it can be applied to anomaly detection in CT images. Moreover, using kernel trick, we mapped the data to a high dimensional space to obtain a kernelized RX-algorithm that outperforms the original RX-algorithm. The preliminary results of applying the kernel RX-algorithm on annotated public access databases suggests that the proposed method may provide a means for early detection of the malignant nodules.
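
The linear RX score is the Mahalanobis distance of each feature vector from the background statistics (the kernelised variant replaces inner products with kernel evaluations); a minimal sketch of the linear form:

```python
# The (global) RX score is the Mahalanobis distance of each voxel/feature
# vector from the background statistics; a kernelised variant replaces the
# inner products with kernel evaluations. Sketch of the linear form only.
import numpy as np

def rx_scores(X):
    """X: (n_samples, n_features) feature vectors from a CT volume."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    cov_inv = np.linalg.pinv(cov)               # pinv guards against singularity
    d = X - mu
    return np.einsum("ij,jk,ik->i", d, cov_inv, d)   # (x-mu)^T S^-1 (x-mu)

# Voxels with the largest scores are flagged as anomalies (nodule candidates).
```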

3D-SCoBeP: 3D medical image registration using sparse coding and belief propagation

  • Roozgard, Aminmohammad
  • Barzigar, Nafise
  • Verma, Pramode
  • Cheng, Samuel
International Journal of Diagnostic Imaging 2014 Journal Article, cited 4 times
Website

3D medical image denoising using 3D block matching and low-rank matrix completion

  • Roozgard, Aminmohammad
  • Barzigar, Nafise
  • Verma, Pramode
  • Cheng, Samuel
2013 Conference Proceedings, cited 0 times
Website
3D denoising is one of the most significant tools in medical imaging and has been widely studied in the literature. However, most existing 3D medical data denoising algorithms assume additive white Gaussian noise. In this work, we propose an efficient 3D medical data denoising method that can handle a mixture of noise types. Our method is based on a modified 2D Adaptive Rood Pattern Search (ARPS) [1] and low-rank matrix completion, as follows. Noisy 3D data are processed in a blockwise manner: for each processed 3D block we find similar 3D blocks in the data, using overlapped 3D patches to further lower the computational complexity. The 3D blocks are then stacked together and unreliable voxels are replaced using a fast matrix completion method [2]. Experimental results show that the proposed method robustly removes mixed noise from 3D medical data.

Computer Simulation of Low-dose CT with Clinical Lung Image Database: a preliminary study

  • Rong, Junyan
  • Gao, Peng
  • Liu, Wenlei
  • Zhang, Yuanke
  • Liu, Tianshuai
  • Lu, Hongbing
2017 Conference Proceedings, cited 1 times
Website

Transcriptomic and connectomic correlates of differential spatial patterning among gliomas

  • Romero-Garcia, Rafael
  • Mandal, Ayan S
  • Bethlehem, Richard A I
  • Crespo-Facorro, Benedicto
  • Hart, Michael G
  • Suckling, John
BRAIN 2022 Journal Article, cited 0 times
Website
Unravelling the complex events driving grade-specific spatial distribution of brain tumour occurrence requires rich datasets from both healthy individuals and patients. Here, we combined open-access data from The Cancer Genome Atlas, the UK Biobank and the Allen Brain Human Atlas to disentangle how the different spatial occurrences of glioblastoma multiforme and low-grade gliomas are linked to brain network features and the normative transcriptional profiles of brain regions. From MRI of brain tumour patients, we first constructed a grade-related frequency map of the regional occurrence of low-grade gliomas and the more aggressive glioblastoma multiforme. Using associated mRNA transcription data, we derived a set of differential gene expressions from glioblastoma multiforme and low-grade gliomas tissues of the same patients. By combining the resulting values with normative gene expressions from post-mortem brain tissue, we constructed a grade-related expression map indicating which brain regions express genes dysregulated in aggressive gliomas. Additionally, we derived an expression map of genes previously associated with tumour subtypes in a genome-wide association study (tumour-related genes). There were significant associations between grade-related frequency, grade-related expression and tumour-related expression maps, as well as functional brain network features (specifically, nodal strength and participation coefficient) that are implicated in neurological and psychiatric disorders. These findings identify brain network dynamics and transcriptomic signatures as key factors in regional vulnerability for glioblastoma multiforme and low-grade glioma occurrence, placing primary brain tumours within a well established framework of neurological and psychiatric cortical alterations.

Towards a Whole Body [18F] FDG Positron Emission Tomography Attenuation Correction Map Synthesizing using Deep Neural Networks

  • Rodríguez Colmeiro, Ramiro Germán
  • Verrastro, Claudio
  • Minsky, Daniel
  • Grosges, Thomas
Journal of Computer Science and Technology 2021 Journal Article, cited 0 times
The correction of attenuation effects in Positron Emission Tomography (PET) imaging is fundamental to obtain a correct radiotracer distribution. However, direct measurement of this attenuation map is not error-free and normally results in additional ionizing radiation dose to the patient. Here, we explore the task of whole-body attenuation map generation using 3D deep neural networks. We analyze the advantages that adversarial network training can provide to such models. The networks are trained to learn the mapping from non-attenuation-corrected [18F]-fluorodeoxyglucose PET images to a synthetic Computerized Tomography (sCT) and also to label the input voxel tissue. Then the sCT image is further refined using an adversarial training scheme to recover higher frequency details and lost structures using context information. This work is trained and tested on publicly available datasets, containing several PET images from different scanners with different radiotracer administration and reconstruction modalities. The network is trained with 108 samples and validated on 10 samples. The sCT generation was tested on 133 samples from 8 distinct datasets. The resulting mean absolute errors of the networks are 90±20 HU and 103±18 HU, with peak signal-to-noise ratios of 19.3±1.7 dB and 18.6±1.5 dB, for the base model and the adversarial model respectively. The attenuation correction is tested by means of attenuation sinograms, obtaining a line-of-response attenuation mean error lower than 1% with a standard deviation lower than 8%. The proposed deep learning topologies are capable of generating whole-body attenuation maps from uncorrected PET image data. Moreover, the accuracy of both methods holds in the presence of data from multiple sources and modalities when trained on publicly available datasets. Finally, while the adversarial layer enhances the visual appearance of the produced samples, the 3D U-Net achieves higher metric performance.

Value of handcrafted and deep radiomic features towards training robust machine learning classifiers for prediction of prostate cancer disease aggressiveness

  • Rodrigues, A.
  • Rodrigues, N.
  • Santinha, J.
  • Lisitskaya, M. V.
  • Uysal, A.
  • Matos, C.
  • Domingues, I.
  • Papanikolaou, N.
2023 Journal Article, cited 0 times
Website
There is a growing body of evidence that artificial intelligence may be helpful across the entire prostate cancer disease continuum. However, building machine learning algorithms robust to inter- and intra-radiologist segmentation variability is still a challenge. With this goal in mind, several model training approaches were compared: removing unstable features according to the intraclass correlation coefficient (ICC); training independently with features extracted from each radiologist's mask; training with the feature average between both radiologists; extracting radiomic features from the intersection or union of masks; and creating a heterogeneous dataset by randomly selecting one of the radiologists' masks for each patient. The classifier trained with this last resampled dataset presented the lowest generalization error, suggesting that training with heterogeneous data leads to the development of the most robust classifiers. On the contrary, removing features with low ICC resulted in the highest generalization error. The selected radiomics dataset, with the randomly chosen radiologists, was concatenated with deep features extracted from neural networks trained to segment the whole prostate. This new hybrid dataset was then used to train a classifier. The results revealed that, even though the hybrid classifier was less overfitted than the one trained with deep features, it still was unable to outperform the radiomics model.
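
The stability screening referred to above rests on ICC(1,1), which can be computed per feature from a one-way ANOVA decomposition; a small sketch (the 0.9 cut-off is one common choice, not necessarily the authors'):

```python
# Feature-stability screening of the kind described: a one-way random-effects
# ICC(1,1) computed per radiomic feature across the two radiologists' masks.
import numpy as np

def icc_1_1(ratings):
    """ratings: (n_subjects, k_raters) values of one feature."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    msb = k * np.sum((row_means - grand) ** 2) / (n - 1)               # between targets
    msw = np.sum((ratings - row_means[:, None]) ** 2) / (n * (k - 1))  # within targets
    return (msb - msw) / (msb + (k - 1) * msw)

# e.g. drop a feature when icc_1_1(values) < 0.9 before model training
```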

Segmentation of candidates for pulmonary nodules based on computed tomography

  • Rocha, Maura G. R. da
  • Saraiva, Willyams M.
  • Drumond, Patrícia M. L de L.
  • Carvalho Filho, Antonio O. de
  • de Sousa, Alcilene D.
2016 Conference Paper, cited 0 times
Website
This work presents a methodology for automatic segmentation of solitary pulmonary nodule candidates using cellular automata. Early detection of solitary pulmonary nodules that may become cancerous is essential for patient survival. To assist experts in identifying these nodules, computer-aided systems are being developed to automate detection and classification. The segmentation stage plays a key role in automatic detection of lung nodules, as it separates the image into regions that share the same property or characteristic. The methodology includes image acquisition, noise elimination, pulmonary parenchyma segmentation, and segmentation of solitary pulmonary nodule candidates. Tests were conducted on a set of images from the LIDC-IDRI database containing 739 nodules. The results show a sensitivity of 95.66%.

Classification of Acute Lymphoblastic Leukemia based on White Blood Cell Images using InceptionV3 Model

  • Rizki Firdaus, Mulya
  • Ema, Utami
  • Dhani, Ariatmanto
2023 Journal Article, cited 0 times
Acute lymphoblastic leukemia (ALL) is the most common form of leukemia that occurs in children. Detection of ALL through white blood cell image analysis can help with the prognosis and appropriate treatment. In this study, the author proposes an approach to classifying ALL based on white blood cell images using a convolutional neural network (CNN) model called InceptionV3. The dataset used in this research consists of white blood cell images collected from patients with ALL and healthy individuals. These images were obtained from The Cancer Imaging Archive (TCIA), which is a service that stores large-scale cancer medical images available to the public. During the evaluation phase, the author used training data evaluation metrics such as accuracy and loss to measure the model's performance. The research results show that the InceptionV3 model is capable of classifying white blood cell images with a high level of accuracy. This model achieves an average ALL recognition accuracy of 0.9896 with a loss of 0.031. The use of CNN models such as InceptionV3 in medical image analysis has the potential to improve the efficiency and precision of image-based disease diagnosis.
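
A standard InceptionV3 transfer-learning setup of the kind described can be assembled in a few lines of Keras; hyperparameters below are illustrative, not the authors' exact configuration:

```python
# Illustrative InceptionV3 transfer-learning head for binary ALL-vs-healthy
# classification; layer choices and hyperparameters are assumptions.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False                      # freeze the pretrained backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # ALL vs. healthy
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_split=0.1, epochs=10)
```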

Multi-modal U-Nets with Boundary Loss and Pre-training for Brain Tumor Segmentation

  • Ribalta Lorenzo, Pablo
  • Marcinkiewicz, Michal
  • Nalepa, Jakub
2020 Book Section, cited 0 times
Gliomas are the most common primary brain tumors, and their manual segmentation is a time-consuming and user-dependent process. We present a two-step multi-modal U-Net-based architecture with unsupervised pre-training and surface loss component for brain tumor segmentation which allows us to seamlessly benefit from all magnetic resonance modalities during the delineation. The results of the experimental study, performed over the newest release of the BraTS test set, revealed that our method delivers accurate brain tumor segmentation, with the average DICE score of 0.72, 0.86, and 0.77 for the enhancing tumor, whole tumor, and tumor core, respectively. The total time required to process one study using our approach amounts to around 20 s.

Detection of Lung Nodules on Medical Images by the Use of Fractal Segmentation

  • Rezaie, Afsaneh
  • Habiboghli, Ali
International Journal of Interactive Multimedia and Artificial Intelligence 2017 Journal Article, cited 0 times
Website

Within-Modality Synthesis and Novel Radiomic Evaluation of Brain MRI Scans

  • Rezaeijo, S. M.
  • Chegeni, N.
  • Baghaei Naeini, F.
  • Makris, D.
  • Bakas, S.
Cancers (Basel) 2023 Journal Article, cited 0 times
Website
One of the most common challenges in brain MRI scans is the need to perform different MRI sequences depending on the type and properties of tissues. In this paper, we propose a generative method to translate a T2-Weighted (T2W) Magnetic Resonance Imaging (MRI) volume from T2-weighted Fluid-Attenuated Inversion Recovery (FLAIR) and vice versa using Generative Adversarial Networks (GANs). To evaluate the proposed method, we propose a novel evaluation schema for generative and synthetic approaches based on radiomic features. For evaluation purposes, we consider 510 paired slices from 102 patients to train two different GAN-based architectures, CycleGAN and the Dual Cycle-Consistent Adversarial network (DC(2)Anet). The results indicate that generative methods can produce results similar to the original sequence without significant change in the radiomic features. Therefore, such a method can assist clinics in making decisions based on the generated image when different sequences are not available or there is not enough time to re-perform the MRI scans.

Conditional Generative Adversarial Refinement Networks for Unbalanced Medical Image Semantic Segmentation

  • Rezaei, Mina
  • Yang, Haojin
  • Harmuth, Konstantin
  • Meinel, Christoph
2019 Conference Proceedings, cited 0 times
Website

Multi-fractal detrended texture feature for brain tumor classification

  • Reza, Syed MS
  • Mays, Randall
  • Iftekharuddin, Khan M
2015 Journal Article, cited 5 times
Website
We propose a novel non-invasive brain tumor type classification using Multi-fractal Detrended Fluctuation Analysis (MFDFA) [1] in structural magnetic resonance (MR) images. This preliminary work investigates the efficacy of the MFDFA features, along with our novel texture feature known as multi-fractional Brownian motion (mBm) [2], in classifying (grading) brain tumors as High Grade (HG) and Low Grade (LG). Based on prior performance, Random Forest (RF) [3] is employed for tumor grading using two different datasets, BRATS-2013 [4] and BRATS-2014 [5]. Quantitative scores such as precision, recall and accuracy are obtained using the confusion matrix. On average, 90% precision and 85% recall from the inter-dataset cross-validation confirm the efficacy of the proposed method.

Thoracic Spine Segmentation Based on CT Images

  • Révy, Gábor
  • Hadházi, Dániel
  • Hullám, Gábor
2023 Conference Paper, cited 0 times
Website
Automatic vertebrae localization and segmentation in computed tomography (CT) are fundamental for computer-aided detection (CADe) and computer-aided diagnosis (CADx) systems. However, they remain challenging due to the high variation in spinal anatomy among patients. In this paper, we propose a simple, model-free approach for automatic CT vertebrae localization and segmentation. The segmentation pipeline consists of 3 stages. In the first stage the center line of the spinal cord is estimated using convolution. In the second stage a baseline segmentation of the spine is created using morphological reconstruction and other classical image processing algorithms. Finally, the baseline spine segmentation is refined by limiting its boundaries using simple heuristics based on expert knowledge. We evaluated our method on the COVID-19 subdataset of the CTSpine1K dataset. Our solution achieved a dice coefficient of 0.8160±0.0432 (mean±std) and an intersection over union of 0.6914±0.0618 for spine segmentation. The experimental results have demonstrated the feasibility of the proposed method in a real environment.

Automatic lung segmentation in CT scans using guided filtering

  • Revy, Gabor
  • Hadhazi, Daniel
  • Hullam, Gabor
2022 Conference Paper, cited 0 times
The segmentation of the lungs in chest CT scans is a crucial step in computer-aided diagnosis. Current algorithms designed to solve this problem usually utilize a model of some form. To build a sufficiently robust model, a very large amount of diverse data is required, which is not always available. In this work, we propose a novel model-free algorithm for lung segmentation. Our segmentation pipeline consists of expert algorithms, some of which are improved versions of previously known methods, and a novel application of the guided filter method. Our system achieves an IoU (intersection over union) value of 0.9236 ± 0.0290 (mean±std) and a DSC (Dice similarity coefficient) of 0.9601 ± 0.0158 on the LCTSC dataset. These results indicate, that our segmentation pipeline can be a viable solution in certain applications.
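
The two reported metrics follow directly from the binary masks:

```python
# The two reported overlap metrics, computed from binary masks.
import numpy as np

def iou_and_dice(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    iou = inter / union                             # intersection over union
    dice = 2 * inter / (pred.sum() + truth.sum())   # Dice similarity coefficient
    return iou, dice
```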

GPU-accelerated lung CT segmentation based on level sets and texture analysis

  • Reska, D.
  • Kretowski, M.
2024 Journal Article, cited 0 times
Website
This paper presents a novel semi-automatic method for lung segmentation in thoracic CT datasets. The fully three-dimensional algorithm is based on a level set representation of an active surface and integrates texture features to improve its robustness. The method's performance is enhanced by graphics processing unit (GPU) acceleration. The segmentation process starts with a manual initialisation of 2D contours on a few representative slices of the analysed volume. Next, the starting regions for the active surface are generated according to the probability maps of texture features. The active surface is then evolved to give the final segmentation result. The recent implementation employs features based on grey-level co-occurrence matrices and Gabor filters. The algorithm was evaluated on real medical imaging data from the LCTSC 2017 challenge. The results were also compared with the outcomes of other segmentation methods. The proposed approach provided high segmentation accuracy while offering very competitive performance.

Optimizing deep belief network parameters using grasshopper algorithm for liver disease classification

  • Renukadevi, Thangavel
  • Karunakaran, Saminathan
International Journal of Imaging Systems and Technology 2019 Journal Article, cited 0 times
Image processing plays a vital role in many areas such as healthcare, military, scientific and business applications due to its wide variety of advantages. Detecting liver disease from computed tomography (CT) is one of the difficult tasks in the medical field. Previous approaches classified liver disease using hand-crafted features and conventional classifiers, but their classification results are not optimal. In this article, we propose a novel method utilizing a deep belief network (DBN) with the grasshopper optimization algorithm (GOA) for liver disease classification. Initially, the image quality is enhanced by preprocessing techniques and then features like texture, color and shape are extracted. The extracted features are reduced by utilizing a dimensionality reduction method, principal component analysis (PCA). Here, the DBN parameters are optimized using GOA for recognizing liver disease. The experiments are performed on real-time and open-source CT image datasets, which comprise normal, cyst, hepatoma, cavernous hemangioma, fatty liver, metastasis, cirrhosis, and tumor samples. The proposed method yields 98% accuracy, 95.82% sensitivity, 97.52% specificity, 98.53% precision, and a 96.8% F-1 score in the simulation process when compared with other existing techniques.

New Perspectives for Estimating Body Composition From Computed Tomography: Clothing Associated Artifacts

  • Rentz, L. E.
  • Malone, B. M.
  • Vettiyil, B.
  • Sillaste, E. A.
  • Mizener, A. D.
  • Clayton, S. A.
  • Pistilli, E. E.
Acad Radiol 2024 Journal Article, cited 0 times
Website
Retrospective analysis of computed tomography (CT) imaging has been widely utilized in clinical populations as the “gold standard” method for quantifying body composition and tissue volumes (1). Thousands of published studies across the last 30 years suggest a concerningly high heterogeneity for statistical associations involving skeletal muscle and adiposity across patient populations that represent all types of cancer, COPD, and recently COVID-19 (2-8). Like most clinical datasets, the extensive presence of confounds, inconsistencies, and missing data tend to complicate post hoc imaging analyses (9). In addition to obvious data artifact, ample threats to study validity can be well concealed by lengthy patient charts, co-occurring factors, and methodological limitations. In the absence of a highly controlled environment, we neglect to consider the multiplicity of factors that can influence naturally occurring data, and thus, the real-world utility of findings (9,10). Most importantly, we often fail to rehumanize collections of datapoints to understand patterns, compound clinical effects, and limitations experienced by both the clinical team and patient that maximize the value of post hoc conclusions.

A manifold learning regularization approach to enhance 3D CT image-based lung nodule classification

  • Ren, Y.
  • Tsai, M. Y.
  • Chen, L.
  • Wang, J.
  • Li, S.
  • Liu, Y.
  • Jia, X.
  • Shen, C.
Int J Comput Assist Radiol Surg 2020 Journal Article, cited 2 times
Website
PURPOSE: Diagnosis of lung cancer requires radiologists to review every lung nodule in CT images. Such a process can be very time-consuming, and the accuracy is affected by many factors, such as the experience of radiologists and available diagnosis time. To address this problem, we proposed to develop a deep learning-based system to automatically classify benign and malignant lung nodules. METHODS: The proposed method automatically determines benignity or malignancy given the 3D CT image patch of a lung nodule to assist the diagnosis process. Motivated by the fact that real structure among data is often embedded on a low-dimensional manifold, we developed a novel manifold regularized classification deep neural network (MRC-DNN) to perform classification directly based on the manifold representation of lung nodule images. The concise manifold representation revealing important data structure is expected to benefit the classification, while the manifold regularization enforces strong, but natural constraints on network training, preventing over-fitting. RESULTS: The proposed method achieves accurate manifold learning with a reconstruction error of ~30 HU on real lung nodule CT image data. In addition, the classification accuracy on testing data is 0.90 with a sensitivity of 0.81 and a specificity of 0.95, which outperforms state-of-the-art deep learning methods. CONCLUSION: The proposed MRC-DNN facilitates an accurate manifold learning approach for lung nodule classification based on 3D CT images. More importantly, MRC-DNN suggests a new and effective idea of enforcing regularization for network training, with potential impact on a broad range of applications.

Overall Survival Prediction Using Conventional MRI Features

  • Ren, Yanhao
  • Sun, Pin
  • Lu, Wenlian
2020 Book Section, cited 0 times
Gliomas are common primary brain malignancies. The sub-regions of gliomas are depicted by MRI scans, reflecting varying biological properties. These properties affect neurosurgeons' decisions on whether, and what kind of, resection should be done. The number of survival days after gross total resection is also of great concern. In this paper, we propose a semi-automatic method for segmentation and extract features from slices of MRI scans, including conventional MRI features and clinical features. Thirteen features per subject are ultimately selected, and a support vector regression is fitted to the training data.

Improved False Positive Reduction by Novel Morphological Features for Computer-Aided Polyp Detection in CT Colonography

  • Ren, Yacheng
  • Ma, Jingchen
  • Xiong, Junfeng
  • Chen, Yi
  • Lu, Lin
  • Zhao, Jun
IEEE Journal of Biomedical and Health Informatics 2018 Journal Article, cited 3 times
Website

An unsupervised semi-automated pulmonary nodule segmentation method based on enhanced region growing

  • Ren, He
  • Zhou, Lingxiao
  • Liu, Gang
  • Peng, Xueqing
  • Shi, Weiya
  • Xu, Huilin
  • Shan, Fei
  • Liu, Lei
Quantitative Imaging in Medicine and Surgery 2020 Journal Article, cited 0 times
Website

SBRT of ventricular tachycardia using 4pi optimized trajectories

  • Reis, C.
  • Little, B.
  • Lee MacDonald, R.
  • Syme, A.
  • Thomas, C. G.
  • Robar, J. L.
J Appl Clin Med Phys 2021 Journal Article, cited 0 times
Website
PURPOSE: To investigate the possible advantages of using 4pi-optimized arc trajectories in stereotactic body radiation therapy of ventricular tachycardia (VT-SBRT) to minimize exposure of healthy tissues. METHODS AND MATERIALS: Thorax computed tomography (CT) data for 15 patients were used for contouring organs at risk (OARs) and defining realistic planning target volumes (PTVs). A conventional trajectory plan, defined as two full coplanar arcs was compared to an optimized-trajectory plan provided by a 4pi algorithm that penalizes geometric overlap of PTV and OARs in the beam's-eye-view. A single fraction of 25 Gy was prescribed to the PTV in both plans and a comparison of dose sparing to OARs was performed based on comparisons of maximum, mean, and median dose. RESULTS: A significant average reduction in maximum dose was observed for esophagus (18%), spinal cord (26%), and trachea (22%) when using 4pi-optimized trajectories. Mean doses were also found to decrease for esophagus (19%), spinal cord (33%), skin (18%), liver (59%), lungs (19%), trachea (43%), aorta (11%), inferior vena cava (25%), superior vena cava (33%), and pulmonary trunk (26%). A median dose reduction was observed for esophagus (40%), spinal cord (48%), skin (36%), liver (72%), lungs (41%), stomach (45%), trachea (53%), aorta (45%), superior vena cava (38%), pulmonary veins (32%), and pulmonary trunk (39%). No significant difference was observed for maximum dose (p = 0.650) and homogeneity index (p = 0.156) for the PTV. Average values of conformity number were 0.86 ± 0.05 and 0.77 ± 0.09 for the conventional and 4pi optimized plans respectively. CONCLUSIONS: 4pi optimized trajectories provided significant reduction to mean and median doses to cardiac structures close to the target but did not decrease maximum dose. Significant improvement in maximum, mean and median doses for noncardiac OARs makes 4pi optimized trajectories a suitable delivery technique for treating VT.

BU-Net: Brain Tumor Segmentation Using Modified U-Net Architecture

  • Rehman, Mobeen Ur
  • Cho, SeungBin
  • Kim, Jee Hong
  • Chong, Kil To
2020 Journal Article, cited 0 times
Website
The semantic segmentation of a brain tumor is of paramount importance for its treatment and prevention. Recently, researchers have proposed various neural-network-based architectures to improve the segmentation of brain tumor sub-regions. Brain tumor segmentation remains a challenging area of research that requires further performance improvement. This paper proposes a 2D image segmentation method, BU-Net, to contribute to brain tumor segmentation research. Residual extended skip (RES) and wide context (WC) blocks are used along with a customized loss function in the baseline U-Net architecture. The modifications contribute by finding more diverse features through an increased valid receptive field. The contextual information is extracted with the aggregated features to obtain better segmentation performance. The proposed BU-Net was evaluated on the high-grade glioma (HGG) datasets of the BraTS 2017 Challenge and the test datasets of the BraTS 2017 and 2018 Challenges. The three major labels to be segmented were tumor core (TC), whole tumor (WT), and enhancing core (EC). The dice score was utilized for quantitative comparison of performance. The proposed BU-Net outperformed the existing state-of-the-art techniques. The high-performing BU-Net can make a substantial contribution to researchers in the fields of bioinformatics and medicine.

BrainSeg-Net: Brain Tumor MR Image Segmentation via Enhanced Encoder-Decoder Network

  • Rehman, M. U.
  • Cho, S.
  • Kim, J.
  • Chong, K. T.
Diagnostics (Basel) 2021 Journal Article, cited 0 times
Website
Efficient segmentation of Magnetic Resonance (MR) brain tumor images is of the utmost value for the diagnosis of tumor regions. In recent years, advances in neural networks have been used to refine the segmentation of brain tumor sub-regions. Brain tumor segmentation has proven to be a complicated task even for neural networks, due to small-scale tumor regions that are hard to identify because of their tiny size and the large imbalance in the area occupied by different tumor classes. In previous state-of-the-art neural network models, the biggest problem was that location information and spatial details get lost in deeper layers. To address these problems, we have proposed an encoder-decoder-based model named BrainSeg-Net. A Feature Enhancer (FE) block is incorporated into the BrainSeg-Net architecture, which extracts middle-level features from the low-level features of the shallow layers and shares them with the dense layers. This feature aggregation helps to achieve better tumor identification. To address the class imbalance problem, we have used a custom-designed loss function. For the evaluation of the BrainSeg-Net architecture, three benchmark datasets are utilized: BraTS 2017, BraTS 2018, and BraTS 2019. Segmentation of Enhancing Core (EC), Whole Tumor (WT), and Tumor Core (TC) is carried out. The proposed architecture exhibits a clear improvement over existing baseline and state-of-the-art techniques. By exploiting enhanced location and spatial features, BrainSeg-Net performs better than the existing plethora of brain MR image segmentation approaches.

A Deep Learning-Based Approach for Mammographic Architectural Distortion Classification

  • Rehman, Khalil ur
  • Li, Jianqiang
  • Pei, Yan
  • Yasin, Anaa
  • Ali, Saqib
2022 Conference Paper, cited 0 times
Website
Breast cancer is the most deadly cancer in females globally. Architectural distortion (AD) is the third most often reported irregularity on digital mammograms, after masses and microcalcifications. Visually identifying architectural distortion is difficult for radiologists because of its subtle appearance in dense breasts. Automatic early identification of breast cancer from mammograms using computer algorithms may help doctors eliminate unnecessary biopsies. This research presents a novel diagnostic method to identify AD regions of interest (ROIs) in mammograms using a computer-vision-based depth-wise CNN. The proposed methodology was evaluated on 2885 private PINUM images and 3568 public DDSM images and achieved sensitivities of 0.99 and 0.95, respectively. The experimental findings revealed that the proposed scheme outperformed SVM, KNN, and previous studies.

CT Reconstruction Kernels and the Effect of Pre- and Post-Processing on the Reproducibility of Handcrafted Radiomic Features

  • Refaee, T.
  • Salahuddin, Z.
  • Widaatalla, Y.
  • Primakov, S.
  • Woodruff, H. C.
  • Hustinx, R.
  • Mottaghy, F. M.
  • Ibrahim, A.
  • Lambin, P.
J Pers Med 2022 Journal Article, cited 0 times
Website
Handcrafted radiomics features (HRFs) are quantitative features extracted from medical images to decode biological information and improve clinical decision making. Despite the potential of the field, limitations have been identified, the most important of which is currently the sensitivity of HRFs to variations in image acquisition and reconstruction parameters. In this study, we investigated the use of Reconstruction Kernel Normalization (RKN) and ComBat harmonization to improve the reproducibility of HRFs across scans acquired with different reconstruction kernels. A set of phantom scans (n = 28) acquired on five different scanner models was analyzed. HRFs were extracted from the original scans, and scans were harmonized using the RKN method. ComBat harmonization was applied on both sets of HRFs. The reproducibility of HRFs was assessed using the concordance correlation coefficient. The difference in the number of reproducible HRFs in each scenario was assessed using McNemar's test. The majority of HRFs were found to be sensitive to variations in the reconstruction kernels, and only six HRFs were found to be robust with respect to variations in reconstruction kernels. The use of RKN resulted in a significant increment in the number of reproducible HRFs in 19 out of the 67 investigated scenarios (28.4%), while the ComBat technique resulted in a significant increment in 36 (53.7%) scenarios. The combination of methods resulted in a significant increment in 53 (79.1%) scenarios compared to the HRFs extracted from original images. Since the benefit of applying the harmonization methods depended on the data being harmonized, reproducibility analysis is recommended before performing radiomics analysis. For future radiomics studies incorporating images acquired with similar image acquisition and reconstruction parameters, except for the reconstruction kernels, we recommend the systematic use of the pre- and post-processing approaches (respectively, RKN and ComBat).
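
As a rough, self-contained illustration of the reproducibility criterion used above, the following Python sketch computes Lin's concordance correlation coefficient between two measurements of the same feature; the function name and toy data are invented for illustration and are not taken from the paper.

```python
import numpy as np

def concordance_correlation_coefficient(x, y):
    """Lin's concordance correlation coefficient between two measurements
    of the same feature (e.g., extracted with two reconstruction kernels)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    cov = ((x - mx) * (y - my)).mean()   # population covariance
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Example: one radiomic feature measured on the same phantoms with two kernels.
kernel_a = np.array([1.02, 0.98, 1.10, 0.95, 1.01])
kernel_b = np.array([1.05, 0.97, 1.12, 0.93, 1.00])
print(concordance_correlation_coefficient(kernel_a, kernel_b))
```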

Automated image quality assessment for chest CT scans

  • Reeves, A. P.
  • Xie, Y.
  • Liu, S.
Med Phys 2018 Journal Article, cited 0 times
Website
PURPOSE: Medical image quality needs to be maintained at standards sufficient for effective clinical reading. Automated computer analytic methods may be applied to medical images for quality assessment. METHODS: For chest CT scans in a lung cancer screening context, an automated quality assessment method is presented that characterizes image noise and image intensity calibration. This is achieved by image measurements in three automatically segmented homogeneous regions of the scan: external air, trachea lumen air, and descending aorta blood. Profiles of CT scanner behavior are also computed. RESULTS: The method has been evaluated on both phantom and real low-dose chest CT scans and results show that repeatable noise and calibration measures may be realized by automated computer algorithms. Noise and calibration profiles show relevant differences between different scanners and protocols. CONCLUSIONS: Automated image quality assessment may be useful for quality control for lung cancer screening and may enable performance improvements to automated computer analysis methods.

Automated pulmonary nodule CT image characterization in lung cancer screening

  • Reeves, Anthony P
  • Xie, Yiting
  • Jirapatnakul, Artit
International Journal of Computer Assisted Radiology and Surgery 2016 Journal Article, cited 19 times
Website

Analyzing MRI scans to detect glioblastoma tumor using hybrid deep belief networks

  • Reddy, Annapareddy V. N.
  • Krishna, Ch Phani
  • Mallick, Pradeep Kumar
  • Satapathy, Sandeep Kumar
  • Tiwari, Prayag
  • Zymbler, Mikhail
  • Kumar, Sachin
Journal of Big Data 2020 Journal Article, cited 0 times
Website
Glioblastoma (GBM) is a stage 4 malignant tumor in which a large portion of tumor cells are reproducing and dividing at any moment. These tumors are life threatening and may result in partial or complete mental and physical disability. In this study, we have proposed a classification model using hybrid deep belief networks (DBN) to classify magnetic resonance imaging (MRI) for GBM tumors. A DBN is composed of stacked restricted Boltzmann machines (RBM). DBNs often require a large number of hidden layers consisting of large numbers of neurons to learn the best features from the raw image data; hence, computational and space complexity is high and a lot of training time is required. The proposed approach combines DTW with DBN to improve the efficiency of the existing DBN model. The results are validated using several statistical parameters. Statistical validation verifies that the combination of DTW and DBN outperformed the other classifiers in terms of training time, space complexity, and classification accuracy.
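
For readers unfamiliar with the building block mentioned here, the sketch below stacks two restricted Boltzmann machines ahead of a simple classifier in scikit-learn, which approximates the greedy layer-wise construction of a DBN. The data, layer sizes, and hyperparameters are invented for illustration, and the paper's DTW hybridization is not reproduced.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy stand-in for flattened, [0, 1]-scaled MRI patches (no real data here).
rng = np.random.default_rng(0)
X = rng.random((200, 64))          # 200 patches, 8x8 pixels each
y = rng.integers(0, 2, size=200)   # 0 = normal, 1 = GBM (illustrative labels)

# Greedy layer-wise stack of RBMs followed by a simple classifier head.
dbn_like = Pipeline([
    ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
dbn_like.fit(X, y)
print(dbn_like.score(X, y))
```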

Deeply supervised U‐Net for mass segmentation in digital mammograms

  • Ravitha Rajalakshmi, N.
  • Vidhyapriya, R.
  • Elango, N.
  • Ramesh, Nikhil
International Journal of Imaging Systems and Technology 2020 Journal Article, cited 0 times
Website
Mass detection is a critical process in the examination of mammograms. The shape and texture of the mass are key parameters used in the diagnosis of breast cancer. To recover the shape of the mass, semantic segmentation is more useful than mere object detection or localization. The main challenges involved in mass segmentation include: (a) low signal-to-noise ratio, (b) indiscernible mass boundaries, and (c) many false positives. These problems arise due to the significant overlap in the intensities of the normal parenchymal region and the mass region. To address these challenges, a deeply supervised U-Net model (DS U-Net) coupled with dense conditional random fields (CRFs) is proposed. Here, the input images are preprocessed using CLAHE, and a modified encoder-decoder-based deep learning model is used for segmentation. In general, the encoder captures the contextual information of various regions in an input image, whereas the decoder recovers the spatial location of the desired region of interest. Encoder-decoder-based models lack the ability to recover non-conspicuous and spiculated mass boundaries. In the proposed work, deep supervision is integrated with a popular encoder-decoder model (U-Net) to improve the attention of the network toward the boundary of the suspicious regions. The final segmentation map is created as a linear combination of the intermediate feature maps and the output feature map. A dense CRF is then used to fine-tune the segmentation map for the recovery of definite edges. The DS U-Net with dense CRF is evaluated on two publicly available benchmark datasets, CBIS-DDSM and INBREAST. It provides a Dice score of 82.9% for CBIS-DDSM and 79% for INBREAST.
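
To make the deep-supervision idea concrete, here is a minimal PyTorch sketch of a loss that combines upsampled intermediate (side) segmentation maps with the final map. The weighting scheme, shapes, and function name are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def deep_supervision_loss(side_outputs, final_output, target, side_weight=0.4):
    """Combine losses on intermediate (side) segmentation maps with the loss
    on the final map, as in deeply supervised encoder-decoder models."""
    loss = F.binary_cross_entropy_with_logits(final_output, target)
    for side in side_outputs:
        # Upsample each intermediate map to the target resolution first.
        side_up = F.interpolate(side, size=target.shape[-2:], mode="bilinear",
                                align_corners=False)
        loss = loss + side_weight * F.binary_cross_entropy_with_logits(side_up, target)
    return loss

# Shapes: batch of 2, one channel (mass vs background), 256x256 mammogram patches.
target = torch.randint(0, 2, (2, 1, 256, 256)).float()
final = torch.randn(2, 1, 256, 256)
sides = [torch.randn(2, 1, 64, 64), torch.randn(2, 1, 128, 128)]
print(deep_supervision_loss(sides, final, target))
```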

Accelerating Machine Learning with Training Data Management

  • Ratner, Alexander Jason
2019 Thesis, cited 1 times
Website
One of the biggest bottlenecks in developing machine learning applications today is the need for large hand-labeled training datasets. Even at the world's most sophisticated technology companies, and especially at other organizations across science, medicine, industry, and government, the time and monetary cost of labeling and managing large training datasets is often the blocking factor in using machine learning. In this thesis, we describe work on training data management systems that enable users to programmatically build and manage training datasets, rather than labeling and managing them by hand, and present algorithms and supporting theory for automatically modeling this noisier process of training set specification in order to improve the resulting training set quality. We then describe extensive empirical results and real-world deployments demonstrating that programmatically building, managing, and modeling training sets in this way can lead to radically faster, more flexible, and more accessible ways of developing machine learning applications. We start by describing data programming, a paradigm for labeling training datasets programmatically rather than by hand, and Snorkel, an open source training data management system built around data programming that has been used by major technology companies, academic labs, and government agencies to build machine learning applications in days or weeks rather than months or years. In Snorkel, rather than hand-labeling training data, users write programmatic operators called labeling functions, which label data using various heuristic or weak supervision strategies such as pattern matching, distant supervision, and other models. These labeling functions can have noisy, conflicting, and correlated outputs, which Snorkel models and combines into clean training labels without requiring any ground truth using theoretically consistent modeling approaches we develop. We then report on extensive empirical validations, user studies, and real-world applications of Snorkel in industrial, scientific, medical, and other use cases ranging from knowledge base construction from text data to medical monitoring over image and video data. Next, we will describe two other approaches for enabling users to programmatically build and manage training datasets, both currently integrated into the Snorkel open source framework: Snorkel MeTaL, an extension of data programming and Snorkel to the setting where users have multiple related classification tasks, in particular focusing on multi-task learning; and TANDA, a system for optimizing and managing strategies for data augmentation, a critical training dataset management technique wherein a labeled dataset is artificially expanded by transforming data points. Finally, we will conclude by outlining future research directions for further accelerating and democratizing machine learning workflows, such as higher-level programmatic interfaces and massively multi-task frameworks.

Imaging Signature of 1p/19q Co-deletion Status Derived via Machine Learning in Lower Grade Glioma

  • Rathore, Saima
  • Chaddad, Ahmad
  • Bukhari, Nadeem Haider
  • Niazi, Tamim
2020 Book Section, cited 0 times
Website
We present a new approach to quantify the co-deletion of chromosomal arms 1p/19q in lower grade glioma (LGG). Although surgical biopsy followed by a fluorescence in-situ hybridization test is currently the gold standard to identify mutational status for diagnosis and treatment planning, several imaging studies have attempted to predict the same. Our study aims to determine the 1p/19q mutational status of LGG non-invasively by advanced pattern analysis using multi-parametric MRI. The publicly available dataset at TCIA was used. T1-W and T2-W MRIs of a total of 159 patients with grade-II and grade-III glioma, who had biopsy-proven 1p/19q status consisting of either no deletion (n = 57) or co-deletion (n = 102), were used in our study. We quantified the imaging profile of these tumors by extracting diverse imaging features, including the tumor's spatial distribution pattern and volumetric, texture, and intensity distribution measures. We integrated these diverse features via support vector machines to construct an imaging signature of 1p/19q, which was evaluated in independent discovery (n = 85) and validation (n = 74) cohorts and compared with the 1p/19q status obtained through the fluorescence in-situ hybridization test. The classification accuracy on the complete, discovery, and replication cohorts was 86.16%, 88.24%, and 85.14%, respectively. When the model developed on the discovery cohort was applied to the unseen replication set, the classification accuracy was 82.43%. Non-invasive prediction of 1p/19q status from MRIs would allow improved treatment planning for LGG patients without the need for surgical biopsies and would also help in potentially monitoring the dynamic mutation changes during the course of treatment.

Multivariate Analysis of Preoperative Magnetic Resonance Imaging Reveals Transcriptomic Classification of de novo Glioblastoma Patients

  • Rathore, Saima
  • Akbari, Hamed
  • Bakas, Spyridon
  • Pisapia, Jared M
  • Shukla, Gaurav
  • Rudie, Jeffrey D
  • Da, Xiao
  • Davuluri, Ramana V
  • Dahmane, Nadia
  • O'Rourke, Donald M
Frontiers in Computational Neuroscience 2019 Journal Article, cited 0 times

Magnetic resonance spectroscopy as an early indicator of response to anti-angiogenic therapy in patients with recurrent glioblastoma: RTOG 0625/ACRIN 6677

  • Ratai, E. M.
  • Zhang, Z.
  • Snyder, B. S.
  • Boxerman, J. L.
  • Safriel, Y.
  • McKinstry, R. C.
  • Bokstein, F.
  • Gilbert, M. R.
  • Sorensen, A. G.
  • Barboriak, D. P.
2013 Journal Article, cited 0 times
Website
Background. The prognosis for patients with recurrent glioblastoma remains poor. The purpose of this study was to assess the potential role of MR spectroscopy as an early indicator of response to anti-angiogenic therapy. Methods. Thirteen patients with recurrent glioblastoma were enrolled in RTOG 0625/ACRIN 6677, a prospective multicenter trial in which bevacizumab was used in combination with either temozolomide or irinotecan. Patients were scanned prior to treatment and at specific timepoints during the treatment regimen. Postcontrast T1-weighted MRI was used to assess 6-month progression-free survival. Spectra from the enhancing tumor and peritumoral regions were defined on the postcontrast T1-weighted images. Changes in the concentration ratios of N-acetylaspartate/creatine (NAA/Cr), choline-containing compounds (Cho)/Cr, and NAA/Cho were quantified in comparison with pretreatment values. Results. NAA/Cho levels increased and Cho/Cr levels decreased within enhancing tumor at 2 weeks relative to pretreatment levels (P = .048 and P = .016, respectively), suggesting a possible antitumor effect of bevacizumab with cytotoxic chemotherapy. Nine of the 13 patients were alive and progression free at 6 months. Analysis of receiver operating characteristic curves for NAA/Cho changes in tumor at 8 weeks revealed higher levels in patients progression free at 6 months (area under the curve = 0.85), suggesting that NAA/Cho is associated with treatment response. Similar results were observed for receiver operating characteristic curve analyses against 1-year survival. In addition, decreased Cho/Cr and increased NAA/Cr and NAA/Cho in tumor periphery at 16 weeks posttreatment were associated with both 6-month progression-free survival and 1-year survival. Conclusion. Changes in NAA and Cho by MR spectroscopy may potentially be useful as imaging biomarkers in assessing response to anti-angiogenic treatment.

LeuFeatx: Deep learning-based feature extractor for the diagnosis of acute leukemia from microscopic images of peripheral blood smear

  • Rastogi, P.
  • Khanna, K.
  • Singh, V.
Comput Biol Med 2022 Journal Article, cited 2 times
Website
The abnormal growth of leukocytes causes hematologic malignancies such as leukemia. The clinical assessment methods for the diagnosis of the disease are labor-intensive and time-consuming. Image-based automated diagnostic systems can be of great help in the decision-making process for leukemia detection. A feature-dependent, intrinsic, reliable classifier is a critical component in building such a diagnostic system. However, the identification of vital and relevant features is a challenging task in the classification workflow. The proposed work presents a novel two-step methodology for the robust classification of leukocytes for leukemia diagnosis by building a VGG16-adapted fine-tuned feature-extractor model, termed as "LeuFeatx," which plays a critical role in the accurate classification of leukocytes. LeuFeatx was found to be capable of extracting notable leukocyte features using microscopic single-cell leukocyte images. The filters and learned features are visualized and compared with base VGG16 model features. Independent classification experiments using three public benchmark leukocyte datasets were conducted to assess the effectiveness of extracted features with the proposed LeuFeatx model. Multiclass classifiers trained using LeuFeatx deep features achieved higher precision and sensitivity for seven leukocyte subtypes compared to the latest research on the AML Morphological dataset, and it achieved higher sensitivity for all cell types vis-a-vis recent work on peripheral blood cells dataset from the Hospital Clinic of Barcelona. In a binary classification experiment using the ALL_IDB2 dataset, classifiers trained using LeuFeatx deep features achieved an accuracy of 96.15%, which is better than the other state-of-the-art methods reported in the literature. Thus, the higher performance of the classifiers across observed comparison metrics establishes the relevance of the extracted features and the overall robustness of the proposed model.

Comparison of iterative parametric and indirect deep learning-based reconstruction methods in highly undersampled DCE-MR Imaging of the breast

  • Rastogi, A.
  • Yalavarthy, P. K.
Med Phys 2020 Journal Article, cited 0 times
Website
PURPOSE: To compare the performance of iterative direct and indirect parametric reconstruction methods with indirect deep learning-based reconstruction methods in estimating tracer-kinetic parameters from highly undersampled DCE-MR imaging breast data, and to provide a systematic comparison of the same. METHODS: Estimating tracer-kinetic parameters from undersampled data with indirect methods requires first reconstructing the anatomical images by solving an inverse problem; the reconstructed images are then used to estimate the tracer-kinetic parameters. In direct estimation, the parameters are estimated without reconstructing the anatomical images. Both problems are ill-posed and are typically solved using prior-based regularization or deep learning. In this study, for indirect estimation, two deep learning-based reconstruction frameworks, ISTA-Net(+) and MODL, were utilized. For direct and indirect parametric estimation, sparsity-inducing priors (L1 and Total Variation) were deployed with the limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm as the solver. The performance of these techniques was compared systematically in the estimation of vascular permeability (Ktrans) from undersampled DCE-MRI breast data using the Patlak pharmacokinetic model. The experiments involved retrospectively undersampling the data 20x, 50x, and 100x and comparing the results using PSNR, nRMSE, SSIM, and Xydeas metrics. The Ktrans maps estimated from fully sampled data were utilized as ground truth. The developed code was made available open-source at https://github.com/Medical-Imaging-Group/DCE-MRI-Compare. RESULTS: The performance of the reconstruction methods was evaluated using breast data from ten patients (five each for training and testing). Consistent with other studies, the results indicate that direct parametric reconstruction methods provide improved performance compared to indirect parametric reconstruction methods. The results also indicate that for 20x undersampling, deep learning-based methods perform better than or on par with direct estimation in terms of PSNR, SSIM, and nRMSE. However, for higher undersampling rates (50x and 100x), direct estimation performs better on all metrics. For all undersampling rates, direct reconstruction performed better in terms of the Xydeas metric, which indicates fidelity in the magnitude and orientation of edges. CONCLUSION: Deep learning-based indirect techniques perform on par with direct estimation techniques at lower undersampling rates in breast DCE-MR imaging. At higher undersampling rates, they are not able to provide the much-needed generalization. Direct estimation techniques provide more accurate results than both deep learning-based and parametric indirect methods in these high undersampling scenarios.
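For reference, the Patlak model named above is linear in its two parameters: the tissue tracer concentration C_t(t) is modeled from the plasma (arterial input) concentration C_p(t) as

```latex
C_t(t) = K^{\mathrm{trans}} \int_0^{t} C_p(\tau)\, d\tau + v_p\, C_p(t)
```

where K^trans is the vascular permeability (transfer constant) being mapped and v_p is the plasma volume fraction. This is the standard form of the model, stated here as background rather than quoted from the paper.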

From pixels to prognosis: unveiling radiomics models with SHAP and LIME for enhanced interpretability

  • Raptis, S.
  • Ilioudis, C.
  • Theodorou, K.
Biomed Phys Eng Express 2024 Journal Article, cited 0 times
Website
Radiomics-based prediction models have shown promise in predicting Radiation Pneumonitis (RP), a common adverse outcome of chest irradiation. This study looks into more than just RP: it also investigates a bigger shift in the way radiomics-based models work. By integrating multi-modal radiomic data, which includes a wide range of variables collected from medical images including cutting-edge PET/CT imaging, we have developed predictive models that capture the intricate nature of illness progression. Radiomic features were extracted using PyRadiomics, encompassing intensity, texture, and shape measures. The high-dimensional dataset formed the basis for our predictive models, primarily Gradient Boosting Machines (GBM)-XGBoost, LightGBM, and CatBoost. Performance evaluation metrics, including Multi-Modal AUC-ROC, Sensitivity, Specificity, and F1-Score, underscore the superiority of the Deep Neural Network (DNN) model. The DNN achieved a remarkable Multi-Modal AUC-ROC of 0.90, indicating superior discriminatory power. Sensitivity and specificity values of 0.85 and 0.91, respectively, highlight its effectiveness in detecting positive occurrences while accurately identifying negatives. External validation datasets, comprising retrospective patient data and a heterogeneous patient population, validate the robustness and generalizability of our models. The focus of our study is the application of sophisticated model interpretability methods, namely SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations), to improve the clarity and understanding of predictions. These methods allow clinicians to visualize the effects of features and provide localized explanations for every prediction, enhancing the comprehensibility of the model. This strengthens trust and collaboration between computational technologies and medical competence. The integration of data-driven analytics and medical domain expertise represents a significant shift in the profession, advancing us from analyzing pixel-level information to gaining valuable prognostic insights.
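
As a minimal illustration of the interpretability step described above, the sketch below fits a gradient-boosted classifier on synthetic "radiomic" features and computes SHAP attributions with TreeExplainer. All data and model settings are placeholders rather than the study's configuration.

```python
import numpy as np
import shap
from xgboost import XGBClassifier

# Toy stand-in for a radiomic feature matrix (rows: patients, cols: features).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=100) > 0).astype(int)

model = XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)

# TreeExplainer yields per-patient, per-feature SHAP attributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(np.shape(shap_values))  # one attribution per feature per case
```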

MRI brain image classification using Linear Vector Quantization Classifier

  • Rao, R. R.
  • Pabboju, S.
  • Raju, A. R.
Cardiometry 2022 Journal Article, cited 0 times
Apart from metastasis, there are no known lifestyle-related or environmental causes of brain tumors; the only factors that may increase brain cancer risk are exposure to high doses of ionizing radiation and a family history of brain disease. Brain cancer is a disorder in which abnormal cells in the brain form masses called tumors. Early diagnosis of brain cancer from Magnetic Resonance Imaging (MRI) scans is required to reduce the mortality rate. This work presents Dual-Tree M-band Wavelet Transform (DTMBWT)-based feature extraction and Linear Vector Quantization Classifier (LVQC)-based MRI brain image classification. DTMBWT decomposes the MRI brain images in the frequency domain into sub-bands, whose fuzzy-based low and high components are used to evaluate the selected features. Sub-band Energy Features (SEF), ranked individually and as subsets, help classify normal and abnormal images, with the LVQC producing the output prediction. The results show a classification accuracy of 95% using DTMBWT-based SEF and the LVQC.
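
The abstract does not detail the LVQC training rule. As background, here is a small NumPy sketch of the classic LVQ1 update (the nearest prototype is pulled toward same-class samples and pushed away otherwise), with invented toy features standing in for sub-band energies.

```python
import numpy as np

def train_lvq1(X, y, prototypes, proto_labels, lr=0.05, epochs=30):
    """LVQ1: pull the nearest prototype toward a sample of the same class,
    push it away otherwise."""
    P = prototypes.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            j = np.argmin(np.linalg.norm(P - xi, axis=1))  # nearest prototype
            if proto_labels[j] == yi:
                P[j] += lr * (xi - P[j])
            else:
                P[j] -= lr * (xi - P[j])
    return P

def predict_lvq(X, P, proto_labels):
    # Label of the nearest prototype for every sample.
    return proto_labels[np.argmin(np.linalg.norm(P[None, :, :] - X[:, None, :], axis=2), axis=1)]

# Toy sub-band energy features: two classes (normal / abnormal).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
P0 = np.vstack([X[:2], X[50:52]]).astype(float)  # two prototypes per class
labels = np.array([0, 0, 1, 1])
P = train_lvq1(X, y, P0, labels)
print((predict_lvq(X, P, labels) == y).mean())
```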

A combinatorial radiographic phenotype may stratify patient survival and be associated with invasion and proliferation characteristics in glioblastoma

  • Rao, Arvind
  • Rao, Ganesh
  • Gutman, David A
  • Flanders, Adam E
  • Hwang, Scott N
  • Rubin, Daniel L
  • Colen, Rivka R
  • Zinn, Pascal O
  • Jain, Rajan
  • Wintermark, Max
Journal of neurosurgery 2016 Journal Article, cited 19 times
Website
OBJECTIVE Individual MRI characteristics (e.g., volume) are routinely used to identify survival-associated phenotypes for glioblastoma (GBM). This study investigated whether combinations of MRI features can also stratify survival. Furthermore, the molecular differences between phenotype-induced groups were investigated. METHODS Ninety-two patients with imaging, molecular, and survival data from the TCGA (The Cancer Genome Atlas) GBM collection were included in this study. For combinatorial phenotype analysis, hierarchical clustering was used. Groups were defined based on a cutpoint obtained via tree-based partitioning. Furthermore, differential expression analysis of microRNA (miRNA) and mRNA expression data was performed using the GenePattern Suite. Functional analysis of the resulting genes and miRNAs was performed using Ingenuity Pathway Analysis. Pathway analysis was performed using Gene Set Enrichment Analysis. RESULTS Clustering analysis reveals that image-based grouping of the patients is driven by 3 features: volume-class, hemorrhage, and T1/FLAIR-envelope ratio. A combination of these features stratifies survival in a statistically significant manner. A cutpoint analysis yields a significant survival difference in the training set (median survival difference: 12 months, p = 0.004) as well as a validation set (p = 0.0001). Specifically, a low value for any of these 3 features indicates favorable survival characteristics. Differential expression analysis between cutpoint-induced groups suggests that several immune-associated (natural killer cell activity, T-cell lymphocyte differentiation) and metabolism-associated (mitochondrial activity, oxidative phosphorylation) pathways underlie the transition of this phenotype. Integrating data for mRNA and miRNA suggests the roles of several genes regulating proliferation and invasion. CONCLUSIONS A 3-way combination of MRI phenotypes may be capable of stratifying survival in GBM. Examination of molecular processes associated with groups created by this combinatorial phenotype suggests the role of biological processes associated with growth and invasion characteristics.
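
The abstract describes hierarchical clustering of a three-feature imaging phenotype followed by a survival comparison between the resulting groups. A minimal sketch of that workflow, assuming SciPy plus the lifelines package and entirely synthetic data, might look as follows (the paper's tree-based cutpoint analysis is not reproduced):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from lifelines.statistics import logrank_test

# Toy stand-in: three imaging features per patient (volume-class, hemorrhage,
# T1/FLAIR-envelope ratio), plus survival times and event indicators.
rng = np.random.default_rng(0)
features = rng.normal(size=(92, 3))
time = rng.exponential(scale=400, size=92)
event = rng.integers(0, 2, size=92)

# Hierarchical clustering of the combinatorial phenotype, cut into two groups.
groups = fcluster(linkage(features, method="ward"), t=2, criterion="maxclust")

# Log-rank test for a survival difference between the phenotype-induced groups.
res = logrank_test(time[groups == 1], time[groups == 2],
                   event_observed_A=event[groups == 1],
                   event_observed_B=event[groups == 2])
print(res.p_value)
```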

Integrative Analysis of mRNA, microRNA, and Protein Correlates of Relative Cerebral Blood Volume Values in GBM Reveals the Role for Modulators of Angiogenesis and Tumor Proliferation

  • Rao, Arvind
  • Manyam, Ganiraju
  • Rao, Ganesh
  • Jain, Rajan
Cancer Informatics 2016 Journal Article, cited 5 times
Website
Dynamic susceptibility contrast-enhanced magnetic resonance imaging is routinely used to provide hemodynamic assessment of brain tumors as a diagnostic as well as a prognostic tool. Recently, it was shown that the relative cerebral blood volume (rCBV), obtained from the contrast-enhancing as well as -nonenhancing portion of glioblastoma (GBM), is strongly associated with overall survival. In this study, we aim to characterize the genomic correlates (microRNA, messenger RNA, and protein) of this vascular parameter. This study aims to provide a comprehensive radiogenomic and radioproteomic characterization of the hemodynamic phenotype of GBM using publicly available imaging and genomic data from the Cancer Genome Atlas GBM cohort. Based on this analysis, we identified pathways associated with angiogenesis and tumor proliferation underlying this hemodynamic parameter in GBM.

Exploring relationships between multivariate radiological phenotypes and genetic features: A case-study in Glioblastoma using the Cancer Genome Atlas

  • Rao, Arvind
2013 Conference Proceedings, cited 0 times

Nerve optic segmentation in CT images using a deep learning model and a texture descriptor

  • Ranjbarzadeh, Ramin
  • Dorosti, Shadi
  • Jafarzadeh Ghoushchi, Saeid
  • Safavi, Sadaf
  • Razmjooy, Navid
  • Tataei Sarshar, Nazanin
  • Anari, Shokofeh
  • Bendechache, Malika
Complex & Intelligent Systems 2022 Journal Article, cited 1 times
Website

Brain tumor segmentation based on deep learning and an attention mechanism using MRI multi-modalities brain images

  • Ranjbarzadeh, R.
  • Bagherian Kasgari, A.
  • Jafarzadeh Ghoushchi, S.
  • Anari, S.
  • Naseri, M.
  • Bendechache, M.
2021 Journal Article, cited 0 times
Website
Brain tumor localization and segmentation from magnetic resonance imaging (MRI) are hard and important tasks for several applications in the field of medical analysis. As each brain imaging modality gives unique and key details related to each part of the tumor, many recent approaches use four modalities: T1, T1c, T2, and FLAIR. Although many of them obtained promising segmentation results on the BraTS 2018 dataset, they suffer from a complex structure that needs more time to train and test. In this paper, to obtain a flexible and effective brain tumor segmentation system, we first propose a preprocessing approach that works only on a small part of the image rather than the whole image. This method decreases computing time and overcomes the overfitting problems of a cascade deep learning model. In the second step, as we are dealing with a smaller part of the brain image in each slice, a simple and efficient Cascade Convolutional Neural Network (C-ConvNet/C-CNN) is proposed. This C-CNN model mines both local and global features along two different routes. Also, to improve the brain tumor segmentation accuracy compared with state-of-the-art models, a novel Distance-Wise Attention (DWA) mechanism is introduced. The DWA mechanism considers the effect of the center location of the tumor and of the brain within the model. Comprehensive experiments conducted on the BraTS 2018 dataset show that the proposed model obtains competitive results: it achieves mean whole tumor, enhancing tumor, and tumor core Dice scores of 0.9203, 0.9113, and 0.8726, respectively. Other quantitative and qualitative assessments are presented and discussed.

Improving semi-supervised deep learning under distribution mismatch for medical image analysis applications

  • Ramírez, Saúl Calderón
2021 Thesis, cited 0 times
Website
Deep learning methodologies have shown outstanding success in different image analysis applications. They rely on an abundance of labelled observations to build the model. However, it is frequently expensive to gather labelled observations of data, making the usage of deep learning models imprudent. Different practical examples of this challenge can be found in the analysis of medical images. For instance, labelling images to solve medical imaging problems requires expensive labelling efforts, as experts (i.e., radiologists) are needed to produce reliable labels. Semi-supervised learning is an increasingly popular alternative approach for dealing with small labelled datasets and increasing model test accuracy by leveraging unlabelled data. However, in real-world usage settings, an unlabelled dataset might present a different distribution than the labelled dataset (i.e., the labelled dataset was sampled from a target clinic and the unlabelled dataset from a source clinic). There are different causes of a distribution mismatch between the labelled and the unlabelled dataset: a prior probability shift, a set of observations from classes unseen in the labelled dataset, and a covariate shift of the features. In this work, we assess the impact of these phenomena on MixMatch, a state-of-the-art semi-supervised model. We evaluate the impact of both label and feature distribution mismatch on MixMatch in a real-world application: the classification of chest X-ray images for COVID-19 detection. We also test the performance gain of using MixMatch for malignant cancer detection using mammograms. For both study cases we managed to build new datasets from a private clinic in Costa Rica. We propose different approaches to address different causes of a distribution mismatch between the labelled and unlabelled datasets. First, regarding the prior probability shift, a simple model-oriented approach to deal with this challenge is proposed. According to our experiments, the proposed method yielded statistically significant accuracy gains of up to 14%. As for the more challenging distribution mismatch settings caused by a covariate shift in the feature space and by sampling unseen classes into the unlabelled dataset, we propose a data-oriented approach to deal with such challenges. As an assessment tool, we propose a set of dataset dissimilarity metrics designed to measure how much performance benefit a semi-supervised training regime can get from using a specific unlabelled dataset over another. Also proposed are two techniques designed to score each unlabelled observation according to how much accuracy including it in the unlabelled dataset for semi-supervised training might bring; these scores can be used to discard harmful unlabelled observations. The novel methods use a generic feature extractor to build a feature space where the metrics and scores are computed. The dataset dissimilarity metrics yielded a linear correlation of up to 90% with the performance of the state-of-the-art MixMatch semi-supervised training algorithm, suggesting that such metrics can be used to assess the quality of an unlabelled dataset. As for the scoring methods for unlabelled data, according to our tests, using them to discard harmful unlabelled data was able to increase the performance of MixMatch by around 20%, in the context of medical image analysis applications.

Reg R-CNN: Lesion Detection and Grading Under Noisy Labels

  • Ramien, Gregor N.
  • Jaeger, Paul F.
  • Kohl, Simon A. A.
  • Maier-Hein, Klaus H.
2019 Conference Proceedings, cited 0 times
Website
For the task of concurrently detecting and categorizing objects, the medical imaging community commonly adopts methods developed on natural images. Current state-of-the-art object detectors are comprised of two stages: the first stage generates region proposals, the second stage subsequently categorizes them. Unlike in natural images, however, for anatomical structures of interest such as tumors, the appearance in the image (e.g., scale or intensity) links to a malignancy grade that lies on a continuous ordinal scale. While classification models discard this ordinal relation between grades by discretizing the continuous scale to an unordered bag of categories, regression models are trained with distance metrics, which preserve the relation. This advantage becomes all the more important in the setting of label confusions on ambiguous data sets, which is the usual case with medical images. To this end, we propose Reg R-CNN, which replaces the second-stage classification model of a current object detector with a regression model. We show the superiority of our approach on a public data set with 1026 patients and a series of toy experiments. Code will be available at github.com/MIC-DKFZ/RegRCNN.
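
To illustrate the design choice this abstract argues for, here is a small PyTorch sketch contrasting a regression head trained with a distance-preserving loss against the usual classification treatment of malignancy grade. The head sizes, feature dimensions, and grade values are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

# A classification head discretizes malignancy grade into unordered classes;
# a regression head (the Reg R-CNN idea) keeps the ordinal scale by
# predicting a continuous score trained with a distance-based loss.
features = torch.randn(8, 256)                            # pooled per-proposal features
grades = torch.tensor([1., 2., 5., 3., 4., 2., 1., 5.])   # continuous grade labels

reg_head = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))
pred = reg_head(features).squeeze(1)

# Smooth L1 penalizes predictions in proportion to their distance from the
# true grade, unlike cross-entropy, which treats all wrong grades equally.
loss = nn.SmoothL1Loss()(pred, grades)
loss.backward()
print(loss.item())
```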

Brain Tumor Classification Using MRI Images with K-Nearest Neighbor Method

  • Ramdlon, Rafi Haidar
  • Martiana Kusumaningtyas, Entin
  • Karlita, Tita
2019 Conference Proceedings, cited 0 times
Accurate diagnosis of tumor type from MRI results is required to establish appropriate medical treatment. MRI results can be examined computationally using the K-Nearest Neighbor method, a basic classification technique in image processing. The tumor classification system is designed to detect tumor and edema in T1 and T2 image sequences, as well as to label and classify the tumor type. The system interprets only the axial sections of the MRI results, which are classified into three classes: Astrocytoma, Glioblastoma, and Oligodendroglioma. To detect the tumor area, basic image processing techniques are employed, comprising image enhancement, image binarization, morphological image processing, and watershed. Tumor classification is applied after segmentation, using a shape feature extraction step. The tumor classification accuracy obtained was 89.5 percent, providing clearer and more specific information regarding tumor detection.
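
As a compact sketch of the classification step described above, the following scikit-learn snippet fits a K-Nearest Neighbor classifier on synthetic shape features; the feature set, class labels, and split are placeholders rather than the paper's data.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Toy stand-in for shape features extracted from segmented tumor regions.
rng = np.random.default_rng(0)
X = rng.normal(size=(150, 6))     # e.g., area, perimeter, eccentricity, ...
y = rng.integers(0, 3, size=150)  # 0/1/2 = Astrocytoma / Glioblastoma / Oligodendroglioma

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print(knn.score(X_te, y_te))
```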

Convolutional-Neural-Network Assisted Segmentation and SVM Classification of Brain Tumor in Clinical MRI Slices

  • Rajinikanth, Venkatesan
  • Kadry, Seifedine
  • Nam, Yunyoung
Information Technology And Control 2021 Journal Article, cited 1 times
Website
As disease occurrence rates in humans increase, so does the need for Automated Disease Diagnosis (ADD) systems. Most ADD systems are proposed to support the doctor during the screening and decision-making process. This research aims at developing a Computer Aided Disease Diagnosis (CADD) scheme to categorize the brain tumour of 2D MRI slices into the Glioblastoma/Glioma class with better accuracy. The main contribution of this research work is to develop a CADD system with Convolutional-Neural-Network (CNN) supported segmentation and classification. The proposed CADD framework consists of the following phases: (i) image collection and resizing, (ii) automated tumour segmentation using VGG-UNet, (iii) deep-feature extraction using the VGG16 network, (iv) handcrafted feature extraction, (v) finest feature choice by the firefly algorithm, and (vi) serial feature concatenation and binary classification. The merit of the executed CADD is confirmed through an investigation on benchmark as well as clinically collected brain MRI slices. In this work, a binary classification with 10-fold cross-validation is implemented using well-known classifiers, and the results attained with SVM-Cubic (accuracy >98%) are superior. This result confirms that the combination of CNN-assisted segmentation and classification helps to achieve enhanced disease detection accuracy.

Glioma/Glioblastoma Detection in Brain MRI using Pre-trained Deep-Learning Scheme

  • Rajinikanth, Venkatesan
  • Kadry, Seifedine
  • Damaševičius, Robertas
  • Sujitha, R. Angel
  • Balaji, Gangadharam
  • Mohammed, Mazin Abed
2022 Conference Paper, cited 0 times
Website
Convolutional Neural Network (CNN)-supported medical image examination is widely accepted due to its improved accuracy. The experimental outcome obtained with a deep-learning scheme (DLS) along with a chosen classifier helps to achieve better detection results than traditional and machine-learning methods. The proposed research examines the performance of pre-trained VGG16 and VGG19 schemes in detecting brain tumour (Glioma/Glioblastoma) grade using different pooling methods. The classification is performed using SoftMax with five-fold cross-validation, and the results are compared and presented. The brain tumour images considered in this study are collected from The Cancer Imaging Archive (TCIA) dataset. This work considered 2000 axial-plane images (1000 Glioma and 1000 Glioblastoma) with dimensions of 224×224×3 pixels for the assessment, and the attained results are compared. The experimental outcome achieved with Python® confirms that VGG16 with average-pooling provides better classification accuracy (>99%) with a Decision Tree (DT) compared with the other methods considered.
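
A minimal torchvision sketch of the kind of setup described here: a pretrained VGG16 trunk with an average-pooling variant used as a fixed deep-feature extractor for 224×224×3 slices. The pooling placement, feature dimension, and batch are illustrative assumptions rather than the paper's exact pipeline.

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained VGG16 (weights download on first use; older torchvision
# versions use pretrained=True instead of the weights argument).
vgg = models.vgg16(weights="DEFAULT")
vgg.eval()

# Convolutional trunk plus average pooling as a fixed feature extractor,
# mirroring the "VGG16 with average-pooling" configuration reported above.
feature_extractor = nn.Sequential(
    vgg.features,                  # convolutional blocks
    nn.AdaptiveAvgPool2d((1, 1)),  # average-pooling variant
    nn.Flatten(),                  # -> 512-dim deep-feature vector per image
)

with torch.no_grad():
    batch = torch.randn(4, 3, 224, 224)  # stand-in for 224x224x3 axial slices
    feats = feature_extractor(batch)
print(feats.shape)  # torch.Size([4, 512]): features for a downstream DT/SoftMax
```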

Lung Cancer Diagnosis and Treatment Using AI and Mobile Applications

  • Rajesh, P.
  • Murugan, A.
  • Muruganantham, B.
  • Ganesh Kumar, S.
International Journal of Interactive Mobile Technologies (iJIM) 2020 Journal Article, cited 0 times
Website
Cancer has become very common in this evolving world; technological advancement and increased radiation exposure have made it a common syndrome. Various types of cancer exist, such as skin cancer, breast cancer, prostate cancer, blood cancer, colorectal cancer, kidney cancer, and lung cancer. Among these, lung cancer has a high mortality rate; it is tough to diagnose and is often detected only in advanced stages. Small cell lung cancer and non-small cell lung cancer are the two types, of which non-small cell lung cancer (NSCLC) is the most common, making up 80 to 85 percent of all cases [1]. Advances in digital image processing and artificial intelligence have helped greatly in medical image analysis and Computer Aided Diagnosis (CAD). Numerous studies have been carried out in this field to improve the detection and prediction of cancerous tissues. Current methods apply traditional image processing techniques for image processing, noise removal, and feature extraction. There are a few good approaches that apply artificial intelligence and produce better results. However, no research has achieved 100% accuracy in nodule detection or early detection of cancerous nodules, nor faster processing methods. In this paper [Figure 1], we have applied artificial intelligence techniques to process CT (Computed Tomography) scan images for data collection and data model training. The DICOM image data are saved as numpy files, with all medical information extracted from the files for training. With the trained data, we apply deep learning for noise removal and feature extraction. We can process huge volumes of medical images for data collection, image processing, and detection and prediction of nodules. The patient is made well aware of the disease and enabled to track their health using various mobile applications available in the online stores for iOS and Android mobile devices.

Intelligent texture feature extraction and indexing for MRI image retrieval using curvelet and PCA with HTF

  • Rajakumar, K
  • Muttan, S
  • Deepa, G
  • Revathy, S
  • Priya, B Shanmuga
Advances in Natural and Applied Sciences 2015 Journal Article, cited 0 times
Website
With the development of multimedia network technology and the rapid increase in image applications, Content Based Image Retrieval (CBIR) has become the most active area in image retrieval research. The fields of application of CBIR are becoming ever wider and more exhaustive. Most traditional image retrieval systems use color, texture, shape, and spatial relationships. At present, texture features play a very important role in computer vision and pattern recognition, especially in describing the content of images. However, most texture-based image retrieval systems provide results with insufficient retrieval accuracy. To address this problem, an image retrieval system based on the curvelet transform and PCA with Haralick Texture Features (HTF) is proposed in this paper. The combined approach of curvelet and PCA using HTF has produced better results than other proposed techniques.
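
Haralick texture features are derived from gray-level co-occurrence matrices (GLCMs). As background, the sketch below computes a few such descriptors with scikit-image on a synthetic patch; the curvelet/PCA pipeline itself is not reproduced, and older scikit-image versions spell these functions greycomatrix/greycoprops.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Toy 8-bit grayscale patch standing in for an MRI region of interest.
rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# Gray-level co-occurrence matrix at distance 1, four orientations.
glcm = graycomatrix(patch, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

# Haralick-style texture descriptors derived from the GLCM.
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop).mean())
```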

REPRESENTATION LEARNING FOR BREAST CANCER LESION DETECTION

  • Raimundo, João Nuno Centeno
2022 Thesis, cited 0 times
Website
Breast Cancer (BC) is the type of cancer with the second-highest incidence in women and is responsible for the deaths of hundreds of thousands of women every year. However, when detected in the early stages of the disease, treatment methods have proven very effective in increasing life expectancy, and in many cases patients fully recover. Several medical image modalities, such as MG (mammography, X-rays), US (ultrasound), CT (computed tomography), MRI (magnetic resonance imaging), and tomosynthesis, have been explored to support radiologists/physicians in clinical decision-making workflows for the detection and diagnosis of BC. MG is the most used imaging modality worldwide; however, recent research results have demonstrated that breast MRI is more sensitive than mammography in finding pathological lesions and is not limited/affected by breast density issues. It is therefore currently a trend to introduce MRI-based breast assessment into clinical workflows (screening and diagnosis). Compared to MG, however, the workload of radiologists/physicians increases: MRI assessment is a more time-consuming task, and its effectiveness is affected not only by the variety of morphological characteristics of each specific tumor phenotype and its origin but also by human fatigue. Computer-Aided Detection (CADe) methods have been widely explored, primarily in mammography screening tasks, but detection remains an unsolved problem in breast MRI settings. This work aims to explore and validate BC detection models using machine (deep) learning algorithms. As the main contribution, we have developed and validated an innovative method that improves the breast MRI preprocessing phase to select the patient's image slices and the bounding boxes representing pathological lesions. With this, it is possible to build a more robust training dataset to feed the deep learning models, reducing the computation time and the dimension of the dataset and, more importantly, identifying with high accuracy the specific regions (bounding boxes) of each patient image in which a possible pathological lesion (tumor) has been identified. In experimental settings using a fully annotated dataset (released to the public domain) comprising a total of 922 MRI-based BC patient cases, we achieved, with the most accurate trained model, an accuracy rate of 97.83% and, subsequently, applying a ten-fold cross-validation method, a mean accuracy over the trained models of 94.46% with an associated standard deviation of 2.43%.

An AI-Based Low-Risk Lung Health Image Visualization Framework Using LR-ULDCT

  • Rai, S.
  • Bhatt, J. S.
  • Patra, S. K.
2024 Journal Article, cited 0 times
Website
In this article, we propose an AI-based low-risk visualization framework for lung health monitoring using low-resolution ultra-low-dose CT (LR-ULDCT). We present a novel deep cascade processing workflow to achieve diagnostic visualization on LR-ULDCT (<0.3 mSv) on par with high-resolution CT (HRCT) at 100 mSv. To this end, we build a low-risk and affordable deep cascade network comprising three sequential deep processes: restoration, super-resolution (SR), and segmentation. Given a degraded LR-ULDCT, the first novel network unsupervisedly learns a restoration function from augmented patch-based dictionaries and residuals. The restored version is then super-resolved to the target (sensor) resolution. Here, we combine perceptual and adversarial losses in a novel GAN to establish the closeness between the probability distributions of the generated SR-ULDCT and the restored LR-ULDCT. The SR-ULDCT is then presented to the segmentation network, which first separates the chest portion from the SR-ULDCT, followed by lobe-wise colorization. Finally, we extract five lobes to account for the presence of ground glass opacity (GGO) in the lung. Hence, our AI-based system provides low-risk visualization of the degraded input LR-ULDCT at various stages, i.e., restored LR-ULDCT, restored SR-ULDCT, and segmented SR-ULDCT, and achieves the diagnostic power of HRCT. We perform case studies by experimenting on real datasets of COVID-19, pneumonia, and pulmonary edema/congestion while comparing our results with the state of the art. Ablation experiments are conducted to better visualize the different operating pipelines. Finally, we present a verification report by fourteen (14) experienced radiologists and pulmonologists.

Redundancy Reduction in Semantic Segmentation of 3D Brain Tumor MRIs

  • Rahman Siddiquee, Md Mahfuzur
  • Myronenko, Andriy
2022 Book Section, cited 4 times
Website
Another year of the multimodal brain tumor segmentation challenge (BraTS), 2021, provides an even larger dataset to facilitate collaboration and research on brain tumor segmentation methods, which are necessary for disease analysis and treatment planning. The large dataset size of BraTS 2021 and the advent of modern GPUs provide a better opportunity for deep-learning-based approaches to learn tumor representations from the data. In this work, we maintained an encoder-decoder based segmentation network but focused on a modification of the network training process that minimizes redundancy under perturbations. Given a set of trained networks, we further introduce a confidence-based ensembling technique to improve performance. We evaluated the method on BraTS 2021; in terms of Dice for enhancing tumor, tumor core, and whole tumor, we achieved averages of 0.8600, 0.8868, and 0.9265 on the validation set, and 0.8769, 0.8721, and 0.9266 on the testing set. Our team's (NVAUTO) submission was the top performing in terms of ET and TC scores and, using the BraTS ranking system (based on the Dice and Hausdorff distance ranking per case), achieved 2nd place on the validation set and 4th place on the testing set.
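
The abstract does not spell out the exact confidence-based ensembling rule. As one plausible interpretation, the NumPy sketch below weights each model's probability map by its mean confidence (distance of probabilities from 0.5) before averaging; maps and weighting are invented for illustration.

```python
import numpy as np

def confidence_weighted_ensemble(prob_maps):
    """Combine per-model probability maps, weighting each model by its mean
    confidence, i.e., how far its probabilities sit from 0.5."""
    prob_maps = np.asarray(prob_maps)                 # (n_models, ...) in [0, 1]
    conf = np.abs(prob_maps - 0.5).mean(axis=tuple(range(1, prob_maps.ndim)))
    weights = conf / conf.sum()
    return np.tensordot(weights, prob_maps, axes=1)   # weighted average map

# Three models' tumor probability maps for one 2D slice (toy values).
maps = np.random.default_rng(0).random((3, 128, 128))
fused = confidence_weighted_ensemble(maps)
print(fused.shape)
```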

Brain Tumor Segmentation Using UNet-Context Encoding Network

  • Rahman, Md Monibor
  • Sadique, Md Shibly
  • Temtam, Ahmed G.
  • Farzana, Walia
  • Vidyaratne, L.
  • Iftekharuddin, Khan M.
2022 Conference Paper, cited 0 times
Website
Glioblastoma is an aggressive type of cancer that can develop in the brain or spinal cord. Magnetic Resonance Imaging (MRI) is key to diagnosing and tracking brain tumors in clinical settings. Brain tumor segmentation in MRI is required for disease diagnosis, surgical planning, and prognosis. As these tumors are heterogeneous in shape and appearance, their segmentation becomes a challenging task. The performance of automated medical image segmentation has considerably improved because of recent advances in deep learning. Introducing context encoding into deep CNN models has shown promise for semantic segmentation of brain tumors. In this work, we use a 3D UNet-Context Encoding (UNCE) deep learning network for improved brain tumor segmentation. Further, we introduce epistemic and aleatoric Uncertainty Quantification (UQ) using Monte Carlo Dropout (MCDO) and Test Time Augmentation (TTA) with the UNCE deep learning model to ascertain confidence in tumor segmentation performance. We build our model using the training MRI image sets of the RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge 2021 and evaluate its performance using the validation and test images from the BraTS challenge dataset. Online evaluation of the validation data shows Dice similarity coefficients (DSC) of 0.7787, 0.8499, and 0.9159 for enhancing tumor (ET), tumor core (TC), and whole tumor (WT), respectively. The DSCs on the test dataset are 0.6684 for ET, 0.7056 for TC, and 0.7551 for WT.
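
To make the MCDO idea concrete, here is a small PyTorch sketch that keeps dropout layers active at test time and aggregates repeated stochastic forward passes into a mean prediction plus a per-voxel uncertainty map. The tiny network, shapes, and sample count are placeholders, not the paper's UNCE model.

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model, x, n_samples=20):
    """Epistemic uncertainty via Monte Carlo Dropout: keep dropout active at
    test time and aggregate repeated stochastic forward passes."""
    model.eval()
    for m in model.modules():              # re-enable dropout layers only
        if isinstance(m, nn.Dropout):
            m.train()
    with torch.no_grad():
        preds = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])
    return preds.mean(0), preds.std(0)     # mean map + per-voxel uncertainty

# Tiny stand-in for a segmentation network with dropout.
model = nn.Sequential(nn.Conv2d(4, 8, 3, padding=1), nn.ReLU(),
                      nn.Dropout(0.5), nn.Conv2d(8, 1, 1))
x = torch.randn(1, 4, 64, 64)              # 4 MRI modalities, one slice
mean_map, uncertainty = mc_dropout_predict(model, x)
print(mean_map.shape, uncertainty.shape)
```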

Detection of Acute Myeloid Leukemia from Peripheral Blood Smear Images Using Transfer Learning in Modified CNN Architectures

  • Rahman, Jeba Fairooz
  • Ahmad, Mohiuddin
2023 Book Section, cited 0 times
Acute myeloid leukemia (AML), the most fatal hematological malignancy, is characterized by immature leukocyte proliferation in the bone marrow and peripheral blood. Conventional diagnosis of AML, performed by trained examiners using microscopic images of a peripheral blood smear, is a time-consuming and tedious process. Considering these issues, this study proposes a transfer learning-based approach for the accurate detection of immature leukocytes to diagnose AML. At first, the data were resized and transformed at the pre-processing stage. Then augmentation was performed on the training data. Finally, pre-trained convolutional neural network architectures were used with transfer learning. Transfer learning through modified AlexNet, ResNet50, DenseNet161, and VGG-16 was used to detect immature leukocytes. After model training and validation using different parameters, the models with the best parameters were applied to the test set. Among the models, modified AlexNet achieved 96.52% accuracy, 94.94% AUC, and an average recall, precision, and F1-score of 97.00%, 97.00%, and 97.00%, respectively. The results of this study demonstrate that the proposed approach can aid the diagnosis of AML through efficient screening of immature leukocytes.

An Efficient Framework for Accurate Arterial Input Selection in DSC-MRI of Glioma Brain Tumors

  • Rahimzadeh, H
  • Kazerooni, A Fathi
  • Deevband, MR
  • Rad, H Saligheh
Journal of Biomedical Physics and Engineering 2018 Journal Article, cited 0 times
Website

U-Net Based Glioblastoma Segmentation with Patient’s Overall Survival Prediction

  • Rafi, Asra
  • Ali, Junaid
  • Akram, Tahir
  • Fiaz, Kiran
  • Raza Shahid, Ahmad
  • Raza, Basit
  • Mustafa Madni, Tahir
2020 Conference Proceedings, cited 0 times
Glioma is a type of malignant brain tumor that requires early detection for patient Overall Survival (OS) prediction and better treatment planning. This task can be simplified by computer-aided automatic segmentation of brain MRI volumes into sub-regions. MRI volume segmentation can be achieved by deep learning methods, but the highly imbalanced data make it very challenging. In this article, we propose deep learning-based solutions for glioma segmentation and patient OS prediction. To segment each pixel, we have designed a slice-based, simplified version of the 2D U-Net, and to predict OS we have analyzed radiomic features. The training dataset of the BraTS 2019 challenge is partitioned into train and test sets, and our preliminary results on the test set are promising, with Dice scores of 0.84 (whole tumor), 0.80 (tumor core), and 0.63 (enhancing tumor) in glioma segmentation. Radiomic features based on intensity and shape are extracted from the MRI volumes and the segmented tumor for the OS prediction task. We further eliminate low-variance features using Recursive Feature Elimination (RFE). Random Forest Regression is used to predict OS time. Using the intensities of peritumoral edema (label 2) from FLAIR and of the necrotic and non-enhancing tumor core (label 1) along with enhancing tumor (label 4) from T1 contrast-enhanced volumes, together with patient age, we are able to predict a patient's OS with an accuracy of 31%.
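
A minimal scikit-learn sketch of the OS-prediction step described above (RFE for feature selection followed by Random Forest regression); the feature matrix, target, and hyperparameters are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE

# Toy stand-in for intensity/shape radiomic features plus patient age.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 30))
y = 200 + 50 * X[:, 0] - 30 * X[:, 5] + rng.normal(scale=20, size=80)  # OS in days

# Recursive Feature Elimination to drop low-value features, then RF regression.
selector = RFE(RandomForestRegressor(n_estimators=100, random_state=0),
               n_features_to_select=10)
selector.fit(X, y)
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(selector.transform(X), y)
print(rf.predict(selector.transform(X[:3])))
```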

Detection, quantification, malignancy prediction and growth forecasting of pulmonary nodules using deep learning in follow-up CT scans

  • Rafael Palou, Xavier
2021 Thesis, cited 0 times
Website
Nowadays, lung cancer assessment is a complex and tedious task mainly performed by radiological visual inspection of suspicious pulmonary nodules, using computed tomography (CT) scan images taken of patients over time. Several computational tools relying on conventional artificial intelligence and computer vision algorithms have been proposed for supporting lung cancer detection and classification. These solutions mostly rely on the analysis of individual lung CT images of patients and on the use of hand-crafted image descriptors. Unfortunately, this makes them unable to cope with the complexity and variability of the problem. Recently, the advent of deep learning has led to a major breakthrough in the medical image domain, outperforming conventional approaches. Despite recent promising achievements in nodule detection, segmentation, and lung cancer classification, radiologists are still reluctant to use these solutions in their day-to-day clinical practice. One of the main reasons is that current solutions do not provide support for automatic analysis of the temporal evolution of lung tumours. The difficulty of collecting and annotating longitudinal lung CT cases to train models may partially explain the lack of deep learning studies that address this issue. In this dissertation, we investigate how to automatically provide lung cancer assessment through deep learning algorithms and computer vision pipelines, especially taking into consideration the temporal evolution of pulmonary nodules. To this end, our first goal consisted of obtaining accurate methods for lung cancer assessment (diagnostic ground truth) based on individual lung CT images. Since these types of labels are expensive and difficult to collect (e.g. usually after biopsy), we proposed to train different deep learning models, based on 3D convolutional neural networks (CNN), to predict nodule malignancy based on radiologist visual inspection annotations (which are reasonable to obtain). These classifiers were built on ground truth consisting of the nodule malignancy, position, and size of the nodules to classify. Next, we evaluated different ways of synthesizing the knowledge embedded in the nodule malignancy neural network into an end-to-end pipeline aimed at detecting pulmonary nodules and predicting lung cancer at the patient level, given a lung CT image. The positive results confirmed the suitability of CNNs for modelling nodule malignancy, according to radiologists, for the automatic prediction of lung cancer. Next, we focused on the analysis of lung CT image series. We first faced the problem of automatically re-identifying pulmonary nodules across different lung CT scans of the same patient. To do this, we present a novel method based on a Siamese neural network (SNN) to rank similarity between nodules, bypassing the need for image registration. This change of paradigm avoided introducing potentially erroneous image deformations and provided computationally faster results. Different configurations of the SNN were examined, including the application of transfer learning, different loss functions, and the combination of several feature maps from different network levels. This method obtained state-of-the-art performance for nodule matching, both in isolation and embedded in an end-to-end nodule growth detection pipeline. Afterwards, we moved to the core problem of supporting radiologists in the longitudinal management of lung cancer. For this purpose, we created a novel end-to-end deep learning pipeline, composed of four stages that fully automate the workflow from nodule detection to cancer classification, through the detection of growth in the nodules. In addition, the pipeline integrated a novel approach for nodule growth detection, which relies on a recent hierarchical probabilistic segmentation network adapted to report uncertainty estimates. A second novel method was also introduced for lung cancer nodule classification, integrating into a two-stream 3D-CNN the estimated nodule malignancy probabilities derived from a pre-trained nodule malignancy network. The pipeline was evaluated in a longitudinal cohort, and the reported outcomes (i.e. nodule detection, re-identification, growth quantification, and malignancy prediction) were comparable with state-of-the-art work focused on solving one or a few of the functionalities of our pipeline. Thereafter, we also investigated how to help clinicians prescribe more accurate tumour treatments and surgical planning. We created a novel method to forecast nodule growth given a single image of the nodule. In particular, the method relies on a hierarchical, probabilistic and generative deep neural network able to produce multiple consistent future segmentations of the nodule at a given time. To do this, the network learned to model the multimodal posterior distribution of future lung tumour segmentations by using variational inference and injecting the posterior latent features. Eventually, by applying Monte-Carlo sampling on the outputs of the trained network, we estimated the expected tumour growth mean and the uncertainty associated with the prediction. Although further evaluation in a larger cohort would be highly recommended, the proposed methods reported sufficiently accurate results to support the radiological workflow of pulmonary nodule follow-up. Beyond this specific application, the outlined innovations, such as the methods for integrating CNNs into computer vision pipelines, the re-identification of suspicious regions over time based on SNNs without the need to warp the inherent image structure, or the proposed deep generative and probabilistic network to model tumour growth considering ambiguous images and label uncertainty, could be readily applied to other types of cancer (e.g. pancreas), clinical diseases (e.g. Covid-19) or medical applications (e.g. therapy follow-up).
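
The registration-free nodule re-identification idea reads well as code. A minimal PyTorch sketch of a Siamese encoder whose cosine similarity ranks candidate nodule pairs across time points; the layer sizes and patch shape are illustrative assumptions, not the thesis' architecture:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SiameseRanker(nn.Module):
        # Twin 3D-CNN encoder shared by both inputs (weight sharing = "Siamese").
        def __init__(self, emb_dim=128):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                nn.Linear(32, emb_dim))

        def forward(self, patch_a, patch_b):
            za, zb = self.encoder(patch_a), self.encoder(patch_b)
            return F.cosine_similarity(za, zb)  # higher = more likely same nodule

    # Rank a current-scan nodule against a candidate from a prior scan.
    sim = SiameseRanker()(torch.randn(1, 1, 32, 32, 32),
                          torch.randn(1, 1, 32, 32, 32))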

AVT: Multicenter aortic vessel tree CTA dataset collection with ground truth segmentation masks

  • Radl, L.
  • Jin, Y.
  • Pepe, A.
  • Li, J.
  • Gsaxner, C.
  • Zhao, F. H.
  • Egger, J.
2022 Journal Article, cited 2 times
Website
In this article, we present a multicenter aortic vessel tree database collection, containing 56 aortas and their branches. The datasets have been acquired with computed tomography angiography (CTA) scans, and each scan covers the ascending aorta, the aortic arch and its branches into the head/neck area, the thoracic aorta, the abdominal aorta and the lower abdominal aorta with the iliac arteries branching into the legs. For each scan, the collection provides a semi-automatically generated segmentation mask of the aortic vessel tree (ground truth). The scans come from three different collections and various hospitals, with various resolutions, which enables studying the geometry/shape variability of human aortas and their branches from different geographic locations. Furthermore, the collection enables the creation of a robust statistical model of the shape of human aortic vessel trees, which can be used for various tasks, such as the development of fully-automatic segmentation algorithms for new, unseen aortic vessel tree cases, e.g. by training deep learning-based approaches. Hence, the collection can also serve as an evaluation set for automatic aortic vessel tree segmentation algorithms.

Fully Automated Multi-Modal Anatomic Atlas Generation Using 3D-Slicer

  • Rackerseder, Julia
  • González, Antonio Miguel Luque
  • Düwel, Charlotte
  • Navab, Nassir
  • Frisch, Benjamin
2017 Conference Paper, cited 2 times
Website
Atlases of the human body have many applications, including for instance the analysis of information from patient cohorts to evaluate the distribution of tumours and metastases. We present a 3D Slicer module that simplifies the task of generating a multi-modal atlas from anatomical and functional data. It provides for a simpler evaluation of existing image and verbose patient data by integrating a database that is automatically generated from text files and accompanies the visualization of the atlas volume. The computation of the atlas is a two-step process. First, anatomical data is pairwise registered to a reference dataset with an affine initialization and a B-Spline based deformable approach. Second, the computed transformations are applied to anatomical as well as the corresponding functional data to generate both atlases. The module is validated with a publicly available soft tissue sarcoma dataset from The Cancer Imaging Archive. We show that functional data in the atlas volume correlates with the findings from the patient database.
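
The two-step registration scheme can be sketched with SimpleITK. A hedged outline, assuming affine initialization followed by B-spline refinement, with illustrative metric/optimizer settings and hypothetical file names (not the module's actual parameters):

    import SimpleITK as sitk

    fixed = sitk.ReadImage("reference_ct.nii.gz", sitk.sitkFloat32)  # hypothetical paths
    moving = sitk.ReadImage("subject_ct.nii.gz", sitk.sitkFloat32)

    # Step 1: affine registration to the reference dataset.
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(learningRate=2.0, minStep=1e-4,
                                                 numberOfIterations=200)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetInitialTransform(sitk.CenteredTransformInitializer(
        fixed, moving, sitk.AffineTransform(3),
        sitk.CenteredTransformInitializerFilter.GEOMETRY))
    affine = reg.Execute(fixed, moving)
    moving_aff = sitk.Resample(moving, fixed, affine, sitk.sitkLinear)

    # Step 2: B-spline deformable refinement on the affinely aligned image.
    reg2 = sitk.ImageRegistrationMethod()
    reg2.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg2.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5, numberOfIterations=100)
    reg2.SetInterpolator(sitk.sitkLinear)
    reg2.SetInitialTransform(
        sitk.BSplineTransformInitializer(fixed, transformDomainMeshSize=[8, 8, 8]))
    deformable = reg2.Execute(fixed, moving_aff)

    # The same transforms can then be applied to the functional volume.
    warped = sitk.Resample(moving_aff, fixed, deformable, sitk.sitkLinear)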

Prostate segmentation: An efficient convex optimization approach with axial symmetry using 3-D TRUS and MR images

  • Qiu, Wu
  • Yuan, Jing
  • Ukwatta, Eranga
  • Sun, Yue
  • Rajchl, Martin
  • Fenster, Aaron
IEEE Transactions on Medical Imaging 2014 Journal Article, cited 58 times
Website

A Hybrid Attention Ensemble Framework for Zonal Prostate Segmentation

  • Qiu, Mingyan
  • Zhang, Chenxi
  • Song, Zhijian
2021 Conference Paper, cited 0 times
Website
Accurate and automatic segmentation of the prostate sub-regions is of great importance for the diagnosis of prostate cancer and quantitative analysis of the prostate. By analyzing the characteristics of prostate images, we propose a hybrid attention ensemble framework (HAEF) to automatically segment the central gland (CG) and peripheral zone (PZ) of the prostate from a 3D MR image. The proposed attention bridge module (ABM) in the HAEF helps the Unet to be more robust for cases with large differences in foreground size. In order to deal with the low segmentation accuracy of the PZ caused by the small proportion of PZ relative to CG, we gradually increase the proportion of voxels in the region of interest (ROI) in the image through multi-stage cropping, and then introduce self-attention mechanisms in the channel and spatial domains to enhance the multi-level semantic features of the target. Finally, post-processing methods such as ensembling and classification are used to refine the segmentation results. Extensive experiments on the dataset from the NCI-ISBI 2013 Challenge demonstrate that the proposed framework can automatically and accurately segment the prostate sub-regions, with a mean DSC of 0.881 for CG and 0.821 for PZ, a 95% HDE of 3.57 mm for CG and 3.72 mm for PZ, and an ASSD of 1.08 mm for CG and 0.96 mm for PZ, and that it outperforms state-of-the-art methods in terms of DSC for PZ and average DSC of CG and PZ.

Texture Classification Study of MR Images for Hepatocellular Carcinoma

  • Qiu, Jia-jun
  • Wu, Yue
  • Hui, Bei
  • Liu, Yan-bo
Journal of University of Electronic Science and Technology of China 2019 Journal Article, cited 0 times
Website
Combining wavelet multi-resolution analysis and statistical analysis, a composite texture classification model is proposed to evaluate its value in the computer-aided diagnosis of hepatocellular carcinoma (HCC) and normal liver tissue based on magnetic resonance (MR) images. First, training samples are divided into two groups by category, and statistics of the wavelet coefficients are calculated for each group. Second, two discretizations are performed on the wavelet coefficients of a new sample based on the two sets of statistical results, and two groups of features can be extracted by histogram, co-occurrence matrix, run-length matrix, etc. Finally, classification is performed twice based on the two groups of features to calculate the category attribute probabilities, and a decision is then made. The experimental results demonstrate that the proposed model obtains better classification performance than routine methods, which is valuable for the computer-aided diagnosis of HCC and normal liver tissue based on MR images.
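
The first stage (per-group wavelet statistics) can be sketched in a few lines with PyWavelets; the wavelet family, decomposition level, and chosen statistics are illustrative assumptions, not the paper's settings:

    import numpy as np
    import pywt

    def wavelet_subband_stats(img, wavelet="db2", level=2):
        # Multi-resolution decomposition of a 2D MR slice.
        coeffs = pywt.wavedec2(np.asarray(img, dtype=float), wavelet, level=level)
        # Flatten: approximation band first, then (H, V, D) details per level.
        bands = [coeffs[0]] + [b for triple in coeffs[1:] for b in triple]
        # Simple per-band statistics, computed separately for each class group.
        return np.array([s for band in bands for s in (band.mean(), band.std())])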

Modal Uncertainty Estimation for Medical Imaging Based Diagnosis

  • Qiu, Di
  • Lui, Lok Ming
2021 Book Section, cited 0 times
Website

ProCDet: a new method for prostate cancer detection based on MR images

  • Qian, Y.
  • Zhang, Z.
  • Wang, B.
IEEE Access 2021 Journal Article, cited 0 times
Website

Identification of biomarkers for pseudo and true progression of GBM based on radiogenomics study

  • Qian, Xiaohua
  • Tan, Hua
  • Zhang, Jian
  • Liu, Keqin
  • Yang, Tielin
  • Wang, Maode
  • Debinski, Waldemar
  • Zhao, Weilin
  • Chan, Michael D
  • Zhou, Xiaobo
Oncotarget 2016 Journal Article, cited 8 times
Website
The diagnosis of pseudoprogression (PsP) versus true tumor progression (TTP) of GBM is a challenging task in clinical practice. The purpose of this study is to identify potential genetic biomarkers associated with PsP and TTP based on clinical records, longitudinal imaging features, and genomics data. We are the first to introduce a radiogenomics approach to identify candidate genes for PsP and TTP of GBM. Specifically, a novel longitudinal sparse regression model was developed to construct the relationship between gene expression and imaging features. The imaging features were extracted from tumors along the longitudinal MRI and provided diagnostic information on PsP and TTP. The 33 candidate genes were selected based on their association with the imaging features, reflecting their relation to the development of PsP and TTP. We then conducted biological relevance analysis of the 33 candidate genes to identify the potential biomarkers, i.e., Interferon regulatory factor 9 (IRF9) and X-ray repair cross-complementing gene 1 (XRCC1), which are involved in cancer suppression and prevention, respectively. IRF9 and XRCC1 were further independently validated in the TCGA data. Our results provide the first substantial evidence that IRF9 and XRCC1 can serve as potential biomarkers for the development of PsP and TTP.

A Voxel-Based Radiographic Analysis Reveals the Biological Character of Proneural-Mesenchymal Transition in Glioblastoma

  • Qi, T.
  • Meng, X.
  • Wang, Z.
  • Wang, X.
  • Sun, N.
  • Ming, J.
  • Ren, L.
  • Jiang, C.
  • Cai, J.
Front Oncol 2021 Journal Article, cited 0 times
Website
Introduction: The proneural and mesenchymal subtypes are the most distinctly demarcated categories in the classification scheme, and there is often a shift from the proneural to the mesenchymal subtype during the progression of glioblastoma (GBM). The molecular characteristics are determined by specific genomic methods; however, the application of radiography in clinical practice remains to be further studied. Here, we studied the topographic features of GBM in the proneural subtype, and further demonstrated the survival characteristics and proneural-mesenchymal transition (PMT) progression of samples by combining them with imaging variables. Methods: Data were acquired from The Cancer Imaging Archive (TCIA, http://cancerimagingarchive.net). Radiography images, clinical variables and transcriptome subtypes from 223 samples were used in this study. The distribution of the proneural and mesenchymal subtypes on GBM topography was revealed based on overlay and voxel-based lesion-symptom mapping (VLSM) analysis. In addition, we carried out a comparison of survival and PMT progression for samples in and outside the VLSM-determined area. Results: The overlay of all GBM samples and the separate maps of the proneural and mesenchymal subtypes revealed a correlation between the two subtypes. By VLSM analysis, the proneural subtype was confirmed to be related to the left inferior temporal medulla, and no significant voxel was found for the mesenchymal subtype. The subsequent comparison between samples in and outside the VLSM-determined area showed differences in overall survival (OS) time, tumor purity, epithelial-mesenchymal transition (EMT) score and clinical variables. Conclusions: PMT progression was determined by a radiographic approach. GBM samples in the VLSM-determined area tended to harbor the signature of the proneural subtype. This study provides a valuable VLSM-determined area related to the predilection site, prognosis and PMT progression via the association between GBM topography and molecular characteristics.

Predicting glioma IDH mutation using multi-parametric MRI and fractal analysis

  • Qi, Brandon
  • Qi, Jinyi
  • Gimi, Barjor S.
  • Krol, Andrzej
2024 Conference Paper, cited 0 times
Website
This study aims to investigate the effectiveness of applying fractal analysis to pre-operative MRI images for prediction of glioma IDH mutation status. IDH mutation has been shown to provide prognostic and therapeutic benefits to patients, so predicting it before surgery can provide useful information for planning the proper treatment. This study utilized the UCSF-PDGM dataset from The Cancer Imaging Archive. We used the modified box-counting method to compute the fractal dimension (FD) of segmented tumor regions in pre- and post-contrast T1-weighted MRI. The results showed that the FD provided clear differentiation between tumor grades, with higher FD correlating with higher tumor grade. Additionally, FD demonstrated clear separation between IDH-wildtype and IDH-mutated tumors. Enhanced differentiation based on FD was observed with post-contrast T1-weighted images. Significant p-values from the Wilcoxon rank-sum test validated the potential of using fractal analysis. The AUC of the ROC for IDH mutation prediction reached 0.88 for both pre- and post-contrast T1-weighted images. In conclusion, this study shows fractal analysis is a promising technique for glioma IDH mutation prediction. Future work will include studies using more advanced MRI imaging contrasts as well as combinations of multi-parametric images.
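
Box counting estimates the fractal dimension as the slope of log N(s) versus log(1/s), where N(s) is the number of boxes of side s that touch the tumor mask. A minimal NumPy sketch for a 2D binary mask (assumes a non-empty mask; the authors' modified variant may differ):

    import numpy as np

    def box_counting_fd(mask):
        mask = np.asarray(mask, dtype=bool)
        n = 2 ** int(np.floor(np.log2(min(mask.shape))))
        mask = mask[:n, :n]                  # crop to a power-of-two grid
        sizes, counts = [], []
        s = n // 2
        while s >= 1:
            # Count boxes of side s containing at least one tumor pixel.
            blocks = mask.reshape(n // s, s, n // s, s)
            counts.append(np.any(blocks, axis=(1, 3)).sum())
            sizes.append(s)
            s //= 2
        # FD = slope of log N(s) against log (1/s).
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope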

A Reversible and Imperceptible Watermarking Approach for Ensuring the Integrity and Authenticity of Brain MR Images

  • Qasim, Asaad Flayyih
2019 Thesis, cited 0 times
Website
The digital medical workflow has many circumstances in which the image data can be manipulated both within the secured Hospital Information Systems (HIS) and outside, as images are viewed, extracted and exchanged. This potentially raises ethical and legal concerns regarding the modification of image details that are crucial in medical examinations. Digital watermarking is recognised as a robust technique for enhancing trust within medical imaging by detecting alterations applied to medical images. Despite its efficiency, digital watermarking has not been widely used in medical imaging. Existing watermarking approaches often lack validation of their appropriateness for medical domains. In particular, several research gaps have been identified: (i) essential requirements for the watermarking of medical images are not well defined; (ii) no standard approach can be found in the literature to evaluate the imperceptibility of watermarked images; and (iii) no study has been conducted before to test digital watermarking in a medical imaging workflow. This research aims to investigate digital watermarking by designing, analysing and applying it to medical images to confirm that manipulations can be detected and tracked. In addressing these gaps, a number of original contributions are presented. A new reversible and imperceptible watermarking approach is presented to detect manipulations of brain Magnetic Resonance (MR) images based on the Difference Expansion (DE) technique. Experimental results show that the proposed method, whilst fully reversible, can also produce a watermarked image with low degradation for reasonable and controllable embedding capacity. This is fulfilled by encoding the data into smooth regions (blocks that have the least differences between their pixel values) inside the Region of Interest (ROI) part of medical images, and also through the elimination of the large location map (locations of the pixels used for encoding the data) required at extraction to retrieve the encoded data. This compares favourably to outcomes reported under current state-of-the-art techniques in terms of the visual image quality of watermarked images. This was also evaluated through a novel visual assessment based on relative Visual Grading Analysis (relative VGA) to define a perceptual threshold at which modifications become noticeable to radiographers. The proposed approach is then integrated into medical systems to verify its validity and applicability in a real application scenario of medical imaging where medical images are generated, exchanged and archived. This enhanced security measure, therefore, enables the detection of image manipulations by an imperceptible and reversible watermarking approach that may establish increased trust in the digital medical imaging workflow.
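
For readers unfamiliar with Difference Expansion, the classic pixel-pair scheme (Tian's method, on which DE approaches build) is easy to state in code; this sketch omits the overflow checks, smooth-block selection and ROI/location-map logic described above:

    def de_embed_pair(x, y, bit):
        # Invertible integer transform: average l and difference h.
        l = (int(x) + int(y)) // 2
        h = int(x) - int(y)
        h2 = 2 * h + bit                  # expand difference, append payload bit
        return l + (h2 + 1) // 2, l - h2 // 2

    def de_extract_pair(x2, y2):
        l = (int(x2) + int(y2)) // 2
        h2 = int(x2) - int(y2)
        bit, h = h2 & 1, h2 >> 1          # recover bit, restore difference
        return l + (h + 1) // 2, l - h // 2, bit

    # Round trip: embedding a 1 in the pair (100, 98) gives (102, 97);
    # extraction returns the original pair and the hidden bit.
    assert de_extract_pair(*de_embed_pair(100, 98, 1)) == (100, 98, 1)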

HI-Net: Hyperdense Inception 3D UNet for Brain Tumor Segmentation

  • Qamar, Saqib
  • Ahmad, Parvez
  • Shen, Linlin
2021 Book Section, cited 0 times
The brain tumor segmentation task aims to classify tissue into the whole tumor (WT), tumor core (TC) and enhancing tumor (ET) classes using multimodal MRI images. Quantitative analysis of brain tumors is critical for clinical decision making. While manual segmentation is tedious, time-consuming, and subjective, this task is at the same time very challenging for automatic segmentation methods. Thanks to their powerful learning ability, convolutional neural networks (CNNs), mainly fully convolutional networks, have shown promise for brain tumor segmentation. This paper further boosts the performance of brain tumor segmentation by proposing hyperdense inception 3D UNet (HI-Net), which captures multi-scale information by stacking factorizations of 3D weighted convolutional layers in the residual inception block. We use hyperdense connections among factorized convolutional layers to extract more contextual information, with the help of feature reusability. We use a Dice loss function to cope with class imbalances. We validate the proposed architecture on the multi-modal brain tumor segmentation challenge (BRATS) 2020 testing dataset. Preliminary results on the BRATS 2020 testing set show that our proposed approach achieves Dice (DSC) scores of 0.79457, 0.87494, and 0.83712 for ET, WT, and TC, respectively.
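
The class-imbalance remedy mentioned above, a soft Dice loss, is compact enough to quote. A PyTorch sketch assuming logits of shape (N, C, D, H, W) and one-hot targets of the same shape (the exact loss formulation in the paper may differ):

    import torch

    def soft_dice_loss(logits, target, eps=1e-6):
        probs = torch.softmax(logits, dim=1)
        dims = (0, 2, 3, 4)                      # sum over batch and spatial axes
        intersection = (probs * target).sum(dims)
        cardinality = probs.sum(dims) + target.sum(dims)
        dice = (2.0 * intersection + eps) / (cardinality + eps)
        return 1.0 - dice.mean()                 # average over classes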

An anatomic transcriptional atlas of human glioblastoma

  • Puchalski, Ralph B
  • Shah, Nameeta
  • Miller, Jeremy
  • Dalley, Rachel
  • Nomura, Steve R
  • Yoon, Jae-Guen
  • Smith, Kimberly A
  • Lankerovich, Michael
  • Bertagnolli, Darren
  • Bickley, Kris
  • Boe, Andrew F
  • Brouner, Krissy
  • Butler, Stephanie
  • Caldejon, Shiella
  • Chapin, Mike
  • Datta, Suvro
  • Dee, Nick
  • Desta, Tsega
  • Dolbeare, Tim
  • Dotson, Nadezhda
  • Ebbert, Amanda
  • Feng, David
  • Feng, Xu
  • Fisher, Michael
  • Gee, Garrett
  • Goldy, Jeff
  • Gourley, Lindsey
  • Gregor, Benjamin W
  • Gu, Guangyu
  • Hejazinia, Nika
  • Hohmann, John
  • Hothi, Parvinder
  • Howard, Robert
  • Joines, Kevin
  • Kriedberg, Ali
  • Kuan, Leonard
  • Lau, Chris
  • Lee, Felix
  • Lee, Hwahyung
  • Lemon, Tracy
  • Long, Fuhui
  • Mastan, Naveed
  • Mott, Erika
  • Murthy, Chantal
  • Ngo, Kiet
  • Olson, Eric
  • Reding, Melissa
  • Riley, Zack
  • Rosen, David
  • Sandman, David
  • Shapovalova, Nadiya
  • Slaughterbeck, Clifford R
  • Sodt, Andrew
  • Stockdale, Graham
  • Szafer, Aaron
  • Wakeman, Wayne
  • Wohnoutka, Paul E
  • White, Steven J
  • Marsh, Don
  • Rostomily, Robert C
  • Ng, Lydia
  • Dang, Chinh
  • Jones, Allan
  • Keogh, Bart
  • Gittleman, Haley R
  • Barnholtz-Sloan, Jill S
  • Cimino, Patrick J
  • Uppin, Megha S
  • Keene, C Dirk
  • Farrokhi, Farrokh R
  • Lathia, Justin D
  • Berens, Michael E
  • Iavarone, Antonio
  • Bernard, Amy
  • Lein, Ed
  • Phillips, John W
  • Rostad, Steven W
  • Cobbs, Charles
  • Hawrylycz, Michael J
  • Foltz, Greg D
Science 2018 Journal Article, cited 6 times
Website
Glioblastoma is an aggressive brain tumor that carries a poor prognosis. The tumor's molecular and cellular landscapes are complex, and their relationships to histologic features routinely used for diagnosis are unclear. We present the Ivy Glioblastoma Atlas, an anatomically based transcriptional atlas of human glioblastoma that aligns individual histologic features with genomic alterations and gene expression patterns, thus assigning molecular information to the most important morphologic hallmarks of the tumor. The atlas and its clinical and genomic database are freely accessible online data resources that will serve as a valuable platform for future investigations of glioblastoma pathogenesis, diagnosis, and treatment.

Automated segmentation of five different body tissues on computed tomography using deep learning

  • Pu, L.
  • Gezer, N. S.
  • Ashraf, S. F.
  • Ocak, I.
  • Dresser, D. E.
  • Dhupar, R.
Med Phys 2022 Journal Article, cited 0 times
Website
PURPOSE: To develop and validate a computer tool for automatic and simultaneous segmentation of five body tissues depicted on computed tomography (CT) scans: visceral adipose tissue (VAT), subcutaneous adipose tissue (SAT), intermuscular adipose tissue (IMAT), skeletal muscle (SM), and bone. METHODS: A cohort of 100 CT scans acquired from different subjects was collected from The Cancer Imaging Archive: 50 whole-body positron emission tomography-CTs, 25 chest, and 25 abdominal. Five different body tissues (i.e., VAT, SAT, IMAT, SM, and bone) were manually annotated. A training-while-annotating strategy was used to improve annotation efficiency. The 10-fold cross-validation method was used to develop and validate the performance of several convolutional neural networks (CNNs), including UNet, Recurrent Residual UNet (R2Unet), and UNet++. A grid-based three-dimensional patch sampling operation was used to train the CNN models. The CNN models were also trained and tested separately for each body tissue to see if they could achieve better performance than segmenting the tissues jointly. The paired-sample t-test was used to statistically assess the performance differences among the involved CNN models. RESULTS: When segmenting the five body tissues simultaneously, the Dice coefficients ranged from 0.826 to 0.840 for VAT, from 0.901 to 0.908 for SAT, from 0.574 to 0.611 for IMAT, from 0.874 to 0.889 for SM, and from 0.870 to 0.884 for bone, which were significantly higher than the Dice coefficients when segmenting the body tissues separately (p < 0.05), namely, from 0.744 to 0.819 for VAT, from 0.856 to 0.896 for SAT, from 0.433 to 0.590 for IMAT, from 0.838 to 0.871 for SM, and from 0.803 to 0.870 for bone. CONCLUSION: There were no significant differences among the CNN models in segmenting body tissues, but jointly segmenting the body tissues achieved better performance than segmenting them separately.
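
Grid-based 3D patch sampling of the kind used for training can be sketched as a sliding window with overlap; the patch size and stride below are illustrative assumptions, not the paper's values:

    import numpy as np

    def grid_patches(volume, patch=64, stride=48):
        # Window start positions per axis; the last window is snapped to the
        # volume edge so every voxel is covered (assumes dims >= patch).
        starts = [sorted(set(list(range(0, d - patch + 1, stride)) + [d - patch]))
                  for d in volume.shape]
        for z in starts[0]:
            for y in starts[1]:
                for x in starts[2]:
                    yield (z, y, x), volume[z:z+patch, y:y+patch, x:x+patch]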

Machine Learning Algorithm Accuracy Using Single- versus Multi-Institutional Image Data in the Classification of Prostate MRI Lesions

  • Provenzano, Destie
  • Melnyk, Oleksiy
  • Imtiaz, Danish
  • McSweeney, Benjamin
  • Nemirovsky, Daniel
  • Wynne, Michael
  • Whalen, Michael
  • Rao, Yuan James
  • Loew, Murray
  • Haji-Momenian, Shawn
Applied Sciences 2023 Journal Article, cited 0 times
Featured Application: The purpose of this study was to determine the efficacy of highly accurate ML classification algorithms trained on prostate image data from one institution and tested on image data from another institution. Abstract: (1) Background: Recent studies report high accuracies when using machine learning (ML) algorithms to classify prostate cancer lesions on publicly available datasets. However, it is unknown if these trained models generalize well to data from different institutions. (2) Methods: This was a retrospective study using multi-parametric Magnetic Resonance Imaging (mpMRI) data from our institution (63 mpMRI lesions) and the ProstateX-2 challenge, a publicly available annotated image set (112 mpMRI lesions). Residual Neural Network (ResNet) algorithms were trained to classify lesions as high-risk (hrPCA) or low-risk/benign. Models were trained on (a) ProstateX-2 data, (b) local institutional data, and (c) combined ProstateX-2 and local data. The models were then tested on (a) ProstateX-2, (b) local and (c) combined ProstateX-2 and local data. (3) Results: Models trained on either local or ProstateX-2 image data had high Areas Under the ROC Curve (AUCs) (0.82–0.98) in the classification of hrPCA when tested on their own respective populations. AUCs decreased significantly (0.23–0.50, p < 0.01) when models were tested on image data from the other institution. Models trained on image data from both institutions re-achieved high AUCs (0.83–0.99). (4) Conclusions: Accurate prostate cancer classification models trained on single-institutional image data performed poorly when tested on outside-institutional image data. Heterogeneous multi-institutional training image data will likely be required to achieve broadly applicable mpMRI models.

A few-shot U-Net deep learning model for lung cancer lesion segmentation via PET/CT imaging

  • Protonotarios, N. E.
  • Katsamenis, I.
  • Sykiotis, S.
  • Dikaios, N.
  • Kastis, G. A.
  • Chatziioannou, S. N.
  • Metaxas, M.
  • Doulamis, N.
  • Doulamis, A.
Biomed Phys Eng Express 2022 Journal Article, cited 0 times
Website
Over the past few years, positron emission tomography/computed tomography (PET/CT) imaging for computer-aided diagnosis has received increasing attention. Supervised deep learning architectures are usually employed for the detection of abnormalities, with anatomical localization, especially in the case of CT scans. However, the main limitations of the supervised learning paradigm include (i) large amounts of data required for model training, and (ii) the assumption of fixed network weights upon training completion, implying that the performance of the model cannot be further improved after training. In order to overcome these limitations, we apply a few-shot learning (FSL) scheme. Contrary to traditional deep learning practices, in FSL the model is provided with less data during training. The model then utilizes end-user feedback after training to constantly improve its performance. We integrate FSL in a U-Net architecture for lung cancer lesion segmentation on PET/CT scans, allowing for dynamic fine-tuning of the model weights and resulting in an online supervised learning scheme. Constant online readjustment of the model weights according to the users' feedback increases the detection and classification accuracy, especially in cases where low detection performance is encountered. Our proposed method is validated on the Lung-PET-CT-DX TCIA database. PET/CT scans from 87 patients were included in the dataset and were acquired 60 minutes after intravenous (18)F-FDG injection. Experimental results indicate the superiority of our approach compared to other state-of-the-art methods.

Unpaired Synthetic Image Generation in Radiology Using GANs

  • Prokopenko, Denis
  • Stadelmann, Joël Valentin
  • Schulz, Heinrich
  • Renisch, Steffen
  • Dylov, Dmitry V.
2019 Journal Article, cited 1 times
Website
In this work, we investigate approaches to generating synthetic Computed Tomography (CT) images from real Magnetic Resonance Imaging (MRI) data. Generating radiological scans has grown in popularity in recent years due to its promise to enable single-modality radiotherapy planning in clinical oncology, where the co-registration of the radiological modalities is cumbersome. We rely on Generative Adversarial Network (GAN) models with cycle consistency, which permit unpaired image-to-image translation between the modalities. We also introduce a perceptual loss function term and a coordinate convolutional layer to further enhance the quality of the translated images. Unsharp masking and the Super-Resolution GAN (SRGAN) were considered to improve the quality of the synthetic images. The proposed architectures were trained on unpaired MRI-CT data and then evaluated on a paired brain dataset. The resulting CT scans were generated with mean absolute error (MAE), peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) scores of 60.83 HU, 17.21 dB, and 0.8, respectively. DualGAN with the perceptual loss function term and coordinate convolutional layer proved to perform best. The MRI-CT translation approach holds potential to eliminate the need for patients to undergo both examinations and to be clinically accepted as a new tool for radiotherapy planning.
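
The coordinate convolutional layer mentioned above appends normalized position channels so the network can condition on location. A minimal PyTorch sketch (2D case; the layer placement and sizes are illustrative, not the authors' architecture):

    import torch
    import torch.nn as nn

    class CoordConv2d(nn.Module):
        def __init__(self, in_ch, out_ch, **kwargs):
            super().__init__()
            self.conv = nn.Conv2d(in_ch + 2, out_ch, **kwargs)  # +2 coord channels

        def forward(self, x):
            n, _, h, w = x.shape
            ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(n, 1, h, w)
            xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(n, 1, h, w)
            return self.conv(torch.cat([x, ys, xs], dim=1))

    # Drop-in replacement for a standard convolution inside the generator.
    layer = CoordConv2d(64, 64, kernel_size=3, padding=1)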

Radiomics-based low and high-grade DCE-MRI breast tumor classification with an array of SVM classifiers

  • Priyadharshini, B.
  • Mythili, A.
  • Anandh, K. R.
2024 Conference Paper, cited 0 times
Website
Breast cancer is an extremely prevalent cancer globally and a prominent cause of cancer-related fatalities. The grade of breast cancer is a prognostic marker representing its aggressive potential. Morphologically, tumors that are well differentiated, have a highly noticeable basal membrane, and show moderate proliferation are considered low-grade (Grade I & II). Tumors with a massive nucleus, irregular shape and size, prominent nucleoli, inadequate cytoplasm, and high intensity are high-grade (Grade III & IV). Dynamic Contrast-Enhanced MRI (DCE-MRI) has been extensively used to assess tumors and tumor grades, with an emphasis on heterogeneity and integrated inspections. Neoadjuvant chemotherapy (NAC) for breast cancer is traditionally administered to patients with locally advanced disease and is advantageous for surgical downstaging. Generally, the histological grade and proliferation index decrease after neoadjuvant chemotherapy and are connected to the therapeutic response. Radiomics is a novel approach for discovering tumor pathophysiology-related image information and is possibly a pre-operative predictor of breast cancer pathological grade. Due to the heterogeneous nature of the tumor, histological grading remains challenging for the radiologist. This work extracts radiomics-based features from the QIN BREAST and QIN BREAST-02 datasets (N=47) of the publicly available TCIA database. The extracted features are used in the classification of low- and high-grade tumors by an array of Support Vector Machine (SVM) algorithms: Quadratic SVM, Linear SVM, Cubic SVM, and Medium Gaussian SVM. Results show that the Linear SVM achieves a test accuracy of 81.2%, an AUC of 0.75, a sensitivity of 0.85, and an F-score of 0.89, outperforming the other SVM models. Hence, radiomics-based grade differentiation using DCE-MRI in patients with breast cancer could help to determine the potential for recovery with the right treatment.
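
The "array of SVM classifiers" maps naturally onto scikit-learn kernels; a hedged sketch in which the kernel/degree correspondences are assumptions about the naming above:

    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # One entry per SVM flavour named above (kernel/degree choices assumed).
    svms = {
        "Quadratic SVM": SVC(kernel="poly", degree=2),
        "Linear SVM": SVC(kernel="linear"),
        "Cubic SVM": SVC(kernel="poly", degree=3),
        "Medium Gaussian SVM": SVC(kernel="rbf", gamma="scale"),
    }
    # X: radiomic features, y: low/high grade labels (not shown here).
    # for name, clf in svms.items():
    #     acc = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=5).mean()
    #     print(name, acc)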

Performance Analysis of Low and High-Grade Breast Tumors Using DCE MR Images and LASSO Feature Selection

  • Priyadharshini, B.
  • Mythili, A.
  • Anandh, K. R.
2023 Conference Paper, cited 0 times
Website
Breast cancer is a complex genetic disease with diverse morphological and biological characteristics. Generally, the grade of a breast tumor is a prognostic factor and a representation of its potential aggressiveness. Presently, Dynamic Contrast-Enhanced MRI (DCE-MRI) has gained a predominant role in assessing tumor grades and vascular physiology. However, due to tumor heterogeneity, tumor-grade classification is still a daunting challenge for radiologists. Therefore, to unburden the tumor grading process, a study was carried out with 638 patients taken from the Duke-Breast-Cancer-MRI database (431 low-grade & 207 high-grade). Clinicopathological characteristics such as ER receptors, PR receptors, HER2, Pathological Complete Response (PCR or non-PCR), menopausal status, and bilateral status showed high significance, with p < 0.00001, < 0.00001, 0.0023, < 0.00001, 0.0262, and 0.0045, respectively. The LASSO (Least Absolute Shrinkage and Selection Operator) feature selection model selected 8 optimal features out of the 529-feature set (from Duke-Breast-Cancer-MRI). The selected features are used in the classification of high-grade and low-grade tumors by a collection of classifiers, including Linear Support Vector Machines (L-SVM), Logistic Regression (LR), Linear Discriminant Analysis (LDA), Gaussian Naïve Bayes (GNB), k-Nearest Neighbors (KNN), and Random Forest (RF). The L-SVM and LR showed the best performance metrics among all classifiers. Hence, the acquired classification results indicate that histological grade prediction using radiomics would aid clinical management and prognosis.
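
The LASSO selection step reads naturally as scikit-learn code. A hedged sketch with a synthetic stand-in for the (638 x 529) radiomic feature matrix; the cross-validation setup is an assumption, not the paper's configuration:

    import numpy as np
    from sklearn.linear_model import LassoCV
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(638, 529))          # radiomic features (placeholder)
    y = rng.integers(0, 2, size=638)         # 0 = low grade, 1 = high grade

    Xs = StandardScaler().fit_transform(X)   # LASSO is scale-sensitive
    lasso = LassoCV(cv=5, random_state=0).fit(Xs, y)
    selected = np.flatnonzero(lasso.coef_)   # features with non-zero weights survive
    print(f"{selected.size} features kept out of {X.shape[1]}")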

Reduced Chest Computed Tomography Scan Length for Patients Positive for Coronavirus Disease 2019: Dose Reduction and Impact on Diagnostic Utility

  • Principi, S.
  • O'Connor, S.
  • Frank, L.
  • Schmidt, T. G.
2022 Journal Article, cited 0 times
Website
METHODS: This study used the Personalized Rapid Estimation of Dose in CT (PREDICT) tool to estimate patient-specific organ doses from CT image data. PREDICT is a research tool that combines a linear Boltzmann transport equation solver for radiation dose map generation with deep learning algorithms for organ contouring. Computed tomography images from 74 subjects in the Medical Imaging Data Resource Center-RSNA International COVID-19 Open Radiology Database data set (chest CT of adult patients positive for COVID-19), which included expert annotations including "infectious opacities," were analyzed. First, the full z-scan length of the CT image data set was evaluated. Next, the z-scan length was reduced from the left hemidiaphragm to the top of the aortic arch. Generic dose reduction based on dose length product (DLP) and patient-specific organ dose reductions were calculated. The percentage of infectious opacities excluded from the reduced z-scan length was used to quantify the effect on diagnostic utility. RESULTS: Generic dose reduction, based on DLP, was 69%. The organ dose reduction ranged from approximately 18% (breasts) to approximately 64% (bone surface and bone marrow). On average, 12.4% of the infectious opacities per patient were not included in the reduced z-coverage, of which 5.1% were above the top of the arch and 7.5% below the left hemidiaphragm. CONCLUSIONS: Limiting the z-scan length of chest CTs reduced radiation dose without significantly compromising diagnostic utility in COVID-19 patients. PREDICT demonstrated that patient-specific organ dose reductions varied from the generic dose reduction based on DLP.

Automated detection and segmentation of non-small cell lung cancer computed tomography images

  • Primakov, Sergey P.
  • Ibrahim, Abdalla
  • van Timmeren, Janita E.
  • Wu, Guangyao
  • Keek, Simon A.
  • Beuque, Manon
  • Granzier, Renée W. Y.
  • Lavrova, Elizaveta
  • Scrivener, Madeleine
  • Sanduleanu, Sebastian
  • Kayan, Esma
  • Halilaj, Iva
  • Lenaers, Anouk
  • Wu, Jianlin
  • Monshouwer, René
  • Geets, Xavier
  • Gietema, Hester A.
  • Hendriks, Lizza E. L.
  • Morin, Olivier
  • Jochems, Arthur
  • Woodruff, Henry C.
  • Lambin, Philippe
Nature Communications 2022 Journal Article, cited 3 times
Website
Detection and segmentation of abnormalities on medical images is highly important for patient management including diagnosis, radiotherapy, response evaluation, as well as for quantitative image research. We present a fully automated pipeline for the detection and volumetric segmentation of non-small cell lung cancer (NSCLC) developed and validated on 1328 thoracic CT scans from 8 institutions. Along with quantitative performance detailed by image slice thickness, tumor size, image interpretation difficulty, and tumor location, we report an in-silico prospective clinical trial, where we show that the proposed method is faster and more reproducible compared to the experts. Moreover, we demonstrate that on average, radiologists & radiation oncologists preferred automatic segmentations in 56% of the cases. Additionally, we evaluate the prognostic power of the automatic contours by applying RECIST criteria and measuring the tumor volumes. Segmentations by our method stratified patients into low and high survival groups with higher significance compared to those methods based on manual contours.

Disorder in Pixel-Level Edge Directions on T1WI Is Associated with the Degree of Radiation Necrosis in Primary and Metastatic Brain Tumors: Preliminary Findings

  • Prasanna, P
  • Rogers, L
  • Lam, TC
  • Cohen, M
  • Siddalingappa, A
  • Wolansky, L
  • Pinho, M
  • Gupta, A
  • Hatanpaa, KJ
  • Madabhushi, A
American Journal of Neuroradiology 2019 Journal Article, cited 0 times
Website

Radiomic features from the peritumoral brain parenchyma on treatment-naïve multi-parametric MR imaging predict long versus short-term survival in glioblastoma multiforme: Preliminary findings

  • Prasanna, Prateek
  • Patel, Jay
  • Partovi, Sasan
  • Madabhushi, Anant
  • Tiwari, Pallavi
European Radiology 2016 Journal Article, cited 45 times
Website

Optimization of Deep CNN Techniques to Classify Breast Cancer and Predict Relapse

  • Prasad, Venkata vara
  • Venkataramana, Lokeswari Y.
  • S. Keerthana
  • Subha, R.
2023 Journal Article, cited 0 times
Breast cancer is a fatal disease with high morbidity and mortality rates. Finding the right diagnosis is one of the most crucial steps in breast cancer treatment. Doctors can use machine learning (ML) and deep learning techniques to aid diagnosis. This work attempts to devise a methodology for classifying breast cancer into its molecular subtypes and predicting relapse. The objective is to compare the performance of a deep CNN, a tuned CNN and a hypercomplex-valued CNN, and to infer the results, thus automating the classification process. The traditional diagnostic workflow is tedious and time-consuming, employing multiple methods including MRI, CT scanning, aspiration, blood tests and image testing. The proposed approach uses image processing techniques to detect irregular breast tissue in MRI. Survivors of breast cancer are still at risk of relapse after remission, and once the disease relapses, the survival rate is much lower. A thorough analysis of data can potentially identify risk factors and reduce the risk of relapse in the first place. An SVM (Support Vector Machine) module with GridSearchCV for hyperparameter tuning is used to identify patterns in patients who experience a relapse, so that these patterns can be used to predict relapse before it occurs. The traditional deep learning CNN model achieved an accuracy of 27%, the tuned CNN model achieved an accuracy of 92%, and the hypercomplex-valued CNN achieved an accuracy of 98%. The SVM model achieved an accuracy of 89%, and after tuning its hyperparameters with GridSearchCV it achieved an accuracy of 98%.
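
The hyperparameter tuning step maps directly onto scikit-learn's GridSearchCV; a hedged sketch in which the parameter grid and scoring choice are illustrative, not the authors' settings:

    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    pipe = make_pipeline(StandardScaler(), SVC())
    grid = GridSearchCV(
        pipe,
        param_grid={"svc__C": [0.1, 1, 10, 100],
                    "svc__kernel": ["linear", "rbf"],
                    "svc__gamma": ["scale", 0.01, 0.001]},
        cv=5, scoring="accuracy")
    # grid.fit(X_train, y_train)      # X_train/y_train: relapse features/labels
    # print(grid.best_params_, grid.best_score_)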

Stratification by Tumor Grade Groups in a Holistic Evaluation of Machine Learning for Brain Tumor Segmentation

  • Prabhudesai, S.
  • Wang, N. C.
  • Ahluwalia, V.
  • Huan, X.
  • Bapuraj, J. R.
  • Banovic, N.
  • Rao, A.
Front Neurosci 2021 Journal Article, cited 0 times
Website
Accurate and consistent segmentation plays an important role in the diagnosis, treatment planning, and monitoring of both High Grade Glioma (HGG), including Glioblastoma Multiforme (GBM), and Low Grade Glioma (LGG). Accuracy of segmentation can be affected by the imaging presentation of glioma, which varies greatly between the two tumor grade groups. In recent years, researchers have used Machine Learning (ML) to segment tumors rapidly and consistently, as compared to manual segmentation. However, existing ML validation relies heavily on computing summary statistics and rarely tests the generalizability of an algorithm on clinically heterogeneous data. In this work, our goal is to investigate how to holistically evaluate the performance of ML algorithms on a brain tumor segmentation task. We address the need for rigorous evaluation of ML algorithms and present four axes of model evaluation: diagnostic performance, model confidence, robustness, and data quality. We perform a comprehensive evaluation of a glioma segmentation ML algorithm by stratifying data by specific tumor grade groups (GBM and LGG) and evaluate these algorithms on each of the four axes. The main takeaways of our work are: (1) ML algorithms need to be evaluated on out-of-distribution data to assess generalizability, reflective of tumor heterogeneity. (2) Segmentation metrics alone are insufficient to evaluate the errors made by ML algorithms and to describe their consequences. (3) Adoption of tools from other domains, such as robustness testing (adversarial attacks) and model uncertainty (prediction intervals), leads to a more comprehensive performance evaluation. Such a holistic evaluation framework could shed light on an algorithm's clinical utility and help it evolve into a more clinically valuable tool.

A Clinical System for Non-invasive Blood-Brain Barrier Opening Using a Neuronavigation-Guided Single-Element Focused Ultrasound Transducer

  • Pouliopoulos, Antonios N
  • Wu, Shih-Ying
  • Burgess, Mark T
  • Karakatsani, Maria Eleni
  • Kamimura, Hermes A S
  • Konofagou, Elisa E
Ultrasound Med Biol 2020 Journal Article, cited 3 times
Website
Focused ultrasound (FUS)-mediated blood-brain barrier (BBB) opening is currently being investigated in clinical trials. Here, we describe a portable clinical system with a therapeutic transducer suitable for humans, which eliminates the need for in-line magnetic resonance imaging (MRI) guidance. A neuronavigation-guided 0.25-MHz single-element FUS transducer was developed for non-invasive clinical BBB opening. Numerical simulations and experiments were performed to determine the characteristics of the FUS beam within a human skull. We also validated the feasibility of BBB opening obtained with this system in two non-human primates using U.S. Food and Drug Administration (FDA)-approved treatment parameters. Ultrasound propagation through a human skull fragment caused 44.4 +/- 1% pressure attenuation at a normal incidence angle, while the focal size decreased by 3.3 +/- 1.4% and 3.9 +/- 1.8% along the lateral and axial dimension, respectively. Measured lateral and axial shifts were 0.5 +/- 0.4 mm and 2.1 +/- 1.1 mm, while simulated shifts were 0.1 +/- 0.2 mm and 6.1 +/- 2.4 mm, respectively. A 1.5-MHz passive cavitation detector transcranially detected cavitation signals of Definity microbubbles flowing through a vessel-mimicking phantom. T1-weighted MRI confirmed a 153 +/- 5.5 mm(3) BBB opening in two non-human primates at a mechanical index of 0.4, using Definity microbubbles at the FDA-approved dose for imaging applications, without edema or hemorrhage. In conclusion, we developed a portable system for non-invasive BBB opening in humans, which can be achieved at clinically relevant ultrasound exposures without the need for in-line MRI guidance. The proposed FUS system may accelerate the adoption of non-invasive FUS-mediated therapies due to its fast application, low cost and portability.

Automated Systems of High-Productive Identification of Image Objects by Geometric Features

  • Poplavskyi, Oleksandr
  • Bondar, Olena
  • Pavlov, Sergiy
  • Poplavska, Anna
Applied Geometry and Engineering Graphics 2020 Journal Article, cited 0 times
The article substantiates the feasibility and practical value of using a specific simulation modeling methodology based on digital processing and the mathematical foundations of neural network technology. A brain tumor is a serious disease, and the number of people who die from brain tumors remains striking despite significant progress in treatment. This research presents in detail a developed algorithm for high-performance identification of objects (early detection and identification of tumors) in MRI images by geometric features. The algorithm, based on image pre-processing, analyzes the data array using a convolutional neural network (CNN) and recognizes pathologies in the images. The algorithm is a step towards the creation of autonomous automatic identification and decision-making systems for the diagnosis of malignant tumors and other neoplasms in the brain by geometric features.

Genomics of Brain Tumor Imaging

  • Pope, Whitney B
2015 Journal Article, cited 26 times
Website

Anatomical study and meta-analysis of the episternal ossicles

  • Pongruengkiat, W.
  • Pitaksinagorn, W.
  • Yurasakpong, L.
  • Taradolpisut, N.
  • Kruepunga, N.
  • Chaiyamoon, A.
  • Suwannakhan, A.
2024 Journal Article, cited 0 times
Website
Episternal ossicles (EO) are accessory bones located superior and posterior to the manubrium, representing an anatomical variation in the thoracic region. This study aimed to investigate the prevalence and developmental aspects of EO in global populations. The prevalence of EO in pediatric populations was assessed using the "Pediatric-CT-SEG" open-access data set obtained from The Cancer Imaging Archive, revealing a single incidence of EO among 233 subjects, occurring in a 14-year-old patient. A meta-analysis was conducted using data from 16 studies (from 14 publications) through three electronic databases (Google Scholar, PubMed, and Journal Storage) encompassing 7997 subjects. The overall EO prevalence was 2.1% (95% CI 1.1-3.0%, I(2) = 93.75%). Subgroup analyses by continent and diagnostic methods were carried out. Asia exhibited the highest prevalence of EO at 3.8% (95% CI 0.3-7.5%, I(2) = 96.83%), and X-ray yielded the highest prevalence of 0.7% (95% CI 0.5-8.9%, I(2) = 0.00%) compared with other modalities. The small-study effect was indicated by asymmetric funnel plots (Egger's z = 4.78, p < 0.01; Begg's z = 2.30, p = 0.02). Understanding the prevalence and developmental aspects of EO is crucial for clinical practitioners' awareness of this anatomical variation.

Brain Tumor Segmentation with Self-supervised Enhance Region Post-processing

  • Pnev, Sergey
  • Groza, Vladimir
  • Tuchinov, Bair
  • Amelina, Evgeniya
  • Pavlovskiy, Evgeniy
  • Tolstokulakov, Nikolay
  • Amelin, Mihail
  • Golushko, Sergey
  • Letyagin, Andrey
2022 Book Section, cited 0 times
Website
In this paper, we extend previous research on robust multi-sequence segmentation methods that consider all available information from MRI scans through the composition of T1, T1C, T2 and T2-FLAIR sequences. Based on a clinical radiology hypothesis, we present an efficient approach to combining and matching 3D methods to search for areas comprising the GD-enhancing tumor, in order to significantly improve the model's performance on the applied numerical problem of brain tumor segmentation. The method proposed in this paper also demonstrates a strong improvement on the segmentation problem. This conclusion holds with respect to the Dice and Hausdorff metrics, sensitivity and specificity, compared to an identical training/test procedure based on any single sequence, and regardless of the chosen neural network architecture. We achieved Dice scores of 0.866, 0.921 and 0.869 for ET, WT, and TC on the test set. The obtained results demonstrate significant performance improvement from combining several 3D approaches for the considered brain tumor segmentation tasks. In this work we provide a comparison of various 3D and 2D approaches, pre-processing to self-supervised clean data, post-processing optimization methods and different backbone architectures.

Multi-Class Brain Tumor Segmentation via 3d and 2d Neural Networks

  • Pnev, S.
  • Groza, V.
  • Tuchinov, B.
  • Amelina, E.
  • Pavlovskiy, E.
  • Tolstokulakov, N.
  • Amelin, M.
  • Golushko, S.
  • Letyagin, A.
2022 Conference Paper, cited 0 times
Website
Brain tumor segmentation is an important and time-consuming part of the usual clinical diagnosis process. Multi-class segmentation of different tumor types is a challenging task, due to differences in shape, size, location and scanner parameters. Many 2D and 3D convolutional neural network architectures have been proposed to address this problem, achieving significant success. It is well known that the 2D approach is generally faster and more popular for most such problems. However, the usage of 3D models allows us to simultaneously improve the quality of segmentation. Accounting for context along the sagittal plane leads to learning 3-dimensional features, which requires computationally expensive 3D operations, in turn increasing training time and decreasing inference speed. In this paper, we compare the 2D and 3D approaches on 2 datasets of MRI images: one from the BraTS 2020 competition and a private Siberian Brain Tumor dataset. In each dataset, every scan is represented by 4 sequences (T1, T1C, T2 and T2-FLAIR), annotated by two certified neuro-radiologists. The datasets differ from each other in dimension, grade set and tumor type. Numerical comparison was performed based on the Dice score index. We provide a case-by-case analysis for the samples that caused the most difficulties for the models. The results obtained in our work demonstrate that 3D methods significantly outperform 2D ones while remaining robust with regard to data source and type, allowing us to get a little closer to AI-assisted diagnosis.

A versatile method for bladder segmentation in computed tomography two-dimensional images under adverse conditions

  • Pinto, João Ribeiro
  • Tavares, João Manuel RS
Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine 2017 Journal Article, cited 1 times
Website

Short- and long-term lung cancer risk associated with noncalcified nodules observed on low-dose CT

  • Pinsky, Paul F
  • Nath, P Hrudaya
  • Gierada, David S
  • Sonavane, Sushil
  • Szabo, Eva
Cancer prevention research 2014 Journal Article, cited 10 times
Website

National lung screening trial: variability in nodule detection rates in chest CT studies

  • Pinsky, P. F.
  • Gierada, D. S.
  • Nath, P. H.
  • Kazerooni, E.
  • Amorosa, J.
Radiology 2013 Journal Article, cited 43 times
Website
PURPOSE: To characterize the variability in radiologists' interpretations of computed tomography (CT) studies in the National Lung Screening Trial (NLST) (including assessment of false-positive rates [FPRs] and sensitivity), to examine factors that contribute to variability, and to evaluate trade-offs between FPRs and sensitivity among different groups of radiologists. MATERIALS AND METHODS: The HIPAA-compliant NLST was approved by the institutional review board at each screening center; all participants provided informed consent. NLST radiologists reported overall screening results, nodule-specific findings, and recommendations for diagnostic follow-up. A noncalcified nodule of 4 mm or larger constituted a positive screening result. The FPR was defined as the rate of positive screening examinations in participants without a cancer diagnosis within 1 year. Descriptive analyses and mixed-effects models were utilized. The average odds ratio (OR) for a false-positive result across all pairs of radiologists was used as a measure of variability. RESULTS: One hundred twelve radiologists at 32 screening centers each interpreted 100 or more NLST CT studies, interpreting 72 160 of 75 126 total NLST CT studies in aggregate. The mean FPR for radiologists was 28.7% +/- 13.7 (standard deviation), with a range of 3.8%-69.0%. The model yielded an average OR of 2.49 across all pairs of radiologists and an OR of 1.83 for pairs within the same screening center. Mean FPRs were similar for academic versus nonacademic centers (27.9% and 26.7%, respectively) and for centers inside (25.0%) versus outside (28.7%) the U.S. "histoplasmosis belt." Aggregate sensitivity was 96.5% for radiologists with FPRs higher than the median (27.1%), compared with 91.9% for those with FPRs lower than the median (P = .02). CONCLUSION: There was substantial variability in radiologists' FPRs. Higher FPRs were associated with modestly higher sensitivity.

ROC curves for low-dose CT in the National Lung Screening Trial

  • Pinsky, P. F.
  • Gierada, D. S.
  • Nath, H.
  • Kazerooni, E. A.
  • Amorosa, J.
Journal of Medical Screening 2013 Journal Article, cited 4 times
Website
The National Lung Screening Trial (NLST) reported a 20% reduction in lung cancer specific mortality using low-dose chest CT (LDCT) compared with chest radiograph (CXR) screening. The high number of false positive screens with LDCT (around 25%) raises concerns. NLST radiologists reported LDCT screens as either positive or not positive, based primarily on the presence of a 4+ mm non-calcified lung nodule (NCN). They did not explicitly record a propensity score for lung cancer. However, by using maximum NCN size, or alternatively, radiologists' recommendations for diagnostic follow-up categorized hierarchically, surrogate propensity scores (PSSZ and PSFR) were created. These scores were then used to compute ROC curves, which determine possible operating points of sensitivity versus false positive rate (1-Specificity). The area under the ROC curve (AUC) was 0.934 and 0.928 for PSFR and PSSZ, respectively; the former was significantly greater than the latter. With the NLST definition of a positive screen, sensitivity and specificity of LDCT was 93.1% and 76.5%, respectively. With cutoffs based on PSFR, a specificity of 92.4% could be achieved while only lowering sensitivity to 86.9%. Radiologists using LDCT have good predictive ability; the optimal operating point for sensitivity and specificity remains to be determined.
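
The ROC construction described above, a surrogate propensity score swept over thresholds to trade sensitivity against false positive rate, is easy to illustrate; a sketch with synthetic scores standing in for PSSZ/PSFR:

    import numpy as np
    from sklearn.metrics import auc, roc_curve

    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, size=1000)                    # 1 = lung cancer
    score = np.where(y == 1, rng.normal(2, 1, 1000), rng.normal(0, 1, 1000))

    fpr, tpr, thresholds = roc_curve(y, score)
    print("AUC:", auc(fpr, tpr))

    # Alternative operating point: best sensitivity at specificity >= 92%
    # (cf. the 92.4% specificity / 86.9% sensitivity cutoff reported above).
    keep = fpr <= 0.08
    print("sensitivity:", tpr[keep].max(), "at FPR:", fpr[keep][tpr[keep].argmax()])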

Precision Medicine and Radiogenomics in Breast Cancer: New Approaches toward Diagnosis and Treatment

  • Pinker, Katja
  • Chin, Joanne
  • Melsaether, Amy N
  • Morris, Elizabeth A
  • Moy, Linda
Radiology 2018 Journal Article, cited 7 times
Website

Accuracy of emphysema quantification performed with reduced numbers of CT sections

  • Pilgram, Thomas K
  • Quirk, James D
  • Bierhals, Andrew J
  • Yusen, Roger D
  • Lefrak, Stephen S
  • Cooper, Joel D
  • Gierada, David S
American Journal of Roentgenology 2010 Journal Article, cited 8 times
Website

Texture classification of lung computed tomography images

  • Pheng, Hang See
  • Shamsuddin, Siti M
2013 Conference Proceedings, cited 2 times
Website
Algorithm development for computer-aided diagnosis (CAD) schemes is growing rapidly to assist the radiologist in medical image interpretation. Texture analysis of computed tomography (CT) scans is an important preliminary stage in computerized detection and classification systems for lung cancer. Among the different types of image feature analysis, Haralick texture with a variety of statistical measures has been widely used for image texture description. The extraction of texture feature values is essential for a CAD system, especially in the classification of normal and abnormal tissue on cross-sectional CT images. This paper compares experimental results using texture extraction and different machine learning methods for the classification of normal and abnormal tissues in lung CT images. The machine learning methods involved in this assessment are the Artificial Immune Recognition System (AIRS), Naive Bayes, Decision Tree (J48) and Backpropagation Neural Network. AIRS is found to provide high accuracy (99.2%) and sensitivity (98.0%) in the assessment. For experiments and testing purposes, publicly available datasets in the Reference Image Database to Evaluate Therapy Response (RIDER) are used as study cases.
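
Haralick-style texture description, as used in the study above, is built on gray-level co-occurrence matrices (GLCMs). A minimal sketch with scikit-image on a synthetic patch follows; the patch, the quantization to 8 gray levels, and the chosen statistics are assumptions of this illustration rather than details from the paper.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

# Illustrative 2D patch standing in for a lung CT region of interest,
# quantized to 8 gray levels as is common for Haralick-style features.
rng = np.random.default_rng(42)
patch = rng.integers(0, 8, size=(64, 64), dtype=np.uint8)

# Co-occurrence matrices at distance 1 for four directions.
glcm = graycomatrix(patch, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=8, symmetric=True, normed=True)

# Classic Haralick-style statistics, averaged over the four directions.
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, float(graycoprops(glcm, prop).mean()))
```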

Beam Geometry Generation: Reverse Beam with Fused Opacity Map

  • Phan, Danh Thanh
2023 Thesis, cited 0 times
Website
Cancer is hard to cure and radiation therapy is one of the most popular treatment modalities. Even though the benefits of radiation therapy are undeniable, it still has possible side effects. To avoid severe side effects, delivering optimal radiation doses to patients, supported by clinical evidence, is crucial. Intensity-modulated radiation therapy (IMRT) is an advanced radiation therapy technique and is discussed in this thesis. One important step when creating an IMRT treatment plan is radiation beam geometry generation, which means choosing the number of radiation beams and their directions. The primary goal of this thesis was to find good gantry angles for IMRT plans by combining computer graphics and machine learning. To aid the plan generation process, a new method called reverse beam was introduced in this work. The new solution consists of two stages: angle discovery and angle selection. In the first stage, an algorithm based on the ray casting technique is used to find all potential beam angles. In the second stage, with a predefined beam number, the K-means clustering algorithm is employed to select the gantry angles based on the clusters. The proposed method was tested against a non-small cell lung cancer dataset from The Cancer Imaging Archive. Using IMRT plans with seven equidistant fields and 45° collimator rotations generated by the Ethos therapy system from Varian Medical Systems as a baseline for comparison, the plans generated by the reverse beam method demonstrated good performance, with the capability of avoiding organs while targeting tumors.
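
The angle-selection stage lends itself to a compact illustration. The sketch below clusters hypothetical candidate gantry angles with K-means, embedding the angles on the unit circle so that values near 0° and 360° can share a cluster; the candidate list and the circular embedding are assumptions of this sketch, not details taken from the thesis.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical candidate gantry angles (degrees) from an angle-discovery stage.
candidates = np.array([5, 10, 15, 95, 100, 150, 155,
                       210, 215, 270, 300, 305, 350, 355])

# Embed angles on the unit circle so 355 deg and 5 deg can share a cluster.
theta = np.deg2rad(candidates)
points = np.column_stack([np.cos(theta), np.sin(theta)])

k = 7  # predefined number of beams
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(points)

# Map each cluster centroid back to a gantry angle in [0, 360).
beams = np.rad2deg(np.arctan2(km.cluster_centers_[:, 1],
                              km.cluster_centers_[:, 0])) % 360
print(np.sort(np.round(beams, 1)))
```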

Liver Segmentation in CT with MRI Data: Zero-Shot Domain Adaptation by Contour Extraction and Shape Priors

  • Pham, D. D.
  • Dovletov, G.
  • Pauli, J.
2020 Conference Paper, cited 0 times
Website
In this work we address the problem of domain adaptation for segmentation tasks with deep convolutional neural networks. We focus on managing the domain shift from MRI to CT volumes on the example of 3D liver segmentation. Domain adaptation between modalities is particularly of practical importance, as different hospital departments usually tend to use different imaging modalities and protocols in their clinical routine. Thus, training a model with source data from one department may not be sufficient for application in another institution. Most adaptation strategies make use of target domain samples and often additionally incorporate the corresponding ground truths from the target domain during the training process. In contrast to these approaches, we investigate the possibility of training our model solely on source domain data sets, i.e. we apply zero-shot domain adaptation. To compensate the missing target domain data, we use prior knowledge about both modalities to steer the model towards more general features during the training process. We particularly make use of fixed Sobel kernels to enhance contour information and apply anatomical priors, learned separately by a convolutional autoencoder. Although we completely discard including the target domain in the training process, our proposed approach improves a vanilla U-Net implementation drastically and yields promising segmentation results.
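
The fixed-Sobel contour enhancement used above is straightforward to sketch. Below, a hedged example using SciPy computes a gradient-magnitude contour channel from a synthetic volume and stacks it with the raw intensities as network input; the volume and the two-channel stacking are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np
from scipy import ndimage

# Illustrative 3D volume standing in for a CT or MR scan.
rng = np.random.default_rng(1)
volume = ndimage.gaussian_filter(rng.normal(size=(32, 64, 64)), sigma=2)

# Fixed (non-learned) Sobel responses along each axis; their magnitude
# highlights contours largely independently of the imaging modality.
grads = [ndimage.sobel(volume, axis=ax) for ax in range(volume.ndim)]
contours = np.sqrt(sum(g ** 2 for g in grads))

# The contour map can be stacked with raw intensities as extra input channels.
net_input = np.stack([volume, contours], axis=0)
print(net_input.shape)  # (2, 32, 64, 64)
```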

Prediction of lung cancer incidence on the low-dose computed tomography arm of the National Lung Screening Trial: A dynamic Bayesian network

  • Petousis, Panayiotis
  • Han, Simon X
  • Aberle, Denise
  • Bui, Alex AT
Artificial intelligence in medicine 2016 Journal Article, cited 13 times
Website

3D spatial priors for semi-supervised organ segmentation with deep convolutional neural networks

  • Petit, O.
  • Thome, N.
  • Soler, L.
Int J Comput Assist Radiol Surg 2022 Journal Article, cited 0 times
Website
PURPOSE: Fully convolutional neural networks (FCNs) are the most popular models for medical image segmentation. However, they do not explicitly integrate spatial organ positions, which can be crucial for proper labeling in challenging contexts. METHODS: In this work, we propose a method that combines a model representing prior probabilities of an organ position in 3D with visual FCN predictions by means of a generalized prior-driven prediction function. The prior is also used in a self-labeling process to handle low-data regimes, in order to improve the quality of the pseudo-label selection. RESULTS: Experiments carried out on CT scans from the public TCIA pancreas segmentation dataset reveal that the resulting STIPPLE model can significantly increase performances compared to the FCN baseline, especially with few training images. We also show that STIPPLE outperforms state-of-the-art semi-supervised segmentation methods by leveraging the spatial prior information. CONCLUSIONS: STIPPLE provides a segmentation method effective with few labeled examples, which is crucial in the medical domain. It offers an intuitive way to incorporate absolute position information by mimicking expert annotators.
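
The paper's generalized prior-driven prediction function is not spelled out in the abstract, but the basic idea of combining a spatial prior with FCN posteriors can be sketched with a simple product rule. Everything below (the fusion rule, the shapes, and the toy prior) is an assumption made for illustration.

```python
import numpy as np

def fuse_with_prior(fcn_probs, prior_probs, eps=1e-8):
    """Voxelwise fusion of network posteriors with a spatial prior.

    Both inputs have shape (C, D, H, W) with per-class probabilities.
    A plain product rule is used here; the paper's generalized
    prior-driven prediction function is not reproduced.
    """
    fused = fcn_probs * prior_probs
    return fused / (fused.sum(axis=0, keepdims=True) + eps)

# Toy two-class example on a tiny volume.
rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 4, 8, 8))
fcn = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)

prior = np.full_like(fcn, 0.5)   # uninformative everywhere...
prior[1, :, :4, :] = 0.9         # ...except where the organ usually lies
prior[0, :, :4, :] = 0.1

labels = fuse_with_prior(fcn, prior).argmax(axis=0)
print(labels.shape)  # (4, 8, 8)
```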

An Automated Method for Locating Phantom Nodules in Anthropomorphic Thoracic Phantom CT Studies

  • Peskin, Adele P
  • Dima, Alden A
  • Saiprasad, Ganesh
2011 Conference Paper, cited 1 times
Website

Dynamic memory to alleviate catastrophic forgetting in continual learning with medical imaging

  • Perkonigg, M.
  • Hofmanninger, J.
  • Herold, C. J.
  • Brink, J. A.
  • Pianykh, O.
  • Prosch, H.
  • Langs, G.
2021 Journal Article, cited 0 times
Website
Medical imaging is a central part of clinical diagnosis and treatment guidance. Machine learning has increasingly gained relevance because it captures features of disease and treatment response that are relevant for therapeutic decision-making. In clinical practice, the continuous progress of image acquisition technology or diagnostic procedures, the diversity of scanners, and evolving imaging protocols hamper the utility of machine learning, as prediction accuracy on new data deteriorates, or models become outdated due to these domain shifts. We propose a continual learning approach to deal with such domain shifts occurring at unknown time points. We adapt models to emerging variations in a continuous data stream while counteracting catastrophic forgetting. A dynamic memory enables rehearsal on a subset of diverse training data to mitigate forgetting while enabling models to expand to new domains. The technique balances memory by detecting pseudo-domains, representing different style clusters within the data stream. Evaluation of two different tasks, cardiac segmentation in magnetic resonance imaging and lung nodule detection in computed tomography, demonstrates a consistent advantage of the method.
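
A rehearsal memory balanced across pseudo-domains can be sketched in a few lines. The class below is a simplified illustration of the general idea, not the paper's dynamic-memory algorithm; in particular, pseudo-domain detection is faked here by an explicit domain_id.

```python
import random
from collections import defaultdict, deque

class RehearsalMemory:
    """Toy rehearsal buffer balanced across pseudo-domains.

    A simplified sketch of the general idea, not the paper's exact
    dynamic-memory algorithm: each domain keeps its own bounded FIFO,
    and rehearsal batches draw evenly from all known domains.
    """

    def __init__(self, per_domain_capacity=50):
        self.buffers = defaultdict(lambda: deque(maxlen=per_domain_capacity))

    def add(self, domain_id, sample):
        self.buffers[domain_id].append(sample)

    def rehearsal_batch(self, batch_size):
        domains = list(self.buffers)
        per_domain = max(1, batch_size // len(domains))
        batch = []
        for d in domains:
            pool = list(self.buffers[d])
            batch.extend(random.sample(pool, min(per_domain, len(pool))))
        return batch[:batch_size]

memory = RehearsalMemory()
for i in range(100):
    memory.add(domain_id=i % 3, sample=f"scan_{i}")  # three pseudo-domains
print(len(memory.rehearsal_batch(12)))  # 12
```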

Peritumoral and intratumoral radiomic features predict survival outcomes among patients diagnosed in lung cancer screening

  • Perez-Morales, J.
  • Tunali, I.
  • Stringfield, O.
  • Eschrich, S. A.
  • Balagurunathan, Y.
  • Gillies, R. J.
  • Schabath, M. B.
2020 Journal Article, cited 0 times
Website
The National Lung Screening Trial (NLST) demonstrated that screening with low-dose computed tomography (LDCT) is associated with a 20% reduction in lung cancer mortality. One potential limitation of LDCT screening is overdiagnosis of slow growing and indolent cancers. In this study, peritumoral and intratumoral radiomics was used to identify a vulnerable subset of lung cancer patients associated with poor survival outcomes. Incident lung cancer patients from the NLST were split into training and test cohorts and an external cohort of non-screen detected adenocarcinomas was used for further validation. After removing redundant and non-reproducible radiomics features, backward elimination analyses identified a single model which was subjected to Classification and Regression Tree analysis to stratify patients into three risk groups based on two radiomics features (NGTDM Busyness and Statistical Root Mean Square [RMS]). The final model was validated in the test cohort and the cohort of non-screen detected adenocarcinomas. Using a radiogenomics dataset, Statistical RMS was significantly associated with the FOXF2 gene by both correlation and two-group analyses. Our rigorous approach generated a novel radiomics model that identified a vulnerable high-risk group of early-stage patients associated with poor outcomes. These patients may require aggressive follow-up and/or adjuvant therapy to mitigate their poor outcomes.

Automated lung cancer diagnosis using three-dimensional convolutional neural networks

  • Perez, Gustavo
  • Arbelaez, Pablo
2020 Journal Article, cited 0 times
Website
Lung cancer is the deadliest cancer worldwide. It has been shown that early detection using low-dose computed tomography (LDCT) scans can reduce deaths caused by this disease. We present a general framework for the detection of lung cancer in chest LDCT images. Our method consists of a nodule detector trained on the LIDC-IDRI dataset followed by a cancer predictor trained on the Kaggle DSB 2017 dataset and evaluated on the IEEE International Symposium on Biomedical Imaging (ISBI) 2018 Lung Nodule Malignancy Prediction test set. Our candidate extraction approach effectively produces accurate candidates with a recall of 99.6%. In addition, our false positive reduction stage successfully classifies the candidates and increases precision by a factor of 2000. Our cancer predictor obtained a ROC AUC of 0.913 and was ranked 1st place at the ISBI 2018 Lung Nodule Malignancy Prediction challenge.

Examining the Effects of Slice Thickness on the Reproducibility of CT Radiomics for Patients with Colorectal Liver Metastases

  • Peoples, Jacob J.
  • Hamghalam, Mohammad
  • James, Imani
  • Wasim, Maida
  • Gangai, Natalie
  • Kang, HyunSeon Christine
  • Rong, Xiujiang John
  • Chun, Yun Shin
  • Do, Richard K. G.
  • Simpson, Amber L.
2023 Conference Paper, cited 0 times
We present an analysis of 81 patients with colorectal liver metastases from two major cancer centers prospectively enrolled in an imaging trial to assess reproducibility of radiomic features in contrast-enhanced CT. All scans were reconstructed with different slice thicknesses and levels of iterative reconstruction. Radiomic features were extracted from the liver parenchyma and largest metastasis from each reconstruction, using different levels of resampling and methods of feature aggregation. The prognostic value of reproducible features was tested using Cox proportional hazards to model overall survival in an independent, public data set of 197 hepatic resection patients with colorectal liver metastases. Our results show that larger differences in slice thickness reduced the concordance of features (p<10^-6). Extracting features with 2.5D aggregation and no axial resampling produced the most robust features, and the best test-set performance in the survival model on the independent data set (C-index = 0.65). Across all feature extraction methods, restricting the survival models to use reproducible features had no statistically significant effect on the test set performance (p=0.98). In conclusion, our results show that feature extraction settings can positively impact the robustness of radiomics features to variations in slice thickness, without negatively affecting prognostic performance.
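
Concordance between feature values from different reconstructions is commonly quantified in radiomics with Lin's concordance correlation coefficient. A minimal sketch follows; the choice of this particular metric and the synthetic thin/thick-slice readings are assumptions of the illustration, not details confirmed by the abstract.

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two readings
    of the same radiomic feature (population variances are used)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

rng = np.random.default_rng(0)
f_thin = rng.normal(size=81)                        # feature from thin slices
f_thick = f_thin + rng.normal(scale=0.3, size=81)   # same feature, thick slices
print(f"CCC = {lins_ccc(f_thin, f_thick):.3f}")
```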

A method of rapid quantification of patient-specific organ doses for CT using deep-learning-based multi-organ segmentation and GPU-accelerated Monte Carlo dose computing

  • Peng, Z.
  • Fang, X.
  • Yan, P.
  • Shan, H.
  • Liu, T.
  • Pei, X.
  • Wang, G.
  • Liu, B.
  • Kalra, M. K.
  • Xu, X. G.
Med Phys 2020 Journal Article, cited 0 times
Website
PURPOSE: One technical barrier to patient-specific computed tomography (CT) dosimetry has been the lack of computational tools for the automatic patient-specific multi-organ segmentation of CT images and rapid organ dose quantification. When previous CT images are available for the same body region of the patient, the ability to obtain patient-specific organ doses for CT - in a similar manner as radiation therapy treatment planning - will open the door to personalized and prospective CT scan protocols. This study aims to demonstrate the feasibility of combining deep-learning algorithms for automatic segmentation of multiple radiosensitive organs from CT images with the GPU-based Monte Carlo rapid organ dose calculation. METHODS: A deep convolutional neural network (CNN) based on the U-Net for organ segmentation is developed and trained to automatically delineate multiple radiosensitive organs from CT images. Two databases are used: The lung CT segmentation challenge 2017 (LCTSC) dataset that contains 60 thoracic CT scan patients, each consisting of five segmented organs, and the Pancreas-CT (PCT) dataset, which contains 43 abdominal CT scan patients each consisting of eight segmented organs. A fivefold cross-validation method is performed on both sets of data. Dice similarity coefficients (DSCs) are used to evaluate the segmentation performance against the ground truth. A GPU-based Monte Carlo dose code, ARCHER, is used to calculate patient-specific CT organ doses. The proposed method is evaluated in terms of relative dose errors (RDEs). To demonstrate the potential improvement of the new method, organ dose results are compared against those obtained for population-average patient phantoms used in an off-line dose reporting software, VirtualDose, at Massachusetts General Hospital. RESULTS: The median DSCs are found to be 0.97 (right lung), 0.96 (left lung), 0.92 (heart), 0.86 (spinal cord), 0.76 (esophagus) for the LCTSC dataset, along with 0.96 (spleen), 0.96 (liver), 0.95 (left kidney), 0.90 (stomach), 0.87 (gall bladder), 0.80 (pancreas), 0.75 (esophagus), and 0.61 (duodenum) for the PCT dataset. Comparing with organ dose results from population-averaged phantoms, the new patient-specific method achieved smaller absolute RDEs (mean +/- standard deviation) for all organs: 1.8% +/- 1.4% (vs 16.0% +/- 11.8%) for the lung, 0.8% +/- 0.7% (vs 34.0% +/- 31.1%) for the heart, 1.6% +/- 1.7% (vs 45.7% +/- 29.3%) for the esophagus, 0.6% +/- 1.2% (vs 15.8% +/- 12.7%) for the spleen, 1.2% +/- 1.0% (vs 18.1% +/- 15.7%) for the pancreas, 0.9% +/- 0.6% (vs 20.0% +/- 15.2%) for the left kidney, 1.7% +/- 3.1% (vs 19.1% +/- 9.8%) for the gallbladder, 0.3% +/- 0.3% (vs 24.2% +/- 18.7%) for the liver, and 1.6% +/- 1.7% (vs 19.3% +/- 13.6%) for the stomach. The trained automatic segmentation tool takes <5 s per patient for all 103 patients in the dataset. The Monte Carlo radiation dose calculations performed in parallel to the segmentation process using the GPU-accelerated ARCHER code take <4 s per patient to achieve <0.5% statistical uncertainty in all organ doses for all 103 patients in the database. CONCLUSION: This work shows the feasibility to perform combined automatic patient-specific multi-organ segmentation of CT images and rapid GPU-based Monte Carlo dose quantification with clinically acceptable accuracy and efficiency.
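
The Dice similarity coefficient used above to score the segmentations has a one-line definition, DSC = 2|A ∩ B| / (|A| + |B|). A small self-contained sketch on toy masks (the masks are synthetic):

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Toy organ masks on a small volume.
truth = np.zeros((16, 32, 32), dtype=bool)
truth[4:12, 8:24, 8:24] = True
pred = np.roll(truth, shift=2, axis=1)  # slightly misplaced prediction
print(f"DSC = {dice(pred, truth):.3f}")
```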

Deep multi-modality collaborative learning for distant metastases predication in PET-CT soft-tissue sarcoma studies

  • Peng, Yige
  • Bi, Lei
  • Guo, Yuyu
  • Feng, Dagan
  • Fulham, Michael
  • Kim, Jinman
2019 Conference Proceedings, cited 0 times

Memory Efficient 3D U-Net with Reversible Mobile Inverted Bottlenecks for Brain Tumor Segmentation

  • Pendse, Mihir
  • Thangarasa, Vithursan
  • Chiley, Vitaliy
  • Holmdahl, Ryan
  • Hestness, Joel
  • DeCoste, Dennis
2021 Book Section, cited 0 times
We propose combining memory saving techniques with traditional U-Net architectures to increase the complexity of the models on the Brain Tumor Segmentation (BraTS) challenge. The BraTS challenge consists of a 3D segmentation of a 240 × 240 × 155 × 4 input image into a set of tumor classes. Because of the large volume and need for 3D convolutional layers, this task is very memory intensive. To address this, prior approaches use smaller cropped images while constraining the model’s depth and width. Our 3D U-Net uses a reversible version of the mobile inverted bottleneck block defined in MobileNetV2, MnasNet and the more recent EfficientNet architectures to save activation memory during training. Using reversible layers enables the model to recompute input activations given the outputs of that layer, saving memory by eliminating the need to store activations during the forward pass. The inverted residual bottleneck block uses lightweight depthwise separable convolutions to reduce computation by decomposing convolutions into a pointwise convolution and a depthwise convolution. Further, this block inverts traditional bottleneck blocks by placing an intermediate expansion layer between the input and output linear 1 × 1 convolution, reducing the total number of channels. Given a fixed memory budget, with these memory saving techniques, we are able to train image volumes up to 3x larger, models with 25% more depth, or models with up to 2x the number of channels than a corresponding non-reversible network.
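
The inverted residual bottleneck described above can be sketched directly in PyTorch. The block below is a plain (non-reversible) 3D variant written for illustration; the reversible wrapper, channel counts, and expansion factor are not taken from the paper.

```python
import torch
import torch.nn as nn

class InvertedResidual3d(nn.Module):
    """Plain (non-reversible) 3D inverted residual bottleneck in the
    MobileNetV2 style: expand with a 1x1x1 convolution, filter with a
    depthwise 3x3x3 convolution, then project back linearly. The
    reversible wrapper described in the paper is omitted here."""

    def __init__(self, channels, expansion=4):
        super().__init__()
        hidden = channels * expansion
        self.block = nn.Sequential(
            nn.Conv3d(channels, hidden, 1, bias=False),          # expansion
            nn.BatchNorm3d(hidden), nn.ReLU6(inplace=True),
            nn.Conv3d(hidden, hidden, 3, padding=1,
                      groups=hidden, bias=False),                # depthwise
            nn.BatchNorm3d(hidden), nn.ReLU6(inplace=True),
            nn.Conv3d(hidden, channels, 1, bias=False),          # linear proj.
            nn.BatchNorm3d(channels),
        )

    def forward(self, x):
        # Residual connection is valid here: stride 1 and C_in == C_out.
        return x + self.block(x)

x = torch.randn(1, 16, 32, 32, 32)  # (batch, channels, D, H, W)
print(InvertedResidual3d(16)(x).shape)  # torch.Size([1, 16, 32, 32, 32])
```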

Auto Diagnostics of Lung Nodules Using Minimal Characteristics Extraction Technique

  • Peña, Diego M
  • Luo, Shouhua
  • Abdelgader, Abdeldime
Diagnostics 2016 Journal Article, cited 6 times
Website
Computer-aided detection (CAD) systems provide useful tools and an advantageous process to physicians aiming to detect lung nodules. This paper develops a method composed of four processes for lung nodule detection. The first step employs image acquisition and pre-processing techniques to isolate the lungs from the rest of the body. The second stage involves the segmentation process, using a 2D algorithm applied to every layer of a scan to eliminate non-informative structures inside the lungs, and a 3D blob algorithm associated with a connectivity algorithm to select possible nodule shape candidates. The combination of these algorithms efficiently eliminates the high rates of false positives. The third process extracts eight minimal representative characteristics of the possible candidates. The final step utilizes a support vector machine for classifying the possible candidates into nodules and non-nodules depending on their features. As the objective is to find nodules bigger than 4 mm, the proposed approach demonstrated quite encouraging results. Among 65 computed tomography (CT) scans, a sensitivity of 94.23% and a specificity of 84.75% were obtained. The accuracy of these two results was 89.19%, taking into consideration that 45 scans were used for testing and 20 for training. The rate of false positives was 0.2 per scan.

Deep-learning-based super-resolution for accelerating chemical exchange saturation transfer MRI

  • Pemmasani Prabakaran, R. S.
  • Park, S. W.
  • Lai, J. H. C.
  • Wang, K.
  • Xu, J.
  • Chen, Z.
  • Ilyas, A. O.
  • Liu, H.
  • Huang, J.
  • Chan, K. W. Y.
2024 Journal Article, cited 0 times
Website
Chemical exchange saturation transfer (CEST) MRI is a molecular imaging tool that provides physiological information about tissues, making it an invaluable tool for disease diagnosis and guided treatment. Its clinical application requires the acquisition of high-resolution images capable of accurately identifying subtle regional changes in vivo, while simultaneously maintaining a high level of spectral resolution. However, the acquisition of such high-resolution images is time consuming, presenting a challenge for practical implementation in clinical settings. Among several techniques that have been explored to reduce the acquisition time in MRI, deep-learning-based super-resolution (DLSR) is a promising approach to address this problem due to its adaptability to any acquisition sequence and hardware. However, its translation to CEST MRI has been hindered by the lack of the large CEST datasets required for network development. Thus, we aim to develop a DLSR method, named DLSR-CEST, to reduce the acquisition time for CEST MRI by reconstructing high-resolution images from fast low-resolution acquisitions. This is achieved by first pretraining the DLSR-CEST on human brain T1w and T2w images to initialize the weights of the network and then training the network on very small human and mouse brain CEST datasets to fine-tune the weights. Using the trained DLSR-CEST network, the reconstructed CEST source images exhibited improved spatial resolution in both peak signal-to-noise ratio and structural similarity index measure metrics at all downsampling factors (2-8). Moreover, amide CEST and relayed nuclear Overhauser effect maps extrapolated from the DLSR-CEST source images exhibited high spatial resolution and low normalized root mean square error, indicating a negligible loss in Z-spectrum information. Therefore, our DLSR-CEST demonstrated a robust reconstruction of high-resolution CEST source images from fast low-resolution acquisitions, thereby improving the spatial resolution and preserving most Z-spectrum information.

Unsupervised Multimodal Supervoxel Merging Towards Brain Tumor Segmentation

  • Pelluet, Guillaume
  • Rizkallah, Mira
  • Acosta, Oscar
  • Mateus, Diana
2022 Book Section, cited 0 times
Automated brain tumor segmentation is challenging given the tumor's variability in size, shape, and image intensity. This paper focuses on the fusion of multimodal information coming from different Magnetic Resonance (MR) imaging sequences. We argue it is important to exploit the full complementarity of the modalities to better segment and later determine the aggressiveness of tumors. However, simply concatenating the multimodal data as channels of a single image generates a high volume of redundant information. Therefore, we propose a supervoxel-based approach that regroups pixels sharing perceptually similar information across the different modalities to produce a single coherent oversegmentation. To further reduce redundant information while keeping meaningful borders, we include a variance constraint and a supervoxel merging step. Our experimental validation shows that the proposed merging strategy produces high-quality clustering results useful for brain tumor segmentation. Indeed, our method reaches an ASA score of 0.712 compared to 0.316 for the monomodal approach, indicating that the supervoxels accommodate tumor boundaries well. Our approach also improves the Global Score (GS) by 11.5%, showing that the clusters effectively group pixels that are similar in intensity and texture.
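
A single coherent oversegmentation across modalities can be illustrated with SLIC supervoxels, which group voxels that are similar in every channel at once. The sketch below uses scikit-image on a synthetic multimodal volume; the parameter values are illustrative, and the paper's variance constraint and merging step are not reproduced.

```python
import numpy as np
from skimage.segmentation import slic  # scikit-image >= 0.19

# Illustrative multimodal brain volume: four MR sequences as channels.
rng = np.random.default_rng(0)
volume = rng.normal(size=(32, 64, 64, 4)).astype(np.float32)

# One coherent supervoxel oversegmentation across all modalities at once:
# SLIC groups voxels that are similar in every channel simultaneously.
labels = slic(volume, n_segments=500, compactness=0.1, channel_axis=-1)
print(labels.shape, labels.max())  # (32, 64, 64), roughly 500 supervoxels
```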

Deep learning for fully automatic detection, segmentation, and Gleason grade estimation of prostate cancer in multiparametric magnetic resonance images

  • Pellicer-Valero, Oscar J
  • Marenco Jimenez, Jose L
  • Gonzalez-Perez, Victor
  • Casanova Ramon-Borja, Juan Luis
  • Martin Garcia, Isabel
  • Barrios Benito, Maria
  • Pelechano Gomez, Paula
  • Rubio-Briones, Jose
  • Ruperez, Maria Jose
  • Martin-Guerrero, Jose D
2022 Journal Article, cited 7 times
Website
Although the emergence of multi-parametric magnetic resonance imaging (mpMRI) has had a profound impact on the diagnosis of prostate cancers (PCa), analyzing these images remains still complex even for experts. This paper proposes a fully automatic system based on Deep Learning that performs localization, segmentation and Gleason grade group (GGG) estimation of PCa lesions from prostate mpMRIs. It uses 490 mpMRIs for training/validation and 75 for testing from two different datasets: ProstateX and Valencian Oncology Institute Foundation. In the test set, it achieves an excellent lesion-level AUC/sensitivity/specificity for the GGG[Formula: see text]2 significance criterion of 0.96/1.00/0.79 for the ProstateX dataset, and 0.95/1.00/0.80 for the IVO dataset. At a patient level, the results are 0.87/1.00/0.375 in ProstateX, and 0.91/1.00/0.762 in IVO. Furthermore, on the online ProstateX grand challenge, the model obtained an AUC of 0.85 (0.87 when trained only on the ProstateX data, tying up with the original winner of the challenge). For expert comparison, IVO radiologist's PI-RADS 4 sensitivity/specificity were 0.88/0.56 at a lesion level, and 0.85/0.58 at a patient level. The full code for the ProstateX-trained model is openly available at https://github.com/OscarPellicer/prostate_lesion_detection . We hope that this will represent a landmark for future research to use, compare and improve upon.

Robust Resolution-Enhanced Prostate Segmentation in Magnetic Resonance and Ultrasound Images through Convolutional Neural Networks

  • Pellicer-Valero, Oscar J.
  • Gonzalez-Perez, Victor
  • Ramón-Borja, Juan Luis Casanova
  • García, Isabel Martín
  • Benito, María Barrios
  • Gómez, Paula Pelechano
  • Rubio-Briones, José
  • Rupérez, María José
  • Martín-Guerrero, José D.
Applied Sciences 2021 Journal Article, cited 1 times
Website
Prostate segmentations are required for an ever-increasing number of medical applications, such as image-based lesion detection, fusion-guided biopsy and focal therapies. However, obtaining accurate segmentations is laborious, requires expertise and, even then, the inter-observer variability remains high. In this paper, a robust, accurate and generalizable model for Magnetic Resonance (MR) and three-dimensional (3D) Ultrasound (US) prostate image segmentation is proposed. It uses a densenet-resnet-based Convolutional Neural Network (CNN) combined with techniques such as deep supervision, checkpoint ensembling and Neural Resolution Enhancement. The MR prostate segmentation model was trained with five challenging and heterogeneous MR prostate datasets (and two US datasets), with segmentations from many different experts with varying segmentation criteria. The model achieves a consistently strong performance in all datasets independently (mean Dice Similarity Coefficient -DSC- above 0.91 for all datasets except for one), outperforming the inter-expert variability significantly in MR (mean DSC of 0.9099 vs. 0.8794). When evaluated on the publicly available Promise12 challenge dataset, it attains a similar performance to the best entries. In summary, the model has the potential of having a significant impact on current prostate procedures, undercutting, and even eliminating, the need of manual segmentations through improvements in terms of robustness, generalizability and output resolution.

Uncertainty-guided dual-views for semi-supervised volumetric medical image segmentation

  • Peiris, Himashi
  • Hayat, Munawar
  • Chen, Zhaolin
  • Egan, Gary
  • Harandi, Mehrtash
Nature Machine Intelligence 2023 Journal Article, cited 0 times
Deep learning has led to tremendous progress in the field of medical artificial intelligence. However, training deep-learning models usually requires large amounts of annotated data. Annotating large-scale datasets is prone to human biases and is often very laborious, especially for dense prediction tasks such as image segmentation. Inspired by semi-supervised algorithms that use both labelled and unlabelled data for training, we propose a dual-view framework based on adversarial learning for segmenting volumetric images. In doing so, we use critic networks to allow each view to learn from high-confidence predictions of the other view by measuring a notion of uncertainty. Furthermore, to jointly learn the dual views and the critics, we formulate the learning problem as a min-max problem. We analyse and contrast our proposed method against state-of-the-art baselines, both qualitatively and quantitatively, on four public datasets with multiple modalities (for example, computed tomography and magnetic resonance imaging) and demonstrate that the proposed semi-supervised method substantially outperforms the competing baselines while achieving competitive performance compared to fully supervised counterparts. Our empirical results suggest that an uncertainty-guided co-training framework can make two neural networks robust to data artefacts and have the ability to generate plausible segmentation masks that can be helpful for semi-automated segmentation processes.

Reciprocal Adversarial Learning for Brain Tumor Segmentation: A Solution to BraTS Challenge 2021 Segmentation Task

  • Peiris, Himashi
  • Chen, Zhaolin
  • Egan, Gary
  • Harandi, Mehrtash
2022 Book Section, cited 0 times
This paper proposes an adversarial learning based training approach for the brain tumor segmentation task. In this approach, the 3D segmentation network learns from dual reciprocal adversarial learning strategies. To enhance generalization across the segmentation predictions and to make the segmentation network robust, we follow the Virtual Adversarial Training approach, generating more adversarial examples by adding noise to the original patient data. By incorporating a critic that acts as a quantitative subjective referee, the segmentation network learns from the uncertainty information associated with segmentation results. We trained and evaluated the network architecture on the RSNA-ASNR-MICCAI BraTS 2021 dataset. Our performance on the online validation dataset is as follows: Dice Similarity Score of 81.38%, 90.77% and 85.39%; Hausdorff Distance (95%) of 21.83 mm, 5.37 mm, 8.56 mm for the enhancing tumor, whole tumor and tumor core, respectively. Similarly, our approach achieved a Dice Similarity Score of 84.55%, 90.46% and 85.30%, as well as Hausdorff Distance (95%) of 13.48 mm, 6.32 mm and 16.98 mm on the final test dataset. Overall, our proposed approach yielded better performance in segmentation accuracy for each tumor sub-region. Our code implementation is publicly available.

Multimodal Brain Tumor Segmentation and Survival Prediction Using Hybrid Machine Learning

  • Pei, Linmin
  • Vidyaratne, Lasitha
  • Monibor Rahman, M.
  • Shboul, Zeina A.
  • Iftekharuddin, Khan M.
2020 Book Section, cited 0 times
In this paper, we propose a UNet-VAE deep neural network architecture for brain tumor segmentation and survival prediction. The UNet-VAE architecture has shown great success in brain tumor segmentation in the multimodal brain tumor segmentation (BraTS) 2018 challenge. In this work, we utilize the UNet-VAE to extract high-dimensional features, then fuse them with hand-crafted texture features to perform survival prediction. We apply the proposed method to the BraTS 2019 validation dataset for both tumor segmentation and survival prediction. The tumor segmentation result shows a dice score coefficient (DSC) of 0.759, 0.90, and 0.806 for enhancing tumor (ET), whole tumor (WT), and tumor core (TC), respectively. For the feature fusion-based survival prediction method, we achieve 56.4% classification accuracy with a mean square error (MSE) of 101577, and 51.7% accuracy with an MSE of 70590 for training and validation, respectively. In the testing phase, the proposed method for tumor segmentation achieves an average DSC of 0.81328, 0.88616, and 0.84084 for ET, WT, and TC, respectively. Moreover, the model offers an accuracy of 0.439 with an MSE of 449009.135 for overall survival prediction in the testing phase.

Multimodal Brain Tumor Segmentation and Survival Prediction Using a 3D Self-ensemble ResUNet

  • Pei, Linmin
  • Murat, A. K.
  • Colen, Rivka
2021 Book Section, cited 0 times
In this paper, we propose a 3D self-ensemble ResUNet (srUNet) deep neural network architecture for brain tumor segmentation and a machine learning-based method for overall survival prediction of patients with gliomas. The UNet architecture has been widely used for semantic image segmentation, including medical imaging tasks such as brain tumor segmentation. In this work, we utilize the srUNet to differentiate brain tumors; the segmented tumors are then used for survival prediction. We apply the proposed method to the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2020 validation dataset for both tumor segmentation and survival prediction. The tumor segmentation result shows a dice score coefficient (DSC) of 0.7634, 0.899, and 0.816 for enhancing tumor (ET), whole tumor (WT), and tumor core (TC), respectively. For the survival prediction method, we achieve 56.4% classification accuracy with a mean square error (MSE) of 101697, and 55.2% accuracy with an MSE of 56169 for training and validation, respectively. In the testing phase, the proposed method offers DSCs of 0.786, 0.881, and 0.823 for ET, WT, and TC, respectively. It also achieves an accuracy of 0.43 for overall survival prediction.

Multimodal Brain Tumor Segmentation Using a 3D ResUNet in BraTS 2021

  • Pei, Linmin
  • Liu, Yanling
2022 Book Section, cited 0 times
In this paper, we propose a multimodal brain tumor segmentation method using a 3D ResUNet deep neural network architecture. Deep neural networks have been applied in many domains, including computer vision and natural language processing. They have also been used for semantic segmentation in medical imaging, including brain tumor segmentation. In this work, we utilize a 3D ResUNet to segment tumors in brain magnetic resonance images (MRI). Multimodal MRI is prevalent in brain tumor analysis because it provides rich tumor information. We apply the proposed method to the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2021 validation dataset for tumor segmentation. The online evaluation of brain tumor segmentation using the proposed method yields dice score coefficients (DSC) of 0.8196, 0.9195, and 0.8503 for enhancing tumor (ET), whole tumor (WT), and tumor core (TC), respectively.

A Hybrid Convolutional Neural Network Based-Method for Brain Tumor Classification Using mMRI and WSI

  • Pei, Linmin
  • Hsu, Wei-Wen
  • Chiang, Ling-An
  • Guo, Jing-Ming
  • Iftekharuddin, Khan M.
  • Colen, Rivka
2021 Book Section, cited 0 times
In this paper, we propose a hybrid deep learning-based method for brain tumor classification using whole slide images (WSIs) and multimodal magnetic resonance imaging (mMRI). It comprises two methods: a WSI-based method and an mMRI-based method. For the WSI-based method, many patches are sampled from the WSI for each category as the training dataset. However, without annotations by pathologists, not all the sampled patches are representative of the category to which their corresponding WSI belongs. Therefore, error tolerance schemes were applied when training the classification model to achieve better generalization. For the mMRI-based method, we first apply a 3D convolutional neural network (3DCNN) to the mMRI for brain tumor segmentation, which distinguishes brain tumors from healthy tissues; the segmented tumors are then used for tumor subtype classification using a 3DCNN. Lastly, an ensemble scheme over the two methods was used to reach a consensus as the final prediction. We evaluate the proposed method on the patient dataset from the Computational Precision Medicine: Radiology-Pathology Challenge (CPM: Rad-Path) on Brain Tumor Classification 2020. The performance on the validation set reached 0.886 in f1_micro, 0.801 in kappa, 0.8 in balance_acc, and 0.829 in the overall average. The experimental results show that the performance with consideration of both MRI and WSI outperforms the performance using a single type of image dataset. Accordingly, the fusion of the two image datasets provides richer diagnostic information for the system.

Semantic imaging features predict disease progression and survival in glioblastoma multiforme patients

  • Peeken, J. C.
  • Hesse, J.
  • Haller, B.
  • Kessel, K. A.
  • Nusslin, F.
  • Combs, S. E.
Strahlenther Onkol 2018 Journal Article, cited 1 times
Website
BACKGROUND: For glioblastoma (GBM), multiple prognostic factors have been identified. Semantic imaging features were shown to be predictive for survival prediction. No similar data have been generated for the prediction of progression. The aim of this study was to assess the predictive value of the semantic visually accessable REMBRANDT [repository for molecular brain neoplasia data] images (VASARI) imaging feature set for progression and survival, and the creation of joint prognostic models in combination with clinical and pathological information. METHODS: 189 patients were retrospectively analyzed. Age, Karnofsky performance status, gender, and MGMT promoter methylation and IDH mutation status were assessed. VASARI features were determined on pre- and postoperative MRIs. Predictive potential was assessed with univariate analyses and Kaplan-Meier survival curves. Following variable selection and resampling, multivariate Cox regression models were created. Predictive performance was tested on patient test sets and compared between groups. The frequency of selection for single variables and variable pairs was determined. RESULTS: For progression free survival (PFS) and overall survival (OS), univariate significant associations were shown for 9 and 10 VASARI features, respectively. Multivariate models yielded concordance indices significantly different from random for the clinical, imaging, combined, and combined+ MGMT models of 0.657, 0.636, 0.694, and 0.716 for OS, and 0.602, 0.604, 0.633, and 0.643 for PFS. "Multilocality," "deep white-matter invasion," "satellites," and "ependymal invasion" were over proportionally selected for multivariate model generation, underlining their importance. CONCLUSIONS: We demonstrated a predictive value of several qualitative imaging features for progression and survival. The performance of prognostic models was increased by combining clinical, pathological, and imaging features.
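
The multivariate modeling step above, Cox regression scored by the concordance index, can be sketched with the lifelines library. Everything below is synthetic: the clinical variables, the two VASARI-style binary features, and the survival times are stand-ins generated for illustration, not study data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 189

# Synthetic stand-ins: two clinical variables plus two binary
# VASARI-style imaging features.
df = pd.DataFrame({
    "age": rng.normal(60, 10, n),
    "kps": rng.choice([70, 80, 90, 100], n).astype(float),
    "multilocality": rng.integers(0, 2, n),
    "ependymal_invasion": rng.integers(0, 2, n),
})
risk = 0.03 * df["age"] + 0.8 * df["multilocality"]
df["os_months"] = rng.exponential(scale=np.exp(-risk) * 24.0)
df["event"] = rng.integers(0, 2, n)  # 1 = death observed, 0 = censored

cph = CoxPHFitter().fit(df, duration_col="os_months", event_col="event")
print(f"concordance index = {cph.concordance_index_:.3f}")
cph.print_summary()
```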

CT-based radiomic features predict tumor grading and have prognostic value in patients with soft tissue sarcomas treated with neoadjuvant radiation therapy

  • Peeken, J. C.
  • Bernhofer, M.
  • Spraker, M. B.
  • Pfeiffer, D.
  • Devecka, M.
  • Thamer, A.
  • Shouman, M. A.
  • Ott, A.
  • Nusslin, F.
  • Mayr, N. A.
  • Rost, B.
  • Nyflot, M. J.
  • Combs, S. E.
Radiother Oncol 2019 Journal Article, cited 0 times
Website
PURPOSE: In soft tissue sarcoma (STS) patients systemic progression and survival remain comparably low despite low local recurrence rates. In this work, we investigated whether quantitative imaging features ("radiomics") of radiotherapy planning CT-scans carry a prognostic value for pre-therapeutic risk assessment. METHODS: CT-scans, tumor grade, and clinical information were collected from three independent retrospective cohorts of 83 (TUM), 87 (UW) and 51 (McGill) STS patients, respectively. After manual segmentation and preprocessing, 1358 radiomic features were extracted. Feature reduction and machine learning modeling for the prediction of grading, overall survival (OS), distant (DPFS) and local (LPFS) progression free survival were performed followed by external validation. RESULTS: Radiomic models were able to differentiate grade 3 from non-grade 3 STS (area under the receiver operator characteristic curve (AUC): 0.64). The Radiomic models were able to predict OS (C-index: 0.73), DPFS (C-index: 0.68) and LPFS (C-index: 0.77) in the validation cohort. A combined clinical-radiomics model showed the best prediction for OS (C-index: 0.76). The radiomic scores were significantly associated in univariate and multivariate cox regression and allowed for significant risk stratification for all three endpoints. CONCLUSION: This is the first report demonstrating a prognostic potential and tumor grading differentiation by CT-based radiomics.

Orthogonal-Nets: A Large Ensemble of 2D Neural Networks for 3D Brain Tumor Segmentation

  • Pawar, Kamlesh
  • Zhong, Shenjun
  • Goonatillake, Dilshan Sasanka
  • Egan, Gary
  • Chen, Zhaolin
2022 Book Section, cited 0 times
We propose Orthogonal-Nets, consisting of a large ensemble of 2D encoder-decoder convolutional neural networks. Orthogonal-Nets take 2D slices from the axial, sagittal, and coronal views of the 3D brain volume and predict the probability for the tumor segmentation region. The predicted probability distributions from all three views are averaged to generate a 3D probability distribution map that is subsequently used to predict the tumor regions for the 3D images. In this work, we propose a two-stage Orthogonal-Nets. Stage I predicts the brain tumor labels for the whole 3D image using the axial, sagittal, and coronal views. The labels from the first stage are then used to crop only the tumor region. Multiple Orthogonal-Nets were then trained in stage II, which takes only the cropped region as input. The two-stage strategy substantially reduces the computational burden on the stage-II networks, and thus many Orthogonal-Nets can be used in stage II. We used one Orthogonal-Net for stage I and 28 Orthogonal-Nets for stage II. The mean dice scores on the testing dataset were 0.8660, 0.8776, and 0.9118 for enhancing tumor, core tumor, and whole tumor, respectively.
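
The core of the Orthogonal-Nets idea, averaging per-voxel class probabilities over axial, sagittal, and coronal slice-wise predictions, can be sketched in NumPy. The predict_2d function below is a hypothetical stand-in for a trained 2D network; the shapes and the number of classes are illustrative.

```python
import numpy as np

def predict_2d(slice_2d, num_classes=4):
    """Hypothetical stand-in for a trained 2D encoder-decoder; returns
    per-class probabilities with shape (num_classes, H, W)."""
    logits = np.stack([np.zeros_like(slice_2d)] * num_classes)
    e = np.exp(logits)
    return e / e.sum(axis=0, keepdims=True)

def orthogonal_predict(volume, num_classes=4):
    """Average per-voxel class probabilities over slice-wise predictions
    taken along each of the three orthogonal axes of the volume."""
    probs = np.zeros((num_classes,) + volume.shape)
    for axis in range(3):                     # one "view" per volume axis
        for i in range(volume.shape[axis]):
            p = predict_2d(np.take(volume, i, axis=axis), num_classes)
            idx = [slice(None)] * 4
            idx[axis + 1] = i                 # place the slice back
            probs[tuple(idx)] += p
    return (probs / 3.0).argmax(axis=0)       # consensus by mean probability

seg = orthogonal_predict(np.zeros((8, 16, 16)))
print(seg.shape)  # (8, 16, 16)
```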

An Ensemble of 2D Convolutional Neural Network for 3D Brain Tumor Segmentation

  • Pawar, Kamlesh
  • Chen, Zhaolin
  • Jon Shah, N.
  • Egan, Gary F.
2020 Book Section, cited 0 times
We propose an ensemble of 2D convolutional neural networks to predict the 3D brain tumor segmentation mask using the multi-contrast brain images. A pretrained Resnet50 and Nasnet-mobile architecture were used as encoders, each appended with a decoder network to create an encoder-decoder neural network architecture. The encoder-decoder network was trained end to end using T1, T1 contrast-enhanced, T2 and T2-Flair images to classify each pixel in the 2D input image as either no tumor, necrosis/non-enhancing tumor (NCR/NET), enhancing tumor (ET) or edema (ED). Separate Resnet50 and Nasnet-mobile architectures were trained for axial, sagittal and coronal slices. Predictions from 5 inferences, including Resnet at all three orientations and Nasnet-mobile at two orientations, were averaged to predict the final probabilities and subsequently the tumor mask. The mean dice scores calculated from 166 were 0.8865, 0.7372 and 0.7743 for whole tumor, tumor core and enhancing tumor respectively.

Controlling camera movement in VR colonography

  • Paulo, Soraia F.
  • Medeiros, Daniel
  • Lopes, Daniel
  • Jorge, Joaquim
Virtual Reality 2022 Journal Article, cited 1 times
Website
Objectives: To investigate the differentiation of premalignant from benign colorectal polyps detected by CT colonography using deep learning. Methods: In this retrospective analysis of an average risk colorectal cancer screening sample, polyps of all size categories and morphologies were manually segmented on supine and prone CT colonography images and classified as premalignant (adenoma) or benign (hyperplastic polyp or regular mucosa) according to histopathology. Two deep learning models SEG and noSEG were trained on 3D CT colonography image subvolumes to predict polyp class, and model SEG was additionally trained with polyp segmentation masks. Diagnostic performance was validated in an independent external multicentre test sample. Predictions were analysed with the visualisation technique Grad-CAM++. Results: The training set consisted of 107 colorectal polyps in 63 patients (mean age: 63 ± 8 years, 40 men) comprising 169 polyp segmentations. The external test set included 77 polyps in 59 patients comprising 118 polyp segmentations. Model SEG achieved a ROC-AUC of 0.83 and 80% sensitivity at 69% specificity for differentiating premalignant from benign polyps. Model noSEG yielded a ROC-AUC of 0.75, 80% sensitivity at 44% specificity, and an average Grad-CAM++ heatmap score of ≥ 0.25 in 90% of polyp tissue. Conclusions: In this proof-of-concept study, deep learning enabled the differentiation of premalignant from benign colorectal polyps detected with CT colonography and the visualisation of image regions important for predictions. The approach did not require polyp segmentation and thus has the potential to facilitate the identification of high-risk polyps as an automated second reader. Key points: • Non-invasive deep learning image analysis may differentiate premalignant from benign colorectal polyps found in CT colonography scans. • Deep learning autonomously learned to focus on polyp tissue for predictions without the need for prior polyp segmentation by experts. • Deep learning potentially improves the diagnostic accuracy of CT colonography in colorectal cancer screening by allowing for a more precise selection of patients who would benefit from endoscopic polypectomy, especially for patients with polyps of 6-9 mm size.

3D Reconstruction from CT Images Using Free Software Tools

  • Paulo, Soraia Figueiredo
  • Lopes, Daniel Simões
  • Jorge, Joaquim
2021 Book Section, cited 0 times
Website

Using Anatomical Priors for Deep 3D One-shot Segmentation

  • Pauli, Josef
  • Dovletov, Gurbandurdy
  • Pham, Duc
2021 Conference Proceedings, cited 0 times
Website
With the success of deep convolutional neural networks for semantic segmentation in the medical imaging domain, there is a high demand for labeled training data, which is often unavailable or expensive to acquire. Training with little data usually leads to overfitting, which prevents the model from generalizing to unseen problems. However, in the medical imaging setting, image perspectives and anatomical topology do not vary as much as in natural images, as the patient is often instructed to hold a specific posture to follow a standardized protocol. In this work we therefore investigate the one-shot segmentation capabilities of a standard 3D U-Net architecture in such a setting and propose incorporating anatomical priors to increase the segmentation performance. We evaluate our proposed method on the example of liver segmentation in abdominal CT volumes.

Lung cancer incidence and mortality in National Lung Screening Trial participants who underwent low-dose CT prevalence screening: a retrospective cohort analysis of a randomised, multicentre, diagnostic screening trial

  • Patz Jr, Edward F
  • Greco, Erin
  • Gatsonis, Constantine
  • Pinsky, Paul
  • Kramer, Barnett S
  • Aberle, Denise R
The Lancet Oncology 2016 Journal Article, cited 67 times
Website

Limited parameter denoising for low-dose X-ray computed tomography using deep reinforcement learning

  • Patwari, M.
  • Gutjahr, R.
  • Raupach, R.
  • Maier, A.
Med Phys 2022 Journal Article, cited 0 times
Website
BACKGROUND: The use of deep learning has successfully solved several problems in the field of medical imaging. Deep learning has been applied to the CT denoising problem successfully. However, the use of deep learning requires large amounts of data to train deep convolutional networks (CNNs). Moreover, due to the large parameter count, such deep CNNs may cause unexpected results. PURPOSE: In this study, we introduce a novel CT denoising framework, which has interpretable behavior and provides useful results with limited data. METHODS: We employ bilateral filtering in both the projection and volume domains to remove noise. To account for nonstationary noise, we tune the sigma parameters of the volume for every projection view and every volume pixel. The tuning is carried out by two deep CNNs. Due to the impracticality of labeling, the two deep CNNs are trained via a Deep-Q reinforcement learning task. The reward for the task is generated by using a custom reward function represented by a neural network. Our experiments were carried out on abdominal scans from the Mayo Clinic dataset in The Cancer Imaging Archive (TCIA) and the American Association of Physicists in Medicine (AAPM) Low Dose CT Grand Challenge. RESULTS: Our denoising framework has excellent denoising performance, increasing the peak signal to noise ratio (PSNR) from 28.53 to 28.93 and increasing the structural similarity index (SSIM) from 0.8952 to 0.9204. We outperform several state-of-the-art deep CNNs, which have several orders of magnitude more parameters (p-value [PSNR] = 0.000, p-value [SSIM] = 0.000). Our method does not introduce any blurring, which is introduced by mean squared error (MSE) loss-based methods, or any deep learning artifacts, which are introduced by Wasserstein generative adversarial network (WGAN)-based models. Our ablation studies show that parameter tuning and using our reward network results in the best possible results. CONCLUSIONS: We present a novel CT denoising framework, which focuses on interpretability to deliver good denoising performance, especially with limited data. Our method outperforms state-of-the-art deep neural networks. Future work will be focused on accelerating our method and generalizing it to different geometries and body parts.
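
The underlying denoising operator here is a bilateral filter whose sigma parameters the agents learn to tune. A hedged sketch with fixed, hand-picked sigmas on a synthetic slice follows; scikit-image's denoise_bilateral stands in for the paper's projection- and volume-domain filters, and the data is simulated.

```python
import numpy as np
from skimage.restoration import denoise_bilateral

# Illustrative noisy 2D slice with intensities scaled to [0, 1].
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))
noisy = np.clip(clean + rng.normal(scale=0.05, size=clean.shape), 0.0, 1.0)

# The two sigmas play the role of the per-view / per-voxel parameters the
# reinforcement-learning agents tune in the paper; here they are fixed by hand.
denoised = denoise_bilateral(noisy, sigma_color=0.1, sigma_spatial=3)
print(f"residual std before: {np.std(noisy - clean):.4f}, "
      f"after: {np.std(denoised - clean):.4f}")
```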

Breast DCE-MRI segmentation for lesion detection by multi-level thresholding using student psychological based optimization

  • Patra, Dipak Kumar
  • Si, Tapas
  • Mondal, Sukumar
  • Mukherjee, Prakash
Biomedical Signal Processing and Control 2021 Journal Article, cited 0 times
Website
In recent years, the prevalence of breast cancer in women has risen dramatically. Therefore, segmentation of breast Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) is a necessary task to assist the radiologist in accurate diagnosis and detection of breast cancer in breast DCE-MRI. For image segmentation, thresholding is a simple and effective method. In breast DCE-MRI analysis for lesion detection and segmentation, radiologists agree that optimization via multi-level thresholding is important to differentiate breast lesions in dynamic DCE-MRI. In this paper, multi-level thresholding using the Student Psychology-Based Optimizer (SPBO) is proposed to segment breast DCE-MR images for lesion detection. First, MR images are denoised using an anisotropic diffusion filter, and Intensity Inhomogeneities (IIHs) are corrected in the preprocessing step. The preprocessed MR images are segmented using the SPBO algorithm. Finally, the lesions are extracted from the segmented images and localized in the original MR images. The proposed method is applied to 300 sagittal T2-weighted DCE-MRI slices of 50 patients, histologically proven, and analyzed. The proposed method is compared with the Particle Swarm Optimizer (PSO), Dragonfly Optimization (DA), Slime Mould Optimization (SMA), Multi-Verse Optimization (MVO), Grasshopper Optimization Algorithm (GOA), Hidden Markov Random Field (HMRF), Improved Markov Random Field (IMRF), and Conventional Markov Random Field (CMRF) methods. A high accuracy of 99.44%, sensitivity of 96.84%, and Dice Similarity Coefficient (DSC) of 93.41% are achieved using the proposed automatic segmentation method. Both quantitative and qualitative results demonstrate that the proposed method performs better than the eight compared methods.
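
Multi-level thresholding itself is easy to demonstrate; only the search strategy differs (SPBO in the paper versus exhaustive inter-class-variance maximization below). A sketch with scikit-image's threshold_multiotsu on a synthetic three-population slice:

```python
import numpy as np
from skimage.filters import threshold_multiotsu

# Illustrative slice with three intensity populations (background,
# parenchyma, lesion); all values are synthetic.
rng = np.random.default_rng(0)
slice_mr = np.concatenate([rng.normal(0.2, 0.05, 2000),
                           rng.normal(0.5, 0.05, 2000),
                           rng.normal(0.8, 0.05, 2000)]).reshape(60, 100)

# threshold_multiotsu picks the thresholds by maximizing inter-class
# variance; the paper instead searches them with the SPBO metaheuristic.
thresholds = threshold_multiotsu(slice_mr, classes=3)
regions = np.digitize(slice_mr, bins=thresholds)
print(thresholds, np.bincount(regions.ravel()))
```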

An Approach Toward Automatic Classification of Tumor Histopathology of Non–Small Cell Lung Cancer Based on Radiomic Features

  • Patil, Ravindra
  • Mahadevaiah, Geetha
  • Dekker, Andre
Tomography: a journal for imaging research 2016 Journal Article, cited 2 times
Website

GaNDLF: the generally nuanced deep learning framework for scalable end-to-end clinical workflows

  • Pati, Sarthak
  • Thakur, Siddhesh P.
  • Hamamcı, İbrahim Ethem
  • Baid, Ujjwal
  • Baheti, Bhakti
  • Bhalerao, Megh
  • Güley, Orhun
  • Mouchtaris, Sofia
  • Lang, David
  • Thermos, Spyridon
  • Gotkowski, Karol
  • González, Camila
  • Grenko, Caleb
  • Getka, Alexander
  • Edwards, Brandon
  • Sheller, Micah
  • Wu, Junwen
  • Karkada, Deepthi
  • Panchumarthy, Ravi
  • Ahluwalia, Vinayak
  • Zou, Chunrui
  • Bashyam, Vishnu
  • Li, Yuemeng
  • Haghighi, Babak
  • Chitalia, Rhea
  • Abousamra, Shahira
  • Kurc, Tahsin M.
  • Gastounioti, Aimilia
  • Er, Sezgin
  • Bergman, Mark
  • Saltz, Joel H.
  • Fan, Yong
  • Shah, Prashant
  • Mukhopadhyay, Anirban
  • Tsaftaris, Sotirios A.
  • Menze, Bjoern
  • Davatzikos, Christos
  • Kontos, Despina
  • Karargyris, Alexandros
  • Umeton, Renato
  • Mattson, Peter
  • Bakas, Spyridon
2023 Journal Article, cited 5 times
Website
Deep Learning (DL) has the potential to optimize machine learning in both the scientific and clinical communities. However, greater expertise is required to develop DL algorithms, and the variability of implementations hinders their reproducibility, translation, and deployment. Here we present the community-driven Generally Nuanced Deep Learning Framework (GaNDLF), with the goal of lowering these barriers. GaNDLF makes the mechanism of DL development, training, and inference more stable, reproducible, interpretable, and scalable, without requiring an extensive technical background. GaNDLF aims to provide an end-to-end solution for all DL-related tasks in computational precision medicine. We demonstrate the ability of GaNDLF to analyze both radiology and histology images, with built-in support for k-fold cross-validation, data augmentation, multiple modalities and output classes. Our quantitative performance evaluation on numerous use cases, anatomies, and computational tasks supports GaNDLF as a robust application framework for deployment in clinical workflows.

Estimating Glioblastoma Biophysical Growth Parameters Using Deep Learning Regression

  • Pati, S.
  • Sharma, V.
  • Aslam, H.
  • Thakur, S. P.
  • Akbari, H.
  • Mang, A.
  • Subramanian, S.
  • Biros, G.
  • Davatzikos, C.
  • Bakas, S.
Brainlesion 2021 Journal Article, cited 0 times
Website
Glioblastoma (GBM) is arguably the most aggressive, infiltrative, and heterogeneous type of adult brain tumor. Biophysical modeling of GBM growth has contributed to more informed clinical decision-making. However, deploying a biophysical model to a clinical environment is challenging since underlying computations are quite expensive and can take several hours using existing technologies. Here we present a scheme to accelerate the computation. In particular, we present a deep learning (DL)-based logistic regression model to estimate the GBM's biophysical growth in seconds. This growth is defined by three tumor-specific parameters: 1) a diffusion coefficient in white matter (Dw), which prescribes the rate of infiltration of tumor cells in white matter, 2) a mass-effect parameter (Mp), which defines the average tumor expansion, and 3) the estimated time (T) in number of days that the tumor has been growing. Preoperative structural multi-parametric MRI (mpMRI) scans from n = 135 subjects of the TCGA-GBM imaging collection are used to quantitatively evaluate our approach. We consider the mpMRI intensities within the region defined by the abnormal FLAIR signal envelope for training one DL model for each of the tumor-specific growth parameters. We train and validate the DL-based predictions against parameters derived from biophysical inversion models. The average Pearson correlation coefficients between our DL-based estimations and the biophysical parameters are 0.85 for Dw, 0.90 for Mp, and 0.94 for T, respectively. This study unlocks the power of tumor-specific parameters from biophysical tumor growth estimation. It paves the way towards their clinical translation and opens the door for leveraging advanced radiomic descriptors in future studies by means of a significantly faster parameter reconstruction compared to biophysical growth modeling approaches.

An efficient low-dose CT reconstruction technique using partial derivatives based guided image filter

  • Pathak, Yadunath
  • Arya, KV
  • Tiwari, Shailendra
Multimedia Tools and Applications 2018 Journal Article, cited 0 times
Website

T2-FLAIR Mismatch, an Imaging Biomarker for IDH and 1p/19q Status in Lower-grade Gliomas: A TCGA/TCIA Project

  • Patel, S. H.
  • Poisson, L. M.
  • Brat, D. J.
  • Zhou, Y.
  • Cooper, L.
  • Snuderl, M.
  • Thomas, C.
  • Franceschi, A. M.
  • Griffith, B.
  • Flanders, A. E.
  • Golfinos, J. G.
  • Chi, A. S.
  • Jain, R.
Clin Cancer Res 2017 Journal Article, cited 320 times
Website
Purpose: Lower-grade gliomas (WHO grade II/III) have been classified into clinically relevant molecular subtypes based on IDH and 1p/19q mutation status. The purpose was to investigate whether T2/FLAIR MRI features could distinguish between lower-grade glioma molecular subtypes. Experimental Design: MRI scans from the TCGA/TCIA lower grade glioma database (n = 125) were evaluated by two independent neuroradiologists to assess (i) presence/absence of homogenous signal on T2WI; (ii) presence/absence of "T2-FLAIR mismatch" sign; (iii) sharp or indistinct lesion margins; and (iv) presence/absence of peritumoral edema. Metrics with moderate-substantial agreement underwent consensus review and were correlated with glioma molecular subtypes. Somatic mutation, DNA copy number, DNA methylation, gene expression, and protein array data from the TCGA lower-grade glioma database were analyzed for molecular-radiographic associations. A separate institutional cohort (n = 82) was analyzed to validate the T2-FLAIR mismatch sign. Results: Among TCGA/TCIA cases, interreader agreement was calculated for lesion homogeneity [kappa = 0.234 (0.111-0.358)], T2-FLAIR mismatch sign [kappa = 0.728 (0.538-0.918)], lesion margins [kappa = 0.292 (0.135-0.449)], and peritumoral edema [kappa = 0.173 (0.096-0.250)]. All 15 cases that were positive for the T2-FLAIR mismatch sign were IDH-mutant, 1p/19q non-codeleted tumors (P < 0.0001; PPV = 100%, NPV = 54%). Analysis of the validation cohort demonstrated substantial interreader agreement for the T2-FLAIR mismatch sign [kappa = 0.747 (0.536-0.958)]; all 10 cases positive for the T2-FLAIR mismatch sign were IDH-mutant, 1p/19q non-codeleted tumors (P < 0.00001; PPV = 100%, NPV = 76%). Conclusions: Among lower-grade gliomas, T2-FLAIR mismatch sign represents a highly specific imaging biomarker for the IDH-mutant, 1p/19q non-codeleted molecular subtype. Clin Cancer Res; 23(20); 6078-85. (c)2017 AACR.
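For readers who want to reproduce the kind of statistics quoted above, here is a small hedged sketch: PPV/NPV from a 2x2 sign-versus-subtype table and inter-reader agreement via Cohen's kappa (the counts and reader calls are illustrative, not the study data):

```python
# Sketch of the two statistics reported above: positive/negative
# predictive value of the T2-FLAIR mismatch sign, and inter-reader
# agreement (Cohen's kappa). Counts here are illustrative only.
from sklearn.metrics import cohen_kappa_score

tp, fp, fn, tn = 15, 0, 40, 47  # hypothetical sign-vs-subtype counts
ppv = tp / (tp + fp)            # P(IDH-mut, non-codeleted | sign positive)
npv = tn / (tn + fn)            # P(other subtype | sign negative)
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")

reader1 = [1, 0, 1, 1, 0, 0, 1, 0]  # per-case sign calls, reader 1
reader2 = [1, 0, 1, 0, 0, 0, 1, 0]  # reader 2
print("kappa =", cohen_kappa_score(reader1, reader2))
```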

MRI and CT Identify Isocitrate Dehydrogenase (IDH)-Mutant Lower-Grade Gliomas Misclassified to 1p/19q Codeletion Status with Fluorescence in Situ Hybridization

  • Patel, Sohil H
  • Batchala, Prem P
  • Mrachek, E Kelly S
  • Lopes, Maria-Beatriz S
  • Schiff, David
  • Fadul, Camilo E
  • Patrie, James T
  • Jain, Rajan
  • Druzgal, T Jason
  • Williams, Eli S
Radiology 2020 Journal Article, cited 0 times
Website
Background Fluorescence in situ hybridization (FISH) is a standard method for 1p/19q codeletion testing in diffuse gliomas but occasionally renders erroneous results. Purpose To determine whether MRI/CT analysis identifies isocitrate dehydrogenase (IDH)-mutant gliomas misassigned to 1p/19q codeletion status with FISH. Materials and Methods Data in patients with IDH-mutant lower-grade gliomas (World Health Organization grade II/III) and 1p/19q codeletion status determined with FISH that were accrued from January 1, 2010 to October 1, 2017, were included in this retrospective study. Two neuroradiologist readers analyzed the pre-resection MRI findings (and CT findings, when available) to predict 1p/19q status (codeleted or noncodeleted) and provided a prediction confidence score (1 = low, 2 = moderate, 3 = high). Percentage concordance between the consensus neuroradiologist 1p/19q prediction and the FISH result was calculated. For gliomas where (a) consensus neuroradiologist 1p/19q prediction differed from the FISH result and (b) consensus neuroradiologist confidence score was 2 or greater, further 1p/19q testing was performed with chromosomal microarray analysis (CMA). Nine control specimens were randomly chosen from the remaining study sample for CMA. Percentage concordance between FISH and CMA among the CMA-tested cases was calculated. Results A total of 112 patients (median age, 38 years [interquartile range, 31–51 years]; 57 men) were evaluated (112 gliomas). Percentage concordance between the consensus neuroradiologist 1p/19q prediction and the FISH result was 84.8% (95 of 112; 95% confidence interval: 76.8%, 90.9%). Among the 17 neuroradiologist-FISH discordances, there were nine gliomas associated with a consensus neuroradiologist confidence score of 2 or greater. In six (66.7%) of these nine gliomas, the 1p/19q codeletion status as determined with CMA disagreed with the FISH result and agreed with the consensus neuroradiologist prediction. For the nine control specimens, there was 100% agreement between CMA and FISH for 1p/19q determination. Conclusion MRI and CT analysis can identify diffuse gliomas misassigned to 1p/19q codeletion status with fluorescence in situ hybridization (FISH). Further molecular testing should be considered for gliomas with discordant neuroimaging and FISH results.

Swift Pre Rendering Volumetric Visualization of Magnetic Resonance Cardiac Images based on Isosurface Technique

  • Patel, Nikhilkumar P
  • Parmar, Shankar K
  • Jain, Kavindra R
Procedia Technology 2014 Journal Article, cited 0 times
Website
Magnetic resonance imaging (MRI) is a medical imaging procedure that uses strong magnetic fields and radio waves to produce cross-sectional images of organs and internal structures of the body. Three-dimensional (3D) models of CT are available and have been adopted by almost all radiologists for pre-diagnosis, but for MRI there is still scope for researchers to improve 3D modeling. Two-dimensional images are taken from different viewpoints and reconstructed in 3D, a process known as rendering. In this paper, we propose a rendering approach for medical (cardiac MRI) images based on isovalues and the number of marching cubes. Designers can place colors and textures over the 3D model to make it look realistic, which makes it easier to observe and visualize structures. The algorithm is based on triangulation methods with various isovalues and different combinations of marching-cube pairs. Applying the marching-cubes concept generates volumetric data (voxels), which are then arranged and projected to visualize a 3D scene. Approximate processing times for various isovalues are also compared in this paper.
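As a rough illustration of the isosurface technique this abstract builds on, the following sketch extracts a surface from a toy volume with scikit-image's marching cubes; the volume and isovalue are placeholders, not the authors' cardiac data:

```python
# Minimal sketch of isosurface extraction with the marching-cubes
# technique the abstract describes, using scikit-image on a toy volume.
import numpy as np
from skimage import measure

# Toy volume: distance from the center of a 64^3 grid, so each
# isovalue corresponds to a spherical shell.
z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
volume = np.sqrt(x**2 + y**2 + z**2)

iso_value = 0.5  # isosurface level; varies per dataset
verts, faces, normals, values = measure.marching_cubes(volume, level=iso_value)
print(f"{len(verts)} vertices, {len(faces)} triangles at iso={iso_value}")
```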

Decorin Expression Is Associated With Diffusion MR Phenotypes in Glioblastoma

  • Patel, Kunal S.
  • Raymond, Catalina
  • Yao, Jingwen
  • Tsung, Joseph
  • Liau, Linda M.
  • Everson, Richard
  • Cloughesy, Timothy F.
  • Ellingson, Benjamin
Neurosurgery 2019 Journal Article, cited 0 times
INTRODUCTION: Significant evidence from multiple phase II trials has suggested diffusion-weighted imaging estimates of apparent diffusion coefficient (ADC) are a predictive imaging biomarker for survival benefit for recurrent glioblastoma when treated with anti-VEGF therapies, including bevacizumab, cediranib, and cabozantinib. Despite this observation, the underlying mechanism linking anti-VEGF therapeutic efficacy with diffusion MR characteristics remains unknown. We hypothesized that a high expression of decorin, a small proteoglycan that has been associated with sequestration of pro-angiogenic signaling as well as reduction in the viscosity of the extracellular environment, may be associated with elevated ADC. METHODS: A differential gene expression analysis was carried out in human glioblastoma samples from patients in whom preoperative diffusion imaging was obtained. ADC histogram analysis was carried out to calculate preoperative ADCL values, the average ADC in the lower distribution, using a double Gaussian mixed model. The Cancer Imaging Archive (TCIA) and The Cancer Genome Atlas (TCGA) databases were queried to identify diffusion imaging and levels of decorin protein expression. Patients with recurrent glioblastoma undergoing resection prospectively had targeted biopsies collected based on the ADC analysis. These samples were stained for decorin and quantified using whole-slide image analysis software. RESULTS: Differential gene expression analysis between tumors associated with high and low preoperative ADCL showed that patients with high ADCL had increased decorin gene expression. Patients from the TCGA database with elevated ADCL had a significantly higher level of decorin gene expression (P = .01). These patients had a survival advantage on log-rank analysis (P = .002). Patients with preoperative diffusion imaging had multiple targeted intraoperative biopsies stained for decorin. Patients with high ADCL had increased decorin expression on immunohistochemistry (P = .002). CONCLUSION: Increased ADCL on diffusion MR imaging is associated with high decorin expression as well as increased survival in glioblastoma. Decorin may play an important role in the imaging features on diffusion MR and in anti-VEGF treatment efficacy. Decorin expression may serve as a future therapeutic target in patients with favorable diffusion MR characteristics.

Segmentation, Survival Prediction, and Uncertainty Estimation of Gliomas from Multimodal 3D MRI Using Selective Kernel Networks

  • Patel, Jay
  • Chang, Ken
  • Hoebel, Katharina
  • Gidwani, Mishka
  • Arun, Nishanth
  • Gupta, Sharut
  • Aggarwal, Mehak
  • Singh, Praveer
  • Rosen, Bruce R.
  • Gerstner, Elizabeth R.
  • Kalpathy-Cramer, Jayashree
2021 Book Section, cited 0 times
Segmentation of gliomas into distinct sub-regions can help guide clinicians in tasks such as surgical planning, prognosis, and treatment response assessment. Manual delineation is time-consuming and prone to inter-rater variability. In this work, we propose a deep learning based automatic segmentation method that takes T1-pre, T1-post, T2, and FLAIR MRI as input and outputs a segmentation map of the sub-regions of interest (enhancing tumor (ET), whole tumor (WT), and tumor core (TC)). Our U-Net based architecture incorporates a modified selective kernel block to enable the network to adjust its receptive field via an attention mechanism, enabling more robust segmentation of gliomas of all appearances, shapes, and scales. Using this approach on the official BraTS 2020 testing set, we obtain Dice scores of .822, .889, and .834, and Hausdorff distances (95%) of 11.588, 4.812, and 21.984 for ET, WT, and TC, respectively. For prediction of overall survival, we extract deep features from the bottleneck layer of this network and train a Cox Proportional Hazards model, obtaining .495 accuracy. For uncertainty prediction, we achieve AUCs of .850, .914, and .854 for ET, WT, and TC, respectively, which earned us third place for this task.
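A minimal sketch of the two reported segmentation metrics, Dice overlap and a Hausdorff-type distance, on toy masks (the real BraTS evaluation uses the 95th-percentile Hausdorff distance; the symmetric directed variant below over foreground point sets is a simplified stand-in):

```python
# Sketch of the segmentation metrics reported above: Dice overlap and
# a Hausdorff distance between foreground point sets (a simplification
# of the boundary-based, 95th-percentile metric used by BraTS).
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

pred = np.zeros((64, 64), bool); pred[10:40, 10:40] = True  # toy prediction
gt = np.zeros((64, 64), bool);   gt[12:42, 12:42] = True    # toy ground truth

print("Dice:", dice(pred, gt))
hd = max(directed_hausdorff(np.argwhere(pred), np.argwhere(gt))[0],
         directed_hausdorff(np.argwhere(gt), np.argwhere(pred))[0])
print("Hausdorff:", hd)
```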

Predicting Mutation Status and Recurrence Free Survival in Non-Small Cell Lung Cancer: A Hierarchical ct Radiomics – Deep Learning Approach

  • Patel, Divak
  • Cowan, Connor
  • Prasanna, Prateek
2021 Conference Paper, cited 0 times
Website
Non-Small Cell Lung Cancer (NSCLC) is the world's leading cause of cancer deaths. A significant portion of these patients develop recurrence despite curative resection. Prognostic modeling of recurrence free survival in NSCLC has been attempted using computed tomography (CT) imaging features. Radiomic features have also been used to identify mutation subtypes in various cancers, however, the implications of such features on eventual patient outcome are unclear. Studies have shown that genetic mutation subtypes in lung cancers (KRAS and EGFR) have imaging correlates that can be detected using radiomic features from CT scans. In this study, we provide a degree of interpretability to quantitative imaging features predictive of mutation status by demonstrating their association with recurrence free survival using a hierarchical CT radiomics - deep learning pipeline.

Millisecond speed deep learning based proton dose calculation with Monte Carlo accuracy

  • Pastor-Serrano, O.
  • Perko, Z.
Phys Med Biol 2022 Journal Article, cited 0 times
Website
Objective. Next generation online and real-time adaptive radiotherapy workflows require precise particle transport simulations in sub-second times, which is unfeasible with current analytical pencil beam algorithms (PBA) or Monte Carlo (MC) methods. We present a deep learning based millisecond speed dose calculation algorithm (DoTA) accurately predicting the dose deposited by mono-energetic proton pencil beams for arbitrary energies and patient geometries. Approach. Given the forward-scattering nature of protons, we frame 3D particle transport as modeling a sequence of 2D geometries in the beam's eye view. DoTA combines convolutional neural networks extracting spatial features (e.g. tissue and density contrasts) with a transformer self-attention backbone that routes information between the sequence of geometry slices and a vector representing the beam's energy, and is trained to predict low noise MC simulations of proton beamlets using 80 000 different head and neck, lung, and prostate geometries. Main results. Predicting beamlet doses in 5 +/- 4.9 ms with a very high gamma pass rate of 99.37 +/- 1.17% (1%, 3 mm) compared to the ground truth MC calculations, DoTA significantly improves upon analytical pencil beam algorithms both in precision and speed. Offering MC accuracy 100 times faster than PBAs for pencil beams, our model calculates full treatment plan doses in 10-15 s depending on the number of beamlets (800-2200 in our plans), achieving a 99.70 +/- 0.14% (2%, 2 mm) gamma pass rate across 9 test patients. Significance. Outperforming all previous analytical pencil beam and deep learning based approaches, DoTA represents a new state of the art in data-driven dose calculation and can directly compete with the speed of even commercial GPU MC approaches. Providing the sub-second speed required for adaptive treatments, straightforward implementations could offer similar benefits to other steps of the radiotherapy workflow or other modalities such as helium or carbon treatments.

Phenotyping the Histopathological Subtypes of Non-Small-Cell Lung Carcinoma: How Beneficial Is Radiomics?

  • Pasini, G.
  • Stefano, A.
  • Russo, G.
  • Comelli, A.
  • Marinozzi, F.
  • Bini, F.
Diagnostics (Basel) 2023 Journal Article, cited 0 times
Website
The aim of this study was to investigate the usefulness of radiomics in the absence of well-defined standard guidelines. Specifically, we extracted radiomics features from multicenter computed tomography (CT) images to differentiate between the four histopathological subtypes of non-small-cell lung carcinoma (NSCLC). In addition, the results that varied with the radiomics model were compared. We investigated the presence of batch effects and the impact of feature harmonization on the models' performance. Moreover, the question of how the training dataset composition influenced the selected feature subsets and, consequently, the model's performance was also investigated. Through combining data from two publicly available datasets, this study involves a total of 152 squamous cell carcinoma (SCC), 106 large cell carcinoma (LCC), 150 adenocarcinoma (ADC), and 58 not otherwise specified (NOS) cases. Through the matRadiomics tool, which is an example of Image Biomarker Standardization Initiative (IBSI) compliant software, 1781 radiomics features were extracted from each of the malignant lesions that were identified in CT images. After batch analysis and feature harmonization, which were based on the ComBat tool and were integrated in matRadiomics, the datasets (the harmonized and the non-harmonized) were given as an input to a machine learning modeling pipeline. The following steps were articulated: (i) training-set/test-set splitting (80/20); (ii) a Kruskal-Wallis analysis and LASSO linear regression for the feature selection; (iii) model training; (iv) a model validation and hyperparameter optimization; and (v) model testing. Model optimization consisted of a 5-fold cross-validated Bayesian optimization, repeated ten times (inner loop). The whole pipeline was repeated 10 times (outer loop) with six different machine learning classification algorithms. Moreover, the stability of the feature selection was evaluated. Results showed that batch effects were present even when the voxels were resampled to an isotropic form, and that feature harmonization correctly removed them, even though the models' performances decreased. Moreover, the results showed that a low accuracy (61.41%) was reached when differentiating between the four subtypes, even though a high average area under the curve (AUC) of 0.831 was obtained. Further, the NOS subtype was classified almost completely correctly (true positive rate ~90%). The accuracy increased to 77.25% when only the SCC and ADC subtypes were considered, again with a high AUC (0.821), although harmonization decreased the accuracy to 58%. Moreover, the features that contributed the most to the models' performance were those extracted from wavelet decomposed and Laplacian of Gaussian (LoG) filtered images, and they belonged to the texture feature class. In conclusion, we showed that our multicenter data were affected by batch effects, that these could significantly alter the models' performance, and that feature harmonization correctly removed them. Although wavelet features seemed to be the most informative, an absolute feature subset could not be identified since it changed depending on the training/testing splitting. Moreover, performance was influenced by the chosen dataset and by the machine learning methods, which could reach a high accuracy in binary classification tasks but could underperform in multiclass problems. It is, therefore, essential that the scientific community propose a more systematic radiomics approach, focusing on multicenter studies, with clear and solid guidelines to facilitate the translation of radiomics to clinical practice.
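The feature-selection stage described above (a Kruskal-Wallis screen followed by LASSO) can be sketched as follows; the radiomics matrix and thresholds are synthetic assumptions, not the study's data or code:

```python
# Sketch of the feature-selection stage described above: a
# Kruskal-Wallis screen across the four subtypes followed by LASSO.
# Data are synthetic stand-ins for the extracted radiomics matrix.
import numpy as np
from scipy.stats import kruskal
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
X = rng.standard_normal((466, 200))          # patients x radiomic features
y = rng.integers(0, 4, size=466)             # subtype labels (0..3)

# 1) Keep features whose distributions differ across subtypes.
keep = [j for j in range(X.shape[1])
        if kruskal(*(X[y == c, j] for c in range(4))).pvalue < 0.05]

# 2) LASSO on the screened features; non-zero coefficients survive.
lasso = Lasso(alpha=0.01).fit(X[:, keep], y)
selected = [keep[j] for j, w in enumerate(lasso.coef_) if w != 0]
print(f"{len(keep)} after Kruskal-Wallis, {len(selected)} after LASSO")
```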

Fast and robust methods for non-rigid registration of medical images

  • Parraguez, Stefan Philippo Pszczolkowski
2014 Thesis, cited 1 times
Website

Radiomic feature clusters and Prognostic Signatures specific for Lung and Head & Neck cancer

  • Parmar, C.
  • Leijenaar, R. T.
  • Grossmann, P.
  • Rios Velazquez, E.
  • Bussink, J.
  • Rietveld, D.
  • Rietbergen, M. M.
  • Haibe-Kains, B.
  • Lambin, P.
  • Aerts, H. J.
2015 Journal Article, cited 0 times
Radiomics provides a comprehensive quantification of tumor phenotypes by extracting and mining large numbers of quantitative image features. To reduce the redundancy and compare the prognostic characteristics of radiomic features across cancer types, we investigated cancer-specific radiomic feature clusters in four independent Lung and Head & Neck (H&N) cancer cohorts (in total 878 patients). Radiomic features were extracted from the pre-treatment computed tomography (CT) images. Consensus clustering resulted in eleven and thirteen stable radiomic feature clusters for Lung and H&N cancer, respectively. These clusters were validated in independent external validation cohorts using the rand statistic (Lung RS = 0.92, p < 0.001, H&N RS = 0.92, p < 0.001). Our analysis indicated both common as well as cancer-specific clustering and clinical associations of radiomic features. Strongest associations with clinical parameters: Prognosis Lung CI = 0.60 +/- 0.01, Prognosis H&N CI = 0.68 +/- 0.01; Lung histology AUC = 0.56 +/- 0.03, Lung stage AUC = 0.61 +/- 0.01, H&N HPV AUC = 0.58 +/- 0.03, H&N stage AUC = 0.77 +/- 0.02. Full utilization of these cancer-specific characteristics of image features may further improve radiomic biomarkers, providing a non-invasive way of quantifying and monitoring tumor phenotypic characteristics in clinical practice.
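The cluster-validation step can be illustrated with a short sketch comparing cluster assignments between cohorts via the (adjusted) Rand statistic; the labels are toy values:

```python
# Sketch of the cluster-validation step above: comparing feature-cluster
# assignments between a training and a validation cohort with the
# (adjusted) Rand statistic. Labels are illustrative placeholders.
from sklearn.metrics import adjusted_rand_score

clusters_train = [0, 0, 1, 1, 2, 2, 2, 3]   # cluster label per feature
clusters_valid = [0, 0, 1, 1, 2, 2, 3, 3]   # same features, other cohort
print("adjusted Rand:", adjusted_rand_score(clusters_train, clusters_valid))
```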

Machine Learning methods for Quantitative Radiomic Biomarkers

  • Parmar, C.
  • Grossmann, P.
  • Bussink, J.
  • Lambin, P.
  • Aerts, H. J.
2015 Journal Article, cited 178 times
Website
Radiomics extracts and mines large numbers of medical imaging features quantifying tumor phenotypic characteristics. Highly accurate and reliable machine-learning approaches can drive the success of radiomic applications in clinical care. In this radiomic study, fourteen feature selection methods and twelve classification methods were examined in terms of their performance and stability for predicting overall survival. A total of 440 radiomic features were extracted from pre-treatment computed tomography (CT) images of 464 lung cancer patients. To ensure the unbiased evaluation of different machine-learning methods, publicly available implementations along with reported parameter configurations were used. Furthermore, we used two independent radiomic cohorts for training (n = 310 patients) and validation (n = 154 patients). We identified that the Wilcoxon test based feature selection method WLCX (stability = 0.84 +/- 0.05, AUC = 0.65 +/- 0.02) and the random forest (RF) classification method (RSD = 3.52%, AUC = 0.66 +/- 0.03) had the highest prognostic performance with high stability against data perturbation. Our variability analysis indicated that the choice of classification method is the most dominant source of performance variation (34.21% of total variance). Identification of optimal machine-learning methods for radiomic applications is a crucial step towards stable and clinically relevant radiomic biomarkers, providing a non-invasive way of quantifying and monitoring tumor-phenotypic characteristics in clinical practice.
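A hedged sketch of the best-performing combination reported above, a Wilcoxon rank-sum (WLCX) feature screen followed by a random-forest classifier scored by AUC, using synthetic placeholders for the radiomic cohorts:

```python
# Sketch of the best-performing pipeline reported above: a Wilcoxon
# rank-sum feature screen followed by a random-forest classifier,
# scored with AUC. The data here are synthetic placeholders.
import numpy as np
from scipy.stats import ranksums
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
X_tr, y_tr = rng.standard_normal((310, 440)), rng.integers(0, 2, 310)
X_va, y_va = rng.standard_normal((154, 440)), rng.integers(0, 2, 154)

# Rank features by Wilcoxon rank-sum p-value between outcome groups.
pvals = np.array([ranksums(X_tr[y_tr == 0, j], X_tr[y_tr == 1, j]).pvalue
                  for j in range(X_tr.shape[1])])
top = np.argsort(pvals)[:30]                 # keep the 30 strongest features

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_tr[:, top], y_tr)
print("AUC:", roc_auc_score(y_va, rf.predict_proba(X_va[:, top])[:, 1]))
```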

Machine learning applications for Radiomics: towards robust non-invasive predictors in clinical oncology

  • Parmar, Chintan
2017 Thesis, cited 1 times
Website

Brain Tumor Segmentation and Survival Prediction Using Patch Based Modified 3D U-Net

  • Parmar, Bhavesh
  • Parikh, Mehul
2021 Book Section, cited 0 times
Brain tumor segmentation is a vital clinical requirement. Recent years have seen deep learning become prevalent in medical image processing. Automated brain tumor segmentation can reduce diagnosis time and increase the potential for clinical intervention. In this work, we used a patch selection methodology based on a modified U-Net deep learning architecture, with appropriate normalization and patch selection methods, for the brain tumor segmentation task in the BraTS 2020 challenge. Two-phase network training was implemented with the patch selection methods. The performance of our deep learning based brain tumor segmentation approach was evaluated on CBICA's Image Processing Portal. We achieved Dice scores of 0.795, 0.886, and 0.827 in the testing phase for the enhancing tumor, whole tumor, and tumor core, respectively. The segmentation outcome, together with various radiomic features, was used for overall survival (OS) prediction, for which we achieved an accuracy of 0.570 in the testing phase. The algorithm can be further improved for tumor inter-class segmentation and OS prediction with various network implementation strategies; since the OS prediction results are based on the segmentation, improving the segmentation should improve OS prediction as well.

Automated Facial Recognition of Computed Tomography-Derived Facial Images: Patient Privacy Implications

  • Parks, Connie L
  • Monson, Keith L
2016 Journal Article, cited 3 times
Website
The recognizability of facial images extracted from publicly available medical scans raises patient privacy concerns. This study examined how accurately facial images extracted from computed tomography (CT) scans can be objectively matched with corresponding photographs of the scanned individuals. The test subjects were 128 adult Americans ranging in age from 18 to 60 years, representing both sexes and three self-identified population (ancestral descent) groups (African, European, and Hispanic). Using facial recognition software, the 2D images of the extracted facial models were compared for matches against five differently sized photo galleries. Depending on the scanning protocol and gallery size, in 6-61% of the cases, a correct life photo match for a CT-derived facial image was the top ranked image in the generated candidate lists, even when blind searching in excess of 100,000 images. In 31-91% of the cases, a correct match was located within the top 50 images. Few significant differences (p > 0.05) in match rates were observed between the sexes or across the three age cohorts. Highly significant differences (p < 0.01) were, however, observed across the three ancestral cohorts and between the two CT scanning protocols. Results suggest that the probability of a match between a facial image extracted from a medical scan and a photograph of the individual is moderately high. The facial image data inherent in commonly employed medical imaging modalities may need to be considered a potentially identifiable form of "comparable" facial imagery and protected as such under patient privacy legislation.

Tumor Propagation Model using Generalized Hidden Markov Model

  • Park, Sun Young
  • Sargent, Dustin
2017 Conference Proceedings, cited 0 times
Website

Radiomics risk score may be a potential imaging biomarker for predicting survival in isocitrate dehydrogenase wild-type lower-grade gliomas

  • Park, C. J.
  • Han, K.
  • Kim, H.
  • Ahn, S. S.
  • Choi, Y. S.
  • Park, Y. W.
  • Chang, J. H.
  • Kim, S. H.
  • Jain, R.
  • Lee, S. K.
Eur Radiol 2020 Journal Article, cited 0 times
Website
OBJECTIVES: Isocitrate dehydrogenase wild-type (IDHwt) lower-grade gliomas of histologic grades II and III follow heterogeneous clinical outcomes, which necessitates risk stratification. We aimed to evaluate whether radiomics from MRI would allow prediction of overall survival in patients with IDHwt lower-grade gliomas and to investigate the added prognostic value of radiomics over clinical features. METHODS: Preoperative MRIs of 117 patients with IDHwt lower-grade gliomas from January 2007 to February 2018 were retrospectively analyzed. The external validation cohort consisted of 33 patients from The Cancer Genome Atlas. A total of 182 radiomic features were extracted. Radiomics risk scores (RRSs) for overall survival were derived from the least absolute shrinkage and selection operator (LASSO) and elastic net. Multivariable Cox regression analyses, including clinical features and RRSs, were performed. The integrated areas under the receiver operating characteristic curves (iAUCs) from models with and without RRSs were calculated for comparisons. The prognostic value of RRS was assessed in the validation cohort. RESULTS: The RRS derived from LASSO and elastic net independently predicted survival with hazard ratios of 9.479 (95% confidence interval [CI], 3.220-27.847) and 6.148 (95% CI, 3.009-12.563), respectively. Those RRSs enhanced model performance for predicting overall survival (iAUC increased to 0.780-0.797 from 0.726), which was externally validated. The RRSs stratified IDHwt lower-grade gliomas in the validation cohort with significantly different survival. CONCLUSION: Radiomics has the potential for noninvasive risk stratification and can improve prediction of overall survival in patients with IDHwt lower-grade gliomas when integrated with clinical features. KEY POINTS: * Isocitrate dehydrogenase wild-type lower-grade gliomas with histologic grades II and III follow heterogeneous clinical outcomes, which necessitates further risk stratification. * Radiomics risk scores derived from MRI independently predict survival even after incorporating strong clinical prognostic features (hazard ratios 6.148-9.479). * Radiomics risk scores derived from MRI have the potential to improve survival prediction when added to clinical features (integrated areas under the receiver operating characteristic curves increased from 0.726 to 0.780-0.797).
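Deriving a radiomics risk score with a penalized Cox model, as described above, might look like the following sketch using lifelines; the dataframe contents are synthetic and the exact penalties used in the study are not reproduced here:

```python
# Sketch of deriving a radiomics risk score (RRS) with a penalized Cox
# model, in the spirit of the LASSO/elastic-net approach above, using
# lifelines. The dataframe contents are synthetic placeholders.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
df = pd.DataFrame(rng.standard_normal((117, 10)),
                  columns=[f"feat_{j}" for j in range(10)])
df["time"] = rng.exponential(24, 117)        # months of follow-up
df["event"] = rng.integers(0, 2, 117)        # 1 = death observed

# l1_ratio=1.0 gives LASSO; 0 < l1_ratio < 1 gives elastic net.
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="time", event_col="event")

# The RRS for each patient is the linear predictor of the fitted model.
rrs = cph.predict_log_partial_hazard(df)
print(rrs.head())
```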

Influence of Contrast Administration on Computed Tomography–Based Analysis of Visceral Adipose and Skeletal Muscle Tissue in Clear Cell Renal Cell Carcinoma

  • Paris, Michael T
  • Furberg, Helena F
  • Petruzella, Stacey
  • Akin, Oguz
  • Hötker, Andreas M
  • Mourtzakis, Marina
Journal of Parenteral and Enteral Nutrition 2018 Journal Article, cited 0 times
Website

Content dependent intra mode selection for medical image compression using HEVC

  • Parikh, S
  • Ruiz, D
  • Kalva, H
  • Fern, G
2016 Conference Proceedings, cited 3 times
Website
This paper presents a method for complexity reduction in medical image encoding that exploits the structure of medical images. The amount of texture detail and structure in medical images depends on the modality used to capture the image and the body part captured by that image. The proposed approach was evaluated using Computed Radiography (CR) modality, commonly known as x-ray imaging, and three body parts. The proposed method essentially reduces the number of CU partitions evaluated as well as the number of intra prediction modes for each evaluated partition. Evaluation using the HEVC reference software (HM) 16.4 and lossless intra coding shows an average reduction of 52.47% in encoding time with a negligible penalty of up to 0.22%, increase in compressed file size.

Tumor Connectomics: Mapping the Intra-Tumoral Complex Interaction Network Using Machine Learning

  • Parekh, V. S.
  • Pillai, J. J.
  • Macura, K. J.
  • LaViolette, P. S.
  • Jacobs, M. A.
Cancers (Basel) 2022 Journal Article, cited 0 times
Website
The high-level relationships that form complex networks within tumors and between the tumor and surrounding tissue are challenging to characterize and not fully understood. To better understand these tumoral networks, we developed a tumor connectomics framework (TCF) based on graph theory with machine learning to model the complex interactions within and around the tumor microenvironment that are detectable on imaging. The TCF characterization model was tested with independent datasets of breast, brain, and prostate lesions with corresponding validation datasets in breast and brain cancer. The TCF network connections were modeled using graph metrics of centrality, average path length (APL), and clustering from multiparametric MRI with IsoSVM. The Matthews Correlation Coefficient (MCC), Area Under the Curve-ROC, and Precision-Recall (AUC-ROC and AUC-PR) were used for statistical analysis. The TCF classified the breast and brain tumor cohorts with an IsoSVM AUC-PR and MCC of 0.86, 0.63 and 0.85, 0.65, respectively. The TCF benign breast lesions had a significantly higher clustering coefficient and degree centrality than malignant TCFs. Grade 2 brain tumors demonstrated higher connectivity compared to Grade 4 tumors with increased degree centrality and clustering coefficients. Gleason 7 prostate lesions had increased betweenness centrality and APL compared to Gleason 6 lesions with AUC-PR and MCC ranging from 0.90 to 0.99 and 0.73 to 0.87, respectively. These TCF findings were similar in the validation breast and brain datasets. In conclusion, we present a new method for tumor characterization and visualization that results in a better understanding of the global and regional connections within the lesion and surrounding tissue.
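The graph metrics at the core of the tumor connectomics framework can be sketched on a toy lesion graph with networkx (the graph itself is illustrative, not derived from MRI):

```python
# Sketch of the graph metrics that define the tumor connectomics
# framework above: degree centrality, clustering coefficient, and
# average path length, computed here on a toy lesion graph.
import networkx as nx

# Toy graph: nodes are intratumoral regions, edges are imaging-derived links.
G = nx.Graph([(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 0)])

print("degree centrality:", nx.degree_centrality(G))
print("clustering:", nx.average_clustering(G))
print("average path length:", nx.average_shortest_path_length(G))
```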

Deep learning for segmentation of brain tumors: Can we train with images from different institutions?

  • Paredes, David
  • Saha, Ashirbani
  • Mazurowski, Maciej A
2017 Conference Proceedings, cited 2 times
Website

LGMSU-Net: Local Features, Global Features, and Multi-Scale Features Fused the U-Shaped Network for Brain Tumor Segmentation

  • Pang, X. J.
  • Zhao, Z. J.
  • Wang, Y. L.
  • Li, F.
  • Chang, F. L.
2022 Journal Article, cited 0 times
Brain tumors are among the deadliest cancers in the world. Thanks to the rapid development of deep learning, researchers have conducted a great deal of work on brain tumor segmentation with good performance, to assist doctors in diagnosis and treatment. However, most of these methods cannot fully combine multiple kinds of feature information, and their performance needs to be improved. This study developed a novel network fusing local features representing detailed information, global features representing global information, and multi-scale features enhancing the model's robustness, to fully extract the features of brain tumors, and proposed a novel axial-deformable attention module for modeling global information to improve the performance of brain tumor segmentation and assist clinicians in the automatic segmentation of brain tumors. Moreover, positional embeddings were used to make network training faster and improve the method's performance. Six metrics were used to evaluate the proposed method on the BraTS2018 dataset. Outstanding performance was obtained with a Dice score, mean Intersection over Union, precision, recall, parameter count, and inference time of 0.8735, 0.7756, 0.9477, 0.8769, 69.02 M, and 15.66 ms, respectively, for the whole tumor. Extensive experiments demonstrated that the proposed network obtained excellent performance and was helpful in providing supplementary advice to clinicians.

A Novel End-to-End Classifier Using Domain Transferred Deep Convolutional Neural Networks for Biomedical Images

  • Pang, Shuchao
  • Yu, Zhezhou
  • Orgun, Mehmet A
Computer Methods and Programs in Biomedicine 2017 Journal Article, cited 21 times
Website
BACKGROUND AND OBJECTIVES: Highly accurate classification of biomedical images is an essential task in the clinical diagnosis of numerous medical diseases identified from those images. Traditional image classification methods combined with hand-crafted image feature descriptors and various classifiers are not able to effectively improve the accuracy rate and meet the high requirements of classification of biomedical images. The same also holds true for artificial neural network models directly trained with limited biomedical images used as training data or directly used as a black box to extract the deep features based on another distant dataset. In this study, we propose a highly reliable and accurate end-to-end classifier for all kinds of biomedical images via deep learning and transfer learning. METHODS: We first apply domain transferred deep convolutional neural network for building a deep model; and then develop an overall deep learning architecture based on the raw pixels of original biomedical images using supervised training. In our model, we do not need the manual design of the feature space, seek an effective feature vector classifier or segment specific detection object and image patches, which are the main technological difficulties in the adoption of traditional image classification methods. Moreover, we do not need to be concerned with whether there are large training sets of annotated biomedical images, affordable parallel computing resources featuring GPUs or long times to wait for training a perfect deep model, which are the main problems to train deep neural networks for biomedical image classification as observed in recent works. RESULTS: With the utilization of a simple data augmentation method and fast convergence speed, our algorithm can achieve the best accuracy rate and outstanding classification ability for biomedical images. We have evaluated our classifier on several well-known public biomedical datasets and compared it with several state-of-the-art approaches. CONCLUSIONS: We propose a robust automated end-to-end classifier for biomedical images based on a domain transferred deep convolutional neural network model that shows a highly reliable and accurate performance which has been confirmed on several public biomedical image datasets.

A Novel Biomedical Image Indexing and Retrieval System via Deep Preference Learning

  • Pang, Shuchao
  • Orgun, MA
  • Yu, Zhezhou
Computer Methods and Programs in Biomedicine 2018 Journal Article, cited 4 times
Website
BACKGROUND AND OBJECTIVES: The traditional biomedical image retrieval methods as well as content-based image retrieval (CBIR) methods originally designed for non-biomedical images either only consider using pixel and low-level features to describe an image or use deep features to describe images but still leave a lot of room for improving both accuracy and efficiency. In this work, we propose a new approach, which exploits deep learning technology to extract the high-level and compact features from biomedical images. The deep feature extraction process leverages multiple hidden layers to capture substantial feature structures of high-resolution images and represent them at different levels of abstraction, leading to an improved performance for indexing and retrieval of biomedical images. METHODS: We exploit the current popular and multi-layered deep neural networks, namely, stacked denoising autoencoders (SDAE) and convolutional neural networks (CNN) to represent the discriminative features of biomedical images by transferring the feature representations and parameters of pre-trained deep neural networks from another domain. Moreover, in order to index all the images for finding the similarly referenced images, we also introduce preference learning technology to train and learn a kind of a preference model for the query image, which can output the similarity ranking list of images from a biomedical image database. To the best of our knowledge, this paper introduces preference learning technology for the first time into biomedical image retrieval. RESULTS: We evaluate the performance of two powerful algorithms based on our proposed system and compare them with those of popular biomedical image indexing approaches and existing regular image retrieval methods with detailed experiments over several well-known public biomedical image databases. Based on different criteria for the evaluation of retrieval performance, experimental results demonstrate that our proposed algorithms outperform the state-of-the-art techniques in indexing biomedical images. CONCLUSIONS: We propose a novel and automated indexing system based on deep preference learning to characterize biomedical images for developing computer aided diagnosis (CAD) systems in healthcare. Our proposed system shows an outstanding indexing ability and high efficiency for biomedical image retrieval applications and it can be used to collect and annotate the high-resolution images in a biomedical database for further biomedical image research and applications.

A novel fused convolutional neural network for biomedical image classification

  • Pang, Shuchao
  • Du, Anan
  • Orgun, Mehmet A
  • Yu, Zhezhou
2018 Journal Article, cited 0 times
Website

Glioma Segmentation Using Encoder-Decoder Network and Survival Prediction Based on Cox Analysis

  • Pang, Enshuai
  • Shi, Wei
  • Li, Xuan
  • Wu, Qiang
2021 Book Section, cited 0 times
Glioma imaging analysis is a challenging task. In this paper, we used an encoder-decoder structure to complete the task of glioma segmentation. The most important characteristic of the presented segmentation structure is that it can extract richer features while greatly reducing the number of network parameters and the consumption of computing resources. Different texture, first-order statistic, and shape-based features were extracted from the BraTS 2020 dataset. Then, we used Cox survival analysis to perform feature selection on the extracted features. Finally, we used a random forest regression model to predict the survival time of the patients. The result of survival prediction with five-fold cross-validation on the training dataset is better than the baseline system.

A prediction error based reversible data hiding scheme in encrypted image using block marking and cover image pre-processing

  • Panchikkil, Shaiju
  • Manikandan, V. M.
Multimedia Tools and Applications 2023 Journal Article, cited 0 times
Website
A drastic change in communication is happening with digitization, and technological advancements will escalate its pace further. Human health care systems have improved with technology, remodeling traditional ways of treatment. There has been a peak increase in the rate of telehealth and e-health care services during the coronavirus disease 2019 (COVID-19) pandemic. These implications make reversible data hiding (RDH) a hot topic in research, especially for medical image transmission. Recovering the transmitted medical image (MI) at the receiver side is challenging, as an incorrect MI can lead to a wrong diagnosis. Hence, in this paper, we propose an MSB prediction error based RDH scheme in an encrypted image with high embedding capacity, which recovers the original image with a peak signal-to-noise ratio (PSNR) of ∞ dB and a structural similarity index (SSIM) value of 1. We scan the MI from the first pixel in the top left corner using the snake scan approach in dual modes: i) performing a rightward direction scan and ii) performing a downward direction scan, to identify the best optimal embedding rate for an image. Building on the prediction error strategy, multiple MSBs are utilized for embedding the encrypted PHR data. The experimental studies on test images demonstrate a high embedding rate, with more than 3 bpp for 16-bit high-quality DICOM images and more than 1 bpp for most natural images. The outcomes are much more promising compared to other similar state-of-the-art RDH methods.
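The reversibility claim above (PSNR of ∞ dB, SSIM of 1) simply means bit-exact recovery of the cover image; a small sketch of that check with scikit-image metrics, on a random placeholder image:

```python
# Sketch of the recovery check implied above: a reversible scheme must
# restore the cover image exactly, i.e. PSNR -> infinity and SSIM = 1.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

original = np.random.default_rng(4).integers(0, 256, (64, 64), np.uint8)
recovered = original.copy()                  # perfect reversible recovery

if np.array_equal(original, recovered):
    print("PSNR = inf dB, SSIM = 1 (exact recovery)")
else:
    print("PSNR:", peak_signal_noise_ratio(original, recovered))
    print("SSIM:", structural_similarity(original, recovered))
```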

A subregion-based prediction model for local-regional recurrence risk in head and neck squamous cell carcinoma

  • Pan, Ziqi
  • Men, Kuo
  • Liang, Bin
  • Song, Zhiyue
  • Wu, Runye
  • Dai, Jianrong
Radiother Oncol 2023 Journal Article, cited 0 times
Website
BACKGROUND AND PURPOSE: Given that the intratumoral heterogeneity of head and neck squamous cell carcinoma may be related to the local control rate of radiotherapy, the aim of this study was to construct a subregion-based model that can predict the risk of local-regional recurrence, and to quantitatively assess the relative contribution of subregions. MATERIALS AND METHODS: The CT images, PET images, dose images and GTVs of 228 patients with head and neck squamous cell carcinoma from four different institutions of The Cancer Imaging Archive (TCIA) were included in the study. A supervoxel segmentation algorithm called maskSLIC was used to generate individual-level subregions. After extracting 1781 radiomics and 1767 dosiomics features from subregions, an attention-based multiple instance risk prediction model (MIR) was established. The GTV model was developed based on the whole tumour area and was used to compare the prediction performance with the MIR model. Furthermore, the MIR-Clinical model was constructed by integrating the MIR model with clinical factors. Subregional analysis was carried out through the Wilcoxon test to find the differential radiomic features between the highest and lowest weighted subregions. RESULTS: Compared with the GTV model, the C-index of the MIR model was significantly increased from 0.624 to 0.721 (Wilcoxon test, p value < 0.0001). When the MIR model was combined with clinical factors, the C-index was further increased to 0.766. Subregional analysis showed that for LR patients, the top three differential radiomic features between the highest and lowest weighted subregions were GLRLM_ShortRunHighGrayLevelEmphasis, GLRLM_HighGrayLevelRunEmphasis and GLRLM_LongRunHighGrayLevelEmphasis. CONCLUSION: This study developed a subregion-based model that can predict the risk of local-regional recurrence and quantitatively assess relevant subregions, which may provide technical support for precision radiotherapy in head and neck squamous cell carcinoma.
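Individual-level subregions in the spirit of maskSLIC can be sketched with scikit-image's SLIC, which in recent versions accepts a mask so that supervoxels are computed only inside the GTV; the image, mask, and parameters below are toy assumptions:

```python
# Sketch of individual-level subregion generation in the spirit of
# maskSLIC: supervoxels are computed only inside the tumor mask.
# Uses scikit-image's SLIC with its mask argument (available in
# recent versions); the image and mask here are toy stand-ins.
import numpy as np
from skimage.segmentation import slic

rng = np.random.default_rng(5)
image = rng.random((96, 96))                 # toy CT/PET slice
mask = np.zeros((96, 96), bool)
mask[30:70, 30:70] = True                    # toy GTV contour

subregions = slic(image, n_segments=12, compactness=0.1,
                  mask=mask, channel_axis=None)
print("supervoxel labels inside GTV:", np.unique(subregions[mask]))
```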

A survival prediction model via interpretable machine learning for patients with oropharyngeal cancer following radiotherapy

  • Pan, Xiaoying
  • Feng, Tianhao
  • Liu, Chen
  • Savjani, Ricky R.
  • Chin, Robert K.
  • Sharon Qi, X.
2023 Journal Article, cited 0 times
Purpose: To explore interpretable machine learning (ML) methods, with the hope of adding prognostic value, for predicting survival of patients with oropharyngeal cancer (OPC). Methods: A cohort of 427 OPC patients (training 341, test 86) from the TCIA database was analyzed. Radiomic features of the gross tumor volume (GTV) extracted from planning CT using Pyradiomics, together with patient characteristics such as HPV p16 status, were considered as potential predictors. A multi-level dimension reduction algorithm consisting of Least-Absolute-Selection-Operator (Lasso) and Sequential-Floating-Backward-Selection (SFBS) was proposed to effectively remove redundant/irrelevant features. The interpretable model was constructed by quantifying the contribution of each feature to the Extreme-Gradient-Boosting (XGBoost) decision by the Shapley-Additive-exPlanations (SHAP) algorithm. Results: The Lasso-SFBS algorithm proposed in this study finally selected 14 features, and our prediction model achieved an area under the ROC curve (AUC) of 0.85 on the test dataset based on this feature set. The ranking of the contribution values calculated by SHAP shows that the top predictors most correlated with survival were ECOG performance status, wavelet-LLH_firstorder_Mean, chemotherapy, wavelet-LHL_glcm_InverseVariance, and tumor size. Patients who had chemotherapy, positive HPV p16 status, and lower ECOG performance status tended to have higher SHAP scores and longer survival; those who were older at diagnosis, with heavy drinking and smoking pack-year histories, tended to have lower SHAP scores and shorter survival. Conclusion: We demonstrated the predictive value of combined patient characteristics and imaging features for the overall survival of OPC patients. The multi-level dimension reduction algorithm can reliably identify the most plausible predictors that are most associated with overall survival. The interpretable patient-specific survival prediction model, capturing correlations of each predictor and clinical outcome, was developed to facilitate clinical decision-making for personalized treatment.
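The interpretability step, attributing an XGBoost model's decisions to individual features with SHAP, can be sketched as follows; the data are synthetic and the label is a stand-in for the survival target:

```python
# Sketch of the interpretability step above: fit an XGBoost model and
# quantify each feature's contribution with SHAP. Data are synthetic
# placeholders, not the study's cohort.
import numpy as np
import shap
import xgboost

rng = np.random.default_rng(6)
X = rng.standard_normal((427, 14))           # 14 selected features
y = rng.integers(0, 2, 427)                  # e.g. survival group label

model = xgboost.XGBClassifier(n_estimators=100).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)       # per-patient, per-feature
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```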

Compressibility variations of JPEG2000 compressed computed tomography

  • Pambrun, Jean-Francois
  • Noumeir, Rita
2013 Conference Proceedings, cited 3 times
Website
Compression is increasingly used in medical applications to enable efficient and universally accessible electronic health records. However, lossy compression introduces artifacts that can alter diagnostic accuracy, interfere with image processing algorithms and cause liability issues in cases of diagnostic errors. Compression guidelines were introduced to mitigate these issues and foster the use of modern compression algorithms with diagnostic imaging. However, these guidelines are usually defined as maximum compression ratios for each imaging protocol and do not take compressibility variations due to image content into account. In this paper we have evaluated the compressibility of thousands of computed tomography slices of an anthropomorphic thoracic phantom acquired with different parameters. We have shown that exposure, slice thickness and reconstruction filters have a significant impact on compressibility suggesting that guidelines based solely on compression ratios may be inadequate.

Prediction of MGMT Methylation Status of Glioblastoma Using Radiomics and Latent Space Shape Features

  • Pálsson, Sveinn
  • Cerri, Stefano
  • Van Leemput, Koen
2022 Book Section, cited 0 times
In this paper we propose a method for predicting the status of MGMT promoter methylation in high-grade gliomas. From the available MR images, we segment the tumor using deep convolutional neural networks and extract both radiomic features and shape features learned by a variational autoencoder. We implemented a standard machine learning workflow to obtain predictions, consisting of feature selection followed by training of a random forest classification model. We trained and evaluated our method on the RSNA-ASNR-MICCAI BraTS 2021 challenge dataset and submitted our predictions to the challenge.

Semi-supervised Variational Autoencoder for Survival Prediction

  • Pálsson, Sveinn
  • Cerri, Stefano
  • Dittadi, Andrea
  • Leemput, Koen Van
2020 Book Section, cited 0 times
In this paper we propose a semi-supervised variational autoencoder for classification of overall survival groups from tumor segmentation masks. The model can use the output of any tumor segmentation algorithm, removing all assumptions on the scanning platform and the specific type of pulse sequences used, thereby increasing its generalization properties. Due to its semi-supervised nature, the method can learn to classify survival time by using a relatively small number of labeled subjects. We validate our model on the publicly available dataset from the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2019.

PROST-Net: A Deep Learning Approach to Support Real-Time Fusion in Prostate Biopsy

  • Palladino, Luigi
  • Maris, Bogdan
  • Antonelli, Alessandro
  • Fiorini, Paolo
2022 Journal Article, cited 0 times
Website
Prostate biopsy fusion systems employ manual segmentation of the prostate before the procedure, so the image registration is static. To pave the way for dynamic fusion, we introduce PROST-Net, a deep learning (DL) based method to segment the prostate in real time. The algorithm works in three steps: first, it detects the presence of the prostate; second, it defines a region of interest around it, discarding other pixels of the image; and the last step is the segmentation. This approach reduces the amount of data to be processed during segmentation and allows the prostate to be contoured regardless of the image modality (e.g., magnetic resonance (MRI) or ultrasound (US)) and, in the case of US, regardless of the geometric disposition of the sensor array (e.g., linear or convex). PROST-Net produced a mean Dice similarity coefficient of 86% in US images and 77% in MRI images and outperformed other CNN-based techniques. PROST-Net is integrated in a robotic system, PROST, for trans-perineal fusion biopsy. The robot with PROST-Net has the potential to track the prostate in real time, thus reducing human errors during the biopsy procedure.

Automatic Pancreas Segmentation using A Novel Modified Semantic Deep Learning Bottom-Up Approach

  • Paithane, Pradip Mukundrao
  • Kakarwal, S. N.
International Journal of Intelligent Systems and Applications in Engineering 2022 Journal Article, cited 0 times
Website
Sharp and smooth pancreas segmentation is a crucial and arduous problem in medical image analysis and investigation. A semantic deep learning bottom-up approach is the most popular and efficient method used for pancreas segmentation with a smooth and sharp result. The automatic pancreas segmentation process is performed through semantic segmentation of abdominal computed tomography (CT) clinical images. A novel semantic segmentation is applied for acute pancreas segmentation with CT images from different angles. The novel modified semantic approach uses 12 layers. The proposed model is executed on a dataset of single-phase CT images from 80 patients; 699 images are taken from the dataset, from different angles, for training and 150 images for testing. The proposed approach can be used for segmentation of many organs from clinical CT scan images with high accuracy. The "transposedConv2dLayer" layer is used for up-sampling and down-sampling, so the computation time is reduced compared to the state of the art. The BFScore, Dice coefficient, and Jaccard coefficient are used to calculate similarity index values between the test image and the expected output image. The proposed approach achieved a Dice similarity index score of up to 81 ± 7.43%. The class balancing process is executed with the help of class weights and data augmentation. In the novel modified semantic segmentation, max-pooling, ReLU, softmax, transposed conv2d, and dicePixelClassification layers are used. DicePixelClassification is newly introduced and incorporated in the novel method for improved results. VGG-16, VGG-19 and ResNet-18 deep learning models are used for pancreas segmentation.
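The similarity indices cited above can be sketched directly; the masks are toy placeholders, and the last term checks the exact algebraic relation Dice = 2J / (1 + J) between the Jaccard and Dice coefficients:

```python
# Sketch of the similarity indices used above: Jaccard and Dice on a
# predicted vs. expected segmentation mask, plus their exact relation
# Dice = 2J / (1 + J). Masks are toy placeholders.
import numpy as np

pred = np.zeros((32, 32), bool); pred[8:24, 8:24] = True
truth = np.zeros((32, 32), bool); truth[10:26, 10:26] = True

inter = np.logical_and(pred, truth).sum()
union = np.logical_or(pred, truth).sum()
jaccard = inter / union
dice = 2 * inter / (pred.sum() + truth.sum())
print(f"Jaccard = {jaccard:.3f}, Dice = {dice:.3f}, "
      f"2J/(1+J) = {2 * jaccard / (1 + jaccard):.3f}")
```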

3D Pulmonary Nodules Detection Using Fast Marching Segmentation

  • Paing, MP
  • Choomchuay, S
Journal of Fundamental and Applied Sciences 2017 Journal Article, cited 1 times
Website

Foundation model for cancer imaging biomarkers

  • Pai, S.
  • Bontempi, D.
  • Hadzic, I.
  • Prudente, V.
  • Sokac, M.
  • Chaunzwa, T. L.
  • Bernatz, S.
  • Hosny, A.
  • Mak, R. H.
  • Birkbak, N. J.
  • Aerts, Hjwl
2024 Journal Article, cited 0 times
Website
Foundation models in deep learning are characterized by a single large-scale model trained on vast amounts of data serving as the foundation for various downstream tasks. Foundation models are generally trained using self-supervised learning and excel in reducing the demand for training samples in downstream applications. This is especially important in medicine, where large labelled datasets are often scarce. Here, we developed a foundation model for cancer imaging biomarker discovery by training a convolutional encoder through self-supervised learning using a comprehensive dataset of 11,467 radiographic lesions. The foundation model was evaluated in distinct and clinically relevant applications of cancer imaging-based biomarkers. We found that it facilitated better and more efficient learning of imaging biomarkers and yielded task-specific models that significantly outperformed conventional supervised and other state-of-the-art pretrained implementations on downstream tasks, especially when training dataset sizes were very limited. Furthermore, the foundation model was more stable to input variations and showed strong associations with underlying biology. Our results demonstrate the tremendous potential of foundation models in discovering new imaging biomarkers that may extend to other clinical use cases and can accelerate the widespread translation of imaging biomarkers into clinical settings.

Modeling and Operator Control of a Robotic Tool for Bidirectional Manipulation in Targeted Prostate Biopsy

  • Padasdao, B.
  • Batsaikhan, Z.
  • Lafreniere, S.
  • Rabiei, M.
  • Konh, B.
2022 Journal Article, cited 0 times
Website
This work introduces design, manipulation, and operator control of a bidirectional robotic tool for minimally invasive targeted prostate biopsy. The robotic tool is purposed to be used as a compliant flexure section of active biopsy needles. The design of the robotic tool comprises of a flexure section fabricated on a nitinol tube that enables bidirectional bending via actuation of two internal tendons. The statics of the flexure section is presented and validated with experimental data. Finally, the capability of the robotic tool to reach targeted positions inside prostate gland is evaluated.

Brain tumor detection based on Convolutional Neural Network with neutrosophic expert maximum fuzzy sure entropy

  • Özyurt, Fatih
  • Sert, Eser
  • Avci, Engin
  • Dogantekin, Esin
Measurement 2019 Journal Article, cited 0 times
Brain tumor classification is a challenging task in the field of medical image processing. The present study proposes a hybrid method using Neutrosophy and a Convolutional Neural Network (NS-CNN). It aims to classify tumor regions that are segmented from brain images as benign or malignant. In the first stage, MRI images were segmented using the neutrosophic set – expert maximum fuzzy-sure entropy (NS-EMFSE) approach. The features of the segmented brain images in the classification stage were obtained by CNN and classified using SVM and KNN classifiers. Experimental evaluation was carried out based on 5-fold cross-validation on 80 benign tumors and 80 malignant tumors. The findings demonstrated that the CNN features displayed high classification performance with different classifiers. Experimental results indicate that CNN features performed better with SVM, with an average classification success of 95.62%.

An expert system for brain tumor detection: Fuzzy C-means with super resolution and convolutional neural network with extreme learning machine

  • Ozyurt, F.
  • Sert, E.
  • Avci, D.
Med Hypotheses 2020 Journal Article, cited 10 times
Website
Super-resolution, one of the trending issues of recent times, increases the resolution of images to higher levels. Increasing the resolution of an image that is vital in terms of the information it contains, such as a brain magnetic resonance image (MRI), makes the important information in the MRI more visible and clearer, so that the borders of tumors in the image can be found more successfully. In this study, a brain tumor detection approach based on fuzzy C-means with super-resolution and convolutional neural networks with extreme learning machine algorithms (SR-FCM-CNN) is proposed. The aim has been to segment tumors from brain MR images with high performance using the super-resolution fuzzy C-means (SR-FCM) approach for tumor detection. Afterward, features were extracted with the pretrained SqueezeNet architecture from among convolutional neural network (CNN) architectures, and classification was performed with an extreme learning machine (ELM). In the experimental studies, brain tumors were segmented and extracted more successfully using the SR-FCM method. The SqueezeNet architecture allowed features to be extracted from a smaller neural network model with fewer parameters. In the proposed method, a 98.33% accuracy rate was achieved in the diagnosis of brain tumors segmented using SR-FCM. This rate is 10% greater than the recognition rate for brain tumors segmented with fuzzy C-means (FCM) without super-resolution.
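The fuzzy C-means step underlying SR-FCM can be sketched with scikit-fuzzy on toy intensities (this is not the authors' implementation, and the cluster count and fuzzifier are assumptions):

```python
# Sketch of the fuzzy C-means clustering step underlying the SR-FCM
# segmentation described above, using scikit-fuzzy on toy intensities.
import numpy as np
from skfuzzy.cluster import cmeans

rng = np.random.default_rng(7)
# Toy 1-D intensity data: background, tissue, and tumor-like clusters.
intensities = np.concatenate([rng.normal(m, 5, 200) for m in (30, 100, 180)])

cntr, u, *_ = cmeans(
    intensities.reshape(1, -1),  # data shape: (features, samples)
    c=3, m=2.0, error=1e-5, maxiter=200)
labels = np.argmax(u, axis=0)    # hard labels from fuzzy memberships
print("cluster centers:", np.sort(cntr.ravel()))
```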

A Transfer Representation Learning Approach for Breast Cancer Diagnosis from Mammograms using EfficientNet Models

  • Oza, Parita Rajiv
  • Sharma, Paawan
  • Patel, Samir
2022 Journal Article, cited 0 times
Website
Breast cancer is a deadly disease that affects the lives of millions of women throughout the world, and the number of cases has increased over time. Preventing this disease is difficult, but the survival rate can be improved if it is diagnosed early. The progress of computer-assisted diagnosis (CAD) of breast cancer has seen a lot of improvement thanks to advances in deep learning. With the notable advancement of deep neural networks, diagnostic capabilities are nearing those of a human expert. In this paper, we used EfficientNet to classify mammograms. This model introduces a new concept of model scaling called compound scaling, a strategy that scales the model by adding more layers to extend the receptive field and more channels to capture the finer features of larger inputs. We also compare the performance of the various EfficientNet variants on the CBIS-DDSM mammogram dataset, and apply an optimized fine-tuning procedure to demonstrate the importance of transfer learning (TL) during training.
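The compound-scaling rule can be stated concretely. In the original EfficientNet formulation, depth, width, and input resolution are scaled as alpha^phi, beta^phi, and gamma^phi with alpha * beta^2 * gamma^2 ≈ 2, so each increment of the compound coefficient phi roughly doubles FLOPs; the published base coefficients are alpha = 1.2, beta = 1.1, gamma = 1.15. A small illustrative sketch (the baseline numbers are hypothetical):

```python
# Sketch of EfficientNet-style compound scaling (Tan & Le, 2019).
def compound_scale(base_depth, base_width, base_resolution, phi,
                   alpha=1.2, beta=1.1, gamma=1.15):
    """Return (depth, width, resolution) after scaling a baseline by phi."""
    depth = base_depth * (alpha ** phi)              # more layers
    width = base_width * (beta ** phi)               # more channels
    resolution = base_resolution * (gamma ** phi)    # larger input
    return round(depth), round(width), round(resolution)

# Example: scaling a baseline of 16 blocks, 32 channels, 224-px input by phi=3.
print(compound_scale(16, 32, 224, phi=3))            # -> (28, 43, 341)
```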

A Drive Through Computer-Aided Diagnosis of Breast Cancer: A Comprehensive Study of Clinical and Technical Aspects

  • Oza, Parita
  • Sharma, Paawan
  • Patel, Samir
2022 Conference Paper, cited 0 times
Breast cancer is a very common and life-threatening disease in women worldwide, and the number of cases is increasing with time. Prevention of this disease is very challenging and remains an open question, but if it is detected early, the survival rate can be increased. Advances in deep learning have brought many changes to the development of Computer-Aided Diagnosis (CAD) of breast cancer. With the noteworthy progress of deep neural networks, the diagnostic capabilities of deep learning methods are closely approaching human expertise. Although deep learning, especially Convolutional Neural Networks (CNNs), has seen substantial improvements and advancements, some challenges must still be addressed to build an effective CAD system that can serve as a "second opinion" tool for practitioners. A comprehensive review of clinical aspects of breast cancer such as risk factors, breast abnormalities, and BI-RADS (Breast Imaging Reporting and Data System) is presented in the paper. The paper also presents CAD systems recently developed for breast cancer segmentation, detection, and classification. An overview of the mammography datasets used in the literature and the challenges of applying CNNs to medical images are also discussed.

Hybridized Deep Convolutional Neural Network and Fuzzy Support Vector Machines for Breast Cancer Detection

  • Oyetade, Idowu Sunday
  • Ayeni, Joshua Ojo
  • Ogunde, Adewale Opeoluwa
  • Oguntunde, Bosede Oyenike
  • Olowookere, Toluwase Ayobami
SN Computer Science 2021 Journal Article, cited 0 times
Website
A cancerous growth that originates from breast tissue is known as breast cancer, reported to be a leading cause of death among women globally. Previous research has shown that the application of Computer-Aided Detection (CADe) in screening mammography can help radiologists avoid missing breast cancer cases. However, many existing systems are prone to false detections or misclassifications and are mostly tailored towards either binary classification or three-class classification. Therefore, this study develops both two-class and three-class models for breast cancer detection and classification, employing a deep convolutional neural network (DCNN) with fuzzy support vector machines. The models were developed using mammograms downloaded from the Digital Database for Screening Mammography (DDSM) and the Curated Breast Imaging Subset of DDSM (CBIS-DDSM) data repositories. The datasets were pre-processed, and features were extracted for classification with the DCNN and fuzzy support vector machines (SVMs). The system was evaluated using accuracy, sensitivity, AUC, F1-score, and the confusion matrix. The 3-class model gave an accuracy of 81.43% for the DCNN and 85.00% for the fuzzy SVM. The first layer of the serial 2-layer DCNN with fuzzy SVM for binary prediction yielded 99.61% and 100.00% accuracy, respectively, while the second layer gave 86.60% and 91.65%, respectively. This study's contribution to knowledge is the hybridization of a deep convolutional neural network with fuzzy support vector machines to improve the detection and classification of cancerous and non-cancerous breast tumours in both binary and three-class classification scenarios.
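One common reading of "fuzzy SVM" (the Lin and Wang formulation) assigns each training sample a fuzzy membership that weights its error term in the SVM objective; scikit-learn exposes this through `sample_weight`. A hedged sketch with placeholder DCNN features, not the paper's pipeline:

```python
# Fuzzy memberships as per-sample error weights in an SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))            # hypothetical DCNN features
y = rng.integers(0, 2, size=200)          # benign/malignant labels

# Membership: closer to the class mean -> higher weight (one simple choice).
centers = np.stack([X[y == k].mean(axis=0) for k in (0, 1)])
dist = np.linalg.norm(X - centers[y], axis=1)
membership = 1.0 - dist / (dist.max() + 1e-12)

clf = SVC(kernel="rbf")
clf.fit(X, y, sample_weight=membership)   # memberships down-weight outliers
```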

WarpDrive: Improving spatial normalization using manual refinements

  • Oxenford, S.
  • Rios, A. S.
  • Hollunder, B.
  • Neudorfer, C.
  • Boutet, A.
  • Elias, G. J. B.
  • Germann, J.
  • Loh, A.
  • Deeb, W.
  • Salvato, B.
  • Almeida, L.
  • Foote, K. D.
  • Amaral, R.
  • Rosenberg, P. B.
  • Tang-Wai, D. F.
  • Wolk, D. A.
  • Burke, A. D.
  • Sabbagh, M. N.
  • Salloway, S.
  • Chakravarty, M. M.
  • Smith, G. S.
  • Lyketsos, C. G.
  • Okun, M. S.
  • Anderson, W. S.
  • Mari, Z.
  • Ponce, F. A.
  • Lozano, A.
  • Neumann, W. J.
  • Al-Fatly, B.
  • Horn, A.
Med Image Anal 2023 Journal Article, cited 0 times
Website
Spatial normalization, the process of mapping subject brain images to an average template brain, has evolved over the last 20+ years into a reliable method that facilitates the comparison of brain imaging results across patients, centers, and modalities. While overall successful, this automatic process sometimes yields suboptimal results, especially when dealing with brains with extensive neurodegeneration and atrophy patterns, or when high accuracy in specific regions is needed. Here we introduce WarpDrive, a novel tool for manual refinement of image alignment after automated registration. We show that the tool, applied in a cohort of patients with Alzheimer's disease who underwent deep brain stimulation surgery, helps create more accurate representations of the data as well as meaningful models to explain patient outcomes. The tool is built to handle any type of 3D imaging data, also allowing refinements in high-resolution imaging, including histology and multiple modalities, to precisely aggregate multiple data sources together.

Effective Diagnosis and Treatment through Content-Based Medical Image Retrieval (CBMIR) by Using Artificial Intelligence

  • Owais, Muhammad
  • Arsalan, Muhammad
  • Choi, Jiho
  • Park, Kang Ryoung
J Clin Med 2019 Journal Article, cited 0 times
Website
Medical-image-based diagnosis is a tedious task, and small lesions in various medical images can be overlooked by medical experts due to the limited attention span of the human visual system, which can adversely affect medical treatment. However, this problem can be resolved by exploring similar cases in a previous medical database through an efficient content-based medical image retrieval (CBMIR) system. In the past few years, heterogeneous medical imaging databases have been growing rapidly with the advent of different types of medical imaging modalities. Medical doctors now commonly refer to various imaging modalities together, such as computed tomography (CT), magnetic resonance imaging (MRI), X-ray, and ultrasound of various organs, for the diagnosis and treatment of a specific disease. Accurate classification and retrieval of multimodal medical imaging data is the key challenge for a CBMIR system. Most previous attempts use handcrafted features for medical image classification and retrieval, which show low performance on massive collections of multimodal databases. Although there are a few previous studies on the use of deep features for classification, the number of classes considered is very small. To solve this problem, we propose a classification-based retrieval system for multimodal medical images from various types of imaging modalities, using an artificial intelligence technique, namely an enhanced residual network (ResNet). Experimental results with 12 databases including 50 classes demonstrate that the accuracy and F1-score of our method are 81.51% and 82.42%, respectively, which are higher than those of the previous CBMIR method (accuracy of 69.71% and F1-score of 69.63%).
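A minimal sketch of ResNet-based retrieval in the same spirit (the enhanced ResNet, 12 databases, and 50 classes are not reproduced; torchvision's stock resnet50 stands in as the feature extractor, and the tensors are placeholders):

```python
# Embed images with a ResNet backbone and rank a database by cosine similarity.
import torch
import torch.nn.functional as F
import torchvision.models as models

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()              # keep 2048-d pooled features
backbone.eval()

@torch.no_grad()
def embed(batch):                              # batch: (N, 3, 224, 224), normalized
    return F.normalize(backbone(batch), dim=1)

db = embed(torch.randn(100, 3, 224, 224))      # placeholder database images
q = embed(torch.randn(1, 3, 224, 224))         # placeholder query image
ranked = torch.argsort(db @ q.T, dim=0, descending=True).squeeze()
print(ranked[:5])                              # indices of the top-5 similar images
```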

Medical image retrieval using hybrid wavelet network classifier

  • Othman, Sufri
  • Jemai, Olfa
  • Zaied, Mourad
  • Ben Amar, Chokri
2014 Conference Proceedings, cited 3 times
Website
Nowadays, the amount of imaging data is rapidly increasing with the widespread dissemination of picture archiving in medical systems. Effective image retrieval systems are required to manage these complex and large image databases. For clinical applications, indexing medical images has become an essential and effective tool that assists monitoring in diagnosis and therapy. CBIR (Content-Based Image Retrieval) is one of the possible solutions for managing these databases effectively. To achieve this application, two key tasks must be ensured: indexing medical images and classification. Accordingly, in this work, medical images are indexed and classified using a wavelet network classifier (WNC) based on the fast wavelet transform (FWT), chosen for its robustness and its pertinent results in the classification domain.

Combining weakly and strongly supervised learning improves strong supervision in Gleason pattern classification

  • Otalora, S.
  • Marini, N.
  • Muller, H.
  • Atzori, M.
BMC Med Imaging 2021 Journal Article, cited 0 times
Website
BACKGROUND: One challenge in training deep convolutional neural network (CNN) models with whole slide images (WSIs) is providing the required large number of costly, manually annotated image regions. Strategies to alleviate the scarcity of annotated data include using transfer learning, data augmentation, and training the models with less expensive image-level annotations (weakly supervised learning). However, it is not clear how to combine the use of transfer learning in a CNN model when different data sources are available for training, or how to leverage the combination of large amounts of weakly annotated images with a set of local region annotations. This paper aims to evaluate CNN training strategies based on transfer learning to leverage the combination of weak and strong annotations in heterogeneous data sources. The trade-off between classification performance and annotation effort is explored by evaluating a CNN that learns from strong labels (region annotations) and is later fine-tuned on a dataset with less expensive weak (image-level) labels. RESULTS: As expected, the model performance on strongly annotated data steadily increases as the percentage of strong annotations used increases, reaching a performance comparable to pathologists ([Formula: see text]). Nevertheless, the performance sharply decreases when applied to the WSI classification scenario with [Formula: see text], and it only provides lower performance regardless of the number of annotations used. The model performance increases when fine-tuning the model for the task of Gleason scoring with the weak WSI labels [Formula: see text]. CONCLUSION: Combining weak and strong supervision improves strong supervision in the classification of Gleason patterns using tissue microarrays (TMAs) and WSI regions. Our results provide effective strategies for training CNN models that combine few annotated data and heterogeneous data sources. The performance increases in the controlled TMA scenario with the number of annotations used to train the model. Nevertheless, the performance is hindered when the trained TMA model is applied directly to the more challenging WSI classification problem. This demonstrates that a good pretrained model for prostate cancer TMA image classification may lead to the best downstream model if fine-tuned on the WSI target dataset. We have made the source code repository for reproducing the experiments in the paper available at: https://github.com/ilmaro8/Digital_Pathology_Transfer_Learning.

Automatic localization of target vertebrae in spine surgery using fast CT-to-fluoroscopy (3D-2D) image registration

  • Otake, Y
  • Schafer, S
  • Stayman, JW
  • Zbijewski, W
  • Kleinszig, G
  • Graumann, R
  • Khanna, AJ
  • Siewerdsen, JH
2012 Conference Proceedings, cited 8 times
Website
Localization of target vertebrae is an essential step in minimally invasive spine surgery, with conventional methods relying on "level counting", i.e., manual counting of vertebrae under fluoroscopy starting from readily identifiable anatomy (e.g., the sacrum). The approach requires an undesirable amount of radiation and time, and is prone to counting errors due to the similar appearance of vertebrae in projection images; wrong-level surgery occurs in 1 of every ~3000 cases. This paper proposes a method to automatically localize target vertebrae in x-ray projections using 3D-2D registration between preoperative CT (in which vertebrae are preoperatively labeled) and intraoperative fluoroscopy. The registration uses an intensity-based approach with a gradient-based similarity metric and the CMA-ES algorithm for optimization. Digitally reconstructed radiographs (DRRs) and a robust similarity metric are computed on GPU to accelerate the process. Evaluation in clinical CT data included 5,000 PA and LAT projections randomly perturbed to simulate human variability in the setup of a mobile intraoperative C-arm. The method demonstrated 100% success for the PA view (projection error: 0.42 mm) and 99.8% success for the LAT view (projection error: 0.37 mm). An initial implementation on GPU provided automatic target localization within about 3 s, with further improvement underway via multi-GPU computation. The ability to automatically label vertebrae in fluoroscopy promises to streamline surgical workflow, improve patient safety, and reduce wrong-site surgeries, especially in large patients for whom manual methods are time consuming and error prone.
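The optimization loop can be sketched with the open-source `cma` package (pip install cma). In this hedged sketch, `render_drr` and `gradient_similarity` are hypothetical placeholders for the paper's GPU DRR generator and gradient-based similarity metric, not its actual code:

```python
# CMA-ES search over a 6-DoF pose that maximizes DRR/fluoroscopy similarity.
import numpy as np
import cma

def render_drr(pose):                          # pose: (tx, ty, tz, rx, ry, rz)
    raise NotImplementedError("project the preoperative CT at this pose")

def gradient_similarity(drr, fluoro):
    raise NotImplementedError("e.g., gradient correlation between the images")

def cost(pose, fluoro=None):
    return -gradient_similarity(render_drr(pose), fluoro)  # minimize negative

x0 = np.zeros(6)                               # initial pose estimate
es = cma.CMAEvolutionStrategy(x0, 10.0)        # sigma0 ~ expected setup error
# es.optimize(lambda p: cost(p, fluoro=observed_projection))
# best_pose = es.result.xbest
```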

Application of Sparse-Coding Super-Resolution to 16-Bit DICOM Images for Improving the Image Resolution in MRI

  • Ota, Junko
  • Umehara, Kensuke
  • Ishimaru, Naoki
  • Ishida, Takayuki
Open Journal of Medical Imaging 2017 Journal Article, cited 1 times
Website

Contrast-enhanced MRI synthesis using dense-dilated residual convolutions based 3D network toward elimination of gadolinium in neuro-oncology

  • Osman, A. F. I.
  • Tamam, N. M.
J Appl Clin Med Phys 2023 Journal Article, cited 0 times
Website
Recent studies have raised broad safety and health concerns about the use of gadolinium contrast agents during magnetic resonance imaging (MRI) to enhance identification of active tumors. In this paper, we developed a deep learning-based method for three-dimensional (3D) contrast-enhanced T1-weighted (T1) image synthesis from contrast-free image(s). The MR images of 1251 patients with glioma from the RSNA-ASNR-MICCAI BraTS Challenge 2021 dataset were used in this study. A 3D dense-dilated residual U-Net (DD-Res U-Net) was developed for contrast-enhanced T1 image synthesis from contrast-free image(s). The model was trained on a randomly split training set (n = 800) using a customized loss function and validated on a validation set (n = 200) to improve its generalizability. The generated images were quantitatively assessed against the ground truth on a test set (n = 251) using the mean absolute error (MAE), mean-squared error (MSE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), normalized mutual information (NMI), and Hausdorff distance (HDD) metrics. We also performed a qualitative visual similarity assessment between the synthetic and ground-truth images. The effectiveness of the proposed model was compared with a 3D U-Net baseline model and existing deep learning-based methods in the literature. Our proposed DD-Res U-Net model achieved promising performance for contrast-enhanced T1 synthesis in both quantitative metrics and perceptual evaluation on the test set (n = 251). Analysis of results on the whole brain region showed a PSNR (in dB) of 29.882 +/- 5.924, a SSIM of 0.901 +/- 0.071, a MAE of 0.018 +/- 0.013, a MSE of 0.002 +/- 0.002, a HDD of 2.329 +/- 9.623, and a NMI of 1.352 +/- 0.091 when using only T1 as input; and a PSNR (in dB) of 30.284 +/- 4.934, a SSIM of 0.915 +/- 0.063, a MAE of 0.017 +/- 0.013, a MSE of 0.001 +/- 0.002, a HDD of 1.323 +/- 3.551, and a NMI of 1.364 +/- 0.089 when combining T1 with other MRI sequences. Compared to the U-Net baseline model, our model revealed superior performance. Our model demonstrated excellent capability in generating synthetic contrast-enhanced T1 images from contrast-free MR image(s) of the whole brain region when using multiple contrast-free images as input. Without incorporating tumor mask information during network training, its performance in the tumor regions was inferior to that on the whole brain, and further improvement is required before gadolinium administration in neuro-oncology can be replaced.
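The quantitative comparison above can be reproduced in outline with scikit-image; MAE, MSE, PSNR, and SSIM are shown below on hypothetical normalized volumes (NMI and the Hausdorff distance follow the same pattern and are omitted):

```python
# Score a synthetic volume against ground truth with common image metrics.
import numpy as np
from skimage.metrics import (peak_signal_noise_ratio,
                             structural_similarity,
                             mean_squared_error)

rng = np.random.default_rng(0)
truth = rng.uniform(0, 1, size=(64, 64, 64)).astype(np.float32)
synth = np.clip(truth + rng.normal(0, 0.05, truth.shape), 0, 1).astype(np.float32)

print("MAE :", np.mean(np.abs(truth - synth)))
print("MSE :", mean_squared_error(truth, synth))
print("PSNR:", peak_signal_noise_ratio(truth, synth, data_range=1.0))
print("SSIM:", structural_similarity(truth, synth, data_range=1.0))
```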

Deep learning-based convolutional neural network for intramodality brain MRI synthesis

  • Osman, A. F. I.
  • Tamam, N. M.
J Appl Clin Med Phys 2022 Journal Article, cited 2 times
Website
PURPOSE: The existence of multicontrast magnetic resonance (MR) images increases the level of clinical information available for the diagnosis and treatment of brain cancer patients. However, acquiring the complete set of multicontrast MR images is not always practically feasible. In this study, we developed a state-of-the-art deep learning convolutional neural network (CNN) for image-to-image translation across three standard MRI contrasts of the brain. METHODS: The BRATS'2018 MRI dataset of 477 patients clinically diagnosed with glioma brain cancer was used in this study, with each patient having T1-weighted (T1), T2-weighted (T2), and FLAIR contrasts. It was randomly split into 64%, 16%, and 20% as training, validation, and test sets, respectively. We developed a U-Net model to learn the nonlinear mapping of a source image contrast to a target image contrast across the three MRI contrasts. The model was trained and validated with 2D paired MR images using a mean-squared error (MSE) cost function, the Adam optimizer with a 0.001 learning rate, and 120 epochs with a batch size of 32. The generated synthetic MR images were evaluated against the ground-truth images by computing the MSE, mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM). RESULTS: The synthetic MR images generated by our model were nearly indistinguishable from the real images on the testing dataset for all translations, except that synthetic FLAIR images had slightly lower quality and exhibited loss of detail. The ranges of average PSNR, MSE, MAE, and SSIM values over the six translations were 29.44-33.25 dB, 0.0005-0.0012, 0.0086-0.0149, and 0.932-0.946, respectively. Our results were as good as the best reported results by other deep learning models on BRATS datasets. CONCLUSIONS: Our U-Net model demonstrated that it can accurately perform image-to-image translation across brain MRI contrasts. It holds great promise for clinical use, enabling improved clinical decision-making and better diagnosis of brain cancer patients through the availability of multicontrast MRIs. This approach may be clinically relevant and represents a significant step toward efficiently filling the gap of absent MR sequences without additional scanning.

A Neuro-Fuzzy Based System for the Classification of Cells as Cancerous or Non-Cancerous

  • Omotosho, Adebayo
  • Oluwatobi, Asani Emmanuel
  • Oluwaseun, Ogundokun Roseline
  • Chukwuka, Ananti Emmanuel
  • Adekanmi, Adegun
International Journal of Medical Research & Health Sciences 2018 Journal Article, cited 0 times
Website

Image segmentation on GPGPUs: a cellular automata-based approach

  • Olmedo, Irving
  • Perez, Yessika Guerra
  • Johnson, James F
  • Raut, Lakshman
  • Hoe, David HK
2013 Conference Proceedings, cited 0 times
Website

Method for compressing DICOM images with bit-normalization and video CODECs

  • Oliveira, Marcos
  • Murta-Junior, Luiz O.
2021 Conference Paper, cited 0 times
Website
The constant increase in the volume of data generated by various medical modalities has generated discussions regarding the space needed for storage. Although storage and network bandwidth costs are decreasing, medical data production grows faster, forcing an increase in spending. The application of image compression and decompression techniques can meet these challenges while preserving all clinically relevant information. This research evaluates a lossy method that combines an adaptive normalization for each DICOM slice with volume compression using a video CODEC. Similarity metrics show that the best result in these tests was achieved by the method combining the normalization function and H.264, using FPS 60 and bitrate 120 as parameters with images in PNG format, where SSIM and CC reached the maximum value (1.00), PSNR was 77.02, and the compression ratio (CR) was 5.46, twice the CR of JPEG-LS and J2K.
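The per-slice bit-normalization step can be sketched with pydicom and NumPy: each 16-bit slice is mapped to 8 bits before the volume is handed to a video CODEC such as H.264. This is a hedged sketch, not the paper's implementation; the file paths are illustrative, and the per-slice min/max must be stored as side information to invert the mapping on decompression:

```python
# Map a 16-bit DICOM slice to 8 bits for video encoding.
import numpy as np
import pydicom

def normalize_slice(path):
    px = pydicom.dcmread(path).pixel_array.astype(np.float32)
    lo, hi = float(px.min()), float(px.max())
    out = np.round(255.0 * (px - lo) / max(hi - lo, 1.0)).astype(np.uint8)
    return out, (lo, hi)          # keep (lo, hi) to invert on decompression

# slices8, params = zip(*(normalize_slice(p) for p in dicom_paths))
# np.stack(slices8) can then be fed to an H.264 encoder (e.g., via ffmpeg).
```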

A Proposal for the Use of Scientific Workflows to Define Pipelines for Content-Based Medical Image Retrieval in a Distributed Environment

  • Oliveira, Luis Fernando Milano
2016 Thesis, cited 1 times
Website

Development of Clinically-Informed 3D Tumor Models for Microwave Imaging Applications

  • Oliveira, Barbara
  • O'Halloran, Martin
  • Conceicao, Raquel
  • Glavin, Martin
  • Jones, Edward
2016 Journal Article, cited 8 times
Website

An integrated convolutional neural network with attention guidance for improved performance of medical image classification

  • Öksüz, Coşku
  • Urhan, Oğuzhan
  • Güllü, Mehmet Kemal
Neural Computing and Applications 2023 Journal Article, cited 0 times
Today, it is essential to develop computer vision algorithms that are both highly effective and cost-effective for supporting physicians' decisions. The Convolutional Neural Network (CNN) is a deep learning architecture that enables learning relevant imaging features by simultaneously optimizing the feature extraction and classification phases, and it has a high potential to meet this need. On the other hand, the lack of low- and high-level local details in a CNN is an issue that can reduce task performance and prevent the network from focusing on the region of interest. To tackle this issue, in this study we propose an attention-guided CNN architecture that combines three lightweight encoders (the ensembled encoder) at the feature level to consolidate the feature maps with local details. The proposed model is validated on publicly available data sets for two commonly studied classification tasks, i.e., brain tumor and COVID-19 disease classification. Performance improvements of 2.21% and 1.32%, achieved for the brain tumor and COVID-19 classification tasks respectively, confirm our assumption that combining encoders recovers local details missed in a deeper encoder. In addition, the attention mechanism used after the ensembled encoder further improves performance by 2.29% for the brain tumor and 6.13% for the COVID-19 classification tasks. Moreover, our ensembled encoder with the attention mechanism enhances the focus on the region of interest by 4.4% in terms of the IoU score. Competitive performance scores achieved on each classification task against state-of-the-art methods indicate that the proposed model can be an effective tool for medical image classification.

Brain tumor classification using the fused features extracted from expanded tumor region

  • Öksüz, Coşku
  • Urhan, Oğuzhan
  • Güllü, Mehmet Kemal
Biomedical Signal Processing and Control 2022 Journal Article, cited 0 times
Website

Optothermal tissue response for laser-based investigation of thyroid cancer

  • Okebiorun, Michael O.
  • ElGohary, Sherif H.
Informatics in Medicine Unlocked 2020 Journal Article, cited 0 times
Website
To characterize imaging-based detection of thyroid cancer, we implemented a simulation of the optical and thermal response in an optical investigation of thyroid cancer. We employed the 3D Monte Carlo method and the bio-heat equation to determine the fluence and temperature distribution via the Molecular Optical Simulation Environment (MOSE) with a finite element (FE) simulator. The optothermal effect of a neck-surface source is also compared to a trachea-based source. Results show the fluence and temperature distribution in a realistic 3D neck model with both endogenous and hypothetical tissue-specific exogenous contrast agents. They also reveal that trachea illumination yields a factor-of-ten greater absorption and temperature change than neck-surface illumination, and that tumor-specific exogenous contrast agents produce relatively higher absorption and temperature change in the tumors, which could assist clinicians and researchers in improving and better understanding the region's response to laser-based diagnosis.
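The thermal side of such simulations is commonly governed by the Pennes bioheat equation; a standard form, with the laser term entering as an absorbed-power source, is shown below (the paper's exact source terms may differ):

```latex
\rho c \,\frac{\partial T}{\partial t}
  = \nabla \cdot \left( k \nabla T \right)
  + \rho_b c_b \,\omega_b \left( T_a - T \right)
  + Q_{\text{met}} + Q_{\text{laser}}
```

Here rho, c, and k are tissue density, specific heat, and thermal conductivity; omega_b is the blood perfusion rate with blood properties rho_b, c_b and arterial temperature T_a; Q_met is metabolic heat; and Q_laser is the optical heat source, proportional to the absorption coefficient times the local fluence rate computed by the Monte Carlo step.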

Artifact Reduction for Sparse-view CT using Deep Learning with Band Patch

  • Okamoto, Takayuki
  • Ohnishi, Takashi
  • Haneishi, Hideaki
2022 Journal Article, cited 1 times
Website
Sparse-view computed tomography (CT), an imaging technique that reduces the number of projections, can reduce the total scan duration and radiation dose. However, sparse data sampling causes streak artifacts on images reconstructed with analytical algorithms. In this paper, we propose an artifact reduction method for sparse-view CT using deep learning. We developed a light-weight fully convolutional network to estimate a fully sampled sinogram from a sparse-view sinogram by enlargement in the vertical direction. Furthermore, we introduced the band patch, a rectangular region cropped in the vertical direction, as an input image for the network based on the sinogram’s characteristics. Comparison experiments using a swine rib dataset of micro-CT scans and a chest dataset of clinical CT scans were conducted to compare the proposed method, improved U-net from a previous study, and the U-net with band patches. The experimental results showed that the proposed method achieved the best performance and the U-net with band patches had the second-best result in terms of accuracy and prediction time. In addition, the reconstructed images of the proposed method suppressed streak artifacts while preserving the object’s structural information. We confirmed that the proposed method and band patch are useful for artifact reduction for sparse-view CT.

Memory-efficient 3D connected component labeling with parallel computing

  • Ohira, Norihiro
Signal, Image and Video Processing 2017 Journal Article, cited 0 times
Website

Reproducibility of radiomic features using network analysis and its application in Wasserstein k-means clustering

  • Oh, Jung Hun
  • Apte, Aditya P.
  • Katsoulakis, Evangelia
  • Riaz, Nadeem
  • Hatzoglou, Vaios
  • Yu, Yao
  • Mahmood, Usman
  • Veeraraghavan, Harini
  • Pouryahya, Maryam
  • Iyer, Aditi
  • Shukla-Dave, Amita
  • Tannenbaum, Allen
  • Lee, Nancy Y.
  • Deasy, Joseph O.
Journal of Medical Imaging 2021 Journal Article, cited 0 times
Website
Purpose: The goal of this study is to develop innovative methods for identifying radiomic features that are reproducible over varying image acquisition settings. Approach: We propose a regularized partial correlation network to identify reliable and reproducible radiomic features. This approach was tested on two radiomic feature sets generated using two different reconstruction methods on computed tomography (CT) scans from a cohort of 47 lung cancer patients. The largest common network component between the two networks was tested on phantom data consisting of five cancer samples. To further investigate whether radiomic features found can identify phenotypes, we propose a k-means clustering algorithm coupled with the optimal mass transport theory. This approach following the regularized partial correlation network analysis was tested on CT scans from 77 head and neck squamous cell carcinoma (HNSCC) patients in the Cancer Imaging Archive (TCIA) and validated using an independent dataset. Results: A set of common radiomic features was found in relatively large network components between the resultant two partial correlation networks resulting from a cohort of lung cancer patients. The reliability and reproducibility of those radiomic features were further validated on phantom data using the Wasserstein distance. Further analysis using the network-based Wasserstein k-means algorithm on the TCIA HNSCC data showed that the resulting clusters separate tumor subsites as well as HPV status, and this was validated on an independent dataset. Conclusion: We showed that a network-based analysis enables identifying reproducible radiomic features and use of the selected set of features can enhance clustering results.
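The regularized partial-correlation network can be illustrated with the graphical lasso: a sparse precision matrix Theta is estimated and converted to partial correlations via rho_ij = -Theta_ij / sqrt(Theta_ii * Theta_jj). A minimal sketch under that assumption (the paper's exact regularization and data are not reproduced):

```python
# Sparse partial-correlation network over radiomic features via graphical lasso.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
X = rng.normal(size=(47, 30))             # 47 patients x 30 radiomic features

model = GraphicalLassoCV().fit(X)
theta = model.precision_
d = np.sqrt(np.diag(theta))
partial_corr = -theta / np.outer(d, d)    # rho_ij = -theta_ij / sqrt(ii * jj)
np.fill_diagonal(partial_corr, 1.0)

# Network edges: feature pairs with a nonzero partial correlation.
edges = np.argwhere(np.triu(np.abs(partial_corr) > 1e-8, k=1))
print(f"{len(edges)} edges among {X.shape[1]} features")
```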

A novel prostate segmentation method: triple fusion model with hybrid loss

  • Ocal, Hakan
  • Barisci, Necaattin
Neural Computing and Applications 2022 Journal Article, cited 0 times
Website
Early and rapid diagnosis of prostate cancer, one of the most common cancers among men, has become increasingly important, and many methods are used for it. Compared to other imaging methods, magnetic resonance imaging (MRI) of the prostate gland is preferred because angular imaging (axial, sagittal, and coronal) provides precise information. However, diagnosing the disease from these MR images is time-consuming. For example, imaging differences between MR devices and the inhomogeneous and inconsistent appearance of the prostate are significant challenges for prostate segmentation, making manual segmentation of prostate images difficult. In recent years, computer-aided intelligent architectures (deep learning-based architectures) have been used to overcome manual segmentation of prostate images. Thanks to their end-to-end automatic deep convolutional neural networks (DCNNs), these architectures can now perform in seconds segmentations that used to take days manually. Inspired by the studies mentioned above, this study proposes a novel DCNN approach for prostate segmentation that combines a ResUnet 2D with residual blocks and an Edge Attention Vnet 3D architecture. In addition, a weighted focal Tversky loss function, proposed here for the first time, significantly increased the architecture's performance. Evaluation experiments were performed on the MICCAI 2012 Prostate Segmentation Challenge dataset (PROMISE12) and the NCI-ISBI 2013 (NCI-ISBI-13) Prostate Segmentation Challenge dataset. In the tests performed, Dice scores of 91.92% and 91.15% on the whole prostate volume were obtained on the PROMISE12 and NCI-ISBI-13 datasets, respectively. Comparative analyses show that the advantages and robustness of our method are superior to those of state-of-the-art approaches.

Prostate Segmentation via Dynamic Fusion Model

  • Ocal, Hakan
  • Barisci, Necaattin
Arabian Journal for Science and Engineering 2022 Journal Article, cited 0 times
Website
Nowadays, many different methods are used in diagnosing prostate cancer. Among these methods, MRI-based imaging provides more precise information than others by obtaining images of the prostate from different angles (axial, sagittal, coronal). However, manually segmenting these images is very time-consuming and laborious. Another challenge is the inhomogeneous and inconsistent appearance around the prostate borders, which is essential for cancer diagnosis. Scientists are therefore working intensively on deep learning-based techniques to identify prostate boundaries more efficiently and with high accuracy. In this study, a dynamic fusion architecture is proposed, in which the Unet + Resnet3D and Unet + Resnet2D models are fused. Evaluation experiments were performed on the MICCAI 2012 Prostate Segmentation Challenge dataset (PROMISE12) and the NCI-ISBI 2013 (NCI-ISBI-13) Prostate Segmentation Challenge dataset. Comparative analyses show that the advantages and robustness of our method are superior to those of state-of-the-art approaches.

Autocorrection of lung boundary on 3D CT lung cancer images

  • Nurfauzi, R.
  • Nugroho, H. A.
  • Ardiyanto, I.
  • Frannita, E. L.
Journal of King Saud University - Computer and Information Sciences 2019 Journal Article, cited 0 times
Website
Lung cancer in men has the highest mortality rate among all types of cancer. Juxta-pleural and juxta-vascular nodules are the most common nodules located on the lung surface. A computer-aided detection (CADe) system is effective for assisting radiologists in diagnosing lung nodules. However, the lung segmentation step requires sophisticated methods when juxta-pleural and juxta-vascular nodules are present. Fast computation and low error in covering nodule areas are the aims of this study. The proposed method consists of five stages, namely ground truth (GT) extraction, data preparation, tracheal extraction, separation of lung fusion, and lung border correction. The data consist of 57 3D CT lung cancer images selected from the LIDC-IDRI dataset, with nodules defined as the outer areas labeled by four radiologists. The proposed method achieves the fastest computational time of 0.32 s per slice, 60 times faster than conventional adaptive border marching (ABM). Moreover, it produces a nodule under-segmentation value as low as 14.6%. This indicates that the proposed method has the potential to be embedded in a lung CADe system to cover juxta-pleural and juxta-vascular nodule areas in lung segmentation.

Radiogenomic modeling predicts survival-associated prognostic groups in glioblastoma

  • Nuechterlein, Nicholas
  • Li, Beibin
  • Feroze, Abdullah
  • Holland, Eric C
  • Shapiro, Linda
  • Haynor, David
  • Fink, James
  • Cimino, Patrick J
Neuro-oncology advances 2021 Journal Article, cited 0 times
Website

Medical Image Retrieval Using Vector Quantization and Fuzzy S-tree

  • Nowaková, Jana
  • Prílepok, Michal
  • Snášel, Václav
Journal of Medical Systems 2017 Journal Article, cited 33 times
Website
The aim of this article is to present a novel method for fuzzy medical image retrieval (FMIR) using vector quantization (VQ) with fuzzy signatures in conjunction with fuzzy S-trees. In the past, searching for similar pictures was based not on similar content (e.g., shapes, colour) but on the picture name. Some methods exist for this purpose, but there is still room for the development of more efficient ones. The proposed image retrieval system is used for finding similar images, in our case in the medical area (mammography), and for creating a list of similar images/cases. The created list is used for assessing the nature of the finding, i.e., whether it is malignant or benign. The suggested method is compared to a method using the Normalized Compression Distance (NCD) instead of fuzzy signatures and the fuzzy S-tree. The method with NCD is useful for creating the list of similar cases for malignancy assessment, but it is not able to capture the area of interest in the image. The proposed method is going to be added to a complex decision support system to help determine appropriate healthcare according to the experience of similar previous cases.

Projection-domain scatter correction for cone beam computed tomography using a residual convolutional neural network

  • Nomura, Yusuke
  • Xu, Qiong
  • Shirato, Hiroki
  • Shimizu, Shinichi
  • Xing, Lei
Med Phys 2019 Journal Article, cited 0 times
Website
PURPOSE: Scatter is a major factor degrading the image quality of cone beam computed tomography (CBCT). Conventional scatter correction strategies require handcrafted analytical models with ad hoc assumptions, which often leads to less accurate scatter removal. This study aims to develop an effective scatter correction method using a residual convolutional neural network (CNN). METHODS: A U-net based 25-layer CNN was constructed for CBCT scatter correction. The establishment of the model consists of three steps: model training, validation, and testing. For model training, a total of 1800 pairs of x-ray projection and the corresponding scatter-only distribution in nonanthropomorphic phantoms taken in full-fan scan were generated using Monte Carlo simulation of a CBCT scanner installed with a proton therapy system. An end-to-end CNN training was implemented with two major loss functions for 100 epochs with a mini-batch size of 10. Image rotations and flips were randomly applied to augment the training datasets during training. For validation, 200 projections of a digital head phantom were collected. The proposed CNN-based method was compared to a conventional projection-domain scatter correction method named fast adaptive scatter kernel superposition (fASKS) method using 360 projections of an anthropomorphic head phantom. Two different loss functions were applied for the same CNN to evaluate the impact of loss functions on the final results. Furthermore, the CNN model trained with full-fan projections was fine-tuned for scatter correction in half-fan scan by using transfer learning with additional 360 half-fan projection pairs of nonanthropomorphic phantoms. The tuned-CNN model for half-fan scan was compared with the fASKS method as well as the CNN-based method without the fine-tuning using additional lung phantom projections. RESULTS: The CNN-based method provides projections with significantly reduced scatter and CBCT images with more accurate Hounsfield Units (HUs) than that of the fASKS-based method. Root mean squared error of the CNN-corrected projections was improved to 0.0862 compared to 0.278 for uncorrected projections or 0.117 for the fASKS-corrected projections. The CNN-corrected reconstruction provided better HU quantification, especially in regions near the air or bone interfaces. All four image quality measures, which include mean absolute error (MAE), mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM), indicated that the CNN-corrected images were significantly better than that of the fASKS-corrected images. Moreover, the proposed transfer learning technique made it possible for the CNN model trained with full-fan projections to be applicable to remove scatters in half-fan projections after fine-tuning with only a small number of additional half-fan training datasets. SSIM value of the tuned-CNN-corrected images was 0.9993 compared to 0.9984 for the non-tuned-CNN-corrected images or 0.9990 for the fASKS-corrected images. Finally, the CNN-based method is computationally efficient - the correction time for the 360 projections only took less than 5 s in the reported experiments on a PC (4.20 GHz Intel Core-i7 CPU) with a single NVIDIA GTX 1070 GPU. CONCLUSIONS: The proposed deep learning-based method provides an effective tool for CBCT scatter correction and holds significant value for quantitative imaging and image-guided radiation therapy.

Modified fast adaptive scatter kernel superposition (mfASKS) correction and its dosimetric impact on CBCT‐based proton therapy dose calculation

  • Nomura, Yusuke
  • Xu, Qiong
  • Peng, Hao
  • Takao, Seishin
  • Shimizu, Shinichi
  • Xing, Lei
  • Shirato, Hiroki
Medical Physics 2020 Journal Article, cited 0 times
Website

Image Quality Evaluation in Computed Tomography Using Super-resolution Convolutional Neural Network

  • Nm, Kibok
  • Cho, Jeonghyo
  • Lee, Seungwan
  • Kim, Burnyoung
  • Yim, Dobin
  • Lee, Dahye
2020 Journal Article, cited 0 times
High-quality computed tomography (CT) images enable precise lesion detection and accurate diagnosis. Many studies have been performed to improve CT image quality while reducing radiation dose. Recently, deep learning-based techniques for improving CT image quality have been developed and show superior performance compared to conventional techniques. In this study, a super-resolution convolutional neural network (SRCNN) model was used to improve the spatial resolution of CT images, and image quality was evaluated as a function of the hyperparameters that determine the performance of the SRCNN model. Profile, structural similarity (SSIM), peak signal-to-noise ratio (PSNR), and full-width at half-maximum (FWHM) were measured to evaluate the performance of the SRCNN model. The results showed that the performance of the SRCNN model improved with increasing numbers of epochs and training sets, and that the learning rate needed to be optimized to obtain acceptable image quality. Therefore, the SRCNN model with optimal hyperparameters is able to improve CT image quality.

Leveraging different learning styles for improved knowledge distillation in biomedical imaging

  • Niyaz, Usma
  • Sambyal, Abhishek Singh
  • Bathula, Deepti R.
2024 Journal Article, cited 0 times
Learning style refers to a type of training mechanism adopted by an individual to gain new knowledge. As suggested by the VARK model, humans have different learning preferences, like Visual (V), Auditory (A), Read/Write (R), and Kinesthetic (K), for acquiring and effectively processing information. Our work endeavors to leverage this concept of knowledge diversification to improve the performance of model compression techniques like Knowledge Distillation (KD) and Mutual Learning (ML). Consequently, we use a single-teacher and two-student network in a unified framework that not only allows for the transfer of knowledge from teacher to students (KD) but also encourages collaborative learning between students (ML). Unlike the conventional approach, where the teacher shares the same knowledge in the form of predictions or feature representations with the student network, our proposed approach employs a more diversified strategy by training one student with predictions and the other with feature maps from the teacher. We further extend this knowledge diversification by facilitating the exchange of predictions and feature maps between the two student networks, enriching their learning experiences. We have conducted comprehensive experiments with three benchmark datasets for both classification and segmentation tasks using two different network architecture combinations. These experimental results demonstrate that knowledge diversification in a combined KD and ML framework outperforms conventional KD or ML techniques (with similar network configuration) that only use predictions with an average improvement of 2%. Furthermore, consistent improvement in performance across different tasks, with various network architectures, and over state-of-the-art techniques establishes the robustness and generalizability of the proposed model.
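A hedged sketch of the diversified distillation idea: one student receives the teacher's softened predictions, the other the teacher's feature maps, and the students additionally exchange knowledge with each other. The loss terms below are standard building blocks; the networks, weighting, and variable names are placeholders, not the paper's:

```python
# Distillation losses: soft-label KL on predictions, MSE on feature maps.
import torch
import torch.nn.functional as F

def kd_pred_loss(student_logits, teacher_logits, T=4.0):
    # Soft-label distillation term (Hinton et al.), scaled by T^2.
    return F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T

def kd_feat_loss(student_feat, teacher_feat):
    return F.mse_loss(student_feat, teacher_feat)

# Inside a training step (logits/features assumed precomputed):
# loss_s1 = ce(s1_logits, y) + kd_pred_loss(s1_logits, t_logits) \
#           + kd_feat_loss(s1_feat, s2_feat.detach())      # ML via features
# loss_s2 = ce(s2_logits, y) + kd_feat_loss(s2_feat, t_feat) \
#           + kd_pred_loss(s2_logits, s1_logits.detach())  # ML via predictions
```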

Segmentation of lung from CT using various active contour models

  • Nithila, Ezhil E
  • Kumar, SS
Biomedical Signal Processing and Control 2018 Journal Article, cited 0 times
Website

MOB-CBAM: A dual-channel attention-based deep learning generalizable model for breast cancer molecular subtypes prediction using mammograms

  • Nissar, I.
  • Alam, S.
  • Masood, S.
  • Kashif, M.
Comput Methods Programs Biomed 2024 Journal Article, cited 0 times
Website
BACKGROUND AND OBJECTIVE: Deep Learning models have emerged as a significant tool in generating efficient solutions for complex problems including cancer detection, as they can analyze large amounts of data with high efficiency and performance. Recent medical studies highlight the significance of molecular subtype detection in breast cancer, aiding the development of personalized treatment plans as different subtypes of cancer respond better to different therapies. METHODS: In this work, we propose a novel lightweight dual-channel attention-based deep learning model MOB-CBAM that utilizes the backbone of MobileNet-V3 architecture with a Convolutional Block Attention Module to make highly accurate and precise predictions about breast cancer. We used the CMMD mammogram dataset to evaluate the proposed model in our study. Nine distinct data subsets were created from the original dataset to perform coarse and fine-grained predictions, enabling it to identify masses, calcifications, benign, malignant tumors and molecular subtypes of cancer, including Luminal A, Luminal B, HER-2 Positive, and Triple Negative. The pipeline incorporates several image pre-processing techniques, including filtering, enhancement, and normalization, for enhancing the model's generalization ability. RESULTS: While identifying benign versus malignant tumors, i.e., coarse-grained classification, the MOB-CBAM model produced exceptional results with 99 % accuracy, precision, recall, and F1-score values of 0.99 and MCC of 0.98. In terms of fine-grained classification, the MOB-CBAM model has proven to be highly efficient in accurately identifying mass with (benign/malignant) and calcification with (benign/malignant) classification tasks with an impressive accuracy rate of 98 %. We have also cross-validated the efficiency of the proposed MOB-CBAM deep learning architecture on two datasets: MIAS and CBIS-DDSM. On the MIAS dataset, an accuracy of 97 % was reported for the task of classifying benign, malignant, and normal images, while on the CBIS-DDSM dataset, an accuracy of 98 % was achieved for the classification of mass with either benign or malignant, and calcification with benign and malignant tumors. CONCLUSION: This study presents lightweight MOB-CBAM, a novel deep learning framework, to address breast cancer diagnosis and subtype prediction. The model's innovative incorporation of the CBAM enhances precise predictions. The extensive evaluation of the CMMD dataset and cross-validation on other datasets affirm the model's efficacy.
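The Convolutional Block Attention Module named above is a published module (Woo et al., 2018): channel attention followed by spatial attention. A minimal PyTorch sketch is shown below; the MobileNet-V3 backbone and training pipeline are not reproduced, and the layer sizes are illustrative:

```python
# CBAM: channel attention (shared MLP over avg/max pooling), then spatial
# attention (conv over channel-wise avg/max maps).
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False))
        self.spatial = nn.Conv2d(2, 1, spatial_kernel,
                                 padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        # Channel attention over global avg- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention over channel-wise avg and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

feat = torch.randn(2, 64, 32, 32)
print(CBAM(64)(feat).shape)        # torch.Size([2, 64, 32, 32])
```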

Computer-aided diagnosis of lung nodule using gradient tree boosting and Bayesian optimization

  • Nishio, Mizuho
  • Nishizawa, Mitsuo
  • Sugiyama, Osamu
  • Kojima, Ryosuke
  • Yakami, Masahiro
  • Kuroda, Tomohiro
  • Togashi, Kaori
PLoS One 2018 Journal Article, cited 3 times
Website

Computer-aided Diagnosis for Lung Cancer: Usefulness of Nodule Heterogeneity

  • Nishio, Mizuho
  • Nagashima, Chihiro
Academic Radiology 2017 Journal Article, cited 12 times
Website
RATIONALE AND OBJECTIVES: To develop a computer-aided diagnosis system to differentiate between malignant and benign nodules. MATERIALS AND METHODS: Seventy-three lung nodules revealed on 60 sets of computed tomography (CT) images were analyzed. Contrast-enhanced CT was performed in 46 CT examinations. The images were provided by the LUNGx Challenge, and the ground truth of the lung nodules was unavailable; a surrogate ground truth was, therefore, constructed by radiological evaluation. Our proposed method involved novel patch-based feature extraction using principal component analysis, image convolution, and pooling operations. This method was compared to three other systems for the extraction of nodule features: histogram of CT density, local binary pattern on three orthogonal planes, and three-dimensional random local binary pattern. The probabilistic outputs of the systems and surrogate ground truth were analyzed using receiver operating characteristic analysis and area under the curve. The LUNGx Challenge team also calculated the area under the curve of our proposed method based on the actual ground truth of their dataset. RESULTS: Based on the surrogate ground truth, the areas under the curve were as follows: histogram of CT density, 0.640; local binary pattern on three orthogonal planes, 0.688; three-dimensional random local binary pattern, 0.725; and the proposed method, 0.837. Based on the actual ground truth, the area under the curve of the proposed method was 0.81. CONCLUSIONS: The proposed method could capture discriminative characteristics of lung nodules and was useful for the differentiation between malignant and benign nodules.
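A hedged sketch of a patch-based PCA feature pipeline of the kind described: PCA over sampled patches yields filters, the image is convolved with each filter, and pooling summarizes the responses into a feature vector. The patch size, component count, and pooling choice are illustrative, not the paper's:

```python
# Patch PCA -> convolution -> pooling as a simple nodule feature extractor.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_extraction.image import extract_patches_2d
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))                     # placeholder nodule ROI

patches = extract_patches_2d(img, (7, 7), max_patches=2000, random_state=0)
flat = patches.reshape(len(patches), -1)
flat -= flat.mean(axis=1, keepdims=True)            # remove per-patch DC offset

pca = PCA(n_components=8).fit(flat)
filters = pca.components_.reshape(-1, 7, 7)         # PCA basis as conv filters

features = [convolve2d(img, f, mode="valid").max() for f in filters]  # max-pool
print(np.round(features, 3))                        # 8-dim feature vector
```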

Robust radiogenomics approach to the identification of EGFR mutations among patients with NSCLC from three different countries using topologically invariant Betti numbers

  • Ninomiya, K.
  • Arimura, H.
  • Chan, W. Y.
  • Tanaka, K.
  • Mizuno, S.
  • Muhammad Gowdh, N. F.
  • Yaakup, N. A.
  • Liam, C. K.
  • Chai, C. S.
  • Ng, K. H.
PLoS One 2021 Journal Article, cited 0 times
Website
OBJECTIVES: To propose a novel robust radiogenomics approach to the identification of epidermal growth factor receptor (EGFR) mutations among patients with non-small cell lung cancer (NSCLC) using Betti numbers (BNs). MATERIALS AND METHODS: Contrast-enhanced computed tomography (CT) images of 194 multi-racial NSCLC patients (79 EGFR mutants and 115 wildtypes) were collected from three different countries using five manufacturers' scanners with a variety of scanning parameters. Ninety-nine cases obtained from the University of Malaya Medical Centre (UMMC) in Malaysia were used for training and validation. Forty-one cases collected from the Kyushu University Hospital (KUH) in Japan and fifty-four cases obtained from The Cancer Imaging Archive (TCIA) in America were used for testing. Radiomic features were obtained from BN maps, which represent topologically invariant heterogeneous characteristics of lung cancer on CT images, by applying histogram- and texture-based feature computations. A BN-based signature was determined using support vector machine (SVM) models with the best combination of features that maximized a robustness index (RI), defined to favor a higher total area under the receiver operating characteristic curve (AUC) and a lower difference in AUC between training and validation. The SVM model was built using the signature and optimized in five-fold cross-validation. The BN-based model was compared to conventional original-image (OI)- and wavelet-decomposition (WD)-based models with respect to the RI between validation and test. RESULTS: The BN-based model showed a higher RI of 1.51 compared with the models based on the OI (RI: 1.33) and the WD (RI: 1.29). CONCLUSION: The proposed model showed higher robustness than the conventional models in the identification of EGFR mutations among NSCLC patients. The results suggest the robustness of the BN-based approach against variations in image scanners and scanning parameters.
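As a rough illustration of the Betti-number idea: for a 2D binary image, b0 counts connected components and b1 counts holes, recoverable from the Euler characteristic chi = b0 - b1. A hedged NumPy/scikit-image sketch, with an illustrative threshold rather than the paper's BN-map construction:

```python
# Betti numbers of a thresholded 2D patch via labeling and Euler characteristic.
import numpy as np
from skimage.measure import label, euler_number

rng = np.random.default_rng(0)
patch = rng.uniform(size=(32, 32))
binary = patch > 0.5                        # binarize at one threshold level

b0 = label(binary, connectivity=2).max()    # connected components
chi = euler_number(binary, connectivity=2)  # Euler characteristic
b1 = b0 - chi                               # holes (2D: chi = b0 - b1)
print(b0, b1)
# Sweeping the threshold and computing locally yields BN maps over the image.
```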

Homological radiomics analysis for prognostic prediction in lung cancer patients

  • Ninomiya, Kenta
  • Arimura, Hidetaka
Physica Medica 2020 Journal Article, cited 0 times
Website

Deep cross-view co-regularized representation learning for glioma subtype identification

  • Ning, Zhenyuan
  • Tu, Chao
  • Di, Xiaohui
  • Feng, Qianjin
  • Zhang, Yu
Medical Image Analysis 2021 Journal Article, cited 0 times
Website
The new subtypes of diffuse gliomas are recognized by the World Health Organization (WHO) on the basis of genotypes, e.g., isocitrate dehydrogenase and chromosome arms 1p/19q, in addition to the histologic phenotype. Glioma subtype identification can provide valid guidance for both risk-benefit assessment and clinical decision-making. Feature representations of gliomas in magnetic resonance imaging (MRI) have been prevalent for revealing underlying subtype status. However, since gliomas are highly heterogeneous tumors with quite variable imaging phenotypes, learning discriminative feature representations of gliomas in MRI remains challenging. In this paper, we propose a deep cross-view co-regularized representation learning framework for glioma subtype identification, in which view representation learning and multiple constraints are integrated into a unified paradigm. Specifically, we first learn latent view-specific representations based on cross-view images generated from MRI via a bi-directional mapping connecting the original imaging space and the latent space, and employ a view-correlated regularizer and an output-consistent regularizer in the latent space to explore view correlation and derive view consistency, respectively. We further learn view-sharable representations, which can exploit complementary information across multiple views, by projecting the view-specific representations into a holistically shared space and enhancing them via an adversarial learning strategy. Finally, the view-specific and view-sharable representations are incorporated for identifying glioma subtype. Experimental results on multi-site datasets demonstrate that the proposed method outperforms several state-of-the-art methods in detection of glioma subtype status.

Integrative analysis of cross-modal features for the prognosis prediction of clear cell renal cell carcinoma

  • Ning, Zhenyuan
  • Pan, Weihao
  • Chen, Yuting
  • Xiao, Qing
  • Zhang, Xinsen
  • Luo, Jiaxiu
  • Wang, Jian
  • Zhang, Yuan
Bioinformatics 2020 Journal Article, cited 0 times
Website
MOTIVATION: As a highly heterogeneous disease, clear cell renal cell carcinoma (ccRCC) has quite variable clinical behavior. Prognostic biomarkers play a crucial role in stratifying patients suffering from ccRCC to avoid over- and under-treatment. Research based on hand-crafted features and single-modal data has been widely conducted to predict the prognosis of ccRCC. However, these experience-dependent methods, which neglect the synergy among multimodal data, have limited capacity to perform accurate prediction. Inspired by the complementary information among multimodal data and the successful application of convolutional neural networks (CNNs) in medical image analysis, a novel framework was proposed to improve prediction performance. RESULTS: We proposed a cross-modal feature-based integrative framework, in which deep features extracted from computed tomography/histopathological images using CNNs were combined with eigengenes generated from functional genomic data to construct a prognostic model for ccRCC. Results showed that our proposed model can stratify high- and low-risk subgroups with significant difference (P-value < 0.05) and outperform models based on single-modality features in the independent testing cohort [C-index, 0.808 (0.728-0.888)]. In addition, we explored the relationship between deep image features and eigengenes and made an attempt to explain deep image features from the viewpoint of genomic data. Notably, the integrative framework is applicable to prognosis prediction for other cancers with matched multimodal data. AVAILABILITY AND IMPLEMENTATION: https://github.com/zhang-de-lab/zhang-lab? from=singlemessage. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

Multi-modal magnetic resonance imaging-based grading analysis for gliomas by integrating radiomics and deep features

  • Ning, Z.
  • Luo, J.
  • Xiao, Q.
  • Cai, L.
  • Chen, Y.
  • Yu, X.
  • Wang, J.
  • Zhang, Y.
Ann Transl Med 2021 Journal Article, cited 0 times
Website
Background: To investigate the feasibility of integrating global radiomics and local deep features based on multi-modal magnetic resonance imaging (MRI) for developing a noninvasive glioma grading model. Methods: In this study, 567 patients [211 patients with glioblastomas (GBMs) and 356 patients with low-grade gliomas (LGGs)] treated between May 2006 and September 2018 were enrolled and divided into training (n=186), validation (n=47), and testing (n=334) cohorts. All patients underwent post-contrast enhanced T1-weighted and T2 fluid-attenuated inversion recovery MRI scanning. Radiomics and deep features (trained on 8,510 3D patches) were extracted to quantify the global and local information of gliomas, respectively. A kernel fusion-based support vector machine (SVM) classifier was used to integrate these multi-modal features for grading gliomas. The performance of the grading model was assessed using the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, the Delong test, and the t-test. Results: The AUC, sensitivity, and specificity of the model based on the combination of radiomics and deep features were 0.94 [95% confidence interval (CI): 0.85, 0.99], 86% (95% CI: 64%, 97%), and 92% (95% CI: 75%, 99%), respectively, for the validation cohort; and 0.88 (95% CI: 0.84, 0.91), 88% (95% CI: 80%, 93%), and 81% (95% CI: 76%, 86%), respectively, for the independent testing cohort from a local hospital. The developed model outperformed the models based only on either radiomics or deep features (Delong test, both P<0.001), and was also comparable to the clinical radiologists. Conclusions: This study demonstrated the feasibility of integrating multi-modal MRI radiomics and deep features to develop a promising noninvasive grading model for gliomas.
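Kernel fusion for an SVM can be sketched directly: one kernel per feature family, combined as a weighted sum and passed to an SVM with a precomputed kernel. This is a hedged sketch; the fusion weight and the data are placeholders, not the study's:

```python
# Weighted-sum kernel fusion of radiomics and deep features for an SVM.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_rad = rng.normal(size=(186, 100))       # radiomics features (training)
X_deep = rng.normal(size=(186, 256))      # deep features (training)
y = rng.integers(0, 2, size=186)          # LGG vs. GBM labels

w = 0.5                                   # fusion weight (tuned in practice)
K = w * rbf_kernel(X_rad) + (1 - w) * rbf_kernel(X_deep)

clf = SVC(kernel="precomputed").fit(K, y)
# Test time: K_test = w * rbf_kernel(Xt_rad, X_rad) + (1 - w) * rbf_kernel(Xt_deep, X_deep)
```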

Multi image super resolution of MRI images using generative adversarial network

  • Nimitha, U.
  • Ameer, P. M.
Journal of Ambient Intelligence and Humanized Computing 2024 Journal Article, cited 0 times
Website
In recent decades, computer-aided medical image analysis has become a popular technique for disease detection and diagnosis. Deep learning-based image processing techniques have gained popularity over conventional techniques in areas such as remote sensing, computer vision, and healthcare. However, hardware limitations, acquisition time, low radiation dose, and patient motion can limit the quality of medical images and result in low-resolution (LR) images, whereas high-resolution medical images localize disease regions more accurately. To enhance the quality of LR medical images, we propose a multi-image super-resolution architecture using a generative adversarial network (GAN), with a generator that employs multi-stage feature extraction incorporating both residual blocks and an attention network, and a discriminator with fewer convolutional layers to reduce computational complexity. The method enhances the resolution of LR prostate cancer MRI images by combining multiple MRI slices with slight spatial shifts, utilizing shared weights for feature extraction for each MRI image. Unlike super-resolution techniques in the literature, the network uses a perceptual loss computed by fine-tuning the VGG19 network with sparse categorical cross-entropy loss; the features for the perceptual loss are extracted from the final dense layer instead of a convolutional block of VGG19, as is common in the literature. Our experiments were conducted on MRI images with a resolution of 80x80 for low resolution and 320x320 for high resolution, achieving an upscaling factor of x4. The experimental analysis shows that the proposed model outperforms existing deep learning architectures for super-resolution, with an average peak signal-to-noise ratio (PSNR) of 30.58 ± 0.76 dB and an average structural similarity index measure (SSIM) of 0.8105 ± 0.0656 for prostate MRI images. The application of a CNN-based SVM classifier confirmed that enhancing the resolution of normal LR brain MRI images using super-resolution techniques did not result in any false positive cases. The architecture has the potential to be extended to other medical imaging modalities as well.

A convolutional neural network for fully automated blood SUV determination to facilitate SUR computation in oncological FDG-PET

  • Nikulin, P.
  • Hofheinz, F.
  • Maus, J.
  • Li, Y.
  • Butof, R.
  • Lange, C.
  • Furth, C.
  • Zschaeck, S.
  • Kreissl, M. C.
  • Kotzerke, J.
  • van den Hoff, J.
Eur J Nucl Med Mol Imaging 2021 Journal Article, cited 0 times
Website
PURPOSE: The standardized uptake value (SUV) is widely used for quantitative evaluation in oncological FDG-PET but has well-known shortcomings as a measure of the tumor's glucose consumption. The standard uptake ratio (SUR) of tumor SUV and arterial blood SUV (BSUV) possesses an increased prognostic value but requires image-based BSUV determination, typically in the aortic lumen. However, accurate manual ROI delineation requires care and imposes an additional workload, which makes the SUR approach less attractive for clinical routine. The goal of the present work was the development of a fully automated method for BSUV determination in whole-body PET/CT. METHODS: Automatic delineation of the aortic lumen was performed with a convolutional neural network (CNN), using the U-Net architecture. A total of 946 FDG PET/CT scans from several sites were used for network training (N = 366) and testing (N = 580). For all scans, the aortic lumen was manually delineated, avoiding areas affected by motion-induced attenuation artifacts or potential spillover from adjacent FDG-avid regions. Performance of the network was assessed using the fractional deviations of automatically and manually derived BSUVs in the test data. RESULTS: The trained U-Net yields BSUVs in close agreement with those obtained from manual delineation. Comparison of manually and automatically derived BSUVs shows excellent concordance: the mean relative BSUV difference was (mean +/- SD) = (- 0.5 +/- 2.2)% with a 95% confidence interval of [- 5.1,3.8]% and a total range of [- 10.0, 12.0]%. For four test cases, the derived ROIs were unusable (< 1 ml). CONCLUSION: CNNs are capable of performing robust automatic image-based BSUV determination. Integrating automatic BSUV derivation into PET data processing workflows will significantly facilitate SUR computation without increasing the workload in the clinical setting.
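A minimal sketch of the SUR arithmetic described here, assuming a SUV volume and an aorta ROI mask are already in hand; the toy volume, mask shapes, and tumor SUV value are made up for illustration.

```python
# Sketch of SUR computation and the BSUV agreement metric described above:
# blood SUV is the mean over an aorta ROI, SUR = tumor SUV / BSUV, and
# automatic vs. manual BSUVs are compared by their fractional deviation.
import numpy as np

def bsuv(suv_volume, aorta_mask):
    """Blood SUV: mean SUV inside the (CNN- or manually-derived) aorta ROI."""
    return float(suv_volume[aorta_mask].mean())

def sur(tumor_suv, blood_suv):
    """Standard uptake ratio of tumor SUV to arterial blood SUV."""
    return tumor_suv / blood_suv

suv = np.random.default_rng(0).gamma(2.0, 1.0, size=(64, 64, 64))  # toy PET volume
mask_auto = np.zeros(suv.shape, dtype=bool); mask_auto[30:34, 30:34, 10:50] = True
mask_manual = np.zeros(suv.shape, dtype=bool); mask_manual[30:34, 30:34, 12:52] = True

b_auto, b_manual = bsuv(suv, mask_auto), bsuv(suv, mask_manual)
frac_dev = (b_auto - b_manual) / b_manual          # fractional BSUV deviation
print(f"SUR = {sur(8.5, b_auto):.2f}, BSUV deviation = {100 * frac_dev:+.1f}%")
```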

Clinically Applicable Segmentation of Head and Neck Anatomy for Radiotherapy: Deep Learning Algorithm Development and Validation Study

  • Nikolov, Stanislav
  • Blackwell, Sam
  • Zverovitch, Alexei
  • Mendes, Ruheena
  • Livne, Michelle
  • De Fauw, Jeffrey
  • Patel, Yojan
  • Meyer, Clemens
  • Askham, Harry
  • Romera-Paredes, Bernadino
  • Kelly, Christopher
  • Karthikesalingam, Alan
  • Chu, Carlton
  • Carnell, Dawn
  • Boon, Cheng
  • D'Souza, Derek
  • Moinuddin, Syed Ali
  • Garie, Bethany
  • McQuinlan, Yasmin
  • Ireland, Sarah
  • Hampton, Kiarna
  • Fuller, Krystle
  • Montgomery, Hugh
  • Rees, Geraint
  • Suleyman, Mustafa
  • Back, Trevor
  • Hughes, Cian Owen
  • Ledsam, Joseph R
  • Ronneberger, Olaf
J Med Internet Res 2021 Journal Article, cited 0 times
Website
BACKGROUND: Over half a million individuals are diagnosed with head and neck cancer each year globally. Radiotherapy is an important curative treatment for this disease, but it requires manual time to delineate radiosensitive organs at risk. This planning process can delay treatment while also introducing interoperator variability, resulting in downstream radiation dose differences. Although auto-segmentation algorithms offer a potentially time-saving solution, the challenges in defining, quantifying, and achieving expert performance remain. OBJECTIVE: Adopting a deep learning approach, we aim to demonstrate a 3D U-Net architecture that achieves expert-level performance in delineating 21 distinct head and neck organs at risk commonly segmented in clinical practice. METHODS: The model was trained on a data set of 663 deidentified computed tomography scans acquired in routine clinical practice and with both segmentations taken from clinical practice and segmentations created by experienced radiographers as part of this research, all in accordance with consensus organ at risk definitions. RESULTS: We demonstrated the model's clinical applicability by assessing its performance on a test set of 21 computed tomography scans from clinical practice, each with 21 organs at risk segmented by 2 independent experts. We also introduced surface Dice similarity coefficient, a new metric for the comparison of organ delineation, to quantify the deviation between organ at risk surface contours rather than volumes, better reflecting the clinical task of correcting errors in automated organ segmentations. The model's generalizability was then demonstrated on 2 distinct open-source data sets, reflecting different centers and countries to model training. CONCLUSIONS: Deep learning is an effective and clinically applicable technique for the segmentation of the head and neck anatomy for radiotherapy. With appropriate validation studies and regulatory approvals, this system could improve the efficiency, consistency, and safety of radiotherapy pathways.
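The surface Dice similarity coefficient introduced here can be approximated with a short voxel-based sketch: count the boundary voxels of each mask that lie within a tolerance (in mm) of the other mask's boundary. The published metric operates on surface elements rather than voxels, so this is only an approximation of the idea, with synthetic masks standing in for real contours.

```python
# Simplified voxel-based sketch of the surface Dice similarity coefficient.
import numpy as np
from scipy import ndimage

def boundary(mask):
    # Boundary voxels: mask minus its morphological erosion.
    return mask & ~ndimage.binary_erosion(mask)

def surface_dice(mask_a, mask_b, spacing_mm, tol_mm):
    sa, sb = boundary(mask_a), boundary(mask_b)
    # Distance from every voxel to the nearest boundary voxel of each mask.
    d_to_a = ndimage.distance_transform_edt(~sa, sampling=spacing_mm)
    d_to_b = ndimage.distance_transform_edt(~sb, sampling=spacing_mm)
    close = (d_to_b[sa] <= tol_mm).sum() + (d_to_a[sb] <= tol_mm).sum()
    return close / (sa.sum() + sb.sum())

a = np.zeros((40, 40, 40), bool); a[10:30, 10:30, 10:30] = True
b = np.zeros_like(a); b[11:31, 10:30, 10:30] = True    # 1-voxel shifted contour
print("surface DSC @2mm:", surface_dice(a, b, spacing_mm=(1, 1, 1), tol_mm=2.0))
```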

A FRAMEWORK FOR AUTOMATIC COLORIZATION OF MEDICAL IMAGING

  • Nida, Nudrat
  • Sharif, Muhammad
  • Khan, Muhammad Usman Ghani
  • Yasmin, Mussarat
  • Fernandes, Steven Lawrence
IIOABJ 2016 Journal Article, cited 3 times
Website

Efficient Colorization of Medical Imaging based on Colour Transfer Method

  • Nida, Nudrat
  • Khan, Muhammad Usman Ghani
2016 Journal Article, cited 0 times
Website

Addition of MR imaging features and genetic biomarkers strengthens glioblastoma survival prediction in TCGA patients

  • Nicolasjilwan, Manal
  • Hu, Ying
  • Yan, Chunhua
  • Meerzaman, Daoud
  • Holder, Chad A
  • Gutman, David
  • Jain, Rajan
  • Colen, Rivka
  • Rubin, Daniel L
  • Zinn, Pascal O
  • Hwang, Scott N
  • Raghavan, Prashant
  • Hammoud, Dima A
  • Scarpace, Lisa M
  • Mikkelsen, Tom
  • Chen, James
  • Gevaert, Olivier
  • Buetow, Kenneth
  • Freymann, John
  • Kirby, Justin
  • Flanders, Adam E
  • Wintermark, Max
Journal of Neuroradiology 2014 Journal Article, cited 49 times
Website
PURPOSE: The purpose of our study was to assess whether a model combining clinical factors, MR imaging features, and genomics would better predict overall survival of patients with glioblastoma (GBM) than either individual data type. METHODS: The study was conducted leveraging The Cancer Genome Atlas (TCGA) effort supported by the National Institutes of Health. Six neuroradiologists reviewed MRI images from The Cancer Imaging Archive (http://cancerimagingarchive.net) of 102 GBM patients using the VASARI scoring system. The patients' clinical and genetic data were obtained from the TCGA website (http://www.cancergenome.nih.gov/). Patient outcome was measured in terms of overall survival time. The association between different categories of biomarkers and survival was evaluated using Cox analysis. RESULTS: The features that were significantly associated with survival were: (1) clinical factors: chemotherapy; (2) imaging: proportion of tumor contrast enhancement on MRI; and (3) genomics: HRAS copy number variation. The combination of these three biomarkers resulted in an incremental increase in the strength of prediction of survival, with the model that included clinical, imaging, and genetic variables having the highest predictive accuracy (area under the curve 0.679+/-0.068, Akaike's information criterion 566.7, P<0.001). CONCLUSION: A combination of clinical factors, imaging features, and HRAS copy number variation best predicts survival of patients with GBM.

Pulmonary nodule classification with deep residual networks

  • Nibali, Aiden
  • He, Zhen
  • Wollersheim, Dennis
International Journal of Computer Assisted Radiology and Surgery 2017 Journal Article, cited 19 times
Website
Lung cancer has the highest death rate among all cancers in the USA. In this work we focus on improving the ability of computer-aided diagnosis (CAD) systems to predict the malignancy of nodules from cropped CT images of lung nodules.

An EffcientNet-encoder U-Net Joint Residual Refinement Module with Tversky–Kahneman Baroni–Urbani–Buser loss for biomedical image Segmentation

  • Nham, Do-Hai-Ninh
  • Trinh, Minh-Nhat
  • Nguyen, Viet-Dung
  • Pham, Van-Truong
  • Tran, Thi-Thao
Biomedical Signal Processing and Control 2023 Journal Article, cited 0 times
Quantitative analysis of biomedical images is in increasing demand, both clinically and for modern computer vision approaches. While advanced procedures have recently been introduced, there is still a need to optimize network architectures and loss functions. Inspired by the pretrained EfficientNet-B4 and the refinement module used in boundary-aware problems, we propose a new two-stage network called EffcientNet-encoder U-Net Joint Residual Refinement Module, and we create a novel loss function called the Tversky–Kahneman Baroni–Urbani–Buser loss function. The loss function is built on the basis of the Baroni–Urbani–Buser coefficient and the Jaccard–Tanimoto coefficient and is reformulated in the Tversky–Kahneman probability-weighting function. We have evaluated our algorithm on four popular datasets: the 2018 Data Science Bowl Cell Nucleus Segmentation dataset, the Brain Tumor LGG Segmentation dataset, the Skin Lesion ISIC 2018 dataset and the MRI cardiac ACDC dataset. Several comparisons have shown that our proposed approach is noticeably promising, and some of the segmentation results provide new state-of-the-art results. The code is available at https://github.com/tswizzle141/An-EffcientNet-encoder-U-Net-Joint-Residual-Refinement-Module-with-TK-BUB-Loss.
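For background, the classic Tversky index that the paper's Tversky–Kahneman Baroni–Urbani–Buser loss generalizes can be sketched as below; this is the standard formulation only, not the authors' exact loss, and the alpha/beta values and data are illustrative.

```python
# Background sketch: the classic Tversky loss, the standard starting point
# that TK-BUB-style losses build on. alpha and beta weight false positives
# and false negatives, respectively.
import numpy as np

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-7):
    """pred, target: arrays of per-pixel foreground probabilities / labels."""
    tp = np.sum(pred * target)                 # true positives (soft)
    fp = np.sum(pred * (1 - target))           # false positives
    fn = np.sum((1 - pred) * target)           # false negatives
    tversky = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - tversky                       # loss: 1 - Tversky index

pred = np.random.default_rng(0).random((128, 128))   # toy soft prediction
target = (pred > 0.5).astype(float)                  # toy binary ground truth
print("Tversky loss:", tversky_loss(pred, target))
```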

Mortality Prediction Analysis among COVID-19 Inpatients Using Clinical Variables and Deep Learning Chest Radiography Imaging Features

  • Nguyen, X. V.
  • Dikici, E.
  • Candemir, S.
  • Ball, R. L.
  • Prevedello, L. M.
Tomography 2022 Journal Article, cited 0 times
Website
The emergence of the COVID-19 pandemic over a relatively brief interval illustrates the need for rapid data-driven approaches to facilitate clinical decision making. We examined a machine learning process to predict inpatient mortality among COVID-19 patients using clinical and chest radiographic data. Modeling was performed with a de-identified dataset of encounters prior to widespread vaccine availability. Non-imaging predictors included demographics, pre-admission clinical history, and past medical history variables. Imaging features were extracted from chest radiographs by applying a deep convolutional neural network with transfer learning. A multi-layer perceptron combining 64 deep learning features from chest radiographs with 98 patient clinical features was trained to predict mortality. The Local Interpretable Model-Agnostic Explanations (LIME) method was used to explain model predictions. Non-imaging data alone predicted mortality with an ROC-AUC of 0.87 +/- 0.03 (mean +/- SD), while the addition of imaging data improved prediction slightly (ROC-AUC: 0.91 +/- 0.02). The application of LIME to the combined imaging and clinical model found HbA1c values to contribute the most to model prediction (17.1 +/- 1.7%), while imaging contributed 8.8 +/- 2.8%. Age, gender, and BMI contributed 8.7%, 8.2%, and 7.1%, respectively. Our findings demonstrate a viable explainable AI approach to quantify the contributions of imaging and clinical data to COVID mortality predictions.

Ensemble of Convolutional Neural Networks for the Detection of Prostate Cancer in Multi-parametric MRI Scans

  • Nguyen, Quang H.
  • Gong, Mengnan
  • Liu, Tao
  • Youheng, Ou Yang
  • Nguyen, Binh P.
  • Chua, Matthew Chin Heng
2021 Book Section, cited 0 times
Website
Prostate MP-MRI scanning is a non-invasive method of detecting early-stage prostate cancer that is increasing in popularity. However, this imaging modality requires highly skilled radiologists to interpret the images, which incurs significant time and cost. Convolutional neural networks may alleviate the workload of radiologists by discriminating between prostate-tumor-positive scans and negative ones, allowing radiologists to focus their attention on the subset of scans that are neither clearly positive nor negative. The major challenges of such a system are speed and accuracy. To address these two challenges, a new approach using ensemble learning of convolutional neural networks (CNNs) was proposed in this paper, which leverages different imaging modalities including T2-weighted, B-value, ADC and Ktrans in a multi-parametric MRI clinical dataset with 330 samples from 204 patients for training and evaluation. Based on features extracted by the individual CNN models, the system classifies a prostate tumor as benign or malignant within seconds. The ensemble of the four individual CNN models for the different image types improves the prediction accuracy to 92%, with a sensitivity of 94.28% and a specificity of 86.67% on the given 50 test samples. The proposed framework potentially provides rapid classification for high-volume quantitative prostate tumor samples.

Enhancing MRI Brain Tumor Segmentation with an Additional Classification Network

  • Nguyen, Hieu T.
  • Le, Tung T.
  • Nguyen, Thang V.
  • Nguyen, Nhan T.
2021 Book Section, cited 0 times
Brain tumor segmentation plays an essential role in medical image analysis. In recent studies, deep convolutional neural networks (DCNNs) have proved extremely powerful for tackling tumor segmentation tasks. We propose in this paper a novel training method that enhances the segmentation results by adding an additional classification branch to the network. The whole network was trained end-to-end on the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2020 training dataset. On the BraTS test set, it achieved average Dice scores of 80.57%, 85.67% and 82.00%, as well as Hausdorff distances (95%) of 14.22, 7.36 and 23.27, respectively, for the enhancing tumor, the whole tumor and the tumor core.

Image-based assessment of extracellular mucin-to-tumor area predicts consensus molecular subtypes (CMS) in colorectal cancer

  • Nguyen, H. G.
  • Lundstrom, O.
  • Blank, A.
  • Dawson, H.
  • Lugli, A.
  • Anisimova, M.
  • Zlobec, I.
Mod Pathol 2022 Journal Article, cited 1 times
Website
The backbone of all colorectal cancer classifications, including the consensus molecular subtypes (CMS), highlights microsatellite instability (MSI) as a key molecular pathway. Although mucinous histology (generally defined as >50% extracellular mucin-to-tumor area) is a "typical" feature of MSI, it is not limited to this subgroup. Here, we investigate the association between CMS classification and the mucin-to-tumor area quantified using a deep learning algorithm, and the expression of specific mucins in predicting CMS groups and clinical outcome. A weakly supervised segmentation method was developed to quantify the extracellular mucin-to-tumor area in H&E images. Performance was compared to two pathologists' scores, then applied to two cohorts: (1) TCGA (n = 871 slides/412 patients), used for mucin-CMS group correlation, and (2) Bern (n = 775 slides/517 patients), for histopathological correlations and next-generation Tissue Microarray construction. TCGA and CPTAC (n = 85 patients) were used to further validate mucin detection and CMS classification by gene and protein expression analysis for MUC2, MUC4, MUC5AC and MUC5B. An excellent inter-observer agreement between the pathologists' scores and the algorithm was obtained (ICC = 0.92). In TCGA, mucinous tumors were predominantly CMS1 (25.7%), CMS3 (24.6%) and CMS4 (16.2%). Average mucin in CMS2 was 1.8%, indicating negligible amounts. RNA and protein expression of MUC2, MUC4, MUC5AC and MUC5B were low-to-absent in CMS2. MUC5AC protein expression correlated with aggressive tumor features (e.g., distant metastases (p = 0.0334), BRAF mutation (p < 0.0001), mismatch repair-deficiency (p < 0.0001)), and with unfavorable 5-year overall survival (44% versus 65% for positive/negative staining). MUC2 expression showed the opposite trend, correlating with less lymphatic (p = 0.0096) and venous vessel invasion (p = 0.0023), with no impact on survival. The absence of mucin-expressing tumors in CMS2 provides an important phenotype-genotype correlation. Together with MSI, mucinous histology may help predict CMS classification using only histopathology and should be considered in future image classifiers of molecular subtypes.

Synergy of Sex Differences in Visceral Fat Measured with CT and Tumor Metabolism Helps Predict Overall Survival in Patients with Renal Cell Carcinoma

  • Nguyen, Gerard K
  • Mellnick, Vincent M
  • Yim, Aldrin Kay-Yuen
  • Salter, Amber
  • Ippolito, Joseph E
Radiology 2018 Journal Article, cited 1 times
Website

Multisite concordance of apparent diffusion coefficient measurements across the NCI Quantitative Imaging Network

  • Newitt, David C
  • Malyarenko, Dariya
  • Chenevert, Thomas L
  • Quarles, C Chad
  • Bell, Laura
  • Fedorov, Andriy
  • Fennessy, Fiona
  • Jacobs, Michael A
  • Solaiyappan, Meiyappan
  • Hectors, Stefanie
  • Taouli, B.
  • Muzi, M.
  • Kinahan, P. E. E.
  • Schmainda, K. M.
  • Prah, M. A.
  • Taber, E. N.
  • Kroenke, C.
  • Huang, W.
  • Arlinghaus, L.
  • Yankeelov, T. E.
  • Cao, Y.
  • Aryal, M.
  • Yen, Y.-F.
  • Kalpathy-Cramer, J.
  • Shukla-Dave, A.
  • Fung, M.
  • Liang, J.
  • Boss, M.
  • Hylton, N.
Journal of Medical Imaging 2017 Journal Article, cited 6 times
Website

An interpretable machine learning system for colorectal cancer diagnosis from pathology slides

  • Neto, P. C.
  • Montezuma, D.
  • Oliveira, S. P.
  • Oliveira, D.
  • Fraga, J.
  • Monteiro, A.
  • Monteiro, J.
  • Ribeiro, L.
  • Goncalves, S.
  • Reinhard, S.
  • Zlobec, I.
  • Pinto, I. M.
  • Cardoso, J. S.
NPJ Precis Oncol 2024 Journal Article, cited 0 times
Website
A Correction to this paper has been published (03 April 2024): https://doi.org/10.1038/s41698-024-00581-2. Considering the profound transformation affecting pathology practice, we aimed to develop a scalable artificial intelligence (AI) system to diagnose colorectal cancer from whole-slide images (WSI). For this, we propose a deep learning (DL) system that learns from weak labels, a sampling strategy that reduces the number of training samples by a factor of six without compromising performance, an approach to leverage a small subset of fully annotated samples, and a prototype with explainable predictions, active learning features and parallelisation. Noting some problems in the literature, this study is conducted with one of the largest WSI colorectal sample datasets, with approximately 10,500 WSIs. Of these samples, 900 are testing samples. Furthermore, the robustness of the proposed method is assessed with two additional external datasets (TCGA and PAIP) and a dataset of samples collected directly from the proposed prototype. Our proposed method predicts, for the patch-based tiles, a class based on the severity of the dysplasia and uses that information to classify the whole slide. It is trained with an interpretable mixed-supervision scheme to leverage the domain knowledge introduced by pathologists through spatial annotations. The mixed-supervision scheme allowed for an intelligent sampling strategy that was effectively evaluated in several different scenarios without compromising the performance. On the internal dataset, the method shows an accuracy of 93.44% and a sensitivity between positive (low-grade and high-grade dysplasia) and non-neoplastic samples of 0.996. Performance on the external test samples varied, with TCGA being the most challenging dataset, at an overall accuracy of 84.91% and a sensitivity of 0.996.

Big biomedical image processing hardware acceleration: A case study for K-means and image filtering

  • Neshatpour, Katayoun
  • Koohi, Arezou
  • Farahmand, Farnoud
  • Joshi, Rajiv
  • Rafatirad, Setareh
  • Sasan, Avesta
  • Homayoun, Houman
2016 Conference Paper, cited 7 times
Website
Most hospitals today are dealing with the big data problem, as they generate and store petabytes of patient records, most of which are in the form of medical imaging, such as pathological images, CT scans and X-rays, in their datacenters. Analyzing such large amounts of biomedical imaging data to enable discovery and guide physicians in personalized care is becoming an important focus of data mining and machine learning algorithms developed for biomedical informatics (BMI). Algorithms that are developed for BMI heavily rely on complex and computationally intensive machine learning and data mining methods to learn from large data. The high processing demand of big biomedical imaging data has given rise to their implementation in high-end server platforms running software ecosystems that are optimized for dealing with large amounts of data, including Apache Hadoop and Apache Spark. However, efficient processing of such large amounts of imaging data with computationally intensive learning methods is becoming a challenging problem on state-of-the-art high performance computing server architectures. To address this challenge, in this paper, we introduce a scalable and efficient hardware acceleration method using low cost commodity FPGAs that is interfaced with a server architecture through a high speed interface. In this work we present a full end-to-end implementation of big data image processing and machine learning applications in a heterogeneous CPU+FPGA architecture. We develop the MapReduce implementation of K-means and Laplacian Filtering in the Hadoop Streaming environment, which allows developing mapper functions in non-Java based languages suited for interfacing with an FPGA-based hardware accelerating environment. We accelerate the mapper functions through hardware+software (HW+SW) co-design. We do a full implementation of the HW+SW mappers on the Zynq FPGA platform. The results show promising kernel speedups of up to 27× for large image data sets. This translates to 7.8× and 1.8× speedups in end-to-end Hadoop MapReduce implementations of the K-means and Laplacian Filtering algorithms, respectively.
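A Hadoop Streaming mapper of the kind described (non-Java, and thus suitable for pairing with a hardware-accelerated back end) can be sketched for the K-means assignment step; the centroid values and the CSV input format are assumptions for illustration, since a real job would broadcast centroids via a distributed cache file.

```python
#!/usr/bin/env python3
# Sketch of a Hadoop Streaming K-means assignment mapper: reads one point
# per stdin line, emits "nearest_centroid_id <tab> point".
import sys
import numpy as np

CENTROIDS = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])  # assumed fixed set

def nearest(point):
    # Squared Euclidean distance to every centroid; argmin is the cluster id.
    return int(np.argmin(((CENTROIDS - point) ** 2).sum(axis=1)))

for line in sys.stdin:
    fields = line.strip().split(",")
    if not fields or fields == [""]:
        continue                          # skip blank lines
    point = np.array([float(v) for v in fields])
    print(f"{nearest(point)}\t{','.join(fields)}")
```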

Efficacy evaluation of 2D, 3D U-Net semantic segmentation and atlas-based segmentation of normal lungs excluding the trachea and main bronchi

  • Nemoto, Takafumi
  • Futakami, Natsumi
  • Yagi, Masamichi
  • Kumabe, Atsuhiro
  • Takeda, Atsuya
  • Kunieda, Etsuo
  • Shigematsu, Naoyuki
Journal of Radiation Research 2020 Journal Article, cited 0 times
Website
This study aimed to examine the efficacy of semantic segmentation implemented by deep learning and to confirm whether this method is more effective than a commercially dominant auto-segmentation tool with regards to delineating normal lung excluding the trachea and main bronchi. A total of 232 non-small-cell lung cancer cases were examined. The computed tomography (CT) images of these cases were converted from Digital Imaging and Communications in Medicine (DICOM) Radiation Therapy (RT) formats to arrays of 32 x 128 x 128 voxels and input into both 2D and 3D U-Net, which are deep learning networks for semantic segmentation. The number of training, validation and test sets were 160, 40 and 32, respectively. Dice similarity coefficients (DSCs) of the test set were evaluated employing Smart Segmentation Knowledge Based Contouring (Smart segmentation is an atlas-based segmentation tool), as well as the 2D and 3D U-Net. The mean DSCs of the test set were 0.964 [95% confidence interval (CI), 0.960-0.968], 0.990 (95% CI, 0.989-0.992) and 0.990 (95% CI, 0.989-0.991) with Smart segmentation, 2D and 3D U-Net, respectively. Compared with Smart segmentation, both U-Nets presented significantly higher DSCs by the Wilcoxon signed-rank test (P < 0.01). There was no difference in mean DSC between the 2D and 3D U-Net systems. The newly-devised 2D and 3D U-Net approaches were found to be more effective than a commercial auto-segmentation tool. Even the relatively shallow 2D U-Net which does not require high-performance computational resources was effective enough for the lung segmentation. Semantic segmentation using deep learning was useful in radiation treatment planning for lung cancers.
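The evaluation loop described here, per-case Dice similarity coefficients for two methods compared with the Wilcoxon signed-rank test, can be sketched as follows, with synthetic masks standing in for the real segmentations.

```python
# Sketch of per-case DSC evaluation plus a Wilcoxon signed-rank comparison.
import numpy as np
from scipy.stats import wilcoxon

def dice(a, b):
    """DSC = 2 * |A and B| / (|A| + |B|) for binary masks a, b."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

rng = np.random.default_rng(0)
gt = rng.random((20, 16, 64, 64)) > 0.6         # 20 test cases, ground truth
unet = gt ^ (rng.random(gt.shape) > 0.995)      # near-perfect predictions
atlas = gt ^ (rng.random(gt.shape) > 0.97)      # noisier predictions

dsc_unet = np.array([dice(p, g) for p, g in zip(unet, gt)])
dsc_atlas = np.array([dice(p, g) for p, g in zip(atlas, gt)])
stat, p = wilcoxon(dsc_unet, dsc_atlas)
print(f"mean DSC: U-Net {dsc_unet.mean():.3f} vs atlas {dsc_atlas.mean():.3f}, p = {p:.2g}")
```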

Chest CT Cinematic Rendering of SARS-CoV-2 Pneumonia

  • Necker, F. N.
  • Scholz, M.
Radiology 2021 Journal Article, cited 0 times
Website
The SARS-CoV-2 pandemic has spread rapidly throughout the world since its first reported infection in Wuhan, China. Despite the introduction of vaccines for this important viral infection, there remains a significant public health risk to the population as this virus continues to mutate. While it remains unknown if these new mutations will evade the current vaccines, it is possible that we may be living with this infection for many years to come as it becomes endemic. Cinematic rendering of CT images is a new way to show the three-dimensionality of the various densities contained in volumetric CT data. We show an example of PCR-positive SARS-CoV-2 pneumonia using this new technique (Figure; Movie [online]). This case is from the RSNA-RICORD dataset (1, 2). It shows the typical presentation of SARS-CoV-2 pneumonia, with clearly visible ground-glass subpleural opacities (Figure). The higher attenuation of lung tissue filled with fluid results in these areas appearing patchy or spongy.

Mediastinal Lymph Node Detection and Segmentation Using Deep Learning

  • Nayan, Al-Akhir
  • Kijsirikul, Boonserm
  • Iwahori, Yuji
IEEE Access 2022 Journal Article, cited 0 times
Automatic lymph node (LN) segmentation and detection for cancer staging are critical. In clinical practice, computed tomography (CT) and positron emission tomography (PET) imaging detect abnormal LNs. Despite its low contrast and the variety in nodal size and form, LN segmentation remains a challenging task. Deep convolutional neural networks are frequently used to segment objects in medical images. Most state-of-the-art techniques degrade image resolution through pooling and convolution, and as a result, the models provide unsatisfactory results. With these issues in mind, the well-established deep learning technique UNet++ was modified using bilinear interpolation and a total generalized variation (TGV) based upsampling strategy to segment and detect mediastinal lymph nodes. The modified UNet++ maintains texture discontinuities, selects noisy areas, searches for appropriate balance points through backpropagation, and recreates image resolution. Collecting CT image data from the TCIA, 5-patients, and ELCAP public datasets, a dataset was prepared with the help of experienced medical experts. The UNet++ was trained using those datasets, and three different data combinations were utilized for testing. Using the proposed approach, the model achieved 94.8% accuracy, 91.9% Jaccard, 94.1% recall, and 93.1% precision on COMBO_3. The performance was measured on different datasets and compared with state-of-the-art approaches. The UNet++ model with the hybridized strategy performed better than the others.

Validation of Segmented Brain Tumor from MRI Images Using 3D Printing

  • Nayak, U. A.
  • Balachandra, M.
  • K, N. M.
  • Kurady, R.
Asian Pac J Cancer Prev 2021 Journal Article, cited 0 times
Website
BACKGROUND: Early diagnosis of a brain tumor is important for improving treatment possibilities. Manually segmenting the tumor from volumetric data is time-consuming, and visualization of the tumor is rather challenging. METHODS: This paper proposes a user-guided brain tumour segmentation from MRI (Magnetic Resonance Imaging) images developed using the Medical Imaging Interaction Toolkit (MITK), and printing of the segmented object using a 3D printer for tumour quantification. The proposed method includes segmenting the tumour interactively using the connected threshold method, then printing the physical object from the segmented volume of interest. The distance between two voxels was measured using electronic callipers on the 3D volume in a specific direction, and then the same distance was measured in the same direction on the 3D printed object. RESULTS: The technique was tested with n=5 samples (20 readings) of brain MRI images from the RIDER Neuro MRI dataset of the National Cancer Institute. MITK provides various tools that enable image visualization, registration, and contouring. We were able to achieve the same measurements using both approaches, and this was tested statistically with the paired t-test method. Through this and the observers' opinions, the accuracy of the segmentation was demonstrated. CONCLUSION: The finding that the difference between the tumor measurements obtained with the electronic calipers and those taken on the 3D printed object equates to zero shows that the segmentation technique is accurate. This helps to delineate the tumor more accurately during radiotherapy.

Breast MRI Registration Using Metaheuristic Algorithms

  • Nayak, Somen
  • Si, Tapas
  • Sarkar, Achyuth
2021 Conference Paper, cited 0 times
Website
Ten percent of women worldwide suffer from breast cancer during their lives. Breast MRI registration is an important task for aligning pre- and post-contrast MR images for the diagnosis and classification of cancers into benign and malignant using pharmacokinetic analysis. It is also essential to align images acquired at different time intervals in order to isolate lesion changes over those intervals. This registration technique is also useful for monitoring various types of cancer therapy. The main development in image registration algorithms has likewise shifted from control-point-based semi-automated techniques to sophisticated voxel-based automated techniques that use mutual information as a similarity measure. In this manuscript, breast MRI registration using the Multi-verse optimization (MVO) algorithm and the Student Psychology Based Optimization (SPBO) algorithm is proposed; MVO and SPBO are metaheuristic optimization algorithms that we applied to register breast MR images. We considered 40 pairs of pre- and post-contrast breast MR images, which were registered using the MVO and SPBO algorithms. The results of the SPBO-based breast MRI registration method are compared with those of the MVO-based registration method. The experimental results indicate that the SPBO-based registration method statistically outperforms the MVO-based registration method in the registration of breast MR images.

An augmented reality and high-speed optical tracking system for laparoscopic surgery

  • Nawawithan, Nati
  • Young, Jeff
  • Bettati, Patric
  • Rathgeb, Armand P.
  • Pruitt, Kelden T.
  • Frimpter, Jordan
  • Kim, Henry
  • Yu, Jonathan
  • Driver, Davis
  • Shiferaw, Amanuel
  • Chaudhari, Aditi
  • Johnson, Brett A.
  • Gahan, Jeffrey
  • Yu, James
  • Fei, Baowei
  • Rettmann, Maryam E.
  • Siewerdsen, Jeffrey H.
2024 Conference Paper, cited 0 times
Website
While minimally invasive laparoscopic surgery can help reduce blood loss, reduce hospital time, and shorten recovery time compared to open surgery, it has the disadvantages of limited field of view and difficulty in locating subsurface targets. Our proposed solution applies an augmented reality (AR) system to overlay pre-operative images, such as those from magnetic resonance imaging (MRI), onto the target organ in the user’s real-world environment. Our system can provide critical information regarding the location of subsurface lesions to guide surgical procedures in real time. An infrared motion tracking camera system was employed to obtain real-time position data of the patient and surgical instruments. To perform hologram registration, fiducial markers were used to track and map virtual coordinates to the real world. In this study, phantom models of each organ were constructed to test the reliability and accuracy of the AR-guided laparoscopic system. Root mean square error (RMSE) was used to evaluate the targeting accuracy of the laparoscopic interventional procedure. Our results demonstrated a registration error of 2.42 ± 0.79 mm and a procedural targeting error of 4.17 ± 1.63 mm using our AR-guided laparoscopic system that will be further refined for potential clinical procedures.

Adding features from the mathematical model of breast cancer to predict the tumour size

  • Nave, OPhir
International Journal of Computer Mathematics: Computer Systems Theory 2020 Journal Article, cited 0 times
Website
In this study, we combine a theoretical mathematical model with machine learning (ML) to predict tumour sizes in breast cancer. Our study is based on clinical data from 1869 women of various ages with breast cancer. To accurately predict tumour size for each woman individually, we solved our customized mathematical model for each woman, then added the solution vector of the dynamic variables in the model (in machine learning language, these are called features) to the clinical data and used a variety of machine learning algorithms. We compared the results obtained with and without the mathematical model and showed that by adding specific features from the mathematical model we were able to better predict tumour size for each woman.

Discrimination of Benign and Malignant Suspicious Breast Tumors Based on Semi-Quantitative DCE-MRI Parameters Employing Support Vector Machine

  • Navaei-Lavasani, Saeedeh
  • Fathi-Kazerooni, Anahita
  • Saligheh-Rad, Hamidreza
  • Gity, Masoumeh
Frontiers in Biomedical Technologies 2015 Journal Article, cited 4 times
Website

Automatic Classification of Brain MRI Images Using SVM and Neural Network Classifiers

  • Natteshan, NVS
  • Jothi, J Angel Arul
2015 Conference Paper, cited 8 times
Website
Computer Aided Diagnosis (CAD) is a technique where diagnosis is performed in an automatic way. This work develops a CAD system for automatically classifying a given brain Magnetic Resonance Imaging (MRI) image as 'tumor affected' or 'tumor not affected'. The input image is preprocessed using a wiener filter and Contrast Limited Adaptive Histogram Equalization (CLAHE). The image is then quantized and aggregated to obtain reduced image data. The reduced image is then segmented into four regions, namely gray matter, white matter, cerebrospinal fluid and a high-intensity tumor cluster, using the Fuzzy C Means (FCM) algorithm. The tumor region is then extracted using an intensity metric. A contour is evolved over the identified tumor region using an Active Contour Model (ACM) to extract the exact tumor segment. Thirty-five features, including Gray Level Co-occurrence Matrix (GLCM) features, Gray Level Run Length Matrix (GLRL) features, statistical features and shape-based features, are extracted from the tumor region. Neural network and Support Vector Machine (SVM) classifiers are trained using these features. Results indicate that the SVM classifier with a quadratic kernel function performs better than one with a Radial Basis Function (RBF) kernel, and that a neural network classifier with fifty hidden nodes performs better than one with twenty-five hidden nodes. It is also evident from the results that the average running time of FCM is lower when used on the reduced image data.

Security of Multi-frame DICOM Images Using XOR Encryption Approach

  • Natsheh, QN
  • Li, B
  • Gale, AG
Procedia Computer Science 2016 Journal Article, cited 4 times
Website
Transferring medical images over networks is subject to a wide variety of security risks. Hence, there is a need for a robust and secure mechanism to exchange medical images over the Internet. The Digital Imaging and Communications in Medicine (DICOM) standard provides attributes for header data confidentiality but not for the pixel image data. In this paper, a simple and effective encryption approach for pixel data is provided for multi-frame DICOM medical images. The main goal of the proposed approach is to reduce the encryption and decryption time of these images by using the Advanced Encryption Standard (AES) to encrypt only one image and an XOR cipher to encrypt the remaining multi-frame DICOM images. The proposed algorithm is evaluated using computational time, normalized correlation, entropy, Peak-Signal-to-Noise-Ratio (PSNR) and histogram analysis. The results show that the proposed approach can reduce the encryption and decryption time and is able to ensure image confidentiality.
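A minimal sketch of the hybrid scheme, assuming the first frame is AES-encrypted and then serves as the XOR keystream for the remaining frames; the AES step is stubbed with a substitution table so the sketch stays self-contained, and the frame shapes and keystream choice are illustrative assumptions rather than the paper's exact construction.

```python
# Sketch: AES-encrypt one reference frame, XOR-cipher the rest against it.
import numpy as np

def aes_encrypt_stub(frame):
    # Placeholder standing in for real AES encryption of the frame bytes;
    # a deterministic byte-substitution table keeps this sketch dependency-free.
    table = np.random.default_rng(1234).permutation(256).astype(np.uint8)
    return table[frame]

def encrypt_frames(frames):
    ref = aes_encrypt_stub(frames[0])                # AES-protected frame
    rest = [f ^ ref for f in frames[1:]]             # XOR cipher per frame
    return ref, rest

def decrypt_frames(ref, rest):
    return [c ^ ref for c in rest]                   # XOR is its own inverse

frames = [np.random.default_rng(i).integers(0, 256, (64, 64), np.uint8)
          for i in range(4)]                         # toy multi-frame pixel data
ref, cipher = encrypt_frames(frames)
assert all((d == f).all() for d, f in zip(decrypt_frames(ref, cipher), frames[1:]))
print("round-trip OK for", len(cipher), "XOR-ciphered frames")
```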

The national lung screening trial: overview and study design

  • National Lung Screening Trial Research Team
Radiology 2011 Journal Article, cited 760 times
Website

Image Processing and Classification Techniques for Early Detection of Lung Cancer for Preventive Health Care: A Survey

  • Naresh, Prashant
  • Shettar, Rajashree
Int. J. of Recent Trends in Engineering & Technology 2014 Journal Article, cited 6 times
Website

Performance analysis of a computer-aided detection system for lung nodules in CT at different slice thicknesses

  • Narayanan, B. N.
  • Hardie, R. C.
  • Kebede, T. M.
J Med Imaging (Bellingham) 2018 Journal Article, cited 2 times
Website
We study the performance of a computer-aided detection (CAD) system for lung nodules in computed tomography (CT) as a function of slice thickness. In addition, we propose and compare three different training methodologies for utilizing nonhomogeneous thickness training data (i.e., composed of cases with different slice thicknesses). These methods are (1) aggregate training using the entire suite of data at their native thickness, (2) homogeneous subset training that uses only the subset of training data that matches each testing case, and (3) resampling all training and testing cases to a common thickness. We believe this study has important implications for how CT is acquired, processed, and stored. We make use of 192 CT cases acquired at a thickness of 1.25 mm and 283 cases at 2.5 mm. These data are from the publicly available Lung Nodule Analysis 2016 dataset. In our study, CAD performance at 2.5 mm is comparable with that at 1.25 mm and is much better than at higher thicknesses. Also, resampling all training and testing cases to 2.5 mm provides the best performance among the three training methods compared in terms of accuracy, memory consumption, and computational time.

Tumor image-derived texture features are associated with CD3 T-cell infiltration status in glioblastoma

  • Narang, Shivali
  • Kim, Donnie
  • Aithala, Sathvik
  • Heimberger, Amy B
  • Ahmed, Salmaan
  • Rao, Dinesh
  • Rao, Ganesh
  • Rao, Arvind
Oncotarget 2017 Journal Article, cited 1 times
Website

Automatic rectum limit detection by anatomical markers correlation

  • Namías, R
  • D’Amato, JP
  • Del Fresno, M
  • Vénere, M
Computerized Medical Imaging and Graphics 2014 Journal Article, cited 1 times
Website
Several diseases occur in the terminal portion of the digestive system. Many of them can be diagnosed by means of different medical imaging modalities together with computer-aided detection (CAD) systems. These CAD systems mainly focus on the complete segmentation of the digestive tube. However, the detection of limits between different sections could provide important information for these systems. In this paper we present an automatic method for detecting the rectum and sigmoid colon limit using a novel global curvature analysis over the centerline of the segmented digestive tube in different imaging modalities. The results are compared with the gold-standard rectum upper limit through a validation scheme comprising two different anatomical markers: the third sacral vertebra and the average rectum length. Experimental results in both magnetic resonance imaging (MRI) and computed tomography colonography (CTC) acquisitions show the efficacy of the proposed strategy in automatic detection of rectum limits. The method is intended for application to rectum segmentation in MRI for geometrical modeling and as a contextual information source in virtual colonoscopies and CAD systems.
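The global curvature analysis over a centerline can be sketched with discrete differential geometry: kappa = |r' x r''| / |r'|^3, with derivatives taken by finite differences along the polyline. The helix below is a synthetic stand-in for a real digestive-tube centerline.

```python
# Sketch of discrete curvature along an ordered 3D centerline.
import numpy as np

def curvature(points):
    """points: (N, 3) ordered centerline samples; returns kappa per sample."""
    d1 = np.gradient(points, axis=0)          # first derivative r'
    d2 = np.gradient(d1, axis=0)              # second derivative r''
    cross = np.cross(d1, d2)
    return np.linalg.norm(cross, axis=1) / np.linalg.norm(d1, axis=1) ** 3

t = np.linspace(0, 4 * np.pi, 200)
helix = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)  # toy centerline
kappa = curvature(helix)
print("peak-curvature index:", int(np.argmax(kappa)))  # candidate anatomical limit
```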

Optimization of polyethylene glycol-based hydrogel rectal spacer for focal laser ablation of prostate peripheral zone tumor

  • Namakshenas, P.
  • Mojra, A.
Physica Medica 2021 Journal Article, cited 1 times
Website
PURPOSE: Focal laser ablation therapy is a technique that exposes the prostate tumor to hyperthermic ablation and eradicates cancerous cells. However, due to the excessive heating generated by laser irradiation, there is a possibility of damage to the adjacent healthy tissues. Through an in silico study, this paper presents a novel approach to reduce collateral heating effects by placing a polyethylene glycol (PEG) spacer between the rectum and tumor during laser irradiation. The PEG spacer thickness is optimized to reduce undesired damage at the laser powers commonly used in clinical trials. Our study also adds novelty by conducting the thermal analysis based on the porous structure of the prostate tumor. METHODS: The thermal parameters and the two thermal phase lags between the temperature gradient and the heat flux are determined by considering the vascular network of the prostate tumor. The Nelder-Mead algorithm is applied to find the minimum thickness of the PEG spacer. RESULTS: In the absence of the spacer, the predicted results for laser powers of 4 W, 8 W, and 12 W show that the temperature of the rectum rises up to 58.6 degrees C, 80.4 degrees C, and 101.1 degrees C, respectively, while the insertion of 2.59 mm, 4 mm, and 4.9 mm of PEG spacer dramatically reduces it below 42 degrees C. CONCLUSIONS: The results can be used as a guideline for ablating prostate tumors while avoiding undesired damage to the rectal wall during laser irradiation, especially for peripheral zone tumors.
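The Nelder-Mead step can be sketched as a penalized one-dimensional minimization: find the smallest spacer thickness that keeps the rectal wall below 42 degrees C. The exponential temperature surrogate below is invented for illustration and is not the paper's bioheat model; the resulting thicknesses will therefore differ from the published values.

```python
# Sketch: Nelder-Mead search for the minimum spacer thickness under a
# temperature constraint, using a made-up surrogate temperature model.
import numpy as np
from scipy.optimize import minimize

def rectal_temp(thickness_mm, power_w):
    # Hypothetical surrogate: temperature decays with spacer thickness.
    return 37.0 + 8.0 * power_w * np.exp(-0.45 * thickness_mm)

def objective(x, power_w, limit_c=42.0):
    thickness = x[0]
    # Quadratic penalty for exceeding the rectal temperature limit.
    penalty = 1e3 * max(0.0, rectal_temp(thickness, power_w) - limit_c) ** 2
    return thickness + penalty      # minimize thickness subject to the limit

for power in (4, 8, 12):
    res = minimize(objective, x0=[1.0], args=(power,), method="Nelder-Mead")
    print(f"{power} W -> spacer thickness {res.x[0]:.2f} mm")
```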

Classification of brain tumor isocitrate dehydrogenase status using MRI and deep learning

  • Nalawade, S.
  • Murugesan, G. K.
  • Vejdani-Jahromi, M.
  • Fisicaro, R. A.
  • Bangalore Yogananda, C. G.
  • Wagner, B.
  • Mickey, B.
  • Maher, E.
  • Pinho, M. C.
  • Fei, B.
  • Madhuranthakam, A. J.
  • Maldjian, J. A.
J Med Imaging (Bellingham) 2019 Journal Article, cited 0 times
Website
Isocitrate dehydrogenase (IDH) mutation status is an important marker in glioma diagnosis and therapy. We propose an automated pipeline for noninvasively predicting IDH status using deep learning and T2-weighted (T2w) magnetic resonance (MR) images with minimal preprocessing (N4 bias correction and normalization to zero mean and unit variance). T2w MR images and genomic data were obtained from The Cancer Imaging Archive dataset for 260 subjects (120 high-grade and 140 low-grade gliomas). A fully automated two-dimensional densely connected model was trained to classify IDH mutation status on 208 subjects and tested on another held-out set of 52 subjects using fivefold cross validation. Data leakage was avoided by ensuring subject separation during the slice-wise randomization. Mean classification accuracy of 90.5% was achieved for each axial slice in predicting the three classes of no tumor, IDH mutated, and IDH wild type. Test accuracy of 83.8% was achieved in predicting IDH mutation status for individual subjects on the test dataset of 52 subjects. We demonstrate a deep learning method to predict IDH mutation status using T2w MRI alone. Radiologic imaging studies using deep learning methods must address data leakage (subject duplication) in the randomization process to avoid upward bias in the reported classification accuracy.

Prediction of COVID-19 patients in danger of death using radiomic features of portable chest radiographs

  • Nakashima, M.
  • Uchiyama, Y.
  • Minami, H.
  • Kasai, S.
2022 Journal Article, cited 0 times
Website
INTRODUCTION: Computer-aided diagnostic systems have been developed for the detection and differential diagnosis of coronavirus disease 2019 (COVID-19) pneumonia using imaging studies to characterise a patient's current condition. In this radiomic study, we propose a system for predicting COVID-19 patients in danger of death using portable chest X-ray images. METHODS: In this retrospective study, we selected 100 patients, including ten that died and 90 that recovered from the COVID-19-AR database of the Cancer Imaging Archive. Since it can be difficult to analyse portable chest X-ray images of patients with COVID-19 because bone components overlap with the abnormal patterns of this disease, we employed a bone-suppression technique during pre-processing. A total of 620 radiomic features were measured in the left and right lung regions, and four radiomic features were selected using the least absolute shrinkage and selection operator technique. We distinguished death from recovery cases using a linear discriminant analysis (LDA) and a support vector machine (SVM). The leave-one-out method was used to train and test the classifiers, and the area under the receiver-operating characteristic curve (AUC) was used to evaluate discriminative performance. RESULTS: The AUCs for LDA and SVM were 0.756 and 0.959, respectively. The discriminative performance was improved when the bone-suppression technique was employed. When the SVM was used, the sensitivity for predicting disease severity was 90.9% (9/10), and the specificity was 95.6% (86/90). CONCLUSIONS: We believe that the radiomic features of portable chest X-ray images can predict COVID-19 patients in danger of death.
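The pipeline outlined here, LASSO-style feature selection followed by an SVM and scored with leave-one-out ROC-AUC, maps naturally onto scikit-learn; the feature matrix, labels, regularization strength, and feature count below are synthetic assumptions, not the study's data or tuned settings.

```python
# Sketch of L1-based radiomic feature selection + SVM with LOOCV AUC.
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 620))                 # 620 radiomic features, synthetic
y = rng.integers(0, 2, size=100)                # 1 = died, 0 = recovered

pipe = make_pipeline(
    StandardScaler(),
    # Keep the 20 features with the largest L1-logistic coefficients.
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
                    max_features=20, threshold=-np.inf),
    SVC(kernel="linear", probability=True),
)
proba = cross_val_predict(pipe, X, y, cv=LeaveOneOut(), method="predict_proba")
print("LOOCV AUC:", roc_auc_score(y, proba[:, 1]))
```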

Regularized Three-Dimensional Generative Adversarial Nets for Unsupervised Metal Artifact Reduction in Head and Neck CT Images

  • Nakao, Megumi
  • Imanishi, Keiho
  • Ueda, Nobuhiro
  • Imai, Yuichiro
  • Kirita, Tadaaki
  • Matsuda, Tetsuya
IEEE Access 2020 Journal Article, cited 1 times
Website
The reduction of metal artifacts in computed tomography (CT) images, specifically for strong artifacts generated from multiple metal objects, is a challenging issue in medical imaging research. Although there have been some studies on supervised metal artifact reduction through the learning of synthesized artifacts, it is difficult for simulated artifacts to cover the complexity of the real physical phenomena that may be observed in X-ray propagation. In this paper, we introduce metal artifact reduction methods based on an unsupervised volume-to-volume translation learned from clinical CT images. We construct three-dimensional adversarial nets with a regularized loss function designed for metal artifacts from multiple dental fillings. The results of experiments using a CT volume database of 361 patients demonstrate that the proposed framework has an outstanding capacity to reduce strong artifacts and to recover underlying missing voxels, while preserving the anatomical features of soft tissues and tooth structures from the original images.

Prediction of malignant glioma grades using contrast-enhanced T1-weighted and T2-weighted magnetic resonance images based on a radiomic analysis

  • Nakamoto, Takahiro
  • Takahashi, Wataru
  • Haga, Akihiro
  • Takahashi, Satoshi
  • Kiryu, Shigeru
  • Nawa, Kanabu
  • Ohta, Takeshi
  • Ozaki, Sho
  • Nozawa, Yuki
  • Tanaka, Shota
  • Mukasa, Akitake
  • Nakagawa, Keiichi
2019 Journal Article, cited 0 times
Website
We conducted a feasibility study to predict malignant glioma grades via radiomic analysis using contrast-enhanced T1-weighted magnetic resonance images (CE-T1WIs) and T2-weighted magnetic resonance images (T2WIs). We proposed a framework and applied it to CE-T1WIs and T2WIs (with tumor region data) acquired preoperatively from 157 patients with malignant glioma (grade III: 55, grade IV: 102) as the primary dataset and 67 patients with malignant glioma (grade III: 22, grade IV: 45) as the validation dataset. Radiomic features such as size/shape, intensity, histogram, and texture features were extracted from the tumor regions on the CE-T1WIs and T2WIs. The Wilcoxon-Mann-Whitney (WMW) test and least absolute shrinkage and selection operator logistic regression (LASSO-LR) were employed to select the radiomic features. Various machine learning (ML) algorithms were used to construct prediction models for the malignant glioma grades using the selected radiomic features. Leave-one-out cross-validation (LOOCV) was implemented to evaluate the performance of the prediction models in the primary dataset. The selected radiomic features for all folds in the LOOCV of the primary dataset were used to perform an independent validation. As evaluation indices, accuracies, sensitivities, specificities, and values for the area under receiver operating characteristic curve (or simply the area under the curve (AUC)) for all prediction models were calculated. The mean AUC value for all prediction models constructed by the ML algorithms in the LOOCV of the primary dataset was 0.902 +/- 0.024 (95% CI (confidence interval), 0.873-0.932). In the independent validation, the mean AUC value for all prediction models was 0.747 +/- 0.034 (95% CI, 0.705-0.790). The results of this study suggest that the malignant glioma grades could be sufficiently and easily predicted by preparing the CE-T1WIs, T2WIs, and tumor delineations for each patient. Our proposed framework may be an effective tool for preoperatively grading malignant gliomas.

Quantitative and Qualitative Evaluation of Convolutional Neural Networks with a Deeper U-Net for Sparse-View Computed Tomography Reconstruction

  • Nakai, H.
  • Nishio, M.
  • Yamashita, R.
  • Ono, A.
  • Nakao, K. K.
  • Fujimoto, K.
  • Togashi, K.
Acad Radiol 2019 Journal Article, cited 0 times
Website
Rationale and Objectives: To evaluate the utility of a convolutional neural network (CNN) with an increased number of contracting and expanding paths of U-net for sparse-view CT reconstruction. Materials and Methods: This study used 60 anonymized chest CT cases from a public database called "The Cancer Imaging Archive". Eight thousand images from 40 cases were used for training. Eight hundred and eighty images from another 20 cases were used for quantitative and qualitative evaluation, respectively. Sparse-view CT images subsampled by a factor of 20 were simulated, and two CNNs were trained to create denoised images from the sparse-view CT. A CNN based on U-net with residual learning with four contracting and expanding paths (the preceding CNN) was compared with another CNN with eight contracting and expanding paths (the proposed CNN), both quantitatively (peak signal to noise ratio, structural similarity index) and qualitatively (scores given by two radiologists for anatomical visibility, artifact and noise, and overall image quality), using the Wilcoxon signed-rank test. Nodule and emphysema appearance were also evaluated qualitatively. Results: The proposed CNN was significantly better than the preceding CNN both quantitatively and qualitatively (overall image quality interquartile range, 3.0-3.5 versus 1.0-1.0 reported from the preceding CNN; p < 0.001). However, only 2 of 22 cases used for emphysematous evaluation (2 CNNs for every 11 cases with emphysema) had an average score of ≥ 2 (on a 3-point scale). Conclusion: Increasing contracting and expanding paths may be useful for sparse-view CT reconstruction with CNN. However, poor reproducibility of emphysema appearance should also be noted.
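The quantitative half of this comparison (PSNR and structural similarity between a reference slice and each reconstruction) can be sketched with scikit-image; the "reconstructions" below are just noisy copies of a random slice, for illustration only.

```python
# Sketch of the PSNR/SSIM comparison between two CT reconstructions.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((512, 512))                         # stand-in CT slice
recon_4path = np.clip(reference + rng.normal(0, 0.05, reference.shape), 0, 1)
recon_8path = np.clip(reference + rng.normal(0, 0.02, reference.shape), 0, 1)

for name, recon in [("4-path U-Net", recon_4path), ("8-path U-Net", recon_8path)]:
    psnr = peak_signal_noise_ratio(reference, recon, data_range=1.0)
    ssim = structural_similarity(reference, recon, data_range=1.0)
    print(f"{name}: PSNR {psnr:.2f} dB, SSIM {ssim:.4f}")
```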

Sampling strategies for learning-based 3D medical image compression

  • Nagoor, Omniah H.
  • Whittle, Joss
  • Deng, Jingjing
  • Mora, Benjamin
  • Jones, Mark W.
Machine Learning with Applications 2022 Journal Article, cited 0 times
Website
Recent achievements of sequence prediction models in numerous domains, including compression, provide great potential for novel learning-based codecs. In such models, the input sequence's shape and size play a crucial role in learning the mapping function of the data distribution to the target output. This work examines numerous input configurations and sampling schemes for a many-to-one sequence prediction model, specifically for compressing 3D medical images (16-bit depth) losslessly. The main objective is to determine the optimal practice for enabling the proposed Long Short-Term Memory (LSTM) model to achieve a high compression ratio and fast encoding–decoding performance. Our LSTM models are trained with 4-fold cross-validation on 12 high-resolution CT datasets while measuring the models' compression ratios and execution time. Several configurations of sequences have been evaluated, and our results demonstrate that pyramid-shaped sampling represents the best trade-off between performance and compression ratio (up to 3x). We solve a problem of non-deterministic environments that allows our models to run in parallel without much compression performance drop. Experimental evaluation was carried out on datasets acquired by different hospitals, representing different body segments, and distinct scanning modalities (CT and MRI). Our new methodology allows straightforward parallelisation that speeds up the decoder by up to 37x compared to previous methods. Overall, the trained models demonstrate efficiency and generalisability for compressing 3D medical images losslessly while still outperforming well-known lossless methods by approximately 17% and 12%. To the best of our knowledge, this is the first study that focuses on voxel-wise predictions of volumetric medical imaging for lossless compression.

Advanced 3D printed model of middle cerebral artery aneurysms for neurosurgery simulation

  • Nagassa, Ruth G
  • McMenamin, Paul G
  • Adams, Justin W
  • Quayle, Michelle R
  • Rosenfeld, Jeffrey V
3D Print Med 2019 Journal Article, cited 0 times
Website
BACKGROUND: Neurosurgical residents are finding it more difficult to obtain experience as the primary operator in aneurysm surgery. The present study aimed to replicate patient-derived cranial anatomy, pathology and human tissue properties relevant to cerebral aneurysm intervention through 3D printing and 3D print-driven casting techniques. The final simulator was designed to provide accurate simulation of a human head with a middle cerebral artery (MCA) aneurysm. METHODS: This study utilized living human and cadaver-derived medical imaging data including CT angiography and MRI scans. Computer-aided design (CAD) models and pre-existing computational 3D models were also incorporated in the development of the simulator. The design was based on including anatomical components vital to the surgery of MCA aneurysms while focusing on reproducibility, adaptability and functionality of the simulator. Various methods of 3D printing were utilized for the direct development of anatomical replicas and moulds for casting components that optimized the bio-mimicry and mechanical properties of human tissues. Synthetic materials including various types of silicone and ballistics gelatin were cast in these moulds. A novel technique utilizing water-soluble wax and silicone was used to establish hollow patient-derived cerebrovascular models. RESULTS: A patient-derived 3D aneurysm model was constructed for a MCA aneurysm. Multiple cerebral aneurysm models, patient-derived and CAD, were replicated as hollow high-fidelity models. The final assembled simulator integrated six anatomical components relevant to the treatment of cerebral aneurysms of the Circle of Willis in the left cerebral hemisphere. These included models of the cerebral vasculature, cranial nerves, brain, meninges, skull and skin. The cerebral circulation was modeled through the patient-derived vasculature within the brain model. Linear and volumetric measurements of specific physical modular components were repeated, averaged and compared to the original 3D meshes generated from the medical imaging data. Calculation of the concordance correlation coefficient (rhoc: 90.2%-99.0%) and percentage difference (</=0.4%) confirmed the accuracy of the models. CONCLUSIONS: A multi-disciplinary approach involving 3D printing and casting techniques was used to successfully construct a multi-component cerebral aneurysm surgery simulator. Further study is planned to demonstrate the educational value of the proposed simulator for neurosurgery residents.

Automatic tumor segmentation in single-spectral MRI using a texture-based and contour-based algorithm

  • Nabizadeh, Nooshin
  • Kubat, Miroslav
Expert Systems with Applications 2017 Journal Article, cited 8 times
Website
Automatic detection of brain tumors in single-spectral magnetic resonance images is a challenging task. Existing techniques suffer from inadequate performance, dependence on initial assumptions, and, sometimes, the need for manual interference. The research reported in this paper seeks to reduce some of these shortcomings, and to remove others, achieving satisfactory performance at reasonable computational costs. The success of the system described here is explained by the synergy of the following aspects: (1) a broad choice of high-level features to characterize the image's texture, (2) an efficient mechanism to eliminate less useful features, (3) a machine-learning technique to induce a classifier that signals the presence of tumor-affected tissue, and (4) an improved version of the skippy greedy snake algorithm to outline the tumor's contours. The paper describes the system and reports experiments with synthetic as well as real data.

Brain tumors detection and segmentation in MR images: Gabor wavelet vs. statistical features

  • Nabizadeh, Nooshin
  • Kubat, Miroslav
Computers & Electrical Engineering 2015 Journal Article, cited 85 times
Website
Automated recognition of brain tumors in magnetic resonance images (MRI) is a difficult procedure owing to the variability and complexity of the location, size, shape, and texture of these lesions. Because of intensity similarities between brain lesions and normal tissues, some approaches make use of multi-spectral anatomical MRI scans. However, the time and cost restrictions for collecting multi-spectral MRI scans, and some other difficulties, necessitate developing an approach that can detect tumor tissues using single-spectral anatomical MRI images. In this paper, we present a fully automatic system, which is able to detect slices that include tumor and to delineate the tumor area. The experimental results on a single contrast mechanism demonstrate the efficacy of our proposed technique in successfully segmenting brain tumor tissues with high accuracy and low computational complexity. Moreover, we include a study evaluating the efficacy of statistical features over Gabor wavelet features using several classifiers. This contribution fills a gap in the literature, as it is the first to compare these sets of features for tumor segmentation applications.

Automated Brain Lesion Detection and Segmentation Using Magnetic Resonance Images

  • Nabizadeh, Nooshin
2015 Thesis, cited 10 times
Website

Reciprocal change in Glucose metabolism of Cancer and Immune Cells mediated by different Glucose Transporters predicts Immunotherapy response

  • Na, Kwon Joong
  • Choi, Hongyoon
  • Oh, Ho Rim
  • Kim, Yoon Ho
  • Lee, Sae Bom
  • Jung, Yoo Jin
  • Koh, Jaemoon
  • Park, Samina
  • Lee, Hyun Joo
  • Jeon, Yoon Kyung
  • Chung, Doo Hyun
  • Paeng, Jin Chul
  • Park, In Kyu
  • Kang, Chang Hyun
  • Cheon, Gi Jeong
  • Kang, Keon Wook
  • Lee, Dong Soo
  • Kim, Young Tae
THERANOSTICS 2020 Journal Article, cited 0 times
Website
The metabolic properties of the tumor microenvironment (TME) are dynamically dysregulated to achieve immune escape and promote cancer cell survival. However, the in vivo properties of glucose metabolism in cancer and immune cells are poorly understood, and their clinical application to the development of a biomarker reflecting immune functionality is still lacking. Methods: We analyzed RNA-seq and fluorodeoxyglucose (FDG) positron emission tomography profiles of 63 lung squamous cell carcinoma (LUSC) specimens to correlate FDG uptake, expression of glucose transporters (GLUT) by RNA-seq, and an immune cell enrichment score (ImmuneScore). Single-cell RNA-seq analysis of five lung cancer specimens was performed. We tested the GLUT3/GLUT1 ratio, the GLUT-ratio, as a surrogate representing immune metabolic functionality by investigating its association with immunotherapy response in two melanoma cohorts. Results: ImmuneScore showed a negative correlation with GLUT1 (r = -0.70, p < 0.01) and a positive correlation with GLUT3 (r = 0.39, p < 0.01) in LUSC. Single-cell RNA-seq showed GLUT1 and GLUT3 were mostly expressed in cancer and immune cells, respectively. In immune-poor LUSC, FDG uptake was positively correlated with GLUT1 (r = 0.27, p = 0.04) and negatively correlated with ImmuneScore (r = -0.28, p = 0.04). In immune-rich LUSC, FDG uptake was positively correlated with both GLUT3 (r = 0.78, p = 0.01) and ImmuneScore (r = 0.58, p = 0.10). The GLUT-ratio was higher in anti-PD1 responders than nonresponders (p = 0.08 at baseline; p = 0.02 on-treatment) and was associated with progression-free survival in melanoma patients who were treated with anti-CTLA4 (p = 0.04). Conclusions: Competitive uptake of glucose by cancer and immune cells in the TME could be mediated by differential GLUT expression in these cells.

Tumor Metabolic Features Identified by (18)F-FDG PET Correlate with Gene Networks of Immune Cell Microenvironment in Head and Neck Cancer

  • Na, Kwon Joong
  • Choi, Hongyoon
2018 Journal Article, cited 18 times
Website
The importance of (18)F-FDG PET in imaging head and neck squamous cell carcinoma (HNSCC) has grown in recent decades. Because PET has prognostic value and provides functional and molecular information in HNSCC, the genetic and biologic backgrounds associated with PET parameters are of great interest. Here, as a systems biology approach, we aimed to investigate gene networks associated with tumor metabolism and their biologic function using RNA sequence and (18)F-FDG PET data. Methods: Using RNA sequence data of HNSCC downloaded from The Cancer Genome Atlas data portal, we constructed a gene coexpression network. PET parameters including lesion-to-blood-pool ratio, metabolic tumor volume, and total lesion glycolysis were calculated. The Pearson correlation test was performed between the module eigengene (the first principal component of a module's expression profile) and the PET parameters. The significantly correlated module was functionally annotated with gene ontology terms, and its hub genes were identified. Survival analysis of the significantly correlated module was performed. Results: We identified 9 coexpression network modules from the preprocessed RNA sequence data. A network module was significantly correlated with total lesion glycolysis as well as maximum and mean (18)F-FDG uptake. The expression profiles of the hub genes of the network were inversely correlated with (18)F-FDG uptake. The significantly annotated gene ontology terms of the module were associated with immune cell activation and aggregation. The module demonstrated a significant association with overall survival, and the group with a higher module eigengene showed better survival than the other groups with statistical significance (P = 0.022). Conclusion: We showed that a gene network that accounts for the immune cell microenvironment was associated with (18)F-FDG uptake as well as prognosis in HNSCC. Our result supports the idea that competition for glucose between cancer cells and immune cells plays an important role in cancer progression associated with hypermetabolic features. In the future, PET parameters could be used as a surrogate marker of HNSCC for estimating the molecular status of the immune cell microenvironment.
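
The module-eigengene step described above (first principal component of a module's expression submatrix, correlated with a PET parameter) can be sketched in a few lines; variable names and sizes below are illustrative assumptions:

```python
# Module eigengene as the first principal component of a module's expression
# profile, correlated with a PET parameter (here a stand-in for TLG).
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import pearsonr

expr = np.random.randn(63, 120)   # (samples, genes in one module), toy data
tlg = np.random.randn(63)         # total lesion glycolysis per sample, toy data

eigengene = PCA(n_components=1).fit_transform(expr).ravel()
r, p = pearsonr(eigengene, tlg)   # association of module activity with uptake
```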

Transmission of radiology images over an Unsecure Network Using Hybrid Encryption Schemes

  • Rahul N.
  • Manjunath K.N.
  • Manuel Manuel
  • Rajendra Kurdy
2020 Conference Paper, cited 0 times
Website
In this paper, we propose a hybrid encryption scheme to transmit medical image datasets securely in radiology networks. The proposed methodology uses the RSA (Rivest-Shamir-Adleman) encryption technique, the XOR technique, and a digitally reconstructed radiograph (DRR) image computed from the 3D volume of MRI scan images. As a first step, the volume of interest (VOI) was segmented, and the DRR image was then computed on the segmented volume in the sagittal direction. The pixels of the DRR image were XORed with all the image slices. All the images and the DRR image were encrypted separately using the RSA technique and transmitted. At the receiver, the XOR was applied to all the received images, the original slices were retained, the VOI was segmented again, and the DRR was recomputed. The received DRR and the recomputed DRR were then compared for changes in the image content through histogram comparison, MSE, and mean absolute deviation. Data integrity violation was tested by adding an image, deleting an image, and modifying the pixels of an image before sending it. The method was applied to fifty (n=50) samples. In all of the above test cases, the method identified data integrity violations correctly.
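
A minimal sketch of the XOR step described above (the per-image RSA encryption is omitted, and the array shapes are illustrative assumptions): XOR with the DRR is its own inverse, and any tampering with the slices changes the recomputed DRR, which the receiver detects by comparing it to the transmitted DRR.

```python
# XOR every slice with the DRR; applying the same XOR again restores the data.
import numpy as np

def xor_with_drr(slices, drr):
    return slices ^ drr[None, :, :]

def integrity_ok(drr_sent, drr_recomputed, tol=0.0):
    mse = np.mean((drr_sent.astype(float) - drr_recomputed.astype(float)) ** 2)
    return mse <= tol                       # tampering changes the DRR

slices = np.random.randint(0, 2**16, (50, 256, 256), dtype=np.uint16)
drr = np.random.randint(0, 2**16, (256, 256), dtype=np.uint16)
protected = xor_with_drr(slices, drr)
restored = xor_with_drr(protected, drr)     # XOR is self-inverse
assert np.array_equal(restored, slices)
```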

Extended Modality Propagation: Image Synthesis of Pathological Cases

  • Cordier N
  • Delingette H
  • Le M
  • Ayache N
IEEE Transactions on Medical Imaging 2016 Journal Article, cited 18 times
Website

Robust Semantic Segmentation of Brain Tumor Regions from 3D MRIs

  • Myronenko, Andriy
  • Hatamizadeh, Ali
2020 Book Section, cited 0 times
The multimodal brain tumor segmentation challenge (BraTS) brings together researchers to improve automated methods for 3D MRI brain tumor segmentation. Tumor segmentation is one of the fundamental vision tasks necessary for diagnosis and treatment planning of the disease. Previous years' winning methods were all deep-learning based, thanks to the advent of modern GPUs, which allow fast optimization of deep convolutional neural network architectures. In this work, we explore best practices of 3D semantic segmentation, including a conventional encoder-decoder architecture as well as combined loss functions, in an attempt to further improve the segmentation accuracy. We evaluate the method on the BraTS 2019 challenge.
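
One common form of the "combined loss" mentioned here is soft Dice plus cross-entropy; the following PyTorch sketch is a generic formulation (the weighting and smoothing constants are assumptions, not the authors' settings):

```python
# Soft Dice + cross-entropy loss for 3D multi-class segmentation.
import torch
import torch.nn.functional as F

def dice_ce_loss(logits, target, smooth=1e-5, ce_weight=1.0):
    # logits: (B, C, D, H, W); target: (B, D, H, W) integer class labels
    num_classes = logits.shape[1]
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes).permute(0, 4, 1, 2, 3).float()
    dims = (0, 2, 3, 4)
    intersect = (probs * onehot).sum(dims)
    denom = probs.sum(dims) + onehot.sum(dims)
    dice = (2 * intersect + smooth) / (denom + smooth)   # per-class soft Dice
    return (1 - dice.mean()) + ce_weight * F.cross_entropy(logits, target)
```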

Recommendations for Processing Head CT Data

  • Muschelli, J.
Frontiers in Neuroinformatics 2019 Journal Article, cited 0 times
Website
Many research applications of neuroimaging use magnetic resonance imaging (MRI). As such, recommendations for image analysis and standardized imaging pipelines exist. Clinical imaging, however, relies heavily on X-ray computed tomography (CT) scans for diagnosis and prognosis. Currently, there is only one image processing pipeline for head CT, which focuses mainly on head CT data with lesions. We present tools and a complete pipeline for processing CT data, built on open-source solutions, that focus on head CT but are applicable to most CT analyses. We describe going from raw DICOM data to a spatially normalized brain CT, presenting a full example with code. Overall, we recommend anonymizing data with Clinical Trials Processor, converting DICOM data to NIfTI using dcm2niix, using BET for brain extraction, and registration using a publicly available CT template for analysis.
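
The recommended steps can be scripted by wrapping the command-line tools named above; the sketch below uses dcm2niix and FSL's bet, with paths and the fractional-intensity value as illustrative assumptions (anonymisation with Clinical Trials Processor and the CT-specific intensity windowing would happen before these calls):

```python
# Wrap dcm2niix (DICOM -> compressed NIfTI) and FSL BET (brain extraction).
import subprocess

dicom_dir = "raw_dicom/"   # hypothetical input directory
out_dir = "nifti/"

# Convert DICOM to compressed NIfTI
subprocess.run(["dcm2niix", "-z", "y", "-o", out_dir, dicom_dir], check=True)

# Brain extraction; CT typically needs prior windowing/smoothing and a low -f
subprocess.run(["bet", out_dir + "head_ct.nii.gz",
                out_dir + "brain.nii.gz", "-f", "0.01"], check=True)
```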

Multidimensional and Multiresolution Ensemble Networks for Brain Tumor Segmentation

  • Murugesan, Gowtham Krishnan
  • Nalawade, Sahil
  • Ganesh, Chandan
  • Wagner, Ben
  • Yu, Fang F.
  • Fei, Baowei
  • Madhuranthakam, Ananth J.
  • Maldjian, Joseph A.
2021 Book Section, cited 0 times
In this work, we developed multiple 2D and 3D segmentation models with multiresolution input to segment brain tumor components and then ensembled them to obtain robust segmentation maps. Ensembling reduced overfitting and resulted in a more generalized model. Multiparametric MR images of 335 subjects from the BraTS 2019 challenge were used for training the models. Further, we tested a classical machine learning algorithm with features extracted from the segmentation maps to classify subject survival range. Preliminary results on the BraTS 2019 validation dataset demonstrated excellent performance, with Dice scores of 0.898, 0.784, and 0.779 for the whole tumor (WT), tumor core (TC), and enhancing tumor (ET), respectively, and an accuracy of 34.5% for predicting survival. The ensemble of multiresolution 2D networks achieved Dice scores of 88.75%, 83.28%, and 79.34% for WT, TC, and ET, respectively, on a test dataset of 166 subjects.
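
The ensembling step itself reduces to averaging the per-model class probability maps and taking the voxel-wise argmax; a minimal NumPy sketch (shapes are assumptions):

```python
# Average softmax probability maps from several models, then argmax per voxel.
import numpy as np

def ensemble_segmentation(prob_maps):
    # prob_maps: list of arrays, each (C, D, H, W) of softmax probabilities
    mean_probs = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return np.argmax(mean_probs, axis=0)        # (D, H, W) label map

maps = [np.random.dirichlet(np.ones(4), size=(8, 8, 8)).transpose(3, 0, 1, 2)
        for _ in range(3)]                      # three toy 4-class models
labels = ensemble_segmentation(maps)
```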

An Investigative Study of Shallow, Deep and Dense Learning Models for Breast Cancer Detection based on Microcalcifications

  • Murthy, D. Sudarsana
  • Prasad, V. Siva
  • Aman, K.
  • Kumar Reddy, Madduru Poojith
  • Madhavi, K. Reddy
  • Sunitha, Gurram
2022 Conference Paper, cited 0 times
Early cancer diagnosis, detection, and treatment continue to be a mammoth task because of many challenges such as social and cultural myths, economic conditions, access to healthcare services, healthcare practices, and the availability of expert oncologists. Mammography is a successful screening method for breast cancer detection. Mammography captures multiple features like masses and microcalcifications. Microcalcifications may indicate breast cancer in its early stages and are considered to play a crucial role in early breast cancer diagnosis. In this paper, we have undertaken an investigative study of breast cancer classification by automated learning from mammography images with microcalcifications. Three types of convolutional neural architectures, shallow (ResNet101), deep (VGG101), and dense (DenseNet101) learning models, are employed in this investigative study towards the objective of rapid and early breast cancer diagnosis. To improve the accuracies of the learning models, the features extracted from microcalcifications have been fed to the learning models. We have experimented with varying hyperparameter setups and have recorded the optimal performance of the three models. Among the three models, ResNet101 demonstrated the best performance in benign versus malignant cancer classification, at 94.2%, and also demonstrated the best performance in terms of time complexity. The dense model DenseNet101 was more sensitive and specific towards the classification of breast cancer using the microcalcifications. VGG101 performed well, with nearly optimal results compared to ResNet101, at 93.6%.

Multi Modal Fusion for Radiogenomics Classification of Brain Tumor

  • Mun, Timothy Sum Hon
  • Doran, Simon
  • Huang, Paul
  • Messiou, Christina
  • Blackledge, Matthew
2022 Conference Paper, cited 0 times
Website
Glioblastomas are the most common and aggressive malignant primary tumors of the central nervous system in adults. The tumours are quite heterogeneous in their shape, texture, and histology. Patients who have been diagnosed with glioblastoma typically have low survival rates, and it can take weeks to perform a genetic analysis of an extracted tissue sample. An effective way to diagnose glioblastomas through imaging and AI techniques could improve patients' quality of life through better planning of the required therapy and surgery. This work is part of the Brain Tumor Segmentation (BraTS) 2021 challenge. The challenge is to predict the MGMT promoter methylation status from multi-modal MRI data. We propose a multi-modal late-fusion 3D classification network for brain tumor classification on 3D MRI images that uses all four modalities (T1w, T1wCE, T2w, FLAIR) and can also be extended to include radiomics features or other external features. We then compare it against 3D classification models trained on each image modality individually and ensembled during inference.
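
A hedged sketch of a late-fusion design in this spirit: one small 3D encoder per modality, with the features concatenated before the classification head. The encoder depth and feature width below are assumptions, not the authors' architecture.

```python
# Late fusion: per-modality 3D encoders, concatenated features, single head.
import torch
import torch.nn as nn

def encoder():
    return nn.Sequential(
        nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool3d(1), nn.Flatten())          # -> 16 features

class LateFusionNet(nn.Module):
    def __init__(self, n_modalities=4):
        super().__init__()
        self.encoders = nn.ModuleList(encoder() for _ in range(n_modalities))
        self.head = nn.Linear(16 * n_modalities, 1)     # MGMT status logit

    def forward(self, volumes):                         # list of (B,1,D,H,W)
        feats = [enc(v) for enc, v in zip(self.encoders, volumes)]
        return self.head(torch.cat(feats, dim=1))

model = LateFusionNet()
x = [torch.randn(2, 1, 32, 32, 32) for _ in range(4)]  # T1w, T1wCE, T2w, FLAIR
logit = model(x)                                        # (2, 1)
```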

Prognostic relevance of CSF and peri-tumoral edema volumes in glioblastoma

  • Mummareddy, Nishit
  • Salwi, Sanjana R
  • Kumar, Nishant Ganesh
  • Zhao, Zhiguo
  • Ye, Fei
  • Le, Chi H
  • Mobley, Bret C
  • Thompson, Reid C
  • Chambless, Lola B
  • Mistry, Akshitkumar M
Journal of Clinical Neuroscience 2021 Journal Article, cited 0 times
Website

Reliability as a Precondition for Trust-Segmentation Reliability Analysis of Radiomic Features Improves Survival Prediction

  • Muller-Franzes, G.
  • Nebelung, S.
  • Schock, J.
  • Haarburger, C.
  • Khader, F.
  • Pedersoli, F.
  • Schulze-Hagen, M.
  • Kuhl, C.
  • Truhn, D.
Diagnostics (Basel) 2022 Journal Article, cited 0 times
Website
Machine learning results based on radiomic analysis are often not transferable. A potential reason for this is the variability of radiomic features due to varying human-made segmentations. Therefore, the aim of this study was to provide a comprehensive inter-reader reliability analysis of radiomic features in five clinical image datasets and to assess the association between inter-reader reliability and survival prediction. In this study, we analyzed 4598 tumor segmentations in both computed tomography and magnetic resonance imaging data. We used a neural network to generate 100 additional segmentation outlines for each tumor and performed a reliability analysis of radiomic features. To prove clinical utility, we predicted patient survival based on all features and on the most reliable features. Survival prediction models for both computed tomography and magnetic resonance imaging datasets demonstrated less statistical spread and superior survival prediction when based on the most reliable features. Mean concordance indices were C(mean) = 0.58 [most reliable] vs. C(mean) = 0.56 [all] (p < 0.001, CT) and C(mean) = 0.58 vs. C(mean) = 0.57 (p = 0.23, MRI). Thus, preceding reliability analyses and selection of the most reliable radiomic features improve the underlying model's ability to predict patient survival across clinical imaging modalities and tumor entities.
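
A reliability screen of this kind can be built on the one-way random-effects intraclass correlation, treating the generated outlines as raters; the sketch below implements the standard ICC(1,1) formula, with the 0.9 selection threshold as an illustrative assumption:

```python
# One-way random-effects ICC(1,1): rows = tumors, columns = outline variants.
import numpy as np

def icc_1_1(ratings):
    ratings = np.asarray(ratings, float)    # shape (n_targets, k_raters)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)               # between
    msw = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))  # within
    return (msb - msw) / (msb + (k - 1) * msw)

# Keep a radiomic feature only if its ICC across outlines is high, e.g. >= 0.9.
vals = np.random.randn(30, 1)[:, None] + 0.05 * np.random.randn(30, 100, 1)
print(icc_1_1(vals[:, :, 0]))               # toy feature matrix, high ICC
```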

Fibroglandular tissue segmentation in breast MRI using vision transformers: a multi-institutional evaluation

  • Muller-Franzes, G.
  • Muller-Franzes, F.
  • Huck, L.
  • Raaff, V.
  • Kemmer, E.
  • Khader, F.
  • Arasteh, S. T.
  • Lemainque, T.
  • Kather, J. N.
  • Nebelung, S.
  • Kuhl, C.
  • Truhn, D.
2023 Journal Article, cited 0 times
Website
Accurate and automatic segmentation of fibroglandular tissue in breast MRI screening is essential for the quantification of breast density and background parenchymal enhancement. In this retrospective study, we developed and evaluated a transformer-based neural network for breast segmentation (TraBS) in multi-institutional MRI data, and compared its performance to the well-established convolutional neural network nnUNet. TraBS and nnUNet were trained and tested on 200 internal and 40 external breast MRI examinations using manual segmentations generated by experienced human readers. Segmentation performance was assessed in terms of the Dice score and the average symmetric surface distance. The Dice score for nnUNet was lower than for TraBS on the internal test set (0.909 +/- 0.069 versus 0.916 +/- 0.067, P < 0.001) and on the external test set (0.824 +/- 0.144 versus 0.864 +/- 0.081, P = 0.004). Moreover, the average symmetric surface distance was higher (i.e., worse) for nnUNet than for TraBS on the internal (0.657 +/- 2.856 versus 0.548 +/- 2.195, P = 0.001) and on the external test set (0.727 +/- 0.620 versus 0.584 +/- 0.413, P = 0.03). Our study demonstrates that transformer-based networks improve the quality of fibroglandular tissue segmentation in breast MRI compared to convolutional models like nnUNet. These findings might help to enhance the accuracy of breast density and parenchymal enhancement quantification in breast MRI screening.
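
For reference, both reported metrics have compact NumPy/SciPy formulations; the sketch below assumes binary masks and isotropic unit voxels (a real evaluation should account for the true voxel spacing):

```python
# Dice score and average symmetric surface distance (ASSD) for binary masks.
import numpy as np
from scipy import ndimage

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def assd(a, b):
    a, b = a.astype(bool), b.astype(bool)
    sa = a ^ ndimage.binary_erosion(a)          # surface = mask minus erosion
    sb = b ^ ndimage.binary_erosion(b)
    da = ndimage.distance_transform_edt(~sb)[sa]  # distances A-surface -> B
    db = ndimage.distance_transform_edt(~sa)[sb]  # distances B-surface -> A
    return (da.sum() + db.sum()) / (len(da) + len(db))
```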

Radiological Reports Improve Pre-training for Localized Imaging Tasks on Chest X-Rays

  • Müller, Philip
  • Kaissis, Georgios
  • Zou, Congyu
  • Rueckert, Daniel
2022 Conference Proceedings, cited 0 times
Website

Robust chest CT image segmentation of COVID-19 lung infection based on limited data

  • Muller, D.
  • Soto-Rey, I.
  • Kramer, F.
Inform Med Unlocked 2021 Journal Article, cited 0 times
Website
Background: The coronavirus disease 2019 (COVID-19) affects billions of lives around the world and has a significant impact on public healthcare. For quantitative assessment and disease monitoring, medical imaging like computed tomography offers great potential as an alternative to RT-PCR methods. For this reason, automated image segmentation is highly desired as clinical decision support. However, publicly available COVID-19 imaging data are limited, which leads to overfitting of traditional approaches. Methods: To address this problem, we propose an innovative automated segmentation pipeline for COVID-19 infected regions, which is able to handle small datasets by utilizing them as variant databases. Our method focuses on on-the-fly generation of unique and random image patches for training by performing several preprocessing methods and exploiting extensive data augmentation. For further reduction of the overfitting risk, we implemented a standard 3D U-Net architecture instead of new or computationally complex neural network architectures. Results: Through a k-fold cross-validation on 20 CT scans for training and validation, we were able to develop a highly accurate as well as robust segmentation model for lungs and COVID-19 infected regions without overfitting on limited data. We performed an in-detail analysis and discussion of the robustness of our pipeline through a sensitivity analysis based on the cross-validation, and of the impact of the applied preprocessing techniques on model generalizability. Our method achieved Dice similarity coefficients for COVID-19 infection, between predicted and radiologist-annotated segmentations, of 0.804 on validation and 0.661 on a separate testing set consisting of 100 patients. Conclusions: We demonstrated that the proposed method outperforms related approaches, advances the state of the art for COVID-19 segmentation, and improves robust medical image analysis based on limited data.
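
The on-the-fly patch generation can be sketched as a random 3D crop followed by simple random augmentations; the patch size and augmentation set below are assumptions, not the exact pipeline of the paper:

```python
# Random 3D patch sampling with flips and mild noise for training.
import numpy as np

def random_patch(volume, mask, size=(64, 64, 64), rng=np.random):
    starts = [rng.randint(0, s - p + 1) for s, p in zip(volume.shape, size)]
    sl = tuple(slice(s, s + p) for s, p in zip(starts, size))
    patch, label = volume[sl], mask[sl]
    for ax in range(3):                        # random axis flips
        if rng.rand() < 0.5:
            patch, label = np.flip(patch, ax), np.flip(label, ax)
    if rng.rand() < 0.5:                       # mild Gaussian noise on image
        patch = patch + rng.normal(0, 0.01, patch.shape)
    return patch.copy(), label.copy()
```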

Lung Nodule Segmentation for Explainable AI-Based Cancer Screening

  • Muley, Atharva
2021 Thesis, cited 0 times
Website
We present a novel approach for segmentation and identification of lung nodules in CT scans, for the purpose of explainable-AI-assisted screening. Our segmentation approach combines the U-Net segmentation architecture with a graph-based connected component analysis for false-positive nodule identification. CADe systems with a high true-nodule detection rate and few false-positive nodules are desired. We also develop a 3D nodule dataset that can be used to build explainable classification models for nodule malignancy and biomarker estimation. We train and evaluate the segmentation model based on the percentage of true nodules identified within the LIDC dataset, which contains 1018 CT scans and nodule annotations marked by four board-certified radiologists. We further present results of the segmentation and nodule filtering algorithm and a description of the generated 3D nodule dataset.
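
A connected-component false-positive filter of the kind described can be sketched with SciPy; the minimum-size threshold is an illustrative assumption:

```python
# Label connected regions in the binary prediction and drop tiny candidates.
import numpy as np
from scipy import ndimage

def filter_candidates(binary_pred, min_voxels=10):
    labels, n = ndimage.label(binary_pred)     # 3D connected components
    keep = np.zeros_like(binary_pred, dtype=bool)
    for i in range(1, n + 1):
        component = labels == i
        if component.sum() >= min_voxels:      # reject implausibly small FPs
            keep |= component
    return keep
```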

Multi-Class Classification Framework for Brain Tumor MR Image Classification by Using Deep CNN with Grid-Search Hyper Parameter Optimization Algorithm

  • Mukkapati, Naveen
  • Anbarasi, MS
2022 Journal Article, cited 0 times
Website
Histopathological analysis of biopsy specimens is still used today for diagnosing and classifying brain tumors. The available procedures are intrusive, time consuming, and prone to human error. To overcome these disadvantages, there is a need for a fully automated deep learning-based model to classify brain tumors into multiple classes. The proposed CNN model achieves an accuracy of 92.98% in categorizing tumors into five classes: normal, glioma, meningioma, pituitary tumor, and metastatic tumor. Using the grid-search optimization approach, all of the critical hyperparameters of the suggested CNN framework were assigned automatically. AlexNet, Inception v3, ResNet-50, VGG-16, and GoogLeNet are examples of cutting-edge CNN models that are compared to the suggested CNN model. Using large, publicly available clinical datasets, satisfactory classification results were produced. Physicians and radiologists can use the suggested CNN model to confirm their initial screening for brain tumor multi-class classification.

A shallow convolutional neural network predicts prognosis of lung cancer patients in multi-institutional computed tomography image datasets

  • Mukherjee, Pritam
  • Zhou, Mu
  • Lee, Edward
  • Schicht, Anne
  • Balagurunathan, Yoganand
  • Napel, Sandy
  • Gillies, Robert
  • Wong, Simon
  • Thieme, Alexander
  • Leung, Ann
  • Gevaert, Olivier
Nature Machine Intelligence 2020 Journal Article, cited 0 times
Website
Lung cancer is the most common fatal malignancy in adults worldwide, and non-small-cell lung cancer (NSCLC) accounts for 85% of lung cancer diagnoses. Computed tomography is routinely used in clinical practice to determine lung cancer treatment and assess prognosis. Here, we developed LungNet, a shallow convolutional neural network for predicting outcomes of patients with NSCLC. We trained and evaluated LungNet on four independent cohorts of patients with NSCLC from four medical centres: Stanford Hospital (n = 129), H. Lee Moffitt Cancer Center and Research Institute (n = 185), MAASTRO Clinic (n = 311) and Charité – Universitätsmedizin, Berlin (n = 84). We show that outcomes from LungNet are predictive of overall survival in all four independent survival cohorts as measured by concordance indices of 0.62, 0.62, 0.62 and 0.58 on cohorts 1, 2, 3 and 4, respectively. Furthermore, the survival model can be used, via transfer learning, for classifying benign versus malignant nodules on the Lung Image Database Consortium (n = 1,010), with improved performance (AUC = 0.85) versus training from scratch (AUC = 0.82). LungNet can be used as a non-invasive predictor for prognosis in patients with NSCLC and can facilitate interpretation of computed tomography images for lung cancer stratification and prognostication.

CT-based Radiomic Signatures for Predicting Histopathologic Features in Head and Neck Squamous Cell Carcinoma

  • Mukherjee, Pritam
  • Cintra, Murilo
  • Huang, Chao
  • Zhou, Mu
  • Zhu, Shankuan
  • Colevas, A Dimitrios
  • Fischbein, Nancy
  • Gevaert, Olivier
Radiol Imaging Cancer 2020 Journal Article, cited 0 times
Website
Purpose: To determine the performance of CT-based radiomic features for noninvasive prediction of histopathologic features of tumor grade, extracapsular spread, perineural invasion, lymphovascular invasion, and human papillomavirus status in head and neck squamous cell carcinoma (HNSCC). Materials and Methods: In this retrospective study, which was approved by the local institutional ethics committee, CT images and clinical data from patients with pathologically proven HNSCC from The Cancer Genome Atlas (n = 113) and an institutional test cohort (n = 71) were analyzed. A machine learning model was trained with 2131 extracted radiomic features to predict tumor histopathologic characteristics. In the model, principal component analysis was used for dimensionality reduction, and regularized regression was used for classification. Results: The trained radiomic model demonstrated moderate capability of predicting HNSCC features. In the training cohort and the test cohort, the model achieved a mean area under the receiver operating characteristic curve (AUC) of 0.75 (95% confidence interval [CI]: 0.68, 0.81) and 0.66 (95% CI: 0.45, 0.84), respectively, for tumor grade; a mean AUC of 0.64 (95% CI: 0.55, 0.62) and 0.70 (95% CI: 0.47, 0.89), respectively, for perineural invasion; a mean AUC of 0.69 (95% CI: 0.56, 0.81) and 0.65 (95% CI: 0.38, 0.87), respectively, for lymphovascular invasion; a mean AUC of 0.77 (95% CI: 0.65, 0.88) and 0.67 (95% CI: 0.15, 0.80), respectively, for extracapsular spread; and a mean AUC of 0.71 (95% CI: 0.29, 1.0) and 0.80 (95% CI: 0.65, 0.92), respectively, for human papillomavirus status. Conclusion: Radiomic CT models have the potential to predict characteristics typically identified on pathologic assessment of HNSCC.
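
The model family described (PCA for dimensionality reduction feeding a regularized regression classifier) maps directly onto a scikit-learn pipeline; the number of retained components and the penalty strength below are assumptions:

```python
# PCA + regularized logistic regression, as a generic radiomics classifier.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),                  # keep 95% of variance (assumed)
    LogisticRegression(penalty="l2", C=1.0, max_iter=1000))

# clf.fit(radiomic_features, labels)         # e.g. 2131 features per tumor
# probs = clf.predict_proba(test_features)[:, 1] for ROC/AUC evaluation
```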

Progesterone Receptor Status Analysis in Breast Cancer Patients using DCE-MR Images and Gabor Derived Anisotropy Index

  • Moyya, Priscilla Dinkar
  • Asaithambi, Mythili
  • Ramaniharan, Anandh Kilpattu
2022 Conference Paper, cited 0 times
Website
Hormone receptors play a key role in female breast cancers as predictive biomarkers. The breast cancer subtype with Progesterone receptor (PgR) expression is one of the important hormone-receptor subtypes in predicting prognosis and evaluating the response to Neoadjuvant Chemotherapy (NAC). PgR (-) breast cancers are associated with a higher response to NAC compared to PgR (+) breast cancers. Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) is a widely used imaging modality for assessing the NAC response in patients. However, evaluating the treatment response of PgR breast cancers is complicated and challenging, since breast cancers with different receptor statuses respond differently to NAC. Therefore, in this work, an attempt has been made to differentiate PgR (+) and PgR (-) breast cancer patients under NAC using a Gabor-derived Anisotropy Index (AI). A total of 50 PgR (+) and 63 PgR (-) DCE-MR images at 4 time points of NAC treatment are considered from the openly available I-SPY1 collection of the TCIA database. AI is calculated within the PgR status groups from Gabor energies that are acquired after designing a Gabor filter bank with 5 scales and 7 orientations. Results demonstrate that the AI values can significantly differentiate PgR (+) and PgR (-) breast cancer patients (p ≤ 0.05) under NAC. The mean AI values are observed to be higher in PgR (+) patients (4.14E+10 ± 1.17E+11) than in PgR (-) patients (1.95E+10 ± 8.06E+10). AI could statistically differentiate visit 1 and visit 4 of NAC treatment in both PgR status groups, with p-values of 0.0246 and 0.0387 respectively. Further, the percentage difference in the mean AI value is observed to be higher in PgR (-) than in PgR (+) subjects between Visits 1 vs 4, 2 vs 4, 1 vs 3, and 2 vs 3. Hence, AI could be used as a single index value in assessing the treatment response in both PgR (+) and PgR (-) subjects.
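
The 5-scale by 7-orientation Gabor energy computation can be sketched with scikit-image; note that the paper does not spell out its AI formula here, so the ratio at the end (orientation variance over mean energy) is only an illustrative assumption:

```python
# Gabor energies over a 5-scale x 7-orientation filter bank.
import numpy as np
from skimage.filters import gabor

def gabor_energies(image, scales=(0.05, 0.1, 0.2, 0.3, 0.4), n_orient=7):
    energies = np.zeros((len(scales), n_orient))
    for i, freq in enumerate(scales):          # frequencies stand in for scales
        for j in range(n_orient):
            theta = j * np.pi / n_orient
            real, imag = gabor(image, frequency=freq, theta=theta)
            energies[i, j] = np.sum(real ** 2 + imag ** 2)
    return energies

def anisotropy_index(energies):                # hypothetical AI formulation
    return energies.var(axis=1).mean() / energies.mean()
```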

Quantitative Analysis of Breast Cancer NACT Response on DCE-MRI Using Gabor Filter Derived Radiomic Features

  • Moyya, Priscilla Dinkar
  • Asaithambi, Mythili
2022 Journal Article, cited 0 times
Website
In this work, an attempt has been made to quantify the treatment response due to Neoadjuvant Chemotherapy (NACT) on the publicly available QIN-Breast collection of the TCIA database (N = 25) using Gabor filter derived radiomic features. The Gabor filter bank is constructed using 5 different scales and 7 different orientations. Different radiomic features were extracted from Gabor-filtered Dynamic Contrast Enhanced Magnetic Resonance images (DCE-MRI) of patients at 3 different visits (Visit 1: before, Visit 2: after the 1st cycle, and Visit 3: after the last cycle of NACT). The extracted radiomic features were analyzed statistically, and the Area Under the Receiver Operating Characteristic curve (AUROC) was calculated. Results show that the Gabor-derived radiomic features could capture the pathological differences among all three visits. Energy showed a significant difference across all orientations, particularly between Visits 2 & 3. Entropy from λ=2 and θ=30° between Visits 2 & 3, and Skewness from λ=2 and θ=120° between Visits 1 & 3, could differentiate the treatment response with high statistical significance of p=0.006 and 0.001 respectively. From the ROC analysis, the better predictors were Short Run Emphasis (SRE), Short Zone Emphasis (SZE), and Energy between Visits 1 & 3, achieving AUROCs of 76.38%, 75.16%, and 71.10% respectively. Further, the results suggest that the radiomic features are capable of quantitatively comparing breast NACT response across the multi-oriented Gabor filters.

Multi-channel auto-encoders for learning domain invariant representations enabling superior classification of histopathology images

  • Moyes, A.
  • Gault, R.
  • Zhang, K.
  • Ming, J.
  • Crookes, D.
  • Wang, J.
Med Image Anal 2022 Journal Article, cited 0 times
Website
Domain shift is a problem commonly encountered when developing automated histopathology pipelines. The performance of machine learning models such as convolutional neural networks within automated histopathology pipelines is often diminished when applying them to novel data domains, due to factors arising from differing staining and scanning protocols. The Dual-Channel Auto-Encoder (DCAE) model was previously shown to produce feature representations that are less sensitive to appearance variation introduced by different digital slide scanners. In this work, the Multi-Channel Auto-Encoder (MCAE) model is presented as an extension to DCAE which learns from more than two domains of data. Experimental results show that the MCAE model produces feature representations that are less sensitive to inter-domain variations than the comparative StaNoSA method when tested on a novel synthetic dataset. This was apparent when applying the MCAE, DCAE, and StaNoSA models to three different classification tasks from unseen domains, where the MCAE model outperformed the other models. These results show that the MCAE model is able to generalise better to novel data, including data from unseen domains, than existing approaches, by actively learning normalised feature representations.

Optimization Methods for Medical Image Super Resolution Reconstruction

  • Moustafa, Marwa
  • Ebied, Hala M.
  • Helmy, Ashraf
  • Nazamy, Taymoor M.
  • Tolba, Mohamed F.
2016 Book Section, cited 0 times
Website
Super-resolution (SR) concentrates on constructing a high-resolution (HR) image of a scene from two or more sets of low-resolution (LR) images of the same scene. It is the process of combining a sequence of low-resolution, noisy, blurred images to produce a higher-resolution image. The reconstruction of high-resolution images is computationally expensive. SR is an inverse problem that is well known to be ill-conditioned. The SR problem has been reformulated using optimization techniques to find a solution that is a close approximation of the true scene and less sensitive to errors in the observed images. This paper reviews optimized SR reconstruction approaches and highlights their challenges and limitations. An experiment has been conducted to compare bicubic interpolation, iterative back-projection (IBP), projection onto convex sets (POCS), total variation (TV), and gradient descent via sparse representation. The experimental results show that gradient descent via sparse representation outperforms the other optimization techniques.
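
Of the compared methods, IBP is the easiest to sketch: repeatedly simulate the LR image from the current HR estimate and back-project the error. The Gaussian blur/decimation observation model and step size below are assumptions:

```python
# Minimal iterative back-projection (IBP) for single-image super-resolution.
import numpy as np
from scipy import ndimage

def ibp_super_resolve(lr, scale=2, iters=20, step=1.0):
    hr = ndimage.zoom(lr, scale, order=3)             # initial HR estimate
    for _ in range(iters):
        simulated = ndimage.gaussian_filter(hr, 1.0)[::scale, ::scale]
        error = lr - simulated                        # LR-domain residual
        hr += step * ndimage.zoom(error, scale, order=1)  # back-project
    return hr
```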

Automatic Classification of Simulated Breast Tomosynthesis Whole Images for the Presence of Microcalcification Clusters Using Deep CNNs

  • Mota, A. M.
  • Clarkson, M. J.
  • Almeida, P.
  • Matela, N.
2022 Journal Article, cited 0 times
Website
Microcalcification clusters (MCs) are among the most important biomarkers for breast cancer, especially in cases of nonpalpable lesions. The vast majority of deep learning studies on digital breast tomosynthesis (DBT) are focused on detecting and classifying lesions, especially soft-tissue lesions, in small regions of interest previously selected. Only about 25% of the studies are specific to MCs, and all of them are based on the classification of small preselected regions. Classifying the whole image according to the presence or absence of MCs is a difficult task due to the size of MCs and all the information present in an entire image. A completely automatic and direct classification, which receives the entire image without prior identification of any regions, is crucial for the usefulness of these techniques in a real clinical and screening environment. The main purpose of this work is to implement and evaluate the performance of convolutional neural networks (CNNs) for the automatic classification of a complete DBT image by the presence or absence of MCs (without any prior identification of regions). In this work, four popular deep CNNs are trained and compared with a new architecture proposed by us. The main task of this training was the classification of DBT cases by the absence or presence of MCs. A public database of realistic simulated data was used, and the whole DBT image was taken into account as input. DBT data were considered without and with preprocessing (to study the impact of noise reduction and contrast enhancement methods on the evaluation of MCs with CNNs). The area under the receiver operating characteristic curve (AUC) was used to evaluate the performance. Very promising results were achieved, with a maximum AUC of 94.19% for GoogLeNet. The second-best AUC value was obtained with a newly implemented network, CNN-a, with 91.17%. This CNN had the particularity of also being the fastest, making it a very interesting model to be considered in other studies. With this work, encouraging outcomes were achieved, with results similar to those of other studies for the detection of larger lesions such as masses. Moreover, given the difficulty of visualizing MCs, which are often spread over several slices, this work may have an important impact on the clinical analysis of DBT images.

LibHip: An open-access hip joint model repository suitable for finite element method simulation

  • Moshfeghifar, Faezeh
  • Gholamalizadeh, Torkan
  • Ferguson, Zachary
  • Schneider, Teseo
  • Nielsen, Michael Bachmann
  • Panozzo, Daniele
  • Darkner, Sune
  • Erleben, Kenny
Computer Methods and Programs in Biomedicine 2022 Journal Article, cited 1 times
Website
Background and objective: population-based finite element analysis of hip joints allows us to understand the effect of inter-subject variability on simulation results. Developing large subject-specific population models is challenging and requires extensive manual effort. Thus, the anatomical representations are often subjected to simplification. The discretized geometries do not guarantee conformity in shared interfaces, leading to complications in setting up simulations. Additionally, these models are not openly accessible, challenging reproducibility. Our work provides multiple subject-specific hip joint finite element models and a novel semi-automated modeling workflow. Methods: we reconstruct 11 healthy subject-specific models, including the sacrum, the paired pelvic bones, the paired proximal femurs, the paired hip joints, the paired sacroiliac joints, and the pubic symphysis. The bones are derived from CT scans, and the cartilages are generated from the bone geometries. We generate the whole complex’s volume mesh with conforming interfaces. Our models are evaluated using both mesh quality metrics and simulation experiments. Results: the geometry of all the models are inspected by our clinical expert and show high-quality discretization with accurate geometries. The simulations produce smooth stress patterns, and the variance among the subjects highlights the effect of inter-subject variability and asymmetry in the predicted results. Conclusions: our work is one of the largest model repositories with respect to the number of subjects and regions of interest in the hip joint area. Our detailed research data, including the clinical images, the segmentation label maps, the finite element models, and software tools, are openly accessible on GitHub and the link is provided in Moshfeghifar et al. (2022) [1]. Our aim is to empower clinical researchers to have free access to verified and reproducible models. In future work, we aim to add additional structures to our models.

Machine Learning and Feature Selection Methods for EGFR Mutation Status Prediction in Lung Cancer

  • Morgado, Joana
  • Pereira, Tania
  • Silva, Francisco
  • Freitas, Cláudia
  • Negrão, Eduardo
  • de Lima, Beatriz Flor
  • da Silva, Miguel Correia
  • Madureira, António J.
  • Ramos, Isabel
  • Hespanhol, Venceslau
  • Costa, José Luis
  • Cunha, António
  • Oliveira, Hélder P.
Applied Sciences 2021 Journal Article, cited 0 times
Website
The evolution of personalized medicine has changed the therapeutic strategy from classical chemotherapy and radiotherapy to a genetic-modification targeted therapy, and although biopsy is the traditional method to genetically characterize lung cancer tumors, it is an invasive and painful procedure for the patient. Nodule image features extracted from computed tomography (CT) scans have been used to create machine learning models that predict gene mutation status in a noninvasive, fast, and easy-to-use manner. However, recent studies have shown that radiomic features extracted from an extended region of interest (ROI) beyond the tumor might be more relevant to predict the mutation status in lung cancer, and consequently may be used to significantly decrease the mortality rate of patients battling this condition. In this work, we investigated the relation between image phenotypes and the mutation status of Epidermal Growth Factor Receptor (EGFR), the most frequently mutated gene in lung cancer with several approved targeted therapies, using radiomic features extracted from the lung containing the nodule. A variety of linear, nonlinear, and ensemble predictive classification models, along with several feature selection methods, were used to classify the binary outcome of wild-type or mutant EGFR mutation status. The results show that a comprehensive approach using an ROI that included the lung with the nodule can capture relevant information and successfully predict the EGFR mutation status with increased performance compared to local nodule analyses. Linear Support Vector Machine, Elastic Net, and Logistic Regression, combined with the Principal Component Analysis feature selection method implemented with 70% of variance in the feature set, were the best-performing classifiers, reaching Area Under the Curve (AUC) values ranging from 0.725 to 0.737. This approach, which exploits a holistic analysis, indicates that information from more extensive regions of the lung containing the nodule allows a more complete lung cancer characterization and should be considered in future radiogenomic studies.

Deep Learning For Brain Tumor Segmentation

  • Moreno Lopez, Marc
2017 Thesis, cited 393 times
Website
In this work, we present a novel method to segment brain tumors using deep learning. Accurate brain tumor segmentation is key for a patient to get the right treatment and for the doctor who must perform surgery. Due to the genetic differences that exist between patients, even within the same kind of tumor, an accurate segmentation is crucial. To improve on state-of-the-art methods, we use deep learning, a branch of machine learning that attempts to model high-level abstractions in data and has provided major breakthroughs in many different areas, including segmentation. We use Convolutional Neural Networks (CNNs) and evaluate our results against the best results obtained in the Brain Tumor Segmentation Challenge (BraTS).

Impact of image preprocessing methods on reproducibility of radiomic features in multimodal magnetic resonance imaging in glioblastoma

  • Moradmand, Hajar
  • Aghamiri, Seyed Mahmoud Reza
  • Ghaderi, Reza
J Appl Clin Med Phys 2019 Journal Article, cited 0 times
Website
To investigate the effect of image preprocessing, with respect to intensity inhomogeneity correction and noise filtering, on the robustness and reproducibility of the radiomics features extracted from the Glioblastoma (GBM) tumor in multimodal MR images (mMRI). In this study, for each patient, 1461 radiomics features were extracted from GBM subregions (i.e., edema, necrosis, enhancement, and tumor) of mMRI (i.e., FLAIR, T1, T1C, and T2) volumes for five preprocessing combinations (in total 116,880 radiomics features). The robustness and reproducibility of the radiomics features were assessed under four comparisons: (a) baseline versus modified bias field; (b) baseline versus modified bias field followed by noise filtering; (c) baseline versus modified noise; and (d) baseline versus modified noise followed by bias field correction. The concordance correlation coefficient (CCC), dynamic range (DR), and interclass correlation coefficient (ICC) were used as metrics. Shape features and, subsequently, local binary pattern (LBP) filtered images were highly stable and reproducible against bias field correction and noise filtering in all measurements. Across all MRI modalities, necrosis regions (NC: n ~449/1461, 30%) had the highest number of highly robust features, with CCC and DR >= 0.9, in comparison with edema (ED: n ~296/1461, 20%), enhanced (EN: n ~281/1461, 19%) and active-tumor regions (TM: n ~254/1461, 17%). Furthermore, our results identified that the percentage of highly reproducible features with ICC >= 0.9 after bias field correction (23.2%), and after bias field correction followed by noise filtering (22.4%), was higher in contrast with noise smoothing and with noise smoothing followed by bias correction. These preliminary findings imply that preprocessing sequences can have a significant impact on the robustness and reproducibility of mMRI-based radiomics features, and that identification of generalizable and consistent preprocessing algorithms is a pivotal step before bringing radiomics biomarkers into the clinic for GBM patients.

Integration of Dynamic Multi-Atlas and Deep Learning Techniques to Improve Segmentation of the Prostate in MR Images

  • Moradi, Hamid
  • Foruzan, Amir Hossein
International Journal of Image and Graphics 2021 Journal Article, cited 0 times
Website
Accurate delineation of the prostate in MR images is an essential step for treatment planning and volume estimation of the organ. Prostate segmentation is a challenging task due to the organ's variable size and shape. Moreover, neighboring tissues have low contrast with the prostate. In this paper, we propose a robust and precise automatic algorithm to define the prostate's boundaries in MR images. First, we find the prostate's ROI by a deep neural network and decrease the input image's size. Next, a dynamic multi-atlas-based approach obtains the initial segmentation of the prostate. A watershed algorithm improves the initial segmentation at the next stage. Finally, an SSM algorithm keeps the result in the domain of allowable prostate shapes. The quantitative evaluation of 74 prostate volumes demonstrated that the proposed method yields a mean Dice coefficient of 0.83 +/- 0.05. In comparison with recent research, our algorithm is robust against shape and size variations. Keywords: Prostate segmentation; deep learning; watershed segmentation; probabilistic atlas; statistical shape modeling.

Evaluation of TP53/PIK3CA mutations using texture and morphology analysis on breast MRI

  • Moon, W. K.
  • Chen, H. H.
  • Shin, S. U.
  • Han, W.
  • Chang, R. F.
Magn Reson Imaging 2019 Journal Article, cited 0 times
Website
PURPOSE: Somatic mutations in the TP53 and PIK3CA genes, the two most frequent genetic alterations in breast cancer, are associated with prognosis and therapeutic response. This study predicted the presence of TP53 and PIK3CA mutations in breast cancer by using texture and morphology analyses on breast MRI. MATERIALS AND METHODS: A total of 107 breast cancers (dataset A) from The Cancer Imaging Archive (TCIA), consisting of 40 cancers with and 67 without TP53 mutation, and 35 with and 72 without PIK3CA mutation, and 122 breast cancers (dataset B) from Seoul National University Hospital, containing 54 cancers with TP53 mutation and 68 without, were used in this study. First, the tumor area was segmented by a region growing method. Subsequently, gray level co-occurrence matrix (GLCM) texture features were extracted after a ranklet transform, and a series of features including compactness, margin, and an ellipsoid fitting model were used to describe the morphological characteristics of tumors. Lastly, logistic regression was used to identify the presence of TP53 and PIK3CA mutations. The classification performance was evaluated by accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). Taking into account the trade-off between sensitivity and specificity, the overall performance was evaluated by using receiver operating characteristic (ROC) curve analysis. RESULTS: The GLCM texture features based on the ranklet transform were more capable of recognizing TP53 and PIK3CA mutations than morphological features, especially for the TP53 mutation, where the difference reached statistical significance. The area under the ROC curve (AUC) for TP53 mutation reached 0.78 and 0.81 for dataset A and dataset B, respectively. For PIK3CA mutation, the AUC of the ranklet texture features was 0.70. CONCLUSION: Texture analysis of the segmented tumor on breast MRI based on the ranklet transform has potential for recognizing the presence of TP53 and PIK3CA mutations.
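
The GLCM feature extraction step (here without the preceding ranklet transform) can be sketched with scikit-image; in older scikit-image versions the functions are spelled greycomatrix/greycoprops:

```python
# GLCM texture features on an 8-bit-quantised 2D tumor region.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(region):
    img = np.uint8(255 * (region - region.min()) / (np.ptp(region) + 1e-9))
    glcm = graycomatrix(img, distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)
    # average each property over the four angles
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```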

Glioma Tumor Grading Using Radiomics on Conventional MRI: A Comparative Study of WHO 2021 and WHO 2016 Classification of Central Nervous Tumors

  • Moodi, F.
  • Khodadadi Shoushtari, F.
  • Ghadimi, D. J.
  • Valizadeh, G.
  • Khormali, E.
  • Salari, H. M.
  • Ohadi, M. A. D.
  • Nilipour, Y.
  • Jahanbakhshi, A.
  • Rad, H. S.
2023 Journal Article, cited 0 times
Website
BACKGROUND: Glioma grading was transformed in the World Health Organization (WHO) 2021 CNS tumor classification, which integrates molecular markers. However, the impact of this change on radiomics-based machine learning (ML) classifiers remains unexplored. PURPOSE: To assess the performance of ML in classifying glioma tumor grades based on various WHO criteria. STUDY TYPE: Retrospective. SUBJECTS: A neuropathologist regraded gliomas of 237 patients into WHO 2016 and 2021 from the 2007 criteria. FIELD STRENGTH/SEQUENCE: Multicentric 0.5 to 3 Tesla; pre- and post-contrast T1-weighted, T2-weighted, and fluid-attenuated inversion recovery. ASSESSMENT: Radiomic features were selected using random forest-recursive feature elimination. The synthetic minority over-sampling technique (SMOTE) was implemented for data augmentation. Stratified 10-fold cross-validation with and without SMOTE was used to evaluate 11 classifiers for 3-grade (2, 3, and 4; WHO 2016 and 2021) and 2-grade (low and high grade; WHO 2007 and 2021) classification. Additionally, we developed the models on data randomly divided into training and test sets (mixed-data analysis), or on data divided based on the centers (independent-data analysis). STATISTICAL TESTS: We assessed ML classifiers using sensitivity, specificity, accuracy, and the area under the receiver operating characteristic curve (AUC). Top performances were compared with a t-test, and categorical data with the chi-square test, using a significance level of P < 0.05. RESULTS: In the mixed-data analysis, the Stacking Classifier without SMOTE achieved the highest accuracy (0.86) and AUC (0.92) in the 3-grade WHO 2021 grouping. The results for WHO 2021 were significantly better than for WHO 2016 (P < 0.0001). In the 2-grade analysis, ML achieved 1.00 in all metrics. In the independent-data analysis, ML classifiers showed strong discrimination between grades 2 and 4, despite lower performance metrics than in the mixed analysis. DATA CONCLUSION: ML algorithms performed better in glioma tumor grading based on the WHO 2021 criteria. Nonetheless, the clinical use of ML classifiers needs further investigation. LEVEL OF EVIDENCE: 3. TECHNICAL EFFICACY: Stage 2.
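
A key detail in cross-validation with SMOTE is to resample only inside the training folds; imbalanced-learn's Pipeline handles this automatically, as in the sketch below (estimator choices are illustrative assumptions):

```python
# Stratified 10-fold CV with SMOTE applied only to training folds.
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

pipe = Pipeline([
    ("smote", SMOTE(random_state=0)),          # fit-time resampling only
    ("clf", RandomForestClassifier(random_state=0)),
])

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
# scores = cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc")
```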

CNN models discriminating between pulmonary micro-nodules and non-nodules from CT images

  • Monkam, Patrice
  • Qi, Shouliang
  • Xu, Mingjie
  • Han, Fangfang
  • Zhao, Xinzhuo
  • Qian, Wei
BioMedical Engineering OnLine 2018 Journal Article, cited 1 times
Website

Informatics in Radiology: An Open-Source and Open-Access Cancer Biomedical Informatics Grid Annotation and Image Markup Template Builder

  • Mongkolwat, Pattanasak
  • Channin, David S
  • Kleper, Vladimir
  • Rubin, Daniel L
Radiographics 2012 Journal Article, cited 15 times
Website
In a routine clinical environment or clinical trial, a case report form or structured reporting template can be used to quickly generate uniform and consistent reports. Annotation and image markup (AIM), a project supported by the National Cancer Institute's cancer biomedical informatics grid, can be used to collect information for a case report form or structured reporting template. AIM is designed to store, in a single information source, (a) the description of pixel data with use of markups or graphical drawings placed on the image, (b) calculation results (which may or may not be directly related to the markups), and (c) supplemental information. To facilitate the creation of AIM annotations with data entry templates, an AIM template schema and an open-source template creation application were developed to assist clinicians, image researchers, and designers of clinical trials to quickly create a set of data collection items, thereby ultimately making image information more readily accessible.

Development of automatic generation system for lung nodule finding descriptions

  • Momoki, Y.
  • Ichinose, A.
  • Nakamura, K.
  • Iwano, S.
  • Kamiya, S.
  • Yamada, K.
  • Naganawa, S.
PLoS One 2024 Journal Article, cited 0 times
Website
Worldwide, lung cancer is the leading cause of cancer-related deaths. To manage lung nodules, radiologists observe computed tomography images, review various imaging findings, and record these in radiology reports. The report contents should be of high quality and uniform regardless of the radiologist. Here, we propose an artificial intelligence system that automatically generates descriptions related to lung nodules in computed tomography images. Our system consists of an image recognition method for extracting contents from images (namely, bronchopulmonary segments and nodule characteristics) and a natural language processing method to generate fluent descriptions. To verify our system's clinical usefulness, we conducted an experiment in which two radiologists created nodule descriptions of findings using our system. With our system, the similarity of the described contents between the two radiologists (p = 0.001) and the comprehensiveness of the contents (p = 0.025) improved, while the accuracy did not significantly deteriorate (p = 0.484).

Classification of malignant tumors by a non-sequential recurrent ensemble of deep neural network model

  • Moitra, D.
  • Mandal, R. K.
Multimed Tools Appl 2022 Journal Article, cited 0 times
Website
Many significant efforts have so far been made to classify malignant tumors using various machine learning methods. Most studies have considered a particular tumor genre categorized according to its originating organ. While this has enriched the domain-specific knowledge of malignant tumor prediction, we still lack an efficient model that can predict the stages of tumors irrespective of their origin. Thus, there is ample opportunity to study whether a heterogeneous collection of tumor images can be classified according to their respective stages. The present research work prepared a heterogeneous tumor dataset comprising eight different datasets from The Cancer Imaging Archive and classified the tumors according to their respective stages, as suggested by the American Joint Committee on Cancer. The proposed model was used to classify 717 subjects spanning different imaging modalities and varied Tumor-Node-Metastasis stages. A new non-sequential deep hybrid model ensemble was developed by exploiting branched and re-injected layers, followed by bidirectional recurrent layers, to classify tumor images. Results were compared with standard sequential deep learning models and notable recent studies. The training and validation accuracy, along with the ROC-AUC scores, compared favourably with existing models. No model or method in the literature has classified such a diversified mix of tumor images with such high accuracy. The proposed model may help radiologists by acting as an auxiliary decision support system and may speed up the tumor diagnosis process.
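
A hedged sketch of a non-sequential design in this spirit: a branched convolutional stem whose input is re-injected via a skip connection, followed by a bidirectional recurrent layer over the feature rows. Layer sizes and the four-class head are assumptions, not the authors' architecture.

```python
# Branched + re-injected layers feeding a bidirectional LSTM (Keras).
from tensorflow import keras
from tensorflow.keras import layers

inp = keras.Input(shape=(64, 64, 1))
a = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
b = layers.Conv2D(16, 5, padding="same", activation="relu")(inp)   # branch
x = layers.Concatenate()([a, b])
x = layers.Conv2D(1, 1, activation="relu")(x)
x = layers.Add()([x, inp])                       # re-inject the input
x = layers.Reshape((64, 64))(x)                  # treat rows as a sequence
x = layers.Bidirectional(layers.LSTM(32))(x)
out = layers.Dense(4, activation="softmax")(x)   # e.g. four stage groups
model = keras.Model(inp, out)
```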

Prediction of Non-small Cell Lung Cancer Histology by a Deep Ensemble of Convolutional and Bidirectional Recurrent Neural Network

  • Moitra, Dipanjan
  • Mandal, Rakesh Kumar
2020 Journal Article, cited 0 times

Automated AJCC (7th edition) staging of non-small cell lung cancer (NSCLC) using deep convolutional neural network (CNN) and recurrent neural network (RNN)

  • Moitra, Dipanjan
  • Mandal, Rakesh Kr
Health Inf Sci Syst 2019 Journal Article, cited 0 times
Website
Purpose: A large proportion of lung cancers are of the type non-small cell lung cancer (NSCLC). Both treatment planning and patients' prognosis depend greatly on factors like AJCC staging, which is an abstraction over TNM staging. Many significant efforts have so far been made towards automated staging of NSCLC, but the groundbreaking application of deep neural networks (DNNs) is yet to be observed in this domain of study. A DNN is capable of achieving a higher level of accuracy than traditional artificial neural networks (ANNs) as it uses the deeper layers of a convolutional neural network (CNN). The objective of the present study is to propose a simple yet fast CNN model, combined with a recurrent neural network (RNN), for automated AJCC staging of NSCLC, and to compare the outcome with a few standard machine learning algorithms along with a few similar studies. Methods: The NSCLC Radiogenomics collection from The Cancer Imaging Archive (TCIA) was considered for the study. The tumor images were refined and filtered by resizing, enhancing, de-noising, etc. The initial image processing phase was followed by texture-based image segmentation. The segmented images were fed into a hybrid feature detection and extraction model which comprised two sequential phases: maximally stable extremal regions (MSER) and speeded up robust features (SURF). After prolonged experimentation, the desired CNN-RNN model was derived and the extracted features were fed into the model. Results: The proposed CNN-RNN model almost outperformed the other machine learning algorithms under consideration. The accuracy remained steadily higher than in other contemporary studies. Conclusion: The proposed CNN-RNN model performed commendably during the study. Further studies may be carried out to refine the model and develop an improved auxiliary decision support system for oncologists and radiologists.
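
The MSER-then-SURF feature step can be sketched in a few lines of OpenCV; note that SURF is patented and is only available in opencv-contrib builds (it is absent from default OpenCV >= 4.4 wheels), and the file name below is a placeholder:

    import cv2

    img = cv2.imread("ct_slice.png", cv2.IMREAD_GRAYSCALE)  # placeholder slice

    mser = cv2.MSER_create()                  # maximally stable extremal regions
    regions, _ = mser.detectRegions(img)

    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # needs contrib build
    keypoints, descriptors = surf.detectAndCompute(img, None)
    print(len(regions), "MSER regions;", descriptors.shape[0], "SURF descriptors")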

Automated grading of non-small cell lung cancer by fuzzy rough nearest neighbour method

  • Moitra, Dipanjan
  • Mandal, Rakesh Kr
Network Modeling Analysis in Health Informatics and Bioinformatics 2019 Journal Article, cited 0 times
Lung cancer is one of the most lethal diseases across the world. Most lung cancers belong to the category of non-small cell lung cancer (NSCLC). Many studies have so far been carried out to avoid the hazards and bias of manual classification of NSCLC tumors. A few such studies were aimed at automated nodal staging using standard machine learning algorithms. Many others tried to classify tumors as either benign or malignant. None of these studies considered the pathological grading of NSCLC. Automated grading may perfectly depict the dissimilarity between normal tissue and cancer-affected tissue. Such automation may save patients from undergoing a painful biopsy and may also help radiologists or oncologists in grading the tumor or lesion correctly. The present study aims at the automated grading of NSCLC tumors using the fuzzy rough nearest neighbour (FRNN) method. The dataset was extracted from The Cancer Imaging Archive and comprised PET/CT images of NSCLC tumors of 211 patients. The features from accelerated segment test (FAST) and histogram of oriented gradients (HOG) methods were used to detect and extract features from the segmented images. Gray-level co-occurrence matrix (GLCM) features were also considered in the study. The features, along with the clinical grading information, were fed into four machine learning algorithms: FRNN, logistic regression, multi-layer perceptron, and support vector machine. The results were thoroughly compared in the light of various evaluation metrics. The confusion matrix was found balanced, and the outcome was found more cost-effective for FRNN. Results were also compared with various other leading studies done earlier in this field. The proposed FRNN model performed satisfactorily during the experiment. Further exploration of FRNN may be very helpful for radiologists and oncologists in planning the treatment for NSCLC. More varieties of cancers may be considered while conducting similar studies.
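
As a rough sketch of the texture-feature step, GLCM statistics and HOG descriptors can be computed with scikit-image (the graycomatrix/graycoprops spelling applies to scikit-image >= 0.19); the random patch below stands in for a segmented tumour region:

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops, hog

    patch = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in patch

    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = {prop: graycoprops(glcm, prop).mean()
                  for prop in ("contrast", "homogeneity", "energy", "correlation")}

    hog_vec = hog(patch, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2))
    print(glcm_feats, hog_vec.shape)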

Classification of non-small cell lung cancer using one-dimensional convolutional neural network

  • Moitra, Dipanjan
  • Kr. Mandal, Rakesh
Expert Systems with Applications 2020 Journal Article, cited 0 times
Website
Non-Small Cell Lung Cancer (NSCLC) is a major lung cancer type. Proper diagnosis depends mainly on tumor staging and grading. Pathological prognosis often faces problems because of the limited availability of tissue samples. Machine learning methods may play a vital role in such cases. 2D and 3D Deep Neural Networks (DNNs) have been the predominant technology in this domain. Contemporary studies have tried to classify NSCLC tumors as benign or malignant. The application of 1D CNNs to automated staging and grading of NSCLC is not very frequent. The aim of the present study is to develop a 1D CNN model for automated staging and grading of NSCLC. The updated NSCLC Radiogenomics collection from The Cancer Imaging Archive (TCIA) was used in the study. The segmented tumor images were fed into a hybrid feature detection and extraction model (MSER-SURF). The extracted features were combined with the clinical TNM stage and histopathological grade information and fed into the 1D CNN model. The performance of the proposed CNN model was satisfactory. The accuracy and ROC-AUC score were higher than those of the other leading machine learning methods. The study also compared well against state-of-the-art studies. The proposed model shows that a 1D CNN is as useful in NSCLC prediction as a conventional 2D/3D CNN model. The model may be further refined by carrying out experiments with varied hyper-parameters. Further studies may be conducted by considering semi-supervised or unsupervised learning techniques.
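
A minimal Keras sketch of a 1D CNN over a flattened feature vector could look as follows; the feature length and class count are hypothetical, not the study's values:

    from tensorflow.keras import layers, models

    n_features, n_classes = 256, 4            # hypothetical sizes
    model = models.Sequential([
        layers.Input(shape=(n_features, 1)),  # feature vector as a 1D signal
        layers.Conv1D(32, 5, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, 5, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()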

Image fusion based lung nodule detection using structural similarity and MAX rule

  • Mohana, P
  • Venkatesan, P
International Journal of Advances in Signal and Image Sciences 2019 Journal Article, cited 0 times
Website
Uncontrolled cell growth in the lungs is the main cause of lung cancer, which reduces the ability to breathe. In this study, fusion of computed tomography (CT) and positron emission tomography (PET) lung images using their structural similarity is presented. The fused image has more information compared to the individual CT and PET lung images, which helps radiologists to make decisions quickly. Initially, the CT and PET images are divided into blocks of predefined size in an overlapping manner. The structural similarity between each block of CT and PET is computed for fusion. Image fusion is performed using a combination of structural similarity and the MAX rule. If the structural similarity between a CT and PET block is greater than a particular threshold, the MAX rule is applied; otherwise the pixel intensities in the CT image are used. A simple thresholding approach is employed to detect the lung nodule from the fused image. The qualitative analyses show that the fusion approach provides more information with accurate detection of lung nodules.
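
The block-wise SSIM/MAX fusion rule is simple enough to sketch directly; for brevity this version uses non-overlapping blocks and random stand-ins for registered CT and PET slices, and the block size and threshold are illustrative only:

    import numpy as np
    from skimage.metrics import structural_similarity

    def fuse_blocks(ct, pet, block=16, thresh=0.5):
        # MAX rule where the blocks are structurally similar; otherwise CT.
        fused = ct.copy()
        for i in range(0, ct.shape[0] - block + 1, block):
            for j in range(0, ct.shape[1] - block + 1, block):
                c = ct[i:i + block, j:j + block]
                p = pet[i:i + block, j:j + block]
                if structural_similarity(c, p, data_range=1.0) > thresh:
                    fused[i:i + block, j:j + block] = np.maximum(c, p)
        return fused

    ct = np.random.rand(128, 128).astype(np.float32)   # stand-in CT slice
    pet = np.random.rand(128, 128).astype(np.float32)  # stand-in PET slice
    fused = fuse_blocks(ct, pet)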

Handcrafted Deep-Feature-Based Brain Tumor Detection and Classification Using MRI Images

  • Mohan, P.
  • Easwaramoorthy, S. V.
  • Subramani, N.
  • Subramanian, M.
  • Meckanzi, S.
2022 Journal Article, cited 0 times
An abnormal growth of cells in the brain, often known as a brain tumor, has the potential to develop into cancer. Carcinogenesis of glial cells in the brain and spinal cord is the root cause of gliomas, which are the most prevalent type of primary brain tumor. After receiving a diagnosis of glioblastoma, it is anticipated that the average patient will have a survival time of less than 14 months. Magnetic resonance imaging (MRI) is a well-known non-invasive imaging technology that can detect brain tumors and gives a variety of tissue contrasts in each imaging modality. Until recently, only neuroradiologists were capable of performing the tedious and time-consuming task of manually segmenting and analyzing structural MRI scans of brain tumors, because neuroradiologists have specialized training in this area. The development of comprehensive and automatic segmentation methods for brain tumors will have a significant impact on both the diagnosis and treatment of brain tumors. It is now possible to recognize tumors in images because of developments in computer-aided diagnosis (CAD), machine learning (ML), and deep learning (DL) approaches. The purpose of this study is to develop, through the application of MRI data, an automated model for the detection and classification of brain tumors based on deep learning (DLBTDC-MRI). Using the DLBTDC-MRI method, brain tumors can be detected and characterized at various stages of their progression. Preprocessing, segmentation, feature extraction, and classification are all included in the DLBTDC-MRI methodology. The use of adaptive fuzzy filtering (AFF) as a preprocessing technique for images results in less noise and higher-quality MRI scans. A method referred to as "chicken swarm optimization" (CSO) was used to segment MRI images; it utilizes Tsallis entropy-based image segmentation to locate the affected parts of the brain. In addition to this, a Residual Network (ResNet) that combines handcrafted features with deep features was used to produce a meaningful collection of feature vectors. A classifier developed by combining DLBTDC-MRI and CSO can finally be used to diagnose brain tumors. To assess the enhanced performance of brain tumor categorization, a large number of simulations were run on the BRATS 2015 dataset. Based on the findings of these trials, the DLBTDC-MRI method appears to be superior to other contemporary procedures in many respects.

Predicting survival status of lung cancer patients using machine learning

  • Mohan, Aishwarya
2021 Thesis, cited 0 times
Website
The 5-year survival rate of patients with metastasized non-small cell lung cancer (NSCLC) who received chemotherapy was less than 5% (Kathryn C. Arbour, 2019). Our ability to provide the survival status of a patient, i.e., alive or deceased at any time in the future, is important from at least two standpoints: a) from a clinical standpoint, it enables clinicians to provide optimal delivery of healthcare, and b) from a personal standpoint, it provides the patient's family with opportunities to plan their life ahead and potentially cope with the emotional aspect of loss of life. In this thesis, we investigate different approaches for predicting the survival status of patients suffering from non-small cell lung cancer. In Chapter 2, we review the background of machine learning and related work in cancer prediction, followed by steps to follow before applying machine learning classifiers to the training dataset. In Chapter 3, we present the different classifiers on which our analysis will be performed, and later in the chapter we list evaluation metrics for measuring performance. In Chapter 4, the related dataset and results from different tests performed on the training data are discussed. In the last chapter, we conclude our findings for this study and present suggestions for future work.

Lung cancer classification with Convolutional Neural Network Architectures

  • Mohammed, Shivan H. M.
  • Çinar, Ahmet
Qubahan Academic Journal 2021 Journal Article, cited 0 times
Website
One of the most common malignant tumors in the world today is lung cancer, and it is the primary cause of death from cancer. With the continuous advancement of urbanization and industrialization, the problem of air pollution has become more and more serious. The best treatment period for lung cancer is the early stage. However, early-stage lung cancer often does not show any clinical symptoms and is difficult to detect. In this paper, lung nodule classification has been performed; the CT image data used are from the SPIE-AAPM Lung dataset. In recent years, deep learning (DL) has been a popular approach to the classification process. One of the DL approaches used here is transfer learning (TL), which eliminates the cost of training from scratch and allows deep networks to be trained with small amounts of training data. Nowadays, researchers have been trying various deep learning techniques to improve the efficiency of computer-aided diagnosis (CAD) systems with computed tomography in lung cancer screening. In this work, we implemented pre-trained CNNs including the AlexNet, ResNet18, GoogLeNet, and ResNet50 models. These networks are used for training and CT image classification. CNNs and TL are used to achieve high performance in lung cancer detection on CT images. The models are evaluated with metrics such as the confusion matrix, precision, recall, specificity, and F1-score.
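
The transfer-learning setup described above can be sketched with torchvision (the weights argument requires torchvision >= 0.13); the two-class head and frozen backbone are illustrative choices, not necessarily the paper's exact configuration:

    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    for p in model.parameters():
        p.requires_grad = False                    # freeze pretrained backbone
    model.fc = nn.Linear(model.fc.in_features, 2)  # new benign/malignant head

    x = torch.randn(4, 3, 224, 224)                # stand-in batch of CT slices
    print(model(x).shape)                          # torch.Size([4, 2])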

Quantifying T2-FLAIR Mismatch Using Geographically Weighted Regression and Predicting Molecular Status in Lower-Grade Gliomas

  • Mohammed, S.
  • Ravikumar, V.
  • Warner, E.
  • Patel, S. H.
  • Bakas, S.
  • Rao, A.
  • Jain, R.
AJNR Am J Neuroradiol 2022 Journal Article, cited 0 times
Website
BACKGROUND AND PURPOSE: The T2-FLAIR mismatch sign is a validated imaging sign of isocitrate dehydrogenase-mutant 1p/19q noncodeleted gliomas. It is identified by radiologists through visual inspection of preoperative MR imaging scans and has been shown to identify isocitrate dehydrogenase-mutant 1p/19q noncodeleted gliomas with a high positive predictive value. We have developed an approach to quantify the T2-FLAIR mismatch signature and use it to predict the molecular status of lower-grade gliomas. MATERIALS AND METHODS: We used multiparametric MR imaging scans and segmentation labels of 108 preoperative lower-grade glioma tumors from The Cancer Imaging Archive. Clinical information and T2-FLAIR mismatch sign labels were obtained from supplementary material of relevant publications. We adopted an objective analytic approach to estimate this sign through a geographically weighted regression and used the residuals for each case to construct a probability density function (serving as a residual signature). These functions were then analyzed using an appropriate statistical framework. RESULTS: We observed statistically significant (P value = .05) differences between the averages of residual signatures for the isocitrate dehydrogenase-mutant 1p/19q noncodeleted class of tumors versus other categories. Our classifier predicts these cases with an area under the curve (AUC) of 0.98 and high specificity and sensitivity. It also predicts the T2-FLAIR mismatch sign within these cases with an AUC of 0.93. CONCLUSIONS: On the basis of this retrospective study, we show that geographically weighted regression-based residual signatures are highly informative of the T2-FLAIR mismatch sign and can identify isocitrate dehydrogenase-mutation and 1p/19q codeletion status with high predictive power. The utility of the proposed quantification of the T2-FLAIR mismatch sign can be potentially validated through a prospective multi-institutional study.
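
The core idea, local regression with spatial (geographic) weights whose residuals are summarized as a density, can be sketched as follows; the synthetic voxel data, kernel bandwidth, and regression form are all assumptions for illustration:

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    coords = rng.uniform(0, 50, size=(500, 2))    # in-plane voxel positions
    t2 = rng.normal(0, 1, 500)
    flair = 0.8 * t2 + rng.normal(0, 0.3, 500)    # synthetic tumour intensities

    def gwr_residual(i, bandwidth=10.0):
        d2 = ((coords - coords[i]) ** 2).sum(axis=1)
        sw = np.sqrt(np.exp(-d2 / (2 * bandwidth ** 2)))  # sqrt of geo-weights
        X = np.column_stack([np.ones_like(t2), t2])
        beta, *_ = np.linalg.lstsq(X * sw[:, None], flair * sw, rcond=None)
        return flair[i] - (beta[0] + beta[1] * t2[i])

    residuals = np.array([gwr_residual(i) for i in range(len(t2))])
    signature = gaussian_kde(residuals)           # the case's residual signature
    print(signature(np.linspace(-1, 1, 5)))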

Tumor radiogenomics in gliomas with Bayesian layered variable selection

  • Mohammed, S.
  • Kurtek, S.
  • Bharath, K.
  • Rao, A.
  • Baladandayuthapani, V.
Med Image Anal 2023 Journal Article, cited 0 times
Website
We propose a statistical framework to analyze radiological magnetic resonance imaging (MRI) and genomic data to identify the underlying radiogenomic associations in lower grade gliomas (LGG). We devise a novel imaging phenotype by dividing the tumor region into concentric spherical layers that mimic the tumor evolution process. MRI data within each layer are represented by voxel-intensity-based probability density functions which capture the complete information about tumor heterogeneity. Under a Riemannian-geometric framework these densities are mapped to a vector of principal component scores which act as imaging phenotypes. Subsequently, we build Bayesian variable selection models for each layer with the imaging phenotypes as the response and the genomic markers as predictors. Our novel hierarchical prior formulation incorporates the interior-to-exterior structure of the layers, and the correlation between the genomic markers. We employ a computationally efficient Expectation-Maximization-based strategy for estimation. Simulation studies demonstrate the superior performance of our approach compared to other approaches. With a focus on the cancer driver genes in LGG, we discuss some biologically relevant findings. Genes implicated with survival and oncogenesis are identified as being associated with the spherical layers, which could potentially serve as early-stage diagnostic markers for disease monitoring, prior to routine invasive approaches. We provide an R package that can be used to deploy our framework to identify radiogenomic associations.
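
The layered imaging phenotype can be approximated with a distance transform: split the tumour mask into concentric interior-to-exterior shells and collect a voxel-intensity histogram per shell. The synthetic volume, mask, and layer count below are placeholders:

    import numpy as np
    from scipy import ndimage

    vol = np.random.rand(64, 64, 64)              # stand-in MRI volume
    mask = np.zeros_like(vol, dtype=bool)
    mask[16:48, 16:48, 16:48] = True              # stand-in tumour mask

    depth = ndimage.distance_transform_edt(mask)  # distance to tumour boundary
    n_layers = 4
    edges = np.linspace(0, depth.max(), n_layers + 1)
    layer_id = np.digitize(depth, edges[1:-1])    # 0 = outermost shell

    histograms = [np.histogram(vol[mask & (layer_id == k)],
                               bins=32, range=(0, 1), density=True)[0]
                  for k in range(n_layers)]
    print([h.shape for h in histograms])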

Refinement of ensemble strategy for acute lymphoblastic leukemia microscopic images using hybrid CNN-GRU-BiLSTM and MSVM classifier

  • Mohammed, Kamel K.
  • Hassanien, Aboul Ella
  • Afify, Heba M.
Neural Computing and Applications 2023 Journal Article, cited 0 times
Website
Acute lymphocytic leukemia (ALL) is a common serious cancer of white blood cells (WBC) that advances quickly and produces abnormal cells in the bone marrow. Cancerous cells associated with ALL lead to impairment of body systems. Microscopic examination of ALL in a blood sample is performed manually by hematologists, with many defects. Computer-aided leukemia image detection is used to avoid human visual recognition and to provide a more accurate diagnosis. This paper employs an ensemble strategy to detect ALL cells versus normal WBCs automatically, using three stages. Firstly, image pre-processing is applied to handle the unbalanced database through an oversampling process. Secondly, deep spatial features are generated using a convolutional neural network (CNN). At the same time, a gated recurrent unit (GRU)-bidirectional long short-term memory (BiLSTM) architecture is utilized to extract long-distance dependent features, or temporal features, to obtain active feature learning. Thirdly, a softmax function and a multiclass support vector machine (MSVM) classifier are used for the classification task. The proposed strategy classifies the C-NMC 2019 database into two categories by splitting the entire dataset into 90% for training and 10% for testing. The main motivation of this paper is the novelty of the proposed framework for the purposeful and accurate diagnosis of ALL images. The proposed CNN-GRU-BiLSTM-MSVM is simply stacked from existing tools; however, the empirical results on the C-NMC 2019 database show that the proposed framework is useful for the ALL image recognition problem compared to previous works. The DenseNet-201 model yielded an F1-score of 96.23% and an accuracy of 96.29% using the MSVM classifier on the test dataset. The findings show that the proposed strategy can be employed as a complementary diagnostic tool for ALL cells. Further, this proposed strategy should encourage researchers to augment rare databases, such as blood microscopic images, by creating powerful applications that combine machine learning with deep learning algorithms.

Deep MammoNet: Early Diagnosis of Breast Cancer Using Multi-layer Hierarchical Features of Deep Transfer Learned Convolutional Neural Network

  • Mohamed Aarif, K. O.
  • Sivakumar, P.
  • Mohamed Yousuff, Caffiyar
  • Mohammed Hashim, B. A.
Advanced Machine Learning Approaches in Cancer Prognosis 2021 Journal Article, cited 0 times
Website
A deep Convolutional Neural Network (CNN) comprises multiple convolutional layers which learn features from the input image at different levels of abstraction. In this work, we address the issue of improving the recognition accuracy of CNNs for classification of breast cancer from mammogram images. To achieve optimized classification, we propose multilayer hierarchical convolutional feature integration in a deep transfer-learned CNN. In a deep CNN, the last layer learns significant features that are highly invariant, but their spatial resolution is too coarse to localize the target exactly. In contrast, features from earlier layers offer more precise localization and hold more fine-grained spatial detail, but are less invariant. This observation suggests that reasoning with multiple layers of CNN features is of great importance for breast cancer detection from mammogram images. In this work, we propose to integrate the features extracted from an earlier layer and the last layer of a deep CNN to train the model and improve the classification accuracy of breast cancer detection in mammogram images. We also show that a consistent improvement in accuracy is obtained by using mammogram augmentation and different weight learning factors across different layers.

Multimodality annotated hepatocellular carcinoma data set including pre- and post-TACE with imaging segmentation

  • Moawad, Ahmed W.
  • Morshid, Ali
  • Khalaf, Ahmed M.
  • Elmohr, Mohab M.
  • Hazle, John D.
  • Fuentes, David
  • Badawy, Mohamed
  • Kaseb, Ahmed O.
  • Hassan, Manal
  • Mahvash, Armeen
  • Szklaruk, Janio
  • Qayyum, Aliyya
  • Abusaif, Abdelrahman
  • Bennett, William C.
  • Nolan, Tracy S.
  • Camp, Brittney
  • Elsayes, Khaled M.
Scientific data 2023 Journal Article, cited 0 times

Computed tomography image reconstruction using stacked U-Net

  • Mizusawa, S.
  • Sei, Y.
  • Orihara, R.
  • Ohsuga, A.
Comput Med Imaging Graph 2021 Journal Article, cited 0 times
Website
Since the development of deep learning methods, many researchers have focused on image quality improvement using convolutional neural networks. They have proved its effectiveness in noise reduction, single-image super-resolution, and segmentation. In this study, we apply stacked U-Net, a deep learning method, to X-ray computed tomography image reconstruction to generate high-quality images in a short time with a small number of projections. It is not easy to create highly accurate models because medical imaging offers few training images due to patients' privacy issues. Thus, we utilize various images from ImageNet, a widely known visual database. Results show that a cross-sectional image with a peak signal-to-noise ratio of 27.93 dB and a structural similarity of 0.886 is recovered for a 512×512 image using 360-degree rotation, 512 detectors, and 64 projections, with a processing time of 0.11 s on the GPU. Therefore, the proposed method has a shorter reconstruction time and better image quality than existing methods.

Trafne: A Training Framework for Non-expert Annotators with Auto Validation and Expert Feedback

  • Miyata, Shugo
  • Chang, Chia-Ming
  • Igarashi, Takeo
2022 Conference Proceedings, cited 0 times
Large-scale datasets play an important role in the application of deep learning methods to various practical tasks. Many crowdsourcing tools have been proposed for annotation tasks; however, these tasks are relatively easy. Non-obvious annotation tasks require professional knowledge (e.g., medical image annotation) and non-expert annotators need to be trained to perform such tasks. In this paper, we propose Trafne, a framework for the effective training of non-expert annotators by combining feedback from the system (auto validation) and human experts (expert validation). Subsequently, we present a prototype implementation designed for brain tumor image annotation. We perform a user study to evaluate the effectiveness of our framework compared to a traditional training method. The results demonstrate that our proposed approach can help non-expert annotators to complete a non-obvious annotation more accurately than the traditional method. In addition, we discuss the requirements of non-expert training on a non-obvious annotation and potential applications of the framework.

Volumetric brain tumour detection from MRI using visual saliency

  • Mitra, Somosmita
  • Banerjee, Subhashis
  • Hayashi, Yoichi
PLoS One 2017 Journal Article, cited 2 times
Website
Medical image processing has become a major player in the world of automatic tumour region detection and is tantamount to the incipient stages of computer aided design. Saliency detection is a crucial application of medical image processing, and serves in its potential aid to medical practitioners by making the affected area stand out in the foreground from the rest of the background image. The algorithm developed here is a new approach to the detection of saliency in a three dimensional multi channel MR image sequence for the glioblastoma multiforme (a form of malignant brain tumour). First we enhance the three channels, FLAIR (Fluid Attenuated Inversion Recovery), T2 and T1C (contrast enhanced with gadolinium) to generate a pseudo coloured RGB image. This is then converted to the CIE L*a*b* color space. Processing on cubes of sizes k = 4, 8, 16, the L*a*b* 3D image is then compressed into volumetric units; each representing the neighbourhood information of the surrounding 64 voxels for k = 4, 512 voxels for k = 8 and 4096 voxels for k = 16, respectively. The spatial distance of these voxels are then compared along the three major axes to generate the novel 3D saliency map of a 3D image, which unambiguously highlights the tumour region. The algorithm operates along the three major axes to maximise the computation efficiency while minimising loss of valuable 3D information. Thus the 3D multichannel MR image saliency detection algorithm is useful in generating a uniform and logistically correct 3D saliency map with pragmatic applicability in Computer Aided Detection (CADe). Assignment of uniform importance to all three axes proves to be an important factor in volumetric processing, which helps in noise reduction and reduces the possibility of compromising essential information. The effectiveness of the algorithm was evaluated over the BRATS MICCAI 2015 dataset having 274 glioma cases, consisting both of high grade and low grade GBM. The results were compared with that of the 2D saliency detection algorithm taken over the entire sequence of brain data. For all comparisons, the Area Under the receiver operator characteristic (ROC) Curve (AUC) has been found to be more than 0.99 ± 0.01 over various tumour types, structures and locations.

A Two-Stage Atrous Convolution Neural Network for Brain Tumor Segmentation and Survival Prediction

  • Miron, Radu
  • Albert, Ramona
  • Breaban, Mihaela
2021 Book Section, cited 0 times
Glioma is a type of heterogeneous tumor originating in the brain, characterized by the coexistence of multiple subregions with different phenotypic characteristics, which further determine heterogeneous profiles likely to respond variably to treatment. Identifying spatial variations of gliomas is necessary for targeted therapy. The current paper proposes a neural network composed of heterogeneous building blocks to identify the different histologic sub-regions of gliomas in multi-parametric MRIs and further extracts radiomic features to estimate a patient's prognosis. The model is evaluated on the BraTS 2020 dataset. Notes: 1. https://github.com/IBM/pytorch-large-model-support 2. https://github.com/maduriron/BraTS2020

Prostate Cancer Classifier based on Three-Dimensional Magnetic Resonance Imaging and Convolutional Neural Networks

  • Minda, Ana-Maria
  • Albu, Adriana
2023 Journal Article, cited 0 times
Website
The motivation for this research is the high worldwide incidence of prostate cancer. This article underlines how valuable medical imaging is, in association with artificial intelligence, for early detection of this medical condition. The diagnosis of a patient with prostate cancer is conventionally made based on multiple biopsies, histopathologic tests, and other procedures that are time consuming and directly dependent on the experience level of the radiologist. Deep learning algorithms reduce the investigation time and could help medical staff. This work proposes a binary classification algorithm which uses convolutional neural networks to predict whether a 3D MRI scan contains a malignant lesion or not. The provided result can be a starting point in the diagnosis phase. The investigation, however, should be finalized by a human expert.

Transcription elongation factors represent in vivo cancer dependencies in glioblastoma

  • Miller, Tyler E
  • Liau, Brian B
  • Wallace, Lisa C
  • Morton, Andrew R
  • Xie, Qi
  • Dixit, Deobrat
  • Factor, Daniel C
  • Kim, Leo J Y
  • Morrow, James J
  • Wu, Qiulian
  • Mack, Stephen C
  • Hubert, Christopher G
  • Gillespie, Shawn M
  • Flavahan, William A
  • Hoffmann, Thomas
  • Thummalapalli, Rohit
  • Hemann, Michael T
  • Paddison, Patrick J
  • Horbinski, Craig M
  • Zuber, Johannes
  • Scacheri, Peter C
  • Bernstein, Bradley E
  • Tesar, Paul J
  • Rich, Jeremy N
Nature 2017 Journal Article, cited 41 times
Website
Glioblastoma is a universally lethal cancer with a median survival time of approximately 15 months. Despite substantial efforts to define druggable targets, there are no therapeutic options that notably extend the lifespan of patients with glioblastoma. While previous work has largely focused on in vitro cellular models, here we demonstrate a more physiologically relevant approach to target discovery in glioblastoma. We adapted pooled RNA interference (RNAi) screening technology for use in orthotopic patient-derived xenograft models, creating a high-throughput negative-selection screening platform in a functional in vivo tumour microenvironment. Using this approach, we performed parallel in vivo and in vitro screens and discovered that the chromatin and transcriptional regulators needed for cell survival in vivo are non-overlapping with those required in vitro. We identified transcription pause-release and elongation factors as one set of in vivo-specific cancer dependencies, and determined that these factors are necessary for enhancer-mediated transcriptional adaptations that enable cells to survive the tumour microenvironment. Our lead hit, JMJD6, mediates the upregulation of in vivo stress and stimulus response pathways through enhancer-mediated transcriptional pause-release, promoting cell survival specifically in vivo. Targeting JMJD6 or other identified elongation factors extends survival in orthotopic xenograft mouse models, suggesting that targeting transcription elongation machinery may be an effective therapeutic strategy for glioblastoma. More broadly, this study demonstrates the power of in vivo phenotypic screening to identify new classes of 'cancer dependencies' not identified by previous in vitro approaches, and could supply new opportunities for therapeutic intervention.

Sybil: A Validated Deep Learning Model to Predict Future Lung Cancer Risk From a Single Low-Dose Chest Computed Tomography

  • Mikhael, P. G.
  • Wohlwend, J.
  • Yala, A.
  • Karstens, L.
  • Xiang, J.
  • Takigami, A. K.
  • Bourgouin, P. P.
  • Chan, P.
  • Mrah, S.
  • Amayri, W.
  • Juan, Y. H.
  • Yang, C. T.
  • Wan, Y. L.
  • Lin, G.
  • Sequist, L. V.
  • Fintelmann, F. J.
  • Barzilay, R.
J Clin Oncol 2023 Journal Article, cited 5 times
Website
PURPOSE: Low-dose computed tomography (LDCT) for lung cancer screening is effective, although most eligible people are not being screened. Tools that provide personalized future cancer risk assessment could focus approaches toward those most likely to benefit. We hypothesized that a deep learning model assessing the entire volumetric LDCT data could be built to predict individual risk without requiring additional demographic or clinical data. METHODS: We developed a model called Sybil using LDCTs from the National Lung Screening Trial (NLST). Sybil requires only one LDCT and does not require clinical data or radiologist annotations; it can run in real time in the background on a radiology reading station. Sybil was validated on three independent data sets: a heldout set of 6,282 LDCTs from NLST participants, 8,821 LDCTs from Massachusetts General Hospital (MGH), and 12,280 LDCTs from Chang Gung Memorial Hospital (CGMH, which included people with a range of smoking history including nonsmokers). RESULTS: Sybil achieved area under the receiver-operator curves for lung cancer prediction at 1 year of 0.92 (95% CI, 0.88 to 0.95) on NLST, 0.86 (95% CI, 0.82 to 0.90) on MGH, and 0.94 (95% CI, 0.91 to 1.00) on CGMH external validation sets. Concordance indices over 6 years were 0.75 (95% CI, 0.72 to 0.78), 0.81 (95% CI, 0.77 to 0.85), and 0.80 (95% CI, 0.75 to 0.86) for NLST, MGH, and CGMH, respectively. CONCLUSION: Sybil can accurately predict an individual's future lung cancer risk from a single LDCT scan to further enable personalized screening. Future study is required to understand Sybil's clinical applications. Our model and annotations are publicly available.

Improved Predictive Sparse Decomposition Method with Densenet for Prediction of Lung Cancer

  • Mienye, Ibomoiye Domor
  • Sun, Yanxia
  • Wang, Zenghui
International Journal of Computing 2020 Journal Article, cited 0 times
Website
Lung cancer is the second most common form of cancer in both men and women. It is responsible for at least 25% of all cancer-related deaths in the United States alone. Accurate and early diagnosis of this form of cancer can increase the rate of survival. Computed tomography (CT) imaging is one of the most accurate techniques for diagnosing the disease. In order to improve the classification accuracy of pulmonary lesions indicating lung cancer, this paper presents an improved method for training a densely connected convolutional network (DenseNet). The optimized setting ensures that code prediction error and reconstruction error within hidden layers are simultaneously minimized. To achieve this and improve the classification accuracy of the DenseNet, we propose an improved predictive sparse decomposition (PSD) approach for extracting sparse features from the medical images. The sparse decomposition is achieved by using a linear combination of basis functions over the L2 norm. The effect of dropout and hidden layer expansion on the classification accuracy of the DenseNet is also investigated. CT scans of human lung samples are obtained from The Cancer Imaging Archive (TCIA) hosted by the University of Arkansas for Medical Sciences (UAMS). The proposed method outperforms seven other neural network architectures and machine learning algorithms with a classification accuracy of 95%.

Improved Machine Learning Algorithms with Application to Medical Diagnosis

  • Mienye, Ibomoiye Domor
2021 Thesis, cited 0 times
Website
Medical data generated from hospitals are an increasing source of information for automatic medical diagnosis. These data contain latent patterns and correlations that can result in better diagnosis when appropriately processed. Most applications of machine learning (ML) to these patient records have focused on utilizing the ML algorithms directly, which usually results in suboptimal performance as most medical datasets are quite imbalanced. Also, labelling the enormous medical data is a challenging and expensive task. In order to solve these problems, recent research has focused on the development of improved ML methods, mainly preprocessing pipelines and feature learning methods. This thesis presents four machine learning approaches aimed at improving medical diagnosis performance using publicly available datasets. Firstly, a method was proposed to predict heart disease risk using an unsupervised sparse autoencoder (SAE) and an artificial neural network. Secondly, a method was developed by stacking multiple SAEs to achieve improved representation learning, combined with a softmax classifier utilized for the classification task. Thirdly, an approach was developed for the classification of pulmonary lesions indicating lung cancer, using an improved predictive sparse decomposition (PSD) method to achieve unsupervised feature learning and a densely connected convolutional network (DenseNet) for classification. Lastly, an enhanced ensemble learning method was developed to predict heart disease effectively. The proposed methods obtained better performance compared to other ML algorithms and some techniques available in recent literature. This research has also shown that ML algorithms tend to achieve improved performance when trained with relevant data. Also, the study further demonstrates the effectiveness of an enhanced ensemble learning method in disease prediction. This thesis also provides direction for future research.

Prediction of prostate cancer grade using fractal analysis of perfusion MRI: retrospective proof-of-principle study

  • Michallek, F.
  • Huisman, H.
  • Hamm, B.
  • Elezkurtaj, S.
  • Maxeiner, A.
  • Dewey, M.
Eur Radiol 2021 Journal Article, cited 1 times
Website
OBJECTIVES: Multiparametric MRI has high diagnostic accuracy for detecting prostate cancer, but non-invasive prediction of tumor grade remains challenging. Characterizing tumor perfusion by exploiting the fractal nature of vascular anatomy might elucidate the aggressive potential of a tumor. This study introduces the concept of fractal analysis for characterizing prostate cancer perfusion and reports about its usefulness for non-invasive prediction of tumor grade. METHODS: We retrospectively analyzed the openly available PROSTATEx dataset with 112 cancer foci in 99 patients. In all patients, histological grading groups specified by the International Society of Urological Pathology (ISUP) were obtained from in-bore MRI-guided biopsy. Fractal analysis of dynamic contrast-enhanced perfusion MRI sequences was performed, yielding fractal dimension (FD) as quantitative descriptor. Two-class and multiclass diagnostic accuracy was analyzed using area under the curve (AUC) receiver operating characteristic analysis, and optimal FD cutoffs were established. Additionally, we compared fractal analysis to conventional apparent diffusion coefficient (ADC) measurements. RESULTS: Fractal analysis of perfusion allowed accurate differentiation of non-significant (group 1) and clinically significant (groups 2-5) cancer with a sensitivity of 91% (confidence interval [CI]: 83-96%) and a specificity of 86% (CI: 73-94%). FD correlated linearly with ISUP groups (r^2 = 0.874, p < 0.001). Significant groupwise differences were obtained between low, intermediate, and high ISUP groups 1-4 (p ≤ 0.001) but not group 5 tumors. Fractal analysis of perfusion was significantly more reliable than ADC in predicting non-significant and clinically significant cancer (AUC_FD = 0.97 versus AUC_ADC = 0.77, p < 0.001). CONCLUSION: Fractal analysis of perfusion MRI accurately predicts prostate cancer grading in low-, intermediate-, and high-, but not highest-grade, tumors. KEY POINTS: * In 112 prostate carcinomas, fractal analysis of MR perfusion imaging accurately differentiated low-, intermediate-, and high-grade cancer (ISUP grade groups 1-4). * Fractal analysis detected clinically significant prostate cancer with a sensitivity of 91% (83-96%) and a specificity of 86% (73-94%). * Fractal dimension of perfusion at the tumor margin may provide an imaging biomarker to predict prostate cancer grading.
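
A generic box-counting estimate of fractal dimension, a stand-in for the perfusion fractal analysis above rather than the authors' implementation, can be written compactly:

    import numpy as np

    def box_counting_fd(mask):
        # Minkowski-Bouligand dimension of a 2D binary pattern: count the
        # occupied boxes at several scales and fit log N against log(1/s).
        sizes = [2, 4, 8, 16, 32]
        counts = []
        n = mask.shape[0]
        for s in sizes:
            trimmed = mask[:n - n % s, :n - n % s]
            view = trimmed.reshape(n // s, s, -1, s)
            counts.append(view.any(axis=(1, 3)).sum())
        slope, _ = np.polyfit(np.log(1 / np.array(sizes)), np.log(counts), 1)
        return slope

    rng = np.random.default_rng(1)
    pattern = rng.random((128, 128)) > 0.7  # stand-in enhancement pattern
    print(box_counting_fd(pattern))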

Accuracy of fractal analysis and PI-RADS assessment of prostate magnetic resonance imaging for prediction of cancer grade groups: a clinical validation study

  • Michallek, F.
  • Huisman, H.
  • Hamm, B.
  • Elezkurtaj, S.
  • Maxeiner, A.
  • Dewey, M.
Eur Radiol 2021 Journal Article, cited 1 times
Website
OBJECTIVES: Multiparametric MRI with Prostate Imaging Reporting and Data System (PI-RADS) assessment is sensitive but not specific for detecting clinically significant prostate cancer. This study validates the diagnostic accuracy of the recently suggested fractal dimension (FD) of perfusion for detecting clinically significant cancer. MATERIALS AND METHODS: Routine clinical MR imaging data, acquired at 3 T without an endorectal coil including dynamic contrast-enhanced sequences, of 72 prostate cancer foci in 64 patients were analyzed. In-bore MRI-guided biopsy with International Society of Urological Pathology (ISUP) grading served as reference standard. Previously established FD cutoffs for predicting tumor grade were compared to measurements of the apparent diffusion coefficient (25th percentile, ADC_25) and PI-RADS assessment with and without inclusion of the FD as separate criterion. RESULTS: Fractal analysis allowed prediction of ISUP grade groups 1 to 4 but not 5, with high agreement to the reference standard (kappa_FD = 0.88 [CI: 0.79-0.98]). Integrating fractal analysis into PI-RADS allowed a strong improvement in specificity and overall accuracy while maintaining high sensitivity for significant cancer detection (ISUP > 1; PI-RADS alone: sensitivity = 96%, specificity = 20%, area under the receiver operating curve [AUC] = 0.65; versus PI-RADS with fractal analysis: sensitivity = 95%, specificity = 88%, AUC = 0.92, p < 0.001). ADC_25 only differentiated low-grade group 1 from pooled higher-grade groups 2-5 (kappa_ADC = 0.36 [CI: 0.12-0.59]). Importantly, fractal analysis was significantly more reliable than ADC_25 in predicting non-significant and clinically significant cancer (AUC_FD = 0.96 versus AUC_ADC = 0.75, p < 0.001). Diagnostic accuracy was not significantly affected by zone location. CONCLUSIONS: Fractal analysis is accurate in noninvasively predicting tumor grades in prostate cancer and adds independent information when implemented into PI-RADS assessment. This opens the opportunity to individually adjust biopsy priority and method in individual patients. KEY POINTS: * Fractal analysis of perfusion is accurate in noninvasively predicting tumor grades in prostate cancer using dynamic contrast-enhanced sequences (kappa_FD = 0.88). * Including the fractal dimension into PI-RADS as a separate criterion improved specificity (from 20 to 88%) and overall accuracy (AUC from 0.86 to 0.96) while maintaining high sensitivity (96% versus 95%) for predicting clinically significant cancer. * Fractal analysis was significantly more reliable than ADC_25 in predicting clinically significant cancer (AUC_FD = 0.96 versus AUC_ADC = 0.75).

A Neck-Thyroid Phantom with Small Sizes of Thyroid Remnants for Postsurgical I-123 and I-131 SPECT/CT Imaging

  • Michael, K.
  • Hadjiconstanti, A.
  • Lontos, A.
  • Demosthenous, G.
  • Frangos, S.
  • Parpottas, Y.
2023 Journal Article, cited 0 times
Website
Post-surgical I-123 and I-131 SPECT/CT imaging can provide information on the presence and sizes of thyroid remnants and/or metastasis for an accurate re-staging of disease to apply an individualized radioiodine therapy. The purpose of this study was to develop and validate a neck-thyroid phantom with small sizes of thyroid remnants to be utilized for the optimization of post-surgical SPECT/CT imaging. 3D printing and molding techniques were used to develop the hollow human-shaped and -sized phantom which enclosed the trachea, esophagus, cervical spine, clavicle, and multiple detachable sections with different sizes of thyroid remnant in clinically relevant positions. CT images were acquired to evaluate the morphology of the phantom and the sizes of remnants. Triple-energy window scattered and attenuation corrected SPECT images were acquired for this phantom and for a modified RS-542 commercial solid neck-thyroid phantom. The response and sensitivity of the SPECT modality for different administered I-123 and I-131 activities within the equal-size remnants of both phantoms were calculated. When we compared the phantoms, using the same radiopharmaceutical and similar activities, we found that the measured sensitivities were comparable. In all cases, the I-123 counting rate was higher than the I-131 one. This phantom with capabilities to insert different small sizes of remnants and simulate different background-to-remnants activity ratios can be utilized to evaluate postsurgical thyroid SPECT/CT imaging procedures.

Deep learning-based quantification of temporalis muscle has prognostic value in patients with glioblastoma

  • Mi, E.
  • Mauricaite, R.
  • Pakzad-Shahabi, L.
  • Chen, J.
  • Ho, A.
  • Williams, M.
Br J Cancer 2022 Journal Article, cited 1 times
Website
BACKGROUND: Glioblastoma is the commonest malignant brain tumour. Sarcopenia is associated with worse cancer survival, but manually quantifying muscle on imaging is time-consuming. We present a deep learning-based system for quantification of the temporalis muscle, a surrogate for skeletal muscle mass, and assess its prognostic value in glioblastoma. METHODS: A neural network for temporalis segmentation was trained with 366 MRI head images from 132 patients from 4 different glioblastoma data sets and used to quantify muscle cross-sectional area (CSA). Association between temporalis CSA and survival was determined in 96 glioblastoma patients from internal and external data sets. RESULTS: The model achieved high segmentation accuracy (Dice coefficient 0.893). Median age was 55 and 58 years, and 75.6% and 64.7% were males in the in-house and TCGA-GBM data sets, respectively. CSA was an independently significant predictor for survival in both the in-house and TCGA-GBM data sets (HR 0.464, 95% CI 0.218-0.988, p = 0.046; HR 0.466, 95% CI 0.235-0.925, p = 0.029, respectively). CONCLUSIONS: Temporalis CSA is a prognostic marker in patients with glioblastoma, rapidly and accurately assessable with deep learning. We are the first to show that a head/neck muscle-derived sarcopenia metric generated using deep learning is associated with oncological outcomes, and one of the first to show deep learning-based muscle quantification has prognostic value in cancer.

Detection of Lung Cancer Nodule on CT scan Images by using Region Growing Method

  • Mhetre, Rajani R
  • Sache, Rukhsana G
International Journal of Current Trends in Engineering & Research 2016 Journal Article, cited 0 times
Website

Phase I trial of preoperative chemoradiation plus sorafenib for high-risk extremity soft tissue sarcomas with dynamic contrast-enhanced MRI correlates

  • Meyer, Janelle M
  • Perlewitz, Kelly S
  • Hayden, James B
  • Doung, Yee-Cheen
  • Hung, Arthur Y
  • Vetto, John T
  • Pommier, Rodney F
  • Mansoor, Atiya
  • Beckett, Brooke R
  • Tudorica, Alina
Clinical Cancer Research 2013 Journal Article, cited 41 times
Website

Domain adaptation for segmentation of critical structures for prostate cancer therapy

  • Meyer, A.
  • Mehrtash, A.
  • Rak, M.
  • Bashkanov, O.
  • Langbein, B.
  • Ziaei, A.
  • Kibel, A. S.
  • Tempany, C. M.
  • Hansen, C.
  • Tokuda, J.
2021 Journal Article, cited 0 times
Website
Preoperative assessment of the proximity of critical structures to the tumors is crucial in avoiding unnecessary damage during prostate cancer treatment. A patient-specific 3D anatomical model of those structures, namely the neurovascular bundles (NVB) and the external urethral sphincters (EUS), can enable physicians to perform such assessments intuitively. As a crucial step to generate a patient-specific anatomical model from preoperative MRI in a clinical routine, we propose a multi-class automatic segmentation based on an anisotropic convolutional network. Our specific challenge is to train the network model on a unique source dataset only available at a single clinical site and deploy it to another target site without sharing the original images or labels. As network models trained on data from a single source suffer from quality loss due to the domain shift, we propose a semi-supervised domain adaptation (DA) method to refine the model's performance in the target domain. Our DA method combines transfer learning (TL) and uncertainty-guided self-learning based on deep ensembles. Experiments on the segmentation of the prostate, NVB, and EUS show significant performance gain with the combination of those techniques compared to pure TL and the combination of TL with simple self-learning ([Formula: see text] for all structures using a Wilcoxon's signed-rank test). Results on a different task and data (pancreas CT segmentation) demonstrate our method's generic application capabilities. Our method has the advantage that it does not require any further data from the source domain, unlike the majority of recent domain adaptation strategies. This makes our method suitable for clinical applications, where the sharing of patient data is restricted.

Segmentation of Pulmonary Nodules in Computed Tomography Using a Regression Neural Network Approach and its Application to the Lung Image Database Consortium and Image Database Resource Initiative Dataset

  • Messay, Temesguen
  • Hardie, Russell C
  • Tuinstra, Timothy R
Medical Image Analysis 2015 Journal Article, cited 55 times
Website

Efficient Embedding Network for 3D Brain Tumor Segmentation

  • Messaoudi, Hicham
  • Belaid, Ahror
  • Allaoui, Mohamed Lamine
  • Zetout, Ahcene
  • Allili, Mohand Said
  • Tliba, Souhil
  • Ben Salem, Douraied
  • Conze, Pierre-Henri
2021 Book Section, cited 0 times
3D medical image processing with deep learning greatly suffers from a lack of data. Thus, studies carried out in this field are limited compared to works related to 2D natural image analysis, where very large datasets exist. As a result, powerful and efficient 2D convolutional neural networks have been developed and trained. In this paper, we investigate a way to transfer the performance of a two-dimensional classification network for the purpose of three-dimensional semantic segmentation of brain tumors. We propose an asymmetric U-Net network by incorporating the EfficientNet model as part of the encoding branch. As the input data is in 3D, the first layers of the encoder are devoted to the reduction of the third dimension in order to fit the input of the EfficientNet network. Experimental results on validation and test data from the BraTS 2020 challenge demonstrate that the proposed method achieve promising performance.

Computer-aided diagnosis of hepatocellular carcinoma fusing imaging and structured health data

  • Menegotto, A. B.
  • Becker, C. D. L.
  • Cazella, S. C.
Health Inf Sci Syst 2021 Journal Article, cited 0 times
Website
Introduction: Hepatocellular carcinoma is the prevalent primary liver cancer, a silent disease that killed 782,000 worldwide in 2018. Multimodal deep learning is the application of deep learning techniques, fusing more than one data modality as the model's input. Purpose: A computer-aided diagnosis system for hepatocellular carcinoma developed with multimodal deep learning approaches could use multiple data modalities as recommended by clinical guidelines, and enhance the robustness and the value of the second-opinion given to physicians. This article describes the process of creation and evaluation of an algorithm for computer-aided diagnosis of hepatocellular carcinoma developed with multimodal deep learning techniques fusing preprocessed computed-tomography images with structured data from patient Electronic Health Records. Results: The classification performance achieved by the proposed algorithm in the test dataset was: accuracy = 86.9%, precision = 89.6%, recall = 86.9% and F-Score = 86.7%. These classification performance metrics are closer to the state-of-the-art in this area and were achieved with data modalities which are cheaper than traditional Magnetic Resonance Imaging approaches, enabling the use of the proposed algorithm by low and mid-sized healthcare institutions. Conclusion: The classification performance achieved with the multimodal deep learning algorithm is higher than human specialists diagnostic performance using only CT for diagnosis. Even though the results are promising, the multimodal deep learning architecture used for hepatocellular carcinoma prediction needs more training and test processes using different datasets before the use of the proposed algorithm by physicians in real healthcare routines. The additional training aims to confirm the classification performance achieved and enhance the model's robustness.

More accurate and efficient segmentation of organs-at-risk in radiotherapy with Convolutional Neural Networks Cascades

  • Men, Kuo
  • Geng, Huaizhi
  • Cheng, Chingyun
  • Zhong, Haoyu
  • Huang, Mi
  • Fan, Yong
  • Plastaras, John P
  • Lin, Alexander
  • Xiao, Ying
Medical Physics 2018 Journal Article, cited 0 times
Website

Comparison of Automatic Seed Generation Methods for Breast Tumor Detection Using Region Growing Technique

  • Melouah, Ahlem
2015 Book Section, cited 7 times
Website

Database Acquisition for the Lung Cancer Computer Aided Diagnostic Systems

  • Meldo, Anna
  • Utkin, Lev
  • Lukashin, Aleksey
  • Muliukha, Vladimir
  • Zaborovsky, Vladimir
2019 Conference Paper, cited 0 times
Website
Most of the existing computer aided diagnostic (CAD) systems based on deep learning algorithms are similar from the point of view of data processing stages. The main typical stages are training data acquisition, pre-processing, segmentation and classification. Homogeneity of a training dataset structure and its completeness are very important for minimizing inaccuracies in the development of CAD systems. The main difficulties in medical training data acquisition are concerned with their heterogeneity and incompleteness. Another problem is the lack of a sufficiently large amount of data for training the deep neural networks that form the basis of CAD systems. In order to overcome these problems in lung cancer CAD systems, a new methodology of dataset acquisition is proposed, using as an example the database called LIRA, which has been applied to training the intellectual lung cancer CAD system Dr. AIzimov. One of the important peculiarities of the dataset LIRA is the morphological confirmation of diseases. Another peculiarity is taking into account and including "atypical" cases from the point of view of radiographic features. The database development is carried out in interdisciplinary collaboration between radiologists and the data scientists developing the CAD system.

Challenging Current Semi-supervised Anomaly Segmentation Methods for Brain MRI

  • Meissen, Felix
  • Kaissis, Georgios
  • Rueckert, Daniel
2022 Book Section, cited 0 times
In this work, we tackle the problem of Semi-Supervised Anomaly Segmentation (SAS) in Magnetic Resonance Images (MRI) of the brain, which is the task of automatically identifying pathologies in brain images. Our work challenges the effectiveness of current Machine Learning (ML) approaches in this application domain by showing that thresholding Fluid-attenuated inversion recovery (FLAIR) MR scans provides better anomaly segmentation maps than several different ML-based anomaly detection models. Specifically, our method achieves better Dice similarity coefficients and Precision-Recall curves than the competitors on various popular evaluation data sets for the segmentation of tumors and multiple sclerosis lesions. (Code available under: https://github.com/FeliMe/brain_sas_baseline)
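
The thresholding baseline the paper advocates is strikingly simple; here is a minimal sketch with placeholder data and an assumed threshold (the paper's preprocessing and exact value may differ):

    import numpy as np

    def threshold_segmentation(flair, thresh=0.85):
        # Voxels brighter than the threshold are declared anomalous.
        return flair > thresh

    def dice(pred, target, eps=1e-8):
        inter = np.logical_and(pred, target).sum()
        return 2 * inter / (pred.sum() + target.sum() + eps)

    flair = np.random.rand(64, 64, 64)   # stand-in normalized FLAIR volume
    gt = np.zeros_like(flair, dtype=bool)
    gt[30:40, 30:40, 30:40] = True       # stand-in lesion mask
    print(dice(threshold_segmentation(flair), gt))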

AutoSeg - Steering the Inductive Biases for Automatic Pathology Segmentation

  • Meissen, Felix
  • Kaissis, Georgios
  • Rueckert, Daniel
2021 Conference Paper, cited 0 times
Website
In medical imaging, un-, semi-, or self-supervised pathology detection is often approached with anomaly- or out-of-distribution detection methods, whose inductive biases are not intentionally directed towards detecting pathologies, and are therefore sub-optimal for this task. To tackle this problem, we propose AutoSeg, an engine that can generate diverse artificial anomalies that resemble the properties of real-world pathologies. Our method can accurately segment unseen artificial anomalies and outperforms existing methods for pathology detection on a challenging real-world dataset of Chest X-ray images. We experimentally evaluate our method on the Medical Out-of-Distribution Analysis Challenge 2021 (Code available under: https://github.com/FeliMe/autoseg).

Computer-aided diagnosis of prostate cancer using multiparametric MRI and clinical features: A patient-level classification framework

  • Mehta, P.
  • Antonelli, M.
  • Ahmed, H. U.
  • Emberton, M.
  • Punwani, S.
  • Ourselin, S.
Med Image Anal 2021 Journal Article, cited 1 times
Website
Computer-aided diagnosis (CAD) of prostate cancer (PCa) using multiparametric magnetic resonance imaging (mpMRI) is actively being investigated as a means to provide clinical decision support to radiologists. Typically, these systems are trained using lesion annotations. However, lesion annotations are expensive to obtain and inadequate for characterizing certain tumor types, e.g., diffuse tumors and MRI-invisible tumors. In this work, we introduce a novel patient-level classification framework, denoted PCF, that is trained using patient-level labels only. In PCF, features are extracted from three-dimensional mpMRI and derived parameter maps using convolutional neural networks and, subsequently, combined with clinical features by a multi-classifier support vector machine scheme. The output of PCF is a probability value that indicates whether a patient is harboring clinically significant PCa (Gleason score ≥3+4) or not. PCF achieved mean areas under the receiver operating characteristic curve of 0.79 and 0.86 on the PICTURE and PROSTATEx datasets, respectively, using five-fold cross-validation. Clinical evaluation over a temporally separated PICTURE dataset cohort demonstrated comparable sensitivity and specificity to an experienced radiologist. We envision PCF finding most utility as a second reader during routine diagnosis or as a triage tool to identify low-risk patients who do not require a clinical read.
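
To make the late-fusion idea tangible, the following sketch combines hypothetical CNN-derived image features with clinical features in a single support vector machine that outputs a patient-level probability; PCF's actual multi-classifier scheme and feature extraction are considerably more elaborate, so treat this as a sketch of the concept only.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_img = rng.random((100, 64))      # stand-in CNN features, one row per patient
X_clin = rng.random((100, 5))      # stand-in clinical features (age, PSA, ...)
y = rng.integers(0, 2, 100)        # clinically significant PCa or not

# Single SVM on concatenated features; a simplification of the multi-classifier scheme.
clf = SVC(kernel="rbf", probability=True)
clf.fit(np.hstack([X_img, X_clin]), y)
p = clf.predict_proba(np.hstack([X_img, X_clin]))[:, 1]   # patient-level probability
```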

Bolus arrival time and its effect on tissue characterization with dynamic contrast-enhanced magnetic resonance imaging

  • Mehrtash, Alireza
  • Gupta, Sandeep N
  • Shanbhag, Dattesh
  • Miller, James V
  • Kapur, Tina
  • Fennessy, Fiona M
  • Kikinis, Ron
  • Fedorov, Andriy
Journal of Medical Imaging 2016 Journal Article, cited 6 times
Website
Matching the bolus arrival time (BAT) of the arterial input function (AIF) and tissue residue function (TRF) is necessary for accurate pharmacokinetic (PK) modeling of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). We investigated the sensitivity of the volume transfer constant (Ktrans) and the extravascular extracellular volume fraction (ve) to BAT and compared the results of four automatic BAT measurement methods in characterization of prostate and breast cancers. Variation in the delay between AIF and TRF resulted in a monotonic change in Ktrans and ve values. The results of the automatic BAT estimators for clinical data were all comparable except for one BAT estimation method. Our results indicate that inaccuracies in BAT measurement can lead to variability among DCE-MRI PK model parameters, diminish the quality of model fit, and produce fewer valid voxels in a region of interest. Although the selection of the BAT method did not affect the direction of change in the treatment assessment cohort, we suggest that BAT measurement methods must be used consistently in the course of longitudinal studies to control measurement variability.
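
The underlying forward model can be written down compactly. Below is a sketch of the standard Tofts model with an explicit BAT-shift term; fitting Ktrans and ve to curves generated with a deliberately shifted AIF reproduces the kind of parameter bias the study quantifies. The shift handling here is an illustrative simplification.

```python
import numpy as np

def tofts_tissue_curve(t, aif, ktrans, ve, bat_shift=0.0):
    """Standard Tofts model: Ct(t) = Ktrans * conv(Cp, exp(-kep*t)) with
    kep = Ktrans/ve. `bat_shift` (in the same units as t) delays the AIF
    relative to the tissue curve to mimic a bolus-arrival-time mismatch."""
    dt = t[1] - t[0]
    cp = np.interp(t - bat_shift, t, aif, left=0.0)   # time-shifted AIF
    kep = ktrans / ve
    irf = np.exp(-kep * t)                            # impulse response function
    return ktrans * np.convolve(cp, irf)[: len(t)] * dt
```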

Content-Based Image Retrieval System for Pulmonary Nodules Using Optimal Feature Sets and Class Membership-Based Retrieval

  • Mehre, Shrikant A
  • Dhara, Ashis Kumar
  • Garg, Mandeep
  • Kalra, Naveen
  • Khandelwal, Niranjan
  • Mukhopadhyay, Sudipta
2018 Journal Article, cited 0 times
Website

A Cascaded Deep Learning-Based Artificial Intelligence Algorithm for Automated Lesion Detection and Classification on Biparametric Prostate Magnetic Resonance Imaging

  • Mehralivand, Sherif
  • Yang, Dong
  • Harmon, Stephanie A
  • Xu, Daguang
  • Xu, Ziyue
  • Roth, Holger
  • Masoudi, Samira
  • Sanford, Thomas H
  • Kesani, Deepak
  • Lay, Nathan S
  • Merino, Maria J
  • Wood, Bradford J
  • Pinto, Peter A
  • Choyke, Peter L
  • Turkbey, Baris
Acad Radiol 2021 Journal Article, cited 0 times
Website
RATIONALE AND OBJECTIVES: Prostate MRI improves detection of clinically significant prostate cancer; however, its diagnostic performance has wide variation. Artificial intelligence (AI) has the potential to assist radiologists in the detection and classification of prostatic lesions. Herein, we aimed to develop and test a cascaded deep learning detection and classification system trained on biparametric prostate MRI using PI-RADS for assisting radiologists during prostate MRI read out. MATERIALS AND METHODS: T2-weighted and diffusion-weighted (ADC maps, high b value DWI) MRI scans obtained at 3 Tesla from two institutions (n = 1043 in-house and n = 347 ProstateX, respectively), acquired between 2015 and 2019, were used for model training, validation, and testing. All scans were retrospectively reevaluated by one radiologist. Suspicious lesions were contoured and assigned a PI-RADS category. A 3D U-Net-based deep neural network was used to train an algorithm for automated detection and segmentation of prostate MRI lesions. Two 3D residual neural networks were used for a 4-class classification task to predict PI-RADS categories 2 to 5 and BPH. Training and validation used 89% of the data (n = 1290 scans) with 5-fold cross-validation; the remaining 11% (n = 150 scans) were used for independent testing. Algorithm performance at the lesion level was assessed using sensitivity, positive predictive value (PPV), false discovery rate (FDR), classification accuracy, and Dice similarity coefficient (DSC). Additional analysis was conducted to compare the AI algorithm's lesion detection performance with targeted biopsy results. RESULTS: Median age was 66 years (IQR 60-71) and median PSA was 6.7 ng/ml (IQR 4.7-9.9) in the in-house cohort. In the independent test set, the algorithm correctly detected 111 of 198 lesions, yielding 56.1% (49.3%-62.6%) sensitivity. PPV was 62.7% (95% CI 54.7%-70.7%) with an FDR of 37.3% (95% CI 29.3%-45.3%). Of 79 true positive lesions, 82.3% were tumor positive at targeted biopsy, whereas of 57 false negative lesions, 50.9% were benign at targeted biopsy. Median DSC for lesion segmentation was 0.359. Overall PI-RADS classification accuracy was 30.8% (95% CI 24.6%-37.8%). CONCLUSION: Our cascaded U-Net, residual network architecture can detect and classify cancer-suspicious lesions at prostate MRI with good detection and reasonable classification performance metrics.
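
The lesion-level metrics reported above are simple functions of the true positive, false positive, and false negative counts; the sketch below reproduces the test-set numbers, where the false positive count (66) is inferred from the reported PPV rather than stated in the abstract.

```python
def lesion_detection_metrics(tp: int, fp: int, fn: int):
    """Lesion-level sensitivity, PPV, and FDR as used in the study."""
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    return {"sensitivity": sensitivity, "ppv": ppv, "fdr": 1.0 - ppv}

# Reported: 111 of 198 lesions detected -> 56.1% sensitivity, 62.7% PPV.
# fp=66 is back-calculated from PPV = 111 / (111 + 66).
print(lesion_detection_metrics(tp=111, fp=66, fn=87))
```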

Deep learning-based artificial intelligence for prostate cancer detection at biparametric MRI

  • Mehralivand, S.
  • Yang, D.
  • Harmon, S. A.
  • Xu, D.
  • Xu, Z.
  • Roth, H.
  • Masoudi, S.
  • Kesani, D.
  • Lay, N.
  • Merino, M. J.
  • Wood, B. J.
  • Pinto, P. A.
  • Choyke, P. L.
  • Turkbey, B.
2022 Journal Article, cited 0 times
Website
PURPOSE: To present a fully automated DL-based prostate cancer detection system for prostate MRI. METHODS: MRI scans from two institutions were used for algorithm training, validation, and testing. MRI-visible lesions were contoured by an experienced radiologist. All lesions were biopsied using MRI-TRUS guidance. Lesion masks and histopathological results were used as ground truth labels to train UNet and AH-Net architectures for prostate cancer lesion detection and segmentation. The algorithm was trained to detect any prostate cancer ≥ ISUP 1. Detection sensitivity, positive predictive value, and mean number of false positive lesions per patient were used as performance metrics. RESULTS: 525 patients were included for training, validation, and testing of the algorithm. The dataset was split into training (n = 368, 70%), validation (n = 79, 15%), and test (n = 78, 15%) cohorts. Dice coefficients in the training and validation sets were 0.403 and 0.307, respectively, for the AH-Net model, compared to 0.372 and 0.287, respectively, for the UNet model. In the validation set, detection sensitivity was 70.9%, PPV was 35.5%, and the mean number of false positive lesions per patient was 1.41 (range 0-6) for the UNet model, compared to 74.4% detection sensitivity, 47.8% PPV, and 0.87 (range 0-5) false positive lesions per patient for the AH-Net model. In the test set, detection sensitivity for UNet was 72.8% compared to 63.0% for AH-Net, and the mean numbers of false positive lesions per patient were 1.90 (range 0-7) and 1.40 (range 0-6) for the UNet and AH-Net models, respectively. CONCLUSION: We developed a DL-based AI approach which predicts prostate cancer lesions at biparametric MRI with reasonable performance metrics. While false positive lesion calls remain a challenge for AI-assisted detection algorithms, this system can be utilized as an adjunct tool by radiologists.

Classification of 1p/19q Status in Low-Grade Gliomas: Experiments with Radiomic Features and Ensemble-Based Machine Learning Methods

  • Medeiros, Tony Alexandre
  • Saraiva Junior, Raimundo Guimarães
  • Cassia, Guilherme de Souza e
  • Nascimento, Francisco Assis de Oliveira
  • Carvalho, João Luiz Azevedo de
2023 Journal Article, cited 0 times
Gliomas comprise the vast majority of all malignant brain tumors. Low-grade glioma patients with combined whole-arm losses of 1p and 19q chromosomes were shown to have significantly better overall survival rates compared to non-deleted patients. This work evaluates several approaches for assessment of 1p/19q status from T2-weighted magnetic resonance images, using radiomics and machine learning. Experiments were performed using images from a public database (102 codeleted, 57 non-deleted). We experimented with sets of 68 and 100 radiomic features, and with several classifiers, including support vector machine, k-nearest neighbors, stochastic gradient descent, logistic regression, decision tree, Gaussian naive Bayes, and linear discriminant analysis. We also experimented with several ensemble-based methods, including four boosting-based classifiers, random forest, extra-trees, and bagging. The performance of these methods was compared using various metrics. Our best results were achieved using a bagging ensemble estimator based on the decision tree classifier, using only texture-based radiomics features. Compared to other works that used the same database, this approach provided higher sensitivity. It also achieved higher sensitivity than that provided by neurosurgeons and neuroradiologists analyzing the same images. We also show that including radiomic features associated with first order statistics and shape does not improve the performance of the classifiers, and in many cases worsens it. The molecular assessment of brain tumors through biopsies is an invasive procedure, and is subject to sampling errors. Therefore, the techniques presented in this work have strong potential for aiding in better clinical, surgical, and therapeutic decision-making.

Demystifying the results of RTOG 0617: Identification of dose sensitive cardiac sub-regions associated with overall survival

  • McWilliam, A.
  • Abravan, A.
  • Banfill, K.
  • Faivre-Finn, C.
  • van Herk, M.
2023 Journal Article, cited 0 times
Website
INTRODUCTION: The RTOG 0617 trial reported worse survival for patients with lung cancer treated in the high-dose (74 Gy) arm. In multivariable models, radiation level and whole heart volumetric dose parameters were associated with survival. In this work, we consider heart sub-regions to explain the observed survival difference between radiation levels. METHODS: Voxel-based analysis identified anatomical regions where dose was associated with survival. Bootstrapping clinical and dosimetric variables into an elastic-net model selected variables associated with survival. Multivariable Cox regression survival models assessed the significance of dose to the heart sub-region, compared to whole heart v5 and v30. Finally, trial outcome was assessed following propensity score matching of patients on lung dose, heart sub-region dose, and tumour volume. RESULTS: 458 patients were eligible for voxel-based analysis. A significant region (p<0.001) was identified at the base of the heart. Bootstrapping selected mean lung dose, radiation level, log tumour volume, and heart region dose. The multivariable Cox model showed dose to the heart region (p=0.02) and tumour volume (p=0.03) significantly associated with survival; radiation level was not significant (p=0.07). Models showed whole heart v5 and v30 were not associated with survival, with radiation level significant (p<0.05). In the matched cohort, no significant survival difference was seen between radiation levels. CONCLUSION: Dose to the base of the heart is associated with overall survival, partly removing the radiation-level effect and indicating that the worse survival in the high-dose arm is due in part to heart sub-region dose. By defining a heart avoidance region, future dose escalation trials may be feasible.
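
A multivariable Cox model of the kind described can be fitted in a few lines with the lifelines package; the toy data and column names below (e.g. heart_base_dose for the mean dose to the identified sub-region) are hypothetical placeholders, not the trial data.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy data; columns are hypothetical stand-ins for the paper's covariates.
df = pd.DataFrame({
    "time": [14.2, 8.1, 30.5, 22.0, 5.4, 18.7],   # survival time (months)
    "death": [1, 1, 0, 1, 1, 0],                  # event indicator (0 = censored)
    "heart_base_dose": [22.0, 35.5, 10.1, 28.4, 40.2, 12.9],
    "log_tumour_volume": [4.1, 5.0, 3.2, 4.6, 5.3, 3.8],
    "mean_lung_dose": [15.3, 18.9, 12.0, 16.7, 20.1, 13.5],
})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="death")
cph.print_summary()   # hazard ratio and p-value per covariate
```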

Determining the variability of lesion size measurements from CT patient data sets acquired under “no change” conditions

  • McNitt-Gray, Michael F
  • Kim, Grace Hyun
  • Zhao, Binsheng
  • Schwartz, Lawrence H
  • Clunie, David
  • Cohen, Kristin
  • Petrick, Nicholas
  • Fenimore, Charles
  • Lu, ZQ John
  • Buckler, Andrew J
Translational Oncology 2015 Journal Article, cited 0 times

Triplanar Ensemble of 3D-to-2D CNNs with Label-Uncertainty for Brain Tumor Segmentation

  • McKinley, Richard
  • Rebsamen, Michael
  • Meier, Raphael
  • Wiest, Roland
2020 Book Section, cited 0 times
We introduce a modification of our previous 3D-to-2D fully convolutional architecture, DeepSCAN, replacing batch normalization with instance normalization, and adding a lightweight local attention mechanism. These networks are trained using a previously described loss function which models label noise and uncertainty. We present results on the validation dataset of the Multimodal Brain Tumor Segmentation Challenge 2019.
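
The batch-to-instance normalization swap mentioned above is mechanical in PyTorch; here is a sketch (the affine setting is an assumption, and the authors' DeepSCAN code may differ):

```python
import torch.nn as nn

def bn_to_in(module: nn.Module) -> nn.Module:
    """Recursively replace BatchNorm2d layers with InstanceNorm2d, mirroring
    the DeepSCAN modification described above."""
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            setattr(module, name, nn.InstanceNorm2d(child.num_features, affine=True))
        else:
            bn_to_in(child)   # recurse into nested submodules
    return module
```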

Uncertainty-Driven Refinement of Tumor-Core Segmentation Using 3D-to-2D Networks with Label Uncertainty

  • McKinley, Richard
  • Rebsamen, Micheal
  • Dätwyler, Katrin
  • Meier, Raphael
  • Radojewski, Piotr
  • Wiest, Roland
2021 Book Section, cited 0 times
The BraTS dataset contains a mixture of high-grade and low-grade gliomas, which have a rather different appearance: previous studies have shown that performance can be improved by separated training on low-grade gliomas (LGGs) and high-grade gliomas (HGGs), but in practice this information is not available at test time to decide which model to use. By contrast with HGGs, LGGs often present no sharp boundary between the tumor core and the surrounding edema, but rather a gradual reduction of tumor-cell density. Utilizing our 3D-to-2D fully convolutional architecture, DeepSCAN, which ranked highly in the 2019 BraTS challenge and was trained using an uncertainty-aware loss, we separate cases into those with a confidently segmented core, and those with a vaguely segmented or missing core. Since by assumption every tumor has a core, we reduce the threshold for classification of core tissue in those cases where the core, as segmented by the classifier, is vaguely defined or missing. We then predict survival of high-grade glioma patients using a fusion of linear regression and random forest classification, based on age, number of distinct tumor components, and number of distinct tumor cores. We present results on the validation dataset of the Multimodal Brain Tumor Segmentation Challenge 2020 (segmentation and uncertainty challenge), and on the testing set, where the method achieved 4th place in Segmentation, 1st place in uncertainty estimation, and 1st place in Survival prediction.

Using neural networks to extend cropped medical images for deformable registration among images with differing scan extents

  • McKenzie, E. M.
  • Tong, N.
  • Ruan, D.
  • Cao, M.
  • Chin, R. K.
  • Sheng, K.
Med Phys 2021 Journal Article, cited 1 times
Website
PURPOSE: Missing or discrepant imaging volume is a common challenge in deformable image registration (DIR). To minimize the adverse impact, we train a neural network to synthesize cropped portions of head and neck CTs and then test its use in DIR. METHODS: Using a training dataset of 409 head and neck CTs, we trained a generative adversarial network to take in a cropped 3D image and output an image with synthesized anatomy in the cropped region. The network used a 3D U-Net generator along with Visual Geometry Group (VGG) deep feature losses. To test our technique, for each of the 53 test volumes, we used Elastix to deformably register combinations of a randomly cropped, full, and synthetically full volume to a single cropped, full, and synthetically full target volume. We additionally tested our method's robustness to crop extent by progressively increasing the amount of cropping, synthesizing the missing anatomy using our network, and then performing the same registration combinations. Registration performance was measured using the 95% Hausdorff distance across 16 contours. RESULTS: We successfully trained a network to synthesize missing anatomy in superiorly and inferiorly cropped images. The network can estimate large regions in an incomplete image, far from the cropping boundary. Registration using our estimated full images was not significantly different from registration using the original full images. The average contour matching error for full image registration was 9.9 mm, whereas our method gave 11.6, 12.1, and 13.6 mm for synthesized-to-full, full-to-synthesized, and synthesized-to-synthesized registrations, respectively. In comparison, registration using the cropped images had errors of 31.7 mm and higher. Plotting the registered image contour error as a function of initial preregistered error shows that our method is robust to registration difficulty. Synthesized-to-full registration was statistically independent of cropping extent up to 18.7 cm of superior cropping. Synthesized-to-synthesized registration was nearly independent, with a -0.04 mm change in average contour error for every additional millimeter of cropping. CONCLUSIONS: Differing or inadequate scan extent is a major cause of DIR inaccuracies. We address this challenge by training a neural network to complete cropped 3D images. We show that with image completion, this source of DIR inaccuracy is eliminated, and the method is robust to varying crop extent.
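
The evaluation metric used here, the 95% Hausdorff distance, can be computed from two contour point clouds as the 95th percentile of the symmetric surface distances; a minimal sketch:

```python
import numpy as np
from scipy.spatial import cKDTree

def hd95(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """95% Hausdorff distance between two contours given as (N, 3) point sets."""
    d_ab = cKDTree(points_b).query(points_a)[0]   # A -> B surface distances
    d_ba = cKDTree(points_a).query(points_b)[0]   # B -> A surface distances
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))
```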

A Neural Network Approach to Deformable Image Registration

  • McKenzie, Elizabeth MaryAnn
2021 Thesis, cited 0 times
Website
Deformable image registration (DIR) is an important component of a patient's radiation therapy treatment. During the planning stage it combines complementary information from different imaging modalities and time points. During treatment, it aligns the patient to a reproducible position for accurate dose delivery. As the treatment progresses, it can inform clinicians of important changes in anatomy which trigger plan adjustment. And finally, after the treatment is complete, registering images at subsequent time points can help to monitor the patient's health. The body's natural non-rigid motion makes DIR a complex challenge. Recently neural networks have shown impressive improvements in image processing and have been leveraged for DIR tasks. This thesis is a compilation of neural network-based approaches addressing lingering issues in medical DIR, namely 1) multi-modality registration, 2) registration with different scan extents, and 3) modeling large motion in registration. For the first task we employed a cycle-consistent generative adversarial network to translate images in the MRI domain to the CT domain, such that the moving and target images were in a common domain. DIR could then proceed as a synthetically bridged mono-modality registration. The second task used advances in network-based inpainting to artificially extend images beyond their scan extent. The third task leveraged axial self-attention networks' ability to learn long-range interactions to predict the deformation in the presence of large motion. For all these studies we used images from the head and neck, which exhibit all of these challenges, although the results can be generalized to other parts of the anatomy. The results of our experiments yielded networks that showed significant improvements in multi-modal DIR relative to traditional methods. We also produced a network which can successfully predict missing tissue and demonstrated a DIR workflow that is independent of scan length. Finally, we trained a network whose accuracy balances large and small motion prediction, and which opens the door to non-convolution-based DIR. By leveraging the power of artificial intelligence, we demonstrate a new paradigm in deformable image registration. Neural networks learn new patterns and connections in imaging data which go beyond the hand-crafted features of traditional image processing. This thesis shows how each step of registration, from image pre-processing to the registration itself, can benefit from this exciting and cutting-edge approach.

2D Dense-UNet: A Clinically Valid Approach to Automated Glioma Segmentation

  • McHugh, Hugh
  • Talou, Gonzalo Maso
  • Wang, Alan
2021 Book Section, cited 0 times
Brain tumour segmentation is a requirement of many quantitative MRI analyses involving glioma. This paper argues that 2D slice-wise approaches to brain tumour segmentation may be more compatible with current MRI acquisition protocols than 3D methods because clinical MRI is most commonly a slice-based modality. A 2D Dense-UNet segmentation model was trained on the BraTS 2020 dataset. Mean Dice values achieved on the test dataset were: 0.859 (WT), 0.788 (TC) and 0.766 (ET). Median test data Dice values were: 0.902 (WT), 0.887 (TC) and 0.823 (ET). Results were comparable to previous high performing BraTS entries. 2D segmentation may have advantages over 3D methods in clinical MRI datasets where volumetric sequences are not universally available.
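
For reference, the Dice similarity coefficient reported above is computed from binary masks as twice the overlap divided by the total foreground:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)
```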

Equipment to address infrastructure and human resource challenges for radiotherapy in low-resource settings

  • McCarroll, Rachel
2018 Thesis, cited 0 times
Website
Millions of people in low- and middle-income countries (LMICs) are without access to radiation therapy, and as rates of population growth in these regions increase and lifestyle factors indicative of cancer become more prevalent, the cancer burden will only rise. There are a multitude of reasons for this lack of access, but two themes among them are the lack of affordable and reliable teletherapy units and insufficient properly trained staff to deliver high quality care. The purpose of this work was to investigate two proposed efforts to improve access to radiotherapy in low-resource areas: an upright radiotherapy chair (to facilitate low-cost treatment devices) and a fully automated treatment planning strategy. A fixed-beam patient treatment device would allow for reduced upfront and ongoing costs of teletherapy machines. The enabling technology for such a device is the immobilization chair. A rotating seated patient not only allows for a low-cost fixed treatment machine but also has dosimetric and comfort advantages. We examined the inter- and intra-fraction setup reproducibility and showed that errors are less than 3 mm, similar to reports for the supine position. The head-and-neck treatment site, one of the most challenging for treatment planning, greatly benefits from the use of advanced treatment planning strategies. These strategies, however, require time-consuming normal tissue and target contouring and complex plan optimization. An automated treatment planning approach could reduce the number of additional medical physicists (the primary treatment planners) needed in LMICs by up to half. We used in-house algorithms, including multi-atlas contouring and quality assurance checks, combined with tools in the Eclipse Treatment Planning System®, to automate every step of the treatment planning process for head-and-neck cancers. Requiring only the patient CT scan, patient details including dose and fractionation, and contours of the gross tumor volume, high quality treatment plans can be created in less than 40 minutes.

Quantitative Multiparametric MRI Features and PTEN Expression of Peripheral Zone Prostate Cancer: A Pilot Study

  • McCann, Stephanie M
  • Jiang, Yulei
  • Fan, Xiaobing
  • Wang, Jianing
  • Antic, Tatjana
  • Prior, Fred
  • VanderWeele, David
  • Oto, Aytekin
American Journal of Roentgenology 2016 Journal Article, cited 11 times
Website

Homomorphic-Encrypted Volume Rendering

  • Mazza, Sebastian
  • Patel, Daniel
  • Viola, Ivan
IEEE Trans Vis Comput Graph 2020 Journal Article, cited 0 times
Website
Computationally demanding tasks are typically calculated in dedicated data centers, and real-time visualizations also follow this trend. Some rendering tasks, however, require the highest level of confidentiality so that no other party, besides the owner, can read or see the sensitive data. Here we present a direct volume rendering approach that performs volume rendering directly on encrypted volume data by using the homomorphic Paillier encryption algorithm. This approach ensures that the volume data and rendered image are uninterpretable to the rendering server. Our volume rendering pipeline introduces novel approaches for encrypted-data compositing, interpolation, and opacity modulation, as well as simple transfer function design, where each of these routines maintains the highest level of privacy. We present performance and memory overhead analysis that is associated with our privacy-preserving scheme. Our approach is open and secure by design, as opposed to secure through obscurity. Owners of the data only have to keep their secure key confidential to guarantee the privacy of their volume data and the rendered images. Our work is, to our knowledge, the first privacy-preserving remote volume-rendering approach that does not require that any server involved be trustworthy; even in cases when the server is compromised, no sensitive data will be leaked to a foreign party.
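
The key enabler is that the Paillier cryptosystem is additively homomorphic: the server can add ciphertexts and scale them by plaintext constants, which is exactly the weighted-sum form of ray compositing. Below is a toy sketch using the python-paillier (phe) package with hypothetical compositing weights; the paper's actual compositing, interpolation, and opacity handling are substantially more involved.

```python
# pip install phe  (python-paillier)
from phe import paillier

pub, priv = paillier.generate_paillier_keypair(n_length=2048)

# Client side: encrypt voxel samples along a ray; server never sees plaintext.
voxels = [0.12, 0.55, 0.30, 0.91]
enc_voxels = [pub.encrypt(v) for v in voxels]

# Server side: weighted sum with *plaintext* weights, using only the additive
# homomorphism (Enc(a) + Enc(b), k * Enc(a)). Weights are hypothetical.
weights = [0.4, 0.3, 0.2, 0.1]
enc_pixel = sum(w * ev for w, ev in zip(weights, enc_voxels))

# Client side: only the key owner can decrypt the composited pixel value.
print(priv.decrypt(enc_pixel))   # ~ sum(w * v) = 0.364
```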

Computer-extracted MR imaging features are associated with survival in glioblastoma patients

  • Mazurowski, Maciej A
  • Zhang, Jing
  • Peters, Katherine B
  • Hobbs, Hasan
Journal of Neuro-Oncology 2014 Journal Article, cited 33 times
Website
Automatic survival prognosis in glioblastoma (GBM) could result in improved treatment planning for the patient. The purpose of this research is to investigate the association of survival in GBM patients with tumor features in pre-operative magnetic resonance (MR) images assessed using a fully automatic computer algorithm. MR imaging data for 68 patients from two US institutions were used in this study. The images were obtained from the Cancer Imaging Archive. A fully automatic computer vision algorithm was applied to segment the images and extract eight imaging features from the MRI studies. The features included tumor side, proportion of enhancing tumor, proportion of necrosis, T1/FLAIR ratio, major axis length, minor axis length, tumor volume, and thickness of enhancing margin. We constructed a multivariate Cox proportional hazards regression model and used a likelihood ratio test to establish whether the imaging features are prognostic of survival. We also evaluated the individual prognostic value of each feature through multivariate analysis using the multivariate Cox model and univariate analysis using univariate Cox models for each feature. We found that the automatically extracted imaging features were predictive of survival (p = 0.031). Multivariate analysis of individual features showed that two individual features were predictive of survival: proportion of enhancing tumor (p = 0.013), and major axis length (p = 0.026). Univariate analysis indicated the same two features as significant (p = 0.021, and p = 0.017 respectively). We conclude that computer-extracted MR imaging features can be used for survival prognosis in GBM patients.

Radiogenomic Analysis of Breast Cancer: Luminal B Molecular Subtype Is Associated with Enhancement Dynamics at MR Imaging

  • Mazurowski, Maciej A
  • Zhang, Jing
  • Grimm, Lars J
  • Yoon, Sora C
  • Silber, James I
Radiology 2014 Journal Article, cited 88 times
Website
Purpose: To investigate associations between breast cancer molecular subtype and semiautomatically extracted magnetic resonance (MR) imaging features. Materials and Methods: Imaging and genomic data from the Cancer Genome Atlas and the Cancer Imaging Archive for 48 patients with breast cancer from four institutions in the United States were used in this institutional review board approval-exempt study. Computer vision algorithms were applied to extract 23 imaging features from lesions indicated by a breast radiologist on MR images. Morphologic, textural, and dynamic features were extracted. Molecular subtype was determined on the basis of genomic analysis. Associations between the imaging features and molecular subtype were evaluated by using logistic regression and likelihood ratio tests. The analysis controlled for the age of the patients, their menopausal status, and the orientation of the MR images (sagittal vs axial). Results: There is an association (P = .0015) between the luminal B subtype and a dynamic contrast material-enhancement feature that quantifies the relationship between lesion enhancement and background parenchymal enhancement. Cancers with a higher ratio of lesion enhancement rate to background parenchymal enhancement rate are more likely to be luminal B subtype. Conclusion: The luminal B subtype of breast cancer is associated with MR imaging features that relate the enhancement dynamics of the tumor and the background parenchyma.

Imaging descriptors improve the predictive power of survival models for glioblastoma patients

  • Mazurowski, Maciej Andrzej
  • Desjardins, Annick
  • Malof, Jordan Milton
2013 Journal Article, cited 62 times
Website
BACKGROUND: Because effective prediction of survival time can be highly beneficial for the treatment of glioblastoma patients, the relationship between survival time and multiple patient characteristics has been investigated. In this paper, we investigate whether the predictive power of a survival model based on clinical patient features improves when MRI features are also included in the model. METHODS: The subjects in this study were 82 glioblastoma patients for whom clinical features as well as MR imaging exams were made available by The Cancer Genome Atlas (TCGA) and The Cancer Imaging Archive (TCIA). Twenty-six imaging features in the available MR scans were assessed by radiologists from the TCGA Glioma Phenotype Research Group. We used multivariate Cox proportional hazards regression to construct 2 survival models: one that used 3 clinical features (age, gender, and KPS) as the covariates and 1 that used both the imaging features and the clinical features as the covariates. Then, we used 2 measures to compare the predictive performance of these 2 models: area under the receiver operating characteristic curve for the 1-year survival threshold and overall concordance index. To eliminate any positive performance estimation bias, we used leave-one-out cross-validation. RESULTS: The performance of the model based on both clinical and imaging features was higher than the performance of the model based on only the clinical features, in terms of both area under the receiver operating characteristic curve (P < .01) and the overall concordance index (P < .01). CONCLUSIONS: Imaging features assessed using a controlled lexicon have additional predictive value compared with clinical features when predicting survival time in glioblastoma patients.

Predicting outcomes in glioblastoma patients using computerized analysis of tumor shape: preliminary data

  • Mazurowski, Maciej A
  • Czarnek, Nicholas M
  • Collins, Leslie M
  • Peters, Katherine B
  • Clark, Kal L
2016 Conference Proceedings, cited 6 times
Website
Glioblastoma (GBM) is the most common primary brain tumor and is characterized by very poor survival. However, while some patients survive only a few months, some live for multiple years. Accurate prognosis of survival and stratification of patients allows for making more personalized treatment decisions and moves treatment of GBM one step closer toward the paradigm of precision medicine. While some molecular biomarkers are being investigated, medical imaging remains significantly underutilized for prognostication in GBM. In this study, we investigated whether computer analysis of tumor shape can contribute toward accurate prognosis of outcomes. Specifically, we applied computer algorithms to extract 5 shape features from magnetic resonance imaging (MRI) for 22 GBM patients. Then, we determined whether each one of the features can accurately distinguish between patients with good and poor outcomes. We found that one of the 5 analyzed features showed prognostic value for survival. The prognostic feature describes how well the 3D tumor shape fills its minimum bounding ellipsoid. Specifically, for low values (less than or equal to the median) the proportion of patients that survived more than a year was 27%, while for high values (higher than the median) the proportion of patients with survival of more than 1 year was 82%. The difference was statistically significant (p < 0.05) even though the number of patients analyzed in this pilot study was low. We concluded that computerized, 3D analysis of tumor shape in MRI may strongly contribute to accurate prognostication and stratification of patients for therapy in GBM.
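
The prognostic feature can be approximated in a few lines: compute the tumor volume and divide by the volume of a bounding ellipsoid aligned with the tumor's principal axes. This PCA-aligned ellipsoid is a stand-in for the true minimum bounding ellipsoid used in the paper.

```python
import numpy as np

def ellipsoid_fill_ratio(mask: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Approximate bounding-ellipsoid volume ratio: tumor volume divided by
    the volume of a PCA-aligned bounding ellipsoid."""
    pts = np.argwhere(mask > 0) * np.asarray(spacing)
    centred = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    proj = centred @ vt.T                          # principal-axis coordinates
    semi = np.maximum(np.abs(proj).max(axis=0), 1e-6)
    # Inflate the semi-axes so every voxel lies inside the ellipsoid.
    semi = semi * np.sqrt(((proj / semi) ** 2).sum(axis=1)).max()
    ell_vol = 4.0 / 3.0 * np.pi * np.prod(semi)
    tum_vol = pts.shape[0] * np.prod(spacing)
    return tum_vol / ell_vol
```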

Radiogenomics of lower-grade glioma: algorithmically-assessed tumor shape is associated with tumor genomic subtypes and patient outcomes in a multi-institutional study with The Cancer Genome Atlas data

  • Mazurowski, Maciej A
  • Clark, Kal
  • Czarnek, Nicholas M
  • Shamsesfandabadi, Parisa
  • Peters, Katherine B
  • Saha, Ashirbani
Journal of Neuro-Oncology 2017 Journal Article, cited 8 times
Website
Recent studies identified distinct genomic subtypes of lower-grade gliomas that could potentially be used to guide patient treatment. This study aims to determine whether there is an association between genomics of lower-grade glioma tumors and patient outcomes using algorithmic measurements of tumor shape in magnetic resonance imaging (MRI). We analyzed preoperative imaging and genomic subtype data from 110 patients with lower-grade gliomas (WHO grade II and III) from The Cancer Genome Atlas. Computer algorithms were applied to analyze the imaging data and provided five quantitative measurements of tumor shape in two and three dimensions. Genomic data for the analyzed cohort of patients consisted of previously identified genomic clusters based on IDH mutation and 1p/19q co-deletion, DNA methylation, gene expression, DNA copy number, and microRNA expression. Patient outcomes were quantified by overall survival. We found that there is a strong association between angular standard deviation (ASD), which measures irregularity of the tumor boundary, and the IDH-1p/19q subtype (p < 0.0017), RNASeq cluster (p < 0.0002), DNA copy number cluster (p < 0.001), and the cluster of clusters (p < 0.0002). The RNASeq cluster was also associated with bounding ellipsoid volume ratio (p < 0.0005). Tumors in the IDH wild type cluster and R2 RNASeq cluster which are associated with much poorer outcomes generally had higher ASD reflecting more irregular shape. ASD also showed association with patient overall survival (p = 0.006). Shape features in MRI were strongly associated with genomic subtypes and patient outcomes in lower-grade glioma.
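
One plausible reading of angular standard deviation, following the spirit rather than the letter of the paper, is the dispersion of the (normalised) boundary radius across angular bins around the tumor centroid; irregular boundaries produce larger values.

```python
import numpy as np

def angular_std(boundary_xy: np.ndarray, n_bins: int = 72) -> float:
    """ASD-style irregularity measure: bin 2D boundary points by polar angle
    around the centroid, average the radius per bin, and return the std of
    the mean-normalised per-bin radii."""
    d = boundary_xy - boundary_xy.mean(axis=0)
    r = np.hypot(d[:, 0], d[:, 1])
    theta = np.arctan2(d[:, 1], d[:, 0])
    bins = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.digitize(theta, bins) - 1
    mean_r = np.array([r[idx == k].mean() for k in range(n_bins) if (idx == k).any()])
    return float(np.std(mean_r / mean_r.mean()))
```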

Fully automatic MRI brain tumor segmentation using efficient spatial attention convolutional networks with composite loss

  • Mazumdar, Indrajit
  • Mukherjee, Jayanta
Neurocomputing 2022 Journal Article, cited 0 times
Automatically segmenting tumors from brain magnetic resonance imaging scans is crucial for diagnosis and planning treatment. However, brain tumors are highly diverse in location, contrast, size, and shape, making automatic segmentation extremely challenging. Recent techniques for segmenting brain tumors are mostly built using convolutional neural networks (CNNs). However, most of these existing techniques are inefficient, having slow inference speed and high parameter count. To reduce the diagnostic time, we present an accurate and efficient CNN model having fast inference speed and low parameter count for fully automatic brain tumor segmentation. Our novel CNN, the efficient spatial attention network (ESA-Net), is an improved variant of the popular U-Net. ESA-Net was built using our proposed efficient spatial attention (ESA) blocks containing depthwise separable convolution layers and a lightweight spatial attention module. The ESA blocks significantly improve both efficiency and segmentation accuracy. We also proposed a new composite loss function by combining Dice, focal, and Hausdorff distance (HD) losses to significantly improve the segmentation accuracy by tackling extreme class imbalance and directly optimizing the Dice score and HD. The effectiveness of the proposed network and loss function was evaluated by performing extensive experiments on the BraTS 2021 benchmark dataset. ESA-Net significantly outperformed U-Net in segmentation accuracy while having four times faster inference speed and eight times fewer parameters. In addition, the composite loss outperformed other loss functions. The proposed model achieved significantly better segmentation accuracy than other efficient models while having faster inference speed and fewer parameters. Moreover, it obtained competitive segmentation accuracy against state-of-the-art models. The proposed system segments a patient’s brain in 2.7 s using a GPU and has 157 times faster inference speed and 177 times fewer parameters than other state-of-the-art systems.
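
A sketch of such a composite loss for a binary task is shown below, combining Dice and focal terms; the paper's Hausdorff-distance term, typically implemented with distance transforms, is omitted for brevity, and the weights are illustrative.

```python
import torch
import torch.nn.functional as F

def dice_focal_loss(logits, target, alpha=0.25, gamma=2.0, w_dice=1.0, w_focal=1.0):
    """Composite Dice + focal loss for binary segmentation.
    `logits` are raw network outputs; `target` is a float mask in {0, 1}."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    dice = 1.0 - (2.0 * inter + 1.0) / (prob.sum() + target.sum() + 1.0)
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_t = prob * target + (1.0 - prob) * (1.0 - target)   # prob of the true class
    focal = (alpha * (1.0 - p_t) ** gamma * bce).mean()
    return w_dice * dice + w_focal * focal
```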

Prostate tumor eccentricity predicts Gleason score better than prostate tumor volume

  • Mayer, R.
  • Simone, C. B., 2nd
  • Turkbey, B.
  • Choyke, P.
Quant Imaging Med Surg 2022 Journal Article, cited 0 times
Website
BACKGROUND: Prostate tumor volume predicts biochemical recurrence, metastases, and tumor proliferation. A recent study showed that prostate tumor eccentricity (elongation or roundness) correlated with Gleason score. No studies have examined the relationship among the prostate tumor's shape, volume, and potential aggressiveness. METHODS: Of the 26 patients that were analyzed, 18 had volumes >1 cc for the histology-based study, and 25 took up contrast material for the MRI portion of this study. This retrospective study quantitatively compared tumor eccentricity and volume measurements from pathology assessment of sectioned wholemount prostates and multi-parametric MRI to Gleason scores. Multi-parametric MRI (T1, T2, diffusion, dynamic contrast-enhanced images) were resized, translated, and stitched to form spatially registered multi-parametric cubes. Multi-parametric signatures that characterize prostate tumors were inserted into a target detection algorithm (Adaptive Cosine Estimator, ACE). Various detection thresholds were applied to discriminate tumor from normal tissue. Pixel-based blobbing and labeling were applied to digitized pathology slides and thresholded ACE images. Tumor volumes were measured by counting voxels within the blob. Eccentricity calculations used moments of inertia from the blobs. RESULTS: From wholemount prostatectomy slides, fitting two sets of independent variables, prostate tumor eccentricity (largest blob eccentricity, weighted eccentricity, filtered weighted eccentricity) and tumor volume (largest blob volume, average blob volume, filtered average blob volume), to Gleason score in a multivariate analysis yields correlation coefficients R=0.798 to 0.879 with P<0.01. The eccentricity t-statistic exceeded the volume t-statistic. Fitting histology-based total prostate tumor volume against Gleason score yields R=0.498, P=0.0098. From multi-parametric MRI, the correlation coefficient R between the Gleason score and the largest blob eccentricity for varying thresholds (0.30 to 0.55) ranged from -0.51 to -0.672 (P<0.01). For varying thresholds (0.60 to 0.80) for MRI detection, the R between the largest blob volume and the Gleason score ranged from 0.46 to 0.50 (P<0.03). Combining tumor eccentricity and tumor volume in multivariate analysis failed to increase Gleason score prediction. CONCLUSIONS: Prostate tumor eccentricity, determined by histology or MRI, more accurately predicted Gleason score than prostate tumor volume. Combining tumor eccentricity with volume from histology-based analysis enhanced Gleason score prediction, unlike MRI.
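
Eccentricity from moments of inertia, as used here, follows from the eigenvalues of the blob's coordinate covariance, which are proportional to the squared semi-axes of the equivalent ellipse:

```python
import numpy as np

def blob_eccentricity(mask: np.ndarray) -> float:
    """Eccentricity of a 2D blob from its moments of inertia.
    ecc = sqrt(1 - (minor/major)) with minor/major the covariance eigenvalues,
    which are proportional to the squared semi-axes of the equivalent ellipse."""
    pts = np.argwhere(mask > 0).astype(float)
    cov = np.cov(pts.T)
    lam = np.sort(np.linalg.eigvalsh(cov))   # ascending: [minor, major]
    return float(np.sqrt(1.0 - lam[0] / lam[1]))
```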

Development and testing quantitative metrics from multi-parametric magnetic resonance imaging that predict Gleason score for prostate tumors

  • Mayer, R.
  • Simone, C. B., 2nd
  • Turkbey, B.
  • Choyke, P.
Quant Imaging Med Surg 2022 Journal Article, cited 0 times
Website
Background: Radiologists currently subjectively examine multi-parametric magnetic resonance imaging (MRI) to detect possible clinically significant lesions using the Prostate Imaging Reporting and Data System (PI-RADS) protocol. The assessment of imaging, however, relies on the experience and judgement of radiologists creating opportunity for inter-reader variability. Quantitative metrics, such as z-score and signal to clutter ratio (SCR), are therefore needed. Methods: Multi-parametric MRI (T1, T2, diffusion, dynamic contrast-enhanced images) were resampled, rescaled, translated, and stitched to form spatially registered multi-parametric cubes for patients undergoing radical prostatectomy. Multi-parametric signatures that characterize prostate tumors were inserted into z-score and SCR. The multispectral covariance matrix was computed for the outlined normal prostate. The z-score from each MRI image was computed and summed. To reduce noise in the covariance matrix, following matrix decomposition, the noisy eigenvectors were removed. Also, regularization and modified regularization was applied to the covariance matrix by minimizing the discrimination score. The filtered and regularized covariance matrices were inserted into the SCR calculation. The z-score and SCR were quantitatively compared to Gleason scores from clinical pathology assessment of the histology of sectioned wholemount prostates. Results: Twenty-six consecutive patients were enrolled in this retrospective study. Median patient age was 60 years (range, 49 to 75 years), median prostate-specific antigen (PSA) was 5.8 ng/mL (range, 2.3 to 23.7 ng/mL), and median Gleason score was 7 (range, 6 to 9). A linear fit of the summed z-score against Gleason score found a correlation of R=0.48 and a P value of 0.015. A linear fit of the SCR from regularizing covariance matrix against Gleason score found a correlation of R=0.39 and a P value of 0.058. The SCR employing the modified regularizing covariance matrix against Gleason score found a correlation of R=0.52 and a P value of 0.007. A linear fit of the SCR from filtering out 3 and 4 eigenvectors from the covariance matrix against Gleason score found correlations of R=0.50 and 0.44, respectively, and P values of 0.011 and 0.027, respectively. Conclusions: Z-score and SCR using filtered and regularized covariance matrices derived from spatially registered multi-parametric MRI correlates with Gleason score with highly significant P values.

Pilot study for supervised target detection applied to spatially registered multiparametric MRI in order to non-invasively score prostate cancer

  • Mayer, Rulon
  • Simone, Charles B
  • Skinner, William
  • Turkbey, Baris
  • Choyke, Peter
2018 Journal Article, cited 0 times
Website
BACKGROUND: Gleason Score (GS) is a validated predictor of prostate cancer (PCa) disease progression and outcomes. GS from invasive needle biopsies suffers from significant inter-observer variability and possible sampling error, leading to underestimating disease severity ("underscoring") and possible complications. A robust non-invasive image-based approach is, therefore, needed. PURPOSE: Use spatially registered multi-parametric MRI (MP-MRI), signatures, and supervised target detection algorithms (STDA) to non-invasively determine the GS of PCa at the voxel level. METHODS AND MATERIALS: This study retrospectively analyzed 26 MP-MRI from The Cancer Imaging Archive. The MP-MRI (T2, Diffusion Weighted, Dynamic Contrast Enhanced) were spatially registered to each other, combined into stacks, and stitched together to form hypercubes. Multi-parametric (or multi-spectral) signatures derived from a training set of registered MP-MRI were transformed using statistics-based Whitening-Dewhitening (WD). The transformed signatures were inserted into STDA (having conical decision surfaces), which were applied to the registered MP-MRI to determine the tumor GS. The MRI-derived GS was quantitatively compared to the pathologist's assessment of the histology of sectioned whole-mount prostates from patients who underwent radical prostatectomy. In addition, a meta-analysis of 17 studies of needle-biopsy-determined GS with confusion matrices was compared to the MRI-determined GS. RESULTS: STDA- and histology-determined GS are highly correlated (R=0.86, p<0.02). STDA more accurately determined GS and reduced GS underscoring of PCa relative to needle biopsy as summarized by the meta-analysis (p<0.05). CONCLUSION: This pilot study found that registered MP-MRI, STDA, and WD transforms of signatures show promise in non-invasively determining the GS of PCa and reducing underscoring with high spatial resolution.
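
The ACE detector itself has a closed form: with background mean mu and covariance C, ACE(x) = (s'C^-1 x)^2 / ((s'C^-1 s)(x'C^-1 x)), where s is the target signature and x a voxel's multi-parametric vector. A vectorised sketch:

```python
import numpy as np

def ace_scores(x: np.ndarray, s: np.ndarray, mu: np.ndarray, cov: np.ndarray):
    """Adaptive Cosine Estimator scores in [0, 1] per voxel.
    `x` is (n_voxels, n_channels); `s`, `mu` are (n_channels,); `cov` is the
    background covariance. Both x and s are demeaned by the background mean."""
    ci = np.linalg.inv(cov)
    xd, sd = x - mu, s - mu
    num = (xd @ ci @ sd) ** 2
    den = (sd @ ci @ sd) * np.einsum("ij,jk,ik->i", xd, ci, xd)
    return num / den
```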

Correlation of prostate tumor eccentricity and Gleason scoring from prostatectomy and multi-parametric-magnetic resonance imaging

  • Mayer, Rulon
  • Simone, Charles B., II
  • Turkbey, Baris
  • Choyke, Peter
Quantitative Imaging in Medicine and Surgery 2021 Journal Article, cited 0 times
Website

Algorithms applied to spatially registered multi-parametric MRI for prostate tumor volume measurement

  • Mayer, Rulon
  • Simone, Charles B., II
  • Turkbey, Baris
  • Choyke, Peter
Quantitative Imaging in Medicine and Surgery 2021 Journal Article, cited 0 times
Website

“One Stop Shop” for Prostate Cancer Staging using Imaging Biomarkers and Spatially Registered Multi-Parametric MRI

  • Mayer, Rulon
2020 Patent, cited 0 times
Website

A fully automated deep learning pipeline to assess muscle mass in brain tumor patients

  • Mauricaite, Radvile
  • Mi, Ella
  • Chen, Jiarong
  • Ho, Andrew
  • Pakzad-Shahabi, Lillie
  • Williams, Matt
2021 Conference Paper, cited 0 times
Website
Background: Brain tumors are the leading cause of cancer death in the under-40s. The commonest malignant brain tumor is glioblastoma multiforme (GBM), with less than 5% 5-year survival. Low skeletal muscle mass is associated with poor survival in cancer and is measurable on routine imaging, but manual muscle mass quantification is time-consuming and susceptible to interrater inconsistency. In patients with brain tumors, measurement of the thickness of the temporalis muscle acts as a proxy. We present a fully-automated deep learning-based system for quantification of the temporalis muscle, a skeletal muscle mass surrogate, on MRI of the head. Methods: MRI scans of 330 patients were obtained from four different datasets. Two 2D U-Nets were trained, one to segment the eyeballs and the other to segment the temporalis muscle, and used to quantify the cross-sectional areas of the eyeballs and temporalis muscle. The eyeball segmentation was used to choose a consistent level on which to assess temporalis muscle mass. We assessed segmentation accuracy using Dice and Hausdorff scores, and assessed the system's ability to choose the correct slice of the MRI by comparing it with a manual choice of slice. Results: The trained models predict eyeball and temporalis muscle segmentations with good accuracy. Mean Dice scores are 0.90±0.03 and 0.90±0.05 and Hausdorff distances are 2.88±0.60 and 1.89±0.35, respectively. The automatic pipeline chooses slices for segmentation that are either identical or close to the manual choice in 96.1% of cases. Conclusions: We have developed an end-to-end system that uses two independently trained U-Nets to first segment the eyeballs and then use that as a reference point to pick the correct slice of the MRI on which to use the second U-Net to measure temporalis cross-sectional area. This allows for the automated processing of head MRI scans to measure temporalis muscle mass, which has previously been shown to correlate with body muscle mass and survival in several cancer types.
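
Once the slice is chosen, the cross-sectional area reduces to counting segmented pixels and multiplying by the in-plane pixel area; the spacing below is a placeholder that would be read from the DICOM header in practice.

```python
import numpy as np

def cross_sectional_area(mask2d: np.ndarray, pixel_spacing=(0.5, 0.5)) -> float:
    """Cross-sectional area (mm^2) of a segmented structure on one MRI slice:
    pixel count times in-plane pixel area."""
    return float((mask2d > 0).sum() * pixel_spacing[0] * pixel_spacing[1])
```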

Bone Marrow and Tumor Radiomics at 18F-FDG PET/CT: Impact on Outcome Prediction in Non-Small Cell Lung Cancer

  • Mattonen, Sarah A
  • Davidzon, Guido A
  • Benson, Jalen
  • Leung, Ann N C
  • Vasanawala, Minal
  • Horng, George
  • Shrager, Joseph B
  • Napel, Sandy
  • Nair, Viswam S.
Radiology 2019 Journal Article, cited 0 times
Website
Background Primary tumor maximum standardized uptake value is a prognostic marker for non-small cell lung cancer. In the setting of malignancy, bone marrow activity from fluorine 18-fluorodeoxyglucose (FDG) PET may be informative for clinical risk stratification. Purpose To determine whether integrating FDG PET radiomic features of the primary tumor, tumor penumbra, and bone marrow identifies lung cancer disease-free survival more accurately than clinical features alone. Materials and Methods Patients were retrospectively analyzed from two distinct cohorts collected between 2008 and 2016. Each tumor, its surrounding penumbra, and bone marrow from the L3-L5 vertebral bodies was contoured on pretreatment FDG PET/CT images. There were 156 bone marrow and 512 tumor and penumbra radiomic features computed from the PET series. Randomized sparse Cox regression by least absolute shrinkage and selection operator identified features that predicted disease-free survival in the training cohort. Cox proportional hazards models were built and locked in the training cohort, then evaluated in an independent cohort for temporal validation. Results There were 227 patients analyzed; 136 for training (mean age, 69 years +/- 9 [standard deviation]; 101 men) and 91 for temporal validation (mean age, 72 years +/- 10; 91 men). The top clinical model included stage; adding tumor region features alone improved outcome prediction (log likelihood, -158 vs -152; P = .007). Adding bone marrow features continued to improve performance (log likelihood, -158 vs -145; P = .001). The top model integrated stage, two bone marrow texture features, one tumor with penumbra texture feature, and two penumbra texture features (concordance, 0.78; 95% confidence interval: 0.70, 0.85; P < .001). This fully integrated model was a predictor of poor outcome in the independent cohort (concordance, 0.72; 95% confidence interval: 0.64, 0.80; P < .001) and a binary score stratified patients into high and low risk of poor outcome (P < .001). Conclusion A model that includes pretreatment fluorine 18-fluorodeoxyglucose PET texture features from the primary tumor, tumor penumbra, and bone marrow predicts disease-free survival of patients with non-small cell lung cancer more accurately than clinical features alone.

[18F] FDG Positron Emission Tomography (PET) Tumor and Penumbra Imaging Features Predict Recurrence in Non-Small Cell Lung Cancer

  • Mattonen, Sarah A.
  • Davidzon, Guido A.
  • Bakr, Shaimaa
  • Echegaray, Sebastian
  • Leung, Ann N. C.
  • Vasanawala, Minal
  • Horng, George
  • Napel, Sandy
  • Nair, Viswam S.
Tomography (Ann Arbor, Mich.) 2019 Journal Article, cited 0 times
Website
We identified computational imaging features on 18F-fluorodeoxyglucose positron emission tomography (PET) that predict recurrence/progression in non-small cell lung cancer (NSCLC). We retrospectively identified 291 patients with NSCLC from 2 prospectively acquired cohorts (training, n = 145; validation, n = 146). We contoured the metabolic tumor volume (MTV) on all pretreatment PET images and added a 3-dimensional penumbra region that extended outward 1 cm from the tumor surface. We generated 512 radiomics features, selected 435 features based on robustness to contour variations, and then applied randomized sparse regression (LASSO) to identify features that predicted time to recurrence in the training cohort. We built Cox proportional hazards models in the training cohort and independently evaluated the models in the validation cohort. Two features including stage and a MTV plus penumbra texture feature were selected by LASSO. Both features were significant univariate predictors, with stage being the best predictor (hazard ratio [HR] = 2.15 [95% confidence interval (CI): 1.56-2.95], P < .001). However, adding the MTV plus penumbra texture feature to stage significantly improved prediction (P = .006). This multivariate model was a significant predictor of time to recurrence in the training cohort (concordance = 0.74 [95% CI: 0.66-0.81], P < .001) that was validated in a separate validation cohort (concordance = 0.74 [95% CI: 0.67-0.81], P < .001). A combined radiomics and clinical model improved NSCLC recurrence prediction. FDG PET radiomic features may be useful biomarkers for lung cancer prognosis and add clinical utility for risk stratification.
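
The LASSO step can be emulated with an L1-penalised Cox regression, e.g. via lifelines; the sketch below uses synthetic data and an assumed penalty strength, and is a stand-in for the authors' exact randomized-LASSO procedure.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(60, 10)),
                 columns=[f"radiomic_{i}" for i in range(10)])
X["time"] = rng.exponential(24, size=60)       # months to recurrence (synthetic)
X["recurrence"] = rng.integers(0, 2, size=60)  # event indicator (synthetic)

# L1-penalised Cox regression as a LASSO-style feature selector.
cph = CoxPHFitter(penalizer=0.5, l1_ratio=1.0)
cph.fit(X, duration_col="time", event_col="recurrence")
kept = cph.params_[cph.params_.abs() > 1e-8].index.tolist()
print(kept)   # features surviving the L1 penalty
```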

Automated Classification of Lung Diseases in Computed Tomography Images Using a Wavelet Based Convolutional Neural Network

  • Matsuyama, Eri
  • Tsai, Du-Yih
Journal of Biomedical Science and Engineering 2018 Journal Article, cited 0 times
Website

Deep Learning-Based Time-to-Death Prediction Model for COVID-19 Patients Using Clinical Data and Chest Radiographs

  • Matsumoto, T.
  • Walston, S. L.
  • Walston, M.
  • Kabata, D.
  • Miki, Y.
  • Shiba, M.
  • Ueda, D.
J Digit Imaging 2022 Journal Article, cited 0 times
Website
Accurate estimation of mortality and time to death at admission for COVID-19 patients is important, and several deep learning models have been created for this task. However, there are currently no prognostic models which use end-to-end deep learning to predict time to event for admitted COVID-19 patients using chest radiographs and clinical data. We retrospectively implemented a new artificial intelligence model combining DeepSurv (a multiple-perceptron implementation of the Cox proportional hazards model) and a convolutional neural network (CNN) using 1356 COVID-19 inpatients. For comparison, we also prepared DeepSurv only with clinical data, DeepSurv only with images (CNNSurv), and Cox proportional hazards models. Clinical data and chest radiographs at admission were used to estimate patient outcome (death or discharge) and duration to the outcome. Harrell's concordance index (c-index) of the DeepSurv with CNN model was 0.82 (0.75-0.88) and this was significantly higher than the DeepSurv only with clinical data model (c-index = 0.77 (0.69-0.84), p = 0.011), CNNSurv (c-index = 0.70 (0.63-0.79), p = 0.001), and the Cox proportional hazards model (c-index = 0.71 (0.63-0.79), p = 0.001). These results suggest that the time-to-event prognosis model became more accurate when chest radiographs and clinical data were used together.
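
At the heart of DeepSurv-style models is the negative log partial likelihood of the Cox model, applied to the network's scalar risk output; a minimal PyTorch sketch (ties are handled only implicitly, i.e. a Breslow-like approximation):

```python
import torch

def cox_ph_loss(risk: torch.Tensor, time: torch.Tensor, event: torch.Tensor):
    """Negative log partial likelihood for DeepSurv-style training.
    `risk` is the network's log-hazard output, `time` the survival/censoring
    time, `event` a float tensor with 1 for death and 0 for censoring."""
    order = torch.argsort(time, descending=True)   # build risk sets via cumsum
    risk, event = risk[order], event[order]
    log_cum = torch.logcumsumexp(risk, dim=0)      # log-sum-exp over risk set
    return -((risk - log_cum) * event).sum() / event.sum().clamp(min=1)
```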

Bone suppression for chest X-ray image using a convolutional neural filter

  • Matsubara, N.
  • Teramoto, A.
  • Saito, K.
  • Fujita, H.
Australas Phys Eng Sci Med 2019 Journal Article, cited 0 times
Website
Chest X-rays are used for mass screening for the early detection of lung cancer. However, lung nodules are often overlooked because of bones overlapping the lung fields. Bone suppression techniques based on artificial intelligence have been developed to solve this problem. However, bone suppression accuracy needs improvement. In this study, we propose a convolutional neural filter (CNF) for bone suppression based on a convolutional neural network which is frequently used in the medical field and has excellent performance in image processing. CNF outputs a value for the bone component of the target pixel by inputting pixel values in the neighborhood of the target pixel. By processing all positions in the input image, a bone-extracted image is generated. Finally, bone-suppressed image is obtained by subtracting the bone-extracted image from the original chest X-ray image. Bone suppression was most accurate when using CNF with six convolutional layers, yielding bone suppression of 89.2%. In addition, abnormalities, if present, were effectively imaged by suppressing only bone components and maintaining soft-tissue. These results suggest that the chances of missing abnormalities may be reduced by using the proposed method. The proposed method is useful for bone suppression in chest X-ray images.

High-dose hypofractionated pencil beam scanning carbon ion radiotherapy for lung tumors: Dosimetric impact of different spot sizes and robustness to interfractional uncertainties

  • Mastella, Edoardo
  • Mirandola, Alfredo
  • Russo, Stefania
  • Vai, Alessandro
  • Magro, Giuseppe
  • Molinelli, Silvia
  • Barcellini, Amelia
  • Vitolo, Viviana
  • Orlandi, Ester
  • Ciocca, Mario
Physica Medica 2021 Journal Article, cited 0 times
Website

Robustness Evaluation of a Deep Learning Model on Sagittal and Axial Breast DCE-MRIs to Predict Pathological Complete Response to Neoadjuvant Chemotherapy

  • Massafra, Raffaella
  • Comes, Maria Colomba
  • Bove, Samantha
  • Didonna, Vittorio
  • Gatta, Gianluca
  • Giotta, Francesco
  • Fanizzi, Annarita
  • La Forgia, Daniele
  • Latorre, Agnese
  • Pastena, Maria Irene
  • Pomarico, Domenico
  • Rinaldi, Lucia
  • Tamborra, Pasquale
  • Zito, Alfredo
  • Lorusso, Vito
  • Paradiso, Angelo Virgilio
Journal of Personalized Medicine 2022 Journal Article, cited 0 times
To date, some artificial intelligence (AI) methods have exploited Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) to identify finer tumor properties as potential earlier indicators of pathological Complete Response (pCR) in breast cancer patients undergoing neoadjuvant chemotherapy (NAC). However, they work either for sagittal or axial MRI protocols. More flexible AI tools, which can be used easily in clinical practice across various institutions in accordance with each institution's own imaging acquisition protocol, are required. Here, we addressed this topic by developing an AI method based on deep learning to give an early prediction of pCR under various DCE-MRI protocols (axial and sagittal). Sagittal DCE-MRIs refer to 151 patients (42 pCR; 109 non-pCR) from the public I-SPY1 TRIAL database (DB); axial DCE-MRIs are related to 74 patients (22 pCR; 52 non-pCR) from a private DB provided by Istituto Tumori “Giovanni Paolo II” in Bari (Italy). By merging the features extracted from baseline MRIs with some pre-treatment clinical variables, accuracies of 84.4% and 77.3% and AUC values of 80.3% and 78.0% were achieved on the independent tests related to the public DB and the private DB, respectively. Overall, the presented method has shown to be robust regardless of the specific MRI protocol.

Multimodal 3D ultrasound and CT in image-guided spinal surgery: public database and new registration algorithms

  • Masoumi, N.
  • Belasso, C. J.
  • Ahmad, M. O.
  • Benali, H.
  • Xiao, Y.
  • Rivaz, H.
Int J Comput Assist Radiol Surg 2021 Journal Article, cited 0 times
Website
PURPOSE: Accurate multimodal registration of intraoperative ultrasound (US) and preoperative computed tomography (CT) is a challenging problem. Construction of public datasets of US and CT images can accelerate the development of such image registration techniques. This can help ensure the accuracy and safety of spinal surgeries using image-guided surgery systems where an image registration is employed. In addition, we present two algorithms to register US and CT images. METHODS: We present three different datasets of vertebrae with corresponding CT, US, and simulated US images. For each of the two latter datasets, we also provide 16 landmark pairs of matching structures between the CT and US images and performed fiducial registration to acquire a silver standard for assessing image registration. In addition, we propose two patch-based rigid image registration algorithms, one based on normalized cross-correlation (NCC) and the other based on correlation ratio (CR), to register misaligned CT and US images. RESULTS: The CT and corresponding US images of the proposed database were pre-processed and misaligned with different error intervals, resulting in 6000 registration problems solved using both the NCC and CR methods. Our results show that the methods were successful in aligning the pre-processed CT and US images by decreasing the warping index. CONCLUSIONS: The database provides a resource for evaluating image registration techniques. The simulated data have two applications. First, they provide the gold-standard ground truth, which is difficult to obtain with ex vivo and in vivo data, for validating US-CT registration methods. Second, the simulated US images can be used to validate real-time US simulation methods. In addition, the proposed image registration techniques can be useful for developing methods in clinical applications.
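
Normalized cross-correlation, the similarity metric behind the first of the two registration algorithms above, reduces to a zero-mean dot product; a minimal numpy sketch for one pair of patches:

    import numpy as np

    def ncc(patch_a, patch_b, eps=1e-8):
        """Normalized cross-correlation in [-1, 1] between two same-sized patches."""
        a = patch_a - patch_a.mean()
        b = patch_b - patch_b.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

    us_patch = np.random.rand(32, 32)   # stand-in ultrasound patch
    ct_patch = us_patch * 2.0 + 1.0     # same content, linear intensity change
    print(ncc(us_patch, ct_patch))      # ~1.0: NCC is invariant to linear rescaling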

Computer-Assisted Decision Support System in Pulmonary Cancer Detection and Stage Classification on CT Images

  • Masood, Anum
  • Sheng, Bin
  • Li, Ping
  • Hou, Xuhong
  • Wei, Xiaoer
  • Qin, Jing
  • Feng, Dagan
Journal of Biomedical Informatics 2018 Journal Article, cited 10 times
Website

Quantitative cone-beam computed tomography reconstruction for radiotherapy planning

  • Mason, Jonathan Hugh
2018 Thesis, cited 0 times
Website
Radiotherapy planning involves the calculation of dose deposition throughout the patient, based upon quantitative electron density images from computed tomography (CT) scans taken before treatment. Cone beam CT (CBCT), consisting of a point source and flat panel detector, is often built onto radiotherapy delivery machines and used during a treatment session to ensure alignment of the patient to the plan. If the plan could be recalculated throughout the course of treatment, then margins of uncertainty and toxicity to healthy tissues could be reduced. CBCT reconstructions are normally too poor to be used as the basis of planning however, due to their insufficient sampling, beam hardening and high level of scatter. In this work, we investigate reconstruction techniques to enable dose calculation from CBCT. Firstly, we develop an iterative method for directly inferring electron density from the raw X-ray measurements, which is robust to both low doses and polyenergetic artefacts from hard bone and metallic implants. Secondly, we supplement this with a fast integrated scatter model, also able to take into account the polyenergetic nature of the diagnostic X-ray source. Finally, we demonstrate the ability to provide accurate dose calculation using our methodology from numerical and physical experiments. Not only does this unlock the capability to perform CBCT radiotherapy planning, offering more targeted and less toxic treatment, but the developed techniques are also applicable and beneficial for many other CT applications.

Technical validation of multi-section robotic bronchoscope with first person view control for transbronchial biopsies of peripheral lung

  • Masaki, F.
  • King, F.
  • Kato, T.
  • Tsukada, H.
  • Colson, Y. L.
  • Hata, N.
IEEE Trans Biomed Eng 2021 Journal Article, cited 0 times
Website
This study aims to validate the advantage of a new engineering method to maneuver a multi-section robotic bronchoscope with first person view control in transbronchial biopsy. Six physician operators were recruited and tasked to operate a manual and a robotic bronchoscope to the peripheral area of patient-derived lung phantoms. The metrics collected were the furthest generation count of the airway the bronchoscope reached, the force incurred on the phantoms, and the NASA Task Load Index (NASA-TLX). The furthest airway generations the physicians reached using the manual and the robotic bronchoscopes were 6.6 +/- 1.2 and 6.7 +/- 0.8, respectively. Robotic bronchoscopes successfully reached the 5th airway generation into the peripheral area, while the manual bronchoscope typically failed earlier, in the 3rd generation. More force was incurred on the airway when the manual bronchoscope was used (0.24 +/- 0.20 [N]) than when the robotic bronchoscope was applied (0.18 +/- 0.22 [N], p<0.05). The manual bronchoscope imposed more physical demand than the robotic bronchoscope by NASA-TLX score (55 +/- 24 vs 19 +/- 16, p<0.05). These results indicate that a robotic bronchoscope facilitates the advancement of the bronchoscope to the peripheral area with less physical demand on physician operators. The metrics collected in this study are expected to serve as a benchmark for the future development of robotic bronchoscopes.

Tumor Growth in the Brain: Complexity and Fractality

  • Martín-Landrove, Miguel
  • Brú, Antonio
  • Rueda-Toicen, Antonio
  • Torres-Hoyos, Francisco
2016 Book Section, cited 1 times
Website
Tumor growth is a complex process characterized by uncontrolled cell proliferation and invasion of neighboring tissues. The understanding of these phenomena is of vital importance to establish appropriate diagnosis and therapy strategies and starts with the evaluation of their complexity with suitable descriptors produced by scaling analysis. There has been considerable effort in the evaluation of fractal dimension as a suitable parameter to describe differences between normal and pathological tissues, and it has been used for brain tumor grading with great success. In the present work, several contributions, which exploit scaling analysis in the context of brain tumors, are reviewed. These include very promising results in tumor segmentation, grading, and therapy monitoring. Emphasis is done on scaling analysis techniques applicable to multifractal systems, proposing new descriptors to advance the understanding of tumor growth dynamics in brain. These techniques serve as a starting point to develop innovative practical growth models for therapy simulation and optimization, drug delivery, and the evaluation of related neurological disorders.
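
The fractal dimension mentioned above is commonly estimated by box counting; a minimal sketch, assuming a square, power-of-two-sized binary 2D mask of the structure of interest (box sizes restricted to powers of two for simplicity):

    import numpy as np

    def box_counting_dimension(mask):
        """Estimate fractal dimension of a binary 2D mask as the slope of
        log(box count) versus log(1/box size)."""
        size = mask.shape[0]              # assume a square, power-of-two mask
        scales, counts = [], []
        box = size // 2
        while box >= 1:
            nb = size // box
            blocks = mask[:nb * box, :nb * box].reshape(nb, box, nb, box)
            occupied = blocks.any(axis=(1, 3)).sum()   # boxes touching the set
            scales.append(1.0 / box)
            counts.append(occupied)
            box //= 2
        slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
        return slope

    mask = np.zeros((256, 256), bool)
    mask[100:160, 80:200] = True          # a filled rectangle: dimension ~2
    print(box_counting_dimension(mask))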

Lesion detection in digital breast tomosynthesis: method, experiences and results of participating to the DBTex challenge

  • Martí, Robert
  • del Campo, Pablo G.
  • Vidal, Joel
  • Cufí, Xavier
  • Martí, Joan
  • Chevalier, Margarita
  • Freixenet, Jordi
  • Bosmans, Hilde
  • Marshall, Nicholas
  • Van Ongeval, Chantal
2022 Conference Paper, cited 0 times
Website
The paper presents a framework for the detection of mass-like lesions in 3D digital breast tomosynthesis. It consists of several steps, including pre- and post-processing, and a main detection block based on a Faster RCNN deep learning network. In addition to the framework, the paper describes different training steps taken to achieve better performance, including transfer learning using both mammographic and DBT data. The presented approach obtained third place in the recent DBT lesion detection challenge, DBTex, and was the top approach among those not using an ensemble-based method.

MRI Brain Tumor Segmentation Using a 2D-3D U-Net Ensemble

  • Marti Asenjo, Jaime
  • Martinez-Larraz Solís, Alfonso
2021 Book Section, cited 0 times
Three 2D networks, one for each patient plane (axial, sagittal and coronal), plus a 3D network were ensembled for tumor segmentation over MRI images, with final Dice scores of 0.75 for the enhancing tumor (ET), 0.81 for the whole tumor (WT) and 0.78 for the tumor core (TC). A survival prediction model was designed in Matlab, based on features extracted from the automatic segmentation. Gross tumor size and location seem to play a major role in survival prediction. A final accuracy of 0.617 was achieved.
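
The Dice score used throughout these BraTS entries compares a predicted mask with ground truth; a minimal numpy sketch:

    import numpy as np

    def dice(pred, truth, eps=1e-8):
        """Dice = 2 * |A & B| / (|A| + |B|) for binary masks."""
        pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
        inter = np.logical_and(pred, truth).sum()
        return 2.0 * inter / (pred.sum() + truth.sum() + eps)

    a = np.zeros((64, 64, 64), bool); a[10:30, 10:30, 10:30] = True
    b = np.zeros((64, 64, 64), bool); b[15:35, 10:30, 10:30] = True
    print(dice(a, b))  # cubes overlap along one axis by 15/20 -> Dice = 0.75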

A multi-task CNN approach for lung nodule malignancy classification and characterization

  • Marques, Sónia
  • Schiavo, Filippo
  • Ferreira, Carlos A.
  • Pedrosa, João
  • Cunha, António
  • Campilho, Aurélio
Expert Systems with Applications 2021 Journal Article, cited 0 times
Website
Lung cancer is the type of cancer with the highest mortality worldwide. Low-dose computerized tomography is the main tool used for lung cancer screening in clinical practice, allowing the visualization of lung nodules and the assessment of their malignancy. However, this evaluation is a complex task and subject to inter-observer variability, which has fueled the need for computer-aided diagnosis systems for lung nodule malignancy classification. While promising results have been obtained with automatic methods, it is often not straightforward to determine which features a given model is basing its decisions on, and this lack of explainability can be a significant stumbling block in guaranteeing the adoption of automatic systems in clinical scenarios. Though visual malignancy assessment has a subjective component, radiologists strongly base their decision on nodule features such as nodule spiculation and texture, and a malignancy classification model should thus follow the same rationale. As such, this study focuses on the characterization of lung nodules as a means for the classification of nodules in terms of malignancy. For this purpose, different model architectures for nodule characterization are proposed and compared, with the final goal of malignancy classification. It is shown that models that combine direct malignancy prediction with specific branches for nodule characterization have a better performance than the remaining models, achieving an Area Under the Curve of 0.783. The most relevant features for malignancy classification according to the model were lobulation, spiculation and texture, which is found to be in line with current clinical practice.

Multi-Modal Brain Tumor Detection Using Deep Neural Network and Multiclass SVM

  • Maqsood, S.
  • Damasevicius, R.
  • Maskeliunas, R.
2022 Journal Article, cited 15 times
Website
Background and Objectives: Clinical diagnosis has become very significant in today's health system. The most serious disease and the leading cause of mortality globally is brain cancer, which is a key research topic in the field of medical imaging. The examination and prognosis of brain tumors can be improved by an early and precise diagnosis based on magnetic resonance imaging. For computer-aided diagnosis methods to assist radiologists in the proper detection of brain tumors, medical imagery must be detected, segmented, and classified. Manual brain tumor detection is a monotonous and error-prone procedure for radiologists; hence, it is very important to implement an automated method. As a result, a precise brain tumor detection and classification method is presented. Materials and Methods: The proposed method has five steps. In the first step, linear contrast stretching is used to determine the edges in the source image. In the second step, a custom 17-layered deep neural network architecture is developed for the segmentation of brain tumors. In the third step, a modified MobileNetV2 architecture is used for feature extraction and is trained using transfer learning. In the fourth step, an entropy-based controlled method is used along with a multiclass support vector machine (M-SVM) for best feature selection. In the final step, M-SVM is used for brain tumor classification, which identifies the meningioma, glioma and pituitary images. Results: The proposed method was demonstrated on the BraTS 2018 and Figshare datasets. The experimental study shows that the proposed brain tumor detection and classification method outperforms other methods both visually and quantitatively, obtaining accuracies of 97.47% and 98.92%, respectively. Finally, we adopt an eXplainable Artificial Intelligence (XAI) method to explain the results. Conclusions: Our proposed approach for brain tumor detection and classification has outperformed prior methods. These findings demonstrate that the proposed approach achieves higher performance in both visual and quantitative evaluation, with improved accuracy.

Hessian-MRLoG: Hessian information and multi-scale reverse LoG filter for pulmonary nodule detection

  • Mao, Q.
  • Zhao, S.
  • Tong, D.
  • Su, S.
  • Li, Z.
  • Cheng, X.
Comput Biol Med 2021 Journal Article, cited 0 times
Website
Computer-aided detection (CADe) of pulmonary nodules is an effective approach for early detection of lung cancer. However, due to the low contrast of lung computed tomography (CT) images and the interference of blood vessels and calcifications, CADe has the problems of a low detection rate and a high false-positive rate (FPR). To solve these problems, a novel method using Hessian information and a multi-scale reverse Laplacian of Gaussian (LoG) filter (Hessian-MRLoG) is proposed and developed in this work. Also, since the intensity distributions of the LoG operator and the lung nodule in CT images are inconsistent, and their shapes are mismatched, a multi-scale reverse Laplacian of Gaussian (MRLoG) is constructed. In addition, in order to enhance the effectiveness of target detection, the second-order partial derivatives of MRLoG are partially adjusted by introducing an adjustment factor. On this basis, the Hessian-MRLoG model is developed, and a novel elliptic filter is designed. Ultimately, in this study, the method of Hessian-MRLoG filtering is proposed and developed for pulmonary nodule detection. To verify its effectiveness and accuracy, the proposed method was used to analyze the LUNA16 dataset. The experimental results revealed that the proposed method had an accuracy of 93.6% and produced 1.0 false positives per scan (FPs/scan), indicating that the proposed method can improve the detection rate and significantly reduce the FPR. Therefore, the proposed method has the potential for application in the detection, localization and labeling of other lesion areas.
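
The multi-scale reverse LoG idea can be prototyped with scipy's Laplacian-of-Gaussian; a minimal sketch, sign-flipped and scale-normalized so that bright blobs such as nodules respond positively (this illustrates the filter family, not the authors' adjusted Hessian-MRLoG operator):

    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def multiscale_reverse_log(image, sigmas=(1, 2, 4, 8)):
        """Max over scales of -sigma^2 * LoG(image): peaks on bright blobs."""
        responses = [-(s ** 2) * gaussian_laplace(image.astype(float), sigma=s)
                     for s in sigmas]
        return np.max(responses, axis=0)

    # synthetic image with one bright Gaussian blob at the center
    yy, xx = np.mgrid[:128, :128]
    img = np.exp(-((yy - 64) ** 2 + (xx - 64) ** 2) / (2 * 5.0 ** 2))
    resp = multiscale_reverse_log(img)
    print(np.unravel_index(resp.argmax(), resp.shape))  # ~ (64, 64)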

Intelligent immune clonal optimization algorithm for pulmonary nodule classification

  • Mao, Q.
  • Zhao, S.
  • Ren, L.
  • Li, Z.
  • Tong, D.
  • Yuan, X.
  • Li, H.
Math Biosci Eng 2021 Journal Article, cited 0 times
Website
Computer-aided diagnosis (CAD) of pulmonary nodules is an effective approach for early detection of lung cancers, and pulmonary nodule classification is one of the key issues in a CAD system. However, CAD has the problems of low accuracy and a high false-positive rate (FPR) in pulmonary nodule classification. To solve these problems, a novel method using an intelligent immune clonal selection and classification algorithm is proposed and developed in this work. First, according to the mechanism and characteristics of chaotic motion with a logistic mapping, the proposed method utilizes the characteristics of chaotic motion and selects the control factor of the optimal chaotic state to generate an initial population with randomness and ergodicity. The singleness problem of the initial population of the immune algorithm was thereby solved. Second, considering the characteristics of the Gaussian mutation operator (GMO), with its small scale, and the Cauchy mutation operator (CMO), with its large scale, an intelligent mutation strategy is developed, and a novel control factor of the mutation is designed; a Gauss-Cauchy hybrid mutation operator is thus obtained. Ultimately, in this study, the intelligent immune clonal optimization algorithm is proposed and developed for pulmonary nodule classification. To verify its accuracy, the proposed method was used to analyze 90 CT scans with 652 nodules. The experimental results revealed that the proposed method had an accuracy of 97.87% and produced 1.52 false positives per scan (FPs/scan), indicating that the proposed method has high accuracy and a low FPR.
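
The chaotic initialization described above rests on the logistic map x_{n+1} = r * x_n * (1 - x_n); a minimal sketch of generating an ergodic initial population with it (r = 4 gives the fully chaotic regime; the paper's tuned control factor may differ):

    import numpy as np

    def logistic_map_population(pop_size, dim, r=4.0, x0=0.37, burn_in=100):
        """Generate a population in [0, 1]^dim by iterating the logistic map."""
        x = x0
        for _ in range(burn_in):            # discard transient iterations
            x = r * x * (1.0 - x)
        pop = np.empty((pop_size, dim))
        for i in range(pop_size):
            for j in range(dim):
                x = r * x * (1.0 - x)
                pop[i, j] = x
        return pop

    print(logistic_map_population(5, 3))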

Batch and online variational learning of hierarchical Dirichlet process mixtures of multivariate Beta distributions in medical applications

  • Manouchehri, Narges
  • Bouguila, Nizar
  • Fan, Wentao
Pattern Analysis and Applications 2021 Journal Article, cited 1 times
Website
Thanks to significant developments in healthcare industries, various types of medical data are generated. Analysing such valuable resources aids healthcare experts in understanding illnesses more precisely and providing better clinical services. Machine learning, as one of the capable tools, could assist healthcare experts in achieving expressive interpretation and making proper decisions. As annotation of medical data is a costly and sensitive task that can be performed only by healthcare professionals, label-free methods could be significantly promising. Interpretability and evidence-based decisions are other concerns in medicine. These needs motivated us to propose a novel clustering method based on hierarchical Dirichlet process mixtures of multivariate Beta distributions. To learn it, we applied batch and online variational methods for finding the proper number of clusters as well as estimating model parameters at the same time. The effectiveness of the proposed models is evaluated on three real medical applications, namely oropharyngeal carcinoma diagnosis, osteosarcoma analysis, and white blood cell counting.

Domain-Based Analysis of Colon Polyp in CT Colonography Using Image-Processing Techniques

  • Manjunath, K N
  • Siddalingaswamy, PC
  • Prabhu, GK
Asian Pacific Journal of Cancer Prevention 2019 Journal Article, cited 0 times
Website
Background: The purpose of the research was to improve the polyp detection accuracy in CT Colonography (CTC) through effective colon segmentation, removal of tagged fecal matter through Electronic Cleansing (EC), and measurement of the smaller polyps. Methods: An improved method of boundary-based semi-automatic colon segmentation with the knowledge of colon distension, an adaptive multistep method for the virtual cleansing of the segmented colon based on the knowledge of Hounsfield Units, and an automated method of smaller polyp measurement using a skeletonization technique have been implemented. Results: The techniques were evaluated on 40 CTC datasets. The segmentation method was able to delineate the colon wall accurately. The submerged colonic structures were preserved without soft tissue erosion, pseudo-enhanced voxels were corrected, and the air-contrast layer was removed without losing the adjacent tissues. Smaller polyps were validated qualitatively and quantitatively. Segmented colons were validated through volumetric overlap computation, and an accuracy of 95.826±0.6854% was achieved. In polyp measurement, the paired t-test method was applied to compare the difference with ground truth, and at α=5%, t=0.9937 and p=0.098 were achieved. Statistical values of TPR=90%, TNR=82.3% and accuracy=88.31% were achieved. Conclusion: An automated system of polyp measurement, starting from colon segmentation, has been developed to improve the existing CTC solutions. The analysis of the domain-based approach to polyps has given good results. A prototype software tool, which can be used as a low-cost polyp diagnosis tool, has been developed.

A quantitative validation of segmented colon in virtual colonoscopy using image moments

  • Manjunath, K. N.
  • Prabhu, G. K.
  • Siddalingaswamy, P. C.
Biomedical Journal 2020 Journal Article, cited 1 times
Website
Background: Evaluation of the segmented colon is one of the challenges in Computed Tomography Colonography (CTC). The objective of the study was to measure the segmented colon accurately using image processing techniques. Methods: This was a retrospective study, and Institutional Ethical clearance was obtained for the secondary dataset. The technique was tested on 85 CTC datasets. CTC datasets of 100-120 kVp, 100 mA, and ST (Slice Thickness) of 1.25 and 2.5 mm were used for empirical testing. The initial results of the work appear in the conference proceedings. Post colon segmentation, three distance measurement techniques and one volumetric overlap computation were applied in Euclidean space, in which the distances were measured on MPR views of the segmented and unsegmented colons and the volumetric overlap between these two volumes was calculated. Results: The key finding was that the measurements on both the segmented and the unsegmented volumes remained the same, with no notable difference. This was statistically confirmed. The results were validated quantitatively on 2D MPR images. An accuracy of 95.265 ± 0.4551% was achieved through volumetric overlap computation. Through a paired t-test at alpha = 5%, the statistical values were p = 0.6769 and t = 0.4169, which indicate no significant difference. Conclusion: A combination of different validation techniques was applied to check the robustness of the colon segmentation method, and good results were achieved with this approach. Through quantitative validation, the results were accepted at alpha = 5%.

Measurement of smaller colon polyp in CT colonography images using morphological image processing

  • Manjunath, KN
  • Siddalingaswamy, PC
  • Prabhu, GK
International Journal of Computer Assisted Radiology and Surgery 2017 Journal Article, cited 1 times
Website

Automatic Electronic Cleansing in Computed Tomography Colonography Images using Domain Knowledge

  • Manjunath, KN
  • Siddalingaswamy, PC
  • Prabhu, GK
Asian Pacific Journal of Cancer Prevention 2015 Journal Article, cited 0 times

An improved method of colon segmentation in computed tomography colonography images using domain knowledge

  • Manjunath, KN
  • Siddalingaswamy, PC
  • Gopalakrishna Prabhu, K
2016 Journal Article, cited 0 times

A Systematic Approach of Data Collection and Analysis in Medical Imaging Research

  • Manjunath, K.
  • Manuel, C.
  • Hegde, G.
  • Kulkarni, A.
  • Kurady, R.
  • K, M.
Asian Pac J Cancer Prev 2021 Journal Article, cited 0 times
Website
BACKGROUND: Obtaining the right image dataset for medical image research systematically is a tedious task. Anatomy segmentation is the key step before extracting radiomic features from these images. OBJECTIVE: The purpose of the study was to segment the 3D colon from CT images and to measure the smaller polyps using image processing techniques. This requires a huge number of samples for statistical analysis. Our objective was to systematically classify and arrange the dataset based on the parameters of interest so that empirical testing becomes easier in medical image research. MATERIALS AND METHODS: This paper discusses a systematic approach to data collection and analysis before using it for empirical testing. In this research the images were taken from the National Cancer Institute (NCI). TCIA from NCI has a vast collection of diagnostic-quality images for the research community. These datasets were classified before empirical testing of the research objectives. The images in the TCIA collection were acquired as per the standard protocol defined by the American College of Radiology. Patients in the age group of 50-80 years were involved in various (multicenter) clinical trials. The dataset collection has more than 10 billion DICOM images of various anatomies. In this study, the number of samples considered for empirical testing was 300 (n), acquired from both supine and prone positions. The datasets were classified based on the parameters of interest. The classified dataset makes dataset selection easier during empirical testing. The images were validated for data completeness as per the 2020b version of the DICOM standard. A case study of a CT Colonography dataset is discussed. CONCLUSION: With this systematic approach to data collection and classification, analysis becomes easier during empirical testing.

Scale-Space DCE-MRI Radiomics Analysis Based on Gabor Filters for Predicting Breast Cancer Therapy Response

  • Manikis, Georgios C.
  • Venianaki, Maria
  • Skepasianos, Iraklis
  • Papadakis, Georgios Z.
  • Maris, Thomas G.
  • Agelaki, Sofia
  • Karantanas, Apostolos
  • Marias, Kostas
2019 Conference Paper, cited 0 times
Website
Radiomics-based studies have created an unprecedented momentum in computational medical imaging over the last years by significantly advancing and empowering correlational and predictive quantitative studies in numerous clinical applications. An important element of this exciting field of research, especially in oncology, is multi-scale texture analysis, since it can effectively describe tissue heterogeneity, which is highly informative for clinical diagnosis and prognosis. There are, however, several concerns regarding the plethora of radiomics features used in the literature, especially regarding their performance consistency across studies. Since many studies use software packages that yield multi-scale texture features, it makes sense to investigate the scale-space performance of candidate texture biomarkers under the hypothesis that significant texture markers may have a more persistent scale-space performance. To this end, this study proposes a methodology for the extraction of Gabor multi-scale and orientation texture DCE-MRI radiomics for predicting breast cancer complete response to neoadjuvant therapy. More specifically, a Gabor filter bank was created using four different orientations and ten different scales, and then first-order and second-order texture features were extracted for each scale-orientation data representation. The performance of all these features was evaluated under a generalized repeated cross-validation framework in a scale-space fashion using extreme gradient boosting classifiers.
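
A minimal numpy sketch of building such a bank (4 orientations x 10 scales; the parameterization below, with wavelength-proportional sigma and a cosine carrier, is one common convention and not necessarily the authors'):

    import numpy as np

    def gabor_kernel(wavelength, theta, sigma_ratio=0.56, size=31):
        """Real (cosine) Gabor kernel at one scale and orientation."""
        sigma = sigma_ratio * wavelength
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
        yr = -x * np.sin(theta) + y * np.cos(theta)
        envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
        carrier = np.cos(2 * np.pi * xr / wavelength)
        return envelope * carrier

    orientations = [k * np.pi / 4 for k in range(4)]   # 4 orientations
    wavelengths = [2 * 1.4 ** s for s in range(10)]    # 10 scales
    bank = [gabor_kernel(w, t) for w in wavelengths for t in orientations]
    print(len(bank))  # 40 filters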

Lung Cancer Detection using CT Scan Images

  • Makaju, Suren
  • Prasad, PWC
  • Alsadoon, Abeer
  • Singh, AK
  • Elchouemi, A
Procedia Computer Science 2018 Journal Article, cited 5 times
Website

Differentiation of invasive ductal and lobular carcinoma of the breast using MRI radiomic features: a pilot study

  • Maiti, S.
  • Nayak, S.
  • Hebbar, K. D.
  • Pendem, S.
2024 Journal Article, cited 0 times
Website
BACKGROUND: Breast cancer (BC) is one of the main causes of cancer-related mortality among women. For clinical management to help patients survive longer and spend less time on treatment, early and precise cancer identification and differentiation of breast lesions are crucial. The aim was to investigate the accuracy of radiomic features (RF) extracted from dynamic contrast-enhanced Magnetic Resonance Imaging (DCE-MRI) for differentiating invasive ductal carcinoma (IDC) from invasive lobular carcinoma (ILC). METHODS: This is a retrospective study. IDC cases from 30 patients and ILC cases from 28 patients from the Duke breast cancer MRI dataset of The Cancer Imaging Archive (TCIA) were included. RF categories such as shape-based, Gray level dependence matrix (GLDM), Gray level co-occurrence matrix (GLCM), first order, Gray level run length matrix (GLRLM), Gray level size zone matrix (GLSZM), and NGTDM (Neighbouring gray tone difference matrix) were extracted from the DCE-MRI sequence using 3D Slicer. Maximum relevance and minimum redundancy (mRMR) was applied using Google Colab to identify the top fifteen relevant radiomic features. The Mann-Whitney U test was performed to identify significant RF for differentiating IDC and ILC. Receiver Operating Characteristic (ROC) curve analysis was performed to ascertain the accuracy of RF in distinguishing between IDC and ILC. RESULTS: Ten DCE-MRI-based RFs used in our study showed a significant difference (p < 0.001) between IDC and ILC. We noticed that RFs such as GLRLM gray level variance (sensitivity (SN) 97.21%, specificity (SP) 96.2%, area under the curve (AUC) 0.998), GLCM difference average (SN 95.72%, SP 96.34%, AUC 0.983), and GLCM interquartile range (SN 95.24%, SP 97.31%, AUC 0.968) had the strongest ability to differentiate IDC and ILC. CONCLUSIONS: MRI-based RF derived from DCE sequences can be used in clinical settings to differentiate malignant lesions of the breast, such as IDC and ILC, without requiring intrusive procedures.

Metal Artifacts Reduction in CT Scans using Convolutional Neural Network with Ground Truth Elimination

  • Mai, Q.
  • Wan, J. W. L.
Annu Int Conf IEEE Eng Med Biol Soc 2020 Journal Article, cited 0 times
Website
Metal artifacts are very common in CT scans, since metal insertion or replacement is performed to enhance certain functionality or mechanisms of the patient's body. These streak artifacts can degrade CT image quality severely and, consequently, influence a clinician's diagnosis. Many existing supervised learning methods approaching this problem assume the availability of clean image data (images free of metal artifacts) at the part with the metal implant. However, in clinical practice, such clean images do not usually exist, so the existing supervised learning based methods lack the data they need to work clinically. We focus on reducing the streak artifacts in hip scans and propose a convolutional neural network based method that eliminates the need for clean images of the implant region during model training. The idea is to use scans of the parts near the hip for model training. Our method is able to suppress the artifacts in corrupted images, highly improve the image quality, and preserve the details of surrounding tissues, without using any clean hip scans. We apply our method to clinical CT hip scans from multiple patients and obtain artifact-free images with high image quality.

Machine Learning approach to classify and predict different Osteosarcoma types

  • Mahore, Sanket
  • Bhole, Kalyani
  • Rathod, Shashikant
2021 Conference Paper, cited 0 times
Website
Family physicians rarely see malignant bone cancer because it is hard to find, and most of the time bone tumors are benign. It is very time-consuming and complicated for the pathologist to classify Osteosarcoma histopathological images. Osteosarcoma is typically classified into viable, non-viable, and non-tumor classes, but intra-class variation and inter-class similarity make this a complex task. This paper used the Random Forest (RF) machine learning algorithm, which efficiently and accurately classifies Osteosarcoma into the viable, non-viable, and non-tumor classes. The Random Forest method gives a classification accuracy of 92.40%, a sensitivity of 85.44%, and a specificity of 93.38%, with AUC = 0.95.
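
A minimal scikit-learn sketch of a three-class Random Forest setup of the kind described above; the feature matrix here is synthetic, whereas in the paper it would come from histopathology image descriptors:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 40))        # hypothetical image features
    y = rng.integers(0, 3, size=300)      # 0=viable, 1=non-viable, 2=non-tumor

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)   # accuracy per fold
    print(scores.mean())                        # ~0.33 on random labels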

Deep Learning and Domain-Specific Knowledge to Segment the Liver from Synthetic Dual Energy CT Iodine Scans

  • Mahmood, U.
  • Bates, D. D. B.
  • Erdi, Y. E.
  • Mannelli, L.
  • Corrias, G.
  • Kanan, C.
Diagnostics (Basel) 2022 Journal Article, cited 2 times
Website
We map single energy CT (SECT) scans to synthetic dual-energy CT (synth-DECT) material density iodine (MDI) scans using deep learning (DL) and demonstrate their value for liver segmentation. A 2D pix2pix (P2P) network was trained on 100 abdominal DECT scans to infer synth-DECT MDI scans from SECT scans. The source and target domain were paired with DECT monochromatic 70 keV and MDI scans. The trained P2P algorithm then transformed 140 public SECT scans to synth-DECT scans. We split 131 scans into 60% train, 20% tune, and 20% held-out test to train four existing liver segmentation frameworks. The remaining nine low-dose SECT scans tested system generalization. Segmentation accuracy was measured with the dice coefficient (DSC). The DSC per slice was computed to identify sources of error. With synth-DECT (and SECT) scans, an average DSC score of 0.93+/-0.06 (0.89+/-0.01) and 0.89+/-0.01 (0.81+/-0.02) was achieved on the held-out and generalization test sets. Synth-DECT-trained systems required less data to perform as well as SECT-trained systems. Low DSC scores were primarily observed around the scan margin or due to non-liver tissue or distortions within ground-truth annotations. In general, training with synth-DECT scans resulted in improved segmentation performance with less data.

Quality control of radiomic features using 3D-printed CT phantoms

  • Mahmood, U.
  • Apte, A.
  • Kanan, C.
  • Bates, D. D. B.
  • Corrias, G.
  • Manneli, L.
  • Oh, J. H.
  • Erdi, Y. E.
  • Nguyen, J.
  • O'Deasy, J.
  • Shukla-Dave, A.
J Med Imaging (Bellingham) 2021 Journal Article, cited 0 times
Website
Purpose: The lack of standardization in quantitative radiomic measures of tumors seen on computed tomography (CT) scans is generally recognized as an unresolved issue. To develop reliable clinical applications, radiomics must be robust across different CT scan modes, protocols, software, and systems. We demonstrate how custom-designed phantoms, imprinted with human-derived patterns, can provide a straightforward approach to validating longitudinally stable radiomic signature values in a clinical setting. Approach: Described herein is a prototype process to design an anatomically informed 3D-printed radiomic phantom. We used a multimaterial, ultra-high-resolution 3D printer with voxel printing capabilities. Multiple tissue regions of interest (ROIs), from four pancreas tumors, one lung tumor, and a liver background, were extracted from digital imaging and communication in medicine (DICOM) CT exam files and were merged together to develop a multipurpose, circular radiomic phantom (18 cm diameter and 4 cm width). The phantom was scanned 30 times using standard clinical CT protocols to test repeatability. Features that have been found to be prognostic for various diseases were then investigated for their repeatability and reproducibility across different CT scan modes. Results: The structural similarity index between the segment used from the patients' DICOM image and the phantom CT scan was 0.71. The coefficient of variation for all assessed radiomic features was < 1.0% across 30 repeat scans of the phantom. The percent deviation (pDV) from the baseline value, which was the mean feature value determined from repeat scans, increased with the application of the lung convolution kernel, changes to the voxel size, and increases in the image noise. Gray level co-occurrence features, contrast, dissimilarity, and entropy were particularly affected by different scan modes, presenting with pDV > +/- 15%. Conclusions: Previously discovered prognostic and popular radiomic features are variable in practice and need to be interpreted with caution or excluded from clinical implementation. Voxel-based 3D printing can reproduce tissue morphology seen on CT exams. We believe that this is a flexible, yet practical, way to design custom phantoms to validate and compare radiomic metrics longitudinally, over time, and across systems.
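
The two repeatability metrics used above are simple to state; a minimal sketch computing the coefficient of variation across repeat scans and the percent deviation from the baseline (mean) value:

    import numpy as np

    def coefficient_of_variation(values):
        """CV in percent: 100 * std / mean over repeat measurements."""
        values = np.asarray(values, float)
        return 100.0 * values.std(ddof=1) / values.mean()

    def percent_deviation(value, baseline):
        """pDV in percent relative to the baseline (mean of repeat scans)."""
        return 100.0 * (value - baseline) / baseline

    repeats = np.array([10.2, 10.3, 10.25, 10.28])    # a feature over repeat scans
    print(coefficient_of_variation(repeats))          # < 1% -> repeatable
    print(percent_deviation(11.9, repeats.mean()))    # e.g. under a different kernel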

FRoG dose computation meets Monte Carlo accuracy for proton therapy dose calculation in lung

  • Magro, G.
  • Mein, S.
  • Kopp, B.
  • Mastella, E.
  • Pella, A.
  • Ciocca, M.
  • Mairani, A.
Phys Med 2021 Journal Article, cited 0 times
Website
PURPOSE: To benchmark and evaluate the clinical viability of novel analytical GPU-accelerated and CPU-based Monte Carlo (MC) dose-engines for spot-scanning intensity-modulated-proton-therapy (IMPT) towards the improvement of lung cancer treatment. METHODS: Nine patient cases were collected from the CNAO clinical experience and The Cancer Imaging Archive-4D-Lung-Database for in-silico study. All plans were optimized with 2 orthogonal beams in RayStation (RS) v.8. Forward calculations were performed with FRoG, an independent dose calculation system using a fast robust approach to the pencil beam algorithm (PBA), RS-MC (CPU for v.8) and general-purpose MC (gp-MC). Dosimetric benchmarks were acquired via irradiation of a lung-like phantom and ionization chambers for both a single-field-uniform-dose (SFUD) and IMPT plans. Dose-volume-histograms, dose-difference and gamma-analyses were conducted. RESULTS: With respect to reference gp-MC, the average dose to the GTV was 1.8% and 2.3% larger for FRoG and the RS-MC treatment planning system (TPS). FRoG and RS-MC showed a local gamma-passing rate of ~96% and ~93%. Phantom measurements confirmed FRoG's high accuracy with a deviation < 0.1%. CONCLUSIONS: Dose calculation performance using the GPU-accelerated analytical PBA, MC-TPS and gp-MC code were well within clinical tolerances. FRoG predictions were in good agreement with both the full gp-MC and experimental data for proton beams optimized for thoracic dose calculations. GPU-accelerated dose-engines like FRoG may alleviate current issues related to deficiencies in current commercial analytical proton beam models. The novel approach to the PBA implemented in FRoG is suitable for either clinical TPS or as an auxiliary dose-engine to support clinical activity for lung patients.

Automatic Classification of Normal and Cancer Lung CT Images Using Multiscale AM-FM Features

  • Magdy, Eman
  • Zayed, Nourhan
  • Fakhr, Mahmoud
International Journal of Biomedical Imaging 2015 Journal Article, cited 6 times
Website
Computer-aided diagnostic (CAD) systems provide fast and reliable diagnosis for medical images. In this paper, a CAD system is proposed to analyze and automatically segment the lungs and classify each lung as normal or cancerous. Using a lung CT dataset of 70 different patients, Wiener filtering is first applied to the original CT images as a preprocessing step. Secondly, we combine histogram analysis with thresholding and morphological operations to segment the lung regions and extract each lung separately. Thirdly, the Amplitude-Modulation Frequency-Modulation (AM-FM) method is used to extract features from the ROIs. Then, the significant AM-FM features are selected using Partial Least Squares Regression (PLSR) for the classification step. Finally, K-nearest neighbour (KNN), support vector machine (SVM), naive Bayes, and linear classifiers are used with the selected AM-FM features. The performance of each classifier in terms of accuracy, sensitivity, and specificity is evaluated. The results indicate that our proposed CAD system succeeded in differentiating between normal and cancerous lungs and achieved 95% accuracy in the case of the linear classifier.

A Framework Based on Metabolic Networks and Biomedical Images Data to Discriminate Glioma Grades

  • Maddalena, Lucia
  • Granata, Ilaria
  • Manipur, Ichcha
  • Manzo, Mario
  • Guarracino, Mario R.
2021 Conference Paper, cited 0 times
Collecting and integrating information from different data sources is a successful approach to investigating complex biological phenomena and addressing tasks such as disease subtyping, biomarker prediction, and target and mechanism identification. Here, we describe an integrative framework, based on the combination of transcriptomics data, metabolic networks, and magnetic resonance images, to classify different grades of glioma, one of the most common types of primary brain tumors arising from glial cells. The framework is composed of three main blocks for feature sorting, choosing the best number of sorted features, and classification model building. We investigate different methods for each of the blocks, highlighting those that lead to the best results. Our approach demonstrates how the integration of molecular and imaging data achieves better classification performance than using the individual datasets, also comparing results with state-of-the-art competitors. The proposed framework can be considered a starting point for a clinically relevant grading system, and the related software made available lays the foundations for future comparisons.

Harmonizing the pixel size in retrospective computed tomography radiomics studies

  • Mackin, Dennis
  • Fave, Xenia
  • Zhang, Lifei
  • Yang, Jinzhong
  • Jones, A Kyle
  • Ng, Chaan S
PLoS One 2017 Journal Article, cited 19 times
Website

Opportunities and challenges to utilization of quantitative imaging: Report of the AAPM practical big data workshop

  • Mackie, Thomas R
  • Jackson, Edward F
  • Giger, Maryellen
Medical Physics 2018 Journal Article, cited 1 times
Website

LungVISX explaining lung nodule malignancy classification

  • Maas, K.W.H.
2021 Thesis, cited 0 times
Website
Lung cancer is diagnosed through the detection and interpretation of (pulmonary) lung nodules, small masses of tissues, in a patient’s lung. In order to determine a patient’s risk of lung cancer, radiologists assess each of these nodules’ malignancy risk based on their characteristics, such as location, size and shape. The task of lung nodule malignancy classification has been shown to be successfully solved by deep learning models, but these models are still susceptible to over-confident or wrong predictions. It is difficult to understand the reasoning behind these predictions because of the models’ black-box nature. As a result, medical experts lack trust in these models, which hinders the adaptation of the models in practice. This lack of trust of experts can be addressed through the field of explainable AI (XAI) as well as visual analytics (VA). Explainable AI addresses the reasoning about the decisions of a machine learning models through several explainability techniques. Visual analytics, on the other hand, focuses on the transparent communication of the predictions of the model as well through solving complex analysis tasks. We propose LungVISX, a system designed to explain lung nodule malignancy classification by implementing explainability techniques in a visual analytics tool to enable experts to explore and analyze the predictions of a nodule malignancy classification model. We address explainability through a model that incorporates the nodule characteristics in its decisions. Moreover, ensembles, which provide the uncertainty of predictions, and attribution methods, which provide location based information for these predictions, are used to explain the model’s decisions. The visual analytics tool of the system allows for complex analysis of the explanations of the models. A nodule can be compared to its cohort in terms of characteristics and malignancy both for the prediction score and uncertainty. Moreover, detection and analysis of important and uncertain areas of a nodule, related to characteristic and malignancy predictions, can be performed. To our knowledge, no tool has been proposed that provides such an exploration of explainable methods in the context of lung nodule malignancy classification. The value of the proposed system has been assessed based on use cases, model performance and a user study with three radiologists. The use cases explore and illustrate the capabilities of the visual tool. The model performance and model interpretability face a trade-off, as incorporating characteristics predictions in the model led to a lower performance. However, the radiologists evaluated the final system as interpretable and effective, highlighting the potential of the tool for explaining the reasoning of a lung cancer malignancy classification model.

A Deep Supervision CNN Network for Brain Tumor Segmentation

  • Ma, Shiqiang
  • Zhang, Zehua
  • Ding, Jiaqi
  • Li, Xuejian
  • Tang, Jijun
  • Guo, Fei
2021 Book Section, cited 0 times
Brain tumor segmentation is essential for the diagnosis and treatment of brain diseases. However, most current 3D deep learning technologies require a large number of magnetic resonance images (MRIs). In order to make full use of small datasets like BraTS 2020, we propose a deep supervision-based 2D residual U-net for efficient and automatic brain tumor segmentation. In our network, residual blocks are used to alleviate the gradient dispersion caused by excessive network depth, while multiple deep supervision branches are used as regularization of the network; they improve training stability and enable the encoder to extract richer visual features. The CBICA IPP evaluation of the segmentation results verifies the effectiveness of our method. The average Dice scores of ET, WT and TC are 0.7593, 0.8726 and 0.7879, respectively.

SCRDN: Residual dense network with self-calibrated convolutions for low dose CT image denoising

  • Ma, Limin
  • Xue, Hengzhi
  • Yang, Guangtong
  • Zhang, Zitong
  • Li, Chen
  • Yao, Yudong
  • Teng, Yueyang
2023 Journal Article, cited 0 times
Website
Low-dose computed tomography (LDCT) can reduce the X-ray radiation dose that patients receive by up to 86%, which decreases the potential hazards and expands its application scope. However, LDCT images contain a lot of noise and artifacts, which brings great difficulties to doctors' diagnosis. Recently, methods based on deep learning have obtained great success in denoising LDCT images. In this paper, we propose a novel residual dense network with self-calibrated convolutions (SCRDN) for LDCT image denoising. Compared with a basic CNN, SCRDN includes skip connections, dense connections and self-calibrated convolutions instead of traditional convolutions. It makes full use of the hierarchical features of the original images to obtain reconstructed images with more detail. It also obtains a larger receptive field without introducing new parameters. The experimental results show that the proposed method achieves performance improvements over most state-of-the-art methods used in CT denoising.

Automatic lung nodule classification with radiomics approach

  • Ma, Jingchen
  • Wang, Qian
  • Ren, Yacheng
  • Hu, Haibo
  • Zhao, Jun
2016 Conference Proceedings, cited 10 times
Website
Lung cancer is the leading cause of cancer death. Malignant lung nodules have extremely high mortality, while some benign nodules do not need any treatment. Thus, accurate diagnosis of benign versus malignant nodules is necessary. Notably, although an additional invasive biopsy or a second CT scan 3 months later may currently help radiologists to make judgments, easier diagnosis approaches are urgently needed. In this paper, we propose a novel CAD method to distinguish benign and malignant lung cancer from CT images directly, which can not only improve the efficiency of tumor diagnosis but also greatly decrease the pain and risk of patients in the biopsy collecting process. Briefly, following the state-of-the-art radiomics approach, 583 features were used in the first step to measure the nodules' intensity, shape, heterogeneity and multi-frequency information. Further, with the Random Forest method, we distinguish benign from malignant nodules by analyzing all these features. Notably, our proposed scheme was tested on all 79 CT scans with diagnosis data available in The Cancer Imaging Archive (TCIA), which contain 127 nodules, each annotated by at least one of four radiologists participating in the project. Satisfactorily, this method achieved 82.7% accuracy in the classification of malignant primary lung nodules and benign nodules. We believe it would bring much value to routine lung cancer diagnosis in CT imaging and provide improvement in decision support at much lower cost.

Quantitative integration of radiomic and genomic data improves survival prediction of low-grade glioma patients

  • Ma, Chen
  • Yao, Zhihao
  • Zhang, Qinran
  • Zou, Xiufen
Mathematical Biosciences and Engineering 2021 Journal Article, cited 0 times

A Two-Stage Cascade Model with Variational Autoencoders and Attention Gates for MRI Brain Tumor Segmentation

  • Lyu, C.
  • Shu, H.
Brainlesion 2020 Journal Article, cited 0 times
Website
Automatic MRI brain tumor segmentation is of vital importance for disease diagnosis, monitoring, and treatment planning. In this paper, we propose a two-stage encoder-decoder based model for brain tumor subregional segmentation. Variational autoencoder regularization is utilized in both stages to prevent the overfitting issue. The second-stage network adopts attention gates and is trained additionally using an expanded dataset formed by the first-stage outputs. On the BraTS 2020 validation dataset, the proposed method achieves mean Dice scores of 0.9041, 0.8350, and 0.7958, and Hausdorff distances (95%) of 4.953, 6.299, and 23.608 for the whole tumor, tumor core, and enhancing tumor, respectively. The corresponding results on the BraTS 2020 testing dataset are 0.8729, 0.8357, and 0.8205 for Dice score, and 11.4288, 19.9690, and 15.6711 for Hausdorff distance. The code is publicly available at https://github.com/shu-hai/two-stage-VAE-Attention-gate-BraTS2020.

Functional-structural Sub-region Graph Convolutional Network (FSGCN): Application to the Prognosis of Head and Neck Cancer with PET/CT imaging

  • Lv, Wenbing
  • Zhou, Zidong
  • Peng, Junyi
  • Peng, Lihong
  • Lin, Guoyu
  • Wu, Huiqin
  • Xu, Hui
  • Lu, Lijun
Computer Methods and Programs in Biomedicine 2023 Journal Article, cited 0 times
Background and objective: Accurate risk stratification is crucial for enabling personalized treatment for head and neck cancer (HNC). Current PET/CT image-based prognostic methods include radiomics analysis and convolutional neural networks (CNN), but extracting radiomics or deep features in grid Euclidean space has inherent limitations for risk stratification. Here, we propose a functional-structural sub-region graph convolutional network (FSGCN) for accurate risk stratification of HNC. Methods: This study collected 642 patients from 8 different centers in The Cancer Imaging Archive (TCIA); 507 patients from 5 centers were used for training, and 135 patients from 3 centers were used for testing. The tumor was first clustered into multiple sub-regions using PET and CT voxel information, and radiomics features were extracted from each sub-region to characterize its functional and structural information. A graph was then constructed to encode the relationships/differences among different sub-regions in non-Euclidean space for each patient, followed by a residual gated graph convolutional network; a prognostic score was finally generated to predict progression-free survival (PFS). Results: In the testing cohort, compared with the radiomics, FSGCN, or clinical model alone, the model PETCTFea_CTROI + Cli, which integrates the FSGCN prognostic score and clinical parameters, achieved the highest C-index and AUC of 0.767 (95% CI: 0.759-0.774) and 0.781 (95% CI: 0.774-0.788), respectively, for PFS prediction. It also showed good prognostic performance on the secondary endpoints OS, RFS, and MFS in the testing cohort, with C-indices of 0.786 (95% CI: 0.778-0.795), 0.775 (95% CI: 0.767-0.782) and 0.781 (95% CI: 0.772-0.789), respectively. Conclusions: The proposed FSGCN can better capture the metabolic and anatomic differences/interactions among sub-regions of the whole tumor imaged with PET/CT. Extensive multi-center experiments demonstrated its capability and generalization for prognosis prediction in HNC over conventional radiomics analysis.

Context-Aware Saliency Guided Radiomics: Application to Prediction of Outcome and HPV-Status from Multi-Center PET/CT Images of Head and Neck Cancer

  • Lv, W.
  • Xu, H.
  • Han, X.
  • Zhang, H.
  • Ma, J.
  • Rahmim, A.
  • Lu, L.
Cancers (Basel) 2022 Journal Article, cited 6 times
Website
PURPOSE: This multi-center study aims to investigate the prognostic value of context-aware saliency-guided radiomics in (18)F-FDG PET/CT images of head and neck cancer (HNC). METHODS: 806 HNC patients (training vs. validation vs. external testing: 500 vs. 97 vs. 209) from 9 centers were collected from The Cancer Imaging Archive (TCIA). There were 100/384 and 60/123 oropharyngeal carcinoma (OPC) patients with human papillomavirus (HPV) status in training and testing cohorts, respectively. Six types of images were used for radiomics feature extraction and further model construction, namely (i) the original image (Origin), (ii) a context-aware saliency map (SalMap), (iii, iv) high- or low-saliency regions in the original image (highSal or lowSal), (v) a saliency-weighted image (SalxImg), and finally, (vi) a fused PET-CT image (FusedImg). Four outcomes were evaluated, i.e., recurrence-free survival (RFS), metastasis-free survival (MFS), overall survival (OS), and disease-free survival (DFS), respectively. Multivariate Cox analysis and logistic regression were adopted to construct radiomics scores for the prediction of outcome (Rad_Ocm) and HPV-status (Rad_HPV), respectively. Besides, the prognostic value of their integration (Rad_Ocm_HPV) was also investigated. RESULTS: In the external testing cohort, compared with the Origin model, SalMap and SalxImg achieved the highest C-indices for RFS (0.621 vs. 0.559) and MFS (0.785 vs. 0.739) predictions, respectively, while FusedImg performed the best for both OS (0.685 vs. 0.659) and DFS (0.641 vs. 0.582) predictions. In the OPC HPV testing cohort, FusedImg showed higher AUC for HPV-status prediction compared with the Origin model (0.653 vs. 0.484). In the OPC testing cohort, compared with Rad_Ocm or Rad_HPV alone, Rad_Ocm_HPV performed the best for OS and DFS predictions with C-indices of 0.702 (p = 0.002) and 0.684 (p = 0.006), respectively. CONCLUSION: Saliency-guided radiomics showed enhanced performance for both outcome and HPV-status predictions relative to conventional radiomics. The radiomics-predicted HPV status also showed complementary prognostic value.

Cascaded Training Pipeline for 3D Brain Tumor Segmentation

  • Luu, Minh Sao Khue
  • Pavlovskiy, Evgeniy
2022 Conference Paper, cited 0 times
Website
We apply a cascaded training pipeline for the 3D U-Net to segment each brain tumor sub-region separately and sequentially. Firstly, the volumetric data of the four modalities are used to segment the whole tumor in the first round of training. Then, our model combines the whole tumor segmentation with the mpMRI images to segment the tumor core. Finally, the network uses the whole tumor and tumor core segmentations to predict enhancing tumor regions. Unlike the standard 3D U-Net, we use Group Normalization and the Randomized Leaky Rectified Linear Unit in the encoding and decoding blocks. We achieved Dice scores on the validation set of 88.84, 81.97, and 75.02 for whole tumor, tumor core, and enhancing tumor, respectively.

vPSNR: a visualization-aware image fidelity metric tailored for diagnostic imaging

  • Lundström, Claes
International Journal of Computer Assisted Radiology and Surgery 2013 Journal Article, cited 0 times
Website
Purpose Often, the large amounts of data generated in diagnostic imaging cause overload problems for IT systems and radiologists. This entails a need of effective use of data reduction beyond lossless levels, which, in turn, underlines the need to measure and control the image fidelity. Existing image fidelity metrics, however, fail to fully support important requirements from a modern clinical context: support for high-dimensional data, visualization awareness, and independence from the original data. Methods We propose an image fidelity metric, called the visual peak signal-to-noise ratio (vPSNR), fulfilling the three main requirements. A series of image fidelity tests on CT data sets is employed. The impact of visualization transform (grayscale window) on diagnostic quality of irreversibly compressed data sets is evaluated through an observer-based study. In addition, several tests were performed demonstrating the benefits, limitations, and characteristics of vPSNR in different data reduction scenarios. Results The visualization transform has a significant impact on diagnostic quality, and the vPSNR is capable of representing this effect. Moreover, the tests establish that the vPSNR is broadly applicable. Conclusions vPSNR fills a gap not served by existing image fidelity metrics, relevant for the clinical context. While vPSNR alone cannot fulfill all image fidelity needs, it can be a useful complement in a wide range of scenarios.

Radiomics Software Tools: A comparative Analysis on Breast Cancer

  • Luna, Eduardo Almeda
  • Luna, José María
  • Ventura, Sebastián
2023 Conference Paper, cited 0 times
Website
Radiomics is an emerging and promising field that describes visual information from medical images by means of numerical features. Several radiomics software tools are available in the literature, but they return different features and compute them differently. Choosing among these tools is not easy, so a comparison on classification tasks is required. This paper compares three of these frameworks (3D Slicer, LIFEx and MaZda) on breast cancer data. In this analysis, we tested the features extracted by each tool using different pre-processing techniques and machine learning algorithms to classify lesions as benign or malignant on more than 350 records. Two different projections were considered: craniocaudal (183 records) and mediolateral oblique (172 records). The results demonstrated that 3D Slicer obtained the best performance in the craniocaudal projection, while MaZda and LIFEx are more appropriate for the mediolateral oblique projection. The results are highly promising for classification tasks, with F1-scores exceeding 85%.

Evolutionary image simplification for lung nodule classification with convolutional neural networks

  • Lückehe, Daniel
  • von Voigt, Gabriele
International Journal of Computer Assisted Radiology and Surgery 2018 Journal Article, cited 0 times
Website

Study on Prognosis Factors of Non-Small Cell Lung Cancer Based on CT Image Features

  • Lu, Xiaoteng
  • Gong, Jing
  • Nie, Shengdong
2019 Journal Article, cited 0 times
This study aims to investigate prognostic factors of non-small cell lung cancer (NSCLC) based on CT image features and to develop a new quantitative image-feature prognosis approach using CT images. First, lung tumors were segmented and image features were extracted. Second, the Kaplan-Meier method was used for univariate survival analysis, and multivariate survival analysis was carried out with a Cox regression model. Third, the SMOTE algorithm was applied to balance the feature data. Finally, classifiers built with WEKA were used to test the prognostic ability of the independent prognostic factors. Univariate analysis showed that six features had a significant influence on patients' prognosis. After multivariate analysis, angular second moment, srhge and volume were significantly related to the survival of NSCLC patients (P < 0.05). According to the classifier results, these three features provided good prognostic performance for NSCLC; the best classification accuracy was 78.4%. The results of our study suggest that angular second moment, srhge and volume are promising independent prognostic factors for NSCLC.

Mutually aided uncertainty incorporated dual consistency regularization with pseudo label for semi-supervised medical image segmentation

  • Lu, Shanfu
  • Zhang, Zijian
  • Yan, Ziye
  • Wang, Yiran
  • Cheng, Tingting
  • Zhou, Rongrong
  • Yang, Guang
Neurocomputing 2023 Journal Article, cited 0 times
Website
Semi-supervised learning has contributed substantially to advancing computer vision tasks. Especially for medical images, semi-supervised image segmentation can significantly reduce the labor and time cost of labeling images. Among existing semi-supervised methods, pseudo-labelling and consistency regularization prevail; however, current methods still fall short of satisfactory results due to the poor quality of the generated pseudo-labels and the models' limited uncertainty awareness. To address this problem, we propose a novel method that combines pseudo-labelling with dual consistency regularization built on a high capability of uncertainty awareness. The method leverages a cycle-loss regularization to produce a more accurate uncertainty estimate. Following the uncertainty estimation, the certain region with its pseudo-label is further trained in a supervised manner, while the uncertain region is used to promote dual consistency between the student and teacher networks. The developed approach was tested on three public datasets and showed that: 1) the proposed method achieves excellent performance improvement by leveraging unlabeled data; 2) compared with several state-of-the-art (SOTA) semi-supervised segmentation methods, ours achieved better or comparable performance.

Lung-CRNet: A convolutional recurrent neural network for lung 4DCT image registration

  • Lu, J.
  • Jin, R.
  • Song, E.
  • Ma, G.
  • Wang, M.
Med Phys 2021 Journal Article, cited 0 times
Website
PURPOSE: Deformable image registration (DIR) of lung four-dimensional computed tomography (4DCT) plays a vital role in a wide range of clinical applications. Most of the existing deep learning-based lung 4DCT DIR methods focus on pairwise registration which aims to register two images with large deformation. However, the temporal continuities of deformation fields between phases are ignored. This paper proposes a fast and accurate deep learning-based lung 4DCT DIR approach that leverages the temporal component of 4DCT images. METHODS: We present Lung-CRNet, an end-to-end convolutional recurrent registration neural network for lung 4DCT images and reformulate 4DCT DIR as a spatiotemporal sequence predicting problem in which the input is a sequence of three-dimensional computed tomography images from the inspiratory phase to the expiratory phase in a respiratory cycle. The first phase in the sequence is selected as the only reference image and the rest as moving images. Multiple convolutional gated recurrent units (ConvGRUs) are stacked to capture the temporal clues between images. The proposed network is trained in an unsupervised way using a spatial transformer layer. During inference, Lung-CRNet is able to yield the respective displacement field for each reference-moving image pair in the input sequence. RESULTS: We have trained the proposed network using a publicly available lung 4DCT dataset and evaluated performance on the widely used the DIR-Lab dataset. The mean and standard deviation of target registration error are 1.56 +/- 1.05 mm on the DIR-Lab dataset. The computation time for each forward prediction is less than 1 s on average. CONCLUSIONS: The proposed Lung-CRNet is comparable to the existing state-of-the-art deep learning-based 4DCT DIR methods in both accuracy and speed. Additionally, the architecture of Lung-CRNet can be generalized to suit other groupwise registration tasks which align multiple images simultaneously.

A mathematical-descriptor of tumor-mesoscopic-structure from computed-tomography images annotates prognostic- and molecular-phenotypes of epithelial ovarian cancer

  • Lu, Haonan
  • Arshad, Mubarik
  • Thornton, Andrew
  • Avesani, Giacomo
  • Cunnea, Paula
  • Curry, Ed
  • Kanavati, Fahdi
  • Liang, Jack
  • Nixon, Katherine
  • Williams, Sophie T.
  • Hassan, Mona Ali
  • Bowtell, David D. L.
  • Gabra, Hani
  • Fotopoulou, Christina
  • Rockall, Andrea
  • Aboagye, Eric O.
Nature Communications 2019 Journal Article, cited 0 times
Website
The five-year survival rate of epithelial ovarian cancer (EOC) is approximately 35-40% despite maximal treatment efforts, highlighting a need for stratification biomarkers for personalized treatment. Here we extract 657 quantitative mathematical descriptors from the preoperative CT images of 364 EOC patients at their initial presentation. Using machine learning, we derive a non-invasive summary-statistic of the primary ovarian tumor based on 4 descriptors, which we name "Radiomic Prognostic Vector" (RPV). RPV reliably identifies the 5% of patients with median overall survival less than 2 years, significantly improves established prognostic methods, and is validated in two independent, multi-center cohorts. Furthermore, genetic, transcriptomic and proteomic analysis from two independent datasets elucidate that stromal phenotype and DNA damage response pathways are activated in RPV-stratified tumors. RPV and its associated analysis platform could be exploited to guide personalized therapy of EOC and is potentially transferrable to other cancer types.

Machine Learning-Based Radiomics for Molecular Subtyping of Gliomas

  • Lu, Chia-Feng
  • Hsu, Fei-Ting
  • Hsieh, Kevin Li-Chun
  • Kao, Yu-Chieh Jill
  • Cheng, Sho-Jen
  • Hsu, Justin Bo-Kai
  • Tsai, Ping-Huei
  • Chen, Ray-Jade
  • Huang, Chao-Ching
  • Yen, Yun
Clinical Cancer Research 2018 Journal Article, cited 1 times
Website

Evaluating the Interference of Noise when Performing MRI Segmentation

  • Lopez, Marc Moreno
2021 Thesis, cited 0 times
Website

A Probabilistic Model for Segmentation of Ambiguous 3D Lung Nodule

  • Long, Xiaojiang
  • Chen, Wei
  • Wang, Qiuli
  • Zhang, Xiaohong
  • Liu, Chen
  • Li, Yucong
  • Zhang, Jiuquan
2021 Conference Paper, cited 0 times
Website
Many medical images domains suffer from inherent ambiguities. A feasible approach to resolve the ambiguity of lung nodule in the segmentation task is to learn a distribution over segmentations based on a given 2D lung nodule image. Whereas lung nodule with 3D structure contains dense 3D spatial information, which is obviously helpful for resolving the ambiguity of lung nodule, but so far no one has studied it. To this end we propose a probabilistic generative segmentation model consisting of a V-Net and a conditional variational autoencoder. The proposed model obtains the 3D spatial information of lung nodule with V-Net to learn a density model over segmentations. It is capable of efficiently producing multiple plausible semantic lung nodule segmentation hypotheses to assist radiologists in making further diagnosis to resolve the present ambiguity. We evaluate our method on publicly available LIDC-IDRI dataset and achieves a new state-of-the-art result with 0.231±0.005 in D2GED. This result demonstrates the effectiveness and importance of leveraging the 3D spatial information of lung nodule for such problems. Code is available at: https://github.com/jiangjiangxiaolong/PV-Net.

Distant metastasis time to event analysis with CNNs in independent head and neck cancer cohorts

  • Lombardo, Elia
  • Kurz, Christopher
  • Marschner, Sebastian
  • Avanzo, Michele
  • Gagliardi, Vito
  • Fanetti, Giuseppe
  • Franchin, Giovanni
  • Stancanello, Joseph
  • Corradini, Stefanie
  • Niyazi, Maximilian
  • Belka, Claus
  • Parodi, Katia
  • Riboldi, Marco
  • Landry, Guillaume
2021 Journal Article, cited 1 times
Website
Deep learning models based on medical images play an increasingly important role for cancer outcome prediction. The standard approach involves using convolutional neural networks (CNNs) to automatically extract relevant features from the patient's image and perform a binary classification of the occurrence of a given clinical endpoint. In this work, a 2D-CNN and a 3D-CNN for the binary classification of distant metastasis (DM) occurrence in head and neck cancer patients were extended to perform time-to-event analysis. The newly built CNNs incorporate censoring information and output DM-free probability curves as a function of time for every patient. In total, 1037 patients were used to build and assess the performance of the time-to-event model. Training and validation were based on 294 patients also used in a previous benchmark classification study, while for testing 743 patients from three independent cohorts were used. The best network could reproduce the good results from 3-fold cross validation [Harrell's concordance indices (HCIs) of 0.78, 0.74 and 0.80] in two out of three testing cohorts (HCIs of 0.88, 0.67 and 0.77). Additionally, the capability of the models for patient stratification into high- and low-risk groups was investigated, the CNNs being able to significantly stratify all three testing cohorts. Results suggest that image-based deep learning models show good reliability for DM time-to-event analysis and could be used for treatment personalisation.

Brain tumor segmentation using morphological processing and the discrete wavelet transform

  • Lojzim, Joshua Michael
  • Fries, Marcus
Journal of Young Investigators 2017 Journal Article, cited 0 times
Website

Effect of Imaging Parameter Thresholds on MRI Prediction of Neoadjuvant Chemotherapy Response in Breast Cancer Subtypes

  • Lo, Wei-Ching
  • Li, Wen
  • Jones, Ella F
  • Newitt, David C
  • Kornak, John
  • Wilmes, Lisa J
  • Esserman, Laura J
  • Hylton, Nola M
PLoS One 2016 Journal Article, cited 7 times
Website

JOURNAL CLUB: Computer-Aided Detection of Lung Nodules on CT With a Computerized Pulmonary Vessel Suppressed Function

  • Lo, ShihChung B
  • Freedman, Matthew T
  • Gillis, Laura B
  • White, Charles S
  • Mun, Seong K
American Journal of Roentgenology 2018 Journal Article, cited 4 times
Website

A Weighted Voting Ensemble Self-Labeled Algorithm for the Detection of Lung Abnormalities from X-Rays

  • Livieris, Ioannis
  • Kanavos, Andreas
  • Tampakas, Vassilis
  • Pintelas, Panagiotis
Algorithms 2019 Journal Article, cited 0 times
Website
During the last decades, intensive efforts have been devoted to extracting useful knowledge from large volumes of medical data by employing advanced machine learning and data mining techniques. Advances in digital chest radiography have enabled research and medical centers to accumulate large repositories of images, a minority classified (labeled) by human experts and mostly unclassified (unlabeled). Machine learning methods such as semi-supervised learning algorithms have been proposed as a new direction to address the shortage of available labeled data, by exploiting the explicit classification information of labeled data together with the information hidden in the unlabeled data. In the present work, we propose a new ensemble semi-supervised learning algorithm for the classification of lung abnormalities from chest X-rays based on a new weighted voting scheme. The proposed algorithm assigns a vector of weights to each component classifier of the ensemble based on its accuracy on each class. Our numerical experiments illustrate the efficiency of the proposed ensemble methodology against other state-of-the-art classification methods.

Detecting Lung Abnormalities From X-rays Using an Improved SSL Algorithm

  • Livieris, Ioannis
  • Kanavos, Andreas
  • Pintelas, Panagiotis
Electronic Notes in Theoretical Computer Science 2019 Journal Article, cited 0 times

Conventional MR-based Preoperative Nomograms for Prediction of IDH/1p19q Subtype in Low-Grade Glioma

  • Liu, Zhenyin
  • Zhang, Tao
  • Jiang, Hua
  • Xu, Wenchan
  • Zhang, Jing
Academic Radiology 2018 Journal Article, cited 0 times
Website

Robustifying Deep Networks for Medical Image Segmentation

  • Liu, Zheng
  • Zhang, Jinnian
  • Jog, Varun
  • Loh, Po-Ling
  • McMillan, Alan B
J Digit Imaging 2021 Journal Article, cited 0 times
Website
The purpose of this study is to investigate the robustness of a commonly used convolutional neural network for image segmentation with respect to nearly unnoticeable adversarial perturbations, and suggest new methods to make these networks more robust to such perturbations. In this retrospective study, the accuracy of brain tumor segmentation was studied in subjects with low- and high-grade gliomas. Two representative UNets were implemented to segment four different MR series (T1-weighted, post-contrast T1-weighted, T2-weighted, and T2-weighted FLAIR) into four pixelwise labels (Gd-enhancing tumor, peritumoral edema, necrotic and non-enhancing tumor, and background). We developed attack strategies based on the fast gradient sign method (FGSM), iterative FGSM (i-FGSM), and targeted iterative FGSM (ti-FGSM) to produce effective but imperceptible attacks. Additionally, we explored the effectiveness of distillation and adversarial training via data augmentation to counteract these adversarial attacks. Robustness was measured by comparing the Dice coefficients for the attacks using Wilcoxon signed-rank tests. The experimental results show that attacks based on FGSM, i-FGSM, and ti-FGSM were effective in reducing the quality of image segmentation by up to 65% in the Dice coefficient. For attack defenses, distillation performed significantly better than adversarial training approaches. However, all defense approaches performed worse compared to unperturbed test images. Therefore, segmentation networks can be adversely affected by targeted attacks that introduce visually minor (and potentially undetectable) modifications to existing images. With an increasing interest in applying deep learning techniques to medical imaging data, it is important to quantify the ramifications of adversarial inputs (either intentional or unintentional).

Radiogenomics correlation between MR imaging features and mRNA-based subtypes in lower-grade glioma

  • Liu, Zhenyin
  • Zhang, Jing
BMC Neurology 2020 Journal Article, cited 0 times
Website
To investigate associations between lower-grade glioma (LGG) mRNA-based subtypes (R1-R4) and MR features.

Need for Objective Task-Based Evaluation of Image Segmentation Algorithms for Quantitative PET: A Study with ACRIN 6668/RTOG 0235 Multicenter Clinical Trial Data

  • Liu, Z.
  • Mhlanga, J. C.
  • Xia, H.
  • Siegel, B. A.
  • Jha, A. K.
J Nucl Med 2024 Journal Article, cited 0 times
Website
Reliable performance of PET segmentation algorithms on clinically relevant tasks is required for their clinical translation. However, these algorithms are typically evaluated using figures of merit (FoMs) that are not explicitly designed to correlate with clinical task performance. Such FoMs include the Dice similarity coefficient (DSC), the Jaccard similarity coefficient (JSC), and the Hausdorff distance (HD). The objective of this study was to investigate whether evaluating PET segmentation algorithms using these task-agnostic FoMs yields interpretations consistent with evaluation on clinically relevant quantitative tasks. Methods: We conducted a retrospective study to assess the concordance in the evaluation of segmentation algorithms using the DSC, JSC, and HD and on the tasks of estimating the metabolic tumor volume (MTV) and total lesion glycolysis (TLG) of primary tumors from PET images of patients with non-small cell lung cancer. The PET images were collected from the American College of Radiology Imaging Network 6668/Radiation Therapy Oncology Group 0235 multicenter clinical trial data. The study was conducted in 2 contexts: (1) evaluating conventional segmentation algorithms, namely those based on thresholding (SUV(max)40% and SUV(max)50%), boundary detection (Snakes), and stochastic modeling (Markov random field-Gaussian mixture model); (2) evaluating the impact of network depth and loss function on the performance of a state-of-the-art U-net-based segmentation algorithm. Results: Evaluation of conventional segmentation algorithms based on the DSC, JSC, and HD showed that SUV(max)40% significantly outperformed SUV(max)50%. However, SUV(max)40% yielded lower accuracy on the tasks of estimating MTV and TLG, with a 51% and 54% increase, respectively, in the ensemble normalized bias. Similarly, the Markov random field-Gaussian mixture model significantly outperformed Snakes on the basis of the task-agnostic FoMs but yielded a 24% increased bias in estimated MTV. For the U-net-based algorithm, our evaluation showed that although the network depth did not significantly alter the DSC, JSC, and HD values, a deeper network yielded substantially higher accuracy in the estimated MTV and TLG, with a decreased bias of 91% and 87%, respectively. Additionally, whereas there was no significant difference in the DSC, JSC, and HD values for different loss functions, up to a 73% and 58% difference in the bias of the estimated MTV and TLG, respectively, existed. Conclusion: Evaluation of PET segmentation algorithms using task-agnostic FoMs could yield findings discordant with evaluation on clinically relevant quantitative tasks. This study emphasizes the need for objective task-based evaluation of image segmentation algorithms for quantitative PET.

A Bayesian approach to tissue-fraction estimation for oncological PET segmentation

  • Liu, Z.
  • Mhlanga, J. C.
  • Laforest, R.
  • Derenoncourt, P. R.
  • Siegel, B. A.
  • Jha, A. K.
Phys Med Biol 2021 Journal Article, cited 0 times
Website
Tumor segmentation in oncological PET is challenging, a major reason being the partial-volume effects (PVEs) that arise due to low system resolution and finite voxel size. The latter results in tissue-fraction effects (TFEs), i.e. voxels contain a mixture of tissue classes. Conventional segmentation methods are typically designed to assign each image voxel as belonging to a certain tissue class. Thus, these methods are inherently limited in modeling TFEs. To address the challenge of accounting for PVEs, and in particular, TFEs, we propose a Bayesian approach to tissue-fraction estimation for oncological PET segmentation. Specifically, this Bayesian approach estimates the posterior mean of the fractional volume that the tumor occupies within each image voxel. The proposed method, implemented using a deep-learning-based technique, was first evaluated using clinically realistic 2D simulation studies with known ground truth, in the context of segmenting the primary tumor in PET images of patients with lung cancer. The evaluation studies demonstrated that the method accurately estimated the tumor-fraction areas and significantly outperformed widely used conventional PET segmentation methods, including a U-net-based method, on the task of segmenting the tumor. In addition, the proposed method was relatively insensitive to PVEs and yielded reliable tumor segmentation for different clinical-scanner configurations. The method was then evaluated using clinical images of patients with stage IIB/III non-small cell lung cancer from ACRIN 6668/RTOG 0235 multi-center clinical trial. Here, the results showed that the proposed method significantly outperformed all other considered methods and yielded accurate tumor segmentation on patient images with Dice similarity coefficient (DSC) of 0.82 (95% CI: 0.78, 0.86). In particular, the method accurately segmented relatively small tumors, yielding a high DSC of 0.77 for the smallest segmented cross-section of 1.30 cm(2). Overall, this study demonstrates the efficacy of the proposed method to accurately segment tumors in PET images.

Oligodendroglial tumours: subventricular zone involvement and seizure history are associated with CIC mutation status

  • Liu, Zhenyin
  • Liu, Hongsheng
  • Liu, Zhenqing
  • Zhang, Jing
BMC Neurol 2019 Journal Article, cited 1 times
Website
BACKGROUND: CIC-mutant oligodendroglial tumours are linked to better prognosis. We aimed to investigate associations between CIC mutation status, MR characteristics and clinical features. METHODS: Imaging and genomic data from The Cancer Genome Atlas and The Cancer Imaging Archive (TCGA/TCIA) for 59 patients with oligodendroglial tumours were used. Differences between CIC-mutant and CIC wild-type tumours were tested using the Chi-square test and binary logistic regression analysis. RESULTS: In univariate analysis, the clinical variables and MR features, consisting of 3 selected features (subventricular zone [SVZ] involvement, volume and seizure history), were associated with CIC mutation status (all p < 0.05). A multivariate logistic regression analysis identified that seizure history (no vs. yes, odds ratio [OR]: 28.960, 95% confidence interval [CI]: 2.625-319.49, p = 0.006) and SVZ involvement (SVZ- vs. SVZ+, OR: 77.092, p = 0.003; 95% CI: 4.578-1298.334) were associated with a higher incidence of CIC mutation. The nomogram showed good discrimination, with a C-index of 0.906 (95% CI: 0.812-1.000), and was well calibrated. The SVZ- group had increased (SVZ- vs. SVZ+, hazard ratio [HR]: 4.500, p = 0.04; 95% CI: 1.069-18.945) overall survival. CONCLUSIONS: Absence of seizure history and SVZ involvement (-) were associated with a higher incidence of CIC mutation.

Automatic Segmentation of Non-tumor Tissues in Glioma MR Brain Images Using Deformable Registration with Partial Convolutional Networks

  • Liu, Zhongqiang
  • Gu, Dongdong
  • Zhang, Yu
  • Cao, Xiaohuan
  • Xue, Zhong
2021 Book Section, cited 0 times
In brain tumor diagnosis and surgical planning, segmentation of tumor regions and accurate analysis of surrounding normal tissues are necessary for physicians. Pathological variability often makes it difficult to register a well-labeled normal atlas to such images and to automatically segment/label surrounding normal brain tissues. In this paper, we propose a new registration approach that first segments the brain tumor using a U-Net and then simulates the missing normal tissues within the tumor region using a partial convolutional network. A standard normal brain atlas image is then registered onto such tumor-removed images in order to segment/label the normal brain tissues. In this way, our new approach greatly reduces the effects of pathological variability in deformable registration and segments the normal tissues surrounding the brain tumor well. In experiments, we used MICCAI BraTS2018 T1 and FLAIR images to evaluate the proposed algorithm. Compared with direct registration, the proposed algorithm significantly improved the Dice coefficient for gray matter in the surrounding normal brain tissues.

Radiomics-based prediction of survival in patients with head and neck squamous cell carcinoma based on pre- and post-treatment (18)F-PET/CT

  • Liu, Z.
  • Cao, Y.
  • Diao, W.
  • Cheng, Y.
  • Jia, Z.
  • Peng, X.
Aging (Albany NY) 2020 Journal Article, cited 0 times
Website
BACKGROUND: 18-fluorodeoxyglucose positron emission tomography/computed tomography ((18)F-PET/CT) has been widely applied for the imaging of head and neck squamous cell carcinoma (HNSCC). This study examined whether pre- and post-treatment (18)F-PET/CT features can help predict the survival of HNSCC patients. RESULTS: Three radiomics features were identified as prognostic factors. A radiomics score calculated from these features significantly predicted overall survival (OS) and disease-free survival (DFS). The nomograms combining clinicopathological characteristics with pre- or post-treatment features showed better ROC curves and decision curves than the nomogram based only on clinicopathological characteristics. CONCLUSIONS: Combining clinicopathological characteristics with radiomics features of pre-treatment PET/CT, or with post-treatment PET/CT assessment of primary tumor sites as positive or negative, may substantially improve prediction of OS and DFS of HNSCC patients. METHODS: 171 HNSCC patients who received pre-treatment (18)F-PET/CT scans and 154 patients who received post-treatment (18)F-PET/CT scans in The Cancer Imaging Archive (TCIA) were included. Nomograms that combined clinicopathological features with either pre-treatment PET/CT radiomics features or post-treatment assessment of primary tumor sites were constructed using data from 154 HNSCC patients. Receiver operating characteristic (ROC) curves and decision curves were used to compare the predictions of these models with those of a model incorporating only clinicopathological features.

Relationship between Glioblastoma Heterogeneity and Survival Time: An MR Imaging Texture Analysis

  • Liu, Y
  • Xu, X
  • Yin, L
  • Zhang, X
  • Li, L
  • Lu, H
American Journal of Neuroradiology 2017 Journal Article, cited 8 times
Website

3D Isotropic Super-resolution Prostate MRI Using Generative Adversarial Networks and Unpaired Multiplane Slices

  • Liu, Y.
  • Liu, Y.
  • Vanguri, R.
  • Litwiller, D.
  • Liu, M.
  • Hsu, H. Y.
  • Ha, R.
  • Shaish, H.
  • Jambawalikar, S.
J Digit Imaging 2021 Journal Article, cited 0 times
Website
We developed a deep learning-based super-resolution model for prostate MRI. 2D T2-weighted turbo spin echo (T2w-TSE) images are the core anatomical sequences in a multiparametric MRI (mpMRI) protocol. These images have coarse through-plane resolution, are non-isotropic, and have long acquisition times (approximately 10-15 min). The model we developed aims to preserve high-frequency details that are normally lost after 3D reconstruction. We propose a novel framework for generating isotropic volumes using generative adversarial networks (GAN) from anisotropic 2D T2w-TSE and single-shot fast spin echo (ssFSE) images. The CycleGAN model used in this study allows unpaired dataset mapping to reconstruct super-resolution (SR) volumes. Fivefold cross-validation was performed. The improvements from patch-to-volume reconstruction (PVR) to SR are 80.17%, 63.77%, and 186% for perceptual index (PI), RMSE, and SSIM, respectively; the improvements from slice-to-volume reconstruction (SVR) to SR are 72.41%, 17.44%, and 7.5% for PI, RMSE, and SSIM, respectively. Five ssFSE cases were used to test for generalizability; the perceptual quality of SR images surpasses that of the in-plane ssFSE images by 37.5%, with a 3.26% improvement in SSIM and a 7.92% higher RMSE. SR images were also assessed with radiologist Likert scores. Our isotropic SR volumes are able to reproduce high-frequency detail, maintaining comparable image quality to in-plane TSE images in all planes without sacrificing perceptual accuracy. The SR reconstruction networks were also successfully applied to the ssFSE images, demonstrating that high-quality isotropic volumes from ultra-fast acquisitions are feasible.

Cross-Modality Knowledge Transfer for Prostate Segmentation from CT Scans

  • Liu, Yucheng
  • Khosravan, Naji
  • Liu, Yulin
  • Stember, Joseph
  • Shoag, Jonathan
  • Bagci, Ulas
  • Jambawalikar, Sachin
2019 Book Section, cited 0 times

Symmetric-Constrained Irregular Structure Inpainting for Brain MRI Registration with Tumor Pathology

  • Liu, X.
  • Xing, F.
  • Yang, C.
  • Jay Kuo, C. C.
  • El Fakhri, G.
  • Woo, J.
2021 Book Section, cited 0 times
Website
Deformable registration of magnetic resonance images between patients with brain tumors and healthy subjects has been an important tool to specify tumor geometry through location alignment and facilitate pathological analysis. Since tumor region does not match with any ordinary brain tissue, it has been difficult to deformably register a patient's brain to a normal one. Many patient images are associated with irregularly distributed lesions, resulting in further distortion of normal tissue structures and complicating registration's similarity measure. In this work, we follow a multi-step context-aware image inpainting framework to generate synthetic tissue intensities in the tumor region. The coarse image-to-image translation is applied to make a rough inference of the missing parts. Then, a feature-level patch-match refinement module is applied to refine the details by modeling the semantic relevance between patch-wise features. A symmetry constraint reflecting a large degree of anatomical symmetry in the brain is further proposed to achieve better structure understanding. Deformable registration is applied between inpainted patient images and normal brains, and the resulting deformation field is eventually used to deform original patient data for the final alignment. The method was applied to the Multimodal Brain Tumor Segmentation (BraTS) 2018 challenge database and compared against three existing inpainting methods. The proposed method yielded results with increased peak signal-to-noise ratio, structural similarity index, inception score, and reduced L1 error, leading to successful patient-to-normal brain image registration.

Deep unregistered multi-contrast MRI reconstruction

  • Liu, X.
  • Wang, J.
  • Jin, J.
  • Li, M.
  • Tang, F.
  • Crozier, S.
  • Liu, F.
Magn Reson Imaging 2021 Journal Article, cited 0 times
Website
Multiple magnetic resonance images of different contrasts are normally acquired for clinical diagnosis. Recently, research has shown that previously acquired multi-contrast (MC) images of the same patient can be used as an anatomical prior to accelerate magnetic resonance imaging (MRI). However, current MC-MRI networks are based on the assumption that the images are perfectly registered, which is rarely the case in real-world applications. In this paper, we propose an end-to-end deep neural network to reconstruct highly accelerated images by exploiting the shareable information from potentially misaligned reference images of an arbitrary contrast. Specifically, a spatial transformation (ST) module is designed and integrated into the reconstruction network to align the pre-acquired reference images with the images to be reconstructed. The misalignment is further alleviated by maximizing the normalized cross-correlation (NCC) between the MC images. The visualization of feature maps demonstrates that the proposed method effectively reduces the misalignment between the images for shareable information extraction when applied to the publicly available brain datasets. Additionally, the experimental results on these datasets show that the proposed network allows the robust exploitation of shareable information across the misaligned MC images, leading to improved reconstruction results.

A Genetic Polymorphism in CTLA-4 Is Associated with Overall Survival in Sunitinib-Treated Patients with Clear Cell Metastatic Renal Cell Carcinoma

  • Liu, X.
  • Swen, J. J.
  • Diekstra, M. H. M.
  • Boven, E.
  • Castellano, D.
  • Gelderblom, H.
  • Mathijssen, R. H. J.
  • Vermeulen, S. H.
  • Oosterwijk, E.
  • Junker, K.
  • Roessler, M.
  • Alexiusdottir, K.
  • Sverrisdottir, A.
  • Radu, M. T.
  • Ambert, V.
  • Eisen, T.
  • Warren, A.
  • Rodriguez-Antona, C.
  • Garcia-Donas, J.
  • Bohringer, S.
  • Koudijs, K. K. M.
  • Kiemeney, Lalm
  • Rini, B. I.
  • Guchelaar, H. J.
Clin Cancer Res 2018 Journal Article, cited 0 times
Website
Purpose: The survival of patients with clear cell metastatic renal cell carcinoma (cc-mRCC) has improved substantially since the introduction of tyrosine kinase inhibitors (TKI). With the fact that TKIs interact with immune responses, we investigated whether polymorphisms of genes involved in immune checkpoints are related to the clinical outcome of cc-mRCC patients treated with sunitinib as first TKI.Experimental Design: Twenty-seven single-nucleotide polymorphisms (SNP) in CD274 (PD-L1), PDCD1 (PD-1), and CTLA-4 were tested for a possible association with progression-free survival (PFS) and overall survival (OS) in a discovery cohort of 550 sunitinib-treated cc-mRCC patients. SNPs with a significant association (P < 0.05) were tested in an independent validation cohort of 138 sunitinib-treated cc-mRCC patients. Finally, data of the discovery and validation cohort were pooled for meta-analysis.Results:CTLA-4 rs231775 and CD274 rs7866740 showed significant associations with OS in the discovery cohort after correction for age, gender, and Heng prognostic risk group [HR, 0.84; 95% confidence interval (CI), 0.72-0.98; P = 0.028, and HR, 0.73; 95% CI, 0.54-0.99; P = 0.047, respectively]. In the validation cohort, the associations of both SNPs with OS did not meet the significance threshold of P < 0.05. After meta-analysis, CTLA-4 rs231775 showed a significant association with OS (HR, 0.83; 95% CI, 0.72-0.95; P = 0.008). Patients with the GG genotype had longer OS (35.1 months) compared with patients with an AG (30.3 months) or AA genotype (24.3 months). No significant associations with PFS were found.Conclusions: The G-allele of rs231775 in the CTLA-4 gene is associated with an improved OS in sunitinib-treated cc-mRCC patients and could potentially be used as a prognostic biomarker. Clin Cancer Res; 1-7. (c)2018 AACR.

Unsupervised Sparse-View Backprojection via Convolutional and Spatial Transformer Networks

  • Liu, Xueqing
  • Sajda, Paul
Brain Informatics 2023 Book Section, cited 0 times
Imaging technologies heavily rely on tomographic reconstruction, which involves solving a multidimensional inverse problem given a limited number of projections. Building upon our prior research [14], we have ascertained that the integration of the predicted source space derived from electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) can be effectively approached as a backprojection problem involving sensor non-uniformity. Although backprojection is a commonly used algorithm for tomographic reconstruction, it often produces subpar image reconstructions when the projection angles are sparse or the sensor characteristics are non-uniform. To address this issue, various deep learning-based algorithms have been developed to solve the inverse problem and reconstruct images using a reduced number of projections. However, these algorithms typically require ground-truth examples, i.e., reconstructed images, to achieve satisfactory performance. In this paper, we present an unsupervised sparse-view backprojection algorithm that does not rely on ground-truth examples. Our algorithm comprises two modules within a generator-projector framework: a convolutional neural network and a spatial transformer network. We evaluate the effectiveness of our algorithm using computed tomography (CT) images of the human chest. The results demonstrate that our algorithm outperforms filtered backprojection significantly in scenarios with very sparse projection angles or varying sensor characteristics for different angles. Our proposed approach holds practical implications for medical imaging and other imaging modalities (e.g., radar) where sparse and/or non-uniform projections may arise due to time or sampling constraints.

Molecular profiles of tumor contrast enhancement: A radiogenomic analysis in anaplastic gliomas

  • Liu, Xing
  • Li, Yiming
  • Sun, Zhiyan
  • Li, Shaowu
  • Wang, Kai
  • Fan, Xing
  • Liu, Yuqing
  • Wang, Lei
  • Wang, Yinyan
  • Jiang, Tao
Cancer medicine 2018 Journal Article, cited 0 times
Website

A radiomic signature as a non-invasive predictor of progression-free survival in patients with lower-grade gliomas

  • Liu, Xing
  • Li, Yiming
  • Qian, Zenghui
  • Sun, Zhiyan
  • Xu, Kaibin
  • Wang, Kai
  • Liu, Shuai
  • Fan, Xing
  • Li, Shaowu
  • Zhang, Zhong
NeuroImage: Clinical 2018 Journal Article, cited 0 times
Website

A CADe system for nodule detection in thoracic CT images based on artificial neural network

  • Liu, Xinglong
  • Hou, Fei
  • Qin, Hong
  • Hao, Aimin
Science China Information Sciences 2017 Journal Article, cited 11 times
Website

Magnetic resonance perfusion image features uncover an angiogenic subgroup of glioblastoma patients with poor survival and better response to antiangiogenic treatment

  • Liu, Tiffany T.
  • Achrol, Achal S.
  • Mitchell, Lex A.
  • Rodriguez, Scott A.
  • Feroze, Abdullah
  • Michael Iv
  • Kim, Christine
  • Chaudhary, Navjot
  • Gevaert, Olivier
  • Stuart, Josh M.
  • Harsh, Griffith R.
  • Chang, Steven D.
  • Rubin, Daniel L.
2016 Journal Article, cited 15 times
Website
Background. In previous clinical trials, antiangiogenic therapies such as bevacizumab did not show efficacy in patients with newly diagnosed glioblastoma (GBM). This may be a result of the heterogeneity of GBM, which has a variety of imaging-based phenotypes and gene expression patterns. In this study, we sought to identify a phenotypic subtype of GBM patients who have distinct tumor-image features and molecular activities and who may benefit from antiangiogenic therapies.Methods. Quantitative image features characterizing subregions of tumors and the whole tumor were extracted from preoperative and pretherapy perfusion magnetic resonance (MR) images of 117 GBM patients in 2 independent cohorts. Unsupervised consensus clustering was performed to identify robust clusters of GBM in each cohort. Cox survival and gene set enrichment analyses were conducted to characterize the clinical significance and molecular pathway activities of the clusters. The differential treatment efficacy of antiangiogenic therapy between the clusters was evaluated.Results. A subgroup of patients with elevated perfusion features was identified and was significantly associated with poor patient survival after accounting for other clinical covariates (P values <.01; hazard ratios > 3) consistently found in both cohorts. Angiogenesis and hypoxia pathways were enriched in this subgroup of patients, suggesting the potential efficacy of antiangiogenic therapy. Patients of the angiogenic subgroups pooled from both cohorts, who had chemotherapy information available, had significantly longer survival when treated with antiangiogenic therapy (log-rank P=.022).Conclusions. Our findings suggest that an angiogenic subtype of GBM patients may benefit from antiangiogenic therapy with improved overall survival.

Computational Identification of Tumor Anatomic Location Associated with Survival in 2 Large Cohorts of Human Primary Glioblastomas

  • Liu, T T
  • Achrol, A S
  • Mitchell, L A
  • Du, W A
  • Loya, J J
  • Rodriguez, S A
  • Feroze, A
  • Westbroek, E M
  • Yeom, K W
  • Stuart, J M
  • Chang, S D
  • Harsh, G R 4th
  • Rubin, D L
American Journal of Neuroradiology 2016 Journal Article, cited 6 times
Website
BACKGROUND AND PURPOSE: Tumor location has been shown to be a significant prognostic factor in patients with glioblastoma. The purpose of this study was to characterize glioblastoma lesions by identifying MR imaging voxel-based tumor location features that are associated with tumor molecular profiles, patient characteristics, and clinical outcomes. MATERIALS AND METHODS: Preoperative T1 anatomic MR images of 384 patients with glioblastomas were obtained from 2 independent cohorts (n = 253 from the Stanford University Medical Center for training and n = 131 from The Cancer Genome Atlas for validation). An automated computational image-analysis pipeline was developed to determine the anatomic locations of tumor in each patient. Voxel-based differences in tumor location between good (overall survival of >17 months) and poor (overall survival of <11 months) survival groups identified in the training cohort were used to classify patients in The Cancer Genome Atlas cohort into 2 brain-location groups, for which clinical features, messenger RNA expression, and copy number changes were compared to elucidate the biologic basis of tumors located in different brain regions. RESULTS: Tumors in the right occipitotemporal periventricular white matter were significantly associated with poor survival in both training and test cohorts (both, log-rank P < .05) and had larger tumor volume compared with tumors in other locations. Tumors in the right periatrial location were associated with hypoxia pathway enrichment and PDGFRA amplification, making them potential targets for subgroup-specific therapies. CONCLUSIONS: Voxel-based location in glioblastoma is associated with patient outcome and may have a potential role for guiding personalized treatment.

Improving Brain Tumor Segmentation with Multi-direction Fusion and Fine Class Prediction

  • Liu, Sun’ao
  • Guo, Xiaonan
2020 Book Section, cited 0 times
Convolutional neural networks have been broadly used for medical image analysis. Due to its characteristics, segmentation of glioma is considered one of the most challenging tasks. In this paper, we propose a novel Multi-direction Fusion Network (MFNet) for brain tumor segmentation with 3D multimodal MRI data. Unlike conventional 3D networks, the feature-extracting process is decomposed and fused in the proposed network. Furthermore, we design an additional task called Fine Class Prediction to reinforce the encoder and prevent over-segmentation. The proposed method finally obtains Dice scores of 0.81796, 0.8227, and 0.88459 for enhancing tumor, tumor core and whole tumor, respectively, on the BraTS 2019 test set.

The impact of variance in carnitine palmitoyltransferase-1 expression on breast cancer prognosis is stratified by clinical and anthropometric factors

  • Liu, R.
  • Ospanova, S.
  • Perry, R. J.
PLoS One 2023 Journal Article, cited 0 times
Website
CPT1A is a rate-limiting enzyme in fatty acid oxidation and is upregulated in high-risk breast cancer. The relationship of obesity and menopausal status with breast cancer prognosis is well established, but their connection with fatty acid metabolism is not. We utilized RNA sequencing data in the Xena Functional Genomics Explorer to explore CPT1A's effect on breast cancer patients' survival probability. Using [18F]-fluorothymidine positron emission tomography-computed tomography images from The Cancer Imaging Archive, we segmented these analyses by obesity and menopausal status. In 1214 patients, higher CPT1A expression is associated with lower breast cancer survivability. We confirmed a previously observed protective relationship between obesity and breast cancer in pre-menopausal patients and supported this finding using two-sided Pearson correlations. Taken together, these analyses using open-access databases bolster the potential role of CPT1A-dependent fatty acid metabolism as a pathogenic factor in breast cancer.

LSW-Net: A Learning Scattering Wavelet Network for Brain Tumor and Retinal Image Segmentation

  • Liu, Ruihua
  • Nan, Haoyu
  • Zou, Yangyang
  • Xie, Ting
  • Ye, Zhiyong
2022 Journal Article, cited 0 times
Website
Convolutional network models have been widely used in image segmentation. However, medical images contain many types of boundary and contour features that seriously affect the stability and accuracy of image segmentation models, such as the ambiguity of tumors, the variability of lesions, and the weak boundaries of fine blood vessels. In this paper, to solve these problems, we first introduce the dual-tree complex wavelet scattering transform module, and then innovatively propose a learning scattering wavelet network model. In addition, a new improved active contour loss function is constructed to deal with complex segmentation. Finally, the equilibrium coefficient of our model is discussed. Experiments on the BraTS2020 dataset show that the LSW-Net model improves the Dice coefficient, accuracy, and sensitivity over the classic FCN, SegNet, and At-Unet models by at least 3.51%, 2.11%, and 0.46%, respectively. In addition, the LSW-Net model still has an advantage in the average Dice coefficient compared with some advanced segmentation models. Experiments on the DRIVE dataset prove that our model outperforms the other 14 algorithms in both Dice coefficient and specificity. In particular, the sensitivity of our model provides a 3.39% improvement over the Unet model, and the model's advantage is evident.

Synthetic minority image over-sampling technique: How to improve AUC for glioblastoma patient survival prediction

  • Liu, Renhao
  • Hall, Lawrence O.
  • Bowyer, Kevin W.
  • Goldgof, Dmitry B.
  • Gatenby, Robert
  • Ben Ahmed, Kaoutar
2017 Conference Proceedings, cited 3 times
Website
Real-world datasets are often imbalanced, with an important class having many fewer examples than other classes. In medical data, normal examples typically greatly outnumber disease examples. A classifier learned from imbalanced data will tend to be very good at predicting examples in the larger (normal) class, yet the smaller (disease) class is typically of more interest. Imbalance is usually dealt with at the feature-vector level (by creating synthetic feature vectors or discarding some examples from the larger class) or by assigning differential costs to errors. Here, we introduce a novel method for over-sampling minority-class examples at the image level, rather than the feature-vector level. Our method was applied to the problem of Glioblastoma patient survival group prediction. Synthetic minority-class examples were created by adding Gaussian noise to original medical images from the minority class. Uniform local binary pattern (LBP) histogram features were then extracted from the original and synthetic image examples and classified with a random forests classifier. Experimental results show the new method (Image SMOTE) increased minority-class predictive accuracy and also the AUC (area under the receiver operating characteristic curve), compared to using the imbalanced dataset directly or creating synthetic feature vectors.

Deep learning for magnetic resonance imaging-genomic mapping of invasive breast carcinoma

  • Liu, Qian
2019 Thesis, cited 0 times
Website
To identify MRI-based radiomic features that can be obtained automatically by a deep learning (DL) model and can predict the clinical characteristics of breast cancer (BC), and to explain the potential underlying genomic mechanisms of the predictive radiomic features. A denoising autoencoder (DA) was developed to retrospectively extract 4,096 phenotypes from the MRI of 110 BC patients collected by The Cancer Imaging Archive (TCIA). The associations of these phenotypes with genomic features (commercialized gene signatures, expression of risk genes, and biological pathway activities extracted from the same patients' mRNA expression collected by The Cancer Genome Atlas (TCGA)) were tested with linear mixed effect (LME) models. A least absolute shrinkage and selection operator (LASSO) model was used to identify the most predictive MRI phenotypes for each clinical phenotype (tumor size (T), lymph node metastasis (N), and status of estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2)). More than 1,000 of the 4,096 MRI phenotypes were associated with the activities of risk genes, gene signatures, and biological pathways (adjusted P-value < 0.05). High performance was obtained in predicting the status of T, N, ER, PR, and HER2 (AUC > 0.9). The identified MRI phenotypes also showed significant power to stratify BC tumors. The DL-based automatic MRI features performed very well in predicting the clinical characteristics of BC, and these phenotypes were found to have genomic significance.

Deep learning to distinguish pancreatic cancer tissue from non-cancerous pancreatic tissue: a retrospective study with cross-racial external validation

  • Liu, K. L.
  • Wu, T.
  • Chen, P. T.
  • Tsai, Y. M.
  • Roth, H.
  • Wu, M. S.
  • Liao, W. C.
  • Wang, W.
2020 Journal Article, cited 141 times
Website
BACKGROUND: The diagnostic performance of CT for pancreatic cancer is interpreter-dependent, and approximately 40% of tumours smaller than 2 cm evade detection. Convolutional neural networks (CNNs) have shown promise in image analysis, but the networks' potential for pancreatic cancer detection and diagnosis is unclear. We aimed to investigate whether CNN could distinguish individuals with and without pancreatic cancer on CT, compared with radiologist interpretation. METHODS: In this retrospective, diagnostic study, contrast-enhanced CT images of 370 patients with pancreatic cancer and 320 controls from a Taiwanese centre were manually labelled and randomly divided for training and validation (295 patients with pancreatic cancer and 256 controls) and testing (75 patients with pancreatic cancer and 64 controls; local test set 1). Images were preprocessed into patches, and a CNN was trained to classify patches as cancerous or non-cancerous. Individuals were classified as with or without pancreatic cancer on the basis of the proportion of patches diagnosed as cancerous by the CNN, using a cutoff determined using the training and validation set. The CNN was further tested with another local test set (101 patients with pancreatic cancers and 88 controls; local test set 2) and a US dataset (281 pancreatic cancers and 82 controls). Radiologist reports of pancreatic cancer images in the local test sets were retrieved for comparison. FINDINGS: Between Jan 1, 2006, and Dec 31, 2018, we obtained CT images. In local test set 1, CNN-based analysis had a sensitivity of 0.973, specificity of 1.000, and accuracy of 0.986 (area under the curve [AUC] 0.997 (95% CI 0.992-1.000). In local test set 2, CNN-based analysis had a sensitivity of 0.990, specificity of 0.989, and accuracy of 0.989 (AUC 0.999 [0.998-1.000]). In the US test set, CNN-based analysis had a sensitivity of 0.790, specificity of 0.976, and accuracy of 0.832 (AUC 0.920 [0.891-0.948)]. CNN-based analysis achieved higher sensitivity than radiologists did (0.983 vs 0.929, difference 0.054 [95% CI 0.011-0.098]; p=0.014) in the two local test sets combined. CNN missed three (1.7%) of 176 pancreatic cancers (1.1-1.2 cm). Radiologists missed 12 (7%) of 168 pancreatic cancers (1.0-3.3 cm), of which 11 (92%) were correctly classified using CNN. The sensitivity of CNN for tumours smaller than 2 cm was 92.1% in the local test sets and 63.1% in the US test set. INTERPRETATION: CNN could accurately distinguish pancreatic cancer on CT, with acceptable generalisability to images of patients from various races and ethnicities. CNN could supplement radiologist interpretation. FUNDING: Taiwan Ministry of Science and Technology.

A Postoperative Displacement Measurement Method for Femoral Neck Fracture Internal Fixation Implants Based on Femoral Segmentation and Multi-Resolution Frame Registration

  • Liu, Kaifeng
  • Nagamune, Kouki
  • Oe, Keisuke
  • Kuroda, Ryosuke
  • Niikura, Takahiro
Symmetry 2021 Journal Article, cited 0 times
Website
Femoral neck fractures have a high incidence in the geriatric population and are associated with high mortality and disability rates. Owing to its minimally invasive nature, internal fixation is widely used as a treatment option to stabilize femoral neck fractures. The fixation effectiveness and stability of the implant are an essential guide for the surgeon. However, there is no long-term reliable evaluation method to quantify the implant's fixation effect without affecting the patient's behavior and synthesizing long-term treatment data. Exploiting the femur's symmetrical structure, this study used 3D convolutional networks for biomedical image segmentation (3D-UNet) to segment the injured femur as a mask, aligned computerized tomography (CT) scans of the patient at different times after surgery, and quantified the displacement in the specified direction using the generated 3D point cloud. In the experimental part, we used 10 groups, each containing two CT images scanned at a one-year interval after surgery. By comparing manual segmentation of the femur with segmentation of the femur as a mask using the neural network, the mask obtained by the 3D-UNet network with symmetric structure fully meets the requirements of image registration. The data obtained from the 3D point cloud calculation are within the error tolerance, and the calculated displacement of the implant can be visualized in 3D space.

Image Classification Algorithm Based on Deep Learning-Kernel Function

  • Liu, Jun-e
  • An, Feng-Ping
Scientific Programming 2020 Journal Article, cited 11 times
Website
Although the existing traditional image classification methods have been widely applied to practical problems, they exhibit several issues in application, such as unsatisfactory effects, low classification accuracy, and weak adaptive ability. These methods separate image feature extraction and classification into two steps. A deep learning model, by contrast, has a powerful learning ability that integrates feature extraction and classification into a whole, which can effectively improve image classification accuracy. However, this approach has the following problems in application: first, it is difficult to effectively approximate the complex functions in the deep learning model; second, the classifier that comes with the deep learning model has low accuracy. This paper therefore introduces the idea of sparse representation into the architecture of the deep learning network, comprehensively utilizing sparse representation's strong ability to linearly decompose multidimensional data and the deep structural advantages of multilayer nonlinear mapping to complete the complex function approximation in the deep learning model. A sparse representation classification method based on an optimized kernel function is proposed to replace the classifier in the deep learning model, thereby improving the image classification effect. Accordingly, this paper proposes an image classification algorithm based on a stacked sparse coding deep learning model with optimized-kernel-function nonnegative sparse representation. The experimental results show that the proposed method not only has higher average accuracy than other mainstream methods but also adapts well to various image databases. Compared with other deep learning methods, it can better solve the problems of complex function approximation and poor classifier effect, thus further improving image classification accuracy.

AI-Driven Robust Kidney and Renal Mass Segmentation and Classification on 3D CT Images

  • Liu, Jingya
  • Yildirim, Onur
  • Akin, Oguz
  • Tian, Yingli
2023 Journal Article, cited 0 times
Website
Early intervention in kidney cancer helps to improve survival rates. Abdominal computed tomography (CT) is often used to diagnose renal masses. In clinical practice, the manual segmentation and quantification of organs and tumors are expensive and time-consuming. Artificial intelligence (AI) has shown a significant advantage in assisting cancer diagnosis. To reduce the workload of manual segmentation and avoid unnecessary biopsies or surgeries, in this paper, we propose a novel end-to-end AI-driven automatic kidney and renal mass diagnosis framework to identify the abnormal areas of the kidney and diagnose the histological subtypes of renal cell carcinoma (RCC). The proposed framework first segments the kidney and renal mass regions by a 3D deep learning architecture (Res-UNet), followed by a dual-path classification network utilizing local and global features for the subtype prediction of the most common RCCs: clear cell, chromophobe, oncocytoma, papillary, and other RCC subtypes. To improve the robustness of the proposed framework on the dataset collected from various institutions, a weakly supervised learning schema is proposed to leverage the domain gap between various vendors via very few CT slice annotations. Our proposed diagnosis system can accurately segment the kidney and renal mass regions and predict tumor subtypes, outperforming existing methods on the KiTs19 dataset. Furthermore, cross-dataset validation results demonstrate the robustness of datasets collected from different institutions trained via the weakly supervised learning schema.

Multi-subtype classification model for non-small cell lung cancer based on radiomics: SLS model

  • Liu, J.
  • Cui, J.
  • Liu, F.
  • Yuan, Y.
  • Guo, F.
  • Zhang, G.
Med Phys 2019 Journal Article, cited 0 times
Website
PURPOSE: Histological subtypes of non-small cell lung cancer (NSCLC) are crucial for systematic treatment decisions. However, current studies that use non-invasive radiomic methods to classify NSCLC histology subtypes have mainly focused on the two main subtypes, squamous cell carcinoma (SCC) and adenocarcinoma (ADC), while multi-subtype classifications that also include the other two subtypes of NSCLC, large cell carcinoma (LCC) and not otherwise specified (NOS), have been rare. The aim of this work is to establish a multi-subtype classification model for the four main subtypes of NSCLC and to improve classification performance and generalization ability compared with previous studies. METHODS: In this work, we extracted 1029 features from regions of interest in computed tomography (CT) images of 349 patients from two different datasets using radiomic methods. Based on the 'three-in-one' concept, we proposed a model called SLS, wrapping three algorithms (synthetic minority oversampling technique, l2,1-norm minimization, and support vector machines) into one hybrid technique to classify the four main subtypes of NSCLC: SCC, ADC, LCC and NOS, which covers the whole range of NSCLC. RESULTS: We analyzed the 247 features obtained by dimension reduction and found that the features extracted by three methods, first-order statistics, gray level co-occurrence matrix, and gray level size zone matrix, were most conducive to the classification of NSCLC subtypes. The proposed SLS model achieved an average classification accuracy of 0.89 on the training set (95% confidence interval [CI]: 0.846 to 0.912) and a classification accuracy of 0.86 on the test set (95% CI: 0.779 to 0.941). CONCLUSIONS: The experimental results show that the subtypes of NSCLC can be well classified by the radiomic method. Our SLS model can accurately classify and diagnose the four subtypes of NSCLC based on CT images, and thus has the potential to be used in clinical practice to provide valuable information for lung cancer treatment and further promote personalized medicine.
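
A rough sketch of the 'three-in-one' idea under stated assumptions: imblearn's SMOTE for oversampling plus an RBF SVM, with a generic univariate selector standing in for the paper's l2,1-norm feature selection (which has no scikit-learn equivalent). The feature matrix and labels are hypothetical.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X = np.random.rand(349, 1029)        # 349 patients x 1029 radiomic features
y = np.random.randint(0, 4, 349)     # SCC / ADC / LCC / NOS

clf = Pipeline([
    ("smote", SMOTE(random_state=0)),           # balance minority subtypes
    ("select", SelectKBest(f_classif, k=247)),  # reduce to 247 features
    ("svm", SVC(kernel="rbf")),                 # multi-class SVM
])
print(cross_val_score(clf, X, y, cv=5).mean())
```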

Deep learning infers clinically relevant protein levels and drug response in breast cancer from unannotated pathology images

  • Liu, H.
  • Xie, X.
  • Wang, B.
npj Breast Cancer 2024 Journal Article, cited 0 times
Website
Computational pathology has been demonstrated to effectively uncover tumor-related genomic alterations and transcriptomic patterns. Although proteomics has shown great potential in precision medicine, few studies have focused on the computational prediction of protein levels from pathology images. In this paper, we assume that deep learning-based pathological features imply the protein levels of tumor biomarkers that are indicative of prognosis and drug response. For this purpose, we propose wsi2rppa, a weakly supervised contrastive learning framework to infer the protein levels of tumor biomarkers from whole slide images (WSIs) in breast cancer. We first conducted contrastive learning-based pre-training on tessellated tiles to extract pathological features, which are then aggregated by attention pooling and adapted to downstream tasks. We conducted extensive evaluation experiments on the TCGA-BRCA cohort (1978 WSIs of 1093 patients with protein levels of 223 biomarkers) and the CPTAC-BRCA cohort (642 WSIs of 134 patients). The results showed that our method achieved state-of-the-art performance in tumor diagnostic tasks and also performed well in predicting clinically relevant protein levels and drug response. To show the model's interpretability, we spatially visualized the WSIs, coloring the tiles by their attention scores, and found that the regions with high scores were highly consistent with the tumor and necrotic regions annotated by a pathologist with 10 years of experience. Moreover, spatial transcriptomic data further verified that the heatmap generated by attention scores agrees closely with the spatial expression landscape of two typical tumor biomarker genes. In predicting the response to trastuzumab treatment, our method achieved an AUC of 0.79, much higher than the 0.68 reported by a previous study. These findings show the remarkable potential of computational pathology in the prediction of clinically relevant protein levels, drug response, and clinical outcomes.
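
A minimal sketch of attention-based aggregation of tile features into a slide-level representation, in the spirit of the attention pooling described above; the exact architecture and dimensions are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    def __init__(self, dim=512, hidden=128):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, 1))

    def forward(self, tiles):                        # tiles: (n_tiles, dim)
        a = torch.softmax(self.score(tiles), dim=0)  # per-tile attention weights
        return (a * tiles).sum(dim=0), a             # slide embedding + heatmap scores

pool = AttentionPool()
slide_vec, attn = pool(torch.randn(1000, 512))       # 1000 tile embeddings
print(slide_vec.shape, attn.shape)                   # [512], [1000, 1]
```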

DL-MRI: A Unified Framework of Deep Learning-Based MRI Super Resolution

  • Liu, Huanyu
  • Liu, Jiaqi
  • Li, Junbao
  • Pan, Jeng-Shyang
  • Yu, Xiaqiong
  • Lu, Hao Chun
Journal of Healthcare Engineering 2021 Journal Article, cited 0 times
Website
Magnetic resonance imaging (MRI) is widely used in the detection and diagnosis of diseases. High-resolution MR images help doctors locate lesions and diagnose diseases. However, the acquisition of high-resolution MR images requires high magnetic field intensity and long scanning time, which brings discomfort to patients and easily introduces motion artifacts, resulting in image quality degradation. The resolution achievable by hardware imaging has therefore reached its limit. Given this situation, a unified framework based on deep learning super resolution is proposed to transfer state-of-the-art deep learning methods from natural images to MRI super resolution. Compared with traditional image super-resolution methods, deep learning super-resolution methods have stronger feature extraction and characterization ability, can learn prior knowledge from a large number of sample data, and deliver a more stable and excellent image reconstruction effect. We propose a unified framework of deep learning-based MRI super resolution, which incorporates five current deep learning methods with the best super-resolution effect. In addition, a high-low resolution MR image dataset with scales of ×2, ×3, and ×4 was constructed, covering four body parts: the skull, knee, breast, and head and neck. Experimental results show that the proposed unified framework of deep learning super resolution achieves a better reconstruction effect on the data than traditional methods and provides a standard dataset and experimental benchmark for the application of deep learning super resolution to MR images.

Machine Learning Models on Prognostic Outcome Prediction for Cancer Images with Multiple Modalities

  • Liu, Gengbo
2019 Thesis, cited 0 times
Website
Machine learning algorithms have been applied to predict different prognostic outcomes for many different diseases directly from medical images. However, the higher resolution of various medical imaging modalities and new imaging feature extraction frameworks bring new challenges for predicting prognostic outcomes. Compared to traditional radiology practice, which is based only on visual interpretation and simple quantitative measurements, medical imaging features can dig deeper within medical images and potentially provide further objective support for clinical decisions. In this dissertation, we cover three projects that apply or design machine learning models to predict prognostic outcomes using various types of medical images.

Accelerated brain tumor dynamic contrast‐enhanced MRI using Adaptive Pharmaco‐Kinetic Model Constrained method

  • Liu, Fan
  • Li, Dongxiao
  • Jin, Xinyu
  • Qiu, Wenyuan
International Journal of Imaging Systems and Technology 2021 Journal Article, cited 0 times
In brain tumor dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), spatiotemporally resolved high-quality reconstruction is required for quantitative analysis of physiological characteristics of brain tissue. By exploiting sparsity priors, compressed sensing methods can achieve high spatiotemporal DCE-MRI image reconstruction from undersampled k-space data. Recently, pharmacokinetic (PK) models have been explored as prior information about contrast agent (CA) concentration dynamics for undersampled DCE-MRI reconstruction. This paper presents a novel dictionary learning-based reconstruction method with Adaptive Pharmaco-Kinetic Model Constraints (APKMC). In APKMC, the prior knowledge about CA dynamics is incorporated into a novel dictionary, which consists of PK model-based atoms and adaptive atoms. The PK atoms are constructed based on the Patlak model and the K-SVD dimension reduction algorithm, and the adaptive atoms are used to resolve PK model inconsistencies. To solve APKMC, an optimization algorithm based on variable splitting and alternating iterative optimization is presented. The proposed method has been validated on three brain tumor DCE-MRI data sets by comparison with two state-of-the-art methods. As demonstrated by quantitative and qualitative analysis of the results, APKMC achieved substantially better quality in the reconstruction of brain DCE-MRI images, as well as in the reconstruction of PK model parameter maps.
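
A small sketch of the Patlak model on which the PK dictionary atoms are based: Ct(t) = Ktrans * integral of Cp from 0 to t, plus vp * Cp(t). The arterial input function here is a toy curve; real work would use a measured or population AIF.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

t = np.linspace(0, 300, 301)                # seconds
cp = 5.0 * t * np.exp(-t / 60.0) / 60.0     # toy arterial input function

def patlak(cp, t, ktrans, vp):
    integral = cumulative_trapezoid(cp, t, initial=0.0)
    return ktrans * integral + vp * cp      # tissue CA concentration curve

ct = patlak(cp, t, ktrans=0.05, vp=0.03)    # illustrative parameter values
print(ct[:5])
```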

The Current Role of Image Compression Standards in Medical Imaging

  • Liu, Feng
  • Hernandez-Cabronero, Miguel
  • Sanchez, Victor
  • Marcellin, Michael W
  • Bilgin, Ali
Information 2017 Journal Article, cited 4 times
Website

Normalized Euclidean Super-Pixels for Medical Image Segmentation

  • Liu, Feihong
  • Feng, Jun
  • Su, Wenhuo
  • Lv, Zhaohui
  • Xiao, Fang
  • Qiu, Shi
2017 Conference Proceedings, cited 0 times
Website
We propose a super-pixel segmentation algorithm based on normalized Euclidean distance for handling the uncertainty and complexity in medical images. Benefiting from statistical characteristics, compactness within super-pixels is described by normalized Euclidean distance. Our algorithm eliminates the balance factor of the Simple Linear Iterative Clustering framework. In this way, it responds properly to lesion tissues, such as tiny lung nodules, that differ only slightly in luminance from their neighbors. The effectiveness of the proposed algorithm is verified on The Cancer Imaging Archive (TCIA) database. Compared with Simple Linear Iterative Clustering (SLIC) and Linear Spectral Clustering (LSC), the experimental results show that the proposed algorithm achieves competitive performance relative to state-of-the-art super-pixel segmentation.
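
For orientation, a sketch of the baseline the paper modifies: skimage's SLIC, whose compactness argument is exactly the balance factor the proposed normalized-Euclidean variant removes. The input slice is a hypothetical stand-in.

```python
import numpy as np
from skimage.segmentation import slic

ct_slice = np.random.rand(512, 512)          # stand-in for a CT slice
labels = slic(ct_slice, n_segments=2000, compactness=0.1,
              channel_axis=None)             # grayscale image
print(labels.max() + 1, "super-pixels")
```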

SGEResU-Net for brain tumor segmentation

  • Liu, D.
  • Sheng, N.
  • He, T.
  • Wang, W.
  • Zhang, J.
  • Zhang, J.
Math Biosci Eng 2022 Journal Article, cited 0 times
Website
The precise segmentation of tumor regions plays a pivotal role in the diagnosis and treatment of brain tumors. However, due to the variable location, size, and shape of brain tumors, automatic segmentation of brain tumors remains a challenging application. Recently, U-Net-related methods, which largely improve the segmentation accuracy of brain tumors, have become the mainstream in this task. Following the merits of the 3D U-Net architecture, this work constructs a novel 3D U-Net model called SGEResU-Net to segment brain tumors. SGEResU-Net simultaneously embeds residual blocks and spatial group-wise enhance (SGE) attention blocks into a single 3D U-Net architecture, in which SGE attention blocks are employed to enhance the feature learning of semantic regions and reduce possible noise and interference with almost no extra parameters. In addition, a self-ensemble module is utilized to improve the segmentation accuracy of brain tumors. Evaluation experiments on the Brain Tumor Segmentation (BraTS) Challenge 2020 and 2021 benchmarks demonstrate the effectiveness of the proposed SGEResU-Net for this medical application. It achieves DSC values of 83.31, 91.64 and 86.85%, as well as Hausdorff distances (95%) of 19.278, 5.945 and 7.567 for the enhancing tumor, whole tumor, and tumor core on the BraTS 2021 dataset, respectively.
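
The DSC values reported above follow the standard Dice similarity coefficient; a minimal numpy version for binary masks (hypothetical inputs) for reference:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((64, 64, 64), dtype=bool); a[20:40, 20:40, 20:40] = True
b = np.zeros_like(a); b[22:42, 20:40, 20:40] = True
print(round(dice(a, b), 4))                  # ~0.9
```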

Radiogenomic associations in clear cell renal cell carcinoma: an exploratory study

  • Liu, D.
  • Dani, K.
  • Reddy, S. S.
  • Lei, X.
  • Demirjian, N.
  • Hwang, D.
  • Varghese, B. A.
  • Rhie, S. K.
  • Yap, F. Y.
  • Quinn, D. I.
  • Siddiqi, I.
  • Aron, M.
  • Vaishampayan, U.
  • Zahoor, H.
  • Cen, S. Y.
  • Gill, I. S.
  • Duddalwar, V.
2023 Journal Article, cited 0 times
Website
OBJECTIVES: This study investigates how quantitative texture analysis can be used to non-invasively identify novel radiogenomic correlations with Clear Cell Renal Cell Carcinoma (ccRCC) biomarkers. METHODS: The Cancer Genome Atlas-Kidney Renal Clear Cell Carcinoma (TCGA-KIRC) open-source database was used to identify 190 sets of patient genomic data that had corresponding multiphase contrast-enhanced CT images in The Cancer Imaging Archive (TCIA-KIRC). 2824 radiomic features spanning fifteen texture families were extracted from CT images using a custom-built MATLAB software package. Robust radiomic features with strong inter-scanner reproducibility were selected. Random Forest (RF), AdaBoost, and Elastic Net machine learning (ML) algorithms evaluated the ability of the selected radiomic features to predict the presence of 12 clinically relevant molecular biomarkers identified from literature. ML analysis was repeated with cases stratified by stage (I/II vs. III/IV) and grade (1/2 vs. 3/4). 10-fold cross validation was used to evaluate model performance. RESULTS: Before stratification by tumor grade and stage, radiomics predicted the presence of several biomarkers with weak discrimination (AUC 0.60-0.68). Once stratified, radiomics predicted KDM5C, SETD2, PBRM1, and mTOR mutation status with acceptable to excellent predictive discrimination (AUC ranges from 0.70 to 0.86). CONCLUSIONS: Radiomic texture analysis can potentially identify a variety of clinically relevant biomarkers in patients with ccRCC and may have a prognostic implication.

Multiview Self-Supervised Segmentation for OARs Delineation in Radiotherapy

  • Liu, C.
  • Zhang, X.
  • Si, W.
  • Ni, X.
Evid Based Complement Alternat Med 2021 Journal Article, cited 0 times
Website
Radiotherapy has become a common treatment option for head and neck (H&N) cancer, and organs at risk (OARs) need to be delineated to implement a high conformal dose distribution. Manual drawing of OARs is time consuming and inaccurate, so automatic drawing based on deep learning models has been proposed to accurately delineate the OARs. However, state-of-the-art performance usually requires a decent amount of delineation, but collecting pixel-level manual delineations is labor intensive and may not be necessary for representation learning. Encouraged by the recent progress in self-supervised learning, this study proposes and evaluates a novel multiview contrastive representation learning to boost the models from unlabelled data. The proposed learning architecture leverages three views of CTs (coronal, sagittal, and transverse plane) to collect positive and negative training samples. Specifically, a CT in 3D is first projected into three 2D views (coronal, sagittal, and transverse planes), then a convolutional neural network takes 3 views as inputs and outputs three individual representations in latent space, and finally, a contrastive loss is used to pull representation of different views of the same image closer ("positive pairs") and push representations of views from different images ("negative pairs") apart. To evaluate performance, we collected 220 CT images in H&N cancer patients. The experiment demonstrates that our method significantly improves quantitative performance over the state-of-the-art (from 83% to 86% in absolute Dice scores). Thus, our method provides a powerful and principled means to deal with the label-scarce problem.
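
A sketch of the contrastive objective described above, reduced to two of the three views for brevity: representations of the same CT from different views form positive pairs, and other CTs in the batch serve as negatives (an InfoNCE-style loss; the temperature and dimensions are assumptions).

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.1):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau               # (B, B) cosine similarities
    targets = torch.arange(z1.size(0))       # diagonal entries are positives
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(32, 128), torch.randn(32, 128))
print(loss.item())
```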

Brain Tumor Segmentation Network Using Attention-Based Fusion and Spatial Relationship Constraint

  • Liu, Chenyu
  • Ding, Wangbin
  • Li, Lei
  • Zhang, Zhen
  • Pei, Chenhao
  • Huang, Liqin
  • Zhuang, Xiahai
2021 Book Section, cited 0 times
Delineating the brain tumor from magnetic resonance (MR) images is critical for the treatment of gliomas. However, automatic delineation is challenging due to the complex appearance and ambiguous outlines of tumors. Considering that multi-modal MR images can reflect different tumor biological properties, we develop a novel multi-modal tumor segmentation network (MMTSN) to robustly segment brain tumors based on multi-modal MR images. The MMTSN is composed of three sub-branches and a main branch. Specifically, the sub-branches are used to capture different tumor features from multi-modal images, while in the main branch, we design a spatial-channel fusion block (SCFB) to effectively aggregate multi-modal features. Additionally, inspired by the fact that the spatial relationship between sub-regions of the tumor is relatively fixed, e.g., the enhancing tumor is always in the tumor core, we propose a spatial loss to constrain the relationship between different sub-regions of the tumor. We evaluate our method on the test set of the multi-modal brain tumor segmentation challenge 2020 (BraTS 2020). The method achieves Dice scores of 0.8764, 0.8243 and 0.773 for the whole tumor, tumor core and enhancing tumor, respectively.

Automatic Labeling of Special Diagnostic Mammography Views from Images and DICOM Headers

  • Lituiev, D. S.
  • Trivedi, H.
  • Panahiazar, M.
  • Norgeot, B.
  • Seo, Y.
  • Franc, B.
  • Harnish, R.
  • Kawczynski, M.
  • Hadley, D.
J Digit Imaging 2019 Journal Article, cited 0 times
Website
Applying state-of-the-art machine learning techniques to medical images requires a thorough selection and normalization of input data. One such step in digital mammography screening for breast cancer is the labeling and removal of special diagnostic views, in which diagnostic tools or magnification are applied to assist in the assessment of suspicious initial findings. As a common task in medical informatics is the prediction of disease and its stage, these special diagnostic views, which are enriched only among the cohort of diseased cases, will bias machine learning disease predictions. To automate this process, we develop a machine learning pipeline that utilizes both DICOM headers and images to predict such views automatically, allowing for their removal and the generation of unbiased datasets. We achieve an AUC of 99.72% in predicting special mammogram views when combining both types of models. Finally, we apply these models to clean up a dataset of about 772,000 images with an expected sensitivity of 99.0%. The pipeline presented in this paper can be applied to other datasets to obtain high-quality image sets suitable for training disease detection algorithms.
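
A sketch of the header half of such a pipeline: reading a few DICOM attributes that often distinguish special diagnostic views. The file path, the specific fields, and the keyword rule are illustrative assumptions, not the authors' feature list (they train a classifier rather than apply rules).

```python
import pydicom

ds = pydicom.dcmread("mammo_0001.dcm")       # hypothetical file
features = {
    "view_position": str(ds.get("ViewPosition", "")),            # e.g. CC, MLO
    "series_description": str(ds.get("SeriesDescription", "")).lower(),
}
# crude rule-of-thumb flag for magnification / spot-compression views
is_special = ("mag" in features["series_description"]
              or "spot" in features["series_description"])
print(features, is_special)
```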

Context aware deep learning for brain tumor segmentation, subtype classification, and survival prediction using radiology images

  • Pei, Linmin
  • Vidyaratne, Lasitha
  • Rahman, Md Monibor
  • Iftekharuddin, Khan M
Scientific Reports (Nature Publisher Group) 2020 Journal Article, cited 0 times
Website
A brain tumor is an uncontrolled growth of cancerous cells in the brain. Accurate segmentation and classification of tumors are critical for subsequent prognosis and treatment planning. This work proposes context-aware deep learning for brain tumor segmentation, subtype classification, and overall survival prediction using structural multimodal magnetic resonance images (mMRI). We first propose a 3D context-aware deep learning method that considers uncertainty of tumor location in the radiology mMRI image sub-regions to obtain tumor segmentation. We then apply a regular 3D convolutional neural network (CNN) on the tumor segments to achieve tumor subtype classification. Finally, we perform survival prediction using a hybrid method of deep learning and machine learning. To evaluate the performance, we apply the proposed methods to the Multimodal Brain Tumor Segmentation Challenge 2019 (BraTS 2019) dataset for tumor segmentation and overall survival prediction, and to the dataset of the Computational Precision Medicine Radiology-Pathology (CPM-RadPath) Challenge on Brain Tumor Classification 2019 for tumor classification. We also perform an extensive performance evaluation based on popular evaluation metrics, such as the Dice score coefficient, Hausdorff distance at percentile 95 (HD95), classification accuracy, and mean square error. The results suggest that the proposed method offers robust tumor segmentation and survival prediction. Furthermore, the tumor classification result of this work ranked second place in the testing phase of the 2019 CPM-RadPath global challenge.

Deep Learning in Prostate Cancer Diagnosis and Gleason Grading in Histopathology Images: An Extensive Study

  • Linkon, Ali Hasan Md
  • Labib, Mahir
  • Hasan, Tarik
  • Hossain, Mozammal
  • E-Jannat, Marium
Informatics in Medicine Unlocked 2021 Journal Article, cited 0 times
Website
Among American men, prostate cancer is the second leading cause of cancer death. It is also the most common cancer in men worldwide, and the annual numbers are alarming. The most prognostic marker for prostate cancer is the Gleason grading system on histopathology images. Pathologists determine the Gleason grade on Hematoxylin and Eosin (H&E) stained tissue specimens based on tumor structural growth patterns from whole slide images. Recent advances in Computer-Aided Detection (CAD) using deep learning have opened immense scope for automatic detection and recognition at very high accuracy in prostate cancer, as in other medical diagnoses and prognoses. Automated deep learning systems have delivered promising results for the accurate grading of prostate cancer from histopathological images. Many studies have shown that deep learning strategies can achieve better outcomes than simpler systems that make use of pathology samples. This article aims to provide an insight into the gradual evolution of deep learning in detecting prostate cancer and Gleason grading. It also provides a comprehensive, synthesized overview of the current state and existing methodological approaches, as well as unique insights into prostate cancer detection using deep learning. We also describe research findings, current limitations, and future avenues for research. We have tried to make this paper applicable to deep learning communities and hope it will encourage new collaborations to create dedicated applications and improvements for prostate cancer detection and Gleason grading.

Free-breathing and instantaneous abdominal T(2) mapping via single-shot multiple overlapping-echo acquisition and deep learning reconstruction

  • Lin, X.
  • Dai, L.
  • Yang, Q.
  • Yang, Q.
  • He, H.
  • Ma, L.
  • Liu, J.
  • Cheng, J.
  • Cai, C.
  • Bao, J.
  • Chen, Z.
  • Cai, S.
  • Zhong, J.
Eur Radiol 2023 Journal Article, cited 0 times
Website
OBJECTIVES: To develop a real-time abdominal T(2) mapping method without requiring breath-holding or respiratory gating. METHODS: The single-shot multiple overlapping-echo detachment (MOLED) pulse sequence was employed to achieve free-breathing T(2) mapping of the abdomen. Deep learning was used to untangle the non-linear relationship between the MOLED signal and T(2) mapping. A synthetic data generation flow based on Bloch simulation, modality synthesis, and randomization was proposed to overcome the inadequacy of real-world training sets. RESULTS: The results from simulation and in vivo experiments demonstrated that our method could deliver high-quality T(2) mapping. The average NMSE and R(2) values of linear regression in the digital phantom experiments were 0.0178 and 0.9751. Pearson's correlation coefficient between our predicted T(2) and reference T(2) in the phantom experiments was 0.9996. In the measurements for the patients, real-time capture of the T(2) value changes of various abdominal organs before and after contrast agent injection was realized. A total of 33 focal liver lesions were detected in the group, and the mean and standard deviation of T(2) values were 141.1 +/- 50.0 ms for benign and 63.3 +/- 16.0 ms for malignant lesions. The coefficients of variance in a test-retest experiment were 2.9%, 1.2%, 0.9%, 3.1%, and 1.8% for the liver, kidney, gallbladder, spleen, and skeletal muscle, respectively. CONCLUSIONS: Free-breathing abdominal T(2) mapping is achieved in about 100 ms on a clinical MRI scanner. The work paved the way for the development of real-time dynamic T(2) mapping in the abdomen. KEY POINTS: * MOLED achieves free-breathing abdominal T(2) mapping in about 100 ms, enabling real-time capture of T(2) value changes due to CA injection in abdominal organs. * A synthetic data generation flow mitigates the lack of sizable abdominal training datasets.

A radiogenomics signature for predicting the clinical outcome of bladder urothelial carcinoma

  • Lin, Peng
  • Wen, Dong-Yue
  • Chen, Ling
  • Li, Xin
  • Li, Sheng-Hua
  • Yan, Hai-Biao
  • He, Rong-Quan
  • Chen, Gang
  • He, Yun
  • Yang, Hong
Eur Radiol 2019 Journal Article, cited 0 times
Website
OBJECTIVES: To determine the integrative value of contrast-enhanced computed tomography (CECT), transcriptomics data and clinicopathological data for predicting the survival of bladder urothelial carcinoma (BLCA) patients. METHODS: RNA sequencing data, radiomics features and clinical parameters of 62 BLCA patients were included in the study. Prognostic signatures based on radiomics features and gene expression profiles were constructed using least absolute shrinkage and selection operator (LASSO) Cox analysis. A multi-omics nomogram was developed by integrating radiomics, transcriptomics and clinicopathological data. More importantly, radiomics risk score-related genes were identified via weighted correlation network analysis and submitted to functional enrichment analysis. RESULTS: The radiomics and transcriptomics signatures significantly stratified BLCA patients into high- and low-risk groups in terms of the progression-free interval (PFI). The two risk models remained independent prognostic factors in multivariate analyses after adjusting for clinical parameters. A nomogram was developed and showed excellent predictive ability for the PFI in BLCA patients. Functional enrichment analysis suggested that the radiomics signature we developed could reflect the angiogenesis status of BLCA patients. CONCLUSIONS: The integrative nomogram incorporating CECT radiomics, transcriptomics and clinical features improved PFI prediction in BLCA patients and is a feasible and practical reference for oncological precision medicine. KEY POINTS: * Our radiomics and transcriptomics models proved robust for survival prediction in bladder urothelial carcinoma patients. * A multi-omics nomogram model that integrates radiomics, transcriptomics and clinical features for prediction of the progression-free interval in bladder urothelial carcinoma is established. * Molecular functional enrichment analysis is used to reveal the potential molecular function of the radiomics signature.
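
A sketch of the LASSO Cox step using lifelines, assuming a dataframe of radiomic features plus progression-free interval (time, event) columns; the penalty strength and all data are illustrative, not the paper's tuned values.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(62, 5)),
                  columns=[f"feat{i}" for i in range(5)])
df["time"] = rng.exponential(24, 62)            # months to progression
df["event"] = rng.integers(0, 2, 62)            # 1 = progression observed

cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)  # pure L1 penalty => LASSO Cox
cph.fit(df, duration_col="time", event_col="event")
risk = cph.predict_partial_hazard(df)           # radiomics risk score
print((risk > risk.median()).sum(), "high-risk patients")
```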

Molecular hallmarks of breast multiparametric magnetic resonance imaging during neoadjuvant chemotherapy

  • Lin, P.
  • Wan, W. J.
  • Kang, T.
  • Qin, L. F.
  • Meng, Q. X.
  • Wu, X. X.
  • Qin, H. Y.
  • Lin, Y. Q.
  • He, Y.
  • Yang, H.
2023 Journal Article, cited 0 times
Website
PURPOSE: To identify the molecular basis of four parameters obtained from dynamic contrast-enhanced magnetic resonance imaging: functional tumor volume (FTV), longest diameter (LD), sphericity, and contralateral background parenchymal enhancement (BPE). MATERIAL AND METHODS: Gene expression profiles available before treatment and MRI features from different treatment time points were integrated for Spearman correlation analysis. MRI feature-related genes were submitted to hypergeometric distribution-based gene functional enrichment analysis to identify related Kyoto Encyclopedia of Genes and Genomes annotations. Gene set variation analysis was utilized to assess the infiltration of distinct immune cells, which was used to determine relationships between immune phenotypes and medical imaging phenotypes. The clinical significance of MRI and relevant molecular features was analyzed to identify their performance in predicting response to neoadjuvant chemotherapy (NAC) and their prognostic impact. RESULTS: Three hundred and eighty-three patients were included for integrative analysis of MRI features and molecular information. FTV, LD, and sphericity measurements were most positively and significantly correlated with proliferation-, signal transmission-, and immune-related pathways, respectively. However, BPE did not show marked correlations with gene expression alteration status. FTV, LD and sphericity were all significantly positively or negatively correlated with some immune-related processes and immune cell infiltration levels. The decrease in sphericity at 3 cycles after treatment initiation was also markedly negatively related to baseline sphericity measurements and immune signatures. This decrease could act as a predictor of response to NAC. CONCLUSION: Different MRI features capture different tumor molecular characteristics that could explain their corresponding clinical significance.
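
A sketch of the correlation step: Spearman correlation between one MRI measurement (e.g. FTV) and per-patient pathway enrichment scores. Both arrays are hypothetical stand-ins for the study's data.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
ftv = rng.lognormal(3, 0.5, 383)                 # functional tumor volume
pathway_score = 0.3 * np.log(ftv) + rng.normal(0, 0.5, 383)  # toy GSVA score

rho, p = spearmanr(ftv, pathway_score)           # rank-based correlation
print(f"rho={rho:.2f}, p={p:.1e}")
```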

MRI-based radiogenomics analysis for predicting genetic alterations in oncogenic signalling pathways in invasive breast carcinoma

  • Lin, P
  • Liu, WK
  • Li, X
  • Wan, D
  • Qin, H
  • Li, Q
  • Chen, G
  • He, Y
  • Yang, H
Clinical Radiology 2020 Journal Article, cited 0 times

Radiomic profiling of clear cell renal cell carcinoma reveals subtypes with distinct prognoses and molecular pathways

  • Lin, P.
  • Lin, Y. Q.
  • Gao, R. Z.
  • Wen, R.
  • Qin, H.
  • He, Y.
  • Yang, H.
Translational Oncology 2021 Journal Article, cited 0 times
Website
BACKGROUND: To identify radiomic subtypes of clear cell renal cell carcinoma (ccRCC) patients with distinct clinical significance and molecular characteristics reflective of the heterogeneity of ccRCC. METHODS: Quantitative radiomic features of ccRCC were extracted from preoperative CT images of 160 ccRCC patients. Unsupervised consensus cluster analysis was performed to identify robust radiomic subtypes based on these features. The Kaplan-Meier method and chi-square test were used to assess the different clinicopathological characteristics and gene mutations among the radiomic subtypes. Subtype-specific marker genes were identified, and gene set enrichment analyses were performed to reveal the specific molecular characteristics of each subtype. Moreover, a gene expression-based classifier of radiomic subtypes was developed using the random forest algorithm and tested in another independent cohort (n = 101). RESULTS: Radiomic profiling revealed three ccRCC subtypes with distinct clinicopathological features and prognoses. VHL, MUC16, FBN2, and FLG were found to have different mutation frequencies in these radiomic subtypes. In addition, transcriptome analysis revealed that the dysregulation of cell cycle-related pathways may be responsible for the distinct clinical significance of the obtained subtypes. The prognostic value of the radiomic subtypes was further validated in another independent cohort (log-rank P = 0.015). CONCLUSION: In the present multi-scale radiogenomic analysis of ccRCC, radiomics played a central role. Radiomic subtypes could help discern genomic alterations and non-invasively stratify ccRCC patients.

Integrative radiomics and transcriptomics analyses reveal subtype characterization of non-small cell lung cancer

  • Lin, P.
  • Lin, Y. Q.
  • Gao, R. Z.
  • Wan, W. J.
  • He, Y.
  • Yang, H.
Eur Radiol 2023 Journal Article, cited 0 times
Website
OBJECTIVES: To assess whether integrative radiomics and transcriptomics analyses could provide novel insights for radiomic features' molecular annotation and effective risk stratification in non-small cell lung cancer (NSCLC). METHODS: A total of 627 NSCLC patients from three datasets were included. Radiomics features were extracted from segmented 3-dimensional tumour volumes and were z-score normalized for further analysis. In transcriptomics level, 186 pathways and 28 types of immune cells were assessed by using the Gene Set Variation Analysis (GSVA) algorithm. NSCLC patients were categorized into subgroups based on their radiomic features and pathways enrichment scores using consensus clustering. Subgroup-specific radiomics features were used to validate clustering performance and prognostic value. Kaplan-Meier survival analysis with the log-rank test and univariable and multivariable Cox analyses were conducted to explore survival differences among the subgroups. RESULTS: Three radiotranscriptomics subtypes (RTSs) were identified based on the radiomics and pathways enrichment profiles. The three RTSs were characterized as having specific molecular hallmarks: RTS1 (proliferation subtype), RTS2 (metabolism subtype), and RTS3 (immune activation subtype). RTS3 showed increased infiltration of most immune cells. The RTS stratification strategy was validated in a validation cohort and showed significant prognostic value. Survival analysis demonstrated that the RTS strategy could stratify NSCLC patients according to prognosis (p = 0.009), and the RTS strategy remained an independent prognostic indicator after adjusting for other clinical parameters. CONCLUSIONS: This radiotranscriptomics study provides a stratification strategy for NSCLC that could provide information for radiomics feature molecular annotation and prognostic prediction. KEY POINTS: * Radiotranscriptomics subtypes (RTSs) could be used to stratify molecularly heterogeneous patients. * RTSs showed relationships between molecular phenotypes and radiomics features. * The RTS algorithm could be used to identify patients with poor prognosis.

High-resolution anatomic correlation of cyclic motor patterns in the human colon: Evidence of a rectosigmoid brake

  • Lin, Anthony Y
  • Du, Peng
  • Dinning, Philip G
  • Arkwright, John W
  • Kamp, Jozef P
  • Cheng, Leo K
  • Bissett, Ian P
  • O'Grady, Gregory
American Journal of Physiology-Gastrointestinal and Liver Physiology 2017 Journal Article, cited 12 times
Website

Three-dimensional steerable discrete cosine transform with application to 3D image compression

  • Lima, Verusca S.
  • Madeiro, Francisco
  • Lima, Juliano B.
Multidimensional Systems and Signal Processing 2020 Journal Article, cited 0 times
Website
This work introduces the three-dimensional steerable discrete cosine transform (3D-SDCT), which is obtained from the relationship between the discrete cosine transform (DCT) and the graph Fourier transform of a signal on a path graph. It employs the fact that the basis vectors of the 3D-DCT constitute a possible eigenbasis for the Laplacian of the product of such graphs. The proposed transform employs a rotated version of the 3D-DCT basis. We then evaluate the applicability of the 3D-SDCT in the field of 3D medical image compression. We consider the case where there is only one pair of rotation angles per block, rotating all the 3D-DCT basis vectors by the same pair. The obtained results show that the 3D-SDCT can be used efficiently in this application scenario, and it outperforms the classical 3D-DCT.
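
A sketch of the separable 3D DCT from which the steerable transform is built; the steerable version additionally rotates this basis per block, which is omitted here. The block is a hypothetical stand-in.

```python
import numpy as np
from scipy.fft import dctn, idctn

block = np.random.rand(8, 8, 8)              # hypothetical image block
coeffs = dctn(block, norm="ortho")           # 3D DCT (type II)
recon = idctn(coeffs, norm="ortho")          # inverse gives perfect reconstruction
print(np.allclose(block, recon))             # True
```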

Automated pancreas segmentation and volumetry using deep neural network on computed tomography

  • Lim, S. H.
  • Kim, Y. J.
  • Park, Y. H.
  • Kim, D.
  • Kim, K. G.
  • Lee, D. H.
2022 Journal Article, cited 0 times
Website
Pancreas segmentation is necessary for observing lesions, analyzing anatomical structures, and predicting patient prognosis. Therefore, various studies have designed segmentation models based on convolutional neural networks for pancreas segmentation. However, the deep learning approach is limited by a lack of data, and studies conducted on a large computed tomography dataset are scarce. Therefore, this study aims to perform deep-learning-based semantic segmentation on 1006 participants and evaluate the automatic segmentation performance of the pancreas via four individual three-dimensional segmentation networks. In this study, we performed internal validation with 1,006 patients and external validation using the cancer imaging archive pancreas dataset. We obtained mean precision, recall, and dice similarity coefficients of 0.869, 0.842, and 0.842, respectively, for internal validation via a relevant approach among the four deep learning networks. Using the external dataset, the deep learning network achieved mean precision, recall, and dice similarity coefficients of 0.779, 0.749, and 0.735, respectively. We expect that generalized deep-learning-based systems can assist clinical decisions by providing accurate pancreatic segmentation and quantitative information of the pancreas for abdominal computed tomography.

An UNet-Based Brain Tumor Segmentation Framework via Optimal Mass Transportation Pre-processing

  • Liao, Jia-Wei
  • Huang, Tsung-Ming
  • Li, Tiexiang
  • Lin, Wen-Wei
  • Wang, Han
  • Yau, Shing-Tung
2023 Conference Paper, cited 0 times
Website
This article builds a deep learning framework for brain tumor segmentation in MRI images. For this purpose, we develop a novel 2-Phase UNet-based OMT framework that uses optimal mass transportation (OMT) to increase the proportion of the image occupied by brain tumor. Moreover, due to the scarcity of training data, we vary the density function through different parameters to increase data diversity. For post-processing, we propose an adaptive ensemble procedure that computes the eigenvectors of the Dice similarity matrix and chooses the result with the highest aggregation probability as the predicted label. The Dice scores of the whole tumor (WT), tumor core (TC), and enhanced tumor (ET) regions for online validation computed by SegResUNet were 0.9214, 0.8823, and 0.8411, respectively. Compared with random-crop pre-processing, OMT is far superior.
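
A sketch of the adaptive ensemble idea described above, under stated assumptions: build the pairwise Dice similarity matrix of candidate segmentations, take its principal eigenvector, and keep the candidate with the largest weight (the one most consistent with the rest). The masks are hypothetical.

```python
import numpy as np

def dice(a, b, eps=1e-7):
    return (2 * np.logical_and(a, b).sum() + eps) / (a.sum() + b.sum() + eps)

def pick_by_eigenvector(candidates):         # list of boolean masks
    n = len(candidates)
    S = np.array([[dice(candidates[i], candidates[j]) for j in range(n)]
                  for i in range(n)])         # pairwise Dice similarity matrix
    w, v = np.linalg.eigh(S)                  # symmetric matrix => eigh
    weights = np.abs(v[:, -1])                # principal eigenvector
    return int(np.argmax(weights))            # most consensual candidate

masks = [np.random.rand(32, 32, 32) > 0.5 for _ in range(4)]
print("chosen candidate:", pick_by_eigenvector(masks))
```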

Evaluate the Malignancy of Pulmonary Nodules Using the 3D Deep Leaky Noisy-OR Network

  • Liao, Fangzhou
  • Liang, Ming
  • Li, Zhe
  • Hu, Xiaolin
  • Song, Sen
IEEE Trans Neural Netw Learn Syst 2017 Journal Article, cited 15 times
Website
Automatically diagnosing lung cancer from computed tomography scans involves two steps: detecting all suspicious lesions (pulmonary nodules) and evaluating whole-lung malignancy. Currently, there are many studies about the first step but few about the second. Since the existence of a nodule does not definitively indicate cancer, and the morphology of a nodule has a complicated relationship with cancer, the diagnosis of lung cancer demands careful investigation of every suspicious nodule and integration of information across all nodules. We propose a 3-D deep neural network to solve this problem. The model consists of two modules. The first is a 3-D region proposal network for nodule detection, which outputs all suspicious nodules for a subject. The second selects the top five nodules based on the detection confidence, evaluates their cancer probabilities, and combines them with a leaky noisy-OR gate to obtain the probability of lung cancer for the subject. The two modules share the same backbone network, a modified U-net. The overfitting caused by the shortage of training data is alleviated by training the two modules alternately. The proposed model won first place in the Data Science Bowl 2017 competition.
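
The leaky noisy-OR combination described above, as a one-line sketch: the subject-level cancer probability from the top nodule probabilities p_i plus a leak probability. All values are made up for illustration.

```python
import numpy as np

def leaky_noisy_or(p_nodules, p_leak):
    # subject is cancer-free only if the leak and every nodule are all negative
    return 1.0 - (1.0 - p_leak) * np.prod(1.0 - np.asarray(p_nodules))

print(leaky_noisy_or([0.30, 0.12, 0.05, 0.02, 0.01], p_leak=0.02))
```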

ORRN: An ODE-Based Recursive Registration Network for Deformable Respiratory Motion Estimation With Lung 4DCT Images

  • Liang, X.
  • Lin, S.
  • Liu, F.
  • Schreiber, D.
  • Yip, M.
IEEE Trans Biomed Eng 2023 Journal Article, cited 3 times
Website
OBJECTIVE: Deformable Image Registration (DIR) plays a significant role in quantifying deformation in medical data. Recent Deep Learning methods have shown promising accuracy and speedup for registering a pair of medical images. However, in 4D (3D + time) medical data, organ motion, such as respiratory motion and heart beating, cannot be effectively modeled by pair-wise methods, which are optimized for image pairs and do not consider the organ motion patterns that arise in 4D data. METHODS: This article presents ORRN, an Ordinary Differential Equations (ODE)-based recursive image registration network. Our network learns to estimate time-varying voxel velocities for an ODE that models deformation in 4D image data. It adopts a recursive registration strategy to progressively estimate a deformation field through ODE integration of voxel velocities. RESULTS: We evaluate the proposed method on two publicly available lung 4DCT datasets, DIRLab and CREATIS, for two tasks: 1) registering all images to the extreme inhale image for 3D+t deformation tracking and 2) registering extreme exhale to inhale phase images. Our method outperforms other learning-based methods in both tasks, producing the smallest Target Registration Error of 1.24 mm and 1.26 mm, respectively. Additionally, it produces less than 0.001% unrealistic image folding, and the computation speed is less than 1 s for each CT volume. CONCLUSION: ORRN demonstrates promising registration accuracy, deformation plausibility, and computation efficiency on group-wise and pair-wise registration tasks. SIGNIFICANCE: It has significant implications in enabling fast and accurate respiratory motion estimation for treatment planning in radiation therapy or robot motion planning in thoracic needle insertion.
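
A sketch of the core ODE idea: integrate a time-varying velocity field over a coordinate grid to obtain a deformation field, here with plain Euler steps and a toy velocity function (the paper instead learns the velocities with a network and uses proper ODE integration).

```python
import numpy as np

def integrate_deformation(velocity_fn, shape, t0=0.0, t1=1.0, steps=10):
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape],
                                indexing="ij"), axis=-1).astype(float)
    pos, dt = grid.copy(), (t1 - t0) / steps
    for k in range(steps):                   # Euler integration of dx/dt = v(x, t)
        pos += dt * velocity_fn(pos, t0 + k * dt)
    return pos - grid                        # displacement field

v = lambda x, t: np.full_like(x, 0.5) * (1 - t)   # toy decaying velocity
disp = integrate_deformation(v, (8, 8, 8))
print(disp.mean(axis=(0, 1, 2)))             # ~0.275 along each axis
```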

Incorporating the Hybrid Deformable Model for Improving the Performance of Abdominal CT Segmentation via Multi-Scale Feature Fusion Network

  • Liang, Xiaokun
  • Li, Na
  • Zhang, Zhicheng
  • Xiong, Jing
  • Zhou, Shoujun
  • Xie, Yaoqin
Medical Image Analysis 2021 Journal Article, cited 0 times
Website
Automated multi-organ abdominal Computed Tomography (CT) image segmentation can assist treatment planning and diagnosis and improve the efficiency of many clinical workflows. The 3-D Convolutional Neural Network (CNN) recently attained state-of-the-art accuracy, which typically relies on supervised training with much manually annotated data. Many methods use a data augmentation strategy with rigid or affine spatial transformations to alleviate the over-fitting problem and improve the network's robustness. However, rigid or affine spatial transformations fail to capture the complex voxel-based deformation in the abdomen, which is filled with many soft organs. We developed a novel Hybrid Deformable Model (HDM), which consists of inter- and intra-patient deformations, for more effective data augmentation to tackle this issue. The inter-patient deformations were extracted from learning-based deformable registration between different patients, while the intra-patient deformations were formed using random 3-D Thin-Plate-Spline (TPS) transformations. Incorporating the HDM enabled the network to capture many of the subtle deformations of abdominal organs. To find a better solution and achieve faster convergence for network training, we fused the pre-trained multi-scale features into a 3-D attention U-Net. We directly compared the segmentation accuracy of the proposed method to previous techniques on several centers' datasets via cross-validation. The proposed method achieves an average Dice Similarity Coefficient (DSC) of 0.852, which outperforms other state-of-the-art methods on multi-organ abdominal CT segmentation.

Human-level comparable control volume mapping with a deep unsupervised-learning model for image-guided radiation therapy

  • Liang, X.
  • Bassenne, M.
  • Hristov, D. H.
  • Islam, M. T.
  • Zhao, W.
  • Jia, M.
  • Zhang, Z.
  • Gensheimer, M.
  • Beadle, B.
  • Le, Q.
  • Xing, L.
Comput Biol Med 2022 Journal Article, cited 0 times
Website
PURPOSE: To develop a deep unsupervised learning method with control volume (CV) mapping from daily patient-positioning CT (dCT) to planning computed tomography (pCT) for precise patient positioning. METHODS: We propose an unsupervised learning framework that maps CVs from dCT to pCT to automatically generate the couch shifts, including translation and rotation dimensions. The network inputs are the dCT, the pCT and the CV positions in the pCT. The output is the transformation parameter of the dCT used to set up head and neck cancer (HNC) patients. The network is trained to maximize image similarity between the CV in the pCT and the CV in the dCT. A total of 554 CT scans from 158 HNC patients were used for the evaluation of the proposed model. Each patient had multiple CT scans acquired at different time points. For testing, couch shifts are calculated by averaging the translations and rotations from the CVs. The ground truth of the shifts comes from bone landmarks determined by an experienced radiation oncologist. RESULTS: The systematic positioning errors of translation and rotation are less than 0.47 mm and 0.17 degrees, respectively. The random positioning errors of translation and rotation are less than 1.13 mm and 0.29 degrees, respectively. The proposed method raised the proportion of cases registered within a preset tolerance (2.0 mm/1.0 degrees) from 66.67% to 90.91% compared to standard registrations. CONCLUSIONS: We proposed a deep unsupervised learning architecture for patient positioning with inclusion of CV mapping, which weights the CV regions differently to mitigate any potential adverse influence of image artifacts on the registration. Our experimental results show that the proposed method achieved efficient and effective HNC patient positioning.

Fast automated detection of COVID-19 from medical images using convolutional neural networks

  • Liang, Shuang
  • Liu, Huixiang
  • Gu, Yu
  • Guo, Xiuhua
  • Li, Hongjun
  • Li, Li
  • Wu, Zhiyuan
  • Liu, Mengyang
  • Tao, Lixin
Communications Biology 2021 Journal Article, cited 0 times
Website

Brain Tumor Segmentation Using 3D Convolutional Neural Network

  • Liang, Kaisheng
  • Lu, Wenlian
2020 Book Section, cited 0 times
Brain tumor segmentation is one of the most crucial procedures in the diagnosis of brain tumors because it is of great significance for the analysis and visualization of brain structures that can guide surgery. Following the development of the natural scene segmentation model FCN, the most representative model, U-Net, has been developed, and many current efforts aim to improve the encoder-decoder architecture for better performance. In this paper, we focus on the improvement of the encoder-decoder network and the analysis of 3D medical images. We propose an additional path to enhance the encoder part and two separate up-sampling paths for the decoder part of the model. The proposed approach was trained and evaluated on the BraTS 2019 dataset.

Multiregional radiomics profiling from multiparametric MRI: Identifying an imaging predictor of IDH1 mutation status in glioblastoma

  • Li, Zhi‐Cheng
  • Bai, Hongmin
  • Sun, Qiuchang
  • Zhao, Yuanshen
  • Lv, Yanchun
  • Zhou, Jian
  • Liang, Chaofeng
  • Chen, Yinsheng
  • Liang, Dong
  • Zheng, Hairong
Cancer medicine 2018 Journal Article, cited 0 times
Website
PURPOSE: Isocitrate dehydrogenase 1 (IDH1) has been proven to be a prognostic and predictive marker in glioblastoma (GBM) patients. The purpose was to preoperatively predict IDH1 mutation status in GBM using multiregional radiomics features from multiparametric magnetic resonance imaging (MRI). METHODS: In this retrospective multicenter study, 225 patients were included. A total of 1614 multiregional features were extracted from the enhancement area, non-enhancement area, necrosis, edema, tumor core, and whole tumor in multiparametric MRI. Three multiregional radiomics models were built from the tumor core, the whole tumor, and all regions using all-relevant feature selection and random forest classification for predicting IDH1. Four single-region models and a model combining all-region features with clinical factors (age, sex, and Karnofsky performance status) were also built. All models were built from a training cohort (118 patients) and tested on an independent validation cohort (107 patients). RESULTS: Among the four single-region radiomics models, the edema model achieved the best accuracy of 96% and the best F1-score of 0.75, while the non-enhancement model achieved the best area under the receiver operating characteristic curve (AUC) of 0.88 in the validation cohort. The overall performance of the tumor-core model (accuracy 0.96, AUC 0.86 and F1-score 0.75) and the whole-tumor model (accuracy 0.96, AUC 0.88 and F1-score 0.75) was slightly better than that of the single-region models. The 8-feature all-region radiomics model achieved improved overall performance with an accuracy of 96%, an AUC of 0.90, and an F1-score of 0.78. Among all models, the model combining all-region imaging features with age achieved the best performance: an accuracy of 97%, an AUC of 0.96, and an F1-score of 0.84. CONCLUSIONS: The radiomics model built with multiregional features from multiparametric MRI has the potential to preoperatively detect IDH1 mutation status in GBM patients. The multiregional model built with all-region features performed better than the single-region models, while combining age with all-region features achieved the best performance.

Large-scale retrieval for medical image analytics: A comprehensive review

  • Li, Zhongyu
  • Zhang, Xiaofan
  • Müller, Henning
  • Zhang, Shaoting
Medical Image Analysis 2018 Journal Article, cited 23 times
Website

Low-Dose CT Image Denoising with Improving WGAN and Hybrid Loss Function

  • Li, Z.
  • Shi, W.
  • Xing, Q.
  • Miao, Y.
  • He, W.
  • Yang, H.
  • Jiang, Z.
Comput Math Methods Med 2021 Journal Article, cited 1 times
Website
The X-ray radiation from computed tomography (CT) poses a potential risk. However, simply decreasing the dose makes CT images noisy and compromises diagnostic performance. Here, we develop a novel method for denoising low-dose CT images. Our framework is based on an improved generative adversarial network coupled with a hybrid loss function, including adversarial loss, perceptual loss, sharpness loss, and structural similarity loss. Among the loss function terms, the perceptual loss and structural similarity loss are used to preserve textural details, the sharpness loss makes reconstructed images clear, and the adversarial loss sharpens the boundary regions. The results of experiments show that the proposed method can remove noise and artifacts more effectively than state-of-the-art methods in terms of visual effect, quantitative measurements, and texture details.

Automatic Brain Tumor Segmentation Using Multi-scale Features and Attention Mechanism

  • Li, Zhaopei
  • Shen, Zhiqiang
  • Wen, Jianhui
  • He, Tian
  • Pan, Lin
2022 Book Section, cited 0 times
Gliomas are the most common primary malignant tumors of the brain. Magnetic resonance (MR) imaging is one of the main detection methods for brain tumors, so accurate segmentation of brain tumors from MR images has important clinical significance throughout the diagnostic process. At present, most popular automatic medical image segmentation methods are based on deep learning. Many researchers have developed convolutional neural networks for brain tumor segmentation and demonstrated superior performance. In this paper, we propose a novel deep learning-based method named the multi-scale feature recalibration network (MSFR-Net), which can extract features at multiple scales and recalibrate them through the multi-scale feature extraction and recalibration (MSFER) module. In addition, we improve segmentation performance by exploiting cross-entropy and Dice loss to address the class imbalance problem. We evaluate our proposed architecture on the Brain Tumor Segmentation (BraTS) challenge 2021 test dataset. The proposed method achieved Dice coefficients of 89.15%, 83.02%, and 82.08% for the whole tumor, tumor core and enhancing tumor, respectively.
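
A common cross-entropy-plus-Dice formulation for class imbalance, sketched here for binary masks; the paper's exact weighting is not specified, so an equal weighting is assumed.

```python
import torch
import torch.nn.functional as F

def ce_dice_loss(logits, target, eps=1e-6):
    """logits: (B,1,D,H,W) raw scores; target: same shape, {0,1} floats."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    dice = (2 * inter + eps) / (prob.sum() + target.sum() + eps)
    return bce + (1 - dice)                  # equal weighting assumed

x = torch.randn(2, 1, 16, 16, 16)
y = (torch.rand_like(x) > 0.5).float()
print(ce_dice_loss(x, y).item())
```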

x4 Super-Resolution Analysis of Magnetic Resonance Imaging based on Generative Adversarial Network without Supervised Images

  • Li, Yunhe
  • Zhao, Huiyan
  • Li, Bo
  • Wang, Yi
2021 Conference Paper, cited 0 times
Website
Magnetic resonance imaging (MRI) is widely used in clinical medical auxiliary diagnosis. In acquiring images with MRI machines, patients usually need to be exposed to harmful radiation, and the radiation dose can be reduced by reducing the resolution of MRI images. This paper analyzes the super-resolution of low-resolution MRI images based on a deep learning algorithm to ensure the pixel quality of the MRI image required for medical diagnosis, and then reconstructs high-resolution MRI images as an alternative method to reduce radiation dose. This paper studies how to improve the resolution of low-dose MRI by a factor of 4 through deep learning-based super-resolution analysis without other available information. We construct a data set close to natural low-high resolution image pairs through degradation kernel estimation and noise injection, and build a two-stage generative adversarial network based on the design ideas of ESRGAN, PatchGAN, and VGG-19. Tests show that our method outperforms EDSR, RCAN, and ESRGAN on no-reference image quality metrics.

Prostate gleason score prediction via MRI using capsule network

  • Li, Yuheng
  • Wang, Jing
  • Hu, Mingzhe
  • Patel, Pretesh
  • Mao, Hui
  • Liu, Tian
  • Yang, Xiaofeng
  • Iftekharuddin, Khan M.
  • Chen, Weijie
2023 Conference Paper, cited 0 times
Magnetic Resonance imaging (MRI) is a non-invasive modality for diagnosing prostate carcinoma (PCa) and deep learning has gained increasing interest in MR images. We propose a novel 3D Capsule Network to perform low grade vs high grade PCa classification. The proposed network utilizes Efficient CapsNet as backbone and consists of three main components, 3D convolutional blocks, depth-wise separable 3D convolution, and self-attention routing. The network employs convolutional blocks to extract high level features, which will form primary capsules via depth-wise separable convolution operations. A self-attention mechanism is used to route primary capsules to higher level capsules and finally a PCa grade is assigned. The proposed 3D Capsule Network was trained and tested using a public dataset that involves 529 patients diagnosed with PCa. A baseline 3D CNN method was also experimented for comparison. Our Capsule Network achieved 85% accuracy and 0.87 AUC, while the baseline CNN achieved 80% accuracy and 0.84 AUC. The superior performance of Capsule Network demonstrates its feasibility for PCa grade classification from prostate MRI and shows its potential in assisting clinical decision-making.

Influence of feature calculating parameters on the reproducibility of CT radiomic features: a thoracic phantom study

  • Li, Ying
  • Tan, Guanghua
  • Vangel, Mark
  • Hall, Jonathan
  • Cai, Wenli
Quantitative Imaging in Medicine and Surgery 2020 Journal Article, cited 0 times
Website

Genotype prediction of ATRX mutation in lower-grade gliomas using an MRI radiomics signature

  • Li, Y.
  • Liu, X.
  • Qian, Z.
  • Sun, Z.
  • Xu, K.
  • Wang, K.
  • Fan, X.
  • Zhang, Z.
  • Li, S.
  • Wang, Y.
  • Jiang, T.
Eur Radiol 2018 Journal Article, cited 2 times
Website
OBJECTIVES: To predict ATRX mutation status in patients with lower-grade gliomas using radiomic analysis. METHODS: The Cancer Genome Atlas (TCGA) patients with lower-grade gliomas were randomly allocated into training (n = 63) and validation (n = 32) sets. An independent external-validation set (n = 91) was built based on the Chinese Glioma Genome Atlas (CGGA) database. After feature extraction, an ATRX-related signature was constructed. Subsequently, the radiomic signature was combined with a support vector machine to predict ATRX mutation status in the training, validation and external-validation sets. Predictive performance was assessed by receiver operating characteristic curve analysis. Correlations between the selected features were also evaluated. RESULTS: Nine radiomic features were screened as an ATRX-associated radiomic signature of lower-grade gliomas based on the LASSO regression model. All nine radiomic features were texture-associated (e.g. sum average and variance). The predictive efficiencies measured by the area under the curve were 94.0%, 92.5% and 72.5% in the training, validation and external-validation sets, respectively. The overall correlations between the nine radiomic features were low in both the TCGA and CGGA databases. CONCLUSIONS: Using radiomic analysis, we achieved efficient prediction of ATRX genotype in lower-grade gliomas, and our model was effective in two independent databases. KEY POINTS: * ATRX in lower-grade gliomas could be predicted using radiomic analysis. * The LASSO regression algorithm and SVM performed well in radiomic analysis. * Nine radiomic features were screened as an ATRX-predictive radiomic signature. * The machine-learning model for ATRX prediction was validated by an independent database.
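
A schematic scikit-learn version of the described LASSO-selection-plus-SVM workflow; the placeholder arrays stand in for the TCGA cohort, and keeping exactly nine features mirrors the paper's signature size:

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: radiomic features (n_patients, n_features); y: ATRX status (0/1).
# Placeholder data stands in for the TCGA training cohort.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(63, 400)), rng.integers(0, 2, 63)

model = make_pipeline(
    StandardScaler(),
    # LASSO shrinks most coefficients toward zero; the top-ranked survivors
    # form the radiomic signature (the paper kept nine features).
    SelectFromModel(LassoCV(cv=5), max_features=9, threshold=-np.inf),
    SVC(kernel="rbf", probability=True),
)
model.fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])  # substitute held-out sets
print(f"training AUC = {auc:.3f}")
```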

Histopathologic and proteogenomic heterogeneity reveals features of clear cell renal cell carcinoma aggressiveness

  • Li, Y.
  • Lih, T. M.
  • Dhanasekaran, S. M.
  • Mannan, R.
  • Chen, L.
  • Cieslik, M.
  • Wu, Y.
  • Lu, R. J.
  • Clark, D. J.
  • Kolodziejczak, I.
  • Hong, R.
  • Chen, S.
  • Zhao, Y.
  • Chugh, S.
  • Caravan, W.
  • Naser Al Deen, N.
  • Hosseini, N.
  • Newton, C. J.
  • Krug, K.
  • Xu, Y.
  • Cho, K. C.
  • Hu, Y.
  • Zhang, Y.
  • Kumar-Sinha, C.
  • Ma, W.
  • Calinawan, A.
  • Wyczalkowski, M. A.
  • Wendl, M. C.
  • Wang, Y.
  • Guo, S.
  • Zhang, C.
  • Le, A.
  • Dagar, A.
  • Hopkins, A.
  • Cho, H.
  • Leprevost, F. D. V.
  • Jing, X.
  • Teo, G. C.
  • Liu, W.
  • Reimers, M. A.
  • Pachynski, R.
  • Lazar, A. J.
  • Chinnaiyan, A. M.
  • Van Tine, B. A.
  • Zhang, B.
  • Rodland, K. D.
  • Getz, G.
  • Mani, D. R.
  • Wang, P.
  • Chen, F.
  • Hostetter, G.
  • Thiagarajan, M.
  • Linehan, W. M.
  • Fenyo, D.
  • Jewell, S. D.
  • Omenn, G. S.
  • Mehra, R.
  • Wiznerowicz, M.
  • Robles, A. I.
  • Mesri, M.
  • Hiltke, T.
  • An, E.
  • Rodriguez, H.
  • Chan, D. W.
  • Ricketts, C. J.
  • Nesvizhskii, A. I.
  • Zhang, H.
  • Ding, L.
  • Clinical Proteomic Tumor Analysis, Consortium
Cancer Cell 2022 Journal Article, cited 0 times
Website
Clear cell renal cell carcinomas (ccRCCs) represent approximately 75% of RCC cases and account for most RCC-associated deaths. Inter- and intratumoral heterogeneity (ITH) results in varying prognosis and treatment outcomes. To obtain the most comprehensive profile of ccRCC, we perform integrative histopathologic, proteogenomic, and metabolomic analyses on 305 ccRCC tumor segments and 166 paired adjacent normal tissues from 213 cases. Combining histologic and molecular profiles reveals ITH in 90% of ccRCCs, with 50% demonstrating immune signature heterogeneity. High tumor grade, along with BAP1 mutation, genome instability, increased hypermethylation, and a specific protein glycosylation signature define a high-risk disease subset, where UCHL1 expression displays prognostic value. Single-nuclei RNA sequencing of the adverse sarcomatoid and rhabdoid phenotypes uncover gene signatures and potential insights into tumor evolution. In vitro cell line studies confirm the potential of inhibiting identified phosphoproteome targets. This study molecularly stratifies aggressive histopathologic subtypes that may inform more effective treatment strategies.

4× Super‐resolution of unsupervised CT images based on GAN

  • Li, Yunhe
  • Chen, Lunqiang
  • Li, Bo
  • Zhao, Huiyan
2023 Journal Article, cited 0 times
Improving the resolution of computed tomography (CT) medical images can help doctors identify lesions more accurately, which is important in clinical diagnosis. In the absence of natural paired datasets of high- and low-resolution images, we abandoned the conventional bicubic method and instead used a dataset of single-resolution images to create near-natural high/low-resolution image pairs by designing a deep learning network and utilizing noise injection. In addition, we propose a super-resolution generative adversarial network called KerSRGAN, which includes a super-resolution generator, a super-resolution discriminator, and a super-resolution feature extractor, to achieve 4× super-resolution of CT images. Experimental evaluation shows that KerSRGAN outperforms state-of-the-art methods on no-reference image quality metrics for the generated 4× super-resolution CT images. Moreover, in an intuitive visual comparison, the images generated by KerSRGAN have more precise details and better perceptual quality.

Radiomics-Based Method for Predicting the Glioma Subtype as Defined by Tumor Grade, IDH Mutation, and 1p/19q Codeletion

  • Li, Yingping
  • Ammari, Samy
  • Lawrance, Littisha
  • Quillent, Arnaud
  • Assi, Tarek
  • Lassau, Nathalie
  • Chouzenoux, Emilie
Cancers 2022 Journal Article, cited 0 times
Website
Gliomas are among the most common types of central nervous system (CNS) tumors. A prompt diagnosis of the glioma subtype is crucial to estimate the prognosis and personalize the treatment strategy. The objective of this study was to develop a radiomics pipeline based on clinical Magnetic Resonance Imaging (MRI) scans to noninvasively predict the glioma subtype, as defined by tumor grade, isocitrate dehydrogenase (IDH) mutation status, and 1p/19q codeletion status. A total of 212 patients from the public retrospective The Cancer Genome Atlas Low Grade Glioma (TCGA-LGG) and The Cancer Genome Atlas Glioblastoma Multiforme (TCGA-GBM) datasets were used for the experiments and analyses. Different settings in the radiomics pipeline were investigated to improve the classification, including Z-score normalization, the feature extraction strategy, the image filter applied to the MRI images, the introduction of clinical information, ComBat harmonization, the classifier chain strategy, etc. Based on numerous experiments, we finally reached an optimal pipeline for classifying the glioma tumors. We then tested this final radiomics pipeline on the hold-out test data with 51 randomly sampled seeds for reliable and robust conclusions. The results showed that, after tuning the radiomics pipeline, the mean AUC improved from 0.8935 (±0.0351) to 0.9319 (±0.0386), from 0.8676 (±0.0421) to 0.9283 (±0.0333), and from 0.6473 (±0.1074) to 0.8196 (±0.0702) in the test data for predicting the tumor grade, IDH mutation, and 1p/19q codeletion status, respectively. The mean accuracy for predicting the five glioma subtypes also improved from 0.5772 (±0.0816) to 0.6716 (±0.0655). Finally, we analyzed the characteristics of the radiomic features that best distinguished the glioma grade, the IDH mutation, and the 1p/19q codeletion status. Apart from the promising prediction of the glioma subtype, this study also provides a better understanding of radiomics model development and interpretability. The results are reproducible with our Python code, publicly available on GitHub.
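
The classifier chain strategy mentioned above can be sketched with scikit-learn's ClassifierChain; the label ordering, base classifier, and placeholder data are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: harmonized radiomic features; Y columns: grade, IDH, 1p/19q (each 0/1).
rng = np.random.default_rng(0)
X = rng.normal(size=(212, 100))
Y = rng.integers(0, 2, size=(212, 3))

# The chain feeds each label's prediction to the next classifier,
# so later labels (e.g., 1p/19q) can condition on earlier ones (grade, IDH).
chain = make_pipeline(
    StandardScaler(),  # Z-score normalization, as in the pipeline above
    ClassifierChain(LogisticRegression(max_iter=1000), order=[0, 1, 2]),
)
chain.fit(X, Y)
probas = chain.predict_proba(X)  # (n_samples, 3), one column per label
```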

Multiple instance learning-based two-stage metric learning network for whole slide image classification

  • Li, Xiaoyu
  • Yang, Bei
  • Chen, Tiandong
  • Gao, Zheng
  • Li, Huijie
2023 Journal Article, cited 0 times
Cancer is one of the most common diseases around the world, and pathological examination is the most effective method for cancer diagnosis. However, this heavy and time-consuming workflow places a large burden on pathologists. With the advent of whole slide image (WSI) scanners, the tissue on a glass slide can be saved as a high-definition digital image, which makes computer-aided diagnosis possible. However, the extreme size of WSIs and the lack of pixel-level annotations pose a great challenge for machine learning in pathology image diagnosis. To address this, we propose a metric learning-based two-stage MIL framework (TSMIL) for WSI classification, which combines supervised clustering and metric-based classification. The training samples (WSIs) are first clustered into different clusters based on their labels in supervised clustering. Then, building on this step, we propose four different strategies to measure the distance of a test sample to each class cluster: MaxS, AvgS, DenS and HybS. Our model is evaluated on three pathology datasets: TCGA-NSCLC, TCGA-RCC and HER2. The average AUC scores reach 0.9895 and 0.9988 on TCGA-NSCLC and TCGA-RCC, and 0.9265 on HER2, respectively. The results show that our method outperforms state-of-the-art methods, and the excellent performance on different cancer datasets verifies its feasibility as a general architecture.
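
One plausible reading of the MaxS/AvgS scoring strategies, sketched with cosine similarity (the similarity measure and embedding shapes are guesses, not the paper's definitions):

```python
import numpy as np

def classify_by_cluster_similarity(test_emb, cluster_embs, strategy="MaxS"):
    """Assign a test WSI embedding to the class whose training cluster is
    most similar. cluster_embs is a list with one (n_i, d) array per class."""
    def cos_sims(a, B):
        a = a / np.linalg.norm(a)
        B = B / np.linalg.norm(B, axis=1, keepdims=True)
        return B @ a
    scores = []
    for embs in cluster_embs:
        sims = cos_sims(test_emb, embs)
        # MaxS: nearest member; AvgS: mean similarity to the whole cluster.
        scores.append(sims.max() if strategy == "MaxS" else sims.mean())
    return int(np.argmax(scores))
```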

Multi-step Cascaded Networks for Brain Tumor Segmentation

  • Li, Xiangyu
  • Luo, Gongning
  • Wang, Kuanquan
2020 Book Section, cited 0 times
Automatic brain tumor segmentation plays an extremely important role in the whole process of brain tumor diagnosis and treatment. In this paper, we propose a multi-step cascaded network which takes the hierarchical topology of the brain tumor substructures into consideration and segments the substructures from coarse to fine. During segmentation, the result of the former step is utilized as prior information for the next step to guide the finer segmentation process. The whole network is trained in an end-to-end fashion. Moreover, to alleviate the vanishing gradient issue and reduce overfitting, we added several auxiliary outputs as a form of deep supervision for each step and introduced several data augmentation strategies, which proved to be quite effective for brain tumor segmentation. Lastly, focal loss is utilized to address the marked class imbalance between tumor regions and background. Our model is tested on the BraTS 2019 validation dataset; the preliminary mean Dice coefficients are 0.886, 0.813, and 0.771 for the whole tumor, tumor core and enhancing tumor, respectively. Code is available at https://github.com/JohnleeHIT/Brats2019.
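
A minimal PyTorch sketch of the focal loss used to counter tumor/background imbalance (the gamma and alpha values are common defaults, not necessarily the authors'):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, target, gamma=2.0, alpha=0.25):
    """Focal loss for voxel-wise segmentation: down-weights easy
    (well-classified) background voxels so rare tumor voxels dominate.

    logits: (N, C, ...) raw scores; target: (N, ...) integer labels.
    """
    log_p = F.log_softmax(logits, dim=1)
    ce = F.nll_loss(log_p, target, reduction="none")  # per-voxel CE
    p_t = torch.exp(-ce)                              # prob of the true class
    return (alpha * (1.0 - p_t) ** gamma * ce).mean()
```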

SIFT-GVF-based lung edge correction method for correcting the lung region in CT images

  • Li, X.
  • Feng, B.
  • Qiao, S.
  • Wei, H.
  • Feng, C.
PLoS One 2023 Journal Article, cited 0 times
Website
Juxtapleural nodules are excluded from the segmented lung region by Hounsfield-unit threshold-based segmentation methods. To re-include those regions in the lung region, a new approach using scale-invariant feature transform (SIFT) and gradient vector flow (GVF) models is presented in this study. First, the SIFT method detects all scale-invariant points in the binary lung region, and the boundary points in the neighborhood of each scale-invariant point are collected to form supportive boundary lines. Second, a Fourier descriptor produces a characteristic representation of each supportive boundary line, and spectral energy identifies the supportive boundaries that must be corrected. Third, the GVF-snake method corrects the recognized supportive boundaries with a smooth profile curve, giving an ideal corrected edge in those regions. Finally, the performance of the proposed method is evaluated through experiments on multiple authentic computed tomography images. The results and robustness demonstrate that the proposed method can correct the juxtapleural region precisely.

Adaptive multi-modality fusion network for glioma grading

  • Wang Li
  • Cao Ying
  • Tian Lili
  • Chen Qijian
  • Guo Shunchao
  • Zhang Jian
  • Wang Lihui
Journal of Image and Graphics 2021 Journal Article, cited 0 times
Objective: Accurate grading of gliomas is the main way to assist in formulating personalized treatment plans, but most existing studies focus on classification based on the tumor region, which must be delineated in advance and therefore cannot meet the real-time requirements of clinical computer-aided diagnosis. This paper therefore proposes an adaptive multi-modality fusion network (AMMFNet) that achieves accurate end-to-end prediction from the originally acquired images to the glioma grade, without delineating the tumor region. Methods: AMMFNet uses four isomorphic network branches to extract multi-scale image features from different modalities; it fuses features using an adaptive multi-modal feature fusion module and a dimensionality reduction module; and it combines a cross-entropy classification loss with a feature embedding loss to improve glioma classification accuracy. To verify model performance, this paper uses the MICCAI (Medical Image Computing and Computer Assisted Intervention Society) 2018 public dataset for training and testing, compares against cutting-edge deep learning models and recent glioma classification models, and uses accuracy, the area under the receiver operating characteristic curve (AUC), and other indicators for quantitative analysis. Results: Without delineating the tumor region, the model's AUC for predicting glioma grade was 0.965; when the tumor region was used, the AUC reached 0.997 with an accuracy of 0.982, a 1.2% improvement over the current best glioma classification model, a multi-task convolutional neural network. Conclusion: By combining multi-modal and multi-semantic-level features, the proposed adaptive multi-modal feature fusion network can accurately predict glioma grade without delineating the tumor region. Keywords: glioma grading; deep learning; multimodal fusion; multiscale features; end-to-end classification

Breast Multiparametric MRI for Prediction of Neoadjuvant Chemotherapy Response in Breast Cancer: The BMMR2 Challenge

  • Li, W.
  • Partridge, S. C.
  • Newitt, D. C.
  • Steingrimsson, J.
  • Marques, H. S.
  • Bolan, P. J.
  • Hirano, M.
  • Bearce, B. A.
  • Kalpathy-Cramer, J.
  • Boss, M. A.
  • Teng, X.
  • Zhang, J.
  • Cai, J.
  • Kontos, D.
  • Cohen, E. A.
  • Mankowski, W. C.
  • Liu, M.
  • Ha, R.
  • Pellicer-Valero, O. J.
  • Maier-Hein, K.
  • Rabinovici-Cohen, S.
  • Tlusty, T.
  • Ozery-Flato, M.
  • Parekh, V. S.
  • Jacobs, M. A.
  • Yan, R.
  • Sung, K.
  • Kazerouni, A. S.
  • DiCarlo, J. C.
  • Yankeelov, T. E.
  • Chenevert, T. L.
  • Hylton, N. M.
Radiol Imaging Cancer 2024 Journal Article, cited 0 times
Website
Purpose To describe the design, conduct, and results of the Breast Multiparametric MRI for prediction of neoadjuvant chemotherapy Response (BMMR2) challenge. Materials and Methods The BMMR2 computational challenge opened on May 28, 2021, and closed on December 21, 2021. The goal of the challenge was to identify image-based markers derived from multiparametric breast MRI, including diffusion-weighted imaging (DWI) and dynamic contrast-enhanced (DCE) MRI, along with clinical data for predicting pathologic complete response (pCR) following neoadjuvant treatment. Data included 573 breast MRI studies from 191 women (mean age [+/-SD], 48.9 years +/- 10.56) in the I-SPY 2/American College of Radiology Imaging Network (ACRIN) 6698 trial (ClinicalTrials.gov: NCT01042379). The challenge cohort was split into training (60%) and test (40%) sets, with teams blinded to test set pCR outcomes. Prediction performance was evaluated by area under the receiver operating characteristic curve (AUC) and compared with the benchmark established from the ACRIN 6698 primary analysis. Results Eight teams submitted final predictions. Entries from three teams had point estimators of AUC that were higher than the benchmark performance (AUC, 0.782 [95% CI: 0.670, 0.893], with AUCs of 0.803 [95% CI: 0.702, 0.904], 0.838 [95% CI: 0.748, 0.928], and 0.840 [95% CI: 0.748, 0.932]). A variety of approaches were used, ranging from extraction of individual features to deep learning and artificial intelligence methods, incorporating DCE and DWI alone or in combination. Conclusion The BMMR2 challenge identified several models with high predictive performance, which may further expand the value of multiparametric breast MRI as an early marker of treatment response. Clinical trial registration no. NCT01042379 Keywords: MRI, Breast, Tumor Response Supplemental material is available for this article. (c) RSNA, 2024.

Prediction and verification of survival in patients with non-small-cell lung cancer based on an integrated radiomics nomogram

  • Li, R.
  • Peng, H.
  • Xue, T.
  • Li, J.
  • Ge, Y.
  • Wang, G.
  • Feng, F.
Clinical Radiology 2021 Journal Article, cited 0 times
Website
AIM To develop and validate a nomogram to predict 1-, 2-, and 5-year survival in patients with non-small-cell lung cancer (NSCLC) by combining optimised radiomics features, clinicopathological factors, and conventional image features extracted from three-dimensional (3D) computed tomography (CT) images. MATERIALS AND METHODS A total of 172 patients with NSCLC were selected to construct the model, and 74 and 72 patients were selected for internal validation and external testing, respectively. A total of 828 radiomics features were extracted from each patient's 3D CT images. Univariable Cox regression and least absolute shrinkage and selection operator (LASSO) regression were used to select features and generate a radiomics signature (radscore). The performance of the nomogram was evaluated by calibration curves, clinical practicability, and the c-index. Kaplan–Meier (KM) analysis was used to compare overall survival (OS) between the two subgroups. RESULTS The radiomics features of the NSCLC patients correlated significantly with survival time. The c-indexes of the nomogram in the training cohort, internal validation cohort, and external test cohort were 0.670, 0.658, and 0.660, respectively. The calibration curves showed that the predicted survival time was close to the actual survival time. Decision curve analysis showed that the nomogram could be useful in the clinic. According to KM analysis, the 1-, 2- and 5-year survival rates of the low-risk group were higher than those of the high-risk group. CONCLUSION The nomogram, combining the radscore, clinicopathological factors, and conventional CT parameters, can improve the accuracy of survival prediction in patients with NSCLC.

An efficient interactive multi-label segmentation tool for 2D and 3D medical images using fully connected conditional random field

  • Li, R.
  • Chen, X.
Comput Methods Programs Biomed 2022 Journal Article, cited 2 times
Website
OBJECTIVE: Image segmentation is a crucial and fundamental step in many medical image analysis tasks, such as tumor measurement, surgery planning, disease diagnosis, etc. To ensure the quality of image segmentation, most of the current solutions require labor-intensive manual processes by tracing the boundaries of the objects. The workload increases tremendously for the case of three dimensional (3D) image with multiple objects to be segmented. METHOD: In this paper, we introduce our developed interactive image segmentation tool that provides efficient segmentation of multiple labels for both 2D and 3D medical images. The core segmentation method is based on a fast implementation of the fully connected conditional random field. The software also enables automatic recommendation of the next slice to be annotated in 3D, leading to a higher efficiency. RESULTS: We have evaluated the tool on many 2D and 3D medical image modalities (e.g. CT, MRI, ultrasound, X-ray, etc.) and different objects of interest (abdominal organs, tumor, bones, etc.), in terms of segmentation accuracy, repeatability and computational time. CONCLUSION: In contrast to other interactive image segmentation tools, our software produces high quality image segmentation results without the requirement of parameter tuning for each application.
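
A hedged sketch of fully connected CRF refinement using the pydensecrf package, which implements the dense CRF this tool builds on; all kernel parameters below are common defaults rather than the tool's settings:

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def refine_with_dense_crf(image, probs, n_iters=5):
    """Refine per-pixel class probabilities with a fully connected CRF.

    image: (H, W, 3) uint8 RGB; probs: (C, H, W) softmax output.
    """
    c, h, w = probs.shape
    d = dcrf.DenseCRF2D(w, h, c)
    d.setUnaryEnergy(unary_from_softmax(probs.astype(np.float32)))
    # Smoothness kernel: nearby pixels prefer the same label.
    d.addPairwiseGaussian(sxy=3, compat=3)
    # Appearance kernel: similar-looking nearby pixels prefer the same label.
    d.addPairwiseBilateral(sxy=80, srgb=13,
                           rgbim=np.ascontiguousarray(image), compat=10)
    q = d.inference(n_iters)
    return np.argmax(np.array(q).reshape(c, h, w), axis=0)
```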

TumorGAN: A Multi-Modal Data Augmentation Framework for Brain Tumor Segmentation

  • Li, Q.
  • Yu, Z.
  • Wang, Y.
  • Zheng, H.
Sensors (Basel) 2020 Journal Article, cited 41 times
Website
The high human labor demand involved in collecting paired medical imaging data severely impedes the application of deep learning methods to medical image processing tasks such as tumor segmentation. The situation is further worsened when collecting multi-modal image pairs. However, this issue can be resolved through the help of generative adversarial networks, which can be used to generate realistic images. In this work, we propose a novel framework, named TumorGAN, to generate image segmentation pairs based on unpaired adversarial training. To improve the quality of the generated images, we introduce a regional perceptual loss to enhance the performance of the discriminator. We also develop a regional L1 loss to constrain the color of the imaged brain tissue. Finally, we verify the performance of TumorGAN on a public brain tumor data set, BraTS 2017. The experimental results demonstrate that the synthetic data pairs generated by our proposed method can practically improve tumor segmentation performance when applied to segmentation network training.
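
One plausible implementation of the regional L1 loss that constrains the color of imaged brain tissue (the masking scheme is an assumption, not the paper's exact definition):

```python
import torch

def regional_l1_loss(fake, real, region_mask):
    """L1 computed only inside a tissue-region mask, so the generator is
    penalized for color drift in brain tissue rather than background.

    fake, real: (N, C, H, W); region_mask: (N, 1, H, W) binary.
    """
    diff = (fake - real).abs() * region_mask
    return diff.sum() / (region_mask.sum() * fake.shape[1] + 1e-8)
```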

Comparison Between Radiological Semantic Features and Lung-RADS in Predicting Malignancy of Screen-Detected Lung Nodules in the National Lung Screening Trial

  • Li, Qian
  • Balagurunathan, Yoganand
  • Liu, Ying
  • Qi, Jin
  • Schabath, Matthew B
  • Ye, Zhaoxiang
  • Gillies, Robert J
Clinical Lung Cancer 2017 Journal Article, cited 3 times
Website

A Fully-Automatic Multiparametric Radiomics Model: Towards Reproducible and Prognostic Imaging Signature for Prediction of Overall Survival in Glioblastoma Multiforme

  • Li, Qihua
  • Bai, Hongmin
  • Chen, Yinsheng
  • Sun, Qiuchang
  • Liu, Lei
  • Zhou, Sijie
  • Wang, Guoliang
  • Liang, Chaofeng
  • Li, Zhi-Cheng
Scientific Reports 2017 Journal Article, cited 9 times
Website

mResU-Net: multi-scale residual U-Net-based brain tumor segmentation from multimodal MRI

  • Li, P.
  • Li, Z.
  • Wang, Z.
  • Li, C.
  • Wang, M.
2023 Journal Article, cited 0 times
Website
Brain tumor segmentation is an important direction in medical image processing, and its main goal is to accurately mark the tumor part in brain MRI. This study proposes a brand new end-to-end model for brain tumor segmentation, which is a multi-scale deep residual convolutional neural network called mResU-Net. The semantic gap between the encoder and decoder is bridged by using skip connections in the U-Net structure. The residual structure is used to alleviate the vanishing gradient problem during training and ensure sufficient information in deep networks. On this basis, multi-scale convolution kernels are used to improve the segmentation accuracy of targets of different sizes. At the same time, we also integrate channel attention modules into the network to improve its accuracy. The proposed model has an average dice score of 0.9289, 0.9277, and 0.8965 for tumor core (TC), whole tumor (WT), and enhanced tumor (ET) on the BraTS 2021 dataset, respectively. Comparing the segmentation results of this method with existing techniques shows that mResU-Net can significantly improve the segmentation performance of brain tumor subregions.
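
A small PyTorch sketch of a residual block with parallel multi-scale kernels and channel attention, in the spirit of the block the abstract describes; branch widths and kernel sizes are illustrative:

```python
import torch
import torch.nn as nn

class MultiScaleResBlock(nn.Module):
    """Residual block with parallel 3x3 and 5x5 branches plus
    squeeze-and-excitation channel attention. ch is assumed even."""

    def __init__(self, ch):
        super().__init__()
        self.branch3 = nn.Conv2d(ch, ch // 2, 3, padding=1)
        self.branch5 = nn.Conv2d(ch, ch // 2, 5, padding=2)
        self.bn = nn.BatchNorm2d(ch)
        self.relu = nn.ReLU(inplace=True)
        # Channel attention: global pooling -> bottleneck -> per-channel gate.
        self.att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // 4, ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        y = torch.cat([self.branch3(x), self.branch5(x)], dim=1)
        y = self.relu(self.bn(y))
        y = y * self.att(y)       # recalibrate channels
        return self.relu(x + y)   # residual connection
```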

Biomechanical model for computing deformations for whole‐body image registration: A meshless approach

  • Li, Mao
  • Miller, Karol
  • Joldes, Grand Roman
  • Kikinis, Ron
  • Wittek, Adam
International Journal for Numerical Methods in Biomedical Engineering 2016 Journal Article, cited 13 times
Website

Patient-specific biomechanical model as whole-body CT image registration tool

  • Li, Mao
  • Miller, Karol
  • Joldes, Grand Roman
  • Doyle, Barry
  • Garlapati, Revanth Reddy
  • Kikinis, Ron
  • Wittek, Adam
Medical Image Analysis 2015 Journal Article, cited 15 times
Website
Whole-body computed tomography (CT) image registration is important for cancer diagnosis, therapy planning and treatment. Such registration requires accounting for large differences between source and target images caused by deformations of soft organs/tissues and articulated motion of skeletal structures. The registration algorithms relying solely on image processing methods exhibit deficiencies in accounting for such deformations and motion. We propose to predict the deformations and movements of body organs/tissues and skeletal structures for whole-body CT image registration using patient-specific non-linear biomechanical modelling. Unlike the conventional biomechanical modelling, our approach for building the biomechanical models does not require time-consuming segmentation of CT scans to divide the whole body into non-overlapping constituents with different material properties. Instead, a Fuzzy C-Means (FCM) algorithm is used for tissue classification to assign the constitutive properties automatically at integration points of the computation grid. We use only very simple segmentation of the spine when determining vertebrae displacements to define loading for biomechanical models. We demonstrate the feasibility and accuracy of our approach on CT images of seven patients suffering from cancer and aortic disease. The results confirm that accurate whole-body CT image registration can be achieved using a patient-specific non-linear biomechanical model constructed without time-consuming segmentation of the whole-body images. (C) 2015 Elsevier B.V. All rights reserved.
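
The Fuzzy C-Means tissue-classification step can be sketched in plain NumPy as below; this is an illustrative re-implementation of the standard algorithm, not the authors' code:

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=4, m=2.0, n_iters=100, tol=1e-5, rng=None):
    """Plain NumPy fuzzy C-means for voxel intensity classification.

    x: (n_voxels, n_features); returns (memberships, centroids).
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1
    for _ in range(n_iters):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]
        # Distance of every voxel to every cluster centre.
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        u_new = 1.0 / (d ** (2.0 / (m - 1.0)))
        u_new /= u_new.sum(axis=1, keepdims=True)
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return u, centers
```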

Accurate pancreas segmentation using multi-level pyramidal pooling residual U-Net with adversarial mechanism

  • Li, M.
  • Lian, F.
  • Wang, C.
  • Guo, S.
BMC Med Imaging 2021 Journal Article, cited 0 times
Website
BACKGROUND: A novel multi-level pyramidal pooling residual U-Net with an adversarial mechanism was proposed for organ segmentation from medical imaging and was evaluated on the challenging NIH Pancreas-CT dataset. METHODS: The 82 pancreatic contrast-enhanced abdominal CT volumes were split via four-fold cross-validation to test model performance. To achieve accurate segmentation, we first introduced residual learning into an adversarial U-Net to obtain better gradient information flow and improve segmentation performance. We then introduced a multi-level pyramidal pooling module (MLPP), in which a novel pyramidal pooling gathers contextual information for segmentation; four structures consisting of different numbers of pyramidal pooling blocks were proposed to search for the structure with optimal performance, and two types of pooling blocks were applied in the experiments to further assess the robustness of MLPP for pancreas segmentation. For evaluation, the Dice similarity coefficient (DSC) and recall were used as metrics. RESULTS: The proposed method outperformed the baseline network by 5.30% and 6.16% on DSC and recall, respectively, and achieved competitive results compared with state-of-the-art methods. CONCLUSIONS: Our algorithm showed strong segmentation performance even on the particularly challenging pancreas dataset, indicating that the proposed model is a satisfactory and promising segmentor.

Attention-guided duplex adversarial U-net for pancreatic segmentation from computed tomography images

  • Li, M.
  • Lian, F.
  • Li, Y.
  • Guo, S.
J Appl Clin Med Phys 2022 Journal Article, cited 0 times
Website
PURPOSE: Segmenting organs from computed tomography (CT) images is crucial for early diagnosis and treatment. Pancreas segmentation is especially challenging because the pancreas has a small volume and a large variation in shape. METHODS: To mitigate this issue, an attention-guided duplex adversarial U-Net (ADAU-Net) for pancreas segmentation is proposed in this work. First, two adversarial networks are integrated into the baseline U-Net to ensure the obtained prediction maps resemble the ground truths. Then, attention blocks are applied to preserve contextual information for segmentation. The implementation of the proposed ADAU-Net consists of two steps: 1) a backbone segmentor selection scheme is introduced to select an optimal backbone segmentor from three two-dimensional segmentation model variants based on a conventional U-Net; and 2) attention blocks are integrated into the backbone segmentor at several locations to enhance the interdependency among pixels for better segmentation performance, and the optimal structure is selected as the final version. RESULTS: The experimental results on the National Institutes of Health Pancreas-CT dataset show that our proposed ADAU-Net outperforms the baseline segmentation network by 6.39% in Dice similarity coefficient and obtains competitive performance compared with state-of-the-art methods for pancreas segmentation. CONCLUSION: ADAU-Net achieves satisfactory segmentation results on the public pancreas dataset, indicating that the proposed model can segment pancreas outlines from CT images accurately.

An Adversarial Network Embedded with Attention Mechanism for Pancreas Segmentation

  • Li, Meiyu
  • Lian, Fenghui
  • Guo, Shuxu
2021 Conference Paper, cited 1 times
Website
Pancreas segmentation plays an important role in the diagnosis of pancreatic diseases and related complications. However, accurately segmenting pancreas from computed tomography (CT) images tends to be challenging due to the limited proportion and irregular shape of pancreas in the abdominal CT volume. To solve this issue, we propose an adversarial network embedded with attention mechanism for pancreas segmentation in this paper. The involvement of generative adversarial network contributes to retaining much spatial information for segmentation through capturing high dimensional data distributions due to its competing mechanism between the discriminator and the generator. Furthermore, the application of attention mechanism enhances the interdependency among pixels, and thus containing contextual information for segmentation. Experimental results show that our proposed model achieves competitive performance compared with most pancreas segmentation methods.

Multi-scale Selection and Multi-channel Fusion Model for Pancreas Segmentation Using Adversarial Deep Convolutional Nets

  • Li, M.
  • Lian, F.
  • Guo, S.
J Digit Imaging 2021 Journal Article, cited 0 times
Website
Organ segmentation from existing imaging is vital to medical image analysis and disease diagnosis. However, the boundary shapes and area sizes of the target region tend to be diverse and flexible, and the frequent use of pooling operations in traditional segmentors results in the loss of spatial information that is advantageous for segmentation. These issues pose challenges for accurate organ segmentation from medical imaging, particularly for organs with small volumes and variable shapes such as the pancreas. To offset this information loss, we propose a deep convolutional neural network (DCNN) named the multi-scale selection and multi-channel fusion segmentation model (MSC-DUnet) for pancreas segmentation. The proposed model contains three stages that collect detailed cues for accurate segmentation: (1) increasing the consistency between the distributions of the output probability maps from the segmentor and the original samples through an adversarial mechanism that can capture spatial distributions, (2) gathering global spatial features from several receptive fields via multi-scale field selection (MSFS), and (3) integrating multi-level features located at varying network positions through the multi-channel fusion module (MCFM). Experimental results on the NIH Pancreas-CT dataset show that the proposed MSC-DUnet improves on the baseline network by 5.1% in the Dice similarity coefficient (DSC), indicating that MSC-DUnet has great potential for pancreas segmentation.

Evaluating the performance of a deep learning‐based computer‐aided diagnosis (DL‐CAD) system for detecting and characterizing lung nodules: Comparison with the performance of double reading by radiologists

  • Li, Li
  • Liu, Zhou
  • Huang, Hua
  • Lin, Meng
  • Luo, Dehong
Thoracic cancer 2018 Journal Article, cited 0 times
Website

ITHscore: comprehensive quantification of intra-tumor heterogeneity in NSCLC by multi-scale radiomic features

  • Li, J.
  • Qiu, Z.
  • Zhang, C.
  • Chen, S.
  • Wang, M.
  • Meng, Q.
  • Lu, H.
  • Wei, L.
  • Lv, H.
  • Zhong, W.
  • Zhang, X.
Eur Radiol 2022 Journal Article, cited 0 times
Website
OBJECTIVES: To quantify intra-tumor heterogeneity (ITH) in non-small cell lung cancer (NSCLC) from computed tomography (CT) images. METHODS: We developed a quantitative ITH measurement, ITHscore, by integrating local radiomic features and global pixel distribution patterns. The associations of ITHscore with tumor phenotypes, genotypes, and patient prognosis were examined in six patient cohorts (n = 1399) to validate its effectiveness in characterizing ITH. RESULTS: For stage I NSCLC, ITHscore was consistent with tumor progression from stage IA1 to IA3 (p < 0.001) and captured key pathological change in terms of malignancy (p < 0.001). ITHscore distinguished the presence of lymphovascular invasion (p = 0.003) and pleural invasion (p = 0.001) in tumors. ITHscore also separated patient groups with different overall survival (p = 0.004) and disease-free survival conditions (p = 0.005). Radiogenomic analysis showed that the level of ITHscore in stage I and stage II NSCLC is correlated with heterogeneity-related pathways. In addition, ITHscore proved to be a stable measurement and can be applied to ITH quantification in head-and-neck cancer (HNC). CONCLUSIONS: ITH in NSCLC can be quantified from CT images by ITHscore, which is an indicator of tumor phenotypes and patient prognosis. KEY POINTS: * ITHscore provides a radiomic quantification of intra-tumor heterogeneity in NSCLC. * ITHscore is an indicator of tumor phenotypes and patient prognosis. * ITHscore has the potential to be generalized to other cancer types such as HNC.

Gradient-Rebalanced Uncertainty Minimization for Cross-Site Adaptation of Medical Image Segmentation

  • Li, Jiaming
  • Fang, Chaowei
  • Li, Guanbin
2022 Conference Paper, cited 0 times
Website
Automatically adapting image segmentation across data sites benefits to reduce the data annotation burden in medical image analysis. Due to variations in image collection procedures, there usually exists moderate domain gap between medical image datasets from different sites. Increasing the prediction certainty is beneficial for gradually reducing the category-wise domain shift. However, uncertainty minimization naturally leads to bias towards major classes since the target object usually occupies a small portion of pixels in the input image. In this paper, we propose a gradient-rebalanced uncertainty minimization scheme which is capable of eliminating the learning bias. First, the foreground pixels and background pixels are reweighted according to the total gradient amplitude of every class. Furthermore, we devise a feature-level adaptation scheme to reduce the overall domain gap between source and target datasets, based on feature norm regularization and adversarial learning. Experiments on CT pancreas segmentation and MRI prostate segmentation validate that, our method outperforms existing cross-site adaptation algorithms by around 3% on the DICE similarity coefficient.
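
One plausible reading of the gradient-rebalanced uncertainty minimization, sketched as a class-rebalanced entropy loss (the exact rebalancing rule here is an approximation, not the paper's):

```python
import torch

def rebalanced_entropy_loss(probs, eps=1e-8):
    """Entropy (uncertainty) minimization with foreground/background
    rebalancing: each class's pixels are reweighted so the two classes
    contribute equal total loss mass, approximating rebalancing by
    total per-class gradient amplitude.

    probs: (N, 2, H, W) softmax outputs on unlabeled target images.
    """
    ent = -(probs * (probs + eps).log()).sum(dim=1)  # (N, H, W)
    fg = probs.argmax(dim=1) == 1                    # predicted foreground
    losses = []
    for mask in (fg, ~fg):
        if mask.any():
            losses.append(ent[mask].mean())  # per-class mean equalizes weight
    return sum(losses) / len(losses)
```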

Multiomics profiling reveals the benefits of gamma-delta (γδ) T lymphocytes for improving the tumor microenvironment, immunotherapy efficacy and prognosis in cervical cancer

  • Li, J.
  • Cao, Y.
  • Liu, Y.
  • Yu, L.
  • Zhang, Z.
  • Wang, X.
  • Bai, H.
  • Zhang, Y.
  • Liu, S.
  • Gao, M.
  • Lu, C.
  • Li, C.
  • Guan, Y.
  • Tao, Z.
  • Wu, Z.
  • Chen, J.
  • Yuan, Z.
2024 Journal Article, cited 0 times
Website
BACKGROUND: As an unconventional subpopulation of T lymphocytes, γδ T cells can recognize antigens independently of major histocompatibility complex restrictions. Recent studies have indicated that γδ T cells play contrasting roles in tumor microenvironments, promoting tumor progression in some cancers (e.g., gallbladder and leukemia) while suppressing it in others (e.g., lung and gastric). γδ T cells are mainly enriched in peripheral mucosal tissues. As the cervix is a mucosa-rich tissue, the role of γδ T cells in cervical cancer warrants further investigation. METHODS: We employed a multiomics strategy that integrated abundant data from single-cell and bulk transcriptome sequencing, whole exome sequencing, genotyping array, immunohistochemistry, and MRI. RESULTS: Heterogeneity was observed in the level of γδ T-cell infiltration in cervical cancer tissues, mainly associated with the tumor somatic mutational landscape. γδ T cells clearly play a beneficial role in the prognosis of patients with cervical cancer. First, γδ T cells exert direct cytotoxic effects in the tumor microenvironment of cervical cancer through the dynamic evolution of cellular states at both poles. Second, higher levels of γδ T-cell infiltration also shape an immune-activated microenvironment with cancer-suppressive properties. We found that these intricate features can be captured by MRI-based radiomics models to non-invasively assess γδ T-cell proportions in tumor tissues. Importantly, patients with high infiltration levels of γδ T cells may be more amenable to immunotherapies, including immune checkpoint inhibitors and autologous tumor-infiltrating lymphocyte therapies, than to chemoradiotherapy. CONCLUSIONS: γδ T cells play a beneficial role in antitumor immunity in cervical cancer. The abundance of γδ T cells in cervical cancerous tissue is associated with higher response rates to immunotherapy.

Quantitative MRI radiomics in the prediction of molecular classifications of breast cancer subtypes in the TCGA/TCIA data set

  • Li, Hui
  • Zhu, Yitan
  • Burnside, Elizabeth S
  • Huang, Erich
  • Drukker, Karen
  • Hoadley, Katherine A
  • Fan, Cheng
  • Conzen, Suzanne D
  • Zuley, Margarita
  • Net, Jose M
npj Breast Cancer 2016 Journal Article, cited 63 times
Website

MR Imaging Radiomics Signatures for Predicting the Risk of Breast Cancer Recurrence as Given by Research Versions of MammaPrint, Oncotype DX, and PAM50 Gene Assays

  • Li, Hui
  • Zhu, Yitan
  • Burnside, Elizabeth S
  • Drukker, Karen
  • Hoadley, Katherine A
  • Fan, Cheng
  • Conzen, Suzanne D
  • Whitman, Gary J
  • Sutton, Elizabeth J
  • Net, Jose M
Radiology 2016 Journal Article, cited 103 times
Website

DT-MIL: Deformable Transformer for Multi-instance Learning on Histopathological Image

  • Li, Hang
  • Yang, Fan
  • Zhao, Yu
  • Xing, Xiaohan
  • Zhang, Jun
  • Gao, Mingxuan
  • Huang, Junzhou
  • Wang, Liansheng
  • Yao, Jianhua
2021 Conference Proceedings, cited 0 times
Website

Low-Dose CT streak artifacts removal using deep residual neural network

  • Li, Heyi
  • Mueller, Klaus
2017 Conference Proceedings, cited 6 times
Website

A proposed artificial intelligence workflow to address application challenges leveraged on algorithm uncertainty

  • Li, D.
  • Hu, L.
  • Peng, X.
  • Xiao, N.
  • Zhao, H.
  • Liu, G.
  • Liu, H.
  • Li, K.
  • Ai, B.
  • Xia, H.
  • Lu, L.
  • Gao, Y.
  • Wu, J.
  • Liang, H.
2022 Journal Article, cited 3 times
Website
Artificial intelligence (AI) has achieved state-of-the-art performance in medical imaging. However, most algorithms focus exclusively on improving classification accuracy while neglecting the major challenges of real-world application. The opacity of algorithms prevents users from knowing when the algorithms might fail, and the natural gap between training datasets and real-world data may lead to unexpected AI system malfunctions. Knowing the underlying uncertainty is essential for improving system reliability. We therefore developed a COVID-19 AI system that utilizes a Bayesian neural network to calculate classification uncertainties and reliability intervals of datasets. Validated with four multi-region datasets simulating different scenarios, our approach proved effective at flagging likely system failures and handing decision power to human experts in time. Leveraging the complementary strengths of AI and health professionals, the present method has the potential to improve the practicability of AI systems in clinical application.
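
The paper builds on a Bayesian neural network; a common lightweight stand-in for estimating the same kind of predictive uncertainty is Monte Carlo dropout, sketched here with a placeholder model:

```python
import torch

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=30):
    """Predictive mean and entropy via Monte Carlo dropout: dropout stays
    stochastic at test time and stochastic forward passes are averaged."""
    model.eval()
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()  # re-enable only dropout, leaving batch norm frozen
    probs = torch.stack(
        [torch.softmax(model(x), dim=1) for _ in range(n_samples)]
    ).mean(dim=0)
    entropy = -(probs * (probs + 1e-8).log()).sum(dim=1)
    return probs, entropy

# Cases whose entropy exceeds a validation-tuned threshold would be
# deferred to human experts instead of being auto-classified.
```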

Multi-Dimensional Cascaded Net with Uncertain Probability Reduction for Abdominal Multi-Organ Segmentation in CT Sequences

  • Li, C.
  • Mao, Y.
  • Guo, Y.
  • Li, J.
  • Wang, Y.
Comput Methods Programs Biomed 2022 Journal Article, cited 0 times
Website
BACKGROUND AND OBJECTIVE: Deep learning abdominal multi-organ segmentation provides preoperative guidance for abdominal surgery. However, due to the large volume of 3D CT sequences, the existing methods cannot balance complete semantic features and high-resolution detail information, which leads to uncertain, rough, and inaccurate segmentation, especially in small and irregular organs. In this paper, we propose a two-stage algorithm named multi-dimensional cascaded net (MDCNet) to solve the above problems and segment multi-organs in CT images, including the spleen, kidney, gallbladder, esophagus, liver, stomach, pancreas, and duodenum. METHODS: MDCNet combines the powerful semantic encoder ability of a 3D net and the rich high-resolution information of a 2.5D net. In stage1, a prior-guided shallow-layer-enhanced 3D location net extracts entire semantic features from a downsampled CT volume to perform rough segmentation. Additionally, we use circular inference and parameter Dice loss to alleviate uncertain boundary. The inputs of stage2 are high-resolution slices, which are obtained by the original image and coarse segmentation of stage1. Stage2 offsets the details lost during downsampling, resulting in smooth and accurate refined contours. The 2.5D net from the axial, coronal, and sagittal views also compensates for the missing spatial information of a single view. RESULTS: The experiments on the two datasets both obtained the best performance, particularly a higher Dice on small gallbladders and irregular duodenums, which reached 0.85+/-0.12 and 0.77+/-0.07 respectively, increasing by 0.02 and 0.03 compared to the state-of-the-art method. CONCLUSION: Our method can extract all semantic and high-resolution detail information from a large-volume CT image. It reduces the boundary uncertainty while yielding smoother segmentation edges, indicating good clinical application prospects.

Application of Deep Learning on the Prognosis of Cutaneous Melanoma Based on Full Scan Pathology Images

• Li, Anhai
  • Li, Xiaoyuan
  • Li, Wenwen
  • Yu, Xiaoqian
  • Qi, Mengmeng
  • Li, Ding
2022 Journal Article, cited 0 times
Website
INTRODUCTION: The purpose of this study is to use deep learning and machine learning to classify patients with cutaneous melanoma into different prognostic groups and to explore the application value of deep learning in the prognosis of cutaneous melanoma patients. METHODS: For deep learning, VGG-19 is selected as the network architecture and learning model for training and classification. For machine learning, deep features are extracted through the VGG-19 network architecture, and a support vector machine (SVM) model is selected for training and classification. We compare the two approaches to explore the value of deep learning and machine learning in predicting the prognosis of patients with cutaneous melanoma. RESULTS: According to receiver operating characteristic (ROC) curves and the area under the curve (AUC), the average accuracy of deep learning is higher than that of machine learning, and even its lowest accuracy is better than that of machine learning. CONCLUSION: As training increases, the accuracy of both machine learning and deep learning improves, but for the same number of cutaneous melanoma pathology images, deep learning achieves higher accuracy. This study provides new ideas and theory for computational pathology in predicting the prognosis of patients with cutaneous melanoma.
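
A sketch of the deep-feature-plus-SVM arm of the comparison: a truncated VGG-19 produces fixed features for a classical SVM (requires a recent torchvision for the weights enum; preprocessing and data are placeholders):

```python
import torch
import torchvision.models as models
from sklearn.svm import SVC

# Use the VGG-19 convolutional trunk as a fixed deep-feature extractor.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
vgg.eval()

@torch.no_grad()
def deep_features(batch):                  # batch: (N, 3, 224, 224), normalized
    x = vgg.features(batch)                # conv feature maps
    x = vgg.avgpool(x)                     # (N, 512, 7, 7)
    return torch.flatten(x, 1).numpy()     # (N, 25088)

# feats_train, labels_train would come from pathology image tiles, e.g.:
# clf = SVC(kernel="rbf").fit(feats_train, labels_train)
```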

Automated Segmentation of Prostate MR Images Using Prior Knowledge Enhanced Random Walker

  • Li, Ang
  • Li, Changyang
  • Wang, Xiuying
  • Eberl, Stefan
  • Feng, David Dagan
  • Fulham, Michael
2013 Conference Proceedings, cited 9 times
Website

A publicly available deep learning model and dataset for segmentation of breast, fibroglandular tissue, and vessels in breast MRI

  • Lew, C. O.
  • Harouni, M.
  • Kirksey, E. R.
  • Kang, E. J.
  • Dong, H.
  • Gu, H.
  • Grimm, L. J.
  • Walsh, R.
  • Lowell, D. A.
  • Mazurowski, M. A.
2024 Journal Article, cited 0 times
Website
Breast density, or the amount of fibroglandular tissue (FGT) relative to the overall breast volume, increases the risk of developing breast cancer. Although previous studies have utilized deep learning to assess breast density, the limited public availability of data and quantitative tools hinders the development of better assessment tools. Our objective was to (1) create and share a large dataset of pixel-wise annotations according to well-defined criteria, and (2) develop, evaluate, and share an automated segmentation method for breast, FGT, and blood vessels using convolutional neural networks. We used the Duke Breast Cancer MRI dataset to randomly select 100 MRI studies and manually annotated the breast, FGT, and blood vessels for each study. Model performance was evaluated using the Dice similarity coefficient (DSC). The model achieved DSC values of 0.92 for breast, 0.86 for FGT, and 0.65 for blood vessels on the test set. The correlation between our model's predicted breast density and the manually generated masks was 0.95. The correlation between the predicted breast density and qualitative radiologist assessment was 0.75. Our automated models can accurately segment breast, FGT, and blood vessels using pre-contrast breast MRI data. The data and the models were made publicly available.

LoDoPaB-CT, a benchmark dataset for low-dose computed tomography reconstruction

  • Leuschner, J.
  • Schmidt, M.
  • Baguer, D. O.
  • Maass, P.
Sci Data 2021 Journal Article, cited 0 times
Website
Deep learning approaches for tomographic image reconstruction have become very effective and have been demonstrated to be competitive in the field. Comparing these approaches is a challenging task as they rely to a great extent on the data and setup used for training. With the Low-Dose Parallel Beam (LoDoPaB)-CT dataset, we provide a comprehensive, open-access database of computed tomography images and simulated low photon count measurements. It is suitable for training and comparing deep learning methods as well as classical reconstruction approaches. The dataset contains over 40000 scan slices from around 800 patients selected from the LIDC/IDRI database. The data selection and simulation setup are described in detail, and the generating script is publicly accessible. In addition, we provide a Python library for simplified access to the dataset and an online reconstruction challenge. Furthermore, the dataset can also be used for transfer learning as well as sparse and limited-angle reconstruction scenarios.

Automated Whole-Body Tumor Segmentation and Prognosis of Cancer on PET/CT

  • Leung, Kevin H.
2023 Conference Paper, cited 0 times
Automatic characterization of malignant disease is an important clinical need to facilitate early detection and treatment of cancer. A deep semi-supervised transfer learning approach was developed for automated whole-body tumor segmentation and prognosis on positron emission tomography (PET)/computed tomography (CT) scans using limited annotations. This study analyzed five datasets consisting of 408 prostate-specific membrane antigen (PSMA) PET/CT scans of prostate cancer patients and 611 18F-fluorodeoxyglucose (18F-FDG) PET/CT scans of lung, melanoma, lymphoma, head and neck, and breast cancer patients. Transfer learning generalized the segmentation task across PSMA and 18F-FDG PET/CT. Imaging measures quantifying molecular tumor burden were extracted from the predicted segmentations. Prognostic risk models were developed and evaluated on follow-up clinical measures, Kaplan-Meier survival analysis, and response assessment for patients with prostate, head and neck, and breast cancers, respectively. The proposed approach demonstrated accurate tumor segmentation and prognosis on PET/CT of patients across six cancer types.

Multimodal Brain Tumor Classification

  • Lerousseau, Marvin
  • Deutsch, Eric
  • Paragios, Nikos
2021 Book Section, cited 0 times
Cancer is a complex disease that provides various types of information depending on the scale of observation. While most tumor diagnostics are performed by observing histopathological slides, radiology images should yield additional knowledge towards the efficacy of cancer diagnostics. This work investigates a deep learning method combining whole slide images and magnetic resonance images to classify tumors. In particular, our solution comprises a powerful, generic and modular architecture for whole slide image classification. Experiments are prospectively conducted on the 2020 Computational Precision Medicine challenge, a three-class unbalanced classification task. We report cross-validation (resp. validation) balanced accuracy, kappa and F1 of 0.913, 0.897 and 0.951 (resp. 0.91, 0.90 and 0.94). For research purposes, including reproducibility and direct performance comparisons, our final submitted models are usable off-the-shelf in a Docker image available at https://hub.docker.com/repository/docker/marvinler/cpm_2020_marvinler.

The Impact of Obesity on Tumor Glucose Uptake in Breast and Lung Cancer

  • Leitner, Brooks P.
  • Perry, Rachel J.
JNCI Cancer Spectrum 2020 Journal Article, cited 0 times
Website
Obesity confers an increased incidence and poorer clinical prognosis in over ten cancer types. Paradoxically, obesity provides protection from poor outcomes in lung cancer. Mechanisms for the obesity-cancer links are not fully elucidated, with altered glucose metabolism being a promising candidate. Using 18F-Fluorodeoxyglucose positron-emission-tomography/computed-tomography images from The Cancer Imaging Archive, we explored the relationship between body mass index (BMI) and glucose metabolism in several cancers. In 188 patients (BMI: 27.7, SD = 5.1, Range = 17.4-49.3 kg/m2), higher BMI was associated with greater tumor glucose uptake in obesity-associated breast cancer r = 0.36, p = 0.02), and with lower tumor glucose uptake in non-small-cell lung cancer (r=-0.26, p = 0.048) using two-sided Pearson correlations. No relationship was observed in soft tissue sarcoma or squamous cell carcinoma. Harnessing The National Cancer Institute’s open-access database, we demonstrate altered tumor glucose metabolism as a potential mechanism for the detrimental and protective effects of obesity on breast and lung cancer, respectively.

Multimodal analysis suggests differential immuno-metabolic crosstalk in lung squamous cell carcinoma and adenocarcinoma

  • Leitner, B. P.
  • Givechian, K. B.
  • Ospanova, S.
  • Beisenbayeva, A.
  • Politi, K.
  • Perry, R. J.
NPJ Precis Oncol 2022 Journal Article, cited 0 times
Website
Immunometabolism within the tumor microenvironment is an appealing target for precision therapy approaches in lung cancer. Interestingly, obesity confers an improved response to immune checkpoint inhibition in non-small cell lung cancer (NSCLC), suggesting intriguing relationships between systemic metabolism and the immunometabolic environment in lung tumors. We hypothesized that visceral fat and (18)F-Fluorodeoxyglucose uptake influenced the tumor immunometabolic environment and that these bidirectional relationships differ in NSCLC subtypes, lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC). By integrating (18)F-FDG PET/CT imaging, bulk and single-cell RNA-sequencing, and histology, we observed that LUSC had a greater dependence on glucose than LUAD. In LUAD tumors with high glucose uptake, glutaminase was downregulated, suggesting a tradeoff between glucose and glutamine metabolism, while in LUSC tumors with high glucose uptake, genes related to fatty acid and amino acid metabolism were also increased. We found that tumor-infiltrating T cells had the highest expression of glutaminase, ribosomal protein 37, and cystathionine gamma-lyase in NSCLC, highlighting the metabolic flexibility of this cell type. Further, we demonstrate that visceral adiposity, but not body mass index (BMI), was positively associated with tumor glucose uptake in LUAD and that patients with high BMI had favorable prognostic transcriptional profiles, while tumors of patients with high visceral fat had poor prognostic gene expression. We posit that metabolic adjunct therapy may be more successful in LUSC rather than LUAD due to LUAD's metabolic flexibility and that visceral adiposity, not BMI alone, should be considered when developing precision medicine approaches for the treatment of NSCLC.

Automated lung tumor delineation on positron emission tomography/computed tomography via a hybrid regional network

  • Lei, Y.
  • Wang, T.
  • Jeong, J. J.
  • Janopaul-Naylor, J.
  • Kesarwala, A. H.
  • Roper, J.
  • Tian, S.
  • Bradley, J. D.
  • Liu, T.
  • Higgins, K.
  • Yang, X.
Med Phys 2022 Journal Article, cited 0 times
Website
BACKGROUND: Multimodality positron emission tomography/computed tomography (PET/CT) imaging combines the anatomical information of CT with the functional information of PET. In the diagnosis and treatment of many cancers, such as non-small cell lung cancer (NSCLC), PET/CT imaging allows more accurate delineation of tumor or involved lymph nodes for radiation planning. PURPOSE: In this paper, we propose a hybrid regional network method of automatically segmenting lung tumors from PET/CT images. METHODS: The hybrid regional network architecture synthesizes the functional and anatomical information from the two image modalities, whereas the mask regional convolutional neural network (R-CNN) and scoring fine-tune the regional location and quality of the output segmentation. This model consists of five major subnetworks, that is, a dual feature representation network (DFRN), a regional proposal network (RPN), a specific tumor-wise R-CNN, a mask-Net, and a score head. Given a PET/CT image as inputs, the DFRN extracts feature maps from the PET and CT images. Then, the RPN and R-CNN work together to localize lung tumors and reduce the image size and feature map size by removing irrelevant regions. The mask-Net is used to segment tumor within a volume-of-interest (VOI) with a score head evaluating the segmentation performed by the mask-Net. Finally, the segmented tumor within the VOI was mapped back to the volumetric coordinate system based on the location information derived via the RPN and R-CNN. We trained, validated, and tested the proposed neural network using 100 PET/CT images of patients with NSCLC. A fivefold cross-validation study was performed. The segmentation was evaluated with two indicators: (1) multiple metrics, including the Dice similarity coefficient, Jacard, 95th percentile Hausdorff distance, mean surface distance (MSD), residual mean square distance, and center-of-mass distance; (2) Bland-Altman analysis and volumetric Pearson correlation analysis. RESULTS: In fivefold cross-validation, this method achieved Dice and MSD of 0.84 +/- 0.15 and 1.38 +/- 2.2 mm, respectively. A new PET/CT can be segmented in 1 s by this model. External validation on The Cancer Imaging Archive dataset (63 PET/CT images) indicates that the proposed model has superior performance compared to other methods. CONCLUSION: The proposed method shows great promise to automatically delineate NSCLC tumors on PET/CT images, thereby allowing for a more streamlined clinical workflow that is faster and reduces physician effort.

Multiple-response regression analysis links magnetic resonance imaging features to de-regulated protein expression and pathway activity in lower grade glioma

  • Lehrer, Michael
  • Bhadra, Anindya
  • Ravikumar, Visweswaran
  • Chen, James Y
  • Wintermark, Max
  • Hwang, Scott N
  • Holder, Chad A
  • Huang, Erich P
  • Fevrier-Sullivan, Brenda
  • Freymann, John B
  • Rao, Arvind
Oncoscience 2017 Journal Article, cited 1 times
Website
BACKGROUND AND PURPOSE: Lower grade gliomas (LGGs), lesions of WHO grades II and III, comprise 10-15% of primary brain tumors. In this first-of-a-kind study, we aim to carry out a radioproteomic characterization of LGGs using proteomics data from the TCGA and imaging data from the TCIA cohorts, to obtain an association between tumor MRI characteristics and protein measurements. The availability of linked imaging and molecular data permits the assessment of relationships between tumor genomic/proteomic measurements with phenotypic features. MATERIALS AND METHODS: Multiple-response regression of the image-derived, radiologist scored features with reverse-phase protein array (RPPA) expression levels generated correlation coefficients for each combination of image-feature and protein or phospho-protein in the RPPA dataset. Significantly-associated proteins for VASARI features were analyzed with Ingenuity Pathway Analysis software. Hierarchical clustering of the results of the pathway analysis was used to determine which feature groups were most strongly correlated with pathway activity and cellular functions. RESULTS: The multiple-response regression approach identified multiple proteins associated with each VASARI imaging feature. VASARI features were found to be correlated with expression of IL8, PTEN, PI3K/Akt, Neuregulin, ERK/MAPK, p70S6K and EGF signaling pathways. CONCLUSION: Radioproteomics analysis might enable an insight into the phenotypic consequences of molecular aberrations in LGGs.
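
A minimal sketch of the multiple-response regression step, under the assumption that `features` holds radiologist-scored VASARI features and `rppa` holds RPPA protein levels (both arrays here are synthetic stand-ins):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
features = rng.normal(size=(60, 10))   # hypothetical imaging features
rppa = rng.normal(size=(60, 200))      # hypothetical protein levels

# One multi-output fit: every protein is regressed on all imaging features.
model = LinearRegression().fit(features, rppa)
coefs = model.coef_                    # shape (n_proteins, n_features)

# Rank (feature, protein) pairs by absolute association strength.
idx = np.argsort(-np.abs(coefs), axis=None)[:5]
pairs = np.column_stack(np.unravel_index(idx, coefs.shape))
print(pairs)  # rows of (protein_index, feature_index)
```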

High-dimensional regression analysis links magnetic resonance imaging features and protein expression and signaling pathway alterations in breast invasive carcinoma

  • Lehrer, M.
  • Bhadra, A.
  • Aithala, S.
  • Ravikumar, V.
  • Zheng, Y.
  • Dogan, B.
  • Bonaccio, E.
  • Burnside, E. S.
  • Morris, E.
  • Sutton, E.
  • Whitman, G. J.
  • Net, J.
  • Brandt, K.
  • Ganott, M.
  • Zuley, M.
  • Rao, A.
  • Tcga Breast Phenotype Research Group
Oncoscience 2018 Journal Article, cited 0 times
Website
Background: Imaging features derived from MRI scans can be used not only for breast cancer detection and for measuring disease extent, but also to determine gene expression and patient outcomes. The relationships between imaging features, gene/protein expression, and response to therapy hold potential to guide personalized medicine. We aim to characterize the relationship between radiologist-annotated tumor phenotypic features (based on MRI) and the underlying biological processes (based on proteomic profiling) in the tumor. Methods: Multiple-response regression of the image-derived, radiologist-scored features with reverse-phase protein array expression levels generated association coefficients for each combination of image-feature and protein in the RPPA dataset. Significantly-associated proteins for features were analyzed with Ingenuity Pathway Analysis software. Hierarchical clustering of the results of the pathway analysis determined which features were most strongly correlated with pathway activity and cellular functions. Results: Each of the twenty-nine imaging features was found to have a set of significantly correlated molecules, associated biological functions, and pathways. Conclusions: We interrogated the pathway alterations represented by the protein expression associated with each imaging feature. Our study demonstrates the relationships between biological processes (via proteomic measurements) and MRI features within breast tumors.

HGG and LGG Brain Tumor Segmentation in Multi-Modal MRI Using Pretrained Convolutional Neural Networks of Amazon Sagemaker

  • Lefkovits, S.
  • Lefkovits, L.
  • Szilagyi, L.
2022 Journal Article, cited 0 times
Website
Automatic brain tumor segmentation from multimodal MRI plays a significant role in assisting the diagnosis, treatment, and surgery of glioblastoma and lower grade glioma. In this article, we propose applying several deep learning techniques implemented in the AWS SageMaker Framework. The different CNN architectures are adapted and fine-tuned for brain tumor segmentation. The experiments are evaluated and analyzed to obtain the best possible parameters for the models created. The selected architectures are trained on the publicly available BraTS 2017-2020 dataset. The segmentation distinguishes background, healthy tissue, whole tumor, edema, enhancing tumor, and necrosis. Further, a random search for parameter optimization is presented to further improve the architectures obtained. Lastly, we also compute the detection results of an ensemble model built from the weighted average of the six models described; the goal of the ensemble is to improve segmentation at the tumor tissue boundaries. Our results are compared with the BraTS 2020 competition leaderboard and rank within the top 25% by Dice score.

Are radiomics features universally applicable to different organs?

  • Lee, S. H.
  • Cho, H. H.
  • Kwon, J.
  • Lee, H. Y.
  • Park, H.
Cancer Imaging 2021 Journal Article, cited 0 times
Website
BACKGROUND: Many studies have successfully identified radiomics features reflecting macroscale tumor features and tumor microenvironment for various organs. There is an increased interest in applying these radiomics features found in a given organ to other organs. Here, we explored whether common radiomics features could be identified over target organs in vastly different environments. METHODS: Four datasets of three organs were analyzed. One radiomics model was constructed from the training set (lungs, n = 401), and was further evaluated in three independent test sets spanning three organs (lungs, n = 59; kidneys, n = 48; and brains, n = 43). Intensity histograms derived from the whole organ were compared to establish organ-level differences. We constructed a radiomics score based on selected features using training lung data over the tumor region. A total of 143 features were computed for each tumor. We adopted a feature selection approach that favored stable features, which can also capture survival. The radiomics score was applied to three independent test datasets of lung, kidney, and brain tumors, and we evaluated whether the score could separate high- and low-risk groups. RESULTS: Each organ showed a distinct pattern in the histogram and the derived parameters (mean and median) at the organ-level. The radiomics score trained from the lung data of the tumor region included seven features, and the score was only effective in stratifying survival for other lung data, not in other organs such as the kidney and brain. Eliminating the lung-specific feature (2.5 percentile) from the radiomics score led to similar results. There were no common features between training and test sets, but a common category of features (texture category) was identified. CONCLUSION: Although the possibility of a generally applicable model cannot be excluded, we suggest that radiomics score models for survival were mostly specific for a given organ; applying them to other organs would require careful consideration of organ-specific properties.
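
A minimal sketch (synthetic data, not the study's cohorts) of the survival-stratification step: split patients at the median radiomics score and compare the high- and low-risk groups with a log-rank test via `lifelines`:

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
score = rng.normal(size=100)             # per-patient radiomics score
time = rng.exponential(24, size=100)     # follow-up time, e.g., months
event = rng.integers(0, 2, size=100)     # 1 = event observed

high = score >= np.median(score)
res = logrank_test(time[high], time[~high], event[high], event[~high])
print(f"log-rank p = {res.p_value:.4f}")
```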

Quality of Radiomic Features in Glioblastoma Multiforme: Impact of Semi-Automated Tumor Segmentation Software

  • Lee, Myungeun
  • Woo, Boyeong
  • Kuo, Michael D
  • Jamshidi, Neema
  • Kim, Jong Hyo
Korean journal of radiology 2017 Journal Article, cited 7 times
Website

Evaluation of few-shot detection of head and neck anatomy in CT

  • Lee, Kyungeun
  • Cho, Jihoon
  • Lee, Jiye
  • Xing, Fangxu
  • Liu, Xiaofeng
  • Bae, Hyungjoon
  • Lee, Kyungsu
  • Hwang, Jae Youn
  • Park, Jinah
  • El Fakhri, Georges
  • Jee, Kyung-Wook
  • Woo, Jonghye
  • Astley, Susan M.
  • Chen, Weijie
2024 Conference Paper, cited 0 times
Website
The detection of anatomical structures in medical imaging data plays a crucial role as a preprocessing step for various downstream tasks. It, however, poses a significant challenge due to highly variable appearances and intensity values within medical imaging data. In addition, there is a scarcity of annotated datasets in medical imaging data, due to high costs and the requirement for specialized knowledge. These limitations motivate researchers to develop automated and accurate few-shot object detection approaches. While there are general-purpose deep learning models available for detecting objects in natural images, the applicability of these models for medical imaging data remains uncertain and needs to be validated. To address this, we carry out an unbiased evaluation of the state-of-the-art few-shot object detection methods for detecting head and neck anatomy in CT images. In particular, we choose Query Adaptive Few-Shot Object Detection (QA-FewDet), Meta Faster R-CNN, and Few-Shot Object Detection with Fully Cross-Transformer (FCT) methods and apply each model to detect various anatomical structures using novel datasets containing only a few images, ranging from 1- to 30-shot, during the fine-tuning stage. Our experimental results, carried out under the same setting, demonstrate that few-shot object detection methods can accurately detect anatomical structures, showing promising potential for integration into the clinical workflow.

Added prognostic value of 3D deep learning-derived features from preoperative MRI for adult-type diffuse gliomas

  • Lee, J. O.
  • Ahn, S. S.
  • Choi, K. S.
  • Lee, J.
  • Jang, J.
  • Park, J. H.
  • Hwang, I.
  • Park, C. K.
  • Park, S. H.
  • Chung, J. W.
  • Choi, S. H.
2024 Journal Article, cited 0 times
Website
BACKGROUND: To investigate the prognostic value of spatial features from whole-brain MRI using a three-dimensional (3D) convolutional neural network for adult-type diffuse gliomas. METHODS: In a retrospective, multicenter study, 1925 diffuse glioma patients were enrolled from 5 datasets: SNUH (n = 708), UPenn (n = 425), UCSF (n = 500), TCGA (n = 160), and Severance (n = 132). The SNUH and Severance datasets served as external test sets. Precontrast and postcontrast 3D T1-weighted, T2-weighted, and T2-FLAIR images were processed as multichannel 3D images. A 3D-adapted SE-ResNeXt model was trained to predict overall survival. The prognostic value of the deep learning-based prognostic index (DPI), a spatial feature-derived quantitative score, and established prognostic markers were evaluated using Cox regression. Model evaluation was performed using the concordance index (C-index) and Brier score. RESULTS: The MRI-only median DPI survival prediction model achieved C-indices of 0.709 and 0.677 (BS = 0.142 and 0.215) and survival differences (P < 0.001 and P = 0.002; log-rank test) for the SNUH and Severance datasets, respectively. Multivariate Cox analysis revealed DPI as a significant prognostic factor, independent of clinical and molecular genetic variables: hazard ratio = 0.032 and 0.036 (P < 0.001 and P = 0.004) for the SNUH and Severance datasets, respectively. Multimodal prediction models achieved higher C-indices than models using only clinical and molecular genetic variables: 0.783 vs. 0.774, P = 0.001, SNUH; 0.766 vs. 0.748, P = 0.023, Severance. CONCLUSIONS: The global morphologic feature derived from 3D CNN models using whole-brain MRI has independent prognostic value for diffuse gliomas. Combining clinical, molecular genetic, and imaging data yields the best performance.
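
A minimal sketch of the C-index evaluation applied to a prognostic index like the DPI; the data here are synthetic and `lifelines` supplies the concordance-index utility:

```python
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(2)
times = rng.exponential(36, size=80)          # observed survival times
events = rng.integers(0, 2, size=80)          # 1 = death observed
dpi = -times + rng.normal(scale=10, size=80)  # hypothetical risk score

# concordance_index expects scores ordered like survival times, so a
# risk score (higher = worse) must be negated before being passed in.
print(f"C-index = {concordance_index(times, -dpi, events):.3f}")
```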

Spatiotemporal genomic architecture informs precision oncology in glioblastoma

  • Lee, Jin-Ku
  • Wang, Jiguang
  • Sa, Jason K.
  • Ladewig, Erik
  • Lee, Hae-Ock
  • Lee, In-Hee
  • Kang, Hyun Ju
  • Rosenbloom, Daniel S.
  • Camara, Pablo G.
  • Liu, Zhaoqi
  • van Nieuwenhuizen, Patrick
  • Jung, Sang Won
  • Choi, Seung Won
  • Kim, Junhyung
  • Chen, Andrew
  • Kim, Kyu-Tae
  • Shin, Sang
  • Seo, Yun Jee
  • Oh, Jin-Mi
  • Shin, Yong Jae
  • Park, Chul-Kee
  • Kong, Doo-Sik
  • Seol, Ho Jun
  • Blumberg, Andrew
  • Lee, Jung-Il
  • Iavarone, Antonio
  • Park, Woong-Yang
  • Rabadan, Raul
  • Nam, Do-Hyun
Nat Genet 2017 Journal Article, cited 45 times
Website
Precision medicine in cancer proposes that genomic characterization of tumors can inform personalized targeted therapies. However, this proposition is complicated by spatial and temporal heterogeneity. Here we study genomic and expression profiles across 127 multisector or longitudinal specimens from 52 individuals with glioblastoma (GBM). Using bulk and single-cell data, we find that samples from the same tumor mass share genomic and expression signatures, whereas geographically separated, multifocal tumors and/or long-term recurrent tumors are seeded from different clones. Chemical screening of patient-derived glioma cells (PDCs) shows that therapeutic response is associated with genetic similarity, and multifocal tumors that are enriched with PIK3CA mutations have a heterogeneous drug-response pattern. We show that targeting truncal events is more efficacious than targeting private events in reducing the tumor burden. In summary, this work demonstrates that evolutionary inference from integrated genomic analysis in multisector biopsies can inform targeted therapeutic interventions for patients with GBM.

Associating spatial diversity features of radiologically defined tumor habitats with epidermal growth factor receptor driver status and 12-month survival in glioblastoma: methods and preliminary investigation

  • Lee, Joonsang
  • Narang, Shivali
  • Martinez, Juan J
  • Rao, Ganesh
  • Rao, Arvind
Journal of Medical Imaging 2015 Journal Article, cited 15 times
Website
We analyzed the spatial diversity of tumor habitats, regions with distinctly different intensity characteristics of a tumor, using various measurements of habitat diversity within tumor regions. These features were then used for investigating the association with a 12-month survival status in glioblastoma (GBM) patients and for the identification of epidermal growth factor receptor (EGFR)-driven tumors. T1 postcontrast and T2 fluid attenuated inversion recovery images from 65 GBM patients were analyzed in this study. A total of 36 spatial diversity features were obtained based on pixel abundances within regions of interest. Performance in both the classification tasks was assessed using receiver operating characteristic (ROC) analysis. For association with 12-month overall survival, area under the ROC curve was 0.74 with confidence intervals [0.630 to 0.858]. The sensitivity and specificity at the optimal operating point on the ROC were 0.59 and 0.75, respectively. For the identification of EGFR-driven tumors, the area under the ROC curve (AUC) was 0.85 with confidence intervals [0.750 to 0.945]. The sensitivity and specificity at the optimal operating point on the ROC were 0.76 and 0.83, respectively. Our findings suggest that these spatial habitat diversity features are associated with these clinical characteristics and could be a useful prognostic tool for magnetic resonance imaging studies of patients with GBM.
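
A minimal sketch (an illustration, not the authors' pipeline) of defining intensity habitats from two co-registered MR channels and scoring their diversity with the Shannon index of habitat abundances inside the tumour ROI:

```python
import numpy as np

def habitat_diversity(t1post, flair, roi):
    """Four habitats (high/low T1 post x high/low FLAIR) within the ROI."""
    hi_t1 = t1post > np.median(t1post[roi])
    hi_fl = flair > np.median(flair[roi])
    labels = (2 * hi_t1.astype(int) + hi_fl.astype(int))[roi]
    p = np.bincount(labels, minlength=4) / labels.size  # habitat abundances
    p = p[p > 0]
    return -(p * np.log(p)).sum()                       # Shannon diversity

rng = np.random.default_rng(3)
t1post, flair = rng.normal(size=(2, 64, 64))
roi = np.ones((64, 64), dtype=bool)
print(habitat_diversity(t1post, flair, roi))
```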

Spatial Habitat Features Derived from Multiparametric Magnetic Resonance Imaging Data Are Associated with Molecular Subtype and 12-Month Survival Status in Glioblastoma Multiforme

  • Lee, Joonsang
  • Narang, Shivali
  • Martinez, Juan
  • Rao, Ganesh
  • Rao, Arvind
PLoS One 2015 Journal Article, cited 14 times
Website
One of the most common and aggressive malignant brain tumors is Glioblastoma multiforme. Despite multimodality treatment such as radiation therapy and chemotherapy (temozolomide: TMZ), the median survival of glioblastoma patients is less than 15 months. In this study, we investigated the association of measures of spatial diversity derived from spatial point pattern analysis of multiparametric magnetic resonance imaging (MRI) data with molecular status as well as 12-month survival in glioblastoma. We obtained 27 measures of spatial proximity (diversity) via spatial point pattern analysis of multiparametric T1 post-contrast and T2 fluid-attenuated inversion recovery MRI data. These measures were used to predict 12-month survival status (</=12 or >12 months) in 74 glioblastoma patients. Kaplan-Meier and receiver operating characteristic analyses were used to assess the relationship between derived spatial features and 12-month survival status as well as molecular subtype status in patients with glioblastoma. Kaplan-Meier survival analysis revealed that 14 spatial features were capable of stratifying overall survival in a statistically significant manner. For prediction of 12-month survival status based on these diversity indices, sensitivity and specificity were 0.86 and 0.64, respectively. The area under the receiver operating characteristic curve and the accuracy were 0.76 and 0.75, respectively. For prediction of molecular subtype status, the proneural subtype showed the highest accuracy of 0.93 among all molecular subtypes based on receiver operating characteristic analysis. We find that measures of spatial diversity from point pattern analysis of intensity habitats from T1 post-contrast and T2 fluid-attenuated inversion recovery images are associated with both tumor subtype status and 12-month survival status and may therefore be useful indicators of patient prognosis, in addition to providing potential guidance for molecularly-targeted therapies in Glioblastoma multiforme.

Texture feature ratios from relative CBV maps of perfusion MRI are associated with patient survival in glioblastoma

  • Lee, J
  • Jain, R
  • Khalil, K
  • Griffith, B
  • Bosca, R
  • Rao, G
  • Rao, A
American Journal of Neuroradiology 2016 Journal Article, cited 27 times
Website
BACKGROUND AND PURPOSE: Texture analysis has been applied to medical images to assist in tumor tissue classification and characterization. In this study, we obtained textural features from parametric (relative CBV) maps of dynamic susceptibility contrast-enhanced MR images in glioblastoma and assessed their relationship with patient survival. MATERIALS AND METHODS: MR perfusion data of 24 patients with glioblastoma from The Cancer Genome Atlas were analyzed in this study. One- and 2D texture feature ratios and kinetic textural features based on relative CBV values in the contrast-enhancing and nonenhancing lesions of the tumor were obtained. Receiver operating characteristic, Kaplan-Meier, and multivariate Cox proportional hazards regression analyses were used to assess the relationship between texture feature ratios and overall survival. RESULTS: Several feature ratios are capable of stratifying survival in a statistically significant manner. These feature ratios correspond to homogeneity (P = .008, based on the log-rank test), angular second moment (P = .003), inverse difference moment (P = .013), and entropy (P = .008). Multivariate Cox proportional hazards regression analysis showed that homogeneity, angular second moment, inverse difference moment, and entropy from the contrast-enhancing lesion were significantly associated with overall survival. For the nonenhancing lesion, skewness and variance ratios of relative CBV texture were associated with overall survival in a statistically significant manner. For the kinetic texture analysis, the Haralick correlation feature showed a P value close to .05. CONCLUSIONS: Our study revealed that texture feature ratios from contrast-enhancing and nonenhancing lesions and kinetic texture analysis obtained from perfusion parametric maps provide useful information for predicting survival in patients with glioblastoma.
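
A minimal sketch (assuming a pre-quantized 2D rCBV patch) of the GLCM features named above via scikit-image; the paper's feature *ratios* would be formed from two such computations, one per lesion compartment:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(4)
rcbv = rng.integers(0, 32, size=(48, 48), dtype=np.uint8)  # quantized rCBV

glcm = graycomatrix(rcbv, distances=[1], angles=[0], levels=32, normed=True)
homogeneity = graycoprops(glcm, "homogeneity")[0, 0]
asm = graycoprops(glcm, "ASM")[0, 0]             # angular second moment
p = glcm[:, :, 0, 0]
entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()  # GLCM entropy
print(homogeneity, asm, entropy)
```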

Prognostic value and molecular correlates of a CT image-based quantitative pleural contact index in early stage NSCLC

  • Lee, Juheon
  • Cui, Yi
  • Sun, Xiaoli
  • Li, Bailiang
  • Wu, Jia
  • Li, Dengwang
  • Gensheimer, Michael F
  • Loo, Billy W
  • Diehn, Maximilian
  • Li, Ruijiang
European Radiology 2018 Journal Article, cited 3 times
Website
PURPOSE: To evaluate the prognostic value and molecular basis of a CT-derived pleural contact index (PCI) in early stage non-small cell lung cancer (NSCLC). EXPERIMENTAL DESIGN: We retrospectively analysed seven NSCLC cohorts. A quantitative PCI was defined on CT as the length of tumour-pleura interface normalised by tumour diameter. We evaluated the prognostic value of PCI in a discovery cohort (n = 117) and tested it in an external cohort (n = 88) of stage I NSCLC. Additionally, we identified the molecular correlates and built a gene expression-based surrogate of PCI using another cohort of 89 patients. To further evaluate the prognostic relevance, we used four datasets totalling 775 stage I patients with publicly available gene expression data and linked survival information. RESULTS: At a cutoff of 0.8, PCI stratified patients for overall survival in both imaging cohorts (log-rank p = 0.0076, 0.0304). Extracellular matrix (ECM) remodelling was enriched among genes associated with PCI (p = 0.0003). The genomic surrogate of PCI remained an independent predictor of overall survival in the gene expression cohorts (hazard ratio: 1.46, p = 0.0007) adjusting for age, gender, and tumour stage. CONCLUSIONS: CT-derived pleural contact index is associated with ECM remodelling and may serve as a noninvasive prognostic marker in early stage NSCLC. KEY POINTS: * A quantitative pleural contact index (PCI) predicts survival in early stage NSCLC. * PCI is associated with extracellular matrix organisation and collagen catabolic process. * A multi-gene surrogate of PCI is an independent predictor of survival. * PCI can be used to noninvasively identify patients with poor prognosis.
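
A minimal 2D sketch of the pleural contact index as defined above (interface length normalised by tumour diameter); the masks and the adjacency test are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def pleural_contact_index(tumor, pleura, pixel_mm=1.0):
    """PCI from boolean 2D masks of the tumour and the pleural surface."""
    boundary = tumor & ~ndimage.binary_erosion(tumor)
    # Tumour boundary pixels touching the (dilated) pleural surface.
    contact = boundary & ndimage.binary_dilation(pleura)
    interface_len = contact.sum() * pixel_mm
    # Diameter approximated by the largest bounding-box extent.
    ys, xs = np.nonzero(tumor)
    diameter = max(ys.ptp(), xs.ptp()) * pixel_mm
    return interface_len / diameter

tumor = np.zeros((64, 64), bool); tumor[20:40, 30:50] = True
pleura = np.zeros((64, 64), bool); pleura[:, 50] = True
print(pleural_contact_index(tumor, pleura))
```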

Volumetric and Voxel-Wise Analysis of Dominant Intraprostatic Lesions on Multiparametric MRI

  • Lee, Joon
  • Carver, Eric
  • Feldman, Aharon
  • Pantelic, Milan V
  • Elshaikh, Mohamed
  • Wen, Ning
Front Oncol 2019 Journal Article, cited 0 times
Website
Introduction: Multiparametric MR imaging (mpMRI) has shown promising results in the diagnosis and localization of prostate cancer. Furthermore, mpMRI may play an important role in identifying the dominant intraprostatic lesion (DIL) for radiotherapy boost. We sought to investigate the level of correlation between dominant tumor foci contoured on various mpMRI sequences. Methods: mpMRI data from 90 patients with MR-guided biopsy-proven prostate cancer were obtained from the SPIE-AAPM-NCI Prostate MR Classification Challenge. Each case consisted of T2-weighted (T2W), apparent diffusion coefficient (ADC), and K(trans) images computed from dynamic contrast-enhanced sequences. All image sets were rigidly co-registered, and the dominant tumor foci were identified and contoured for each MRI sequence. Hausdorff distance (HD), mean distance to agreement (MDA), and Dice and Jaccard coefficients were calculated between the contours for each pair of MRI sequences (i.e., T2 vs. ADC, T2 vs. K(trans), and ADC vs. K(trans)). The voxel-wise Spearman correlation was also obtained between these image pairs. Results: The DILs were located in the anterior fibromuscular stroma, central zone, peripheral zone, and transition zone in 35.2, 5.6, 32.4, and 25.4% of patients, respectively. Gleason grade groups 1-5 represented 29.6, 40.8, 15.5, and 14.1% of the study population, respectively (with group grades 4 and 5 analyzed together). The mean contour volumes for the T2W images, and the ADC and K(trans) maps were 2.14 +/- 2.1, 2.22 +/- 2.2, and 1.84 +/- 1.5 mL, respectively. K(trans) values were indistinguishable between cancerous regions and the rest of prostatic regions for 19 patients. The Dice coefficient and Jaccard index were 0.74 +/- 0.13, 0.60 +/- 0.15 for T2W-ADC and 0.61 +/- 0.16, 0.46 +/- 0.16 for T2W-K(trans). The voxel-based Spearman correlations were 0.20 +/- 0.20 for T2W-ADC and 0.13 +/- 0.25 for T2W-K(trans). Conclusions: The DIL contoured on T2W images had a high level of agreement with those contoured on ADC maps, but there was little to no quantitative correlation of these results with tumor location and Gleason grade group. Technical hurdles are yet to be solved for precision radiotherapy to target the DILs based on physiological imaging. A Boolean sum volume (BSV) incorporating all available MR sequences may be reasonable in delineating the DIL boost volume.

Comparison of novel multi-level Otsu (MO-PET) and conventional PET segmentation methods for measuring FDG metabolic tumor volume in patients with soft tissue sarcoma

  • Lee, Inki
  • Im, Hyung-Jun
  • Solaiyappan, Meiyappan
  • Cho, Steve Y
EJNMMI physics 2017 Journal Article, cited 0 times
Website

Integrative Radiogenomics Approach for Risk Assessment of Post-Operative Metastasis in Pathological T1 Renal Cell Carcinoma: A Pilot Retrospective Cohort Study

  • Lee, H. W.
  • Cho, H. H.
  • Joung, J. G.
  • Jeon, H. G.
  • Jeong, B. C.
  • Jeon, S. S.
  • Lee, H. M.
  • Nam, D. H.
  • Park, W. Y.
  • Kim, C. K.
  • Seo, S. I.
  • Park, H.
Cancers (Basel) 2020 Journal Article, cited 0 times
Website
Despite the increasing incidence of pathological stage T1 renal cell carcinoma (pT1 RCC), postoperative distant metastases develop in many surgically treated patients, causing death in certain cases. Therefore, this study aimed to create a radiomics model using imaging features from multiphase computed tomography (CT) to more accurately predict the postoperative metastasis of pT1 RCC and further investigate the possible link between radiomics parameters and gene expression profiles generated by whole transcriptome sequencing (WTS). Four radiomic features, including the minimum value of a histogram feature from inner regions of interest (ROIs) (INNER_Min_hist), the histogram of the energy feature from outer ROIs (OUTER_Energy_Hist), the maximum probability of gray-level co-occurrence matrix (GLCM) feature from inner ROIs (INNER_MaxProb_GLCM), and the ratio of voxels under 80 Hounsfield units (HU) in the nephrographic phase of postcontrast CT (Under80HURatio), were detected to predict the postsurgical metastasis of patients with pathological stage T1 RCC, and the clinical outcomes of patients could be successfully stratified based on their radiomic risk scores. Furthermore, we identified heterogeneous-trait-associated gene signatures correlated with these four radiomic features, which captured clinically relevant molecular pathways, the tumor immune microenvironment, and potential treatment strategies. These accurate radiogenomic surrogates could help identify patients with pT1 RCC who are at risk of postsurgical metastasis and may derive additional benefit from adjuvant therapy.
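
A minimal sketch of the simplest of the four selected features, Under80HURatio (the fraction of ROI voxels below 80 HU on nephrographic-phase CT); the arrays are illustrative:

```python
import numpy as np

def under80hu_ratio(ct_hu, roi):
    """Fraction of ROI voxels with attenuation below 80 HU."""
    return (ct_hu[roi] < 80).mean()

rng = np.random.default_rng(5)
ct_hu = rng.normal(60, 40, size=(32, 32, 32))   # synthetic HU volume
roi = np.zeros_like(ct_hu, dtype=bool)
roi[8:24, 8:24, 8:24] = True
print(under80hu_ratio(ct_hu, roi))
```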

Evaluation of the Usefulness of Detection of Abdominal CT Kidney and Vertebrae using Deep Learning

  • Lee, H.-J.
  • Kwak, M.-H.
  • Yoon, H.-W.
  • Ryu, E.-J.
  • Song, H.-G.
  • Hong, J.-W.
2021 Journal Article, cited 0 times
CT plays an important role in the medical field, such as in disease diagnosis, and the number of examinations and CT images is increasing. Recently, deep learning has been actively used in the medical field, including for computer-aided diagnosis through object detection on medical images. The purpose of this study was to evaluate the accuracy of detecting the kidney and vertebrae on abdominal CT using the YOLOv3 object detection network. The detection accuracies for the kidney and vertebrae were 83.00% and 82.45%, respectively, and these results can serve as baseline data for object detection in medical images using deep learning.

Restoration of Full Data from Sparse Data in Low-Dose Chest Digital Tomosynthesis Using Deep Convolutional Neural Networks

  • Lee, Donghoon
  • Kim, Hee-Joung
2018 Journal Article, cited 0 times
Website

High quality imaging from sparsely sampled computed tomography data with deep learning and wavelet transform in various domains

  • Lee, Donghoon
  • Choi, Sunghoon
  • Kim, Hee-Joung
Medical Physics 2018 Journal Article, cited 0 times
Website

A Three-Dimensional-Printed Patient-Specific Phantom for External Beam Radiation Therapy of Prostate Cancer

  • Lee, Christopher L
  • Dietrich, Max C
  • Desai, Uma G
  • Das, Ankur
  • Yu, Suhong
  • Xiang, Hong F
  • Jaffe, C Carl
  • Hirsch, Ariel E
  • Bloch, B Nicolas
Journal of Engineering and Science in Medical Diagnostics and Therapy 2018 Journal Article, cited 0 times
Website

Graph-Based Signal Processing to Convolutional Neural Networks for Medical Image Segmentation

  • Le-Tien, Thuong
  • To, Thanh-Nha
  • Vo, Giang
2022 Journal Article, cited 0 times
Automatic medical image segmentation is normally a difficult task because medical images are complex in nature, so researchers have studied many approaches to analyzing image patterns. Applications of deep learning in medicine are a growing trend, especially Convolutional Neural Networks (CNNs) in the field of computer vision, which have yielded many remarkable results. In this paper, we propose a method that applies graph-based signal processing to a CNN architecture for medical image segmentation. In particular, the proposed architecture is based on graph convolution, rather than the traditional convolution of DSP (Digital Signal Processing), to extract image features, and it is effective in learning links between neighboring nodes. We also introduce a back-propagation algorithm that optimizes the weights of the graph filter and finds the adjacency matrix that fits the training data. The network model is then applied to a dataset of medical images to help detect abnormal areas.
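
A minimal sketch of a generic graph convolution layer of the kind the abstract contrasts with ordinary DSP-style convolution, H' = ReLU(D^-1/2 (A + I) D^-1/2 H W); this is a standard formulation, not the paper's exact filter:

```python
import numpy as np

def gcn_layer(A, H, W):
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy pixel graph: 4 nodes in a chain, 3 input features, 2 outputs.
rng = np.random.default_rng(6)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(gcn_layer(A, rng.normal(size=(4, 3)), rng.normal(size=(3, 2))))
```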

Cross-institutional outcome prediction for head and neck cancer patients using self-attention neural networks

  • Le, W. T.
  • Vorontsov, E.
  • Romero, F. P.
  • Seddik, L.
  • Elsharief, M. M.
  • Nguyen-Tan, P. F.
  • Roberge, D.
  • Bahig, H.
  • Kadoury, S.
2022 Journal Article, cited 0 times
Website
In radiation oncology, predicting patient risk stratification allows specialization of therapy intensification as well as selecting between systemic and regional treatments, all of which helps to improve patient outcome and quality of life. Deep learning offers an advantage over traditional radiomics for medical image processing by learning salient features from training data originating from multiple datasets. However, while their large capacity allows to combine high-level medical imaging data for outcome prediction, they lack generalization to be used across institutions. In this work, a pseudo-volumetric convolutional neural network with a deep preprocessor module and self-attention (PreSANet) is proposed for the prediction of distant metastasis, locoregional recurrence, and overall survival occurrence probabilities within the 10 year follow-up time frame for head and neck cancer patients with squamous cell carcinoma. The model is capable of processing multi-modal inputs of variable scan length, as well as integrating patient data in the prediction model. These proposed architectural features and additional modalities all serve to extract additional information from the available data when availability to additional samples is limited. This model was trained on the public Cancer Imaging Archive Head-Neck-PET-CT dataset consisting of 298 patients undergoing curative radio/chemo-radiotherapy and acquired from 4 different institutions. The model was further validated on an internal retrospective dataset with 371 patients acquired from one of the institutions in the training dataset. An extensive set of ablation experiments were performed to test the utility of the proposed model characteristics, achieving an AUROC of [Formula: see text], [Formula: see text] and [Formula: see text] for DM, LR and OS respectively on the public TCIA Head-Neck-PET-CT dataset. External validation was performed on a retrospective dataset with 371 patients, achieving [Formula: see text] AUROC in all outcomes. To test for model generalization across sites, a validation scheme consisting of single site-holdout and cross-validation combining both datasets was used. The mean accuracy across 4 institutions obtained was [Formula: see text], [Formula: see text] and [Formula: see text] for DM, LR and OS respectively. The proposed model demonstrates an effective method for tumor outcome prediction for multi-site, multi-modal combining both volumetric data and structured patient clinical data.

Automatic GPU memory management for large neural models in TensorFlow

  • Le, Tung D.
  • Imai, Haruki
  • Negishi, Yasushi
  • Kawachiya, Kiyokuni
2019 Conference Proceedings, cited 0 times
Website
Deep learning models are becoming larger and will not fit in the limited memory of accelerators such as GPUs for training. Though many methods have been proposed to solve this problem, they are rather ad-hoc in nature and difficult to extend and integrate with other techniques. In this paper, we tackle the problem in a formal way to provide a strong foundation for supporting large models. We propose a method of formally rewriting the computational graph of a model where swap-out and swap-in operations are inserted to temporarily store intermediate results on CPU memory. By introducing a categorized topological ordering for simulating graph execution, the memory consumption of a model can be easily analyzed by using operation distances in the ordering. As a result, the problem of fitting a large model into a memory-limited accelerator is reduced to the problem of reducing operation distances in a categorized topological ordering. We then show how to formally derive swap-out and swap-in operations from an existing graph and present rules to optimize the graph. Finally, we propose a simulation-based auto-tuning to automatically find suitable graph-rewriting parameters for the best performance. We developed a module in TensorFlow, called LMS, by which we successfully trained ResNet-50 with a 4.9x larger mini-batch size and 3D U-Net with a 5.6x larger image resolution.
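
A toy sketch of the paper's central notion of "operation distance" (not the TensorFlow module itself): in a topological ordering of the computation graph, a large gap between the op that produces a tensor and the op that consumes it marks that tensor as a swap-out/swap-in candidate:

```python
from graphlib import TopologicalSorter

# Toy computation graph: op -> set of ops it depends on ("a" feeds distant "d").
deps = {"b": {"a"}, "c": {"b"}, "d": {"c", "a"}}
order = list(TopologicalSorter(deps).static_order())
pos = {op: i for i, op in enumerate(order)}

for op, preds in deps.items():
    for p in preds:
        dist = pos[op] - pos[p]   # operation distance in the ordering
        if dist > 1:
            print(f"tensor {p}->{op}: distance {dist}, swap candidate")
```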

Liver Tumor Segmentation from MR Images Using 3D Fast Marching Algorithm and Single Hidden Layer Feedforward Neural Network

  • Le, Trong-Ngoc
  • Bao, Pham The
  • Huynh, Hieu Trung
BioMed Research International 2016 Journal Article, cited 5 times
Website
Objective. Our objective is to develop a computerized scheme for liver tumor segmentation in MR images. Materials and Methods. Our proposed scheme consists of four main stages. Firstly, the region of interest (ROI) image which contains the liver tumor region in the T1-weighted MR image series was extracted by using seed points. The noise in this ROI image was reduced and the boundaries were enhanced. A 3D fast marching algorithm was applied to generate the initial labeled regions which are considered as teacher regions. A single hidden layer feedforward neural network (SLFN), which was trained by a noniterative algorithm, was employed to classify the unlabeled voxels. Finally, the postprocessing stage was applied to extract and refine the liver tumor boundaries. The liver tumors determined by our scheme were compared with those manually traced by a radiologist, used as the "ground truth." Results. The study was evaluated on two datasets of 25 tumors from 16 patients. The proposed scheme obtained the mean volumetric overlap error of 27.43% and the mean percentage volume error of 15.73%. The mean of the average surface distance, the root mean square surface distance, and the maximal surface distance were 0.58 mm, 1.20 mm, and 6.29 mm, respectively.

Can Persistent Homology Features Capture More Intrinsic Information about Tumors from (18)F-Fluorodeoxyglucose Positron Emission Tomography/Computed Tomography Images of Head and Neck Cancer Patients?

  • Le, Q. C.
  • Arimura, H.
  • Ninomiya, K.
  • Kodama, T.
  • Moriyama, T.
2022 Journal Article, cited 0 times
Website
This study hypothesized that persistent homology (PH) features could capture more intrinsic information about the metabolism and morphology of tumors from (18)F-fluorodeoxyglucose positron emission tomography (PET)/computed tomography (CT) images of patients with head and neck (HN) cancer than other conventional features. PET/CT images and clinical variables of 207 patients were selected from the publicly available dataset of the Cancer Imaging Archive. PH images were generated from persistent diagrams obtained from PET/CT images. The PH features were derived from the PH PET/CT images. The signatures were constructed in a training cohort from features from CT, PET, PH-CT, and PH-PET images; clinical variables; and the combination of features and clinical variables. Signatures were evaluated using statistically significant differences (p-value, log-rank test) between survival curves for low- and high-risk groups and the C-index. In an independent test cohort, the signature consisting of PH-PET features and clinical variables exhibited the lowest log-rank p-value of 3.30 x 10(-5) and C-index of 0.80, compared with log-rank p-values from 3.52 x 10(-2) to 1.15 x 10(-4) and C-indices from 0.34 to 0.79 for other signatures. This result suggests that PH features can capture the intrinsic information of tumors and predict prognosis in patients with HN cancer.
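
A minimal sketch (assuming the `gudhi` library and a synthetic image) of the persistent-homology step: build a cubical complex on voxel intensities and read off (birth, death) pairs, from which PH images and features would then be derived:

```python
import numpy as np
import gudhi

rng = np.random.default_rng(7)
pet = rng.random((32, 32))                     # stand-in for a PET slice

cc = gudhi.CubicalComplex(top_dimensional_cells=pet)
diagram = cc.persistence()                     # list of (dim, (birth, death))
h0 = [bd for dim, bd in diagram if dim == 0]   # connected components
h1 = [bd for dim, bd in diagram if dim == 1]   # loops
print(len(h0), len(h1))
```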

Radiomic features based on Hessian index for prediction of prognosis in head-and-neck cancer patients

  • Le, Quoc Cuong
  • Arimura, Hidetaka
  • Ninomiya, Kenta
  • Kabata, Yutaro
Sci Rep 2020 Journal Article, cited 0 times
Website

Radiomics-based machine learning model for efficiently classifying transcriptome subtypes in glioblastoma patients from MRI

  • Le, Nguyen Quoc Khanh
  • Hung, Truong Nguyen Khanh
  • Do, Duyen Thi
  • Lam, Luu Ho Thanh
  • Dang, Luong Huu
  • Huynh, Tuan-Tu
Comput Biol Med 2021 Journal Article, cited 0 times
Website
BACKGROUND: In the field of glioma, transcriptome subtypes have been considered as an important diagnostic and prognostic biomarker that may help improve the treatment efficacy. However, existing identification methods of transcriptome subtypes are limited due to the relatively long detection period, the unattainability of tumor specimens via biopsy or surgery, and the fleeting nature of intralesional heterogeneity. In search of a superior model over previous ones, this study evaluated the efficiency of eXtreme Gradient Boosting (XGBoost)-based radiomics model to classify transcriptome subtypes in glioblastoma patients. METHODS: This retrospective study retrieved patients from TCGA-GBM and IvyGAP cohorts with pathologically diagnosed glioblastoma, and separated them into different transcriptome subtypes groups. GBM patients were then segmented into three different regions of MRI: enhancement of the tumor core (ET), non-enhancing portion of the tumor core (NET), and peritumoral edema (ED). We subsequently used handcrafted radiomics features (n = 704) from multimodality MRI and two-level feature selection techniques (Spearman correlation and F-score tests) in order to find the features that could be relevant. RESULTS: After the feature selection approach, we identified 13 radiomics features that were the most meaningful ones that can be used to reach the optimal results. With these features, our XGBoost model reached the predictive accuracies of 70.9%, 73.3%, 88.4%, and 88.4% for classical, mesenchymal, neural, and proneural subtypes, respectively. Our model performance has been improved in comparison with the other models as well as previous works on the same dataset. CONCLUSION: The use of XGBoost and two-level feature selection analysis (Spearman correlation and F-score) could be expected as a potential combination for classifying transcriptome subtypes with high performance and might raise public attention for further research on radiomics-based GBM models.
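
A minimal sketch (synthetic data, illustrative thresholds) of the two-level feature selection described above, a Spearman-correlation redundancy filter followed by an F-score filter, feeding an XGBoost classifier:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.feature_selection import SelectKBest, f_classif
from xgboost import XGBClassifier

rng = np.random.default_rng(8)
X = rng.normal(size=(120, 704))    # 704 handcrafted radiomics features
y = rng.integers(0, 2, size=120)   # one-vs-rest subtype label

# Level 1: drop features highly correlated (|rho| > 0.9) with an earlier one.
rho, _ = spearmanr(X)
rho = np.abs(rho)
keep = [i for i in range(X.shape[1]) if not (rho[i, :i] > 0.9).any()]

# Level 2: keep the 13 features with the highest F-scores.
selector = SelectKBest(f_classif, k=13).fit(X[:, keep], y)
X_sel = selector.transform(X[:, keep])

clf = XGBClassifier(n_estimators=200).fit(X_sel, y)
print(clf.score(X_sel, y))
```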

XGBoost Improves Classification of MGMT Promoter Methylation Status in IDH1 Wildtype Glioblastoma

  • Le, N. Q. K.
  • Do, D. T.
  • Chiu, F. Y.
  • Yapp, E. K. Y.
  • Yeh, H. Y.
  • Chen, C. Y.
J Pers Med 2020 Journal Article, cited 1 times
Website
Approximately 96% of patients with glioblastomas (GBM) have IDH1 wildtype GBMs, characterized by extremely poor prognosis, partly due to resistance to standard temozolomide treatment. O6-Methylguanine-DNA methyltransferase (MGMT) promoter methylation status is a crucial prognostic biomarker for alkylating chemotherapy resistance in patients with GBM. However, MGMT methylation status identification methods, where the tumor tissue is often undersampled, are time-consuming and expensive. Currently, presurgical noninvasive imaging methods are used to identify biomarkers to predict MGMT methylation status. We evaluated a novel radiomics-based eXtreme Gradient Boosting (XGBoost) model to identify MGMT promoter methylation status in patients with IDH1 wildtype GBM. This retrospective study enrolled 53 patients with pathologically proven GBM and tested MGMT methylation and IDH1 status. Radiomics features were extracted from multimodality MRI and tested by F-score analysis to identify important features to improve our model. We identified nine radiomics features that reached an area under the curve of 0.896, which outperformed other classifiers reported previously. These features could be important biomarkers for identifying MGMT methylation status in IDH1 wildtype GBM. The combination of radiomics feature extraction and F-score feature selection significantly improved the performance of the XGBoost model, which may have implications for patient stratification and therapeutic strategy in GBM.

Narrow Band Active Contour Attention Model for Medical Segmentation

  • Le, N.
  • Bui, T.
  • Vo-Ho, V. K.
  • Yamazaki, K.
  • Luu, K.
Diagnostics (Basel) 2021 Journal Article, cited 6 times
Website
Medical image segmentation is one of the most challenging tasks in medical image analysis and widely developed for many clinical applications. While deep learning-based approaches have achieved impressive performance in semantic segmentation, they are limited to pixel-wise settings with imbalanced-class data problems and weak boundary object segmentation in medical images. In this paper, we tackle those limitations by developing a new two-branch deep network architecture which takes both higher level features and lower level features into account. The first branch extracts higher level feature as region information by a common encoder-decoder network structure such as Unet and FCN, whereas the second branch focuses on lower level features as support information around the boundary and processes in parallel to the first branch. Our key contribution is the second branch named Narrow Band Active Contour (NB-AC) attention model which treats the object contour as a hyperplane and all data inside a narrow band as support information that influences the position and orientation of the hyperplane. Our proposed NB-AC attention model incorporates the contour length with the region energy involving a fixed-width band around the curve or surface. The proposed network loss contains two fitting terms: (i) a high level feature (i.e., region) fitting term from the first branch; (ii) a lower level feature (i.e., contour) fitting term from the second branch including the (ii1) length of the object contour and (ii2) regional energy functional formed by the homogeneity criterion of both the inner band and outer band neighboring the evolving curve or surface. The proposed NB-AC loss can be incorporated into both 2D and 3D deep network architectures. The proposed network has been evaluated on different challenging medical image datasets, including DRIVE, iSeg17, MRBrainS18 and Brats18. The experimental results have shown that the proposed NB-AC loss outperforms other mainstream loss functions: Cross Entropy, Dice, Focal on two common segmentation frameworks Unet and FCN. Our 3D network which is built upon the proposed NB-AC loss and 3DUnet framework achieved state-of-the-art results on multiple volumetric datasets.
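
A minimal sketch of a two-term segmentation loss in the spirit of the NB-AC formulation, one region (Dice-style) term plus a contour-length surrogate from spatial gradients of the soft mask; this is an interpretation for illustration, not the authors' exact loss:

```python
import torch

def region_contour_loss(pred, target, lam=0.1, eps=1e-6):
    """pred, target: (N, 1, H, W); pred is a soft mask in [0, 1]."""
    inter = (pred * target).sum()
    dice = 1 - 2 * inter / (pred.sum() + target.sum() + eps)  # region term
    dy = (pred[:, :, 1:, :] - pred[:, :, :-1, :]).abs().sum()
    dx = (pred[:, :, :, 1:] - pred[:, :, :, :-1]).abs().sum()
    length = (dx + dy) / pred.numel()       # contour-length surrogate
    return dice + lam * length

pred = torch.rand(2, 1, 64, 64, requires_grad=True)
target = (torch.rand(2, 1, 64, 64) > 0.5).float()
loss = region_contour_loss(pred, target)
loss.backward()
print(loss.item())
```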

Deep Learning–based Method for Denoising and Image Enhancement in Low-Field MRI

  • Le, Dang Bich Thuy
  • Sadinski, Meredith
  • Nacev, Aleksandar
  • Narayanan, Ram
  • Kumar, Dinesh
2021 Conference Paper, cited 3 times
Website
Deep learning has proven successful in a variety of medical image processing applications, including denoising and removing artifacts. This is of particular interest for low-field Magnetic Resonance Imaging (MRI), which is promising for its affordability, compact footprint, and reduced shielding requirements, but inherently suffers from low signal-to-noise ratio. In this work, we propose a method of simulating scanner-specific images from publicly available, 1.5T and 3T database of MR images, using a signal encoding matrix incorporating explicitly modeled imaging gradients and fields. We apply a stacked, U-Net architecture to reduce noise from the system and remove artifacts due to the inhomogeneous B0 field, nonlinear gradients, undersampling of k-space and image reconstruction to enhance low-field MR images. The final network is applied as a post-processing step following image reconstruction to phantom and human images acquired on a 60-67mT MR scanner and demonstrates promising qualitative and quantitative improvements to overall image quality.

Quantitative neuroimaging with handcrafted and deep radiomics in neurological diseases

  • Lavrova, Elizaveta
2024 Thesis, cited 0 times
Website
The motivation behind this thesis is to explore the potential of "radiomics" in the field of neurology, where early diagnosis and accurate treatment selection are crucial for improving patient outcomes. Neurological diseases are a major cause of disability and death globally, and there is a pressing need for reliable imaging biomarkers to aid in disease detection and monitoring. While radiomics has shown promising results in oncology, its application in neurology remains relatively unexplored. Therefore, this work aims to investigate the feasibility and challenges of implementing radiomics in the neurological context, addressing various limitations and proposing potential solutions. The thesis begins with a demonstration of the predictive power of radiomics for identifying important diagnostic biomarkers in neuro-oncology. Building on this foundation, the research then delves into radiomics in non-oncological neurology, providing an overview of the pipeline steps, potential clinical applications, and existing challenges. Despite promising results in proof-of-concept studies, the field faces limitations, mostly data-related, such as small sample sizes, retrospective nature, and lack of external validation. To explore the predictive power of radiomics in non-oncological tasks, a radiomics approach was implemented to distinguish between multiple sclerosis patients and normal controls. Notably, radiomic features extracted from normal-appearing white matter were found to contain distinctive information for multiple sclerosis detection, confirming the hypothesis of the thesis. To overcome the data harmonization challenge, in this work quantitative mapping of the brain was used. Unlike traditional imaging methods, quantitative mapping involves measuring the physical properties of brain tissues, providing a more standardized and consistent data representation. By reconstructing the physical properties of each voxel based on multi-echo MRI acquisition, quantitative mapping produces data that is less susceptible to domain-specific biases and scanner variability. Additionally, the insights gained from quantitative mapping are building the bridge toward the physical and biological properties of brain tissues, providing a deeper understanding of the underlying pathology. Another crucial challenge in radiomics is robust and fast data labeling, particularly segmentation. A deep learning method was proposed to perform automated carotid artery segmentation in stroke at-risk patients, surpassing current state-of-the-art approaches. This novel method showcases the potential of automated segmentation to enhance radiomics pipeline implementation. In addition to addressing specific challenges, the thesis also proposes a community-driven open-source toolbox for radiomics, aimed at enhancing pipeline standardization and transparency. This software package would facilitate data curation and exploratory analysis, fostering collaboration and reproducibility in radiomics research. Through an in-depth exploration of radiomics in neuroimaging, this thesis demonstrates its potential to enhance neurological disease diagnosis and monitoring. By uncovering valuable information from seemingly normal brain tissues, radiomics holds promise for early disease detection. Furthermore, the development of innovative tools and methods, including deep learning and quantitative mapping, has the potential to address data labeling and harmonization challenges. 
Looking to the future, embracing larger, diverse datasets and longitudinal studies will further enhance the generalizability and predictive power of radiomics in neurology. By addressing the challenges identified in this thesis and fostering collaboration within the research community, radiomics can advance toward clinical implementation, revolutionizing precision medicine in neurology.

Automatic Prostate Cancer Segmentation Using Kinetic Analysis in Dynamic Contrast-Enhanced MRI

  • Lavasani, S Navaei
  • Mostaar, A
  • Ashtiyani, M
Journal of Biomedical Physics & Engineering 2018 Journal Article, cited 0 times
Website

Glioma Tumors' Classification Using Deep-Neural-Network-Based Features with SVM Classifier

  • Latif, Ghazanfar
  • Ben Brahim, Ghassen
  • Iskandar, D. N. F. Awang
  • Bashar, Abul
  • Alghazo, Jaafar
Diagnostics 2022 Journal Article, cited 0 times
Website
The complexity of brain tissue requires skillful technicians and expert medical doctors to manually analyze and diagnose Glioma brain tumors using multiple Magnetic Resonance (MR) images with multiple modalities. Unfortunately, manual diagnosis suffers from its lengthy process, as well as elevated cost. With this type of cancerous disease, early detection will increase the chances of suitable medical procedures leading to either a full recovery or the prolongation of the patient's life. This has increased the efforts to automate the detection and diagnosis process without human intervention, allowing the detection of multiple types of tumors from MR images. This research paper proposes a multi-class Glioma tumor classification technique using the proposed deep-learning-based features with the Support Vector Machine (SVM) classifier. A deep convolution neural network is used to extract features of the MR images, which are then fed to an SVM classifier. With the proposed technique, a 96.19% accuracy was achieved for the HGG Glioma type while considering the FLAIR modality and a 95.46% for the LGG Glioma tumor type while considering the T2 modality for the classification of four Glioma classes (Edema, Necrosis, Enhancing, and Non-enhancing). The accuracies achieved using the proposed method were higher than those reported by similar methods in the extant literature using the same BraTS dataset. In addition, the accuracy results obtained in this work are better than those achieved by the GoogleNet and LeNet pre-trained models on the same dataset.
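
A minimal sketch (illustrative backbone and data, not the authors' network) of the CNN-features-plus-SVM pipeline: a pretrained CNN produces one feature vector per MR slice, and an SVM does the classification:

```python
import numpy as np
import torch
from torchvision.models import resnet18
from sklearn.svm import SVC

backbone = resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()        # expose the 512-d feature vector
backbone.eval()

def extract_features(batch):             # batch: (N, 3, 224, 224) tensor
    with torch.no_grad():
        return backbone(batch).numpy()

# Hypothetical data: 20 slices, 4 Glioma classes.
x = torch.randn(20, 3, 224, 224)
y = np.random.randint(0, 4, size=20)
clf = SVC(kernel="rbf").fit(extract_features(x), y)
print(clf.score(extract_features(x), y))
```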

DeepTumor: Framework for Brain MR Image Classification, Segmentation and Tumor Detection

  • Latif, G.
Diagnostics (Basel) 2022 Journal Article, cited 1 times
Website
The proper segmentation of a brain tumor from the image is important for both patients and medical personnel due to the sensitivity of the human brain. Operative intervention requires doctors to be extremely cautious and precise in targeting the required portion of the brain. Furthermore, the segmentation process is also important for multi-class tumor classification. This work primarily contributes to three main areas of brain MR image processing for classification and segmentation: brain MR image classification, tumor region segmentation, and tumor classification. A framework named DeepTumor is presented for multistage, multiclass Glioma tumor classification into four classes: Edema, Necrosis, Enhancing, and Non-enhancing. For binary brain MR image classification (tumorous and non-tumorous), two deep Convolutional Neural Network (CNN) models were proposed: a 9-layer model with a total of 217,954 trainable parameters and an improved 10-layer model with a total of 80,243 trainable parameters. In the second stage, an enhanced Fuzzy C-means (FCM)-based technique is proposed for tumor segmentation in brain MR images. In the final stage, an enhanced CNN model (model 3) with 11 hidden layers and a total of 241,624 trainable parameters was proposed for the classification of the segmented tumor region into the four Glioma tumor classes. The experiments are performed using the BraTS MRI dataset. The experimental results of the proposed CNN models for binary classification and multiclass tumor classification are compared with existing CNN models such as LeNet, AlexNet, and GoogleNet, as well as with the latest literature.

Robust semi-automatic segmentation of pulmonary subsolid nodules in chest computed tomography scans

  • Lassen, BC
  • Jacobs, C
  • Kuhnigk, JM
  • van Ginneken, B
  • van Rikxoort, EM
2015 Journal Article, cited 25 times
Website
The malignancy of lung nodules is most often detected by analyzing changes of the nodule diameter in follow-up scans. A recent study showed that comparing the volume or the mass of a nodule over time is much more significant than comparing the diameter. Since the survival rate is higher when the disease is still in an early stage it is important to detect the growth rate as soon as possible. However manual segmentation of a volume is time-consuming. Whereas there are several well evaluated methods for the segmentation of solid nodules, less work is done on subsolid nodules which actually show a higher malignancy rate than solid nodules. In this work we present a fast, semi-automatic method for segmentation of subsolid nodules. As minimal user interaction the method expects a user-drawn stroke on the largest diameter of the nodule. First, a threshold-based region growing is performed based on intensity analysis of the nodule region and surrounding parenchyma. In the next step the chest wall is removed by a combination of a connected component analyses and convex hull calculation. Finally, attached vessels are detached by morphological operations. The method was evaluated on all nodules of the publicly available LIDC/IDRI database that were manually segmented and rated as non-solid or part-solid by four radiologists (Dataset 1) and three radiologists (Dataset 2). For these 59 nodules the Jaccard index for the agreement of the proposed method with the manual reference segmentations was 0.52/0.50 (Dataset 1/Dataset 2) compared to an inter-observer agreement of the manual segmentations of 0.54/0.58 (Dataset 1/Dataset 2). Furthermore, the inter-observer agreement using the proposed method (i.e. different input strokes) was analyzed and gave a Jaccard index of 0.74/0.74 (Dataset 1/Dataset 2). The presented method provides satisfactory segmentation results with minimal observer effort in minimal time and can reduce the inter-observer variability for segmentation of subsolid nodules in clinical routine.
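
A minimal sketch (not the authors' implementation) of the first two steps, threshold-based region growing from a user seed followed by a morphological opening to detach thin vessel connections, using scikit-image (the `footprint` keyword assumes skimage >= 0.19):

```python
import numpy as np
from skimage.segmentation import flood
from skimage.morphology import binary_opening, ball

def grow_nodule(ct_hu, seed, tolerance=150):
    """Grow a region around `seed` within +/- tolerance HU, then open."""
    region = flood(ct_hu, seed_point=seed, tolerance=tolerance)
    return binary_opening(region, footprint=ball(2))

rng = np.random.default_rng(9)
ct = rng.normal(-700, 50, size=(48, 48, 48))    # parenchyma-like HU
ct[20:28, 20:28, 20:28] = rng.normal(-400, 30, size=(8, 8, 8))  # subsolid
print(grow_nodule(ct, seed=(24, 24, 24)).sum())
```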

Integrating deep learning CT-scan model, biological and clinical variables to predict severity of COVID-19 patients

  • Lassau, N.
  • Ammari, S.
  • Chouzenoux, E.
  • Gortais, H.
  • Herent, P.
  • Devilder, M.
  • Soliman, S.
  • Meyrignac, O.
  • Talabard, M. P.
  • Lamarque, J. P.
  • Dubois, R.
  • Loiseau, N.
  • Trichelair, P.
  • Bendjebbar, E.
  • Garcia, G.
  • Balleyguier, C.
  • Merad, M.
  • Stoclin, A.
  • Jegou, S.
  • Griscelli, F.
  • Tetelboum, N.
  • Li, Y.
  • Verma, S.
  • Terris, M.
  • Dardouri, T.
  • Gupta, K.
  • Neacsu, A.
  • Chemouni, F.
  • Sefta, M.
  • Jehanno, P.
  • Bousaid, I.
  • Boursin, Y.
  • Planchet, E.
  • Azoulay, M.
  • Dachary, J.
  • Brulport, F.
  • Gonzalez, A.
  • Dehaene, O.
  • Schiratti, J. B.
  • Schutte, K.
  • Pesquet, J. C.
  • Talbot, H.
  • Pronier, E.
  • Wainrib, G.
  • Clozel, T.
  • Barlesi, F.
  • Bellin, M. F.
  • Blum, M. G. B.
2021 Journal Article, cited 20 times
Website
The SARS-COV-2 pandemic has put pressure on intensive care units, so that identifying predictors of disease severity is a priority. We collect 58 clinical and biological variables, and chest CT scan data, from 1003 coronavirus-infected patients from two French hospitals. We train a deep learning model based on CT scans to predict severity. We then construct the multimodal AI-severity score that includes 5 clinical and biological variables (age, sex, oxygenation, urea, platelet) in addition to the deep learning model. We show that neural network analysis of CT-scans brings unique prognosis information, although it is correlated with other markers of severity (oxygenation, LDH, and CRP) explaining the measurable but limited 0.03 increase of AUC obtained when adding CT-scan information to clinical variables. Here, we show that when comparing AI-severity with 11 existing severity scores, we find significantly improved prognosis performance; AI-severity can therefore rapidly become a reference scoring approach.

4DCT imaging to assess radiomics feature stability: An investigation for thoracic cancers

  • Larue, Ruben THM
  • Van De Voorde, Lien
  • van Timmeren, Janna E
  • Leijenaar, Ralph TH
  • Berbée, Maaike
  • Sosef, Meindert N
  • Schreurs, Wendy MJ
  • van Elmpt, Wouter
  • Lambin, Philippe
Radiotherapy and Oncology 2017 Journal Article, cited 7 times
Website
BACKGROUND AND PURPOSE: Quantitative tissue characteristics derived from medical images, also called radiomics, contain valuable prognostic information in several tumour-sites. The large number of features available increases the risk of overfitting. Typically test-retest CT-scans are used to reduce dimensionality and select robust features. However, these scans are not always available. We propose to use different phases of respiratory-correlated 4D CT-scans (4DCT) as alternative. MATERIALS AND METHODS: In test-retest CT-scans of 26 non-small cell lung cancer (NSCLC) patients and 4DCT-scans (8 breathing phases) of 20 NSCLC and 20 oesophageal cancer patients, 1045 radiomics features of the primary tumours were calculated. A concordance correlation coefficient (CCC) >0.85 was used to identify robust features. Correlation with prognostic value was tested using univariate cox regression in 120 oesophageal cancer patients. RESULTS: Features based on unfiltered images demonstrated greater robustness than wavelet-filtered features. In total 63/74 (85%) unfiltered features and 268/299 (90%) wavelet features stable in the 4D-lung dataset were also stable in the test-retest dataset. In oesophageal cancer 397/1045 (38%) features were robust, of which 108 features were significantly associated with overall-survival. CONCLUSION: 4DCT-scans can be used as alternative to eliminate unstable radiomics features as first step in a feature selection procedure. Feature robustness is tumour-site specific and independent of prognostic value.
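
The robustness filter described here reduces to computing Lin's concordance correlation coefficient (CCC) per feature across two acquisitions (e.g., two breathing phases, or test and retest) and keeping features above the 0.85 cut-off. A minimal sketch, under the assumption that the two acquisitions are given as (patients × features) NumPy arrays:

```python
# Sketch of CCC-based feature robustness selection; `phase_a` and `phase_b`
# are hypothetical (n_patients, n_features) arrays of radiomics features.
import numpy as np

def ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient for paired measurements."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

def robust_feature_mask(phase_a, phase_b, threshold=0.85):
    """Boolean mask of features whose CCC exceeds the robustness threshold."""
    return np.array([ccc(phase_a[:, j], phase_b[:, j]) > threshold
                     for j in range(phase_a.shape[1])])
```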

3D-Printed Tumor Phantoms for Assessment of In Vivo Fluorescence Imaging Analysis Methods

  • LaRochelle, E. P. M.
  • Streeter, S. S.
  • Littler, E. A.
  • Ruiz, A. J.
2022 Journal Article, cited 0 times
Website
PURPOSE: Interventional fluorescence imaging is increasingly being utilized to quantify cancer biomarkers in both clinical and preclinical models, yet absolute quantification is complicated by many factors. The use of optical phantoms has been suggested by multiple professional organizations for quantitative performance assessment of fluorescence guidance imaging systems. This concept can be further extended to provide standardized tools to compare and assess image analysis metrics. PROCEDURES: 3D-printed fluorescence phantoms based on solid tumor models were developed with representative bio-mimicking optical properties. Phantoms were produced with discrete tumors embedded with an NIR fluorophore of fixed concentration and either zero or 3% non-specific fluorophore in the surrounding material. These phantoms were first imaged by two fluorescence imaging systems using two methods of image segmentation, and four assessment metrics were calculated to demonstrate variability in the quantitative assessment of system performance. The same analysis techniques were then applied to one tumor model with decreasing tumor fluorophore concentrations. RESULTS: These anatomical phantom models demonstrate the ability to use 3D printing to manufacture anthropomorphic shapes with a wide range of reduced scattering (μs′: 0.24–1.06 mm⁻¹) and absorption (μa: 0.005–0.14 mm⁻¹) properties. The phantom imaging and analysis highlight variability in the measured sensitivity metrics associated with tumor visualization. CONCLUSIONS: 3D printing techniques provide a platform for demonstrating complex biological models that introduce real-world complexities for quantifying fluorescence image data. Controlled iterative development of these phantom designs can be used as a tool to advance the field and provide context for consensus-building beyond performance assessment of fluorescence imaging platforms, and extend support for standardizing how quantitative metrics are extracted from imaging data and reported in literature.

Conditional random fields improve the CNN-based prostate cancer classification performance

  • Lapa, Paulo Alberto Fernandes
2019 Thesis, cited 0 times
Website
Prostate cancer is a condition with life-threatening implications but without clear causes yet identified. Several diagnostic procedures can be used, ranging from human dependent and very invasive to using state of the art non-invasive medical imaging. With recent academic and industry focus on the deep learning field, novel research has been performed on how to improve prostate cancer diagnosis using Convolutional Neural Networks to interpret Magnetic Resonance images. Conditional Random Fields have achieved outstanding results in the image segmentation task, by promoting homogeneous classification at the pixel level. A new implementation, CRF-RNN, defines Conditional Random Fields by means of convolutional layers, allowing the end to end training of the feature extractor and classifier models. This work tries to repurpose CRFs for the image classification task, a more traditional sub-field of imaging analysis, in a way that, to the best of the author's knowledge, has not been implemented before. To achieve this, a purpose-built architecture was refitted, adding a CRF layer as a feature extractor step. To serve as the implementation's benchmark, a multi-parametric Magnetic Resonance Imaging dataset was used, initially provided for the PROSTATEx Challenge 2017 and collected by the Radboud University. The results are very promising, showing an increase in the network's classification quality.

Semantic learning machine improves the CNN-Based detection of prostate cancer in non-contrast-enhanced MRI

  • Lapa, Paulo
  • Gonçalves, Ivo
  • Rundo, Leonardo
  • Castelli, Mauro
2019 Conference Proceedings, cited 0 times
Website
Considering that Prostate Cancer (PCa) is the most frequently diagnosed tumor in Western men, considerable attention has been devoted to computer-assisted PCa detection approaches. However, this task still represents an open research question. In clinical practice, multiparametric Magnetic Resonance Imaging (MRI) is becoming the most used modality, aiming at defining biomarkers for PCa. In recent years, deep learning techniques have boosted the performance in prostate MR image analysis and classification. This work explores the use of the Semantic Learning Machine (SLM) neuroevolution algorithm to replace the backpropagation algorithm commonly used in the last fully-connected layers of Convolutional Neural Networks (CNNs). We analyzed the non-contrast-enhanced multispectral MRI sequences included in the PROSTATEx dataset, namely: T2-weighted, Proton Density weighted, and Diffusion Weighted Imaging. The experimental results show that the SLM significantly outperforms XmasNet, a state-of-the-art CNN. In particular, with respect to XmasNet, the SLM achieves higher classification accuracy (without either pre-training the underlying CNN or relying on backpropagation) as well as a speed-up of one order of magnitude.

A Hybrid End-to-End Approach Integrating Conditional Random Fields into CNNs for Prostate Cancer Detection on MRI

  • Lapa, Paulo
  • Castelli, Mauro
  • Gonçalves, Ivo
  • Sala, Evis
  • Rundo, Leonardo
Applied Sciences 2020 Journal Article, cited 0 times

A Deep Learning-Based Radiomics Model for Prediction of Survival in Glioblastoma Multiforme

  • Lao, Jiangwei
  • Chen, Yinsheng
  • Li, Zhi-Cheng
  • Li, Qihua
  • Zhang, Ji
  • Liu, Jing
  • Zhai, Guangtao
Scientific Reports 2017 Journal Article, cited 32 times
Website
Traditional radiomics models mainly rely on explicitly-designed handcrafted features from medical images. This paper aimed to investigate if deep features extracted via transfer learning can generate radiomics signatures for prediction of overall survival (OS) in patients with Glioblastoma Multiforme (GBM). This study comprised a discovery data set of 75 patients and an independent validation data set of 37 patients. A total of 1403 handcrafted features and 98304 deep features were extracted from preoperative multi-modality MR images. After feature selection, a six-deep-feature signature was constructed by using the least absolute shrinkage and selection operator (LASSO) Cox regression model. A radiomics nomogram was further presented by combining the signature and clinical risk factors such as age and Karnofsky Performance Score. Compared with traditional risk factors, the proposed signature achieved better performance for prediction of OS (C-index = 0.710, 95% CI: 0.588, 0.932) and significant stratification of patients into prognostically distinct groups (P < 0.001, HR = 5.128, 95% CI: 2.029, 12.960). The combined model achieved improved predictive performance (C-index = 0.739). Our study demonstrates that transfer learning-based deep features are able to generate prognostic imaging signature for OS prediction and patient stratification for GBM, indicating the potential of deep imaging feature-based biomarker in preoperative care of GBM patients.
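
A sparse Cox signature of the kind described above can be illustrated with an L1-penalised Cox model. The sketch below uses the lifelines library as a stand-in (the authors' exact pipeline is not reproduced); the CSV file and its survival columns are hypothetical:

```python
# Illustrative sketch (not the authors' code): selecting a sparse set of
# deep features with a LASSO-penalised Cox model via lifelines.
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical table: one feature column per deep feature, plus survival
# time in months ('os_months') and event indicator ('event', 1 = death).
df = pd.read_csv("deep_features_with_survival.csv")

cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)  # pure L1 (LASSO) penalty
cph.fit(df, duration_col="os_months", event_col="event")

# Features whose coefficients survive the penalty form the signature.
signature = cph.params_[cph.params_.abs() > 1e-6]
print(signature)
```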

A Roadmap for Foundational Research on Artificial Intelligence in Medical Imaging: From the 2018 NIH/RSNA/ACR/The Academy Workshop

  • Langlotz, Curtis P
  • Allen, Bibb
  • Erickson, Bradley J
  • Kalpathy-Cramer, Jayashree
  • Bigelow, Keith
  • Cook, Tessa S
  • Flanders, Adam E
  • Lungren, Matthew P
  • Mendelson, David S
  • Rudie, Jeffrey D
  • Wang, Ge
  • Kandarpa, Krishna
Radiology 2019 Journal Article, cited 1 times
Website
Imaging research laboratories are rapidly creating machine learning systems that achieve expert human performance using open-source methods and tools. These artificial intelligence systems are being developed to improve medical image reconstruction, noise reduction, quality assurance, triage, segmentation, computer-aided detection, computer-aided classification, and radiogenomics. In August 2018, a meeting was held in Bethesda, Maryland, at the National Institutes of Health to discuss the current state of the art and knowledge gaps and to develop a roadmap for future research initiatives. Key research priorities include: 1, new image reconstruction methods that efficiently produce images suitable for human interpretation from source data; 2, automated image labeling and annotation methods, including information extraction from the imaging report, electronic phenotyping, and prospective structured image reporting; 3, new machine learning methods for clinical imaging data, such as tailored, pretrained model architectures, and federated machine learning methods; 4, machine learning methods that can explain the advice they provide to human users (so-called explainable artificial intelligence); and 5, validated methods for image de-identification and data sharing to facilitate wide availability of clinical imaging data sets. This research roadmap is intended to identify and prioritize these needs for academic research laboratories, funding agencies, professional societies, and industry.

Collaborative and Reproducible Research: Goals, Challenges, and Strategies

  • Langer, S. G.
  • Shih, G.
  • Nagy, P.
  • Landman, B. A.
J Digit Imaging 2018 Journal Article, cited 1 times
Website
Combining imaging biomarkers with genomic and clinical phenotype data is the foundation of precision medicine research efforts. Yet, biomedical imaging research requires unique infrastructure compared with principally text-driven clinical electronic medical record (EMR) data. The issues are related to the binary nature of the file format and transport mechanism for medical images as well as the post-processing image segmentation and registration needed to combine anatomical and physiological imaging data sources. The SIIM Machine Learning Committee was formed to analyze the gaps and challenges surrounding research into machine learning in medical imaging and to find ways to mitigate these issues. At the 2017 annual meeting, a whiteboard session was held to rank the most pressing issues and develop strategies to meet them. The results, and further reflections, are summarized in this paper.

A Video Data Based Transfer Learning Approach for Classification of MGMT Status in Brain Tumor MR Images

  • Lang, D. M.
  • Peeken, J. C.
  • Combs, S. E.
  • Wilkens, J. J.
  • Bartzsch, S.
2022 Book Section, cited 0 times
Patient MGMT (O6 methylguanine DNA methyltransferase) status has been identified as essential for the responsiveness to chemotherapy in glioblastoma patients and therefore represents an important clinical factor. Testing for MGMT methylation is invasive, time-consuming and costly, and lacks a uniform gold standard. We studied MGMT status assessment by multi-parametric magnetic resonance imaging (mpMRI) scans and tested the ability of deep learning to perform this classification task. To overcome the limited number of training examples we used a transfer learning approach based on the video clip classification network C3D [30], allowing for full exploitation of three-dimensional information in the MR images. MRI sequences were fused using a locally connected layer. Our approach was able to differentiate MGMT methylated from unmethylated patients with an area under the receiver operating characteristics curve (AUC) of 0.689 for the public validation set. On the private test set the AUC was 0.577. Further studies for assessment of clinical importance and predictive power in terms of survival are needed.

A simple texture feature for retrieval of medical images

  • Lan, Rushi
  • Zhong, Si
  • Liu, Zhenbing
  • Shi, Zhuo
  • Luo, Xiaonan
Multimedia Tools and Applications 2017 Journal Article, cited 2 times
Website
Texture characteristic is an important attribute of medical images, and has been applied in many medical image applications. This paper proposes a simple approach to employ the texture features of medical images for retrieval. The developed approach first conducts image filtering to medical images using different Gabor and Schmid filters, and then uniformly partitions the filtered images into non-overlapping patches. These operations provide extensive local texture information of medical images. The bag-of-words model is finally used to obtain feature representations of the images. Compared with several existing features, the proposed one is more discriminative and efficient. Experiments on two benchmark medical CT image databases have demonstrated the effectiveness of the proposed approach.
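
The described pipeline (filter bank, non-overlapping patches, bag-of-words) can be sketched roughly as follows. The Gabor frequencies and orientations, patch size, and codebook size are illustrative assumptions, and the Schmid filters are omitted:

```python
# Sketch of a Gabor filter-bank + bag-of-words texture descriptor.
import numpy as np
from skimage.filters import gabor
from sklearn.cluster import KMeans

def filter_bank_responses(image):
    """Stack real Gabor responses over a few frequencies/orientations."""
    responses = []
    for frequency in (0.1, 0.2, 0.4):
        for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
            real, _ = gabor(image, frequency=frequency, theta=theta)
            responses.append(real)
    return np.stack(responses, axis=-1)            # (H, W, 12)

def patch_descriptors(image, patch=16):
    """Mean filter response per non-overlapping patch -> local texture vector."""
    resp = filter_bank_responses(image)
    h, w = resp.shape[0] // patch, resp.shape[1] // patch
    vecs = [resp[i*patch:(i+1)*patch, j*patch:(j+1)*patch].mean(axis=(0, 1))
            for i in range(h) for j in range(w)]
    return np.array(vecs)

def bag_of_words(images, n_words=64):
    """Cluster patch vectors into a codebook; one word histogram per image."""
    all_vecs = np.vstack([patch_descriptors(im) for im in images])
    codebook = KMeans(n_clusters=n_words, n_init=10).fit(all_vecs)
    hists = [np.bincount(codebook.predict(patch_descriptors(im)),
                         minlength=n_words) for im in images]
    return np.array(hists, dtype=float)
```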

Textural Analysis of Tumour Imaging: A Radiomics Approach

  • Lambrecht, Joren
2017 Thesis, cited 0 times
Website
Conventionally, tumour characteristics are assessed by performing a biopsy. These biopsies are invasive and subject to the problem of tumour heterogeneity. However, analysis of imaging data may render the need for such biopsies obsolete. This master's dissertation describes in what manner images of tumour masses can be post-processed to classify the tumours into a variety of respective clinical response classes. Tumour images obtained using both computed tomography and magnetic resonance imaging are analysed. The analysis of these images is done using a radiomics approach. This approach will convert the imaging data into a high-dimensional mineable feature space. The features considered are first-order statistics, texture features, wavelet-based features and shape parameters. Post-processing techniques applied on this feature space include k-means clustering, assessment of stability and prognostic performance, and machine learning techniques. Both random forests and neural networks are included. Results from these analyses show that the radiomics features can be correlated with different clinical response classes as well as serve as input data to create predictive models with correct prediction rates of up to 63.9% in CT and 66.0% in MRI. Furthermore, a radiomics signature can be created that consists of four features and is capable of predicting clinical response factors with almost the same accuracy as obtained using the entire data space. Keywords: Radiomics, texture analysis, lung tumour, CT, brain tumour, MRI, clustering, random forest, neural network, machine learning, radiomics signature, biopsy, tumour heterogeneity

Acute Tumor Transition Angle on Computed Tomography Predicts Chromosomal Instability Status of Primary Gastric Cancer: Radiogenomics Analysis from TCGA and Independent Validation

  • Lai, Ying-Chieh
  • Yeh, Ta-Sen
  • Wu, Ren-Chin
  • Tsai, Cheng-Kun
  • Yang, Lan-Yan
  • Lin, Gigin
  • Kuo, Michael D
Cancers 2019 Journal Article, cited 0 times
Website
Chromosomal instability (CIN) of gastric cancer is correlated with distinct outcomes. This study aimed to investigate the role of computed tomography (CT) imaging traits in predicting the CIN status of gastric cancer. We screened 443 patients in the Cancer Genome Atlas gastric cancer cohort to filter 40 patients with complete CT imaging and genomic data as the training cohort. CT imaging traits were subjected to logistic regression to select independent predictors for the CIN status. For the validation cohort, we prospectively enrolled 18 gastric cancer patients for CT and tumor genomic analysis. The imaging predictors were tested in the validation cohort using receiver operating characteristic curve (ROC) analysis. Thirty patients (75%) in the training cohort and 9 patients (50%) in the validation cohort had CIN subtype gastric cancers. Smaller tumor diameter (p = 0.017) and acute tumor transition angle (p = 0.045) independently predict CIN status in the training cohort. In the validation cohort, acute tumor transition angle demonstrated the highest accuracy, sensitivity, and specificity of 88.9%, 88.9%, and 88.9%, respectively, and areas under ROC curve of 0.89. In conclusion, this pilot study showed acute tumor transition angle on CT images may predict the CIN status of gastric cancer.

Binary Classification for Lung Nodule Based on Channel Attention Mechanism

  • Lai, Khai Dinh
  • Le, Thai Hoang
  • Nguyen, Thuy Thanh
2021 Conference Proceedings, cited 0 times
Website
In order to effectively handle the problem of tumor detection on the LUNA16 dataset, we present a new methodology for data augmentation to address the issue of imbalance between the number of positive and negative candidates in this study. Furthermore, a new deep learning model - ASS (a model that combines Convnet sub-attention with Softmax loss) is also proposed and evaluated on patches with different sizes of the LUNA16. Data enrichment techniques are implemented in two ways: off-line augmentation increases the number of images based on the image under consideration, and on-line augmentation increases the number of images by rotating the image at four angles (0°, 90°, 180°, and 270°). We build candidate boxes of various sizes based on the coordinates of each candidate, and these candidate boxes are used to demonstrate the usefulness of the suggested ASS model. The results of cross-testing (with four cases: case 1, ASS trained and tested on a dataset of size 50 × 50; case 2, using ASS trained on a dataset of size 50 × 50 to test a dataset of size 100 × 100; case 3, ASS trained and tested on a dataset of size 100 × 100 and case 4, using ASS trained on a dataset of size 100 × 100 to test a dataset of size 50 × 50) show that the proposed ASS model is feasible.
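
The on-line augmentation step, rotating each candidate patch through the four axis-aligned angles, is nearly a one-liner in NumPy; a minimal sketch, assuming `patch` is a 2D array cropped around the candidate coordinates:

```python
# Sketch of the "on-line" augmentation described above: each candidate
# patch is rotated by 0, 90, 180 and 270 degrees.
import numpy as np

def four_angle_augment(patch: np.ndarray):
    """Return the four axis-aligned rotations of a candidate patch."""
    return [np.rot90(patch, k) for k in range(4)]
```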

A radiogenomic multimodal and whole-transcriptome sequencing for preoperative prediction of axillary lymph node metastasis and drug therapeutic response in breast cancer: a retrospective, machine learning and international multi-cohort study

  • Lai, J.
  • Chen, Z.
  • Liu, J.
  • Zhu, C.
  • Huang, H.
  • Yi, Y.
  • Cai, G.
  • Liao, N.
2024 Journal Article, cited 0 times
Website
BACKGROUND: Axillary lymph node (ALN) status serves as a crucial prognostic indicator in breast cancer (BC). The aim of this study was to construct a radiogenomic multimodal model, based on machine learning (ML) and whole-transcriptome sequencing (WTS), to accurately evaluate, preoperatively, the risk of ALN metastasis (ALNM) and drug therapeutic response, and to avoid unnecessary axillary surgery in BC patients. METHODS: In this study, we conducted a retrospective analysis of 1078 BC patients from The Cancer Genome Atlas (TCGA), The Cancer Imaging Archive (TCIA), and the Foshan cohort. These patients were divided into the TCIA cohort (N=103), TCIA validation cohort (N=51), Duke cohort (N=138), Foshan cohort (N=106), and TCGA cohort (N=680). Radiological features were extracted from BC radiological images and differential gene expression was calibrated using WTS technology. A support vector machine (SVM) model was employed to screen radiological and genetic features, and a multimodal model was established based on radiogenomic and clinical pathological features to predict ALNM and stratify risk. The accuracy of the model predictions was assessed using the area under the curve (AUC) and the clinical benefit was measured using decision curve analysis (DCA). Risk stratification analysis of BC patients was performed by gene set enrichment analysis (GSEA), differential comparison of immune checkpoint gene expression, and drug sensitivity testing. RESULTS: For the prediction of ALNM, the rad-score was able to significantly differentiate between ALN- and ALN+ patients in both the Duke and Foshan cohorts (P<0.05). Similarly, the gene-score was able to significantly differentiate between ALN- and ALN+ patients in the TCGA cohort (P<0.05). The radiogenomic multimodal nomogram demonstrated satisfactory performance in the TCIA cohort (AUC 0.82, 95% CI: 0.74-0.91) and TCIA validation cohort (AUC 0.77, 95% CI: 0.63-0.91). In the risk sub-stratification analysis, there were significant differences in gene pathway enrichment between high and low-risk groups (P<0.05). Additionally, different risk groups may exhibit varying treatment responses to chemotherapy (including Doxorubicin, Methotrexate and Lapatinib) (P<0.05). CONCLUSION: Overall, the radiogenomic multimodal model employs multimodal data, including radiological images, genetic, and clinicopathological typing. The radiogenomic multimodal nomogram can precisely predict ALNM and drug therapeutic response in BC patients.

Knowledge Distillation for Brain Tumor Segmentation

  • Lachinov, Dmitrii
  • Shipunova, Elena
  • Turlapov, Vadim
2020 Book Section, cited 0 times
The segmentation of brain tumors in multimodal MRIs is one of the most challenging tasks in medical image analysis. The recent state of the art algorithms solving this task are based on machine learning approaches and deep learning in particular. The amount of data used for training such models and its variability is a keystone for building an algorithm with high representation power. In this paper, we study the relationship between the performance of the model and the amount of data employed during the training process. On the example of brain tumor segmentation challenge, we compare the model trained with labeled data provided by challenge organizers, and the same model trained in omni-supervised manner using additional unlabeled data annotated with the ensemble of heterogeneous models. As a result, a single model trained with additional data achieves performance close to the ensemble of multiple models and outperforms individual methods.

Diagnostic Accuracy and Reliability of Deep Learning-Based Human Papillomavirus Status Prediction in Oropharyngeal Cancer

  • La Greca Saint-Esteven, Agustina
  • Marchiori, Chiara
  • Bogowicz, Marta
  • Barranco-García, Javier
  • Khodabakhshi, Zahra
  • Konukoglu, Ender
  • Riesterer, Oliver
  • Balermpas, Panagiotis
  • Hüllner, Martin
  • Malossi, A. Cristiano I.
  • Guckenberger, Matthias
  • van Timmeren, Janita E.
  • Tanadini-Lang, Stephanie
2023 Book Section, cited 0 times
Website
Oropharyngeal cancer (OPC) patients with associated human papillomavirus (HPV) infection generally present more favorable outcomes than HPV-negative patients and, consequently, their treatment with radiation therapy may be potentially de-escalated. The diagnostic accuracy of a deep learning (DL) model to predict HPV status on computed tomography (CT) images was evaluated in this study, together with its ability to perform unsupervised heatmap-based localization of relevant regions in OPC and HPV infection, i.e., the primary tumor and lymph nodes, as a measure of its reliability. The dataset consisted of 767 patients from one internal and two public collections from The Cancer Imaging Archive and was split into training, validation and test sets using the ratio 60–20–20. Images were resampled to a resolution of 2 mm³ and a sub-volume of 96 pixels³ was automatically cropped, which spanned from the nose until the start of the lungs. Models Genesis was fine-tuned for the classification task. Grad-CAM and Score-CAM were applied to the test subjects that belonged to the internal cohort (n = 24), and the overlap and Dice coefficients between the resulting heatmaps and the planning target volumes (PTVs) were calculated. Final train/validation/test area-under-the-curve (AUC) values of 0.9/0.87/0.87, accuracies of 0.83/0.82/0.79, and F1-scores of 0.83/0.79/0.74 were achieved. The reliability analysis showed an increased focus on dental artifacts in HPV-positive patients, whereas promising overlaps and moderate Dice coefficients with the PTVs were obtained for HPV-negative cases. These findings prove the necessity of performing reliability studies before a DL model is implemented in a real clinical setting, even if there is optimal diagnostic accuracy.
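
The heatmap-based reliability measurement, overlap and Dice between a saliency map and the PTV, can be sketched as below; the relative binarisation threshold is an assumption, not the study's setting:

```python
# Sketch: binarise a Grad-CAM-style heatmap and compare it with the
# planning target volume (PTV) mask via overlap and Dice coefficients.
import numpy as np

def heatmap_vs_ptv(heatmap, ptv_mask, threshold=0.5):
    """Overlap (fraction of PTV covered) and Dice of the salient region."""
    hot = heatmap >= threshold * heatmap.max()   # binarised salient region
    ptv = ptv_mask.astype(bool)
    inter = np.logical_and(hot, ptv).sum()
    overlap = inter / max(ptv.sum(), 1)
    dice = 2 * inter / max(hot.sum() + ptv.sum(), 1)
    return overlap, dice
```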

A 2.5D convolutional neural network for HPV prediction in advanced oropharyngeal cancer

  • La Greca Saint-Esteven, A.
  • Bogowicz, M.
  • Konukoglu, E.
  • Riesterer, O.
  • Balermpas, P.
  • Guckenberger, M.
  • Tanadini-Lang, S.
  • van Timmeren, J. E.
Comput Biol Med 2022 Journal Article, cited 0 times
Website
BACKGROUND: Infection with human papilloma virus (HPV) is one of the most relevant prognostic factors in advanced oropharyngeal cancer (OPC) treatment. In this study we aimed to assess the diagnostic accuracy of a deep learning-based method for HPV status prediction in computed tomography (CT) images of advanced OPC. METHOD: An internal dataset and three public collections were employed (internal: n = 151; HNC1: n = 451; HNC2: n = 80; HNC3: n = 110). Internal and HNC1 datasets were used for training, whereas HNC2 and HNC3 collections were used as external test cohorts. All CT scans were resampled to a 2 mm³ resolution and a sub-volume of 72x72x72 pixels was cropped on each scan, centered around the tumor. Then, a 2.5D input of size 72x72x3 pixels was assembled by selecting the 2D slice containing the largest tumor area along the axial, sagittal and coronal planes, respectively. The convolutional neural network employed consisted of the first 5 modules of the Xception model and a small classification network. Ten-fold cross-validation was applied to evaluate training performance. At test time, soft majority voting was used to predict HPV status. RESULTS: A final training mean [range] area under the curve (AUC) of 0.84 [0.76-0.89], accuracy of 0.76 [0.64-0.83] and F1-score of 0.74 [0.62-0.83] were achieved. AUC/accuracy/F1-score values of 0.83/0.75/0.69 and 0.88/0.79/0.68 were achieved on the HNC2 and HNC3 test sets, respectively. CONCLUSION: Deep learning was successfully applied and validated in two external cohorts to predict HPV status in CT images of advanced OPC, proving its potential as a support tool in cancer precision medicine.
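
The 2.5D input assembly described here, picking per anatomical plane the slice with the largest tumour area, can be sketched in NumPy, assuming a cropped 72×72×72 sub-volume and a matching binary tumour mask:

```python
# Sketch of 2.5D input assembly: one slice per anatomical plane, chosen
# as the slice with the largest tumour cross-section, stacked as channels.
import numpy as np

def assemble_2p5d(volume: np.ndarray, mask: np.ndarray) -> np.ndarray:
    planes = []
    for axis in range(3):                       # axial, coronal, sagittal
        # Tumour area per slice along this axis.
        areas = mask.sum(axis=tuple(a for a in range(3) if a != axis))
        idx = int(np.argmax(areas))             # slice with most tumour
        planes.append(np.take(volume, idx, axis=axis))
    return np.stack(planes, axis=-1)            # (72, 72, 3) network input
```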

Combining Generative Models for Multifocal Glioma Segmentation and Registration

  • Kwon, Dongjin
  • Shinohara, Russell T
  • Akbari, Hamed
  • Davatzikos, Christos
2014 Book Section, cited 55 times
Website
In this paper, we propose a new method for simultaneously segmenting brain scans of glioma patients and registering these scans to a normal atlas. Performing joint segmentation and registration for brain tumors is very challenging when tumors include multifocal masses and have complex shapes with heterogeneous textures. Our approach grows tumors for each mass from multiple seed points using a tumor growth model and modifies a normal atlas into one with tumors and edema using the combined results of grown tumors. We also generate a tumor shape prior via the random walk with restart, utilizing multiple tumor seeds as initial foreground information. We then incorporate this shape prior into an EM framework which estimates the mapping between the modified atlas and the scans, posteriors for each tissue labels, and the tumor growth model parameters. We apply our method to the BRATS 2013 leaderboard dataset to evaluate segmentation performance. Our method shows the best performance among all participants.

COVID-19 Lesion Segmentation Framework for the Contrast-Enhanced CT in the Absence of Contrast-Enhanced CT Annotations

  • Kvasnytsia, Maryna
  • Berenguer, Abel Díaz
  • Sahli, Hichem
  • Vandemeulebroucke, Jef
2023 Book Section, cited 0 times
Website
Medical imaging is a dynamic domain where new acquisition protocols are regularly developed and employed to meet changing clinical needs. Deep learning models for medical image segmentation have proven to be a valuable tool for medical image processing. Creating such a model from scratch requires a lot of effort in terms of annotating new types of data and model training. Therefore, the amount of annotated training data for the new imaging protocol might still be limited. In this work we propose a framework for segmentation of images acquired with a new imaging protocol (contrast-enhanced lung CT) that does not require annotating training data in the new target domain. Instead, the framework leverages the previously developed models, data and annotations in a related source domain. Using contrast-enhanced lung CT as the target data, we demonstrate that unpaired image translation from the non-contrast-enhanced source data, combined with self-supervised pretraining, achieves a 0.726 Dice score for the COVID-19 lesion segmentation task on the target data, without the necessity to annotate any target data for the model training.

Conditional Generative Adversarial Networks for low-dose CT image denoising aiming at preservation of critical image content

  • Kusters, K. C.
  • Zavala-Mondragon, L. A.
  • Bescos, J. O.
  • Rongen, P.
  • de With, P. H. N.
  • van der Sommen, F.
Annu Int Conf IEEE Eng Med Biol Soc 2021 Journal Article, cited 0 times
Website
X-ray Computed Tomography (CT) is an imaging modality where patients are exposed to potentially harmful ionizing radiation. To limit patient risk, reduced-dose protocols are desirable, which inherently lead to an increased noise level in the reconstructed CT scans. Consequently, noise reduction algorithms are indispensable in the reconstruction processing chain. In this paper, we propose to leverage a conditional Generative Adversarial Networks (cGAN) model, to translate CT images from low-to-routine dose. However, when aiming to produce realistic images, such generative models may alter critical image content. Therefore, we propose to employ a frequency-based separation of the input prior to applying the cGAN model, in order to limit the cGAN to high-frequency bands, while leaving low-frequency bands untouched. The results of the proposed method are compared to a state-of-the-art model within the cGAN model as well as in a single-network setting. The proposed method generates visually superior results compared to the single-network model and the cGAN model in terms of quality of texture and preservation of fine structural details. It also appeared that the PSNR, SSIM and TV metrics are less important than a careful visual evaluation of the results. The obtained results demonstrate the relevance of defining and separating the input image into desired and undesired content, rather than blindly denoising entire images. This study shows promising results for further investigation of generative models towards finding a reliable deep learning-based noise reduction algorithm for low-dose CT acquisition.
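
The frequency-based separation idea, restricting the generative model to the high-frequency band while passing the low-frequency band through untouched, can be sketched with a simple Gaussian split; the cut-off sigma and the `denoiser` callable are assumptions for illustration:

```python
# Sketch: split a CT image into low- and high-frequency bands, denoise
# only the high band, and recombine.
import numpy as np
from scipy.ndimage import gaussian_filter

def split_bands(image: np.ndarray, sigma: float = 3.0):
    low = gaussian_filter(image, sigma=sigma)   # low-frequency content
    high = image - low                          # residual high-frequency band
    return low, high

def denoise_high_only(image, denoiser, sigma=3.0):
    """Apply a denoising model to the high band only, leaving low untouched."""
    low, high = split_bands(image, sigma)
    return low + denoiser(high)
```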

Semi-Supervised Learning with Pseudo-Labeling for Pancreatic Cancer Detection on CT Scans

  • Kurasova, Olga
  • Medvedev, Viktor
  • Šubonienė, Aušra
  • Dzemyda, Gintautas
  • Gulla, Aistė
  • Samuilis, Artūras
  • Jagminas, Džiugas
  • Strupas, Kęstutis
2023 Conference Paper, cited 0 times
Deep learning techniques have recently gained increasing attention not only among computer science researchers but are also being applied in a wide range of fields. However, deep learning models demand huge amounts of data. Furthermore, fully supervised learning requires labeled data to solve classification, recognition, and segmentation problems. Data labeling and annotation in the medical domain are time-consuming and labor-intensive. Semi-supervised learning has demonstrated the ability to improve deep learning performance when labeled data is scarce. However, it is still an open and challenging question on how to leverage not only labeled data but also the huge amount of unlabeled data. In this paper, the problem of pancreatic cancer detection on CT scans is addressed by a semi-supervised learning approach based on pseudo-labeling. Preliminary results are promising and show the potential of semi-supervised deep learning to detect pancreatic cancer at an early stage with a limited amount of labeled data.
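
The core of pseudo-labeling, keeping only confident model predictions on unlabelled scans as training targets for the next round, can be sketched in PyTorch; the model, loader, and 0.95 confidence threshold below are assumptions, not the authors' settings:

```python
# Sketch of a pseudo-labeling pass: confident predictions on unlabelled
# data become labels for the next supervised training round.
import torch

@torch.no_grad()
def make_pseudo_labels(model, unlabeled_loader, threshold=0.95):
    model.eval()
    pseudo = []
    for x in unlabeled_loader:                  # batches of unlabelled scans
        probs = torch.softmax(model(x), dim=1)
        conf, labels = probs.max(dim=1)
        keep = conf >= threshold                # keep only confident predictions
        if keep.any():
            pseudo.append((x[keep], labels[keep]))
    return pseudo  # appended to the labelled set for the next epoch
```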

Multi-center validation of an artificial intelligence system for detection of COVID-19 on chest radiographs in symptomatic patients

  • Kuo, M. D.
  • Chiu, K. W. H.
  • Wang, D. S.
  • Larici, A. R.
  • Poplavskiy, D.
  • Valentini, A.
  • Napoli, A.
  • Borghesi, A.
  • Ligabue, G.
  • Fang, X. H. B.
  • Wong, H. K. C.
  • Zhang, S.
  • Hunter, J. R.
  • Mousa, A.
  • Infante, A.
  • Elia, L.
  • Golemi, S.
  • Yu, L. H. P.
  • Hui, C. K. M.
  • Erickson, B. J.
Eur Radiol 2022 Journal Article, cited 0 times
Website
OBJECTIVES: While chest radiograph (CXR) is the first-line imaging investigation in patients with respiratory symptoms, differentiating COVID-19 from other respiratory infections on CXR remains challenging. We developed and validated an AI system for COVID-19 detection on presenting CXR. METHODS: A deep learning model (RadGenX), trained on 168,850 CXRs, was validated on a large international test set of presenting CXRs of symptomatic patients from 9 study sites (US, Italy, and Hong Kong SAR) and 2 public datasets from the US and Europe. Performance was measured by area under the receiver operator characteristic curve (AUC). Bootstrapped simulations were performed to assess performance across a range of potential COVID-19 disease prevalence values (3.33 to 33.3%). Comparison against international radiologists was performed on an independent test set of 852 cases. RESULTS: RadGenX achieved an AUC of 0.89 on 4-fold cross-validation and an AUC of 0.79 (95%CI 0.78-0.80) on an independent test cohort of 5,894 patients. DeLong's test showed statistical differences in model performance across patients from different regions (p < 0.01), disease severity (p < 0.001), gender (p < 0.001), and age (p = 0.03). Prevalence simulations showed the negative predictive value increases from 86.1% at 33.3% prevalence, to greater than 98.5% at any prevalence below 4.5%. Compared with radiologists, McNemar's test showed the model has higher sensitivity (p < 0.001) but lower specificity (p < 0.001). CONCLUSION: An AI model that predicts COVID-19 infection on CXR in symptomatic patients was validated on a large international cohort providing valuable context on testing and performance expectations for AI systems that perform COVID-19 prediction on CXR. KEY POINTS: * An AI model developed using CXRs to detect COVID-19 was validated in a large multi-center cohort of 5,894 patients from 9 prospectively recruited sites and 2 public datasets. * Differences in AI model performance were seen across region, disease severity, gender, and age. * Prevalence simulations on the international test set demonstrate the model's NPV is greater than 98.5% at any prevalence below 4.5%.
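
The prevalence simulation reported above follows directly from Bayes' rule once sensitivity and specificity are fixed; a small sketch with placeholder operating-point values (not the paper's):

```python
# Sketch: negative predictive value (NPV) as a function of disease
# prevalence, given a fixed sensitivity and specificity.
import numpy as np

def npv(sensitivity, specificity, prevalence):
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    return true_neg / (true_neg + false_neg)

# Placeholder operating point; sweep the prevalence range used in the study.
for p in np.linspace(0.0333, 0.333, 7):
    print(f"prevalence {p:5.1%}  ->  NPV {npv(0.85, 0.70, p):.1%}")
```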

Circular LSTM for Low-Dose Sinograms Inpainting

  • Kuo, Chin
  • Wei, Tzu-Ti
  • Chen, Jen-Jee
  • Tseng, Yu-Chee
IEEE Access 2023 Journal Article, cited 0 times
Computed tomography (CT) is usually accompanied by a long scanning time and substantial patient radiation exposure. Sinograms are the basis for constructing CT scans; however, continuous sinograms may highly overlap, resulting in extra radiation exposure. This paper proposes a deep learning model to inpaint a sparse-view sinogram sequence. Because a sinogram sequence around the human body is circular in nature, we propose a circular LSTM (CirLSTM) architecture that feeds position-relevant information to our model. To evaluate the performance of our proposed method, we compared the results of our inpainted sinograms with ground truth sinograms using evaluation metrics, including the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). The SSIM values for both our proposed method and the state-of-the-art method range from 0.998 to 0.999, indicating that the prediction of structures is not challenging for either method. Our proposed CirLSTM achieves PSNR values ranging from 49 to 52, outperforming all the other compared methods. These results demonstrate the feasibility of using only interleaved sinograms to construct a complete sinogram sequence and to generate high-quality CT images. Furthermore, we validated the proposed model across different body portions and CT machine models. The results show that CirLSTM outperforms all other methods in both the across-body segment validation and across-machine validation scenarios.
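
The PSNR and SSIM scores used for evaluation can be computed with scikit-image; a minimal sketch, assuming ground-truth and inpainted sinograms scaled to a common data range:

```python
# Sketch: PSNR and SSIM between a reference sinogram and an inpainted one.
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def sinogram_scores(reference, inpainted, data_range=1.0):
    psnr = peak_signal_noise_ratio(reference, inpainted, data_range=data_range)
    ssim = structural_similarity(reference, inpainted, data_range=data_range)
    return psnr, ssim
```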

A deep learning-based framework (Co-ReTr) for auto-segmentation of non-small cell lung cancer in computed tomography images

  • Kunkyab, T.
  • Bahrami, Z.
  • Zhang, H.
  • Liu, Z.
  • Hyde, D.
J Appl Clin Med Phys 2024 Journal Article, cited 0 times
Website
PURPOSE: Deep learning-based auto-segmentation algorithms can improve clinical workflow by defining accurate regions of interest while reducing manual labor. Over the past decade, convolutional neural networks (CNNs) have become prominent in medical image segmentation applications. However, CNNs have limitations in learning long-range spatial dependencies due to the locality of the convolutional layers. Transformers were introduced to address this challenge. In transformers with self-attention mechanism, even the first layer of information processing makes connections between distant image locations. Our paper presents a novel framework that bridges these two unique techniques, CNNs and transformers, to segment the gross tumor volume (GTV) accurately and efficiently in computed tomography (CT) images of non-small cell lung cancer (NSCLC) patients. METHODS: Under this framework, input of multiple resolution images was used with multi-depth backbones to retain the benefits of high-resolution and low-resolution images in the deep learning architecture. Furthermore, a deformable transformer was utilized to learn the long-range dependency on the extracted features. To reduce computational complexity and to efficiently process multi-scale, multi-depth, high-resolution 3D images, this transformer pays attention to small key positions, which were identified by a self-attention mechanism. We evaluated the performance of the proposed framework on a NSCLC dataset which contains 563 training images and 113 test images. Our novel deep learning algorithm was benchmarked against five other similar deep learning models. RESULTS: The experimental results indicate that our proposed framework outperforms other CNN-based, transformer-based, and hybrid methods in terms of Dice score (0.92) and Hausdorff Distance (1.33). Therefore, our proposed model could potentially improve the efficiency of auto-segmentation of early-stage NSCLC during the clinical workflow. This type of framework may potentially facilitate online adaptive radiotherapy, where an efficient auto-segmentation workflow is required. CONCLUSIONS: Our deep learning framework, based on CNN and transformer, performs auto-segmentation efficiently and could potentially assist clinical radiotherapy workflow.

Unified deep learning models for enhanced lung cancer prediction with ResNet-50-101 and EfficientNet-B3 using DICOM images

  • Kumar, V.
  • Prabha, C.
  • Sharma, P.
  • Mittal, N.
  • Askar, S. S.
  • Abouhawwash, M.
BMC Med Imaging 2024 Journal Article, cited 0 times
Website
Significant advancements in machine learning algorithms have the potential to aid in the early detection and prevention of cancer, a devastating disease. However, traditional research methods face obstacles, and the amount of cancer-related information is rapidly expanding. The authors have developed a helpful support system using three distinct deep-learning models, ResNet-50, EfficientNet-B3, and ResNet-101, along with transfer learning, to predict lung cancer, thereby contributing to health and reducing the mortality rate associated with this condition. This work aims to address the issue effectively. Using a dataset of 1,000 DICOM lung cancer images from the LIDC-IDRI repository, each image is classified into four different categories. Although deep learning is still making progress in its ability to analyze and understand cancer data, this research marks a significant step forward in the fight against cancer, promoting better health outcomes and potentially lowering the mortality rate. The Fusion Model, like all other models, achieved 100% precision in classifying Squamous Cells. The Fusion Model and ResNet-50 achieved a precision of 90%, closely followed by EfficientNet-B3 and ResNet-101 with slightly lower precision. To prevent overfitting and improve data collection and planning, the authors implemented a data extension strategy. The relationship between acquiring knowledge and reaching specific scores was also connected to advancing and addressing the issue of imprecise accuracy, ultimately contributing to advancements in health and a reduction in the mortality rate associated with lung cancer.

An Enhanced Convolutional Neural Architecture with Residual Module for MRI Brain Image Classification System

  • Kumar, S Mohan
  • Yadav, K.P.
Turkish Journal of Physiotherapy and Rehabilitation 2021 Journal Article, cited 0 times
Website
Deep Neural Networks (DNN) have played an important role in image and signal processing analysis, with the ability to abstract features very deeply. In the field of medical image processing, DNNs have provided a recognition method for classifying abnormality in medical images. In this paper, a DNN-based Magnetic Resonance Imaging (MRI) brain image classification system with a modified residual module, named Pyramid Design of Residual (PDR), is developed. The conventional residual module is arranged in a pyramid-like architecture. The MRI image classification tests performed on the REpository of Molecular BRAin Neoplasia DaTa (REMBRANDT) database demonstrated that the DNN-PDR system can improve accuracy. The classification test results also show notable improvement in terms of accuracy (99.5%), specificity (100%) and sensitivity (99%). A comparison between the DNN-PDR system and existing systems is also given.

A principal component fusion-based thresholded bin-stretching for CT image enhancement

  • Kumar, Sonu
  • Bhandari, Ashish Kumar
Signal, Image and Video Processing 2023 Journal Article, cited 0 times
Computed tomography (CT) images play an important role in the medical field to diagnose unhealthy organs, the structure of the inner body, and other diseases. The acquisition of CT images is a challenging task because a sufficient amount of electromagnetic wave is required to capture better contrast images, but for some unavoidable reasons, CT machines capture degraded images with low contrast, dark regions, or noise. Enhancement of CT images is therefore required to visualize the internal body structure. For enhancing a degraded CT image, a novel enhancement technique is proposed on the basis of multilevel thresholding (MLT)-based bin-stretching with power law transform (PLT). Initially, the distorted CT image is processed using an MLT-based bin-stretching approach to improve the contrast of the image. After that, a median filter is applied to the processed image to eliminate impulse noise. Next, adaptive PLT is applied to the filtered image to improve its overall contrast. Finally, the contrast-improved image and the image processed by histogram equalization are fused using the principal component analysis method to control the over-enhanced portion of the image introduced by PLT. The enhanced image is obtained in the form of this fused image. The qualitative and quantitative parameters are much better than those of other recently introduced enhancement methods.
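
The power-law transform step alone is straightforward; a minimal sketch with an assumed gamma value (the bin-stretching and PCA fusion stages are not reproduced here):

```python
# Sketch of the power-law (gamma) transform: s = r**gamma on an image
# normalised to [0, 1]. The gamma value is an illustrative assumption.
import numpy as np

def power_law_transform(image: np.ndarray, gamma: float = 0.8) -> np.ndarray:
    r = (image - image.min()) / max(np.ptp(image), 1e-8)  # normalise to [0, 1]
    return np.power(r, gamma)                             # gamma < 1 brightens
```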

Recasted nonlinear complex diffusion method for removal of Rician noise from breast MRI images

  • Kumar, Pradeep
  • Srivastava, Subodh
  • Padma Sai, Y.
The Journal of Defense Modeling and Simulation: Applications, Methodology, Technology 2021 Journal Article, cited 0 times
Website
The evolution of magnetic resonance imaging (MRI) has enabled the study of the internal anatomy of the breast. It maps physical features along with functional characteristics of selected regions. However, its mapping accuracy is affected by the presence of Rician noise, which limits qualitative and quantitative measures of the breast image. This paper proposes a recasted nonlinear complex diffusion filter for sharpening details and removing Rician noise. It follows maximum likelihood estimation along with optimal parameter selection of complex diffusion, where the overall functionality is balanced by regularization parameters. To recast the nonlinear complex diffusion, the edge threshold constraint "k" of the diffusion coefficient is reformulated: it is replaced by the standard deviation of the image. This offers a wide threshold range reflecting the variability present in the image with respect to edges, and provides an automatic selection of "k" instead of a user-supplied value. A series of evaluations has been conducted with respect to different noise ratios and further quality improvement of MRI. The qualitative and quantitative assessments of the evaluations are tested on the Reference Image Database to Evaluate Therapy Response (RIDER) Breast database. The proposed method is also compared with other existing methods. The quantitative assessment includes full-reference image, human visual system, and no-reference image parameters. It is observed that the proposed method is capable of preserving edges, sharpening details, and removing Rician noise.

Computer-Aided Diagnosis of Life-Threatening Diseases

  • Kumar, Pramod
  • Ambekar, Sameer
  • Roy, Subarna
  • Kunchur, Pavan
2019 Book Section, cited 0 times
According to WHO, the incidence of life-threatening diseases like cancer, diabetes, and Alzheimer’s disease is escalating globally. In the past few decades, traditional methods have been used to diagnose such diseases. These traditional methods often have limitations such as lack of accuracy, expense, and time-consuming procedures. Computer-aided diagnosis (CAD) aims to overcome these limitations by personalizing healthcare issues. Machine learning is a promising CAD method, offering effective solutions for these diseases. It is being used for early detection of cancer, diabetic retinopathy, as well as Alzheimer’s disease, and also to identify diseases in plants. Machine learning can increase efficiency, making the process more cost effective, with quicker delivery of results. There are several CAD algorithms (ANN, SVM, etc.) that can be used to train the disease dataset, and eventually make significant predictions. It has also been proven that CAD algorithms have potential to diagnose and early detection of life-threatening diseases.

Lung Nodule Classification Using Deep Features in CT Images

  • Kumar, Devinder
  • Wong, Alexander
  • Clausi, David A
2015 Conference Proceedings, cited 114 times
Website
Early detection of lung cancer can help achieve a sharp decrease in the lung cancer mortality rate, which accounts for more than 17% of all cancer-related deaths. A large number of cases are encountered by radiologists on a daily basis for initial diagnosis. Computer-aided diagnosis (CAD) systems can assist radiologists by offering a "second opinion" and making the whole process faster. We propose a CAD system which uses deep features extracted from an autoencoder to classify lung nodules as either malignant or benign. We use 4303 instances containing 4323 nodules from the National Cancer Institute (NCI) Lung Image Database Consortium (LIDC) dataset to obtain an overall accuracy of 75.01% with a sensitivity of 83.35% and a false positive rate of 0.39/patient over a 10-fold cross-validation.

Medical image segmentation using modified fuzzy c mean based clustering

  • Kumar, Dharmendra
  • Solanki, Anil Kumar
  • Ahlawat, Anil
  • Malhotra, Sukhnandan
2020 Conference Proceedings, cited 0 times
Website
Locating the diseased area in medical images is one of the most challenging tasks in the field of image segmentation. This paper presents a new approach to image segmentation using modified fuzzy c-means (MFCM) clustering. Considering low-illumination medical images, the input image is first enhanced using the histogram equalization (HE) technique. The enhanced image is then segmented into various regions using the MFCM-based approach. Local information is employed in the objective function of the MFCM to overcome the issue of noise sensitivity. After that, membership partitioning is improved by using fast membership filtering. The observed results of the proposed scheme are found suitable in terms of various evaluation parameters in experimentation.
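
As a rough illustration of the two stages, histogram equalization followed by fuzzy c-means, the sketch below implements plain FCM (the paper's modified objective with local information and membership filtering is not reproduced); the cluster count and fuzzifier are assumptions:

```python
# Sketch: histogram equalization then plain fuzzy c-means on pixel intensities.
import numpy as np
from skimage import exposure

def fcm(pixels, n_clusters=3, m=2.0, n_iter=50, seed=0):
    """Plain fuzzy c-means on a 1-D array of pixel intensities."""
    rng = np.random.default_rng(seed)
    u = rng.random((pixels.size, n_clusters))
    u /= u.sum(axis=1, keepdims=True)              # random fuzzy memberships
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ pixels) / um.sum(axis=0)           # weighted means
        d = np.abs(pixels[:, None] - centers[None, :]) + 1e-8
        u = 1.0 / d ** (2.0 / (m - 1.0))           # inverse-distance update
        u /= u.sum(axis=1, keepdims=True)
    return u, centers

image = np.random.default_rng(1).random((64, 64))  # stand-in for an MR slice
eq = exposure.equalize_hist(image)                 # HE pre-enhancement
u, centers = fcm(eq.ravel())
segmentation = u.argmax(axis=1).reshape(image.shape)  # defuzzified label map
```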

Empirical evaluation of filter pruning methods for acceleration of convolutional neural network

  • Kumar, Dheeraj
  • Mehta, Mayuri A.
  • Joshi, Vivek C.
  • Oza, Rachana S.
  • Kotecha, Ketan
  • Lin, Jerry Chun-Wei
Multimedia Tools and Applications 2023 Journal Article, cited 0 times
Training and inference of deep convolutional neural networks are usually slow due to the depth of the network and the number of parameters in the network. Although high-performance processors usually accelerate the training of these networks, their use on resource-constrained devices is still limited. Several compression-based acceleration methods have been presented to optimize the performance of neural networks. However, their use and adaptation are still limited due to their adverse effects on the network structure. Therefore, different filter pruning methods have been proposed to keep the network structure intact. To better solve the above limitations, we first propose a detailed classification of model acceleration method to explain the different ways of enhancing the inference performance of the convolutional neural network. Second, we present a broad classification of filter pruning methods including the comparison of these methods. Third, we present an empirical evaluation of four filter pruning methods to understand the effects of filter pruning on model accuracy and parameter reduction. Fourth, we perform several experiments with ResNet20, a pre-trained CNN, and with the proposed custom CNN to show the effect of filter pruning on them. ResNet20 is used to address the multiclass classification using CIFAR 10 dataset and custom CNN is used to address the binary classification using Leukaemia image classification dataset that includes low-information medical images. The experimental results show that among the four filter pruning methods, the soft filter pruning method best preserves the accuracy of the original model for both ResNet20 and the custom CNN. In addition, the sampling-based filter pruning method shows the highest reduction of 99.8% in parameters on custom CNN. The overall results show a reasonable pruning ratio within five training epochs for both the pre-trained CNN and custom CNN. In addition, our results show that pruning redundant filters significantly reduces the model size, and number of floating point operations.
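
Of the pruning family evaluated here, norm-based filter ranking is the simplest to illustrate; the sketch below scores a Conv2d layer's filters by L1 norm and keeps the strongest fraction (the keep ratio is an arbitrary assumption, and this is not a reproduction of the paper's experiments):

```python
# Sketch of L1-norm filter pruning: rank a conv layer's output filters
# by the L1 norm of their weights and keep the top fraction.
import torch
import torch.nn as nn

def l1_filter_scores(conv: nn.Conv2d) -> torch.Tensor:
    """One L1 norm per output filter of a Conv2d layer."""
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))

def filters_to_keep(conv: nn.Conv2d, keep_ratio: float = 0.7) -> torch.Tensor:
    scores = l1_filter_scores(conv)
    k = max(1, int(keep_ratio * scores.numel()))
    return torch.topk(scores, k).indices        # indices of surviving filters

conv = nn.Conv2d(16, 32, kernel_size=3)
print(filters_to_keep(conv, keep_ratio=0.5).shape)  # -> torch.Size([16])
```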

Discovery radiomics for pathologically-proven computed tomography lung cancer prediction

  • Kumar, Devinder
  • Chung, Audrey G
  • Shaifee, Mohammad J
  • Khalvati, Farzad
  • Haider, Masoom A
  • Wong, Alexander
2017 Conference Proceedings, cited 30 times
Website
Lung cancer is the leading cause for cancer related deaths. As such, there is an urgent need for a streamlined process that can allow radiologists to provide diagnosis with greater efficiency and accuracy. A powerful tool to do this is radiomics: a high-dimension imaging feature set. In this study, we take the idea of radiomics one step further by introducing the concept of discovery radiomics for lung cancer prediction using CT imaging data. In this study, we realize these custom radiomic sequencers as deep convolutional sequencers using a deep convolutional neural network learning architecture. To illustrate the prognostic power and effectiveness of the radiomic sequences produced by the discovered sequencer, we perform cancer prediction between malignant and benign lesions from 97 patients using the pathologically-proven diagnostic data from the LIDC-IDRI dataset. Using the clinically provided pathologically-proven data as ground truth, the proposed framework provided an average accuracy of 77.52% via 10-fold cross-validation with a sensitivity of 79.06% and specificity of 76.11%, surpassing the state-of-the art method.

A visual analytics approach using the exploration of multidimensional feature spaces for content-based medical image retrieval

  • Kumar, Ashnil
  • Nette, Falk
  • Klein, Karsten
  • Fulham, Michael
  • Kim, Jinman
IEEE Journal of Biomedical and Health Informatics 2014 Journal Article, cited 27 times
Website

Content-Based Medical Image Retrieval: A Survey of Applications to Multidimensional and Multimodality Data

  • Kumar, Ashnil
  • Kim, Jinman
  • Cai, Weidong
  • Fulham, Michael
  • Feng, Dagan
2013 Journal Article, cited 109 times
Website
Medical imaging is fundamental to modern healthcare, and its widespread use has resulted in the creation of image databases, as well as picture archiving and communication systems. These repositories now contain images from a diverse range of modalities, multidimensional (three-dimensional or time-varying) images, as well as co-aligned multimodality images. These image collections offer the opportunity for evidence-based diagnosis, teaching, and research; for these applications, there is a requirement for appropriate methods to search the collections for images that have characteristics similar to the case(s) of interest. Content-based image retrieval (CBIR) is an image search technique that complements the conventional text-based retrieval of images by using visual features, such as color, texture, and shape, as search criteria. Medical CBIR is an established field of study that is beginning to realize promise when applied to multidimensional and multimodality medical data. In this paper, we present a review of state-of-the-art medical CBIR approaches in five main categories: two-dimensional image retrieval, retrieval of images with three or more dimensions, the use of nonimage data to enhance the retrieval, multimodality image retrieval, and retrieval from diverse datasets. We use these categories as a framework for discussing the state of the art, focusing on the characteristics and modalities of the information used during medical image retrieval.

Analysis of CT DICOM Image Segmentation for Abnormality Detection

  • Kulkarni, Rashmi
  • Bhavani, K.
International Journal of Engineering and Manufacturing 2019 Journal Article, cited 0 times
Website
Cancer is a menacing disease, and considerable care is required in its diagnosis. CT is the modality most commonly used in cancer therapy. Image processing techniques [1] can help doctors diagnose more easily and accurately; image pre-processing [2] and segmentation methods [3] are used to extract cancerous nodules from CT images. Much research has addressed the segmentation of CT images with different algorithms, but none has reached 100% accuracy. This work proposes a model for analysing CT image segmentation with and without filtered images, and brings out the importance of pre-processing CT images.

Comparing the performance of a deep learning-based lung gross tumour volume segmentation algorithm before and after transfer learning in a new hospital

  • Kulkarni, Chaitanya
  • Sherkhane, Umesh
  • Jaiswar, Vinay
  • Mithun, Sneha
  • Mysore Siddu, Dinesh
  • Rangarajan, Venkatesh
  • Dekker, Andre
  • Traverso, Alberto
  • Jha, Ashish
  • Wee, Leonard
2024 Journal Article, cited 0 times
Website
Objectives: Radiation therapy for lung cancer requires a gross tumour volume (GTV) to be carefully outlined by a skilled radiation oncologist (RO) to accurately pinpoint high radiation dose to a malignant mass while simultaneously minimizing radiation damage to adjacent normal tissues. This is manually intensive and tedious; however, it is feasible to train a deep learning (DL) neural network that could assist ROs to delineate the GTV. Yet DL trained on large openly accessible data sets might not perform well when applied to a superficially similar task in a different clinical setting. In this work, we tested the performance of a DL automatic lung GTV segmentation model trained on open-access Dutch data when used on Indian patients from a large public tertiary hospital, and hypothesized that generic DL performance could be improved for a specific local clinical context by means of modest transfer learning on a small representative local subset. Methods: X-ray computed tomography (CT) series in a public data set called "NSCLC-Radiomics" from The Cancer Imaging Archive was first used to train a DL-based lung GTV segmentation model (Model 1). Its performance was assessed using a different open-access data set ("Interobserver1") of Dutch subjects plus a private Indian data set from a local tertiary hospital ("Test Set 2"). Another Indian data set ("Retrain Set 1") was used to fine-tune the former DL model using a transfer learning method. The Indian data sets were taken from the CT of a hybrid scanner based in nuclear medicine, but the GTV was drawn by skilled Indian ROs. The final (fine-tuned) model (Model 2) was then re-evaluated on "Interobserver1" and "Test Set 2." Dice similarity coefficient (DSC), precision, and recall were used as geometric segmentation performance metrics. Results: Model 1, trained exclusively on Dutch scans, showed a significant fall in performance when tested on "Test Set 2." However, the DSC of Model 2 recovered by 14 percentage points when evaluated on the same test set. Precision and recall showed a similar rebound of performance after transfer learning, in spite of the comparatively small sample size. The performance of both models, before and after the fine-tuning, did not change significantly on "Interobserver1." Conclusions: A large public open-access data set was used to train a generic DL model for lung GTV segmentation, but this did not perform well initially in the Indian clinical context. Using transfer learning methods, it was feasible to efficiently and easily fine-tune the generic model using only a small number of local examples from the Indian hospital. This led to a recovery of some of the geometric segmentation performance, but the tuning did not appear to affect the performance of the model on another open-access data set. Advances in knowledge: Caution is needed when using models trained on large volumes of international data in a local clinical setting, even when that training data set is of good quality. Minor differences in scan acquisition and clinician delineation preferences may result in an apparent drop in performance. However, DL models have the advantage of being efficiently "adapted" from a generic to a locally specific context with only a small amount of fine-tuning by means of transfer learning on a small local institutional data set.
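
The paper's segmentation network itself is not reproduced here, but the freeze-then-fine-tune pattern it describes is generic. A minimal sketch in PyTorch, using a torchvision ResNet-18 purely as a stand-in backbone (the two-class head and all hyperparameters are assumptions):

    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from weights learned on a large generic dataset (stand-in for Model 1)
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in model.parameters():
        p.requires_grad = False                    # freeze the generic feature extractor
    model.fc = nn.Linear(model.fc.in_features, 2)  # new trainable task-specific head

    # Fine-tune only the trainable parameters on the small local dataset
    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4)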

A Deep Learning-Aided Automated Method for Calculating Metabolic Tumor Volume in Diffuse Large B-Cell Lymphoma

  • Kuker, Russ A.
  • Lehmkuhl, David
  • Kwon, Deukwoo
  • Zhao, Weizhao
  • Lossos, Izidore S.
  • Moskowitz, Craig H.
  • Alderuccio, Juan Pablo
  • Yang, Fei
Cancers 2022 Journal Article, cited 0 times
Website
Metabolic tumor volume (MTV) is a robust prognostic biomarker in diffuse large B-cell lymphoma (DLBCL). The available semiautomatic software for calculating MTV requires manual input limiting its routine application in clinical research. Our objective was to develop a fully automated method (AM) for calculating MTV and to validate the method by comparing its results with those from two nuclear medicine (NM) readers. The automated method designed for this study employed a deep convolutional neural network to segment normal physiologic structures from the computed tomography (CT) scans that demonstrate intense avidity on positron emission tomography (PET) scans. The study cohort consisted of 100 patients with newly diagnosed DLBCL who were randomly selected from the Alliance/CALGB 50303 (NCT00118209) trial. We observed high concordance in MTV calculations between the AM and readers with Pearson’s correlation coefficients and interclass correlations comparing reader 1 to AM of 0.9814 (p < 0.0001) and 0.98 (p < 0.001; 95%CI = 0.96 to 0.99), respectively; and comparing reader 2 to AM of 0.9818 (p < 0.0001) and 0.98 (p < 0.0001; 95%CI = 0.96 to 0.99), respectively. The Bland–Altman plots showed only relatively small systematic errors between the proposed method and readers for both MTV and maximum standardized uptake value (SUVmax). This approach may possess the potential to integrate PET-based biomarkers in clinical trials.
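
The agreement statistics reported above are straightforward to reproduce. A minimal sketch with synthetic stand-in MTV values (the arrays are assumptions, not study data):

    import numpy as np
    from scipy.stats import pearsonr

    def bland_altman(a, b):
        """Bias and 95% limits of agreement between two measurement methods."""
        diff = np.asarray(a, float) - np.asarray(b, float)
        bias, sd = diff.mean(), diff.std(ddof=1)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    rng = np.random.default_rng(0)
    reader_mtv = rng.uniform(50, 500, 100)               # hypothetical reader MTVs (mL)
    automated_mtv = reader_mtv + rng.normal(0, 15, 100)  # hypothetical AM MTVs

    r, p = pearsonr(reader_mtv, automated_mtv)
    print(r, bland_altman(reader_mtv, automated_mtv))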

Automated Koos Classification of Vestibular Schwannoma

  • Kujawa, Aaron
  • Dorent, Reuben
  • Connor, Steve
  • Oviedova, Anna
  • Okasha, Mohamed
  • Grishchuk, Diana
  • Ourselin, Sebastien
  • Paddick, Ian
  • Kitchen, Neil
  • Vercauteren, Tom
  • Shapey, Jonathan
Frontiers in Radiology 2022 Journal Article, cited 0 times
Website
Objective: The Koos grading scale is a frequently used classification system for vestibular schwannoma (VS) that accounts for extrameatal tumor dimension and compression of the brain stem. We propose an artificial intelligence (AI) pipeline to fully automate the segmentation and Koos classification of VS from MRI to improve clinical workflow and facilitate patient management. Methods: We propose a method for Koos classification that does not only rely on available images but also on automatically generated segmentations. Artificial neural networks were trained and tested based on manual tumor segmentations and ground truth Koos grades of contrast-enhanced T1-weighted (ceT1) and high-resolution T2-weighted (hrT2) MR images from subjects with a single sporadic VS, acquired on a single scanner and with a standardized protocol. The first stage of the pipeline comprises a convolutional neural network (CNN) which can segment the VS and 7 adjacent structures. For the second stage, we propose two complementary approaches that are combined in an ensemble. The first approach applies a second CNN to the segmentation output to predict the Koos grade, the other approach extracts handcrafted features which are passed to a Random Forest classifier. The pipeline results were compared to those achieved by two neurosurgeons. Results: Eligible patients (n = 308) were pseudo-randomly split into 5 groups to evaluate the model performance with 5-fold cross-validation. The weighted macro-averaged mean absolute error (MA-MAE), weighted macro-averaged F1 score (F1), and accuracy score of the ensemble model were assessed on the testing sets as follows: MA-MAE = 0.11 ± 0.05, F1 = 89.3 ± 3.0%, accuracy = 89.3 ± 2.9%, which was comparable to the average performance of two neurosurgeons: MA-MAE = 0.11 ± 0.08, F1 = 89.1 ± 5.2, accuracy = 88.6 ± 5.8%. Inter-rater reliability was assessed by calculating Fleiss' generalized kappa (k = 0.68) based on all 308 cases, and intra-rater reliabilities of annotator 1 (k = 0.95) and annotator 2 (k = 0.82) were calculated according to the weighted kappa metric with quadratic (Fleiss-Cohen) weights based on 15 randomly selected cases. Conclusions: We developed the first AI framework to automatically classify VS according to the Koos scale. The excellent results show that the accuracy of the framework is comparable to that of neurosurgeons and may therefore facilitate management of patients with VS. The models, code, and ground truth Koos grades for a subset of publicly available images (n = 188) will be released upon publication.

Three-dimensional lung nodule segmentation and shape variance analysis to detect lung cancer with reduced false positives

  • Krishnamurthy, Senthilkumar
  • Narasimhan, Ganesh
  • Rengasamy, Umamaheswari
Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine 2016 Journal Article, cited 17 times
Website
A three-dimensional analysis of lung computed tomography scans was carried out in this study to detect malignant lung nodules. The automatic three-dimensional segmentation algorithm proposed here efficiently segmented the tissue clusters (nodules) inside the lung. However, the automatic morphological region-growing segmentation algorithm implemented to segment the well-circumscribed nodules inside the lung did not segment juxta-pleural nodules present on the inner surface of the lung wall. A novel edge-bridge-and-fill technique is proposed in this article to segment the juxta-pleural and pleural-tail nodules accurately. The centroid shift of each candidate nodule was computed, and nodules with a large centroid shift across consecutive slices were eliminated, since a malignant nodule's position does not usually deviate. Three-dimensional shape variation and edge sharpness analyses were performed to reduce false positives and classify the malignant nodules: the change in area and equivalent diameter across consecutive slices was larger for malignant nodules, and malignant nodules showed a sharp edge. Segmentation followed by three-dimensional centroid, shape and edge analysis was carried out on a lung computed tomography database of 20 patients with 25 malignant nodules. The algorithms proposed in this article precisely detected 22 malignant nodules and failed to detect 3, for a sensitivity of 88%. Furthermore, the algorithm correctly eliminated 216 tissue clusters that were initially segmented as nodules; however, 41 non-malignant tissue clusters were detected as malignant nodules. Therefore, the false-positive rate of this algorithm was 2.05 per patient.
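
The centroid-shift test lends itself to a compact implementation. A minimal sketch, assuming a candidate nodule is given as a binary mask stack over consecutive slices (the toy blob below is illustrative):

    import numpy as np
    from scipy.ndimage import center_of_mass

    def max_centroid_shift(masks):
        """Largest in-plane centroid displacement (pixels) between consecutive slices."""
        centroids = np.array([center_of_mass(sl) for sl in masks if sl.any()])
        return float(np.linalg.norm(np.diff(centroids, axis=0), axis=1).max())

    masks = np.zeros((3, 32, 32), dtype=bool)    # toy 3-slice candidate
    for z in range(3):
        masks[z, 10 + z:14 + z, 10:14] = True    # blob drifting one pixel per slice
    print(max_centroid_shift(masks))             # 1.0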

A Level Set Evolution Morphology-Based Segmentation of Lung Nodules and False Nodule Elimination by 3D Centroid Shift and Frequency Domain DC Constant Analysis

  • Krishnamurthy, Senthilkumar
  • Narasimhan, Ganesh
  • Rengasamy, Umamaheswari
International Journal of u- and e- Service, Science and Technology 2016 Journal Article, cited 0 times
Website
A Level Set Evolution with Morphology (LSEM) based segmentation algorithm is proposed in this work to segment all possible lung nodules from a series of CT scan images. Not all segmented nodule candidates were cancerous in nature; initially, vessels and calcifications were also segmented as nodule candidates. A structural feature analysis was carried out to remove the vessels. Nodules with a large centroid shift across consecutive slices were eliminated, since a malignant nodule's position does not usually deviate. The calcifications were eliminated by frequency-domain analysis: the DC constant of each nodule candidate was computed in the frequency domain, and candidates with a high DC constant value are likely calcifications, as calcification patterns are homogeneous in nature. This algorithm was applied to a database of 40 patient cases with 58 malignant nodules. The algorithms proposed in this paper precisely detected 55 malignant nodules and failed to detect 3, for a sensitivity of 95%. Further, the algorithm correctly eliminated 778 tissue clusters that were initially segmented as nodules; however, 79 non-malignant tissue clusters were detected as malignant nodules. Therefore, the false-positive rate of this algorithm was 1.98 per patient.
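
The DC constant is simply the magnitude of the zero-frequency term of the 2D DFT, which equals the sum of pixel intensities, so a bright, homogeneous calcification scores high. A minimal sketch with illustrative patches:

    import numpy as np

    def dc_constant(patch):
        """|F(0, 0)| of the 2D DFT, i.e. the sum of the pixel intensities."""
        return float(np.abs(np.fft.fft2(patch)[0, 0]))

    calcification = np.full((9, 9), 200.0)                  # bright, homogeneous
    nodule = np.random.default_rng(0).uniform(0, 80, (9, 9))
    print(dc_constant(calcification), dc_constant(nodule))  # high vs. lower DC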

Performance Analysis of Denoising in MR Images with Double Density Dual Tree Complex Wavelets, Curvelets and Nonsubsampled Contourlet Transforms

  • Krishnakumar, V
  • Parthiban, Latha
2014 Journal Article, cited 0 times
Digital images are used extensively by medical doctors during different stages of disease diagnosis and treatment. In the medical field, noise enters an image during two phases: acquisition and transmission. During acquisition, noise is induced by manufacturing defects, improper functioning of internal components, minute component failures, and manual handling errors of electronic scanning devices such as PET/SPECT and MRI/CT scanners. Healthcare organizations are now beginning to consider cloud computing solutions for managing and sharing huge volumes of medical data, which leads to the transmission of different types of medical data, including CT and MR images, patient details and much more, over the internet. Due to noise in the transmission channel, unwanted signals are added to the transmitted medical data; image denoising algorithms are employed to reduce these unwanted modifications of the pixels in an image. In this paper, the performance of denoising methods based on two-dimensional transformations, nonsubsampled contourlets (NSCT), curvelets, and double density dual tree complex wavelets (DD-DTCWT), is compared and analysed using image quality measures such as peak signal-to-noise ratio (PSNR), root mean square error, and the structural similarity index. A set of 200 MR images of the brain (3T MRI scans), heart, and breast was selected for testing the noise reduction techniques with the above transformations. The results show that NSCT gives good PSNR values for random and impulse noise, DD-DTCWT has good noise-suppressing capability for speckle and Rician noise, and both NSCT and DD-DTCWT cope well with images affected by Poisson noise. The best PSNR values obtained for salt-and-pepper and additive white Gaussian noise were 21.29 and 56.45 respectively. For speckle noise, DD-DTCWT gives 33.46, better than NSCT and curvelets, while 33.50 and 33.56 are the top PSNRs of NSCT and DD-DTCWT for Poisson noise.
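
PSNR, the main figure of merit quoted above, can be computed directly from the mean squared error. A minimal sketch, assuming 8-bit-range images and a synthetic noisy stand-in:

    import numpy as np

    def psnr(reference, test, max_val=255.0):
        """Peak signal-to-noise ratio in dB between a clean and a denoised image."""
        mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
        return np.inf if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

    rng = np.random.default_rng(0)
    clean = rng.uniform(0, 255, (64, 64))
    noisy = clean + rng.normal(0, 10, clean.shape)   # stand-in for a noisy MR slice
    print(psnr(clean, noisy))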

Generation of hemipelvis surface geometry based on statistical shape modelling and contralateral mirroring

  • Krishna, Praveen
  • Robinson, Dale L.
  • Bucknill, Andrew
  • Lee, Peter Vee Sin
Biomechanics and Modeling in Mechanobiology 2022 Journal Article, cited 0 times
Website
Personalised fracture plates manufactured using 3D printing offer an improved treatment option for unstable pelvic ring fractures that may not be adequately secured using off-the-shelf components. To design fracture plates that secure the bone fragments in their pre-fracture positions, the fractures must be reduced virtually using medical imaging-based reconstructions, a time-consuming process involving segmentation and repositioning of fragments until surface congruency is achieved. This study compared statistical shape models (SSMs) and contralateral mirroring as automated methods to reconstruct the hemipelvis using varying amounts of bone surface geometry. The training set for the geometries was obtained from pelvis CT scans of 33 females. The root-mean-squared error (RMSE) was quantified across the entire surface of the hemipelvis and within specific regions, and deviations of pelvic landmarks were computed from their positions in the intact hemipelvis. The reconstruction of the entire hemipelvis surface based on contralateral mirroring had an RMSE of 1.21 ± 0.29 mm, whereas for SSMs based on the entire hemipelvis surface the RMSE was 1.11 ± 0.29 mm, a difference that was not significant (p = 0.32). Moreover, all hemipelvis reconstructions based on full or partial bone geometries had RMSEs and landmark deviations from contralateral mirroring that were significantly lower (p < 0.05) than, or statistically equivalent to, the SSMs. These results indicate that contralateral mirroring tends to be more accurate than SSMs for reconstructing unilateral pelvic fractures. SSMs may still be a viable method for hemipelvis fracture reconstruction in situations where contralateral geometries are not available, such as bilateral pelvic fractures, or for highly asymmetric pelvic anatomies.

Medical (CT) image generation with style

  • Krishna, Arjun
  • Mueller, Klaus
2019 Conference Proceedings, cited 0 times

Impact of internal target volume definition for pencil beam scanned proton treatment planning in the presence of respiratory motion variability for lung cancer: A proof of concept

  • Krieger, Miriam
  • Giger, Alina
  • Salomir, Rares
  • Bieri, Oliver
  • Celicanin, Zarko
  • Cattin, Philippe C
  • Lomax, Antony J
  • Weber, Damien C
  • Zhang, Ye
Radiotherapy and Oncology 2020 Journal Article, cited 0 times
Website

An Efficient Pipeline for Abdomen Segmentation in CT Images

  • Koyuncu, H.
  • Ceylan, R.
  • Sivri, M.
  • Erdogan, H.
J Digit Imaging 2018 Journal Article, cited 4 times
Website
Computed tomography (CT) scans usually include some disadvantages due to the nature of the imaging procedure, and these handicaps prevent accurate abdomen segmentation. Discontinuous abdomen edges, the bed section of the CT, patient information, closeness between the edges of the abdomen and the CT, poor contrast, and a narrow histogram can be regarded as the most important handicaps in abdominal CT scans. One or more handicaps can arise and prevent technicians from obtaining abdomen images through simple segmentation techniques; in other words, a single scan can include the bed section, a patient's diagnostic information, low-quality abdomen edges, low contrast, and a narrow histogram all at once. These phenomena constitute a challenge, and an efficient pipeline that is unaffected by these handicaps is required. In addition, analyses such as segmentation, feature selection, and classification are only meaningful for a real-time diagnosis system if the abdomen section can be used directly at a specific size. A statistical pipeline is designed in this study that is unaffected by the handicaps mentioned above. Intensity-based approaches, morphological processes, and histogram-based procedures are utilized to design an efficient structure. Performance evaluation is realized in experiments on 58 CT images (16 training, 16 test, and 26 validation) that include the abdomen and one or more disadvantage(s). The first part of the data (16 training images) is used to find the pipeline's optimum parameters, while the second and third parts are utilized to evaluate and confirm the segmentation performance. The segmentation results are presented as the means of six performance metrics. The proposed method achieves remarkable average rates for training/test/validation of 98.95/99.36/99.57% (Jaccard), 99.47/99.67/99.79% (Dice), 100/99.91/99.91% (sensitivity), 98.47/99.23/99.85% (specificity), 99.38/99.63/99.87% (classification accuracy), and 98.98/99.45/99.66% (precision). In summary, a statistical pipeline for abdomen segmentation is achieved that is unaffected by these disadvantages, forming the most detailed abdomen segmentation study for use before organ and tumor segmentation, feature extraction, and classification.
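
All six metrics reported above derive from the binary confusion counts. A minimal sketch, assuming the predicted and reference abdomen masks are boolean arrays of equal shape:

    import numpy as np

    def segmentation_metrics(pred, truth):
        """Jaccard, Dice, sensitivity, specificity, accuracy and precision."""
        pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
        tp = np.sum(pred & truth)
        tn = np.sum(~pred & ~truth)
        fp = np.sum(pred & ~truth)
        fn = np.sum(~pred & truth)
        return {
            "jaccard": tp / (tp + fp + fn),
            "dice": 2 * tp / (2 * tp + fp + fn),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "accuracy": (tp + tn) / (tp + tn + fp + fn),
            "precision": tp / (tp + fp),
        }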

Lupsix: A Cascade Framework for Lung Parenchyma Segmentation in Axial CT Images

  • Koyuncu, Hasan
International Journal of Intelligent Systems and Applications in Engineering 2018 Journal Article, cited 0 times
Website

Usefulness of gradient tree boosting for predicting histological subtype and EGFR mutation status of non-small cell lung cancer on (18)F FDG-PET/CT

  • Koyasu, S.
  • Nishio, M.
  • Isoda, H.
  • Nakamoto, Y.
  • Togashi, K.
Ann Nucl Med 2020 Journal Article, cited 3 times
Website
OBJECTIVE: To develop and evaluate a radiomics approach for classifying histological subtypes and epidermal growth factor receptor (EGFR) mutation status in lung cancer on PET/CT images. METHODS: PET/CT images of lung cancer patients were obtained from public databases and used to establish two datasets, respectively to classify histological subtypes (156 adenocarcinomas and 32 squamous cell carcinomas) and EGFR mutation status (38 mutant and 100 wild-type samples). Seven types of imaging features were obtained from PET/CT images of lung cancer. Two types of machine learning algorithms were used to predict histological subtypes and EGFR mutation status: random forest (RF) and gradient tree boosting (XGB). The classifiers used either a single type or multiple types of imaging features. In the latter case, the optimal combination of the seven types of imaging features was selected by Bayesian optimization. Receiver operating characteristic analysis, area under the curve (AUC), and tenfold cross validation were used to assess the performance of the approach. RESULTS: In the classification of histological subtypes, the AUC values of the various classifiers were as follows: RF, single type: 0.759; XGB, single type: 0.760; RF, multiple types: 0.720; XGB, multiple types: 0.843. In the classification of EGFR mutation status, the AUC values were: RF, single type: 0.625; XGB, single type: 0.617; RF, multiple types: 0.577; XGB, multiple types: 0.659. CONCLUSIONS: The radiomics approach to PET/CT images, together with XGB and Bayesian optimization, is useful for classifying histological subtypes and EGFR mutation status in lung cancer.
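
Note the paper applied Bayesian optimization to choose the combination of feature types rather than classifier hyperparameters; the sketch below only illustrates the same general tune-by-Bayesian-optimization pattern with Optuna and XGBoost on synthetic stand-in data (all parameter ranges are assumptions):

    import optuna
    import xgboost as xgb
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=188, weights=[0.83], random_state=0)

    def objective(trial):
        clf = xgb.XGBClassifier(
            max_depth=trial.suggest_int("max_depth", 2, 8),
            learning_rate=trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
            n_estimators=trial.suggest_int("n_estimators", 50, 400),
            eval_metric="logloss",
        )
        return cross_val_score(clf, X, y, cv=10, scoring="roc_auc").mean()

    study = optuna.create_study(direction="maximize")   # TPE-based Bayesian search
    study.optimize(objective, n_trials=30)
    print(study.best_params)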

The quest for 'diagnostically lossless' medical image compression: a comparative study of objective quality metrics for compressed medical images

  • Kowalik-Urbaniak, Ilona
  • Brunet, Dominique
  • Wang, Jiheng
  • Koff, David
  • Smolarski-Koff, Nadine
  • Vrscay, Edward R
  • Wallace, Bill
  • Wang, Zhou
2014 Conference Proceedings, cited 0 times
Our study, involving a collaboration with radiologists (DK, NSK) as well as a leading international developer of medical imaging software (AGFA), is primarily concerned with improved methods of assessing the diagnostic quality of compressed medical images and the investigation of compression artifacts resulting from JPEG and JPEG2000. In this work, we compare the performances of the structural similarity quality measure (SSIM), MSE/PSNR, the compression ratio CR, and the JPEG quality factor Q, based on data collected in two experiments involving radiologists. An ROC and Kolmogorov-Smirnov analysis indicates that compression ratio is not always a good indicator of visual quality. Moreover, SSIM demonstrates the best performance, i.e., it provides the closest match to the radiologists' assessments. We also show that a weighted Youden index and a curve-fitting method can provide SSIM and MSE thresholds for acceptable compression ratios.
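
Both quality measures compared above are available off the shelf in scikit-image. A minimal usage sketch, with a synthetic stand-in for a compressed image:

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    rng = np.random.default_rng(0)
    original = rng.uniform(0, 255, (128, 128))
    compressed = original + rng.normal(0, 5, original.shape)  # stand-in for JPEG output

    drange = original.max() - original.min()
    print(structural_similarity(original, compressed, data_range=drange))
    print(peak_signal_noise_ratio(original, compressed, data_range=drange))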

Addressing image misalignments in multi-parametric prostate MRI for enhanced computer-aided diagnosis of prostate cancer

  • Kovacs, B.
  • Netzer, N.
  • Baumgartner, M.
  • Schrader, A.
  • Isensee, F.
  • Weisser, C.
  • Wolf, I.
  • Gortz, M.
  • Jaeger, P. F.
  • Schutz, V.
  • Floca, R.
  • Gnirs, R.
  • Stenzinger, A.
  • Hohenfellner, M.
  • Schlemmer, H. P.
  • Bonekamp, D.
  • Maier-Hein, K. H.
2023 Journal Article, cited 0 times
Website
Prostate cancer (PCa) diagnosis on multi-parametric magnetic resonance images (MRI) requires radiologists with a high level of expertise. Misalignments between the MRI sequences can be caused by patient movement, elastic soft-tissue deformations, and imaging artifacts, and they further increase the complexity of the interpretation task facing radiologists. Recently, computer-aided diagnosis (CAD) tools have demonstrated potential for PCa diagnosis, typically relying on complex co-registration of the input modalities. However, there is no consensus among research groups on whether CAD systems profit from using registration, and alternative strategies for handling multi-modal misalignments have not been explored so far. Our study introduces and compares different strategies for coping with image misalignments and evaluates them with respect to their direct effect on the diagnostic accuracy of PCa. In addition to established registration algorithms, we propose 'misalignment augmentation' as a concept to increase CAD robustness. As the results demonstrate, misalignment augmentations can not only compensate for a complete lack of registration but, when used in conjunction with registration, also improve the overall performance on an independent test set.

A large dataset of white blood cells containing cell locations and types, along with segmented nuclei and cytoplasm

  • Kouzehkanan, Z. M.
  • Saghari, S.
  • Tavakoli, S.
  • Rostami, P.
  • Abaszadeh, M.
  • Mirzadeh, F.
  • Satlsar, E. S.
  • Gheidishahran, M.
  • Gorgi, F.
  • Mohammadi, S.
  • Hosseini, R.
2022 Journal Article, cited 0 times
Website
Accurate and early detection of anomalies in peripheral white blood cells plays a crucial role in the evaluation of well-being in individuals and the diagnosis and prognosis of hematologic diseases. For example, some blood disorders and immune system-related diseases are diagnosed by the differential count of white blood cells, one of the most common laboratory tests. Data is one of the most important ingredients in the development and testing of many commercial and successful automatic or semi-automatic systems. To this end, this study introduces a free-access dataset of normal peripheral white blood cells, called Raabin-WBC, containing about 40,000 images of white blood cells and color spots. To ensure the validity of the data, a significant number of cells were labeled by two experts, and the ground truths of the nuclei and cytoplasm were extracted for 1145 selected cells. To provide the necessary diversity, various smears were imaged using two different cameras and two different microscopes. We performed some preliminary deep learning experiments on Raabin-WBC to demonstrate how the generalization power of machine learning methods, especially deep neural networks, can be affected by the aforementioned diversity. As a public dataset in the field of health, Raabin-WBC can be used for model development and testing in different machine learning tasks, including classification, detection, segmentation, and localization.

Detection and Segmentation of Brain Tumors from MRI Using U-Nets

  • Kotowski, Krzysztof
  • Nalepa, Jakub
  • Dudzik, Wojciech
2020 Book Section, cited 0 times
In this paper, we exploit a cascaded U-Net architecture to perform detection and segmentation of brain tumors (low- and high-grade gliomas) from magnetic resonance scans. First, we detect tumors in a binary-classification setting, and they later undergo multi-class segmentation. The total processing time of a single input volume amounts to around 15 s using a single GPU. The preliminary experiments over the BraTS’19 validation set revealed that our approach delivers high-quality tumor delineation and offers instant segmentation.

Detecting liver cirrhosis in computed tomography scans using clinically-inspired and radiomic features

  • Kotowski, K.
  • Kucharski, D.
  • Machura, B.
  • Adamski, S.
  • Gutierrez Becker, B.
  • Krason, A.
  • Zarudzki, L.
  • Tessier, J.
  • Nalepa, J.
Comput Biol Med 2023 Journal Article, cited 1 times
Website
Hepatic cirrhosis is an increasing cause of mortality in developed countries; it is the pathological sequela of chronic liver diseases and the final liver fibrosis stage. Since cirrhosis evolves from the asymptomatic phase, it is of paramount importance to detect it as quickly as possible, because entering the symptomatic phase commonly leads to hospitalization and can be fatal. Understanding the state of the liver based on abdominal computed tomography (CT) scans is tedious, user-dependent and lacks reproducibility. We tackle these issues and propose an end-to-end and reproducible approach for detecting cirrhosis from CT. It benefits from the introduced clinically-inspired features that reflect the patient's characteristics which are often investigated by experienced radiologists during the screening process. Such features are coupled with the radiomic ones extracted from the liver, and from the suggested region of interest which captures the liver's boundary. The rigorous experiments, performed over two heterogeneous clinical datasets (two cohorts of 241 and 32 patients), revealed that extracting radiomic features from the liver's rectified contour is pivotal to enhancing the classification abilities of the supervised learners. Also, capturing clinically-inspired image features significantly improved the performance of such models, and the proposed features were consistently selected as the important ones. Finally, we showed that selecting the most discriminative features leads to Pareto-optimal models with enhanced feature-level interpretability, as the number of features was dramatically reduced (280x) from thousands to tens.

Segmenting Brain Tumors from MRI Using Cascaded 3D U-Nets

  • Kotowski, Krzysztof
  • Adamski, Szymon
  • Malara, Wojciech
  • Machura, Bartosz
  • Zarudzki, Lukasz
  • Nalepa, Jakub
2021 Book Section, cited 0 times
In this paper, we exploit a cascaded 3D U-Net architecture to perform detection and segmentation of brain tumors (low- and high-grade gliomas) from multi-modal magnetic resonance scans. First, we detect tumors in a binary-classification setting, and they later undergo multi-class segmentation. To provide high-quality generalization, we investigate several regularization techniques that help improve the segmentation performance obtained for the unseen scans, and benefit from the expert knowledge of a senior radiologist captured in a form of several post-processing routines. Our preliminary experiments elaborated over the BraTS’20 validation set revealed that our approach delivers high-quality tumor delineation.

The impact of inter-observer variation in delineation on robustness of radiomics features in non-small cell lung cancer

  • Kothari, G.
  • Woon, B.
  • Patrick, C. J.
  • Korte, J.
  • Wee, L.
  • Hanna, G. G.
  • Kron, T.
  • Hardcastle, N.
  • Siva, S.
2022 Journal Article, cited 0 times
Website
Artificial intelligence and radiomics have the potential to revolutionise cancer prognostication and personalised treatment. Manual outlining of the tumour volume for extraction of radiomics features (RF) is a subjective process. This study investigates robustness of RF to inter-observer variation (IOV) in contouring in lung cancer. We utilised two public imaging datasets: 'NSCLC-Radiomics' and 'NSCLC-Radiomics-Interobserver1' ('Interobserver'). For 'NSCLC-Radiomics', we created an additional set of manual contours for 92 patients, and for 'Interobserver', there were five manual and five semi-automated contours available for 20 patients. Dice coefficients (DC) were calculated for contours. 1113 RF were extracted including shape, first order and texture features. Intraclass correlation coefficient (ICC) was computed to assess robustness of RF to IOV. Cox regression analysis for overall survival (OS) was performed with a previously published radiomics signature. The median DC ranged from 0.81 ('NSCLC-Radiomics') to 0.85 ('Interobserver'-semi-automated). The median ICC for the 'NSCLC-Radiomics', 'Interobserver' (manual) and 'Interobserver' (semi-automated) were 0.90, 0.88 and 0.93 respectively. The ICC varied by feature type and was lower for first order and gray level co-occurrence matrix (GLCM) features. Shape features had a lower median ICC in the 'NSCLC-Radiomics' dataset compared to the 'Interobserver' dataset. Survival analysis showed similar separation of curves for three of four RF apart from 'original_shape_Compactness2', a feature with low ICC (0.61). The majority of RF are robust to IOV, with first order, GLCM and shape features being the least robust. Semi-automated contouring improves feature stability. Decreased robustness of a feature is significant as it may impact upon the features' prognostic capability.
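
A one-way random-effects, single-rater ICC of the kind commonly used for feature-robustness screening can be computed from the ANOVA mean squares. A minimal sketch (the 20-subject, 5-observer toy data are assumptions):

    import numpy as np

    def icc_1_1(ratings):
        """ICC(1,1) for an (n_subjects, k_raters) matrix of feature values."""
        x = np.asarray(ratings, float)
        n, k = x.shape
        subj_means = x.mean(axis=1)
        msb = k * np.sum((subj_means - x.mean()) ** 2) / (n - 1)      # between subjects
        msw = np.sum((x - subj_means[:, None]) ** 2) / (n * (k - 1))  # within subjects
        return (msb - msw) / (msb + (k - 1) * msw)

    rng = np.random.default_rng(0)
    truth = rng.uniform(0, 1, 20)                            # one feature, 20 tumours
    ratings = truth[:, None] + rng.normal(0, 0.05, (20, 5))  # 5 observers' contours
    print(icc_1_1(ratings))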

Visual attention condenser model for multiple disease detection from heterogeneous medical image modalities

  • Kotei, Evans
  • Thirunavukarasu, Ramkumar
Multimedia Tools and Applications 2023 Journal Article, cited 0 times
The World Health Organization (WHO) has identified breast cancer and tuberculosis (TB) as major global health issues. While breast cancer is a top killer of women, TB is an infectious disease caused by a single bacterium with a high mortality rate. Since both TB and breast cancer are curable, early screening ensures treatment. Medical imaging modalities, such as chest X-ray radiography and ultrasound, are widely used for diagnosing TB and breast cancer. Artificial intelligence (AI) techniques are applied to supplement the screening process for effective and early treatment due to the global shortage of radiologists and oncologists, fast-tracking the screening process and leading to early detection and treatment. Deep learning (DL) is the most used technique, producing outstanding results. Despite the success of DL models in the automatic detection of TB and breast cancer, the suggested models are task-specific, meaning they are disease-oriented. Moreover, the complexity and weight of DL applications make it difficult to deploy the models on edge devices. Motivated by this, a Multi Disease Visual Attention Condenser Network (MD-VACNet) is proposed for multiple disease identification from different medical image modalities. The network architecture was designed automatically through machine-driven design exploration with generative synthesis. The proposed MD-VACNet is a lightweight stand-alone visual recognition deep neural network based on visual attention condensers with a self-attention mechanism, designed to run on edge devices. In the experiments, TB was identified from chest X-ray images and breast cancer from ultrasound images. The suggested model achieved a 98.99% accuracy score, a 99.85% sensitivity score, and a 98.20% specificity score on the X-ray radiographs for TB diagnosis. The model also produced cutting-edge performance on breast cancer classification into benign and malignant, with accuracy, sensitivity and specificity scores of 98.47%, 98.42%, and 98.31%, respectively. Regarding architectural complexity, MD-VACNet is simple and lightweight enough for edge-device implementation.

Examining the Validity of Input Lung CT Images Submitted to the AI-Based Computerized Diagnosis

  • Kosareva, Aleksandra A.
  • Paulenka, Dzmitry A.
  • Snezhko, Eduard V.
  • Bratchenko, Ivan A.
  • Kovalev, Vassili A.
2022 Journal Article, cited 0 times
Website
A well-designed CAD tool should respond to input requests and user actions and perform input checks. Thus, an important element of such a tool is the pre-processing of incoming data and the screening out of data that cannot be processed by the application. In this paper, we consider non-trivial methods for verifying input chest computed tomography (CT) images: modality checks and human chest checks. We review sources for developing training datasets, describe architectures of convolutional neural networks (CNNs), clarify the pre-processing and augmentation of chest CT scans, and show training results. The developed application showed good results: 100% classification accuracy on the test dataset for the modality check and 89% classification accuracy on the test dataset for checking the presence of lungs. Analysis of wrong predictions showed that the model performs poorly on lung biopsy cases. In general, the developed input-data validation model shows good results on the designed datasets for the CT image modality check and for checking the presence of lungs.

Is sarcopenia a predictor of overall survival in primary IDH-wildtype GBM patients with and without MGMT promoter hypermethylation?

  • Korkmaz, Serhat
  • Demirel, Emin
2023 Journal Article, cited 0 times
Background: In this study, we aimed to examine how well temporal muscle thickness (TMT) and masseter muscle thickness (MMT) predict overall survival (OS) in primary IDH-wildtype glioblastoma (GBM) patients with and without MGMT promoter hypermethylation, using publicly available datasets. Methods: We included 345 primary IDH-wildtype GBM patients with known MGMT promoter hypermethylation status who underwent gross-total resection and standard treatment, whose data were obtained from the open datasets. TMT was evaluated on axial thin-section post-contrast T1-weighted images, and MMT was evaluated on axial T2-weighted images. The median TMT and MMT were used to determine the cut-off points. Results: A median TMT of 9.5 mm and a median MMT of 12.7 mm provided the cut-off values for predicting survival. TMT and MMT values below the median muscle thickness were negatively associated with OS (TMT<9.5: HR 3.63, CI 2.34–4.23, p<0.001; MMT<12.7: HR 3.53, CI 2.27–4.07, p<0.001). When patients were classified according to MGMT status, the association held in both MGMT-negative patients (TMT<9.5: HR 2.54, CI 1.89–3.56, p<0.001; MMT<12.7: HR 2.65, CI 2.07–3.62, p<0.001) and MGMT-positive patients (TMT<9.5: HR 3.84, CI 2.48–4.28, p<0.001; MMT<12.7: HR 3.73, CI 2.98–4.71, p<0.001). Conclusion: Both TMT and MMT successfully predict survival in primary GBM patients, including patients with and without MGMT promoter hypermethylation.

Automated Segmentation of Hyperintense Regions in FLAIR MRI Using Deep Learning

  • Korfiatis, Panagiotis
  • Kline, Timothy L
  • Erickson, Bradley J
Tomography 2016 Journal Article, cited 16 times
Website
We present a deep convolutional neural network application based on autoencoders aimed at segmentation of increased signal regions in fluid-attenuated inversion recovery magnetic resonance imaging images. The convolutional autoencoders were trained on the publicly available Brain Tumor Image Segmentation Benchmark (BRATS) data set, and the accuracy was evaluated on a data set where 3 expert segmentations were available. The simultaneous truth and performance level estimation (STAPLE) algorithm was used to provide the ground truth for comparison, and Dice coefficient, Jaccard coefficient, true positive fraction, and false negative fraction were calculated. The proposed technique was within the interobserver variability with respect to Dice, Jaccard, and true positive fraction. The developed method can be used to produce automatic segmentations of tumor regions corresponding to signal-increased fluid-attenuated inversion recovery regions.

Validation of a convolutional neural network for the automated creation of curved planar reconstruction images along the main pancreatic duct

  • Koretsune, Y.
  • Sone, M.
  • Sugawara, S.
  • Wakatsuki, Y.
  • Ishihara, T.
  • Hattori, C.
  • Fujisawa, Y.
  • Kusumoto, M.
2022 Journal Article, cited 0 times
Website
PURPOSE: To evaluate the accuracy and time-efficiency of newly developed software in automatically creating curved planar reconstruction (CPR) images along the main pancreatic duct (MPD), which was developed based on a 3-dimensional convolutional neural network, and compare them with those of conventional manually generated CPR images. MATERIALS AND METHODS: A total of 100 consecutive patients with MPD dilatation (>/= 3 mm) who underwent contrast-enhanced computed tomography between February 2021 and July 2021 were included in the study. Two radiologists independently performed blinded qualitative analysis of automated and manually created CPR images. They rated overall image quality on a four-point scale, and weighted kappa analysis was employed to compare manually created and automated CPR images. A quantitative analysis of the time required to create CPR images and the total length of the MPD measured from CPR images was performed. RESULTS: The kappa value was 0.796, and a good correlation was found between the manually created and automated CPR images. The average times to create automated and manually created CPR images were 61.7 s and 174.6 s, respectively (P < 0.001). The total MPD lengths on the automated and manually created CPR images were 110.5 and 115.6 mm, respectively (P = 0.059). CONCLUSION: The automated CPR software significantly reduced reconstruction time without compromising image quality.
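
The core of a CPR is resampling the volume along the duct centerline. A minimal sketch that trilinearly samples a straightened intensity profile along an (N, 3) z,y,x centerline; a full CPR would additionally sample rows perpendicular to the local tangent (the toy volume and curve are assumptions):

    import numpy as np
    from scipy.ndimage import map_coordinates

    def sample_along_centerline(volume, centerline):
        """Trilinearly sample a CT volume along an (N, 3) z,y,x centerline."""
        coords = np.asarray(centerline, float).T   # (3, N), as map_coordinates expects
        return map_coordinates(volume, coords, order=1)

    volume = np.random.default_rng(0).normal(size=(40, 256, 256))
    t = np.linspace(0.0, 1.0, 200)
    centerline = np.stack([5 + 30 * t, 40 + 150 * t, 128 + 40 * np.sin(3 * t)], axis=1)
    profile = sample_along_centerline(volume, centerline)   # straightened duct profile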

A Study on the Geometrical Limits and Modern Approaches to External Beam Radiotherapy

  • Kopchick, Benjamin
2020 Thesis, cited 0 times
Website
Radiation therapy is integral to treating cancer and improving survival probability, and improving treatment methods and modalities can significantly impact the quality of life of cancer patients. One such method is stereotactic radiotherapy, a form of External Beam Radiotherapy (EBRT) that delivers a highly conformal dose of radiation to a target from beams arranged at many different angles. The goal of any radiotherapy treatment is to deliver radiation only to the cancerous cells while maximally sparing other tissues; such a perfect treatment outcome is difficult to achieve due to the physical limitations of EBRT. The quality of treatment depends on the characteristics of these beams and the number of angles from which radiation is delivered. However, as technology and techniques have improved, the dependence on beam quality and beam coverage may have become less critical. This thesis investigates different geometric aspects of stereotactic radiotherapy and their impacts on treatment quality. The specific aims are: (1) to explore the treatment outcome of a virtual stereotactic delivery where no geometric limit exists in the sense of physical collisions, allowing the full solid-angle treatment space to be investigated and testing whether a large solid-angle space is necessary to improve treatment; (2) to evaluate the effect of a reduced solid angle with a specific radiotherapy device using real clinical cases; (3) to investigate how the quality of a single beam influences treatment outcome when multiple overlapping beams are in use; and (4) to study the feasibility of using a novel treatment method of lattice radiotherapy with an existing stereotactic device for treating breast cancer. All these aims were investigated with the use of inverse planning optimization and Monte-Carlo-based particle transport simulations.

Deep Machine Learning Histopathological Image Analysis for Renal Cancer Detection

  • Koo, Jia Chun
  • Hum, Yan Chai
  • Lai, Khin Wee
  • Yap, Wun-She
  • Manickam, Swaminathan
  • Tee, Yee Kai
2022 Conference Paper, cited 0 times
Renal cancer is one of the top causes of cancer-related deaths among men globally. Early detection of renal cancer is crucial because it can significantly improve the probability of survival. However, assessing histopathological renal tissue is a labor-intensive job traditionally done manually by a pathologist, leading to a high possibility of misdetection and/or misdiagnosis, especially in the early stages, and it is prone to inter-pathologist variation. The development of automatic histopathological diagnosis of renal cancer can greatly reduce this bias and provide accurate characterization of disease, even though the nature of pathology and microscopy is highly complex. This paper investigated the use of deep learning methods to develop a binary histopathological image classification model (cancer or normal). 783 whole-slide images of renal tissue were processed into patches using the PyHIST tool at 5x magnification before being fed to the deep learning models. Five pre-trained deep learning architectures, namely VGG, ResNet, DenseNet, MobileNet, and EfficientNet, were trained with transfer learning on the CPTAC-CCRCC dataset and their performances were evaluated. EfficientNetB0 achieved state-of-the-art accuracy (97%), specificity (94%), F1-score (98%) and AUC (96%) but slightly inferior recall (98%) compared to the best published results in the literature. These findings show that the proposed deep learning approach can effectively classify histopathological images of renal tissue into tumor and non-tumor classes, making pathology diagnosis more efficient and less labor-intensive.

A Competition, Benchmark, Code, and Data for Using Artificial Intelligence to Detect Lesions in Digital Breast Tomosynthesis

  • Konz, N.
  • Buda, M.
  • Gu, H.
  • Saha, A.
  • Yang, J.
  • Chledowski, J.
  • Park, J.
  • Witowski, J.
  • Geras, K. J.
  • Shoshan, Y.
  • Gilboa-Solomon, F.
  • Khapun, D.
  • Ratner, V.
  • Barkan, E.
  • Ozery-Flato, M.
  • Marti, R.
  • Omigbodun, A.
  • Marasinou, C.
  • Nakhaei, N.
  • Hsu, W.
  • Sahu, P.
  • Hossain, M. B.
  • Lee, J.
  • Santos, C.
  • Przelaskowski, A.
  • Kalpathy-Cramer, J.
  • Bearce, B.
  • Cha, K.
  • Farahani, K.
  • Petrick, N.
  • Hadjiiski, L.
  • Drukker, K.
  • Armato, S. G., 3rd
  • Mazurowski, M. A.
JAMA Netw Open 2023 Journal Article, cited 0 times
Website
IMPORTANCE: An accurate and robust artificial intelligence (AI) algorithm for detecting cancer in digital breast tomosynthesis (DBT) could significantly improve detection accuracy and reduce health care costs worldwide. OBJECTIVES: To make training and evaluation data for the development of AI algorithms for DBT analysis available, to develop well-defined benchmarks, and to create publicly available code for existing methods. DESIGN, SETTING, AND PARTICIPANTS: This diagnostic study is based on a multi-institutional international grand challenge in which research teams developed algorithms to detect lesions in DBT. A data set of 22 032 reconstructed DBT volumes was made available to research teams. Phase 1, in which teams were provided 700 scans from the training set, 120 from the validation set, and 180 from the test set, took place from December 2020 to January 2021, and phase 2, in which teams were given the full data set, took place from May to July 2021. MAIN OUTCOMES AND MEASURES: The overall performance was evaluated by mean sensitivity for biopsied lesions using only DBT volumes with biopsied lesions; ties were broken by including all DBT volumes. RESULTS: A total of 8 teams participated in the challenge. The team with the highest mean sensitivity for biopsied lesions was the NYU B-Team, with 0.957 (95% CI, 0.924-0.984), and the second-place team, ZeDuS, had a mean sensitivity of 0.926 (95% CI, 0.881-0.964). When the results were aggregated, the mean sensitivity for all submitted algorithms was 0.879; for only those who participated in phase 2, it was 0.926. CONCLUSIONS AND RELEVANCE: In this diagnostic study, an international competition produced algorithms with high sensitivity for using AI to detect lesions on DBT images. A standardized performance benchmark for the detection task using publicly available clinical imaging data was released, with detailed descriptions and analyses of submitted algorithms accompanied by a public release of their predictions and code for selected methods. These resources will serve as a foundation for future research on computer-assisted diagnosis methods for DBT, significantly lowering the barrier of entry for new researchers.

Investigating the role of model-based and model-free imaging biomarkers as early predictors of neoadjuvant breast cancer therapy outcome

  • Kontopodis, Eleftherios
  • Venianaki, Maria
  • Manikis, George C
  • Nikiforaki, Katerina
  • Salvetti, Ovidio
  • Papadaki, Efrosini
  • Papadakis, Georgios Z
  • Karantanas, Apostolos H
  • Marias, Kostas
IEEE J Biomed Health Inform 2019 Journal Article, cited 0 times
Website
Imaging biomarkers (IBs) play a critical role in the clinical management of breast cancer (BRCA) patients throughout the cancer continuum for screening, diagnosis and therapy assessment especially in the neoadjuvant setting. However, certain model-based IBs suffer from significant variability due to the complex workflows involved in their computation, whereas model-free IBs have not been properly studied regarding clinical outcome. In the present study, IBs from 35 BRCA patients who received neoadjuvant chemotherapy (NAC) were extracted from dynamic contrast enhanced MR imaging (DCE-MRI) data with two different approaches, a model-free approach based on pattern recognition (PR), and a model-based one using pharmacokinetic compartmental modeling. Our analysis found that both model-free and model-based biomarkers can predict pathological complete response (pCR) after the first cycle of NAC. Overall, 8 biomarkers predicted the treatment response after the first cycle of NAC, with statistical significance (p-value<0.05), and 3 at the baseline. The best pCR predictors at first follow-up, achieving high AUC and sensitivity and specificity more than 50%, were the hypoxic component with threshold2 (AUC 90.4%) from the PR method, and the median value of kep (AUC 73.4%) from the model-based approach. Moreover, the 80th percentile of ve achieved the highest pCR prediction at baseline with AUC 78.5%. The results suggest that model-free DCE-MRI IBs could be a more robust alternative to complex, model-based ones such as kep and favor the hypothesis that the PR image-derived hypoxic image component captures actual tumor hypoxia information able to predict BRCA NAC outcome.
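
The model-based biomarkers kep and ve above come from pharmacokinetic compartment models such as the standard Tofts model, in which tissue concentration is the arterial input convolved with an exponential impulse response. A minimal discrete sketch (the toy input function and parameter values are assumptions):

    import numpy as np

    def tofts_ct(t, cp, ktrans, kep):
        """Standard Tofts model: Ct(t) = Ktrans * (Cp convolved with exp(-kep*t))."""
        dt = t[1] - t[0]
        irf = np.exp(-kep * t)                    # impulse response; kep = Ktrans / ve
        return ktrans * np.convolve(cp, irf)[: len(t)] * dt

    t = np.arange(0.0, 300.0, 1.0)                # seconds
    cp = (t / 60.0) * np.exp(-t / 60.0)           # toy arterial input function
    ct = tofts_ct(t, cp, ktrans=0.2 / 60, kep=0.5 / 60)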

A Quantum-Inspired Self-Supervised Network model for automatic segmentation of brain MR images

  • Konar, Debanjan
  • Bhattacharyya, Siddhartha
  • Gandhi, Tapan Kr
  • Panigrahi, Bijaya Ketan
Applied Soft Computing 2020 Journal Article, cited 1 times
Website
Classical self-supervised neural network architectures suffer from a slow convergence problem, and the incorporation of quantum computing into classical self-supervised networks is a potential solution. In this article, a fully self-supervised novel quantum-inspired neural network model, referred to as the Quantum-Inspired Self-Supervised Network (QIS-Net), is proposed and tailored for fully automatic segmentation of brain MR images, obviating the challenges faced by deeply supervised Convolutional Neural Network (CNN) architectures. The proposed QIS-Net architecture is composed of three layers of quantum neurons (input, intermediate and output) expressed as qubits. The intermediate and output layers of the QIS-Net architecture are inter-linked through bi-directional propagation of quantum states, wherein the image pixel intensities (quantum bits) are self-organized between these two layers without any external supervision or training. Quantum observation allows the true output to be obtained once the superposed quantum states interact with the external environment. The proposed self-supervised quantum-inspired network model has been tailored for and tested on Dynamic Susceptibility Contrast (DSC) brain MR images from Nature data sets for detecting complete tumor, and reports promising accuracy and reasonable Dice similarity scores in comparison with unsupervised Fuzzy C-Means clustering, self-trained QIBDS Net, Opti-QIBDS Net, deeply supervised U-Net and Fully Convolutional Neural Networks (FCNNs).

Feasibility of synthetic computed tomography generated with an adversarial network for multi-sequence magnetic resonance-based brain radiotherapy

  • Koike, Yuhei
  • Akino, Yuichi
  • Sumida, Iori
  • Shiomi, Hiroya
  • Mizuno, Hirokazu
  • Yagi, Masashi
  • Isohashi, Fumiaki
  • Seo, Yuji
  • Suzuki, Osamu
  • Ogawa, Kazuhiko
J Radiat Res 2019 Journal Article, cited 0 times
Website
The aim of this work is to generate synthetic computed tomography (sCT) images from multi-sequence magnetic resonance (MR) images using an adversarial network and to assess the feasibility of sCT-based treatment planning for brain radiotherapy. Datasets for 15 patients with glioblastoma were selected and 580 pairs of CT and MR images were used. T1-weighted, T2-weighted and fluid-attenuated inversion recovery MR sequences were combined to create a three-channel image as input data. A conditional generative adversarial network (cGAN) was trained using image patches. The image quality was evaluated using voxel-wise mean absolute errors (MAEs) of the CT number. For the dosimetric evaluation, 3D conformal radiotherapy (3D-CRT) and volumetric modulated arc therapy (VMAT) plans were generated using the original CT set and recalculated using the sCT images. The isocenter dose and dose-volume parameters were compared for 3D-CRT and VMAT plans, respectively. The equivalent path length was also compared. The mean MAEs for the whole body, soft tissue and bone regions were 108.1 +/- 24.0, 38.9 +/- 10.7 and 366.2 +/- 62.0 Hounsfield units, respectively. The dosimetric evaluation revealed no significant difference in the isocenter dose for 3D-CRT plans. The differences in the dose received by 2% of the volume (D2%), D50% and D98% relative to the prescribed dose were <1.0%. The overall equivalent path length was shorter than that for real CT by 0.6 +/- 1.9 mm. A treatment planning study using the generated sCT detected only small, clinically negligible differences. These findings demonstrate the feasibility of generating sCT images for MR-only radiotherapy from multi-sequence MR images using a cGAN.

Creation and Curation of the Society of Imaging Informatics in Medicine Hackathon Dataset

  • Kohli, Marc
  • Morrison, James J
  • Wawira, Judy
  • Morgan, Matthew B
  • Hostetter, Jason
  • Genereaux, Brad
  • Hussain, Mohannad
  • Langer, Steve G
2018 Journal Article, cited 4 times
Website

A Baseline for Predicting Glioblastoma Patient Survival Time with Classical Statistical Models and Primitive Features Ignoring Image Information

  • Kofler, Florian
  • Paetzold, Johannes C.
  • Ezhov, Ivan
  • Shit, Suprosanna
  • Krahulec, Daniel
  • Kirschke, Jan S.
  • Zimmer, Claus
  • Wiestler, Benedikt
  • Menze, Bjoern H.
2020 Book Section, cited 0 times
Gliomas are the most prevalent primary malignant brain tumors in adults. Until now an accurate and reliable method to predict patient survival time based on medical imaging and meta-information has not been developed [3]. Therefore, the survival time prediction task was introduced to the Multimodal Brain Tumor Segmentation Challenge (BraTS) to facilitate research in survival time prediction. Here we present our submissions to the BraTS survival challenge based on classical statistical models to which we feed the provided metadata as features. We intentionally ignore the available image information to explore how patient survival can be predicted purely by metadata. We achieve our best accuracy on the validation set using a simple median regression model taking only patient age into account. We suggest using our model as a baseline to benchmark the added predictive value of sophisticated features for survival time prediction.
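Since the best-performing baseline is a plain median regression on patient age, a minimal sketch with scikit-learn's QuantileRegressor (quantile 0.5 gives least-absolute-deviation, i.e. median, regression) may help; the toy numbers are illustrative only and are not from the BraTS data.

    import numpy as np
    from sklearn.linear_model import QuantileRegressor

    # toy stand-in data: age in years, survival in days (illustrative only)
    age = np.array([45, 52, 60, 63, 68, 71, 75]).reshape(-1, 1)
    survival_days = np.array([820, 610, 455, 400, 310, 270, 200])

    # quantile=0.5 turns this into a median (L1-loss) regression on age alone
    model = QuantileRegressor(quantile=0.5, alpha=0.0).fit(age, survival_days)
    print(model.predict([[58]]))  # predicted median survival for a 58-year-old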

Machine learning-based unenhanced CT texture analysis for predicting BAP1 mutation status of clear cell renal cell carcinomas

  • Kocak, Burak
  • Durmaz, Emine Sebnem
  • Kaya, Ozlem Korkmaz
  • Kilickesmez, Ozgur
Acta Radiol 2019 Journal Article, cited 0 times
Website
BACKGROUND: BRCA1-associated protein 1 (BAP1) mutation is an unfavorable factor for overall survival in patients with clear cell renal cell carcinoma (ccRCC). Radiomics literature about BAP1 mutation lacks papers that consider the reliability of texture features in their workflow. PURPOSE: Using texture features with a high inter-observer agreement, we aimed to develop and internally validate a machine learning-based radiomic model for predicting the BAP1 mutation status of ccRCCs. MATERIALS AND METHODS: For this retrospective study, 65 ccRCCs were included from a public database. Texture features were extracted from unenhanced computed tomography (CT) images, using two-dimensional manual segmentation. Dimension reduction was done in three steps: (i) inter-observer agreement analysis; (ii) collinearity analysis; and (iii) feature selection. The machine learning classifier was random forest. The model was validated using 10-fold nested cross-validation. The reference standard was the BAP1 mutation status. RESULTS: Out of 744 features, 468 had an excellent inter-observer agreement. After the collinearity analysis, the number of features decreased to 17. Finally, the wrapper-based algorithm selected six features. Using selected features, the random forest correctly classified 84.6% of the labelled slices regarding BAP1 mutation status with an area under the receiver operating characteristic curve of 0.897. For predicting ccRCCs with BAP1 mutation, the sensitivity, specificity, and precision were 90.4%, 78.8%, and 81%, respectively. For predicting ccRCCs without BAP1 mutation, the sensitivity, specificity, and precision were 78.8%, 90.4%, and 89.1%, respectively. CONCLUSION: Machine learning-based unenhanced CT texture analysis might be a potential method for predicting the BAP1 mutation status of ccRCCs.
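The validation scheme described above (a random forest classifier evaluated with 10-fold nested cross-validation after feature selection) can be sketched with scikit-learn as below; the feature matrix, labels and tuning grid are placeholders, not the study's data.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

    X = np.random.rand(65, 6)          # 6 selected texture features (placeholder)
    y = np.random.randint(0, 2, 65)    # BAP1 mutation status (placeholder labels)

    inner = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    outer = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
    tuned = GridSearchCV(RandomForestClassifier(random_state=0),
                         {"n_estimators": [100, 500]}, cv=inner, scoring="roc_auc")
    # the outer loop scores a model whose hyperparameters were tuned
    # only on the inner folds, avoiding optimistic bias
    scores = cross_val_score(tuned, X, y, cv=outer, scoring="roc_auc")
    print(scores.mean())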

Reliability of Single-Slice–Based 2D CT Texture Analysis of Renal Masses: Influence of Intra- and Interobserver Manual Segmentation Variability on Radiomic Feature Reproducibility

  • Kocak, Burak
  • Durmaz, Emine Sebnem
  • Kaya, Ozlem Korkmaz
  • Ates, Ece
  • Kilickesmez, Ozgur
AJR Am J Roentgenol 2019 Journal Article, cited 0 times
Website
OBJECTIVE. The objective of our study was to investigate the potential influence of intra- and interobserver manual segmentation variability on the reliability of single-slice-based 2D CT texture analysis of renal masses. MATERIALS AND METHODS. For this retrospective study, 30 patients with clear cell renal cell carcinoma were included from a public database. For intra- and interobserver analyses, three radiologists with varying degrees of experience segmented the tumors from unenhanced CT and corticomedullary phase contrast-enhanced CT (CECT) in different sessions. Each radiologist was blind to the image slices selected by other radiologists and him- or herself in the previous session. A total of 744 texture features were extracted from original, filtered, and transformed images. The intraclass correlation coefficient was used for reliability analysis. RESULTS. In the intraobserver analysis, the rates of features with good to excellent reliability were 84.4-92.2% for unenhanced CT and 85.5-93.1% for CECT. Considering the mean rates of unenhanced CT and CECT, having high experience resulted in better reliability rates in terms of the intraobserver analysis. In the interobserver analysis, the rates were 76.7% for unenhanced CT and 84.9% for CECT. The gray-level cooccurrence matrix and first-order feature groups yielded higher good to excellent reliability rates on both unenhanced CT and CECT. Filtered and transformed images resulted in more features with good to excellent reliability than the original images did on both unenhanced CT and CECT. CONCLUSION. Single-slice-based 2D CT texture analysis of renal masses is sensitive to intra- and interobserver manual segmentation variability. Therefore, it may lead to nonreproducible results in radiomic analysis unless a reliability analysis is considered in the workflow.
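A minimal sketch of one common variant of the reliability metric used here, the single-rater intraclass correlation coefficient ICC(1,1) computed from one-way ANOVA mean squares, is shown below; the example ratings are invented.

    import numpy as np

    def icc_1_1(ratings):
        # One-way random, single-rater ICC(1,1);
        # ratings has shape (n_targets, k_raters).
        n, k = ratings.shape
        grand = ratings.mean()
        target_means = ratings.mean(axis=1)
        msb = k * ((target_means - grand) ** 2).sum() / (n - 1)   # between targets
        msw = ((ratings - target_means[:, None]) ** 2).sum() / (n * (k - 1))
        return (msb - msw) / (msb + (k - 1) * msw)

    # e.g. one radiomic feature measured by 3 readers on 5 lesions
    print(icc_1_1(np.array([[1.0, 1.1, 0.9],
                            [2.0, 2.2, 1.9],
                            [3.1, 3.0, 3.2],
                            [4.0, 4.1, 3.8],
                            [5.2, 5.0, 5.1]])))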

Radiogenomics of lower-grade gliomas: machine learning-based MRI texture analysis for predicting 1p/19q codeletion status

  • Kocak, B.
  • Durmaz, E. S.
  • Ates, E.
  • Sel, I.
  • Turgut Gunes, S.
  • Kaya, O. K.
  • Zeynalova, A.
  • Kilickesmez, O.
Eur Radiol 2019 Journal Article, cited 0 times
Website
OBJECTIVE: To evaluate the potential value of machine learning (ML)-based MRI texture analysis for predicting the 1p/19q codeletion status of lower-grade gliomas (LGG), using various state-of-the-art ML algorithms. MATERIALS AND METHODS: For this retrospective study, 107 patients with LGG were included from a public database. Texture features were extracted from conventional T2-weighted and contrast-enhanced T1-weighted MRI images, using LIFEx software. Training and unseen validation splits were created using a stratified 10-fold cross-validation technique along with minority over-sampling. Dimension reduction was done using collinearity analysis and feature selection (ReliefF). Classifications were done using adaptive boosting, k-nearest neighbours, naive Bayes, neural network, random forest, stochastic gradient descent, and support vector machine. The Friedman test and pairwise post hoc analyses were used for comparison of classification performances based on the area under the curve (AUC). RESULTS: Overall, the predictive performance of the ML algorithms was statistically significantly different, χ²(6) = 26.7, p < 0.001. There was no statistically significant difference among the performance of the neural network, naive Bayes, support vector machine, random forest, and stochastic gradient descent, adjusted p > 0.05. The mean AUC and accuracy values of these five algorithms ranged from 0.769 to 0.869 and from 80.1 to 84%, respectively. The neural network had the highest mean rank with mean AUC and accuracy values of 0.869 and 83.8%, respectively. CONCLUSIONS: ML-based MRI texture analysis might be a promising non-invasive technique for predicting the 1p/19q codeletion status of LGGs. Using this technique along with various ML algorithms, more than four-fifths of the LGGs can be correctly classified. KEY POINTS: * More than four-fifths of the lower-grade gliomas can be correctly classified with machine learning-based MRI texture analysis. Satisfying classification outcomes are not limited to a single algorithm. * A few-slice-based volumetric segmentation technique would be a valid approach, providing satisfactory predictive textural information and avoiding excessive segmentation duration in clinical practice. * Feature selection is sensitive to different patient data set samples, so each sampling leads to the selection of different feature subsets, which needs to be considered in future works.
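A sketch of the cross-validation scheme with minority over-sampling follows; putting SMOTE inside an imbalanced-learn pipeline ensures the oversampler is fit on training folds only, so held-out folds stay untouched. Data shapes and classifier settings are placeholders.

    import numpy as np
    from imblearn.over_sampling import SMOTE
    from imblearn.pipeline import Pipeline
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.ensemble import RandomForestClassifier

    X = np.random.rand(107, 20)        # texture features (placeholder)
    y = np.random.randint(0, 2, 107)   # 1p/19q codeletion status (placeholder)

    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    for name, clf in [("neural net", MLPClassifier(max_iter=1000)),
                      ("naive Bayes", GaussianNB()),
                      ("random forest", RandomForestClassifier())]:
        pipe = Pipeline([("smote", SMOTE(random_state=0)), ("clf", clf)])
        auc = cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc")
        print(name, auc.mean())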

Unenhanced CT Texture Analysis of Clear Cell Renal Cell Carcinomas: A Machine Learning-Based Study for Predicting Histopathologic Nuclear Grade

  • Kocak, Burak
  • Durmaz, Emine Sebnem
  • Ates, Ece
  • Kaya, Ozlem Korkmaz
  • Kilickesmez, Ozgur
American Journal of Roentgenology 2019 Journal Article, cited 0 times
Website
OBJECTIVE: The purpose of this study is to investigate the predictive performance of machine learning (ML)-based unenhanced CT texture analysis in distinguishing low (grades I and II) and high (grades III and IV) nuclear grade clear cell renal cell carcinomas (RCCs). MATERIALS AND METHODS: For this retrospective study, 81 patients with clear cell RCC (56 high and 25 low nuclear grade) were included from a public database. Using 2D manual segmentation, 744 texture features were extracted from unenhanced CT images. Dimension reduction was done in three consecutive steps: reproducibility analysis by two radiologists, collinearity analysis, and feature selection. Models were created using an artificial neural network (ANN) and binary logistic regression, with and without the synthetic minority oversampling technique (SMOTE), and were validated using 10-fold cross-validation. The reference standard was histopathologic nuclear grade (low vs high). RESULTS: Dimension reduction steps yielded five texture features for the ANN and six for the logistic regression algorithm. None of the clinical variables was selected. ANN alone and ANN with SMOTE correctly classified 81.5% and 70.5%, respectively, of clear cell RCCs, with AUC values of 0.714 and 0.702, respectively. The logistic regression algorithm alone and with SMOTE correctly classified 75.3% and 62.5%, respectively, of the tumors, with AUC values of 0.656 and 0.666, respectively. The ANN performed better than the logistic regression (p < 0.05). No statistically significant difference was present between the model performances created with and without SMOTE (p > 0.05). CONCLUSION: ML-based unenhanced CT texture analysis using an ANN can be a promising noninvasive method for predicting the nuclear grade of clear cell RCCs.

Influence of segmentation margin on machine learning–based high-dimensional quantitative CT texture analysis: a reproducibility study on renal clear cell carcinomas

  • Kocak, Burak
  • Ates, Ece
  • Durmaz, Emine Sebnem
  • Ulusan, Melis Baykara
  • Kilickesmez, Ozgur
European Radiology 2019 Journal Article, cited 0 times
Website

Design and evaluation of an accurate CNR-guided small region iterative restoration-based tumor segmentation scheme for PET using both simulated and real heterogeneous tumors

  • Koç, Alpaslan
  • Güveniş, Albert
2020 Journal Article, cited 0 times
Website
Tumor delineation accuracy directly affects the effectiveness of radiotherapy. This study presents a methodology that minimizes potential errors during the automated segmentation of tumors in PET images. Iterative blind deconvolution was implemented in a region of interest encompassing the tumor, with the number of iterations determined from contrast-to-noise ratios. The active contour and random forest classification-based segmentation methods were evaluated using three distinct image databases that included both synthetic and real heterogeneous tumors. Ground truths about tumor volumes were known precisely. The volumes of the tumors were in the ranges of 0.49-26.34 cm³, 0.64-1.52 cm³, and 40.38-203.84 cm³, respectively. Widely available software tools, namely MATLAB, MIPAV, and ITK-SNAP, were utilized. When using the active contour method, image restoration reduced mean errors in volume estimation from 95.85 to 3.37%, from 815.63 to 17.45%, and from 32.61 to 6.80% for the three datasets. The accuracy gains were higher for datasets that include smaller tumors, for which the partial volume effect (PVE) is known to be more predominant. Computation time was reduced by a factor of about 10 in the smaller deconvolution region. Contrast-to-noise ratios were improved for all tumors in all data. The presented methodology has the potential to improve delineation accuracy, in particular for smaller tumors, at practically feasible computational times. Graphical abstract: Evaluation of accurate lesion volumes using the CNR-guided, ROI-based restoration method for PET images.
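scikit-image ships no blind deconvolution, so the following sketch substitutes non-blind Richardson-Lucy restoration with a known PSF to illustrate the CNR-guided choice of iteration count; the function names, masks and stopping rule are assumptions, not the authors' implementation.

    import numpy as np
    from skimage.restoration import richardson_lucy

    def cnr(img, tumor_mask, bg_mask):
        # contrast-to-noise ratio between tumor and background regions
        return abs(img[tumor_mask].mean() - img[bg_mask].mean()) / img[bg_mask].std()

    def deconvolve_by_cnr(roi, psf, tumor_mask, bg_mask, max_iter=30):
        # restore a small ROI, keeping the iteration count that maximizes CNR
        best_img, best_cnr = roi, cnr(roi, tumor_mask, bg_mask)
        for k in range(1, max_iter + 1):
            restored = richardson_lucy(roi, psf, num_iter=k, clip=False)
            c = cnr(restored, tumor_mask, bg_mask)
            if c > best_cnr:
                best_img, best_cnr = restored, c
        return best_img, best_cnr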

RadiomicsJ: a library to compute radiomic features

  • Kobayashi, T.
Radiol Phys Technol 2022 Journal Article, cited 0 times
Website
Despite the widely recognized need for radiomics research, the development and use of full-scale radiomics-based predictive models in clinical practice remains scarce. This is because of the lack of well-established methodologies for radiomic research and the need to develop systems to support radiomic feature calculations and predictive model use. Several excellent programs for calculating radiomic features have been developed. However, there are still issues such as the types of image features, variations in the calculated results, and the limited system environment in which to run the program. Against this background, we developed RadiomicsJ, an open-source radiomic feature computation library. RadiomicsJ will not only be a new research tool to enhance the efficiency of radiomics research but will also become a knowledge resource for medical imaging feature studies through its release as an open-source program.

Decomposing normal and abnormal features of medical images for content-based image retrieval of glioma imaging

  • Kobayashi, K.
  • Hataya, R.
  • Kurose, Y.
  • Miyake, M.
  • Takahashi, M.
  • Nakagawa, A.
  • Harada, T.
  • Hamamoto, R.
Med Image Anal 2021 Journal Article, cited 2 times
Website
In medical imaging, the characteristics purely derived from a disease should reflect the extent to which abnormal findings deviate from normal features. Indeed, physicians often need corresponding images without abnormal findings of interest or, conversely, images that contain similar abnormal findings regardless of normal anatomical context. This is called comparative diagnostic reading of medical images, which is essential for a correct diagnosis. To support comparative diagnostic reading, content-based image retrieval (CBIR) that can selectively utilize normal and abnormal features in medical images as two separable semantic components will be useful. In this study, we propose a neural network architecture to decompose the semantic components of medical images into two latent codes: a normal anatomy code and an abnormal anatomy code. The normal anatomy code represents counterfactual normal anatomies that should have existed if the sample were healthy, whereas the abnormal anatomy code corresponds to abnormal changes that reflect deviation from the normal baseline. By calculating the similarity based on either normal or abnormal anatomy codes or the combination of the two codes, our algorithm can retrieve images according to the selected semantic component from a dataset consisting of brain magnetic resonance images of gliomas. Moreover, it can utilize a synthetic query vector combining normal and abnormal anatomy codes from two different query images. To evaluate whether the retrieved images are acquired according to the targeted semantic component, the overlap of the ground-truth labels is calculated as a metric of semantic consistency. Our algorithm provides a flexible CBIR framework by handling the decomposed features with qualitatively and quantitatively remarkable results.
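The retrieval step, ranking by similarity on either latent code or a combination of the two, can be sketched in a few lines of NumPy; the codes are assumed to be fixed-length vectors and the weighted combination is illustrative, not the paper's exact scheme.

    import numpy as np

    def cosine_sim(q, db):
        # cosine similarity between one query code and a database of codes
        q = q / np.linalg.norm(q)
        db = db / np.linalg.norm(db, axis=1, keepdims=True)
        return db @ q

    def retrieve(query_normal, query_abnormal, db_normal, db_abnormal,
                 weight_abnormal=1.0, top_k=5):
        # rank database images by similarity on normal- and abnormal-anatomy
        # codes; weight_abnormal=0 retrieves by normal anatomy only
        sim = cosine_sim(query_normal, db_normal) \
            + weight_abnormal * cosine_sim(query_abnormal, db_abnormal)
        return np.argsort(-sim)[:top_k]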

PleThora: Pleural effusion and thoracic cavity segmentations in diseased lungs for benchmarking chest CT processing pipelines

  • Kiser, K. J.
  • Ahmed, S.
  • Stieb, S.
  • Mohamed, A. S. R.
  • Elhalawani, H.
  • Park, P. Y. S.
  • Doyle, N. S.
  • Wang, B. J.
  • Barman, A.
  • Li, Z.
  • Zheng, W. J.
  • Fuller, C. D.
  • Giancardo, L.
Med Phys 2020 Journal Article, cited 0 times
Website
This manuscript describes a dataset of thoracic cavity segmentations and discrete pleural effusion segmentations we have annotated on 402 computed tomography (CT) scans acquired from patients with non-small cell lung cancer. The segmentation of these anatomic regions precedes fundamental tasks in image analysis pipelines such as lung structure segmentation, lesion detection, and radiomics feature extraction. Bilateral thoracic cavity volumes and pleural effusion volumes were manually segmented on CT scans acquired from The Cancer Imaging Archive "NSCLC Radiomics" data collection. Four hundred and two thoracic segmentations were first generated automatically by a U-Net based algorithm trained on chest CTs without cancer, manually corrected by a medical student to include the complete thoracic cavity (normal, pathologic, and atelectatic lung parenchyma, lung hilum, pleural effusion, fibrosis, nodules, tumor, and other anatomic anomalies), and revised by a radiation oncologist or a radiologist. Seventy-eight pleural effusions were manually segmented by a medical student and revised by a radiologist or radiation oncologist. Interobserver agreement between the radiation oncologist and radiologist corrections was acceptable. All expert-vetted segmentations are publicly available in NIfTI format through The Cancer Imaging Archive at https://doi.org/10.7937/tcia.2020.6c7y-gq39. Tabular data detailing clinical and technical metadata linked to segmentation cases are also available. Thoracic cavity segmentations will be valuable for developing image analysis pipelines on pathologic lungs - where current automated algorithms struggle most. In conjunction with gross tumor volume segmentations already available from "NSCLC Radiomics," pleural effusion segmentations may be valuable for investigating radiomics profile differences between effusion and primary tumor or training algorithms to discriminate between them.
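Since the segmentations are distributed as NIfTI files, a minimal loading sketch with nibabel is given below; the file name is illustrative.

    import nibabel as nib
    import numpy as np

    # load a PleThora-style NIfTI segmentation and report its volume in ml
    seg_img = nib.load("thoracic_cavity_seg.nii.gz")   # path is illustrative
    seg = seg_img.get_fdata() > 0                      # binarise the label map
    voxel_ml = np.prod(seg_img.header.get_zooms()[:3]) / 1000.0  # mm^3 -> ml
    print(f"segmented volume: {seg.sum() * voxel_ml:.1f} ml")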

Two-Step U-Nets for Brain Tumor Segmentation and Random Forest with Radiomics for Survival Time Prediction

  • Kim, Soopil
  • Luna, Miguel
  • Chikontwe, Philip
  • Park, Sang Hyun
2020 Book Section, cited 0 times
In this paper, a two-step convolutional neural network (CNN) for brain tumor segmentation in brain MR images and a random forest regressor for survival prediction of high-grade glioma subjects are proposed. The two-step CNN consists of three 2D U-nets for utilizing global information on the axial, coronal, and sagittal axes, and a 3D U-net that uses local information in 3D patches. In our two-step setup, an initial segmentation probability map is first obtained using the ensemble of 2D U-nets; second, a 3D U-net takes as input both the MR image and the initial segmentation map to generate the final segmentation. Following segmentation, radiomics features from T1-weighted, T2-weighted, contrast-enhanced T1-weighted, and T2-FLAIR images are extracted with the segmentation results as a prior. Lastly, a random forest regressor is used for survival time prediction. Moreover, only a small number of features selected by the random forest regressor are used to avoid overfitting. We evaluated the proposed methods on the BraTS 2019 challenge dataset. For the segmentation task, we obtained average Dice scores of 0.74, 0.85 and 0.80 for the enhanced tumor core, whole tumor, and tumor core, respectively. In the survival prediction task, an average accuracy of 50.5% was obtained, showing the effectiveness of the proposed methods.

ICP Algorithm Based Liver Rigid Registration Method Using Liver and Liver Vessel Surface Mesh

  • Kim, Soohyun
  • Koo, Kyoyeong
  • Park, Taeyong
  • Lee, Jeongjin
2023 Conference Paper, cited 0 times
To improve the survival rate of hepatocellular carcinoma (HCC) patients, early diagnosis and treatment are essential. Early diagnosis of HCC often involves comparing and analyzing hundreds of computed tomography (CT) images, a subjective and time-consuming process. In this paper, we propose a liver rigid registration method using liver and liver vessel surface meshes to enable fast and objective diagnosis of HCC. The proposed method involves segmenting the liver and liver vessel regions from abdominal CT images, generating surface meshes, and performing liver rigid registration based on the Iterative Closest Point (ICP) algorithm using the generated meshes. We evaluate the accuracy of the proposed method through experiments; the performance evaluations demonstrate its potential to support fast and objective early diagnosis and treatment of HCC.
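A minimal point-to-point ICP over mesh vertex clouds, the core of the registration step described above, might look as follows (NumPy/SciPy, closest-point matching with a Kabsch/SVD rigid fit). This is a generic sketch, not the authors' implementation.

    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(src, dst):
        # least-squares rotation R and translation t mapping src onto dst (Kabsch)
        cs, cd = src.mean(axis=0), dst.mean(axis=0)
        H = (src - cs).T @ (dst - cd)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:        # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, cd - R @ cs

    def icp(source_pts, target_pts, iters=50, tol=1e-6):
        # point-to-point ICP on Nx3 vertex arrays from the surface meshes
        tree = cKDTree(target_pts)
        src, prev_err = source_pts.copy(), np.inf
        for _ in range(iters):
            dist, idx = tree.query(src)             # closest-point correspondences
            R, t = best_rigid_transform(src, target_pts[idx])
            src = src @ R.T + t
            err = dist.mean()
            if abs(prev_err - err) < tol:
                break
            prev_err = err
        return src, err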

Training of deep convolutional neural nets to extract radiomic signatures of tumors

  • Kim, J.
  • Seo, S.
  • Ashrafinia, S.
  • Rahmim, A.
  • Sossi, V.
  • Klyuzhin, I.
2019 Journal Article, cited 0 times
Website
Objectives: Radiomics-based analysis of FDG PET images has been shown to improve the assessment and prediction of tumor growth rate, response to treatment and other patient outcomes [1]. An alternative new approach to image analysis involves the use of convolutional neural networks (CNNs), wherein relevant image features are learned implicitly and automatically in the process of network training [2]; this is in contrast to radiomics analyses, where the features are “hand-crafted” and explicitly computed (EC). Although CNNs represent a more general approach, it is not clear whether the implicitly learned features may, or have the ability to, include radiomics features (RFs) as a subset. If this is the case, CNN-based approaches may eventually obviate the use of EC RFs. Further, the use of CNNs instead of RFs may completely eliminate the need for feature selection and tumor delineation, enabling high-throughput data analyses. Thus, our objective was to test whether CNNs can learn to act similarly to several commonly used RFs. Using a set of simulated and real FDG PET images of tumors, we train the CNNs to estimate the values of RFs from the images without the explicit computation. We then compare the values of the CNN-estimated and EC features. Methods: Using a stochastic volumetric model for tumor growth, 2000 FDG images of tumors confined to a bounding box (BB) were simulated (40x40x40 voxels, voxel size 2.0 mm), and 10 RFs (3 x morphology, 4 x intensity histogram, 3 x texture features) were computed for each image using the SERA library [3] (compliant with the Image Biomarker Standardization Initiative, IBSI [4]). A 3D CNN with 4 convolutional layers and a total of 164 filters was implemented in Python using the Keras library with TensorFlow backend (https://www.keras.io). The mean absolute error was the optimized loss function. The CNN was trained to automatically estimate the value of each of the 10 RFs for each image; 1900 of the images were used for training, and 100 were used for testing, to compare the CNN-estimated values to the EC feature values. We also used a secondary test set comprising 133 real tumor images, obtained from the head and neck PET/CT imaging study [5] publicly available at the Cancer Imaging Archive. The tumors were cropped to a BB, and the images were resampled to yield an image size similar to the simulated image set. Results: After the training procedure, on the simulated test set the CNN was able to estimate the values of most EC RFs with 10-20% error (relative to the range). In the morphology group, the errors were 3.8% for volume, 12.0% for compactness, and 15.7% for flatness. In the intensity group, the errors were 13.7% for the mean, 15.4% for variance, 12.3% for skewness, and 13.1% for kurtosis. In the texture group, the error was 10.6% for GLCM contrast, 13.4% for cluster tendency, and 21.7% for angular momentum. For all features, the difference between the CNN-estimated and EC feature values was statistically insignificant (two-sample t-test), and the correlation between the feature values was highly significant (p<0.01). On the real image test set, we observed higher error rates, on the order of 20-30%; however, for all but one feature (angular momentum), there was a significant correlation between the CNN-estimated and EC features (p<0.01). Conclusions: Our results suggest that CNNs can be trained to act similarly to several widely used RFs. While the accuracy of CNN-based estimates varied between the features, in general, the CNN showed a good propensity for learning. Thus, it is likely that with more complex network architectures and training data, features can be estimated more accurately. While a greater number of RFs need to be similarly tested in the future, these initial experiments provide first evidence that, given sufficient quality and quantity of training data, CNNs indeed represent a more general approach to feature extraction, and may potentially replace radiomics-based analyses without compromising descriptive thoroughness.
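A sketch of the regression network described, a small Keras 3D CNN with MAE loss mapping a 40x40x40 volume to 10 radiomic-feature values, is given below; the layer widths are illustrative and do not reproduce the paper's 164-filter configuration.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_rf_regressor(n_features=10):
        # 4 convolutional layers, one regression output per radiomic feature
        return models.Sequential([
            layers.Input(shape=(40, 40, 40, 1)),
            layers.Conv3D(16, 3, activation="relu"),
            layers.MaxPooling3D(2),
            layers.Conv3D(32, 3, activation="relu"),
            layers.MaxPooling3D(2),
            layers.Conv3D(64, 3, activation="relu"),
            layers.Conv3D(64, 3, activation="relu"),
            layers.GlobalAveragePooling3D(),
            layers.Dense(n_features),
        ])

    model = build_rf_regressor()
    model.compile(optimizer="adam", loss="mean_absolute_error")  # MAE as in the text
    # model.fit(train_volumes, train_feature_values, validation_split=0.05)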

Visual Interpretation of Convolutional Neural Network Predictions in Classifying Medical Image Modalities

  • Kim, Incheol
  • Rajaraman, Sivaramakrishnan
  • Antani, Sameer
Diagnostics (Basel) 2019 Journal Article, cited 0 times
Website
Deep learning (DL) methods are increasingly being applied for developing reliable computer-aided detection (CADe), diagnosis (CADx), and information retrieval algorithms. However, challenges in interpreting and explaining the learned behavior of the DL models hinders their adoption and use in real-world systems. In this study, we propose a novel method called “Class-selective Relevance Mapping” (CRM) for localizing and visualizing discriminative regions of interest (ROI) within a medical image. Such visualizations offer improved explanation of the convolutional neural network (CNN)-based DL model predictions. We demonstrate CRM effectiveness in classifying medical imaging modalities toward automatically labeling them for visual information retrieval applications. The CRM is based on linear sum of incremental mean squared errors (MSE) calculated at the output layer of the CNN model. It measures both positive and negative contributions of each spatial element in the feature maps produced from the last convolution layer leading to correct classification of an input image. A series of experiments on a “multi-modality” CNN model designed for classifying seven different types of image modalities shows that the proposed method is significantly better in detecting and localizing the discriminative ROIs than other state of the art class-activation methods. Further, to visualize its effectiveness we generate “class-specific” ROI maps by averaging the CRM scores of images in each modality class, and characterize the visual explanation through their different size, shape, and location for our multi-modality CNN model that achieved over 98% performance on a dataset constructed from publicly available images.

Pulse Sequence Dependence of a Simple and Interpretable Deep Learning Method for Detection of Clinically Significant Prostate Cancer Using Multiparametric MRI

  • Kim, H.
  • Margolis, D. J. A.
  • Nagar, H.
  • Sabuncu, M. R.
Acad Radiol 2022 Journal Article, cited 0 times
Website
RATIONALE AND OBJECTIVES: Multiparametric magnetic resonance imaging (mpMRI) is increasingly used for risk stratification and localization of prostate cancer (PCa). Thanks to the great success of deep learning models in computer vision, the potential application for early detection of PCa using mpMRI is imminent. MATERIALS AND METHODS: Deep learning analysis of the PROSTATEx dataset. RESULTS: In this study, we show that a simple convolutional neural network (CNN) with mpMRI can achieve high performance for detection of clinically significant PCa (csPCa), depending on the pulse sequences used. The mpMRI model with T2-ADC-DWI achieved an AUC of 0.90 in the held-out test set, not significantly better than the model using Ktrans instead of DWI (AUC 0.89). Interestingly, the model incorporating T2-ADC-Ktrans better estimates grade. We also describe a saliency "heat" map. Our results show that csPCa detection models with mpMRI may be leveraged to guide clinical management strategies. CONCLUSION: Convolutional neural networks incorporating multiple pulse sequences show high performance for detection of clinically significant prostate cancer, and the model including dynamic contrast-enhanced information correlates best with grade.

Modification of population based arterial input function to incorporate individual variation

  • Kim, Harrison
Magn Reson Imaging 2018 Journal Article, cited 2 times
Website
This technical note describes how to modify a population-based arterial input function to incorporate variation among individuals. In DCE-MRI, an arterial input function (AIF) is often distorted by pulsated inflow effects and noise. A population-based AIF (pAIF) has a high signal-to-noise ratio (SNR), but cannot incorporate individual variation. AIF variation is mainly induced by variation in the cardiac output and blood volume of the individuals, which can be detected by the full width at half maximum (FWHM) during the first passage and the amplitude of the AIF, respectively. Thus a pAIF scaled in time and amplitude to fit the individual AIF may serve as a high-SNR AIF incorporating individual variation. The proposed method was validated using DCE-MRI images of 18 prostate cancer patients. The root mean square error (RMSE) of the pAIF from individual AIFs was 0.88 ± 0.48 mM (mean ± SD), but it was reduced to 0.25 ± 0.11 mM after pAIF modification using the proposed method (p < 0.0001).
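The described modification, scaling the pAIF in time by the FWHM ratio and in amplitude by the peak ratio, can be sketched as below; the sketch assumes a uniform time grid and a single first-pass peak, and it ignores bolus-arrival alignment, so it is only an approximation of the published method.

    import numpy as np

    def fwhm(t, c):
        # full width at half maximum of the first-pass peak of a curve
        half = c.max() / 2.0
        above = np.where(c >= half)[0]
        return t[above[-1]] - t[above[0]]

    def fit_paif(t, paif, individual_aif):
        # scale a population AIF in time (FWHM ratio) and amplitude
        # (peak ratio) to fit an individual's noisy AIF
        time_scale = fwhm(t, individual_aif) / fwhm(t, paif)
        amp_scale = individual_aif.max() / paif.max()
        # stretch the pAIF time axis, then resample onto the original grid
        return amp_scale * np.interp(t, t * time_scale, paif)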

Correlation between MR Image-Based Radiomics Features and Risk Scores Associated with Gene Expression Profiles in Breast Cancer

  • Kim, Ga Ram
  • Ku, You Jin
  • Kim, Jun Ho
  • Kim, Eun-Kyung
2020 Journal Article, cited 0 times
Website

Associations between gene expression profiles of invasive breast cancer and Breast Imaging Reporting and Data System MRI lexicon

  • Kim, Ga Ram
  • Ku, You Jin
  • Cho, Soon Gu
  • Kim, Sei Joong
  • Min, Byung Soh
Annals of Surgical Treatment and Research 2017 Journal Article, cited 3 times
Website
Purpose: To evaluate whether the Breast Imaging Reporting and Data System (BI-RADS) MRI lexicon could reflect the genomic information of breast cancers and to suggest intuitive imaging features as biomarkers. Methods: Matched breast MRI data from The Cancer Imaging Archive and gene expression profile from The Cancer Genome Atlas of 70 invasive breast cancers were analyzed. Magnetic resonance images were reviewed according to the BI-RADS MRI lexicon of mass morphology. The cancers were divided into 2 groups of gene clustering by gene set enrichment analysis. Clinicopathologic and imaging characteristics were compared between the 2 groups. Results: The luminal subtype was predominant in the group 1 gene set and the triple-negative subtype was predominant in the group 2 gene set (55 of 56, 98.2% vs. 9 of 14, 64.3%). Internal enhancement descriptors were different between the 2 groups; heterogeneity was most frequent in group 1 (27 of 56, 48.2%) and rim enhancement was dominant in group 2 (10 of 14, 71.4%). In group 1, the gene sets related to mammary gland development were overexpressed whereas the gene sets related to mitotic cell division were overexpressed in group 2. Conclusion: We identified intuitive imaging features of breast MRI associated with distinct gene expression profiles using the standard imaging variables of BI-RADS. The internal enhancement pattern on MRI might reflect specific gene expression profiles of breast cancers, which can be recognized by visual distinction.

Prediction of 1p/19q Codeletion in Diffuse Glioma Patients Using Preoperative Multiparametric Magnetic Resonance Imaging

  • Kim, Donnie
  • Wang, Nicholas C
  • Ravikumar, Visweswaran
  • Raghuram, DR
  • Li, Jinju
  • Patel, Ankit
  • Wendt, Richard E
  • Rao, Ganesh
  • Rao, Arvind
Frontiers in Computational Neuroscience 2019 Journal Article, cited 0 times

Validation of MRI-Based Models to Predict MGMT Promoter Methylation in Gliomas: BraTS 2021 Radiogenomics Challenge

  • Kim, B. H.
  • Lee, H.
  • Choi, K. S.
  • Nam, J. G.
  • Park, C. K.
  • Park, S. H.
  • Chung, J. W.
  • Choi, S. H.
Cancers (Basel) 2022 Journal Article, cited 1 times
Website
O6-methylguanine-DNA methyltransferase (MGMT) methylation prediction models were developed using only small datasets without proper external validation and achieved good diagnostic performance, which seems to indicate a promising future for radiogenomics. However, the diagnostic performance was not reproducible for numerous research teams when using a larger dataset in the RSNA-MICCAI Brain Tumor Radiogenomic Classification 2021 challenge. To our knowledge, there has been no study regarding the external validation of MGMT prediction models using large-scale multicenter datasets. We tested recent CNN architectures via extensive experiments to investigate whether MGMT methylation in gliomas can be predicted using MR images. Specifically, prediction models were developed and validated with different training datasets: (1) the merged (SNUH + BraTS) (n = 985); (2) SNUH (n = 400); and (3) BraTS datasets (n = 585). A total of 420 training and validation experiments were performed on combinations of datasets, convolutional neural network (CNN) architectures, MRI sequences, and random seed numbers. The first-place solution of the RSNA-MICCAI radiogenomic challenge was also validated using the external test set (SNUH). For model evaluation, the area under the receiver operating characteristic curve (AUROC), accuracy, precision, and recall were obtained. With unexpectedly negative results, 80.2% (337/420) and 60.0% (252/420) of the 420 developed models showed no significant difference from the 50% chance level in terms of test accuracy and test AUROC, respectively. The test AUROC and accuracy of the first-place solution of the BraTS 2021 challenge were 56.2% and 54.8%, respectively, as validated on the SNUH dataset. In conclusion, MGMT methylation status of gliomas may not be predictable with preoperative MR images even using deep learning.

CNN-based CT denoising with an accurate image domain noise insertion technique

  • Kim, Byeongjoon
  • Divel, Sarah E.
  • Pelc, Norbert J.
  • Baek, Jongduk
  • Bosmans, Hilde
  • Zhao, Wei
  • Yu, Lifeng
2021 Conference Paper, cited 0 times
Website
Convolutional neural network (CNN)-based CT denoising methods have attracted great interest for improving the image quality of low-dose CT (LDCT) images. However, CNN requires a large amount of paired data consisting of normal-dose CT (NDCT) and LDCT images, which are generally not available. In this work, we aim to synthesize paired data from NDCT images with an accurate image domain noise insertion technique and investigate its effect on the denoising performance of CNN. Fan-beam CT images were reconstructed using extended cardiac-torso phantoms with Poisson noise added to projection data to simulate NDCT and LDCT. We estimated local noise power spectra and a variance map from a NDCT image using information on photon statistics and reconstruction parameters. We then synthesized image domain noise by filtering and scaling white Gaussian noise using the local noise power spectrum and variance map, respectively. The CNN architecture was U-net, and the loss function was a weighted summation of mean squared error, perceptual loss, and adversarial loss. CNN was trained with NDCT and LDCT (CNN-Ideal) or NDCT and synthesized LDCT (CNN-Proposed). To evaluate denoising performance, we measured the root mean squared error (RMSE), structural similarity index (SSIM), noise power spectrum (NPS), and modulation transfer function (MTF). The MTF was estimated from the edge spread function of a circular object with 12 mm diameter and 60 HU contrast. Denoising results from CNN-Ideal and CNN-Proposed show no significant difference in all metrics while providing high scores in RMSE and SSIM compared to NDCT and similar NPS shapes to that of NDCT.
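A simplified version of the noise-insertion step, shaping white Gaussian noise with a noise power spectrum and scaling it by a variance map, is sketched here; unlike the paper's locally varying NPS, a single stationary 2D NPS (assumed to be supplied in unshifted DFT order) is used.

    import numpy as np

    def synthesize_ct_noise(nps, variance_map, rng=None):
        # shape white Gaussian noise in the frequency domain by the NPS
        # amplitude spectrum, normalize to unit variance, then scale voxel-wise
        rng = rng or np.random.default_rng(0)
        white = rng.standard_normal(nps.shape)
        kernel = np.sqrt(np.maximum(nps, 0))          # amplitude spectrum
        shaped = np.fft.ifft2(np.fft.fft2(white) * kernel).real
        shaped /= shaped.std()                        # unit-variance textured noise
        return shaped * np.sqrt(variance_map)

    # low_dose = normal_dose + synthesize_ct_noise(local_nps, variance_map)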

Synthesis of Hybrid Data Consisting of Chest Radiographs and Tabular Clinical Records Using Dual Generative Models for COVID-19 Positive Cases

  • Kikuchi, T.
  • Hanaoka, S.
  • Nakao, T.
  • Takenaga, T.
  • Nomura, Y.
  • Mori, H.
  • Yoshikawa, T.
2024 Journal Article, cited 0 times
Website
This study aimed to generate synthetic medical data incorporating image-tabular hybrid data by merging an image encoding/decoding model with a table-compatible generative model, and to assess their utility. We used 1342 cases from the Stony Brook University Covid-19-positive cases, comprising chest X-ray radiographs (CXRs) and tabular clinical data, as a private dataset (pDS). We generated a synthetic dataset (sDS) through the following steps: (I) dimensionally reducing CXRs in the pDS using a pretrained encoder of the auto-encoding generative adversarial networks (alphaGAN) and integrating them with the corresponding tabular clinical data; (II) training the conditional tabular GAN (CTGAN) on this combined data to generate synthetic records, encompassing encoded image features and clinical data; and (III) reconstructing synthetic images from these encoded image features in the sDS using a pretrained decoder of the alphaGAN. The utility of the sDS was assessed by the performance of prediction models for patient outcomes (deceased or discharged). For the pDS test set, the area under the receiver operating characteristic curve (AUC) was calculated to compare the performance of prediction models trained separately with the pDS, the sDS, or a combination of both. We created an sDS comprising CXRs with a resolution of 256 x 256 pixels and tabular data containing 13 variables. The AUC for the outcome was 0.83 when the model was trained with the pDS, 0.74 with the sDS, and 0.87 when combining pDS and sDS for training. Our method is effective for generating synthetic records consisting of both images and tabular clinical data.

Clinical target volume segmentation based on gross tumor volume using deep learning for head and neck cancer treatment

  • Kihara, S.
  • Koike, Y.
  • Takegawa, H.
  • Anetai, Y.
  • Nakamura, S.
  • Tanigawa, N.
  • Koizumi, M.
2022 Journal Article, cited 0 times
Website
Accurate clinical target volume (CTV) delineation is important for head and neck intensity-modulated radiation therapy. However, delineation is time-consuming and susceptible to interobserver variability (IOV). Based on a manual contouring process commonly used in clinical practice, we developed a deep learning (DL)-based method to delineate a low-risk CTV with computed tomography (CT) and gross tumor volume (GTV) input and compared it with a CT-only input. A total of 310 patients with oropharynx cancer were randomly divided into the training set (250) and test set (60). The low-risk CTV and primary GTV contours were used to generate label data for the input and ground truth. A 3D U-Net with a two-channel input of CT and GTV (U-NetGTV) was proposed and its performance was compared with a U-Net with only CT input (U-NetCT). The Dice similarity coefficient (DSC) and average Hausdorff distance (AHD) were evaluated. The time required to predict the CTV was 0.86 s per patient. U-NetGTV showed a significantly higher mean DSC value than U-NetCT (0.80 ± 0.03 and 0.76 ± 0.05) and a significantly lower mean AHD value (3.0 ± 0.5 mm vs 3.5 ± 0.7 mm). Compared to the existing DL method with only CT input, the proposed GTV-based segmentation using DL showed a more precise low-risk CTV segmentation for head and neck cancer. Our findings suggest that the proposed method could reduce the contouring time of a low-risk CTV, allowing the standardization of target delineations for head and neck cancer.

Tumor segmentation via enhanced area growth algorithm for lung CT images

  • Khorshidi, A.
BMC Med Imaging 2023 Journal Article, cited 0 times
Website
BACKGROUND: Since lung tumors are in dynamic conditions, the study of tumor growth and its changes is of great importance in primary diagnosis. METHODS: An enhanced area growth (EAG) algorithm is introduced to segment lung tumors in 2D and 3D modes on CT images of 60 patients from four different databases, using MATLAB software. The early steps of the proposed algorithm are contrast augmentation, determination of the color intensity and maximum primary tumor radius, thresholding, designation of start and neighbor points in an array, and then modification of the points in the braid on average. To determine the new tumor boundaries, the maximum distance from the color-intensity center point of the primary tumor to the modified points is appointed via considering a larger target region and a new threshold. The tumor center is divided into different subsections and then all previous stages are repeated from the newly designated points to define diverse boundaries for the tumor. An interpolation between these boundaries creates a new tumor boundary. After drawing diverse lines from the tumor center at relevant angles, the intersections with the tumor boundaries are determined for the edge correction phase. Each of the new regions is annexed to the core region to achieve a segmented tumor surface by meeting certain conditions. RESULTS: The multipoint grouping of growth starting points produced the desired outcome in the precise delineation of the tumor. The proposed algorithm enhanced tumor identification by more than 16% with a reasonable accuracy acceptance rate, while largely assuring the independence of the final outcome from the starting point. With a significance level of p < 0.05, the Dice coefficients were 0.80 ± 0.02 and 0.92 ± 0.03 for the primary and enhanced algorithms, respectively. Lung area determination alongside automatic thresholding, together with starting from several points and edge improvement, may reduce human errors in radiologists' interpretation of tumor areas and in selection of the algorithm's starting point. CONCLUSIONS: The proposed algorithm enhanced tumor detection by more than 18% with a sufficient accuracy acceptance ratio. Since the enhanced algorithm is independent of matrix size and image thickness, it is very likely that it can be easily applied to other contiguous tumor images. TRIAL REGISTRATION: PAZHOUHAN, PAZHOUHAN98000032. Registered 4 January 2021, http://pazhouhan.gerums.ac.ir/webreclist/view.action?webreclist_code=19300.
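For orientation, a baseline 2D region-growing routine (the kind of starting point the EAG algorithm enhances with multi-start seeding, adaptive thresholds and edge correction) is sketched below; the connectivity, tolerance and radius cap are assumptions, not the EAG algorithm itself.

    import numpy as np
    from collections import deque

    def region_grow(img, seed, tol, max_radius=None):
        # grow from a seed pixel, accepting 4-connected neighbours whose
        # intensity stays within +/- tol of the running region mean
        mask = np.zeros(img.shape, dtype=bool)
        mask[seed] = True
        total, count = float(img[seed]), 1
        q = deque([seed])
        while q:
            y, x = q.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                        and not mask[ny, nx]
                        and abs(img[ny, nx] - total / count) <= tol
                        and (max_radius is None
                             or np.hypot(ny - seed[0], nx - seed[1]) <= max_radius)):
                    mask[ny, nx] = True
                    total += float(img[ny, nx])
                    count += 1
                    q.append((ny, nx))
        return mask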

Non-small cell lung carcinoma histopathological subtype phenotyping using high-dimensional multinomial multiclass CT radiomics signature

  • Khodabakhshi, Z.
  • Mostafaei, S.
  • Arabi, H.
  • Oveisi, M.
  • Shiri, I.
  • Zaidi, H.
Comput Biol Med 2021 Journal Article, cited 0 times
Website
OBJECTIVE: The aim of this study was to identify the most important features and assess their discriminative power in the classification of the subtypes of NSCLC. METHODS: This study involved 354 pathologically proven NSCLC patients including 134 squamous cell carcinoma (SCC), 110 large cell carcinoma (LCC), 62 not otherwise specified (NOS), and 48 adenocarcinoma (ADC). In total, 1433 radiomics features were extracted from 3D volumes of interest drawn on the malignant lesion identified on CT images. A wrapper algorithm and multivariate adaptive regression splines were implemented to identify the most relevant/discriminative features. A multivariable multinomial logistic regression was employed with 1000 bootstrapping samples based on the selected features to classify the four main subtypes of NSCLC. RESULTS: The results revealed that texture features, specifically gray level size zone matrix (GLSZM) features, were the significant indicators of NSCLC subtypes. The optimized classifier achieved an average precision, recall, F1-score, and accuracy of 0.710, 0.703, 0.706, and 0.865, respectively, based on the features selected by the wrapper algorithm. CONCLUSIONS: Our CT radiomics approach demonstrated impressive potential for the classification of the four main histological subtypes of NSCLC. It is anticipated that CT radiomics could be useful in treatment planning and precision medicine.

Application of Homomorphic Encryption on Neural Network in Prediction of Acute Lymphoid Leukemia

  • Khilji, Ishfaque Qamar
  • Saha, Kamonashish
  • Amin, Jushan
  • Iqbal, Muhammad
International Journal of Advanced Computer Science and Applications 2020 Journal Article, cited 0 times
Machine learning is now becoming a widely used mechanism, and applying it in certain sensitive fields like medical and financial data has only made things easier. Accurate diagnosis of cancer is essential to treating it properly. Medical tests regarding cancer in recent times are quite expensive and not available in many parts of the world. CryptoNets, on the other hand, is an exhibit of the use of neural networks over data encrypted with Homomorphic Encryption. This project demonstrates the use of Homomorphic Encryption for outsourcing neural-network predictions in the case of Acute Lymphoid Leukemia (ALL). By using CryptoNets, the patients or doctors in need of the service can encrypt their data using Homomorphic Encryption and send only the encrypted message to the service provider (hospital or model owner). Since Homomorphic Encryption allows the provider to operate on the data while it is encrypted, the provider can make predictions using a pre-trained neural network while the data remains encrypted throughout the process, finally sending the prediction to the user, who can decrypt the results. During the process the service provider (hospital or model owner) gains no knowledge about the data that was used or the result, since everything is encrypted throughout the process. Our work proposes a neural network model able to predict ALL with approximately 80% accuracy using the C_NMC Challenge dataset. Prior to building our own model, we pre-processed the dataset using a different approach. We then ran different machine learning and neural network models such as VGG16, SVM, AlexNet, and ResNet50, and compared the validation accuracies of these models with our own model, which ultimately gives better accuracy than the rest of the models used. We then use our own pre-trained neural network to make predictions using CryptoNets. We were able to achieve an encrypted prediction accuracy of about 78%, which is close to the 80% validation accuracy of our own CNN model for prediction of ALL.
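The encrypted-inference flow can be illustrated with TenSEAL, a different homomorphic-encryption library from the SEAL/CryptoNets setup the paper builds on; the CKKS parameters and the toy single-layer "network" are assumptions for illustration only.

    import numpy as np
    import tenseal as ts

    # CKKS context: typical illustrative parameters, not the paper's
    ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
    ctx.global_scale = 2 ** 40
    ctx.generate_galois_keys()

    # plaintext weights held by the service provider (toy values)
    w = np.random.rand(16, 2)          # 16 input features -> 2 class scores
    b = np.random.rand(2)

    features = np.random.rand(16)      # patient-side feature vector
    enc = ts.ckks_vector(ctx, features.tolist())   # encrypted before leaving client
    enc_scores = enc.mm(w.tolist()) + b.tolist()   # provider computes on ciphertext
    print(enc_scores.decrypt())                    # only the client can decrypt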

3D convolution neural networks for molecular subtype prediction in glioblastoma multiforme

  • Khened, Mahendra
  • Anand, Vikas Kumar
  • Acharya, Gagan
  • Shah, Nameeta
  • Krishnamurthi, Ganapathy
2019 Conference Proceedings, cited 0 times
Website

Zonal Segmentation of Prostate T2W-MRI using Atrous Convolutional Neural Network

  • Khan, Zia
  • Yahya, Norashikin
  • Alsaih, Khaled
  • Meriaudeau, Fabrice
2019 Conference Paper, cited 0 times
The number of prostate cancer cases is steadily increasing, especially with the rising ageing population. It is reported that the 5-year relative survival rate for men with stage 1 prostate cancer is almost 99%; hence, early detection will significantly improve treatment planning and increase the survival rate. Magnetic resonance imaging (MRI) is a common imaging modality for diagnosis of prostate cancer. MRI provides good visualization of soft tissue and enables better lesion detection and staging of prostate cancer. The main challenge of prostate whole-gland segmentation is the blurry boundary between the central gland (CG) and the peripheral zone (PZ), which leads to differential diagnosis, since there are substantial differences in the occurrence and characteristics of cancer in the two zones. To enhance the diagnosis of the prostate gland, we implemented the DeepLabV3+ semantic segmentation approach to segment the prostate into zones. DeepLabV3+ achieved significant results in segmentation of prostate MRI by applying several parallel atrous convolutions with different rates. The CNN-based semantic segmentation approach is trained and tested on the NCI-ISBI 1.5T and 3T MRI dataset consisting of 40 patients. Performance evaluation based on the Dice similarity coefficient (DSC) of the DeepLab-based segmentation is compared with two other CNN-based semantic segmentation techniques: FCN and PSNet. Results show that prostate segmentation using DeepLabV3+ performs better than FCN and PSNet, with average DSC of 70.3% in the PZ and 88% in the CG. This indicates the significant contribution made by the atrous convolution layers in producing better prostate segmentation results.
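The parallel atrous convolutions at several dilation rates that DeepLabV3+ relies on can be sketched as an ASPP-style Keras block; this is a simplified illustration (the image-level pooling branch of the full ASPP is omitted), with shapes and filter counts chosen arbitrarily.

    import tensorflow as tf
    from tensorflow.keras import layers

    def aspp_block(x, filters=256, rates=(6, 12, 18)):
        # parallel dilated convolutions probe the feature map at several
        # effective fields of view; branches are concatenated and fused
        branches = [layers.Conv2D(filters, 1, padding="same", activation="relu")(x)]
        for r in rates:
            branches.append(layers.Conv2D(filters, 3, dilation_rate=r,
                                          padding="same", activation="relu")(x))
        merged = layers.Concatenate()(branches)
        return layers.Conv2D(filters, 1, padding="same", activation="relu")(merged)

    inp = layers.Input(shape=(64, 64, 512))   # e.g. a backbone feature map
    out = aspp_block(inp)
    model = tf.keras.Model(inp, out)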

Preliminary Detection and Analysis of Lung Cancer on CT images using MATLAB: A Cost-effective Alternative

  • Khan, Md Daud Hossain
  • Ahmed, Mansur
  • Bach, Christian
Journal of Biomedical Engineering and Medical Imaging 2016 Journal Article, cited 0 times
Cancer is the second leading cause of death worldwide. Lung cancer has the highest mortality, with non-small cell lung cancer (NSCLC) being its most prevalent subtype. Despite a gradual reduction in incidence, approximately 585,720 new cancer patients were diagnosed in 2014, the majority from low- and middle-income countries (LMICs). Limited availability of diagnostic equipment, poorly trained medical staff, late revelation of symptoms, difficulty classifying the exact lung cancer subtype, and overall poor patient access to medical providers result in late or terminal-stage diagnosis and delay of treatment. Therefore, the need for an economical, simple, and fast computed image-processing system to aid decisions regarding staging and resection, especially for LMICs, is clearly imminent. In this study, we developed a preliminary program using MATLAB that accurately detects cancer cells in CT images of the lungs of affected patients, measures the area of the region of interest (ROI) or tumor mass, and helps determine nodal spread. A preset value for nodal spread was used, which can be altered accordingly.

Automatic Segmentation and Shape, Texture-based Analysis of Glioma Using Fully Convolutional Network

  • Khan, Mohammad Badhruddouza
  • Saha, Pranto Soumik
  • Roy, Amit Dutta
2021 Conference Paper, cited 0 times
Website
Lower-grade glioma is a type of brain tumor that is usually found in the human brain and spinal cord. Early detection and accurate diagnosis of lower-grade glioma can reduce the fatal risk to affected patients. An essential step in lower-grade glioma analysis is MRI image segmentation. Manual segmentation processes are time-consuming and depend on the expertise of the pathologist. In this study, three different deep-learning-based automatic segmentation models were used to segment the tumor-affected region from the MRI slice. The segmentation accuracy of the three models (U-Net, FCN, and U-Net with a ResNeXt50 backbone) was 80%, 84%, and 91%, respectively. Two shape-based features (angular standard deviation, marginal fluctuation) and six texture-based features (entropy, local binary pattern, homogeneity, contrast, correlation, energy) were extracted from the segmented images to find associations with seven existing genomic data types. A significant association was found between the microRNA cluster genomic data type and the texture-based entropy feature, and between the RNA sequence cluster genomic data type and the shape-based angular standard deviation feature. In both cases, the p values of the Fisher exact test were below 0.05.

VGG19 Network Assisted Joint Segmentation and Classification of Lung Nodules in CT Images

  • Khan, M. A.
  • Rajinikanth, V.
  • Satapathy, S. C.
  • Taniar, D.
  • Mohanty, J. R.
  • Tariq, U.
  • Damasevicius, R.
Diagnostics (Basel) 2021 Journal Article, cited 0 times
Website
Pulmonary nodules are a manifestation of lung disease, and their early diagnosis and treatment are essential to cure the patient. This paper introduces a deep learning framework to support the automated detection of lung nodules in computed tomography (CT) images. The proposed framework employs VGG-SegNet supported nodule mining and pre-trained DL-based classification to support automated lung nodule detection. The classification of lung CT images is implemented using the attained deep features, and then these features are serially concatenated with handcrafted features, such as the Grey Level Co-Occurrence Matrix (GLCM), Local Binary Pattern (LBP) and Pyramid Histogram of Oriented Gradients (PHOG), to enhance the disease detection accuracy. The images used for experiments are collected from the LIDC-IDRI and Lung-PET-CT-Dx datasets. The experimental results show that the VGG19 architecture with concatenated deep and handcrafted features can achieve an accuracy of 97.83% with the SVM-RBF classifier.
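The handcrafted half of the feature fusion (GLCM statistics plus an LBP histogram, serially concatenated with deep features) might be computed as below with scikit-image; PHOG is omitted (scikit-image has no standard implementation) and the fusion line is schematic.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

    def handcrafted_features(img_u8):
        # GLCM and LBP descriptors for one 8-bit CT slice
        glcm = graycomatrix(img_u8, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        glcm_feats = np.hstack([graycoprops(glcm, p).ravel()
                                for p in ("contrast", "homogeneity",
                                          "energy", "correlation")])
        lbp = local_binary_pattern(img_u8, P=8, R=1, method="uniform")
        lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        return np.hstack([glcm_feats, lbp_hist])

    # deep_feats = vgg19_feature_extractor(img)   # e.g. a VGG19 penultimate layer
    # fused = np.hstack([deep_feats, handcrafted_features(img_u8)])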

Multimodal Brain Tumor Classification Using Deep Learning and Robust Feature Selection: A Machine Learning Application for Radiologists

  • Khan, M. A.
  • Ashraf, I.
  • Alhaisoni, M.
  • Damasevicius, R.
  • Scherer, R.
  • Rehman, A.
  • Bukhari, S. A. C.
Diagnostics (Basel) 2020 Journal Article, cited 216 times
Website
Manual identification of brain tumors is an error-prone and tedious process for radiologists; therefore, it is crucial to adopt an automated system. The binary classification process, such as malignant versus benign, is relatively trivial, whereas the multimodal brain tumor classification (T1, T2, T1CE, and Flair) is a challenging task for radiologists. Here, we present an automated multimodal classification method using deep learning for brain tumor type classification. The proposed method consists of five core steps. In the first step, linear contrast stretching is employed using edge-based histogram equalization and the discrete cosine transform (DCT). In the second step, deep learning feature extraction is performed. Utilizing transfer learning, two pre-trained convolutional neural network (CNN) models, namely VGG16 and VGG19, were used for feature extraction. In the third step, a correntropy-based joint learning approach was implemented along with the extreme learning machine (ELM) for the selection of the best features. In the fourth step, the partial least squares (PLS)-based robust covariant features were fused in one matrix. The combined matrix was fed to the ELM for final classification. The proposed method was validated on the BraTS datasets and an accuracy of 97.8%, 96.9%, and 92.5% was achieved for BraTS2015, BraTS2017, and BraTS2018, respectively.

3D-MRI Brain Tumor Detection Model Using Modified Version of Level Set Segmentation Based on Dragonfly Algorithm

  • Khalil, H. A.
  • Darwish, S.
  • Ibrahim, Y. M.
  • Hassan, O. F.
2020 Journal Article, cited 31 times
Website
Accurate brain tumor segmentation from 3D Magnetic Resonance Imaging (3D-MRI) is an important method for obtaining information required for diagnosis and disease therapy planning. Variation in the brain tumor's size, structure, and form is one of the main challenges in tumor segmentation, and selecting the initial contour plays a significant role in reducing the segmentation error and the number of iterations in the level set method. To overcome this issue, this paper suggests a two-step dragonfly algorithm (DA) clustering technique to extract initial contour points accurately. The brain is extracted from the head in the preprocessing step, then tumor edges are extracted using the two-step DA, and these extracted edges are used as an initial contour for the MRI sequence. Lastly, the tumor region is extracted from all volume slices using a level set segmentation method. The results of applying the proposed technique on 3D-MRI images from the multimodal brain tumor segmentation challenge (BRATS) 2017 dataset show that the proposed method for brain tumor segmentation is comparable to the state-of-the-art methods.

A U-Net Ensemble for breast lesion segmentation in DCE MRI

  • Khaled, R
  • Vidal, Joel
  • Vilanova, Joan C
  • Martí, Robert
2022 Journal Article, cited 0 times
Website

Arterial input function and tracer kinetic model-driven network for rapid inference of kinetic maps in Dynamic Contrast-Enhanced MRI (AIF-TK-net)

  • Kettelkamp, Joseph
  • Lingala, Sajan Goud
2020 Conference Paper, cited 0 times
Website
We propose a patient-specific arterial input function (AIF) and tracer kinetic (TK) model-driven network to rapidly estimate the extended Tofts-Kety kinetic model parameters in DCE-MRI. We term our network AIF-TK-net; it maps an input comprising an image patch of the DCE time series and the patient-specific AIF to the output image patch of the TK parameters. We leverage the open-source NEURO-RIDER database of brain tumor DCE-MRI scans to train our network. Once trained, our model rapidly infers the TK maps of unseen DCE-MRI images, on the order of 0.34 s per slice for a 256x256x65 time-series dataset on an NVIDIA GeForce GTX 1080 Ti GPU. We show its utility on high-time-resolution DCE-MRI datasets where significant variability in AIFs across patients exists. We demonstrate that the proposed AIF-TK-net considerably improves the TK parameter estimation accuracy in comparison to a network that does not utilize the patient AIF.
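For reference, the forward extended Tofts-Kety model that such networks invert can be evaluated with a discrete convolution, C_t(t) = vp·Cp(t) + Ktrans ∫ Cp(τ) exp(−kep(t−τ)) dτ with kep = Ktrans/ve; the sketch assumes a uniform time grid and units consistent between Ktrans and t.

    import numpy as np

    def extended_tofts(t, cp, ktrans, ve, vp):
        # tissue concentration from an AIF via the extended Tofts-Kety model,
        # evaluated with a discrete convolution on a uniform time grid
        dt = t[1] - t[0]
        kep = ktrans / ve
        kernel = np.exp(-kep * t)
        conv = np.convolve(cp, kernel)[: len(t)] * dt
        return vp * cp + ktrans * conv

    # t = np.arange(0, 300, 1.0)          # seconds
    # ct = extended_tofts(t, population_aif(t), ktrans=0.1/60, ve=0.2, vp=0.05)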

Prostate Cancer Diagnosis Based on Cascaded Convolutional Neural Networks

  • Liu, Ke-wen
  • Liu, Zi-long
  • Wang, Xiang-yu
  • Chen, Li
  • Li, Zhao
  • Wu, Guang-yao
  • Liu, Chao-yang
2020 Journal Article, cited 1 times
Website
Interpreting magnetic resonance imaging (MRI) data by radiologists is time consuming and demands special expertise. Diagnosis of prostate cancer (PCa) with deep learning can also be time-consuming and storage-intensive. This work presents an automated method for PCa detection based on a cascaded convolutional neural network (CNN), comprising a pre-network and a post-network. The pre-network is based on a Faster R-CNN and trained with prostate images in order to separate the prostate from nearby tissues; the ResNet-based post-network is for PCa diagnosis, which is connected by bottlenecks and improved by applying batch normalization (BN) and global average pooling (GAP). The experimental results demonstrated that the proposed cascaded CNN achieved good classification results on the in-house datasets, with less training time and fewer computational resources.
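A minimal PyTorch sketch of the post-network idea (batch normalization plus global average pooling ahead of a small classifier) is given below; the channel counts and layout are assumptions, not the authors' architecture.

```python
# Sketch: a BN + GAP classifier head as used after a ResNet-style
# feature extractor (layer sizes are illustrative).
import torch
import torch.nn as nn

class PCaHead(nn.Module):
    def __init__(self, in_channels=256, num_classes=2):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)   # stabilizes training
        self.gap = nn.AdaptiveAvgPool2d(1)      # (N, C, 1, 1), no big FC layers
        self.fc = nn.Linear(in_channels, num_classes)

    def forward(self, feature_map):
        x = self.gap(self.bn(feature_map)).flatten(1)
        return self.fc(x)

head = PCaHead()
print(head(torch.randn(4, 256, 14, 14)).shape)  # torch.Size([4, 2])
```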

Multi-institutional Prognostic Modeling in Head and Neck Cancer: Evaluating Impact and Generalizability of Deep Learning and Radiomics

  • Kazmierski, Michal
  • Welch, Mattea
  • Kim, Sejin
  • McIntosh, Chris
  • Rey-McIntyre, Katrina
  • Huang, Shao Hui
  • Patel, Tirth
  • Tadic, Tony
  • Milosevic, Michael
  • Liu, Fei-Fei
  • Ryczkowski, Adam
  • Kazmierska, Joanna
  • Ye, Zezhong
  • Plana, Deborah
  • Aerts, Hugo J.W.L.
  • Kann, Benjamin H.
  • Bratman, Scott V.
  • Hope, Andrew J.
  • Haibe-Kains, Benjamin
2023 Journal Article, cited 0 times
Website
Artificial intelligence (AI) and machine learning (ML) are becoming critical in developing and deploying personalized medicine and targeted clinical trials. Recent advances in ML have enabled the integration of wider ranges of data including both medical records and imaging (radiomics). However, the development of prognostic models is complex as no modeling strategy is universally superior to others, and validation of developed models requires large and diverse datasets to demonstrate that prognostic models developed (regardless of method) from one dataset are applicable to other datasets both internally and externally. Using a retrospective dataset of 2,552 patients from a single institution and a strict evaluation framework that included external validation on three external patient cohorts (873 patients), we crowdsourced the development of ML models to predict overall survival in head and neck cancer (HNC) using electronic medical records (EMR) and pretreatment radiological images. To assess the relative contributions of radiomics in predicting HNC prognosis, we compared 12 different models using imaging and/or EMR data. The model with the highest accuracy used multitask learning on clinical data and tumor volume, achieving high prognostic accuracy for 2-year and lifetime survival prediction, outperforming models relying on clinical data only, engineered radiomics, or complex deep neural network architecture. However, when we attempted to extend the best performing models from this large training dataset to other institutions, we observed significant reductions in the performance of the model in those datasets, highlighting the importance of detailed population-based reporting for AI/ML model utility and stronger validation frameworks. We have developed highly prognostic models for overall survival in HNC using EMRs and pretreatment radiological images based on a large, retrospective dataset of 2,552 patients from our institution. Diverse ML approaches were used by independent investigators. The model with the highest accuracy used multitask learning on clinical data and tumor volume. External validation of the top three performing models on three datasets (873 patients) with significant differences in the distributions of clinical and demographic variables demonstrated significant decreases in model performance. ML combined with simple prognostic factors outperformed multiple advanced CT radiomics and deep learning methods. ML models provided diverse solutions for the prognosis of patients with HNC, but their prognostic value is affected by differences in patient populations and requires extensive validation.

Computer-aided detection of brain tumors using image processing techniques

  • Kazdal, Seda
  • Dogan, Buket
  • Camurcu, Ali Yilmaz
2015 Conference Proceedings, cited 3 times
Website
Computer-aided detection applications have made significant contributions to the medical world with today's technology. In this study, the detection of brain tumors in magnetic resonance images was performed. This study proposes a computer-aided detection system based on morphological reconstruction and rule-based detection of tumors using the morphological features of the regions of interest. The steps involved in this study are: the pre-processing stage, the segmentation stage, the identification of regions of interest, and the detection of tumors. With these methods applied to 497 magnetic resonance image slices from 10 patients, the computer-aided detection system achieved an accuracy of 84.26%.
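The morphological-reconstruction step that such a system relies on can be illustrated with scikit-image's grayscale reconstruction, here used for h-dome extraction of bright structures; the parameter h is an illustrative assumption.

```python
# Sketch: h-dome extraction via grayscale morphological reconstruction,
# a common way to isolate bright candidate regions before rule-based filtering.
import numpy as np
from skimage.morphology import reconstruction

def bright_regions(image, h=0.2):
    """Return the h-dome image: bright structures rising at least
    `h` above their surroundings (image assumed scaled to [0, 1])."""
    seed = image - h                         # lowered copy as the seed
    dome = image - reconstruction(seed, image, method="dilation")
    return dome

img = np.random.rand(128, 128)
print(bright_regions(img).max())
```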

The Combination of Low Skeletal Muscle Mass and High Tumor Interleukin-6 Associates with Decreased Survival in Clear Cell Renal Cell Carcinoma

  • Kays, J. K.
  • Koniaris, L. G.
  • Cooper, C. A.
  • Pili, R.
  • Jiang, G.
  • Liu, Y.
  • Zimmers, T. A.
Cancers (Basel) 2020 Journal Article, cited 0 times
Website
Clear cell renal carcinoma (ccRCC) is frequently associated with cachexia which is itself associated with decreased survival and quality of life. We examined relationships among body phenotype, tumor gene expression, and survival. Demographic, clinical, computed tomography (CT) scans and tumor RNASeq for 217 ccRCC patients were acquired from the Cancer Imaging Archive and The Cancer Genome Atlas (TCGA). Skeletal muscle and fat masses measured from CT scans and tumor cytokine gene expression were compared with survival by univariate and multivariate analysis. Patients in the lowest skeletal muscle mass (SKM) quartile had significantly shorter overall survival versus the top three SKM quartiles. Patients who fell into the lowest quartiles for visceral adipose mass (VAT) and subcutaneous adipose mass (SCAT) also demonstrated significantly shorter overall survival. Multiple tumor cytokines correlated with mortality, most strongly interleukin-6 (IL-6); high IL-6 expression was associated with significantly decreased survival. The combination of low SKM/high IL-6 was associated with significantly lower overall survival compared to high SKM/low IL-6 expression (26.1 months vs. not reached; p < 0.001) and an increased risk of mortality (HR = 5.95; 95% CI = 2.86-12.38). In conclusion, tumor cytokine expression, body composition, and survival are closely related, with low SKM/high IL-6 expression portending worse prognosis in ccRCC.
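The group comparison reported here is a standard Kaplan-Meier/log-rank survival analysis; a sketch with the lifelines package and synthetic group data (not the study's data) is shown below.

```python
# Sketch: Kaplan-Meier curves and a log-rank test between two groups
# defined by a low-SKM/high-IL-6 style split (synthetic data).
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
months_a = rng.exponential(26, 60)    # e.g. low SKM / high IL-6
months_b = rng.exponential(60, 60)    # e.g. high SKM / low IL-6
event_a = rng.random(60) < 0.7        # True = death observed
event_b = rng.random(60) < 0.5

kmf = KaplanMeierFitter()
kmf.fit(months_a, event_observed=event_a, label="low SKM / high IL-6")
print(kmf.median_survival_time_)

res = logrank_test(months_a, months_b,
                   event_observed_A=event_a, event_observed_B=event_b)
print(res.p_value)
```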

Evaluating Mobile Tele-radiology Performance for the Task of Analyzing Lung Lesions on CT Images

  • Kaya, O.
  • Kara, E.
  • Inan, I.
  • Kara, E.
  • Matur, M.
  • Guvenis, A.
2021 Conference Proceedings, cited 0 times
Website
The accurate detection of lung lesions, as well as the precise measurement of their sizes on computed tomography (CT) images, is known to be crucial for assessing the response to therapy of cancer patients. The goal of this study is to investigate the feasibility of using mobile tele-radiology for this task in order to improve efficiency in radiology. Lung CT images were obtained from The Cancer Imaging Archive (TCIA). The Bland-Altman analysis method was used to compare and assess conventional and mobile radiology-based lesion size measurements. The percentage of correctly detected lesions at the correct image locations was also recorded. The sizes of 183 lung lesions between 5 and 52 mm in CT images were measured by two experienced radiologists. Bland-Altman plots were drawn, and limits of agreement (LOA) were determined as the 0.025 and 0.975 percentiles (−1.00, 0.00), (−1.39, 0.00). For lesions of 10 mm and larger, these intervals were found to be much smaller than the decision interval (−30% and +20%) recommended by the RECIST 1.1 criteria. On average, observers accurately detected 98.2% of the total 271 lesions on the medical monitor, while they detected 92.8% of the nodules on the iPhone. In conclusion, mobile tele-radiology can be a feasible alternative for the accurate measurement of lung lesions on CT images. A higher-resolution display technology such as the iPad may be preferred in order to detect new small (<5 mm) lesions more accurately. Further studies are needed to confirm these results with more mobile technologies and types of lesions.
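The non-parametric Bland-Altman analysis used here reduces to taking the 2.5th and 97.5th percentiles of the paired differences as the limits of agreement, rather than mean +/- 1.96*SD; a sketch with synthetic readings follows.

```python
# Sketch: non-parametric Bland-Altman limits of agreement.
import numpy as np

def bland_altman_nonparametric(a, b):
    """a, b: paired lesion-size measurements (e.g. monitor vs phone)."""
    diff = np.asarray(a) - np.asarray(b)
    bias = np.median(diff)
    loa = np.percentile(diff, [2.5, 97.5])   # limits of agreement
    return bias, tuple(loa)

rng = np.random.default_rng(1)
sizes = rng.uniform(5, 52, 183)              # mm, as in the study
mobile = sizes + rng.normal(-0.4, 0.4, 183)  # synthetic second reading
print(bland_altman_nonparametric(sizes, mobile))
```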

Malignancy prediction by using characteristic-based fuzzy sets: A preliminary study

  • Kaya, Aydin
  • Can, Ahmet Burak
2015 Conference Proceedings, cited 0 times
Website

eFis: A Fuzzy Inference Method for Predicting Malignancy of Small Pulmonary Nodules

  • Kaya, Aydın
  • Can, Ahmet Burak
2014 Book Section, cited 3 times
Website
Predicting the malignancy of small pulmonary nodules from computed tomography scans is a difficult and important problem in diagnosing lung cancer. This paper presents a rule-based fuzzy inference method for predicting the malignancy rating of small pulmonary nodules. We use the nodule characteristics provided by the Lung Image Database Consortium dataset to determine the malignancy rating. The proposed fuzzy inference method uses the outputs of ensemble classifiers and rules derived from radiologist agreements on the nodules. The results are evaluated in terms of classification accuracy and compared with single-classifier methods. The preliminary results are very promising, and the system is open to further development.
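A rule-based fuzzy inference step of this kind can be sketched with hand-rolled triangular membership functions; the characteristics, rule base, and scales below are invented placeholders, not the eFis rules.

```python
# Sketch: fuzzify nodule characteristics, fire simple rules, defuzzify
# to a 1..5 malignancy score (all rules/scales are illustrative).
def tri(x, a, b, c):
    """Triangular membership with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def malignancy(spiculation, diameter_mm):
    spic_high = tri(spiculation, 2, 5, 5.01)   # rating on a 1..5 scale
    small = tri(diameter_mm, 0, 3, 10)
    large = tri(diameter_mm, 5, 30, 30.01)
    # Rules: "high spiculation AND large -> malignant", etc.
    r_malig = min(spic_high, large)
    r_benign = min(1 - spic_high, small)
    # Defuzzify as a weighted average of rule outputs (epsilon avoids /0).
    return (5 * r_malig + 1 * r_benign + 3 * 0.1) / (r_malig + r_benign + 0.1)

print(malignancy(spiculation=4, diameter_mm=18))
```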

Deep learning-based auto segmentation using generative adversarial network on magnetic resonance images obtained for head and neck cancer patients

  • Kawahara, D.
  • Tsuneda, M.
  • Ozawa, S.
  • Okamoto, H.
  • Nakamura, M.
  • Nishio, T.
  • Nagata, Y.
J Appl Clin Med Phys 2022 Journal Article, cited 0 times
Website
PURPOSE: Adaptive radiotherapy requires auto-segmentation in patients with head and neck (HN) cancer. In the current study, we propose an auto-segmentation model using a generative adversarial network (GAN) on magnetic resonance (MR) images of HN cancer for MR-guided radiotherapy (MRgRT). MATERIAL AND METHODS: We used a dataset from the American Association of Physicists in Medicine MRI Auto-Contouring (RT-MAC) Grand Challenge 2019. Specifically, eight structures in the MR images of the HN region, namely the submandibular glands, lymph node levels II and III, and the parotid glands, were segmented with deep learning models using a GAN and a fully convolutional network with a U-net. These segmentations were compared with the clinically used atlas-based segmentation. RESULTS: The mean Dice similarity coefficient (DSC) of the U-net and GAN models was significantly higher than that of the atlas-based method for all structures (p < 0.05), and the maximum Hausdorff distance (HD) was significantly lower (p < 0.05). Comparing the 2.5D and 3D U-nets, the 3D U-net was superior in segmenting the organs at risk (OAR) for HN patients. The DSC was highest (0.75-0.85) and the HD lowest (within 5.4 mm) for the 2.5D GAN model across all OARs. CONCLUSIONS: We investigated the auto-segmentation of the OARs for HN patients using U-net and GAN models on MR images. Our proposed model is potentially valuable for improving the efficiency of HN RT treatment planning.
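The two reported metrics, the Dice similarity coefficient and the Hausdorff distance, can be computed for binary masks as in the sketch below (a simplified point-set Hausdorff distance on synthetic masks, not the percentile variant used in some studies).

```python
# Sketch: DSC and symmetric Hausdorff distance between binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b):
    pa, pb = np.argwhere(a), np.argwhere(b)   # voxel coordinates
    return max(directed_hausdorff(pa, pb)[0],
               directed_hausdorff(pb, pa)[0])

pred = np.zeros((64, 64), bool); pred[20:40, 20:40] = True
gt   = np.zeros((64, 64), bool); gt[22:42, 18:38] = True
print(dice(pred, gt), hausdorff(pred, gt))
```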

Supervised Dimension-Reduction Methods for Brain Tumor Image Data Analysis

  • Kawaguchi, Atsushi
2017 Book Section, cited 1 times
Website
The purpose of this study was to construct a risk score for glioblastomas based on magnetic resonance imaging (MRI) data. Tumor identification requires multimodal voxel-based imaging data that are highly dimensional, and multivariate models with dimension reduction are desirable for their analysis. We propose a two-step dimension-reduction method using a radial basis function–supervised multi-block sparse principal component analysis (SMS–PCA) method. The method is first implemented through the basis expansion of spatial brain images, and the scores are then reduced through regularized matrix decomposition in order to produce simultaneous data-driven selections of related brain regions supervised by univariate composite scores representing linear combinations of covariates such as age and tumor location. An advantage of the proposed method is that it identifies the associations of brain regions at the voxel level, and supervision is helpful in the interpretation.

Radiological Atlas for Patient Specific Model Generation

  • Kawa, Jacek
  • Juszczyk, Jan
  • Pyciński, Bartłomiej
  • Badura, Paweł
  • Pietka, Ewa
2014 Book Section, cited 11 times
Website
The paper presents the development of a radiological atlas employed in abdominal patient-specific model verification. After introducing the patient-specific model, the development of the radiological atlas is discussed. An unprocessed database containing DICOM images and radiological diagnoses is presented. This database is processed manually to retrieve the required information. Organs and pathologies are determined and each study is tagged with specific labels, e.g. ‘liver normal’, ‘liver tumor’, ‘liver cancer’, ‘spleen normal’, ‘spleen absence’, etc. Selected structures are additionally segmented, and the masks are stored as a gold standard. A web-service-based network system is provided to permit PACS-driven retrieval of image data matching the desired criteria. Image series as well as ground-truth images may be retrieved for benchmark or model-development purposes. The database is evaluated.

Volumetric analysis framework for accurate segmentation and classification (VAF-ASC) of lung tumor from CT images

  • Kavitha, M. S.
  • Shanthini, J.
  • Karthikeyan, N.
Soft Computing 2020 Journal Article, cited 0 times
Lung tumor can typically be described as abnormal cell growth in the lungs that may pose a severe threat to patient health, since the lung is a significant organ comprising an associated network of blood veins and lymphatic canals. Early detection and classification of lung tumors have a great impact on increasing the survival rate of patients. For analysis, computed tomography (CT) lung images are broadly used, since they give information about the various lung regions. The prediction of tumor contour, position, and volume plays an imperative role in the accurate segmentation and classification of tumor cells, and aids successful tumor stage detection and treatment. With that concern, this paper develops a Volumetric Analysis Framework for Accurate Segmentation and Classification of lung tumors. The volumetric analysis framework comprises the estimation of the length, thickness, and height of the detected tumor cell for achieving precise results. Though there are many models for tumor detection from 2D CT inputs, it is very important to develop a method for lung nodule separation from a noisy background. To that end, this paper exploits the connectivity and locality features of the lung image pixels. Moreover, morphological processing techniques are incorporated for removing additional noise and airways. Tumor segmentation is accomplished by the k-means clustering approach, and Tumor-Nodule-Metastasis classification-based volumetric analysis is performed for accurate results. The Volumetric Analysis Framework provides better results with respect to factors such as the accuracy rate of tumor diagnosis, reduced computation time, and appropriate tumor stage classification.
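The k-means step, clustering intensity together with simple locality features as described, might be sketched as follows; the number of clusters, the coordinate weighting, and the brightest-cluster rule are assumptions of the sketch.

```python
# Sketch: k-means clustering of CT intensities plus pixel coordinates,
# keeping the brightest cluster as the candidate tumor mask.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_tumor_mask(ct_slice, k=3):
    h, w = ct_slice.shape
    yy, xx = np.mgrid[0:h, 0:w]
    feats = np.column_stack([ct_slice.ravel(),
                             0.1 * yy.ravel(), 0.1 * xx.ravel()])
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)
    # Pick the cluster with the highest mean intensity.
    means = [ct_slice.ravel()[labels == i].mean() for i in range(k)]
    return (labels == int(np.argmax(means))).reshape(h, w)

demo = np.random.rand(64, 64); demo[24:40, 24:40] += 1.5
print(kmeans_tumor_mask(demo).sum())
```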

ECIDS-Enhanced Cancer Image Diagnosis and Segmentation Using Artificial Neural Networks and Active Contour Modelling

  • Kavitha, M. S.
  • Shanthini, J.
  • Bhavadharini, R. M.
2020 Journal Article, cited 0 times
In the present decade, image processing techniques have been extensively utilized in medical image diagnosis, particularly for the early detection and treatment of cancer. Image quality and accuracy are the significant factors to be considered while analyzing images for cancer diagnosis. With that in mind, in this paper an Enhanced Cancer Image Diagnosis and Segmentation (ECIDS) framework has been developed for the effective detection and segmentation of lung cancer cells. Initially, the computed tomography (CT) lung image is denoised by employing a kernel-based global denoising function. Following that, the noise-free lung images are passed to feature extraction, and the images are classified into normal and abnormal classes using feed-forward artificial neural network classification. The classified lung cancer images are then segmented using active contour modelling with reduced gradient, and the segmented cancer images are passed on for further medical processing. The framework is evaluated in MATLAB using the LIDC-IDRI lung CT dataset. The results are analyzed and discussed based on performance evaluation metrics such as energy, entropy, correlation, and homogeneity.

ECM-CSD: An Efficient Classification Model for Cancer Stage Diagnosis in CT Lung Images Using FCM and SVM Techniques

  • Kavitha, M. S.
  • Shanthini, J.
  • Sabitha, R.
Journal of Medical Systems 2019 Journal Article, cited 0 times
Website

A joint intensity and edge magnitude-based multilevel thresholding algorithm for the automatic segmentation of pathological MR brain images

  • Kaur, Taranjit
  • Saini, Barjinder Singh
  • Gupta, Savita
Neural Computing and Applications 2016 Journal Article, cited 1 times
Website

An automated slice sorting technique for multi-slice computed tomography liver cancer images using convolutional network

  • Kaur, Amandeep
  • Chauhan, Ajay Pal Singh
  • Aggarwal, Ashwani Kumar
Expert Systems with Applications 2021 Journal Article, cited 1 times
Website
An early detection and diagnosis of liver cancer can help the radiation therapist in choosing the target area and the amount of radiation dose to be delivered to the patients. Radiologists usually spend a lot of time selecting the most relevant slices from thousands of scans, which are usually obtained from multi-slice CT scanners. The purpose of this paper is the multi-organ classification of 3D CT images of suspected liver cancer patients by a convolutional network. A dataset consisting of 63503 CT images of liver cancer patients taken from The Cancer Imaging Archive (TCIA) has been used to validate the proposed method, a CNN for the classification of CT liver cancer images. The classification results in terms of accuracy, precision, sensitivity, specificity, true positive rate, false negative rate, and F1 score have been computed. The results show a high validation accuracy of 99.1% when the convolutional network is trained with the data-augmented volume slices, compared to an accuracy of 98.7% obtained with the original volume slices. The overall test accuracy for the data-augmented volume slice dataset is 93.1%, superior to that of the other volume slices. The main contribution of this work is that it will help the radiation therapist to focus on a small subset of the CT image data. This is achieved by segregating the whole set of 63503 CT images into three categories based on the likelihood of the spread of cancer to other organs in suspected liver cancer patients. Consequently, only 19453 CT images had the liver visible in them, making the rest of the 44050 CT images less relevant for liver cancer detection. The proposed method will help in the rapid diagnosis and treatment of liver cancer patients.

Radiomic analysis identifies tumor subtypes associated with distinct molecular and microenvironmental factors in head and neck squamous cell carcinoma

  • Katsoulakis, Evangelia
  • Yu, Yao
  • Apte, Aditya P.
  • Leeman, Jonathan E.
  • Katabi, Nora
  • Morris, Luc
  • Deasy, Joseph O.
  • Chan, Timothy A.
  • Lee, Nancy Y.
  • Riaz, Nadeem
  • Hatzoglou, Vaios
  • Oh, Jung Hun
Oral Oncology 2020 Journal Article, cited 0 times
Website
Purpose: To identify whether radiomic features from pre-treatment computed tomography (CT) scans can predict molecular differences between head and neck squamous cell carcinoma (HNSCC) using The Cancer Imaging Archive (TCIA) and The Cancer Genome Atlas (TCGA). Methods: 77 patients from the TCIA with HNSCC had imaging suitable for analysis. Radiomic features were extracted and unsupervised consensus clustering was performed to identify subtypes. Genomic data was extracted from the matched patients in the TCGA database. We explored relationships between radiomic features and molecular profiles of tumors, including the tumor immune microenvironment. A machine learning method was used to build a model predictive of CD8+ T-cells. An independent cohort of 83 HNSCC patients was used to validate the radiomic clusters. Results: We initially extracted 104 two-dimensional radiomic features, and after feature stability tests and removal of volume-dependent features, reduced this to 67 features for subsequent analysis. Consensus clustering based on these features resulted in two distinct clusters. The radiomic clusters differed by primary tumor subsite (p = 0.0096), HPV status (p = 0.0127), methylation-based clustering results (p = 0.0025), and tumor immune microenvironment. A random forest model using radiomic features predicted CD8+ T-cells independent of HPV status with R^2 = 0.30 (p < 0.0001) on cross validation. Consensus clustering on the validation cohort resulted in two distinct clusters that differ in tumor subsite (p = 1.3 × 10^-7) and HPV status (p = 4.0 × 10^-7). Conclusion: Radiomic analysis can identify biologic features of tumors such as HPV status and T-cell infiltration and may be able to provide other information in the near future to help with patient stratification.

“Radiotranscriptomics”: A synergy of imaging and transcriptomics in clinical assessment

  • Katrib, Amal
  • Hsu, William
  • Bui, Alex
  • Xing, Yi
Quantitative Biology 2016 Journal Article, cited 0 times

Identification of Tumor area from Brain MR Image

  • Kasım, Ömer
  • Kuzucuoğlu, Ahmet Emin
2016 Conference Paper, cited 1 times
Website
The analysis of magnetic resonance images plays an important role in the definitive detection of brain tumors. The shape, location, and size of a tumor are examined by a radiology specialist to diagnose and plan treatment. Under an intense work pace, it is not always possible to obtain results quickly, and information that goes unnoticed can be recovered by an image processing algorithm. In this study, database images collected from REMBRANDT were cleared of noise, transformed to gray level with the Karhunen-Loeve transform, and segmented with a Potts Markov random field model. This hybrid algorithm minimizes data loss, contrast, and noise problems. After the segmentation stage, shape and statistical analyses are performed to obtain a feature vector for the region of interest. The images are classified as containing a tumor or not. The algorithm can recognize the presence of a tumor with 100% accuracy and the tumor's area with 95% accuracy. The results are reported to help the specialists.

Secure medical image encryption with Walsh-Hadamard transform and lightweight cryptography algorithm

  • Kasim, Ömer
2022 Journal Article, cited 0 times
Website
It is important to ensure the privacy and security of the medical images that are produced with electronic health records. Security is ensured by encrypting the electronic health records before transmission, and privacy is provided through data integrity and by decrypting the data according to the user role. In this study, both the security and privacy of medical images are provided through the innovative combined use of lightweight cryptology (LWC) and the Walsh-Hadamard transform (WHT). Unlike in the standard lightweight cryptology algorithm, the hex key is obtained in two parts: the first part is used as the public key and the second part as the user-specific private key. This eliminates the disadvantage of a symmetric encryption algorithm. After encryption with the two-part hex key, the Walsh-Hadamard transform is applied to the encrypted image, with the Hadamard matrix rotated by certain angles according to the user role. This allows the encoded medical image to be obtained as a vector. The proposed method was verified with the number of pixel change rate (NPCR) and unified average changing intensity (UACI) measurement parameters and with histogram analysis. The results showed that the method is more successful than the plain lightweight cryptology method and previously proposed methods in the literature at providing security and privacy of data in medical applications with user roles.
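The Walsh-Hadamard step can be sketched with SciPy's Hadamard matrix; the paper's role-dependent rotation of the matrix is simplified here to a key-driven row permutation, which is an assumption of the sketch.

```python
# Sketch: 2-D Walsh-Hadamard transform of an (already encrypted) image
# block, with a user-role key standing in as a row permutation.
import numpy as np
from scipy.linalg import hadamard

def wht2(block, perm=None):
    n = block.shape[0]                    # n must be a power of two
    H = hadamard(n).astype(float)
    if perm is not None:                  # stand-in for the role-dependent key
        H = H[perm]
    return H @ block @ H.T / n

block = np.random.randint(0, 256, (8, 8)).astype(float)
perm = np.random.default_rng(42).permutation(8)
print(wht2(block, perm).shape)
```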

Radiogenomic correlation for prognosis in patients with glioblastoma multiformae

  • Karnayana, Pallavi Machaiah
2013 Thesis, cited 0 times
Website

Predicting the Grade of Clear Cell Renal Cell Carcinoma from CT Images Using Random Subspace-KNN and Random Forest Classifiers

  • Karagoz, Ahmet
  • Guvenis, Albert
2021 Conference Paper, cited 0 times
Website
Accurate and non-invasive determination of the International Society of Urological Pathology (ISUP) based tumor grade is important for the effective management of patients with clear cell renal cell carcinoma (cc-RCC). In this study, the radiomic analysis of 3D computed tomography (CT) images are used to determine ISUP grades of cc-RCC patients by exploring machine learning (ML) methods that can address small ISUP grade image datasets. 143 cc-RCC patient studies from The Cancer Imaging Archive (TCIA) USA were used in the study. 1133 radiomic features were extracted from the normalized 3D segmented CT images. Correlation coefficient analysis, Random Forest feature importance analysis and backward elimination methods were used consecutively to reduce the number of features. 15 out of 1133 features were selected. A k-nearest neighbors (KNN) classifier with random subspaces and a Random Forest classifier were implemented. Model performances were evaluated independently on the unused 20% of the original imbalanced data. ISUP grades were predicted by a KNN classifier under random subspaces with an accuracy of 90% and area under the curve (AUC) of 0.88 using the test data. Grades were predicted by a Random Forest classifier with an accuracy of 83% and AUC of 0.80 using the test data. In conclusion, ensemble classifiers can be used to predict the ISUP grade of cc-RCC tumors from CT images with sufficient reliability. Larger datasets and new types of features are currently being investigated.
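A random-subspace KNN of the kind described can be built in scikit-learn by bagging KNN learners over random feature subsets; the synthetic data, feature counts, and hyperparameters below are illustrative, not the study's configuration.

```python
# Sketch: random-subspace KNN via bagging over feature subsets.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(143, 15))            # 15 selected radiomic features
y = rng.integers(0, 2, 143)               # low vs high ISUP grade (synthetic)

clf = BaggingClassifier(KNeighborsClassifier(n_neighbors=5),
                        n_estimators=50,
                        max_features=0.5,  # the "random subspace" part
                        bootstrap=False,
                        random_state=0)
clf.fit(X[:114], y[:114])
print(clf.score(X[114:], y[114:]))         # held-out ~20%
```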

Deep Learning-Based Radiomics for Prognostic Stratification of Low-Grade Gliomas Using a Multiple-Gene Signature

  • Karabacak, Mert
  • Ozkara, Burak B.
  • Senparlak, Kaan
  • Bisdas, Sotirios
Applied Sciences 2023 Journal Article, cited 0 times
Website
Low-grade gliomas are a heterogeneous group of infiltrative neoplasms. Radiomics allows the characterization of phenotypes with high-throughput extraction of quantitative imaging features from radiologic images. Deep learning models, such as convolutional neural networks (CNNs), offer well-performing models and a simplified pipeline by automatic feature learning. In our study, MRI data were retrospectively obtained from The Cancer Imaging Archive (TCIA), which contains MR images for a subset of the LGG patients in The Cancer Genome Atlas (TCGA). Corresponding molecular genetics and clinical information were obtained from TCGA. Three genes included in the genetic signatures were WEE1, CRTAC1, and SEMA4G. A CNN-based deep learning model was used to classify patients into low and high-risk groups, with the median gene signature risk score as the cut-off value. The data were randomly split into training and test sets, with 61 patients in the training set and 20 in the test set. In the test set, models using T1 and T2 weighted images had an area under the receiver operating characteristic curve of 73% and 79%, respectively. In conclusion, we developed a CNN-based model to predict non-invasively the risk stratification provided by the prognostic gene signature in LGGs. Numerous previously discovered gene signatures and novel genetic identifiers that will be developed in the future may be utilized with this method.

Multi-Institutional Validation of Deep Learning for Pretreatment Identification of Extranodal Extension in Head and Neck Squamous Cell Carcinoma

  • Kann, B. H.
  • Hicks, D. F.
  • Payabvash, S.
  • Mahajan, A.
  • Du, J.
  • Gupta, V.
  • Park, H. S.
  • Yu, J. B.
  • Yarbrough, W. G.
  • Burtness, B. A.
  • Husain, Z. A.
  • Aneja, S.
J Clin Oncol 2020 Journal Article, cited 5 times
Website
PURPOSE: Extranodal extension (ENE) is a well-established poor prognosticator and an indication for adjuvant treatment escalation in patients with head and neck squamous cell carcinoma (HNSCC). Identification of ENE on pretreatment imaging represents a diagnostic challenge that limits its clinical utility. We previously developed a deep learning algorithm that identifies ENE on pretreatment computed tomography (CT) imaging in patients with HNSCC. We sought to validate our algorithm performance for patients from a diverse set of institutions and compare its diagnostic ability to that of expert diagnosticians. METHODS: We obtained preoperative, contrast-enhanced CT scans and corresponding pathology results from two external data sets of patients with HNSCC: an external institution and The Cancer Genome Atlas (TCGA) HNSCC imaging data. Lymph nodes were segmented and annotated as ENE-positive or ENE-negative on the basis of pathologic confirmation. Deep learning algorithm performance was evaluated and compared directly to two board-certified neuroradiologists. RESULTS: A total of 200 lymph nodes were examined in the external validation data sets. For lymph nodes from the external institution, the algorithm achieved an area under the receiver operating characteristic curve (AUC) of 0.84 (83.1% accuracy), outperforming radiologists' AUCs of 0.70 and 0.71 (P = .02 and P = .01). Similarly, for lymph nodes from the TCGA, the algorithm achieved an AUC of 0.90 (88.6% accuracy), outperforming radiologist AUCs of 0.60 and 0.82 (P < .0001 and P = .16). Radiologist diagnostic accuracy improved when receiving deep learning assistance. CONCLUSION: Deep learning successfully identified ENE on pretreatment imaging across multiple institutions, exceeding the diagnostic ability of radiologists with specialized head and neck experience. Our findings suggest that deep learning has utility in the identification of ENE in patients with HNSCC and has the potential to be integrated into clinical decision making.

The contribution of axillary lymph node volume to recurrence-free survival status in breast cancer patients with sub-stratification by molecular subtypes and pathological complete response

  • Kang, James
  • Li, Haifang
  • Cattell, Renee
  • Talanki, Varsha
  • Cohen, Jules A.
  • Bernstein, Clifford S.
  • Duong, Tim
Breast Cancer Research 2020 Journal Article, cited 0 times
Website
Purpose: This study sought to examine the contribution of axillary lymph node (LN) volume to recurrence-free survival (RFS) in breast cancer patients with sub-stratification by molecular subtypes and full or nodal PCR. Methods: The largest LN volumes per patient at pre-neoadjuvant chemotherapy on standard clinical breast 1.5-Tesla MRI, 3 molecular subtypes, full, breast, and nodal PCR, and 10-year RFS were tabulated (N = 110 patients from MRIs of the I-SPY-1 TRIAL). A volume threshold of two standard deviations was used to categorize large versus small LNs for sub-stratification. In addition, “normal” node volumes were determined from a different cohort of 218 axillary LNs. Results: LN volumes (4.07 ± 5.45 cm³) were significantly larger than normal axillary LN volumes (0.646 ± 0.657 cm³, P = 10^-16). Full and nodal pathologic complete response (PCR) was not dependent on pre-neoadjuvant chemotherapy nodal volume (P > .05). The HR+/HER2– group had smaller axillary LN volumes than the HER2+ and triple-negative groups (P < .05). Survival was not dependent on pre-treatment axillary LN volumes alone (P = .29). However, when sub-stratified by PCR, the large LN group with full (P = .011) or nodal PCR (P = .0026) showed better recurrence-free survival than the small LN group. There was a significant difference in RFS when the small node group was separated by the 3 molecular subtypes (P = .036) but not the large node group (P = .97). Conclusions: This study found an interaction of axillary lymph node volume, pathological complete response, and molecular subtypes that informs recurrence-free survival status. Improved characterization of the axillary lymph nodes has the potential to improve the management of breast cancer patients.

3D multi-view convolutional neural networks for lung nodule classification

  • Kang, Guixia
  • Liu, Kui
  • Hou, Beibei
  • Zhang, Ningbo
PLoS One 2017 Journal Article, cited 7 times
Website

A deep convolutional neural network using directional wavelets for low-dose X-ray CT reconstruction

  • Kang, E.
  • Min, J.
  • Ye, J. C.
Med Phys 2017 Journal Article, cited 568 times
Website
PURPOSE: Due to the potential risk of inducing cancer, radiation exposure by X-ray CT devices should be reduced for routine patient scanning. However, in low-dose X-ray CT, severe artifacts typically occur due to photon starvation, beam hardening, and other causes, all of which decrease the reliability of the diagnosis. Thus, a high-quality reconstruction method from low-dose X-ray CT data has become a major research topic in the CT community. Conventional model-based de-noising approaches are, however, computationally very expensive, and image-domain de-noising approaches cannot readily remove CT-specific noise patterns. To tackle these problems, we want to develop a new low-dose X-ray CT algorithm based on a deep-learning approach. METHOD: We propose an algorithm which uses a deep convolutional neural network (CNN) which is applied to the wavelet transform coefficients of low-dose CT images. More specifically, using a directional wavelet transform to extract the directional component of artifacts and exploit the intra- and inter- band correlations, our deep network can effectively suppress CT-specific noise. In addition, our CNN is designed with a residual learning architecture for faster network training and better performance. RESULTS: Experimental results confirm that the proposed algorithm effectively removes complex noise patterns from CT images derived from a reduced X-ray dose. In addition, we show that the wavelet-domain CNN is efficient when used to remove noise from low-dose CT compared to existing approaches. Our results were rigorously evaluated by several radiologists at the Mayo Clinic and won second place at the 2016 "Low-Dose CT Grand Challenge." CONCLUSIONS: To the best of our knowledge, this work is the first deep-learning architecture for low-dose CT reconstruction which has been rigorously evaluated and proven to be effective. In addition, the proposed algorithm, in contrast to existing model-based iterative reconstruction (MBIR) methods, has considerable potential to benefit from large data sets. Therefore, we believe that the proposed algorithm opens a new direction in the area of low-dose CT research.

LRR-CED: low-resolution reconstruction-aware convolutional encoder-decoder network for direct sparse-view CT image reconstruction

  • Kandarpa, V. S. S.
  • Perelli, A.
  • Bousse, A.
  • Visvikis, D.
Phys Med Biol 2022 Journal Article, cited 0 times
Website
Objective. Sparse-view computed tomography (CT) reconstruction has been at the forefront of research in medical imaging. Reducing the total x-ray radiation dose to the patient while preserving the reconstruction accuracy is a big challenge. The sparse-view approach is based on reducing the number of rotation angles, which leads to poor quality reconstructed images as it introduces several artifacts. These artifacts are more clearly visible in traditional reconstruction methods like the filtered-backprojection (FBP) algorithm.Approach. Over the years, several model-based iterative and more recently deep learning-based methods have been proposed to improve sparse-view CT reconstruction. Many deep learning-based methods improve FBP-reconstructed images as a post-processing step. In this work, we propose a direct deep learning-based reconstruction that exploits the information from low-dimensional scout images, to learn the projection-to-image mapping. This is done by concatenating FBP scout images at multiple resolutions in the decoder part of a convolutional encoder-decoder (CED). Main results. This approach is investigated on two different networks, based on Dense Blocks and U-Net to show that a direct mapping can be learned from a sinogram to an image. The results are compared to two post-processing deep learning methods (FBP-ConvNet and DD-Net) and an iterative method that uses a total variation (TV) regularization. Significance. This work presents a novel method that uses information from both sinogram and low-resolution scout images for sparse-view CT image reconstruction. We also generalize this idea by demonstrating results with two different neural networks. This work is in the direction of exploring deep learning across the various stages of the image reconstruction pipeline involving data correction, domain transfer and image improvement.
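The sparse-view FBP "scout" images used as auxiliary inputs can be reproduced in spirit with scikit-image's Radon/filtered-back-projection pair; the phantom, image size, and 30-view geometry are assumptions of the sketch (and the filter_name argument requires a recent scikit-image).

```python
# Sketch: a sparse-view sinogram and its streak-artifacted FBP
# reconstruction, i.e. the kind of scout image fed to such networks.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

img = resize(shepp_logan_phantom(), (128, 128))
theta_sparse = np.linspace(0.0, 180.0, 30, endpoint=False)  # 30 views only
sino = radon(img, theta=theta_sparse)
fbp_scout = iradon(sino, theta=theta_sparse, filter_name="ramp")
print(fbp_scout.shape)
```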

Neurosense: deep sensing of full or near-full coverage head/brain scans in human magnetic resonance imaging

  • Kanber, B.
  • Ruffle, J.
  • Cardoso, J.
  • Ourselin, S.
  • Ciccarelli, O.
Neuroinformatics 2019 Journal Article, cited 0 times
Website
The application of automated algorithms to imaging requires knowledge of its content, a curatorial task, for which we ordinarily rely on the Digital Imaging and Communications in Medicine (DICOM) header as the only source of image meta-data. However, identifying brain MRI scans that have full or near-full coverage among a large number (e.g. >5000) of scans comprising both head/brain and other body parts is a time-consuming task that cannot be automated with the use of the information stored in the DICOM header attributes alone. Depending on the clinical scenario, an entire set of scans acquired in a single visit may often be labelled “BRAIN” in the DICOM field 0018,0015 (Body Part Examined), while the individual scans will often not only include brain scans with full coverage, but also others with partial brain coverage, scans of the spinal cord, and in some cases other body parts.
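The DICOM-header-only baseline that this work moves beyond can be sketched with pydicom: read BodyPartExamined and estimate z-coverage from slice positions. The file paths and the coverage heuristic below are placeholders, not the paper's method.

```python
# Sketch: summarize a series from DICOM headers alone.
import pydicom

def series_summary(paths):
    z, body_part = [], None
    for p in paths:
        ds = pydicom.dcmread(p, stop_before_pixels=True)
        body_part = ds.get("BodyPartExamined", body_part)
        ipp = ds.get("ImagePositionPatient")
        if ipp is not None:
            z.append(float(ipp[2]))       # z-coordinate of each slice
    coverage_mm = max(z) - min(z) if len(z) > 1 else 0.0
    return body_part, coverage_mm

# body_part, cov = series_summary(["slice001.dcm", "slice002.dcm"])
# e.g. ("BRAIN", 150.0) would suggest near-full head coverage.
```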

Weakly-supervised learning for lung carcinoma classification using deep learning

  • Kanavati, Fahdi
  • Toyokawa, Gouji
  • Momosaki, Seiya
  • Rambeau, Michael
  • Kozuma, Yuka
  • Shoji, Fumihiro
  • Yamazaki, Koji
  • Takeo, Sadanori
  • Iizuka, Osamu
  • Tsuneki, Masayuki
Scientific Reports 2020 Journal Article, cited 52 times
Website

Learning MRI-based classification models for MGMT methylation status prediction in glioblastoma

  • Kanas, Vasileios G
  • Zacharaki, Evangelia I
  • Thomas, Ginu A
  • Zinn, Pascal O
  • Megalooikonomou, Vasileios
  • Colen, Rivka R
Computer Methods and Programs in Biomedicine 2017 Journal Article, cited 16 times
Website
Background and objective: The O6-methylguanine-DNA-methyltransferase (MGMT) promoter methylation has been shown to be associated with improved outcomes in patients with glioblastoma (GBM) and may be a predictive marker of sensitivity to chemotherapy. However, determination of the MGMT promoter methylation status requires tissue obtained via surgical resection or biopsy. The aim of this study was to assess the ability of quantitative and qualitative imaging variables in predicting MGMT methylation status noninvasively. Methods: A retrospective analysis of MR images from GBM patients was conducted. Multivariate prediction models were obtained by machine-learning methods and tested on data from The Cancer Genome Atlas (TCGA) database. Results: The status of MGMT promoter methylation was predicted with an accuracy of up to 73.6%. Experimental analysis showed that the edema/necrosis volume ratio, tumor/necrosis volume ratio, edema volume, and tumor location and enhancement characteristics were the most significant variables in respect to the status of MGMT promoter methylation in GBM. Conclusions: The obtained results provide further evidence of an association between standard preoperative MRI variables and MGMT methylation status in GBM.

A low cost approach for brain tumor segmentation based on intensity modeling and 3D Random Walker

  • Kanas, Vasileios G
  • Zacharaki, Evangelia I
  • Davatzikos, Christos
  • Sgarbas, Kyriakos N
  • Megalooikonomou, Vasileios
Biomedical Signal Processing and Control 2015 Journal Article, cited 15 times
Website
Objective: Magnetic resonance imaging (MRI) is the primary imaging technique for evaluation of brain tumor progression before and after radiotherapy or surgery. The purpose of the current study is to exploit conventional MR modalities in order to identify and segment brain images with neoplasms. Methods: Four conventional MR sequences, namely T1-weighted, gadolinium-enhanced T1-weighted, T2-weighted and fluid attenuation inversion recovery, are combined with machine learning techniques to extract global and local information of brain tissues and model the healthy and neoplastic imaging profiles. Healthy tissue clustering, outlier detection and geometric and spatial constraints are applied to perform a first segmentation, which is further improved by a modified multiparametric Random Walker segmentation method. The proposed framework is applied on clinical data from 57 brain tumor patients (acquired with different scanners and acquisition parameters) and on 25 synthetic MR images with tumors. Assessment is performed against expert-defined tissue masks and is based on sensitivity analysis and the Dice coefficient. Results: The results demonstrate that the proposed multiparametric framework differentiates neoplastic tissues with accuracy similar to most current approaches while achieving lower computational cost and a higher degree of automation. Conclusion: This study might provide a decision-support tool for neoplastic tissue segmentation, which can assist in treatment planning for tumor resection or focused radiotherapy.
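The final random-walker step is available directly in scikit-image; the sketch below seeds it from simple intensity thresholds on a synthetic image, whereas the paper derives its seeds from the multiparametric tissue model.

```python
# Sketch: random-walker segmentation from seed labels.
import numpy as np
from skimage.segmentation import random_walker

rng = np.random.default_rng(3)
img = rng.normal(0.2, 0.05, (96, 96))
img[30:60, 30:60] += 0.5                  # synthetic "lesion"

labels = np.zeros(img.shape, dtype=int)   # 0 = unlabeled pixels
labels[img < 0.25] = 1                    # background seeds
labels[img > 0.6] = 2                     # tumor seeds
seg = random_walker(img, labels, beta=130, mode="bf")
print((seg == 2).sum())
```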

Imaging-based stratification of adult gliomas prognosticates survival and correlates with the 2021 WHO classification

  • Kamble, A. N.
  • Agrawal, N. K.
  • Koundal, S.
  • Bhargava, S.
  • Kamble, A. N.
  • Joyner, D. A.
  • Kalelioglu, T.
  • Patel, S. H.
  • Jain, R.
Neuroradiology 2022 Journal Article, cited 0 times
Website
BACKGROUND: Because of the lack of global accessibility, delay, and cost-effectiveness of genetic testing, there is a clinical need for an imaging-based stratification of gliomas that can prognosticate survival and correlate with the 2021-WHO classification. METHODS: In this retrospective study, adult primary glioma patients with pre-surgery/pre-treatment MRI brain images having T2, FLAIR, T1, T1 post-contrast, DWI sequences, and survival information were included in the TCIA training dataset (n = 275) and an independent validation dataset (n = 200). A flowchart for imaging-based stratification of adult gliomas (IBGS) was created in consensus by three authors to encompass all adult glioma types. Diagnostic features used were T2-FLAIR mismatch sign, central necrosis with peripheral enhancement, diffusion restriction, and continuous cortex sign. Roman numerals (I, II, and III) denote IBGS types. Two independent teams of three and two radiologists, blinded to genetic, histology, and survival information, manually read MRI into three types based on the flowchart. Overall survival analysis was done using age-adjusted Cox regression analysis, which provided both the hazard ratio (HR) and the area under the curve (AUC) for each stratification system (IBGS and 2021-WHO). The sensitivity and specificity of each IBGS type were analyzed with a cross-table to identify the corresponding 2021-WHO genotype. RESULTS: Imaging-based stratification was statistically significant in predicting survival in both datasets with good inter-observer agreement (age-adjusted Cox regression, AUC > 0.5, k > 0.6, p < 0.001). IBGS type-I, type-II, and type-III gliomas had good specificity in identifying IDHmut 1p19q-codel oligodendroglioma (training - 97%, validation - 85%); IDHmut 1p19q non-codel astrocytoma (training - 80%, validation - 85.9%); and IDHwt glioblastoma (training - 76.5%, validation - 87.3%) respectively (p-value < 0.01). CONCLUSIONS: Imaging-based stratification of adult diffuse gliomas predicted patient survival and correlated well with the 2021-WHO glioma classification.

A Comparison of Lung Nodule Segmentation Algorithms: Methods and Results from a Multi-institutional Study

  • Kalpathy-Cramer, Jayashree
  • Zhao, Binsheng
  • Goldgof, Dmitry
  • Gu, Yuhua
  • Wang, Xingwei
  • Yang, Hao
  • Tan, Yongqiang
  • Gillies, Robert
  • Napel, Sandy
2016 Journal Article, cited 18 times
Website
Tumor volume estimation, as well as accurate and reproducible borders segmentation in medical images, are important in the diagnosis, staging, and assessment of response to cancer therapy. The goal of this study was to demonstrate the feasibility of a multi-institutional effort to assess the repeatability and reproducibility of nodule borders and volume estimate bias of computerized segmentation algorithms in CT images of lung cancer, and to provide results from such a study. The dataset used for this evaluation consisted of 52 tumors in 41 CT volumes (40 patient datasets and 1 dataset containing scans of 12 phantom nodules of known volume) from five collections available in The Cancer Imaging Archive. Three academic institutions developing lung nodule segmentation algorithms submitted results for three repeat runs for each of the nodules. We compared the performance of lung nodule segmentation algorithms by assessing several measurements of spatial overlap and volume measurement. Nodule sizes varied from 29 μl to 66 ml and demonstrated a diversity of shapes. Agreement in spatial overlap of segmentations was significantly higher for multiple runs of the same algorithm than between segmentations generated by different algorithms (p < 0.05) and was significantly higher on the phantom dataset compared to the other datasets (p < 0.05). Algorithms differed significantly in the bias of the measured volumes of the phantom nodules (p < 0.05) underscoring the need for assessing performance on clinical data in addition to phantoms. Algorithms that most accurately estimated nodule volumes were not the most repeatable, emphasizing the need to evaluate both their accuracy and precision. There were considerable differences between algorithms, especially in a subset of heterogeneous nodules, underscoring the recommendation that the same software be used at all time points in longitudinal studies.

Radiomics of Lung Nodules: A Multi-Institutional Study of Robustness and Agreement of Quantitative Imaging Features

  • Kalpathy-Cramer, J.
  • Mamomov, A.
  • Zhao, B.
  • Lu, L.
  • Cherezov, D.
  • Napel, S.
  • Echegaray, S.
  • Rubin, D.
  • McNitt-Gray, M.
  • Lo, P.
  • Sieren, J. C.
  • Uthoff, J.
  • Dilger, S. K.
  • Driscoll, B.
  • Yeung, I.
  • Hadjiiski, L.
  • Cha, K.
  • Balagurunathan, Y.
  • Gillies, R.
  • Goldgof, D.
Tomography: a journal for imaging research 2016 Journal Article, cited 19 times
Website

Pulmonary Nodule Classification in Lung Cancer from 3D Thoracic CT Scans Using fastai and MONAI

  • Kaliyugarasan, Satheshkumar
  • Lundervold, Arvid
  • Lundervold, Alexander Selvikvåg
International Journal of Interactive Multimedia and Artificial Intelligence 2021 Journal Article, cited 0 times
Website
We construct a convolutional neural network to classify pulmonary nodules as malignant or benign in the context of lung cancer. To construct and train our model, we use our novel extension of the fastai deep learning framework to 3D medical imaging tasks, combined with the MONAI deep learning library. We train and evaluate the model using a large, openly available data set of annotated thoracic CT scans. Our model achieves a nodule classification accuracy of 92.4% and a ROC AUC of 97% when compared to a “ground truth” based on multiple human raters' subjective assessments of malignancy. We further evaluate our approach by predicting patient-level diagnoses of cancer, achieving a test set accuracy of 75%. This is higher than the 70% obtained by aggregating the human raters' assessments. Class activation maps are applied to investigate the features used by our classifier, enabling a rudimentary level of explainability for what are otherwise close to “black box” predictions. As the classification of structures in chest CT scans is useful across a variety of diagnostic and prognostic tasks in radiology, our approach has broad applicability. As we aimed to construct a fully reproducible system that can be compared to new proposed methods and easily be adapted and extended, the full source code of our work is available at https://github.com/MMIV-ML/Lung-CT-fastai-2020.
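A minimal 3D nodule classifier in MONAI, in the spirit of the pipeline described, can be set up as follows; the architecture choice and patch size here are assumptions, not the authors' exact model.

```python
# Sketch: a 3D CNN classifier from MONAI applied to a nodule patch.
import torch
from monai.networks.nets import DenseNet121

model = DenseNet121(spatial_dims=3, in_channels=1, out_channels=2)
patch = torch.randn(2, 1, 32, 48, 48)    # (batch, channel, D, H, W)
logits = model(patch)
print(logits.shape)                       # torch.Size([2, 2])
```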

Multicenter CT phantoms public dataset for radiomics reproducibility tests

  • Kalendralis, Petros
  • Traverso, Alberto
  • Shi, Zhenwei
  • Zhovannik, Ivan
  • Monshouwer, Rene
  • Starmans, Martijn P A
  • Klein, Stefan
  • Pfaehler, Elisabeth
  • Boellaard, Ronald
  • Dekker, Andre
  • Wee, Leonard
Med Phys 2019 Journal Article, cited 0 times
Website
PURPOSE: The aim of this paper is to describe a public, open-access, computed tomography (CT) phantom image set acquired at three centers and collected especially for radiomics reproducibility research. The dataset is useful to test radiomic feature reproducibility with respect to various parameters, such as acquisition settings, scanners, and reconstruction algorithms. ACQUISITION AND VALIDATION METHODS: Three phantoms were scanned in three independent institutions. Images of the following phantoms were acquired: Catphan 700 and COPDGene Phantom II (Phantom Laboratory, Greenwich, NY, USA), and the Triple modality 3D Abdominal Phantom (CIRS, Norfolk, VA, USA). Data were collected at three Dutch medical centers: MAASTRO Clinic (Maastricht, NL), Radboud University Medical Center (Nijmegen, NL), and University Medical Center Groningen (Groningen, NL) with scanners from two different manufacturers, Siemens Healthcare and Philips Healthcare. The following acquisition parameters were varied in the phantom scans: slice thickness, reconstruction kernels, and tube current. DATA FORMAT AND USAGE NOTES: We made the dataset publicly available on the Dutch instance of "Extensible Neuroimaging Archive Toolkit-XNAT" (https://xnat.bmia.nl). The dataset is freely available and reusable with attribution (Creative Commons 3.0 license). POTENTIAL APPLICATIONS: Our goal was to provide a findable, open-access, annotated, and reusable CT phantom dataset for radiomics reproducibility studies. Reproducibility testing and harmonization are fundamental requirements for wide generalizability of radiomics-based clinical prediction models. It is highly desirable to include only reproducible features into models, to be more assured of external validity across hitherto unseen contexts. In this view, phantom data from different centers represent a valuable source of information to exclude CT radiomic features that may already be unstable with respect to simplified structures and tightly controlled scan settings. The intended extension of our shared dataset is to include other modalities and phantoms with more realistic lesion simulations.
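Feature extraction from such phantom scans is commonly done with pyradiomics; in the sketch below the image and mask paths are placeholders, and restricting extraction to two feature classes is an illustrative choice rather than a study setting.

```python
# Sketch: radiomic feature extraction with pyradiomics.
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()                 # start from a clean slate
extractor.enableFeatureClassByName("firstorder")
extractor.enableFeatureClassByName("glcm")

# Placeholder paths; any SimpleITK-readable image/mask pair works:
# features = extractor.execute("phantom_ct.nrrd", "insert_mask.nrrd")
# print({k: v for k, v in features.items() if k.startswith("original")})
```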

FAIR-compliant clinical, radiomics and DICOM metadata of RIDER, interobserver, Lung1 and head-Neck1 TCIA collections

  • Kalendralis, Petros
  • Shi, Zhenwei
  • Traverso, Alberto
  • Choudhury, Ananya
  • Sloep, Matthijs
  • Zhovannik, Ivan
  • Starmans, Martijn P A
  • Grittner, Detlef
  • Feltens, Peter
  • Monshouwer, Rene
  • Klein, Stefan
  • Fijten, Rianne
  • Aerts, Hugo
  • Dekker, Andre
  • van Soest, Johan
  • Wee, Leonard
Med Phys 2020 Journal Article, cited 0 times
Website
PURPOSE: One of the most frequently cited radiomics investigations showed that features automatically extracted from routine clinical images could be used in prognostic modeling. These images have been made publicly accessible via The Cancer Imaging Archive (TCIA). There have been numerous requests for additional explanatory metadata on the following datasets - RIDER, Interobserver, Lung1, and Head-Neck1. To support repeatability, reproducibility, generalizability, and transparency in radiomics research, we publish the subjects' clinical data, extracted radiomics features, and digital imaging and communications in medicine (DICOM) headers of these four datasets with descriptive metadata, in order to be more compliant with findable, accessible, interoperable, and reusable (FAIR) data management principles. ACQUISITION AND VALIDATION METHODS: Overall survival time intervals were updated using a national citizens registry after internal ethics board approval. Spatial offsets of the primary gross tumor volume (GTV) regions of interest (ROIs) associated with the Lung1 CT series were improved on the TCIA. GTV radiomics features were extracted using the open-source Ontology-Guided Radiomics Analysis Workflow (O-RAW). We reshaped the output of O-RAW to map features and extraction settings to the latest version of Radiomics Ontology, so as to be consistent with the Image Biomarker Standardization Initiative (IBSI). Digital imaging and communications in medicine metadata was extracted using a research version of Semantic DICOM (SOHARD, GmbH, Fuerth; Germany). Subjects' clinical data were described with metadata using the Radiation Oncology Ontology. All of the above were published in Resource Descriptor Format (RDF), that is, triples. Example SPARQL queries are shared with the reader to use on the online triples archive, which are intended to illustrate how to exploit this data submission. DATA FORMAT: The accumulated RDF data are publicly accessible through a SPARQL endpoint where the triples are archived. The endpoint is remotely queried through a graph database web application at http://sparql.cancerdata.org. SPARQL queries are intrinsically federated, such that we can efficiently cross-reference clinical, DICOM, and radiomics data within a single query, while being agnostic to the original data format and coding system. The federated queries work in the same way even if the RDF data were partitioned across multiple servers and dispersed physical locations. POTENTIAL APPLICATIONS: The public availability of these data resources is intended to support radiomics features replication, repeatability, and reproducibility studies by the academic community. The example SPARQL queries may be freely used and modified by readers depending on their research question. Data interoperability and reusability are supported by referencing existing public ontologies. The RDF data are readily findable and accessible through the aforementioned link. Scripts used to create the RDF are made available at a code repository linked to this submission: https://gitlab.com/UM-CDS/FAIR-compliant_clinical_radiomics_and_DICOM_metadata.
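A query against the public endpoint can be sketched with SPARQLWrapper; the exact endpoint path and the generic triple pattern below are assumptions for illustration, not one of the paper's example queries.

```python
# Sketch: querying the RDF triples over SPARQL.
from SPARQLWrapper import SPARQLWrapper, JSON

# The "/sparql" path is an assumed endpoint under the URL given above.
sparql = SPARQLWrapper("http://sparql.cancerdata.org/sparql")
sparql.setQuery("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 5")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["s"]["value"], row["p"]["value"])
```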

Artificial intelligence applications in radiotherapy: The role of the FAIR data principles.

  • Kalendralis, Petros
2022 Thesis, cited 0 times
Website
Radiotherapy is one of the main treatment modalities used for cancer. Nowadays, due to emerging artificial intelligence (AI) technologies, radiotherapy has become a broader field. This thesis investigated how AI can make the lives of doctors, physicists, and researchers easier. It also showed that routine clinical tasks, such as quality assurance tests, can be automated. Researchers can reuse machine-readable data, while physicists can validate and improve novel treatment techniques such as proton therapy. These three pillars contribute to the improvement of patient care (personalised radiotherapy). In conclusion, this technological revolution requires a rethinking of the traditional professional roles in radiotherapy and of the design of AI studies. The thesis concluded that radiotherapy professionals and researchers can improve their ability to perform tasks with AI as a supplementary tool.

Detection of lung tumor using dual tree complex wavelet transform and co‐active adaptive neuro fuzzy inference system classification approach

  • Kailasam, Manoj Senthil
  • Thiagarajan, MeeraDevi
International Journal of Imaging Systems and Technology 2021 Journal Article, cited 0 times
Website
Automatic detection and localization of tumor regions in lung images is important for providing timely medical treatment to patients in order to save their lives. In this article, a machine learning-based lung tumor detection, classification, and segmentation algorithm is proposed. The tumor classification phase first smooths the source lung computed tomography image using an adaptive median filter, and then the dual-tree complex wavelet transform (DT-CWT) is applied to the smoothed lung image to decompose it into a number of sub-bands. From the decomposed sub-bands, DWT, pattern, and co-occurrence features are computed and classified using a co-active adaptive neuro-fuzzy inference system (CANFIS). The tumor segmentation phase applies morphological functions to the classified abnormal lung image to locate the tumor regions. Multiple evaluation parameters are used to evaluate the proposed method, which is compared with other state-of-the-art methods on the same lung images from an open-access dataset.
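
For readers unfamiliar with the dual-tree complex wavelet transform used above, the open-source Python package dtcwt provides the decomposition step; the mean-magnitude descriptors below are a simple illustration, not the paper's exact feature set or its CANFIS classifier.

    import numpy as np
    import dtcwt

    image = np.random.rand(256, 256)          # stand-in for a smoothed CT slice

    transform = dtcwt.Transform2d()
    pyramid = transform.forward(image, nlevels=3)

    # Simple texture descriptor: mean magnitude of each complex subband
    # (6 orientations) at each decomposition level.
    for level, hp in enumerate(pyramid.highpasses, start=1):
        print(f"level {level}:", np.round(np.abs(hp).mean(axis=(0, 1)), 4))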

TwoPath U-Net for Automatic Brain Tumor Segmentation from Multimodal MRI Data

  • Kaewrak, Keerati
  • Soraghan, John
  • Di Caterina, Gaetano
  • Grose, Derek
2021 Book Section, cited 0 times
A novel encoder-decoder deep learning network called TwoPath U-Net for the multi-class automatic brain tumor segmentation task is presented. The network uses cascaded local and global feature extraction paths in its down-sampling path, which allows it to learn different aspects of both low-level and high-level features. The proposed architecture, using both full-image and patch-based inputs, was trained on the BraTS2020 training dataset. We tested the network performance using the BraTS2019 validation dataset and obtained mean Dice scores of 0.76, 0.64, and 0.58 and 95% Hausdorff distances of 25.05, 32.83, and 37.57 for the whole tumor, tumor core, and enhancing tumor regions, respectively.
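
The Dice score reported here (and throughout the segmentation papers in this list) is defined as Dice = 2|A ∩ B| / (|A| + |B|) for a predicted mask A and ground-truth mask B; a minimal NumPy sketch:

    import numpy as np

    def dice(pred, truth, eps=1e-7):
        pred, truth = pred.astype(bool), truth.astype(bool)
        inter = np.logical_and(pred, truth).sum()
        return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

    a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1   # prediction (4 voxels)
    b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1   # ground truth (6 voxels)
    print(round(dice(a, b), 3))                        # 2*4 / (4+6) = 0.8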

Evaluation of brain tumor using brain MRI with modified-moth-flame algorithm and Kapur’s thresholding: a study

  • Kadry, Seifedine
  • Rajinikanth, V
  • Raja, N Sri Madhava
  • Hemanth, D Jude
  • Hannon, Naeem MS
  • Raj, Alex Noel Joseph
Evolutionary Intelligence 2021 Journal Article, cited 0 times

Extraction of Tumour in Breast MRI using Joint Thresholding and Segmentation – A Study

  • Kadry, Seifedine
  • Damaševičius, Robertas
  • Taniar, David
  • Rajinikanth, Venkatesan
  • Lawal, Isah A.
2021 Conference Proceedings, cited 0 times
Website
Breast cancer (BC) is a severe condition that largely affects women. Due to its significance, a range of procedures is available for early detection and treatment to save the patient. Clinical diagnosis of BC is performed using (i) image-supported detection and (ii) Core-Needle-Biopsy (CNB) assisted confirmation. The proposed work aims to develop a computerized scheme to detect the Breast-Tumor-Section (BTS) from breast MRI slices. This work implements a joint thresholding and segmentation methodology to enhance and extract the BTS from the 2D MRI slices. A tri-level thresholding based on the Slime-Mould-Algorithm and Shannon's-Entropy (SMA+SE) is implemented to enhance the BTS, and Watershed-Segmentation (WS) is implemented to extract the BTS. After extracting the BTS, a comparison between the BTS and the Ground-Truth image is performed and the necessary Image-Performance-Values (IPV) are computed. In this work the axial, coronal, and sagittal slices of 2D breast MRI are examined separately and the attained results are presented.
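
As a sketch of the thresholding-plus-watershed pipeline described above (with a plain fixed threshold standing in for the SMA+SE tri-level search), the standard marker-based watershed recipe from scikit-image looks as follows; the synthetic two-blob image is only a stand-in for an MRI slice.

    import numpy as np
    from scipy import ndimage as ndi
    from skimage.draw import disk
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    # Synthetic stand-in image: two overlapping bright blobs.
    img = np.zeros((80, 80))
    rr, cc = disk((30, 30), 15)
    img[rr, cc] = 1.0
    rr, cc = disk((45, 50), 15)
    img[rr, cc] = 1.0

    binary = img > 0.5                      # fixed threshold stands in for SMA+SE
    distance = ndi.distance_transform_edt(binary)
    coords = peak_local_max(distance, labels=binary, min_distance=5)
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    labels = watershed(-distance, markers, mask=binary)  # separated regions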

Homology-based radiomic features for prediction of the prognosis of lung cancer based on CT-based radiomics

  • Kadoya, Noriyuki
  • Tanaka, Shohei
  • Kajikawa, Tomohiro
  • Tanabe, Shunpei
  • Abe, Kota
  • Nakajima, Yujiro
  • Yamamoto, Takaya
  • Takahashi, Noriyoshi
  • Takeda, Kazuya
  • Dobashi, Suguru
  • Takeda, Ken
  • Nakane, Kazuaki
  • Jingu, Keiichi
Med Phys 2020 Journal Article, cited 0 times
Website
PURPOSE: Radiomics is a new technique that enables noninvasive prognostic prediction by extracting features from medical images. Homology is a concept used in many branches of algebra and topology that can quantify the degree of contact. In the present study, we developed homology-based radiomic features to predict the prognosis of non-small-cell lung cancer (NSCLC) patients and then evaluated the accuracy of this prediction method. METHODS: Four data sets were used: two to provide training and test data and two for the selection of robust radiomic features. All the data sets were downloaded from The Cancer Imaging Archive (TCIA). In two-dimensional cases, the Betti numbers consist of two values: b0 (zero-dimensional Betti number), which is the number of isolated components, and b1 (one-dimensional Betti number), which is the number of one-dimensional or "circular" holes. For homology-based evaluation, CT images must be converted to binarized images in which each pixel has two possible values: 0 or 1. All CT slices of the gross tumor volume were used for calculating the homology histogram. First, by changing the threshold of the CT value (range: -150 to 300 HU) for all slices, we developed homology-based histograms for b0, b1, and b1/b0 using the binarized images. All histograms were then summed, and the summed histogram was normalized by the number of slices. In total, 144 homology-based radiomic features were defined from the histogram. For comparison, 107 radiomic features were calculated using the standard radiomics technique. To clarify the prognostic power, the relationship between the values of the homology-based radiomic features and overall survival was evaluated using a LASSO Cox regression model and the Kaplan-Meier method. The retained features with non-zero coefficients calculated by the LASSO Cox regression model were used for fitting the regression model and were then integrated into a radiomics signature. An individualized rad score was calculated from a linear combination of the selected features, weighted by their respective coefficients. RESULTS: When the patients in the training and test data sets were stratified into high-risk and low-risk groups according to the rad scores, the overall survival of the groups was significantly different. The C-index values for the homology-based features (rad score), standard features (rad score), and tumor size were 0.625, 0.603, and 0.607, respectively, for the training data sets and 0.689, 0.668, and 0.667 for the test data sets. This result showed that homology-based radiomic features had slightly higher prediction power than the standard radiomic features. CONCLUSIONS: Prediction performance using homology-based radiomic features was comparable to or slightly higher than that using standard radiomic features. These findings suggest that homology-based radiomic features may have great potential for improving the prognostic prediction accuracy of CT-based radiomics, although some limitations should be noted.
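
The Betti numbers used above can be computed for a binarized 2D slice with ordinary connected-component labeling; the sketch below is one plausible reading of the paper's description (8-connectivity for foreground, 4-connectivity for background), not the authors' code.

    import numpy as np
    from scipy import ndimage as ndi

    def betti_2d(binary):
        binary = binary.astype(bool)
        _, b0 = ndi.label(binary, structure=np.ones((3, 3)))  # 8-connected foreground
        # Holes: 4-connected background components minus the single exterior
        # component (padding merges everything touching the border into one).
        padded = np.pad(~binary, 1, constant_values=True)
        _, n_bg = ndi.label(padded)
        return b0, n_bg - 1

    def homology_histogram(ct_slice, thresholds=range(-150, 301, 10)):
        rows = []
        for t in thresholds:                    # binarize at each HU threshold
            b0, b1 = betti_2d(ct_slice > t)
            rows.append((b0, b1, b1 / b0 if b0 else 0.0))
        return np.array(rows)

    rng = np.random.default_rng(0)
    ct_slice = rng.normal(0, 150, (64, 64))     # stand-in for one GTV slice (HU)
    hist = homology_histogram(ct_slice)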

Computer-aided diagnostic system kinds and pulmonary nodule detection efficacy

  • Kadhim, Omar Raad
  • Motlak, Hassan Jassim
  • Abdalla, Kasim Karam
International Journal of Electrical and Computer Engineering (IJECE) 2022 Journal Article, cited 0 times
Website
This paper summarizes the literature on computer-aided detection (CAD) systems used to identify and diagnose lung nodules in images obtained with computed tomography (CT) scanners. The importance of developing such systems lies in the fact that manually detecting lung nodules is painstaking, sequential work for radiologists and takes a long time. Moreover, pulmonary nodules have varied appearances and shapes, and the large number of slices generated by the scanner makes accurately locating lung nodules difficult. Manual nodule detection can miss some nodules, especially when their diameter is less than 10 mm. A CAD system is therefore an essential assistant to the radiologist in nodule detection: it reduces the time consumed and improves detection accuracy. The objective of this paper is to follow up on current and previous work on lung cancer detection and lung nodule diagnosis. The review covers a group of specialized systems in this field and outlines the methods used in them, with an emphasis on systems based on deep learning involving convolutional neural networks.

Radiographic assessment of contrast enhancement and T2/FLAIR mismatch sign in lower grade gliomas: correlation with molecular groups

  • Juratli, Tareq A
  • Tummala, Shilpa S
  • Riedl, Angelika
  • Daubner, Dirk
  • Hennig, Silke
  • Penson, Tristan
  • Zolal, Amir
  • Thiede, Christian
  • Schackert, Gabriele
  • Krex, Dietmar
Journal of Neuro-Oncology 2018 Journal Article, cited 0 times
Website

Cloud-based NoSQL open database of pulmonary nodules for computer-aided lung cancer diagnosis and reproducible research

  • Junior, José Raniery Ferreira
  • Oliveira, Marcelo Costa
  • de Azevedo-Marques, Paulo Mazzoncini
2016 Journal Article, cited 14 times
Website

Algorithmic transparency and interpretability measures improve radiologists' performance in BI-RADS 4 classification

  • Jungmann, F.
  • Ziegelmayer, S.
  • Lohoefer, F. K.
  • Metz, S.
  • Muller-Leisse, C.
  • Englmaier, M.
  • Makowski, M. R.
  • Kaissis, G. A.
  • Braren, R. F.
Eur Radiol 2022 Journal Article, cited 0 times
Website
OBJECTIVE: To evaluate the perception of different types of AI-based assistance and the interaction of radiologists with the algorithm's predictions and certainty measures. METHODS: In this retrospective observer study, four radiologists were asked to classify Breast Imaging-Reporting and Data System 4 (BI-RADS4) lesions (n = 101 benign, n = 99 malignant). The effect of different types of AI-based assistance (occlusion-based interpretability map, classification, and certainty) on the radiologists' performance (sensitivity, specificity, questionnaire) was measured. The influence of the Big Five personality traits was analyzed using the Pearson correlation. RESULTS: Diagnostic accuracy was significantly improved by AI-based assistance (an increase of 2.8% +/- 2.3%, 95 %-CI 1.5 to 4.0 %, p = 0.045) and trust in the algorithm was generated primarily by the certainty of the prediction (100% of participants). Different human-AI interactions were observed ranging from nearly no interaction to humanization of the algorithm. High scores in neuroticism were correlated with higher persuasibility (Pearson's r = 0.98, p = 0.02), while higher conscientiousness and change of accuracy showed an inverse correlation (Pearson's r = -0.96, p = 0.04). CONCLUSION: Trust in the algorithm's performance was mostly dependent on the certainty of the predictions in combination with a plausible heatmap. Human-AI interaction varied widely and was influenced by personality traits. KEY POINTS: * AI-based assistance significantly improved the diagnostic accuracy of radiologists in classifying BI-RADS 4 mammography lesions. * Trust in the algorithm's performance was mostly dependent on the certainty of the prediction in combination with a reasonable heatmap. * Personality traits seem to influence human-AI collaboration. Radiologists with specific personality traits were more likely to change their classification according to the algorithm's prediction than others.

Brain Tumor Segmentation Using Dual-Path Attention U-Net in 3D MRI Images

  • Jun, Wen
  • Haoxiang, Xu
  • Wang, Zhang
2021 Book Section, cited 0 times
Semantic segmentation plays an essential role in brain tumor diagnosis and treatment planning, yet manual segmentation is a time-consuming task. This motivates the use of deep neural networks to segment brain tumors. In this work, we propose a variant of the 3D U-Net which can achieve comparable segmentation accuracy with less graphics-memory cost. More specifically, our model employs a modified attention block to refine the feature map representation along the skip-connection bridge, which consists of parallelly connected spatial and channel attention blocks. Dice coefficients for enhancing tumor, whole tumor, and tumor core reached 0.752, 0.879 and 0.779 respectively on the BraTS 2020 validation dataset.

ONCOhabitats Glioma Segmentation Model

  • Juan-Albarracín, Javier
  • Fuster-Garcia, Elies
  • del Mar Álvarez-Torres, María
  • Chelebian, Eduard
  • García-Gómez, Juan M.
2020 Book Section, cited 0 times
ONCOhabitats is an open online service that provides a fully automatic analysis of tumor vascular heterogeneity in gliomas based on multiparametric MRI. Having a model capable of accurately segmenting pathological tissues is critical to generating a robust analysis of vascular heterogeneity. In this study we present the segmentation model embedded in ONCOhabitats and its performance obtained on the BraTS 2019 dataset. The model implements a residual-Inception U-Net convolutional neural network, incorporating several pre- and post-processing stages. A relabeling strategy has been applied to improve the segmentation of the necrosis of high-grade gliomas and the non-enhancing tumor of low-grade gliomas. The model was trained using 335 cases from the BraTS 2019 challenge training dataset and evaluated with 125 cases from the validation set and 166 cases from the test set. The results on the validation dataset in terms of the mean/median Dice coefficient are 0.73/0.85 in the enhancing tumor region, 0.90/0.92 in the whole tumor, and 0.78/0.89 in the tumor core. The Dice results obtained in the independent test are 0.78/0.84, 0.88/0.92 and 0.83/0.92 respectively for the same sub-compartments of the lesion.

Estimation of an Image Biomarker for Distant Recurrence Prediction in NSCLC Using Proliferation-Related Genes

  • Ju, H. M.
  • Kim, B. C.
  • Lim, I.
  • Byun, B. H.
  • Woo, S. K.
Int J Mol Sci 2023 Journal Article, cited 0 times
Website
This study aimed to identify a distant-recurrence image biomarker in NSCLC by investigating correlations between heterogeneity-related functional gene expression and fluorine-18-2-fluoro-2-deoxy-D-glucose positron emission tomography ((18)F-FDG PET) image features of NSCLC patients. RNA-sequencing data and (18)F-FDG PET images of 53 patients with NSCLC (19 with distant recurrence and 34 without recurrence) from The Cancer Imaging Archive and The Cancer Genome Atlas Program databases were used in a combined analysis. Weighted correlation network analysis was performed to identify gene groups related to distant recurrence. Genes were selected for functions related to distant recurrence. In total, 47 image features were extracted from PET images as radiomics. The relationship between gene expression and image features was estimated using a hypergeometric distribution test with the Pearson correlation method. The distant-recurrence prediction model was validated by a random forest (RF) algorithm using image texture features and related gene expression. In total, 37 gene modules were identified by gene-expression pattern with weighted gene co-expression network analysis. The gene modules with the highest significance were selected (p-value < 0.05). Nine genes with high protein-protein interaction and area under the curve (AUC) were identified as hub genes involved in the proliferation function, which plays an important role in distant recurrence of cancer. Four image features (GLRLM_SRHGE, GLRLM_HGRE, SUVmean, and GLZLM_GLNU) and six genes were identified as correlated (p-value < 0.1). Using the RF algorithm, performance was accuracy 0.59 and AUC 0.729 with the 47 image texture features, and accuracy 0.767 and AUC 0.808 with the hub genes. Performance was accuracy 0.783 and AUC 0.912 with the four image texture features plus the six correlated genes, and accuracy 0.738 and AUC 0.779 with only the four image texture features. The four image texture features validated by heterogeneity-group gene expression were found to be related to cancer heterogeneity. The identification of these image texture features demonstrated that advanced prediction of NSCLC distant recurrence is possible using the image biomarker.
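
As an illustration of the final evaluation step (a random forest scored by AUC), a minimal scikit-learn sketch on synthetic stand-in data:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(53, 4))       # 53 patients x 4 texture features
    y = rng.integers(0, 2, size=53)    # distant recurrence yes/no (synthetic)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"cross-validated AUC: {auc:.3f}")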

Review of Deep Learning and Interpretability

  • Joshi, Hrushikesh
  • Rajeswari, Kannan
  • Joshi, Sneha
2022 Book Section, cited 0 times
Website

Interactive 3D Virtual Colonoscopic Navigation For Polyp Detection From CT Images

  • Joseph, Jinu
  • Kumar, Rajesh
  • Chandran, Pournami S
  • Vidya, PV
Procedia Computer Science 2017 Journal Article, cited 0 times
Website

Pediatric chest-abdomen-pelvis and abdomen-pelvis CT images with expert organ contours

  • Jordan, P.
  • Adamson, P. M.
  • Bhattbhatt, V.
  • Beriwal, S.
  • Shen, S.
  • Radermecker, O.
  • Bose, S.
  • Strain, L. S.
  • Offe, M.
  • Fraley, D.
  • Principi, S.
  • Ye, D. H.
  • Wang, A. S.
  • Van Heteren, J.
  • Vo, N. J.
  • Schmidt, T. G.
Med Phys 2022 Journal Article, cited 0 times
Website
PURPOSE: Organ autosegmentation efforts to date have largely been focused on adult populations, due to limited availability of pediatric training data. Pediatric patients may present additional challenges for organ segmentation. This paper describes a dataset of 359 pediatric chest-abdomen-pelvis and abdomen-pelvis CT images with expert contours of up to 29 anatomical organ structures to aid in the evaluation and development of autosegmentation algorithms for pediatric CT imaging. ACQUISITION AND VALIDATION METHODS: The dataset collection consists of axial CT images in DICOM format of 180 male and 179 female pediatric chest-abdomen-pelvis or abdomen-pelvis exams acquired from one of three CT scanners at Children's Wisconsin. The datasets represent random pediatric cases based upon routine clinical indications. Subjects ranged in age from 5 days to 16 years, with a mean age of seven years. The CT acquisition, contrast, and reconstruction protocols varied across the scanner models and patients, with specifications available in the DICOM headers. Expert contours were manually labeled for up to 29 organ structures per subject. Not all contours are available for all subjects, due to limited field of view or unreliable contouring caused by high noise. DATA FORMAT AND USAGE NOTES: The data are available on TCIA (https://www.cancerimagingarchive.net/) under the collection Pediatric-CT-SEG. The axial CT image slices for each subject are available in DICOM format. The expert contours are stored in a single DICOM RTSTRUCT file for each subject. The contour names are listed in Table 2. POTENTIAL APPLICATIONS: This dataset will enable the evaluation and development of organ autosegmentation algorithms for pediatric populations, which exhibit variations in organ shape and size across age. Automated organ segmentation from CT images has numerous applications including radiation therapy, diagnostic tasks, surgical planning, and patient-specific organ dose estimation.

An image registration method for voxel-wise analysis of whole-body oncological PET-CT

  • Jonsson, H.
  • Ekstrom, S.
  • Strand, R.
  • Pedersen, M. A.
  • Molin, D.
  • Ahlstrom, H.
  • Kullberg, J.
2022 Journal Article, cited 0 times
Website
Whole-body positron emission tomography-computed tomography (PET-CT) imaging in oncology provides comprehensive information of each patient's disease status. However, image interpretation of volumetric data is a complex and time-consuming task. In this work, an image registration method targeted towards computer-aided voxel-wise analysis of whole-body PET-CT data was developed. The method used both CT images and tissue segmentation masks in parallel to spatially align images step-by-step. To evaluate its performance, a set of baseline PET-CT images of 131 classical Hodgkin lymphoma (cHL) patients and longitudinal image series of 135 head and neck cancer (HNC) patients were registered between and within subjects according to the proposed method. Results showed that major organs and anatomical structures generally were registered correctly. Whole-body inverse consistency vector and intensity magnitude errors were on average less than 5 mm and 45 Hounsfield units, respectively, in both registration tasks. Image registration was feasible in terms of runtime, and the nearly automatic pipeline enabled efficient image processing. Metabolic tumor volumes of the cHL patients and registration-derived therapy-related tissue volume change of the HNC patients mapped to template spaces confirmed proof-of-concept. In conclusion, the method established a robust point-correspondence and enabled quantitative visualization of group-wise image features on voxel level.

Spatial mapping of tumor heterogeneity in whole-body PET-CT: a feasibility study

  • Jonsson, H.
  • Ahlstrom, H.
  • Kullberg, J.
2023 Journal Article, cited 0 times
Website
BACKGROUND: Tumor heterogeneity is recognized as a predictor of treatment response and patient outcome. Quantification of tumor heterogeneity across all scales may therefore provide critical insight that ultimately improves cancer management. METHODS: An image registration-based framework for the study of tumor heterogeneity in whole-body images was evaluated on a dataset of 490 FDG-PET-CT images of lung cancer, lymphoma, and melanoma patients. Voxel-, lesion- and subject-level features were extracted from the subjects' segmented lesion masks and mapped to female and male template spaces for voxel-wise analysis. Resulting lesion feature maps of the three subsets of cancer patients were studied visually and quantitatively. Lesion volumes and lesion distances in subject spaces were compared with resulting properties in template space. The strength of the association between subject and template space for these properties was evaluated with Pearson's correlation coefficient. RESULTS: Spatial heterogeneity in terms of lesion frequency distribution in the body, metabolic activity, and lesion volume was seen between the three subsets of cancer patients. Lesion feature maps showed anatomical locations with low versus high mean feature value among lesions sampled in space and also highlighted sites with high variation between lesions in each cancer subset. Spatial properties of the lesion masks in subject space correlated strongly with the same properties measured in template space (lesion volume, R = 0.986, p < 0.001; total metabolic volume, R = 0.988, p < 0.001; maximum within-patient lesion distance, R = 0.997, p < 0.001). Lesion volume and total metabolic volume increased on average from subject to template space (lesion volume, 3.1 +/- 52 ml; total metabolic volume, 53.9 +/- 229 ml). Pair-wise lesion distance decreased on average by 0.1 +/- 1.6 cm and maximum within-patient lesion distance increased on average by 0.5 +/- 2.1 cm from subject to template space. CONCLUSIONS: Spatial tumor heterogeneity between subsets of interest in cancer cohorts can successfully be explored in whole-body PET-CT images within the proposed framework. Whole-body studies are, however, especially prone to suffer from regional variation in lesion frequency, and thus statistical power, due to the non-uniform distribution of lesions across a large field of view.

A First Step Towards an Algorithm for Breast Cancer Reoperation Prediction Using Machine Learning and Mammographic Images

  • Jönsson, Emma
2022 Thesis, cited 0 times
Website
Cancer is the second leading cause of death worldwide, and 30% of all cancer cases among women are breast cancer. A popular treatment is breast-conserving surgery, where only a part of the breast is surgically removed. Surgery is expensive and has a significant impact on the body, and for some women a reoperation is needed. The aim of this thesis was to see whether it is possible to predict if a person will be in need of reoperation with the help of whole mammographic images and deep learning. The data used in this thesis were collected from two different open sources: (1) the Chinese Mammography Database (CMMD), from which 1052 benign images and 1090 malignant images were used; (2) the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM), from which 182 benign images and 145 malignant images were used. With those images, both a simple convolutional neural network (CNN) and a transfer-learning network using the pre-trained model MobileNet were trained to classify the images as benign or malignant. All the networks were evaluated using learning curves, confusion matrices, accuracy, sensitivity, specificity, AUC, and ROC curves. The highest result obtained, an AUC value of 0.599, belonged to a transfer-learning network that used the pre-trained model MobileNet and was trained on the CMMD dataset.

Analysis of Vestibular Labyrinthine Geometry and Variation in the Human Temporal Bone

  • Johnson Chacko, Lejo
  • Schmidbauer, Dominik T
  • Handschuh, Stephan
  • Reka, Alen
  • Fritscher, Karl D
  • Raudaschl, Patrik
  • Saba, Rami
  • Handler, Michael
  • Schier, Peter P
  • Baumgarten, Daniel
  • Fischer, Natalie
  • Pechriggl, Elisabeth J
  • Brenner, Erich
  • Hoermann, Romed
  • Glueckert, Rudolf
  • Schrott-Fischer, Anneliese
Frontiers in Neuroscience 2018 Journal Article, cited 4 times
Website
Stable posture and body movement in humans is dictated by the precise functioning of the ampulla organs in the semi-circular canals. Statistical analysis of the interrelationship between bony and membranous compartments within the semi-circular canals is dependent on the visualization of soft tissue structures. Thirty-one human inner ears were prepared, post-fixed with osmium tetroxide and decalcified for soft tissue contrast enhancement. High resolution X-ray microtomography images at 15 μm voxel size were manually segmented. These data served as templates for centerline generation and cross-sectional area extraction. Our estimates demonstrate the variability of individual specimens from averaged centerlines of both the bony and membranous labyrinth. Centerline lengths and cross-sectional areas along these lines were identified from the segmented data. Using centerlines weighted by the inverse squares of the cross-sectional areas, plane angles could be quantified. The fitted planes indicate that the bony labyrinth resembles a Cartesian coordinate system more closely than the membranous labyrinth. A widening in the membranous labyrinth of the lateral semi-circular canal was observed in some of the specimens. Likewise, the cross-sectional areas in the perilymphatic spaces of the lateral canal differed from the other canals. For the first time we could precisely describe the geometry of the human membranous labyrinth based on a large sample size. Awareness of the variations in the canal geometry of the membranous and bony labyrinth would be a helpful reference in designing electrodes for future vestibular prostheses and simulating fluid dynamics more precisely.

Prostate cancer prediction from multiple pretrained computer vision model

  • John, Jisha
  • Ravikumar, Aswathy
  • Abraham, Bejoy
Health and Technology 2021 Journal Article, cited 0 times
Website
The prostate gland, found in men, is a male reproductive gland responsible for secreting a thin alkaline fluid that forms a major portion of the ejaculate. The gland has the shape of a small walnut, and the cancer arising in this gland is called prostate cancer. It has the second-highest mortality rate according to studies. Therefore, its detection at an early stage, when it is still confined to the prostate gland, is life-saving, as it ensures a better chance of successful treatment. Existing preliminary screening approaches for its detection include the prostate-specific antigen (PSA) blood test and the digital rectal exam (DRE). In the proposed method we use two popular pretrained models for feature extraction, MobileNet and DenseNet. The extracted features are stacked, augmented, and fed to a two-stage classifier that provides the prediction. The proposed system is found to have an accuracy of 93.3% and outperforms other traditional approaches.

Enhancement of Deep Learning in Image Classification Performance Using Xception with the Swish Activation Function for Colorectal Polyp Preliminary Screening

  • Jinsakul, Natinai
  • Tsai, Cheng-Fa
  • Tsai, Chia-En
  • Wu, Pensee
Mathematics 2019 Journal Article, cited 0 times
One of the leading forms of cancer is colorectal cancer (CRC), which is responsible for increasing mortality in young people. The aim of this paper is to provide an experimental modification of the Xception deep learning model with the Swish activation function and to assess the possibility of developing a preliminary colorectal polyp screening system by training the proposed model with a colorectal topogram dataset in two and three classes. The results indicate that the proposed model can enhance the original convolutional neural network model, achieving classification accuracy of up to 98.99% for classifying into two classes and 91.48% for three classes. When testing the model with additional external images, the proposed method also improved the prediction compared to the traditional method, with 99.63% accuracy for true prediction of two classes and 80.95% accuracy for true prediction of three classes.

Generating post-hoc explanation from deep neural networks for multi-modal medical image analysis tasks

  • Jin, W.
  • Li, X.
  • Fatehi, M.
  • Hamarneh, G.
2023 Journal Article, cited 1 times
Website
Explaining model decisions from medical image inputs is necessary for deploying deep neural network (DNN) based models as clinical decision assistants. The acquisition of multi-modal medical images is pervasive in practice for supporting the clinical decision-making process. Multi-modal images capture different aspects of the same underlying regions of interest. Explaining DNN decisions on multi-modal medical images is thus a clinically important problem. Our methods adopt commonly-used post-hoc artificial intelligence feature attribution methods to explain DNN decisions on multi-modal medical images, including two categories of gradient- and perturbation-based methods.
  • Gradient-based explanation methods, such as Guided BackProp and DeepLift, utilize the gradient signal to estimate the feature importance for the model prediction.
  • Perturbation-based methods, such as occlusion, LIME, and kernel SHAP, utilize input-output sampling pairs to estimate the feature importance.
  • We describe the implementation details on how to make the methods work for multi-modal image input, and make the implementation code available.
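
Of the perturbation-based methods listed above, occlusion is the simplest to sketch: hide one patch at a time and record the drop in the model's output. The sketch below is generic (any callable predict), not the authors' implementation; for multi-modal input, each modality would be occluded separately.

    import numpy as np

    def occlusion_map(image, predict, patch=8, baseline=0.0):
        """predict: any callable mapping a 2D image to a scalar class score."""
        base_score = predict(image)
        heat = np.zeros(image.shape[:2])
        for r in range(0, image.shape[0], patch):
            for c in range(0, image.shape[1], patch):
                occluded = image.copy()
                occluded[r:r + patch, c:c + patch] = baseline
                # Importance of a patch = drop in score when it is hidden.
                heat[r:r + patch, c:c + patch] = base_score - predict(occluded)
        return heat

    # Toy usage: a "model" that just sums a region of interest.
    score = lambda im: float(im[8:16, 8:16].sum())
    heat = occlusion_map(np.ones((32, 32)), score)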

Guidelines and evaluation of clinical explainable AI in medical image analysis

  • Jin, W.
  • Li, X.
  • Fatehi, M.
  • Hamarneh, G.
Med Image Anal 2023 Journal Article, cited 0 times
Website
Explainable artificial intelligence (XAI) is essential for enabling clinical users to get informed decision support from AI and comply with evidence-based medical practice. Applying XAI in clinical settings requires proper evaluation criteria to ensure the explanation technique is both technically sound and clinically useful, but specific support is lacking to achieve this goal. To bridge the research gap, we propose the Clinical XAI Guidelines that consist of five criteria a clinical XAI needs to be optimized for. The guidelines recommend choosing an explanation form based on Guideline 1 (G1) Understandability and G2 Clinical relevance. For the chosen explanation form, its specific XAI technique should be optimized for G3 Truthfulness, G4 Informative plausibility, and G5 Computational efficiency. Following the guidelines, we conducted a systematic evaluation on a novel problem of multi-modal medical image explanation with two clinical tasks, and proposed new evaluation metrics accordingly. Sixteen commonly-used heatmap XAI techniques were evaluated and found to be insufficient for clinical use due to their failure in G3 and G4. Our evaluation demonstrated the use of Clinical XAI Guidelines to support the design and evaluation of clinically viable XAI.

Evaluation of Feature Robustness Against Technical Parameters in CT Radiomics: Verification of Phantom Study with Patient Dataset

  • Jin, Hyeongmin
  • Kim, Jong Hyo
Journal of Signal Processing Systems 2020 Journal Article, cited 1 times
Website
Recent advances in radiomics have shown promising results in prognostic and diagnostic studies with high-dimensional imaging feature analysis. However, radiomic features are known to be affected by technical parameters and feature extraction methodology. We evaluated the robustness of CT radiomic features against the technical parameters involved in CT acquisition and feature extraction procedures using a standardized phantom and verified the feature robustness by using patient cases. The ACR phantom was scanned with two tube currents, two reconstruction kernels, and two field-of-view sizes. A total of 47 radiomic features of textures and first-order statistics were extracted on the homogeneous region from all scans. Intrinsic variability was measured to identify unstable features vulnerable to inherent CT noise and texture. A susceptibility index was defined to represent the susceptibility to the variation of a given technical parameter. Eighteen radiomic features were shown to be intrinsically unstable under the reference condition. The features were more susceptible to reconstruction kernel variation than to other sources of variation. The feature robustness evaluated on the phantom CT correlated with that evaluated on clinical CT scans. We revealed that a number of scan parameters could significantly affect the radiomic features. These characteristics should be considered in a radiomic study when different scan parameters are used in a clinical dataset.

Predicting the Stage of Non-small Cell Lung Cancer with Divergence Neural Network Using Pre-treatment Computed Tomography

  • Choi, Jieun
  • Cho, Hwan-ho
  • Park, Hyunjin
2021 Conference Paper, cited 0 times
Website
Determining the stage of non-small cell lung cancer (NSCLC) is important for treatment and prognosis. Staging involves professional interpretation of imaging, so we aimed to build an automatic process with deep learning (DL). We propose an end-to-end DL method that uses pre-treatment computed tomography images to classify early- and advanced-stage NSCLC. DL models were developed and tested to classify early and advanced stages using training (n = 58), validation (n = 7), and testing (n = 17) cohorts obtained from public domains. The network consists of three parts: encoder layers, decoder layers, and a classification layer. The encoder and decoder layers are trained to reconstruct the original images; the classification layers are trained to classify early- and advanced-stage NSCLC patients with a dense layer. Other machine learning-based approaches were compared. Our model achieved an accuracy of 0.8824, sensitivity of 1.0, specificity of 0.6, and area under the curve (AUC) of 0.7333, compared with other approaches (AUC 0.5500-0.7167), in the test cohort for classifying between early and advanced stages. Our DL model for classifying NSCLC patients into early and advanced stages showed promising results and could be useful in future NSCLC research.

Two-Stage Cascaded U-Net: 1st Place Solution to BraTS Challenge 2019 Segmentation Task

  • Jiang, Zeyu
  • Ding, Changxing
  • Liu, Minfeng
  • Tao, Dacheng
2020 Book Section, cited 0 times
In this paper, we devise a novel two-stage cascaded U-Net to segment the substructures of brain tumors from coarse to fine. The network is trained end-to-end on the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2019 training dataset. Experimental results on the testing set demonstrate that the proposed method achieved average Dice scores of 0.83267, 0.88796 and 0.83697, as well as Hausdorff distances (95%) of 2.65056, 4.61809 and 4.13071, for the enhancing tumor, whole tumor and tumor core, respectively. The approach won the 1st place in the BraTS 2019 challenge segmentation task, with more than 70 teams participating in the challenge.

SwinBTS: A Method for 3D Multimodal Brain Tumor Segmentation Using Swin Transformer

  • Jiang, Yun
  • Zhang, Yuan
  • Lin, Xin
  • Dong, Jinkun
  • Cheng, Tongtong
  • Liang, Jing
Brain Sciences 2022 Journal Article, cited 0 times
Website
Brain tumor semantic segmentation is a critical medical image processing work, which aids clinicians in diagnosing patients and determining the extent of lesions. Convolutional neural networks (CNNs) have demonstrated exceptional performance in computer vision tasks in recent years. For 3D medical image tasks, deep convolutional neural networks based on an encoder–decoder structure and skip-connection have been frequently used. However, CNNs have the drawback of being unable to learn global and remote semantic information well. On the other hand, the transformer has recently found success in natural language processing and computer vision as a result of its usage of a self-attention mechanism for global information modeling. For demanding prediction tasks, such as 3D medical picture segmentation, local and global characteristics are critical. We propose SwinBTS, a new 3D medical picture segmentation approach, which combines a transformer, convolutional neural network, and encoder–decoder structure to define the 3D brain tumor semantic segmentation job as a sequence-to-sequence prediction challenge in this research. To extract contextual data, the 3D Swin Transformer is utilized as the network’s encoder and decoder, and convolutional operations are employed for upsampling and downsampling. Finally, we achieve segmentation results using an improved Transformer module that we built for increasing detail feature extraction. Extensive experimental results on the BraTS 2019, BraTS 2020, and BraTS 2021 datasets reveal that SwinBTS outperforms state-of-the-art 3D algorithms for brain tumor segmentation on 3D MRI scanned images. Keywords: brain tumor segmentation; Swin Transformer; 3D CNN; depth-wise separable convolution

Improving the Pulmonary Nodule Classification Based on KPCA-CNN Model

  • Jiang, Peichen
2022 Journal Article, cited 0 times
Website
Lung cancer mortality, the main cause of cancer-associated death all over the world, can be reduced by screening risky patients with low-dose computed tomography (CT) scans. In CT screening, radiologists have to examine millions of CT pictures, putting a great load on them. Deep convolutional neural networks (CNNs) have the potential to improve screening efficiency. In the examination of lung cancer screening CT images, estimating the chance of a malignant nodule at a specific location on a CT scan is a critical step. Low-dimensional convolutional neural networks and other methods are unable to provide sufficient estimates for this task, while the most advanced 3-dimensional CNNs (3D-CNNs) have extremely high computing requirements. This article presents a novel strategy for reducing false positives in automatic pulmonary nodule diagnosis from 3-dimensional CT imaging by merging a kernel principal component analysis (kPCA) approach with a 2-dimensional CNN (2D-CNN). The kPCA method is utilized to regenerate the 3-dimensional CT images, with the goal of reducing the data dimension and minimizing noise in the raw sensory data while preserving neoplastic information. When trained with the regenerated data, the CNN can diagnose new CT scans with an accuracy of up to 90%, which is better than existing 2D-CNNs and on par with the best 3D-CNNs. The short training duration and consistent accuracy show the potential of the kPCA-CNN to adapt to CT scans with different parameters in practice. The study shows that the kPCA-CNN modeling technique can improve the efficiency of lung cancer diagnosis.
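
The kPCA regeneration step described above can be sketched with scikit-learn's KernelPCA; the kernel, component count, and synthetic data below are illustrative assumptions.

    import numpy as np
    from sklearn.decomposition import KernelPCA

    rng = np.random.default_rng(0)
    volumes = rng.normal(size=(100, 16 * 16 * 16))  # 100 flattened scans (synthetic)

    kpca = KernelPCA(n_components=64, kernel="rbf", gamma=1e-4,
                     fit_inverse_transform=True)
    codes = kpca.fit_transform(volumes)             # low-dimensional representation
    regenerated = kpca.inverse_transform(codes)     # denoised reconstruction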

Fusion Radiomics Features from Conventional MRI Predict MGMT Promoter Methylation Status in Lower Grade Gliomas

  • Jiang, Chendan
  • Kong, Ziren
  • Liu, Sirui
  • Feng, Shi
  • Zhang, Yiwei
  • Zhu, Ruizhe
  • Chen, Wenlin
  • Wang, Yuekun
  • Lyu, Yuelei
  • You, Hui
  • Zhao, Dachun
  • Wang, Renzhi
  • Wang, Yu
  • Ma, Wenbin
  • Feng, Feng
Eur J Radiol 2019 Journal Article, cited 0 times
Website
PURPOSE: The methylation status of the O6-methylguanine-DNA methyltransferase (MGMT) promoter has been proven to be a prognostic and predictive biomarker for lower grade glioma (LGG). This study aims to build a radiomics model to preoperatively predict the MGMT promoter methylation status in LGG. METHOD: 122 pathology-confirmed LGG patients were retrospectively reviewed, with 87 local patients as the training dataset, and 35 from The Cancer Imaging Archive as independent validation. A total of 1702 radiomics features were extracted from three-dimensional contrast-enhanced T1 (3D-CE-T1)-weighted and T2-weighted MRI images: 14 shape, 18 first-order, 75 texture, and 744 wavelet features per sequence. The radiomics features were selected with the least absolute shrinkage and selection operator (LASSO) algorithm, and prediction models were constructed with multiple classifiers. Models were evaluated using receiver operating characteristic (ROC) analysis. RESULTS: Five radiomics prediction models, namely, a 3D-CE-T1-weighted single radiomics model, a T2-weighted single radiomics model, a fusion radiomics model, a linear combination radiomics model, and a clinical integrated model, were built. The fusion radiomics model, constructed from the concatenation of features from both series, displayed the best performance, with an accuracy of 0.849 and an area under the curve (AUC) of 0.970 (0.939-1.000) in the training dataset, and an accuracy of 0.886 and an AUC of 0.898 (0.786-1.000) in the validation dataset. Linear combination of the single radiomics models and integration of clinical factors did not improve performance. CONCLUSIONS: Conventional MRI radiomics models are reliable for predicting the MGMT promoter methylation status in LGG patients. The fusion of radiomics features from different series may increase the prediction performance.
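
A minimal sketch of the LASSO selection and rad-score construction described above, with L1-penalized logistic regression standing in for the LASSO step and synthetic stand-in data:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(87, 1702))   # training cohort x radiomics features
    y = rng.integers(0, 2, size=87)   # MGMT methylated yes/no (synthetic)

    Xs = StandardScaler().fit_transform(X)
    lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
    lasso.fit(Xs, y)

    selected = np.flatnonzero(lasso.coef_[0])
    # Rad score: linear combination of retained features, weighted by coefficients.
    rad_score = Xs[:, selected] @ lasso.coef_[0][selected] + lasso.intercept_[0]
    print(f"{selected.size} features retained")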

BiTr-Unet: A CNN-Transformer Combined Network for MRI Brain Tumor Segmentation

  • Jia, Q.
  • Shu, H.
Brainlesion 2021 Book Section, cited 0 times
Website
Convolutional neural networks (CNNs) have achieved remarkable success in automatically segmenting organs or lesions on 3D medical images. Recently, vision transformer networks have exhibited exceptional performance in 2D image classification tasks. Compared with CNNs, transformer networks have an appealing advantage of extracting long-range features due to their self-attention algorithm. Therefore, we propose a CNN-Transformer combined model, called BiTr-Unet, with specific modifications for brain tumor segmentation on multi-modal MRI scans. Our BiTr-Unet achieves good performance on the BraTS2021 validation dataset with median Dice scores of 0.9335, 0.9304 and 0.8899, and median Hausdorff distances of 2.8284, 2.2361 and 1.4142 for the whole tumor, tumor core, and enhancing tumor, respectively. On the BraTS2021 testing dataset, the corresponding results are 0.9257, 0.9350 and 0.8874 for Dice score, and 3, 2.2361 and 1.4142 for Hausdorff distance. The code is publicly available at https://github.com/JustaTinyDot/BiTr-Unet.

H2NF-Net for Brain Tumor Segmentation Using Multimodal MR Imaging: 2nd Place Solution to BraTS Challenge 2020 Segmentation Task

  • Jia, Haozhe
  • Cai, Weidong
  • Huang, Heng
  • Xia, Yong
2021 Book Section, cited 0 times
In this paper, we propose a Hybrid High-resolution and Non-local Feature Network (H2NF-Net) to segment brain tumor in multimodal MR images. Our H2NF-Net uses the single and cascaded HNF-Nets to segment different brain tumor sub-regions and combines the predictions together as the final segmentation. We trained and evaluated our model on the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2020 dataset. The results on the test set show that the combination of the single and cascaded models achieved average Dice scores of 0.78751, 0.91290, and 0.85461, as well as Hausdorff distances (95%) of 26.57525, 4.18426, and 4.97162 for the enhancing tumor, whole tumor, and tumor core, respectively. Our method won the second place in the BraTS 2020 challenge segmentation task out of nearly 80 participants.

ResDAC-Net: a novel pancreas segmentation model utilizing residual double asymmetric spatial kernels

  • Ji, Z.
  • Liu, J.
  • Mu, J.
  • Zhang, H.
  • Dai, C.
  • Yuan, N.
  • Ganchev, I.
2024 Journal Article, cited 0 times
Website
The pancreas not only is situated in a complex abdominal background but is also surrounded by other abdominal organs and adipose tissue, resulting in blurred organ boundaries. Accurate segmentation of pancreatic tissue is crucial for computer-aided diagnosis systems, as it can be used for surgical planning, navigation, and assessment of organs. In light of this, the current paper proposes a novel Residual Double Asymmetric Convolution Network (ResDAC-Net) model. Firstly, newly designed ResDAC blocks are used to highlight pancreatic features. Secondly, feature fusion between adjacent encoding layers fully utilizes the low-level and deep-level features extracted by the ResDAC blocks. Finally, parallel dilated convolutions are employed to increase the receptive field to capture multiscale spatial information. ResDAC-Net is highly competitive with existing state-of-the-art models according to three (out of four) evaluation metrics, including the two main ones used for segmentation performance evaluation (i.e., DSC and the Jaccard index).

External Validation of Robust Radiomic Signature to Predict 2-Year Overall Survival in Non-Small-Cell Lung Cancer

  • Jha, A. K.
  • Sherkhane, U. B.
  • Mthun, S.
  • Jaiswar, V.
  • Purandare, N.
  • Prabhash, K.
  • Wee, L.
  • Rangarajan, V.
  • Dekker, A.
J Digit Imaging 2023 Journal Article, cited 0 times
Website
Lung cancer is the second most fatal disease worldwide. In the last few years, radiomics has been explored to develop prediction models for various clinical endpoints in lung cancer. However, the robustness of radiomic features is under question and has been identified as one of the roadblocks in the implementation of a radiomic-based prediction model in the clinic. Many past studies have suggested identifying robust radiomic features for prediction model development, and in our earlier study we identified such features. The objective of this study was to develop and validate robust radiomic signatures for predicting 2-year overall survival in non-small cell lung cancer (NSCLC). This retrospective study included a cohort of 300 stage I-IV NSCLC patients. Data from 200 institutional patients were included for training and internal validation, and data from 100 patients from The Cancer Imaging Archive (TCIA) open-source image repository were used for external validation. Radiomic features were extracted from the CT images of both cohorts. Feature selection was performed using hierarchical clustering, a chi-squared test, and recursive feature elimination (RFE). In total, six prediction models were developed using random forest (RF-Model-O, RF-Model-B), gradient boosting (GB-Model-O, GB-Model-B), and support vector (SV-Model-O, SV-Model-B) classifiers to predict 2-year overall survival (OS) on the original data as well as balanced data. Model validation was performed using 10-fold cross-validation, internal validation, and external validation. Using a multistep feature selection method, the overall top 10 features were chosen. On internal validation, the two random forest models (RF-Model-O, RF-Model-B) displayed the highest accuracy; their scores on the original and balanced datasets were 0.81 and 0.77 respectively. During external validation, both random forest models' accuracy was 0.68. In our study, robust radiomic features showed promising predictive performance for 2-year overall survival in NSCLC.
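
The final selection-plus-classification stage can be sketched as a scikit-learn pipeline (recursive feature elimination feeding a random forest, scored by 10-fold cross-validation); the hierarchical-clustering and chi-squared pre-filters from the paper are omitted, and the data are synthetic.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import RFE
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import Pipeline

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 100))   # institutional cohort x radiomic features
    y = rng.integers(0, 2, size=200)  # 2-year overall survival (synthetic)

    model = Pipeline([
        ("rfe", RFE(RandomForestClassifier(n_estimators=100, random_state=0),
                    n_features_to_select=10)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ])
    acc = cross_val_score(model, X, y, cv=10, scoring="accuracy").mean()
    print(f"10-fold CV accuracy: {acc:.3f}")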

CASED: Curriculum Adaptive Sampling for Extreme Data Imbalance

  • Jesson, Andrew
  • Guizard, Nicolas
  • Ghalehjegh, Sina Hamidi
  • Goblot, Damien
  • Soudan, Florian
  • Chapados, Nicolas
2017 Conference Proceedings, cited 18 times
Website
We introduce CASED, a novel curriculum sampling algorithm that facilitates the optimization of deep learning segmentation or detection models on data sets with extreme class imbalance. We evaluate the CASED learning framework on the task of lung nodule detection in chest CT. In contrast to two-stage solutions, wherein nodule candidates are first proposed by a segmentation model and refined by a second detection stage, CASED improves the training of deep nodule segmentation models (e.g. UNet) to the point where state of the art results are achieved using only a trivial detection stage. CASED improves the optimization of deep segmentation models by allowing them to first learn how to distinguish nodules from their immediate surroundings, while continuously adding a greater proportion of difficult-to-classify global context, until uniformly sampling from the empirical data distribution. Using CASED during training yields a minimalist proposal to the lung nodule detection problem that tops the LUNA16 nodule detection benchmark with an average sensitivity score of 88.35%. Furthermore, we find that models trained using CASED are robust to nodule annotation quality by showing that comparable results can be achieved when only a point and radius for each ground truth nodule are provided during training. Finally, the CASED learning framework makes no assumptions with regard to imaging modality or segmentation target and should generalize to other medical imaging problems where class imbalance is a persistent problem.
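
The curriculum idea behind CASED (start by oversampling nodule-containing patches, then anneal toward uniform sampling of the empirical distribution) can be sketched as follows; the linear schedule is an assumption, not the paper's exact rule.

    import numpy as np

    def sample_patch_indices(n_samples, nodule_idx, all_idx, progress, rng):
        """progress in [0, 1]: 0 = start of training, 1 = end of training."""
        out = []
        for _ in range(n_samples):
            # With probability `progress`, sample from the full empirical
            # distribution; otherwise force a nodule-containing patch.
            pool = all_idx if rng.random() < progress else nodule_idx
            out.append(rng.choice(pool))
        return np.array(out)

    rng = np.random.default_rng(0)
    nodule_idx = np.arange(50)       # patches containing nodules
    all_idx = np.arange(100_000)     # all candidate patches
    early = sample_patch_indices(32, nodule_idx, all_idx, progress=0.1, rng=rng)
    late = sample_patch_indices(32, nodule_idx, all_idx, progress=0.9, rng=rng)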

Computer-aided nodule detection and volumetry to reduce variability between radiologists in the interpretation of lung nodules at low-dose screening CT

  • Jeon, Kyung Nyeo
  • Goo, Jin Mo
  • Lee, Chang Hyun
  • Lee, Youkyung
  • Choo, Ji Yung
  • Lee, Nyoung Keun
  • Shim, Mi-Suk
  • Lee, In Sun
  • Kim, Kwang Gi
  • Gierada, David S
Investigative radiology 2012 Journal Article, cited 51 times
Website

Deep-learning soft-tissue decomposition in chest radiography using fast fuzzy C-means clustering with CT datasets

  • Jeon, Duhee
  • Lim, Younghwan
  • Lee, Minjae
  • Kim, Guna
  • Cho, Hyosung
2023 Journal Article, cited 0 times
Chest radiography is the most routinely used X-ray imaging technique for screening and diagnosing lung and chest disease, such as lung cancer and pneumonia. However, the clinical interpretation of hidden and obscured anatomy in chest X-ray images remains challenging because of the bony structures overlapping the lung area. Thus, multi-perspective imaging with a high radiation dose is often required. In this study, to address this problem, we propose a deep-learning soft-tissue decomposition method using fast fuzzy C-means (FFCM) clustering with computed tomography (CT) datasets. In this method, FFCM clustering is used to decompose a CT image into bone and soft-tissue components, which are synthesized into digitally reconstructed radiographs (DRRs) to obtain large amounts of X-ray decomposition datasets as ground truths for training. In the training stage, chest and soft-tissue DRRs are used as input and label data, respectively, for training the network. During testing, a chest X-ray image is fed into the trained network to output the corresponding soft-tissue image component. To verify the efficacy of the proposed method, we conducted a feasibility study on clinical CT datasets available from the AAPM Lung CT Challenge. According to our results, the proposed method effectively yielded soft-tissue decomposition from chest X-ray images, which is encouraging for reducing the visual complexity of chest X-ray images. Consequently, the findings of our feasibility study indicate that the proposed method can offer a promising outcome for this purpose.
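
A minimal fuzzy c-means sketch for the clustering stage described above (two intensity clusters, fuzzifier m = 2); the paper's accelerated ("fast") variant, the CT-to-DRR synthesis, and the network training are not reproduced.

    import numpy as np

    def fuzzy_cmeans(x, k=2, m=2.0, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        u = rng.random((k, x.size))
        u /= u.sum(axis=0)                      # memberships sum to 1 per voxel
        for _ in range(iters):
            um = u ** m
            centers = um @ x / um.sum(axis=1)   # fuzzy cluster means
            d = np.abs(x[None, :] - centers[:, None]) + 1e-9
            u = d ** (-2.0 / (m - 1.0))         # standard FCM membership update
            u /= u.sum(axis=0)
        return centers, u

    rng = np.random.default_rng(1)
    ct_slice = rng.normal(40.0, 300.0, (64, 64))  # stand-in for one CT slice (HU)
    centers, u = fuzzy_cmeans(ct_slice.ravel())
    # Soft tissue = membership map of the lower-intensity cluster.
    soft = u[np.argmin(centers)].reshape(ct_slice.shape)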

Lung nodule detection from CT scans using 3D convolutional neural networks without candidate selection

  • Jenuwine, Natalia M
  • Mahesh, Sunny N
  • Furst, Jacob D
  • Raicu, Daniela S
2018 Conference Proceedings, cited 0 times
Website

Assessment of prostate cancer prognostic Gleason grade group using zonal-specific features extracted from biparametric MRI using a KNN classifier

  • Jensen, C.
  • Carl, J.
  • Boesen, L.
  • Langkilde, N. C.
  • Ostergaard, L. R.
J Appl Clin Med Phys 2019 Journal Article, cited 0 times
Website
PURPOSE: To automatically assess the aggressiveness of prostate cancer (PCa) lesions using zonal-specific image features extracted from diffusion weighted imaging (DWI) and T2W MRI. METHODS: Region of interest was extracted from DWI (peripheral zone) and T2W MRI (transitional zone and anterior fibromuscular stroma) around the center of 112 PCa lesions from 99 patients. Image histogram and texture features, 38 in total, were used together with a k-nearest neighbor classifier to classify lesions into their respective prognostic Grade Group (GG) (proposed by the International Society of Urological Pathology 2014 consensus conference). A semi-exhaustive feature search was performed (1-6 features in each feature set) and validated using threefold stratified cross validation in a one-versus-rest classification setup. RESULTS: Classifying PCa lesions into GGs resulted in AUC of 0.87, 0.88, 0.96, 0.98, and 0.91 for GG1, GG2, GG1 + 2, GG3, and GG4 + 5 for the peripheral zone, respectively. The results for transitional zone and anterior fibromuscular stroma were AUC of 0.85, 0.89, 0.83, 0.94, and 0.86 for GG1, GG2, GG1 + 2, GG3, and GG4 + 5, respectively. CONCLUSION: This study showed promising results with reasonable AUC values for classification of all GG indicating that zonal-specific imaging features from DWI and T2W MRI can be used to differentiate between PCa lesions of various aggressiveness.
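
A minimal sketch of the one-versus-rest KNN evaluation with threefold stratified cross-validation, on synthetic stand-in features:

    import numpy as np
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import StratifiedKFold, cross_val_predict
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(112, 6))      # 112 lesions x selected zonal features
    gg = rng.integers(1, 6, size=112)  # Grade Group labels 1..5 (synthetic)

    y = (gg == 3).astype(int)          # e.g. GG3 versus the rest
    cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
    prob = cross_val_predict(KNeighborsClassifier(n_neighbors=5), X, y,
                             cv=cv, method="predict_proba")[:, 1]
    print(f"GG3 one-vs-rest AUC: {roc_auc_score(y, prob):.2f}")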

Lung tumor cell classification with lightweight mobileNetV2 and attention-based SCAM enhanced faster R-CNN

  • Jenipher, V. Nisha
  • Radhika, S.
2024 Journal Article, cited 0 times
Website
Early and precise detection of lung tumor cells is paramount for providing adequate medication and improving patient survival. To achieve this, an Enhanced Faster R-CNN framework with MobileNetV2 and SCAM is proposed to improve the diagnostic accuracy of lung tumor cell classification. A U-Net architecture optimized by Stochastic Gradient Descent (SGD) is employed to carry out clinical image segmentation. The developed approach leverages the lightweight MobileNetV2 backbone network and an attention mechanism, the Spatial and Channel Attention Module (SCAM), to improve feature extraction as well as the feature representation and localization of lung tumor cells. The MobileNetV2 backbone derives valuable features from the input clinical images while reducing the complexity of the network architecture, and SCAM creates spatially and channel-wise informative features that enhance tumor-cell feature representation and localization by concentrating on important locations. To assess the efficacy of the method, several high-performance lung tumor cell classification techniques (ECNN, Lung-Retina Net, CNN-SVM, CCDC-HNN, and MTL-MGAN) and datasets (the Lung-PET-CT-Dx, LIDC-IDRI, and Chest CT-Scan images datasets) are used in the experimental evaluation. In a comprehensive comparative analysis across metrics and methods, the proposed method achieves impressive performance, with accuracy of 98.6%, specificity of 96.8%, sensitivity of 97.5%, and precision of 98.2%. The experimental outcomes also reveal that the proposed method reduces the complexity of the network and obtains improved diagnostic outcomes with the available annotated data.

Deep Neural Network Based Classifier Model for Lung Cancer Diagnosis and Prediction System in Healthcare Informatics

  • Jayaraj, D.
  • Sathiamoorthy, S.
2019 Conference Paper, cited 0 times
Lung cancer is a major deadly disease that causes mortality through uncontrolled cell growth. This problem has drawn increasing attention from physicians as well as academicians seeking efficient diagnosis models; a novel method for the automated identification of lung nodules therefore becomes essential, and it forms the motivation of this study. This paper presents a new deep learning classification model for lung cancer diagnosis. The presented model involves four main steps, namely preprocessing, feature extraction, segmentation, and classification. A particle swarm optimization (PSO) algorithm is used for segmentation and a deep neural network (DNN) is applied for classification. The presented PSO-DNN model is tested against a set of sample lung images and the results verify the effectiveness of the projected model on all the applied images.
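The abstract does not detail its PSO-based segmentation step. As one plausible reading, the sketch below uses a particle swarm to search for an intensity threshold that minimizes an Otsu-style within-class variance; the objective, coefficients, and all names are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def intraclass_variance(img, t):
    """Weighted within-class variance (Otsu objective) for threshold t."""
    lo, hi = img[img <= t], img[img > t]
    if lo.size == 0 or hi.size == 0:
        return np.inf
    return (lo.size * lo.var() + hi.size * hi.var()) / img.size

def pso_threshold(img, n_particles=20, n_iter=40, seed=0):
    """Particle swarm search for a single segmentation threshold."""
    rng = np.random.default_rng(seed)
    lo, hi = float(img.min()), float(img.max())
    x = rng.uniform(lo, hi, n_particles)          # particle positions
    v = np.zeros(n_particles)                     # particle velocities
    pbest = x.copy()
    pcost = np.array([intraclass_variance(img, t) for t in x])
    gbest = pbest[np.argmin(pcost)]
    for _ in range(n_iter):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        cost = np.array([intraclass_variance(img, t) for t in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        gbest = pbest[np.argmin(pcost)]
    return gbest
```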

Multistage Lung Cancer Detection and Prediction Using Deep Learning

  • Jawarkar, Jay
  • Solanki, Nishit
  • Vaishnav, Meet
  • Vichare, Harsh
  • Degadwala, Sheshang
International Journal of Scientific Research in Science, Engineering and Technology 2021 Journal Article, cited 0 times
Website
Lung cancer has long been a leading cause of death worldwide, claiming more than a million lives each year. When lung tissue is damaged, abnormal cells can clump together to form a tumour; a malignant tumour is a group of aggressive, proliferating cells that can invade and attack nearby tissue. Detecting lung cancer at an early stage is therefore essential. At present, various systems based on image preprocessing and machine learning methodologies are used for lung tumour imaging, with CT scan images used to detect and classify malignant lung nodules early. In this paper, we present a method for identifying lung cancer patients at an early stage. We consider shape and texture features of CT scan images for classification, apply several machine learning methodologies, and compare their outcomes. Keywords : Decision Tree, KNN, RF, DF, Machine Learning

ALNett: A cluster layer deep convolutional neural network for acute lymphoblastic leukemia classification

  • Jawahar, M.
  • H, S.
  • L, J. A.
  • Gandomi, A. H.
Comput Biol Med 2022 Journal Article, cited 0 times
Website
Acute Lymphoblastic Leukemia (ALL) is a cancer in which the bone marrow overproduces undeveloped lymphocytes. Over 6500 cases of ALL are diagnosed every year in the United States in both adults and children, accounting for around 25% of pediatric cancers, and the trend continues to rise. With advances in AI and big data analytics, early diagnosis of ALL can be used to aid the clinical decisions of physicians and radiologists. This research proposes a deep neural network-based model (ALNett) that employs depth-wise convolution with different dilation rates to classify microscopic white blood cell images. Specifically, the cluster layers encompass convolution and max-pooling followed by a normalization process that provides enriched structural and contextual details to extract robust local and global features from the microscopic images for the accurate prediction of ALL. The performance of the model was compared with various pre-trained models, including VGG16, ResNet-50, GoogleNet, and AlexNet, based on precision, recall, accuracy, F1 score, loss, and receiver operating characteristic (ROC) curves. Experimental results showed that the proposed ALNett model yielded the highest classification accuracy of 91.13% and an F1 score of 0.96 with less computational complexity. ALNett demonstrated promising ALL categorization and outperformed the other pre-trained models.

Wavelet Convolution Neural Network for Classification of Spiculated Findings in Mammograms

  • Jasionowska, Magdalena
  • Gacek, Aleksandra
2019 Book Section, cited 0 times
The subject of this paper is computer-aided recognition of spiculated findings in low-contrast noisy mammograms, such as architectural distortions and spiculated masses. The issue of computer-aided detection still remains unresolved, especially for architectural distortions. The methodology applied was based on a wavelet convolution neural network. The originality of the proposed method lies in the way the input images are created: they are maximum-value maps based on three wavelet decomposition subbands (HL, LH, HH), each describing local details in the original image. Moreover, two types of convolution neural network architecture were optimized and empirically verified. The experimental study was conducted on the basis of 1585 regions of interest (512 × 512 pixels) taken from the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM), containing both normal (1191) and abnormal (406) breast tissue images including clinically confirmed architectural distortions (141) and spiculated masses (265). With the use of a wavelet convolutional neural network with a reverse biorthogonal wavelet, the recognition accuracy of both types of pathologies reached over 87%, with 85% for architectural distortions and 88% for spiculated masses.
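A minimal sketch of the kind of input construction described above: a single-level 2D wavelet decomposition whose three detail subbands are fused into a maximum-magnitude map. The specific wavelet ('rbio2.2', from PyWavelets' reverse biorthogonal family) and the use of absolute values are assumptions for illustration, not the paper's exact choices.

```python
import numpy as np
import pywt

def max_detail_map(roi):
    """Maximum-magnitude map over the three detail subbands of a
    single-level 2D wavelet decomposition (illustrative sketch)."""
    _, (lh, hl, hh) = pywt.dwt2(roi, "rbio2.2")  # reverse biorthogonal
    return np.maximum.reduce([np.abs(lh), np.abs(hl), np.abs(hh)])

roi = np.random.rand(512, 512)        # stand-in for a mammogram ROI
feature_map = max_detail_map(roi)     # candidate input image for the CNN
```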

Lung Cancer Detection: A Classification Approach Utilizing Oversampling and Support Vector Machines

  • Jara-Gavilanes, Adolfo
  • Robles-Bykbaev, Vladimir
SN Computer Science 2023 Journal Article, cited 0 times
Lung cancer is the type of cancer that causes the most deaths each year, and it is also among the cancers with the lowest survival rates, which makes it a worldwide health problem. Lung cancer has two subtypes, Non-Small Cell Lung Cancer (NSCLC) and Small Cell Lung Cancer (SCLC), which can be hard for doctors to detect and differentiate. Therefore, in this work, we present a method to help doctors with this issue. It consists of three phases. Image preprocessing is the first phase: it starts by gathering the data, after which PET scans are selected, all scans are converted to grayscale images, and the images are joined to create a video from each patient's scan. Next, the data extraction phase starts. In this phase, frames are extracted from each video and flattened and blended to create a row of information per frame; a dataframe is thus created in which each row represents a patient and each column a pixel value. To obtain better results, an oversampling technique is applied so that the classes are balanced. Following this, a dimensionality reduction technique is applied to reduce the number of columns produced by the previous steps and to check whether it improves the results yielded by each model. Subsequently, the model evaluation phase begins, in which two models are created: a Support Vector Machine (SVM) and a Random Forest. The SVM emerged as the top-performing model, with 97% accuracy, 98% precision, and 97% sensitivity. This method could also be applied to detect and classify other diseases imaged with PET scans.

Non-invasive tumor genotyping using radiogenomic biomarkers, a systematic review and oncology-wide pathway analysis

  • Jansen, Robin W
  • van Amstel, Paul
  • Martens, Roland M
  • Kooi, Irsan E
  • Wesseling, Pieter
  • de Langen, Adrianus J
  • Menke-Van der Houven, Catharina W
Oncotarget 2018 Journal Article, cited 0 times
Website

Prediction of liver Dmean for proton beam therapy using deep learning and contour-based data augmentation

  • Jampa-Ngern, S.
  • Kobashi, K.
  • Shimizu, S.
  • Takao, S.
  • Nakazato, K.
  • Shirato, H.
J Radiat Res 2021 Journal Article, cited 0 times
Website
The prediction of liver Dmean with 3-dimensional radiation treatment planning (3DRTP) is time consuming in the selection of proton beam therapy (PBT), and deep learning prediction generally requires large and tumor-specific databases. We developed a simple dose prediction tool (SDP) using deep learning and a novel contour-based data augmentation (CDA) approach and assessed its usability. We trained the SDP to predict the liver Dmean immediately. Five and two computed tomography (CT) data sets of actual patients with liver cancer were used for the training and validation, respectively. Data augmentation was performed by artificially embedding 199 contours of virtual clinical target volumes (CTVs) into the CT images of each patient. The data sets of the CTVs and organs at risk (OARs) were labeled with liver Dmean for six different treatment plans using two-dimensional calculations assuming all tissue densities to be 1.0. The validated model was tested using 10 unlabeled CT data sets of actual patients. Only contouring of the liver and CTV was required as input. The mean relative error (MRE), mean percentage error (MPE), and regression coefficient between the planned and predicted Dmean were 0.1637, 6.6%, and 0.9455, respectively. The mean time required for the inference of liver Dmean of the six different treatment plans for a patient was 4.47 +/- 0.13 seconds. We conclude that the SDP is cost-effective and usable for gross estimation of liver Dmean in the clinic, although its accuracy should be improved further if liver Dmean estimates compatible with 3DRTP are needed.

Enhanced-Quality Gan (EQ-GAN) on Lung CT Scans: Toward Truth and Potential Hallucinations

  • Jammes-Floreani, Martin
  • Laine, Andrew F.
  • Angelini, Elsa D.
2021 Conference Proceedings, cited 0 times
Website
Lung Computed Tomography (CT) scans are extensively used to screen lung diseases. Strategies such as large slice spacing and low-dose CT scans are often preferred to reduce radiation exposure and therefore the risk to patients' health. The trade-off is a significant degradation of image quality and/or resolution. In this work we investigate a generative adversarial network (GAN) for lung CT image enhanced quality (EQ). Our EQ-GAN is trained on a high-quality lung CT cohort to recover the visual quality of scans degraded by blur and noise. The capability of our trained GAN to generate EQ CT scans is further illustrated on two test cohorts. Results confirm gains in visual quality metrics, remarkable visual enhancement of vessels, airways and lung parenchyma, as well as other enhancement patterns that require further investigation. We also compared automatic lung lobe segmentation on original versus EQ scans. Average Dice scores vary between lobes and can be as low as 0.3, and EQ scans enable segmentation of some lobes missed in the original scans. This paves the way to using EQ as pre-processing for lung lobe segmentation, to further research evaluating how EQ adds robustness to airway and vessel segmentation, and to investigating anatomical details revealed in EQ scans.

Explainable Lung Nodule Malignancy Classification from CT Scans

  • Jamdade, Vaishnavi Avinash
2022 Thesis, cited 0 times
Website
We present an AI-assisted approach to classifying the malignancy of lung nodules in CT scans for explainable AI-assisted lung cancer screening. We evaluate this explainable malignancy classification against the LIDC-IDRI dataset, which includes biomarkers from radiologists' annotations and thereby provides a training dataset for nodule malignancy suspicion and other findings. The algorithm employs a 3D Convolutional Neural Network (CNN) to predict both the malignancy suspicion level and the biomarker attributes. Some biomarkers, such as malignancy and subtlety, are ordinal in nature, while others, such as internal structure and calcification, are categorical. Our approach is uniquely able to predict a multitude of fields, estimating not only malignancy but also many other correlated biomarker variables. We evaluate the malignancy classification algorithm in several ways, including the accuracy of malignancy screening and comparable metrics for the biomarker fields.

Integrative analysis of diffusion-weighted MRI and genomic data to inform treatment of glioblastoma

  • Jajamovich, Guido H
  • Valiathan, Chandni R
  • Cristescu, Razvan
  • Somayajula, Sangeetha
Journal of Neuro-Oncology 2016 Journal Article, cited 4 times
Website
Gene expression profiling from glioblastoma (GBM) patients enables characterization of cancer into subtypes that can be predictive of response to therapy. An integrative analysis of imaging and gene expression data can potentially be used to obtain novel biomarkers that are closely associated with the genetic subtype and gene signatures and thus provide a noninvasive approach to stratify GBM patients. In this retrospective study, we analyzed the expression of 12,042 genes for 558 patients from The Cancer Genome Atlas (TCGA). Among these patients, 50 patients had magnetic resonance imaging (MRI) studies including diffusion weighted (DW) MRI in The Cancer Imaging Archive (TCIA). We identified the contrast enhancing region of the tumors using the pre- and post-contrast T1-weighted MRI images and computed the apparent diffusion coefficient (ADC) histograms from the DW-MRI images. Using the gene expression data, we classified patients into four molecular subtypes, determined the number and composition of genes modules using the gap statistic, and computed gene signature scores. We used logistic regression to find significant predictors of GBM subtypes. We compared the predictors for different subtypes using Mann-Whitney U tests. We assessed detection power using area under the receiver operating characteristic (ROC) analysis. We computed Spearman correlations to determine the associations between ADC and each of the gene signatures. We performed gene enrichment analysis using Ingenuity Pathway Analysis (IPA). We adjusted all p values using the Benjamini and Hochberg method. The mean ADC was a significant predictor for the neural subtype. Neural tumors had a significantly lower mean ADC compared to non-neural tumors ([Formula: see text]), with mean ADC of [Formula: see text] and [Formula: see text] for neural and non-neural tumors, respectively. Mean ADC showed an area under the ROC of 0.75 for detecting neural tumors. We found eight gene modules in the GBM cohort. The mean ADC was significantly correlated with the gene signature related with dendritic cell maturation ([Formula: see text], [Formula: see text]). Mean ADC could be used as a biomarker of a gene signature associated with dendritic cell maturation and to assist in identifying patients with neural GBMs, known to be resistant to aggressive standard of care.

Lung nodule segmentation using Salp Shuffled Shepherd Optimization Algorithm-based Generative Adversarial Network

  • Jain, S.
  • Indora, S.
  • Atal, D. K.
Comput Biol Med 2021 Journal Article, cited 1 times
Website
Lung nodule segmentation is an exciting area of research for the effective detection of lung cancer. One of the significant challenges in detecting lung cancer is accuracy, which is affected by visual deviations and heterogeneity in the lung nodules. Hence, to improve the accuracy of the segmentation process, a Salp Shuffled Shepherd Optimization Algorithm-based Generative Adversarial Network (SSSOA-based GAN) model is developed in this research for lung nodule segmentation. The SSSOA is a hybrid optimization algorithm developed by integrating the Salp Swarm Algorithm (SSA) and the Shuffled Shepherd Optimization Algorithm (SSOA). Artefacts in the input Computed Tomography (CT) image are removed by pre-processing with a Gaussian filter. The pre-processed image is subjected to lung lobe segmentation, performed with deep joint segmentation to extract the appropriate regions. The lung nodule segmentation is performed using the GAN, which is trained using the SSSOA to effectively segment the lung nodule from the lung lobe image. Metrics such as the Dice coefficient, accuracy, and Jaccard similarity are used to evaluate the performance. The developed SSSOA-based GAN method obtained a maximum accuracy of 0.9387, a maximum Dice coefficient of 0.7986, and a maximum Jaccard similarity of 0.8026, respectively, compared with the existing lung nodule segmentation methods.

Outcome prediction in patients with glioblastoma by using imaging, clinical, and genomic biomarkers: focus on the nonenhancing component of the tumor

  • Jain, R.
  • Poisson, L. M.
  • Gutman, D.
  • Scarpace, L.
  • Hwang, S. N.
  • Holder, C. A.
  • Wintermark, M.
  • Rao, A.
  • Colen, R. R.
  • Kirby, J.
  • Freymann, J.
  • Jaffe, C. C.
  • Mikkelsen, T.
  • Flanders, A.
Radiology 2014 Journal Article, cited 86 times
Website
PURPOSE: To correlate patient survival with morphologic imaging features and hemodynamic parameters obtained from the nonenhancing region (NER) of glioblastoma (GBM), along with clinical and genomic markers. MATERIALS AND METHODS: An institutional review board waiver was obtained for this HIPAA-compliant retrospective study. Forty-five patients with GBM underwent baseline imaging with contrast material-enhanced magnetic resonance (MR) imaging and dynamic susceptibility contrast-enhanced T2*-weighted perfusion MR imaging. Molecular and clinical predictors of survival were obtained. Single and multivariable models of overall survival (OS) and progression-free survival (PFS) were explored with Kaplan-Meier estimates, Cox regression, and random survival forests. RESULTS: Worsening OS (log-rank test, P = .0103) and PFS (log-rank test, P = .0223) were associated with increasing relative cerebral blood volume of NER (rCBVNER), which was higher with deep white matter involvement (t test, P = .0482) and poor NER margin definition (t test, P = .0147). NER crossing the midline was the only morphologic feature of NER associated with poor survival (log-rank test, P = .0125). Preoperative Karnofsky performance score (KPS) and resection extent (n = 30) were clinically significant OS predictors (log-rank test, P = .0176 and P = .0038, respectively). No genomic alterations were associated with survival, except patients with high rCBVNER and wild-type epidermal growth factor receptor (EGFR) mutation had significantly poor survival (log-rank test, P = .0306; area under the receiver operating characteristic curve = 0.62). Combining resection extent with rCBVNER marginally improved prognostic ability (permutation, P = .084). Random forest models of presurgical predictors indicated rCBVNER as the top predictor; also important were KPS, age at diagnosis, and NER crossing the midline. A multivariable model containing rCBVNER, age at diagnosis, and KPS can be used to group patients with more than 1 year of difference in observed median survival (0.49-1.79 years). CONCLUSION: Patients with high rCBVNER and NER crossing the midline and those with high rCBVNER and wild-type EGFR mutation showed poor survival. In multivariable survival models, however, rCBVNER provided unique prognostic information that went above and beyond the assessment of all NER imaging features, as well as clinical and genomic features.

Correlation of perfusion parameters with genes related to angiogenesis regulation in glioblastoma: a feasibility study

  • Jain, R
  • Poisson, L
  • Narang, J
  • Scarpace, L
  • Rosenblum, ML
  • Rempel, S
  • Mikkelsen, T
American Journal of Neuroradiology 2012 Journal Article, cited 39 times
Website
BACKGROUND AND PURPOSE: Integration of imaging and genomic data is critical for a better understanding of gliomas, particularly considering the increasing focus on the use of imaging biomarkers for patient survival and treatment response. The purpose of this study was to correlate CBV and PS measured by using PCT with the genes regulating angiogenesis in GBM. MATERIALS AND METHODS: Eighteen patients with WHO grade IV gliomas underwent pretreatment PCT and measurement of CBV and PS values from enhancing tumor. Tumor specimens were analyzed by TCGA by using Human Gene Expression Microarrays and were interrogated for correlation between CBV and PS estimates across the genome. We used the GO biologic process pathways for angiogenesis regulation to select genes of interest. RESULTS: We observed expression levels for 92 angiogenesis-associated genes (332 probes), 19 of which had significant correlation with PS and 9 of which had significant correlation with CBV (P < .05). Proangiogenic genes such as TNFRSF1A (PS = 0.53, P = .024), HIF1A (PS = 0.62, P = .0065), KDR (CBV = 0.60, P = .0084; PS = 0.59, P = .0097), TIE1 (CBV = 0.54, P = .022; PS = 0.49, P = .039), and TIE2/TEK (CBV = 0.58, P = .012) showed a significant positive correlation; whereas antiangiogenic genes such as VASH2 (PS = -0.72, P = .00011) showed a significant inverse correlation. CONCLUSIONS: Our findings are provocative, with some of the proangiogenic genes showing a positive correlation and some of the antiangiogenic genes showing an inverse correlation with tumor perfusion parameters, suggesting a molecular basis for these imaging biomarkers; however, this should be confirmed in a larger patient population.

Genomic mapping and survival prediction in glioblastoma: molecular subclassification strengthened by hemodynamic imaging biomarkers

  • Jain, Rajan
  • Poisson, Laila
  • Narang, Jayant
  • Gutman, David
  • Scarpace, Lisa
  • Hwang, Scott N
  • Holder, Chad
  • Wintermark, Max
  • Colen, Rivka R
  • Kirby, Justin
  • Freymann, John
  • Brat, Daniel J
  • Jaffe, Carl
  • Mikkelsen, Tom
Radiology 2013 Journal Article, cited 99 times
Website
PURPOSE: To correlate tumor blood volume, measured by using dynamic susceptibility contrast material-enhanced T2*-weighted magnetic resonance (MR) perfusion studies, with patient survival and determine its association with molecular subclasses of glioblastoma (GBM). MATERIALS AND METHODS: This HIPAA-compliant retrospective study was approved by institutional review board. Fifty patients underwent dynamic susceptibility contrast-enhanced T2*-weighted MR perfusion studies and had gene expression data available from the Cancer Genome Atlas. Relative cerebral blood volume (rCBV) (maximum rCBV [rCBV(max)] and mean rCBV [rCBV(mean)]) of the contrast-enhanced lesion as well as rCBV of the nonenhanced lesion (rCBV(NEL)) were measured. Patients were subclassified according to the Verhaak and Phillips classification schemas, which are based on similarity to defined genomic expression signature. We correlated rCBV measures with the molecular subclasses as well as with patient overall survival by using Cox regression analysis. RESULTS: No statistically significant differences were noted for rCBV(max), rCBV(mean) of contrast-enhanced lesion or rCBV(NEL) between the four Verhaak classes or the three Phillips classes. However, increased rCBV measures are associated with poor overall survival in GBM. The rCBV(max) (P = .0131) is the strongest predictor of overall survival regardless of potential confounders or molecular classification. Interestingly, including the Verhaak molecular GBM classification in the survival model clarifies the association of rCBV(mean) with patient overall survival (hazard ratio: 1.46, P = .0212) compared with rCBV(mean) alone (hazard ratio: 1.25, P = .1918). Phillips subclasses are not predictive of overall survival nor do they affect the predictive ability of rCBV measures on overall survival. CONCLUSION: The rCBV(max) measurements could be used to predict patient overall survival independent of the molecular subclasses of GBM; however, Verhaak classifiers provided additional information, suggesting that molecular markers could be used in combination with hemodynamic imaging biomarkers in the future.

SEMC-Net: A Shared-Encoder Multi-Class Learner

  • Jain, Rahul
  • Dixit, Satvik
  • Kumar, Vikas
  • Verma, Bindu
2023 Conference Paper, cited 0 times
Website
Brain tumour segmentation is a crucial task in medical imaging that involves identifying and delineating the boundaries of tumour tissues in the brain from MRI scans. Accurate segmentation plays an indispensable role in the diagnosis, treatment planning, and monitoring of patients with brain tumours. This study presents a novel approach to address the class imbalance prevalent in brain tumour segmentation using a shared-encoder multi-class segmentation framework. The proposed method involves training a single encoder and multiple decoder class learners, each designed to learn the feature representation of a certain class subset, with the shared encoder extracting common features across all classes. The outputs of the class learners are combined and propagated to a meta-learner to obtain the final segmentation map. We evaluate the method on a publicly available brain tumour segmentation dataset (BraTS20) and assess performance against a 2D U-Net model trained on all classes, using standard evaluation metrics for multi-class semantic segmentation. The IoU and DSC scores for the proposed architecture stand at 0.644 and 0.731, respectively, compared to 0.604 and 0.690 obtained by the base model. Furthermore, the model exhibits significant performance boosts in individual classes, as evidenced by DSC scores of 0.588, 0.734, and 0.684 for the necrotic tumour core, peritumoral edema, and GD-enhancing tumour classes, respectively; the 2D U-Net yields DSC scores of 0.554, 0.699, and 0.641 for the same classes. The approach exhibits notable performance gains in segmenting the T1-Gd class, which not only poses a formidable challenge for segmentation but also holds paramount clinical significance for radiation therapy.

Prediction of Treatment Response to Neoadjuvant Chemotherapy for Breast Cancer via Early Changes in Tumor Heterogeneity Captured by DCE-MRI Registration

  • Jahani, Nariman
  • Cohen, Eric
  • Hsieh, Meng-Kang
  • Weinstein, Susan P
  • Pantalone, Lauren
  • Hylton, Nola
  • Newitt, David
  • Davatzikos, Christos
  • Kontos, Despina
2019 Journal Article, cited 0 times
Website
We analyzed DCE-MR images from 132 women with locally advanced breast cancer from the I-SPY1 trial to evaluate changes of intra-tumor heterogeneity for augmenting early prediction of pathologic complete response (pCR) and recurrence-free survival (RFS) after neoadjuvant chemotherapy (NAC). Utilizing image registration, voxel-wise changes including tumor deformations and changes in DCE-MRI kinetic features were computed to characterize heterogeneous changes within the tumor. Using five-fold cross-validation, logistic regression and Cox regression were performed to model pCR and RFS, respectively. The extracted imaging features were evaluated in augmenting established predictors, including functional tumor volume (FTV) and histopathologic and demographic factors, using the area under the curve (AUC) and the C-statistic as performance measures. The extracted voxel-wise features were also compared to analogous conventional aggregated features to evaluate the potential advantage of voxel-wise analysis. Voxel-wise features improved prediction of pCR (AUC = 0.78 (±0.03) vs 0.71 (±0.04), p < 0.05) and RFS (C-statistic = 0.76 (±0.05) vs 0.63 (±0.01), p < 0.05), while models based on analogous aggregate imaging features did not show appreciable performance changes (p > 0.05). Furthermore, all selected voxel-wise features demonstrated significant association with outcome (p < 0.05). Thus, precise measures of voxel-wise changes in tumor heterogeneity extracted from registered DCE-MRI scans can improve early prediction of neoadjuvant treatment outcomes in locally advanced breast cancer.

Quantitative imaging in radiation oncology: An emerging science and clinical service

  • Jaffray, DA
  • Chung, C
  • Coolens, C
  • Foltz, W
  • Keller, H
  • Menard, C
  • Milosevic, M
  • Publicover, J
  • Yeung, I
2015 Conference Proceedings, cited 9 times
Website

Retina U-Net: Embarrassingly Simple Exploitation of Segmentation Supervision for Medical Object Detection

  • Jaeger, PF
  • Kohl, SAA
  • Bickelhaupt, S
  • Isensee, F
  • Kuder, TA
  • Schlemmer, H-P
  • Maier-Hein, KH
2020 Conference Paper, cited 33 times
Website
The task of localizing and categorizing objects in medical images often remains formulated as a semantic segmentation problem. This approach, however, only indirectly solves the coarse localization task by predicting pixel-level scores, requiring ad-hoc heuristics when mapping back to object-level scores. State-of-the-art object detectors, on the other hand, allow for individual object scoring in an end-to-end fashion, while ironically trading in the ability to exploit the full pixel-wise supervision signal. This can be particularly disadvantageous in the setting of medical image analysis, where data sets are notoriously small. In this paper, we propose Retina U-Net, a simple architecture, which naturally fuses the Retina Net one-stage detector with the U-Net architecture widely used for semantic segmentation in medical images. The proposed architecture recaptures discarded supervision signals by complementing object detection with an auxiliary task in the form of semantic segmentation without introducing the additional complexity of previously proposed two-stage detectors. We evaluate the importance of full segmentation supervision on two medical data sets, provide an in-depth analysis on a series of toy experiments and show how the corresponding performance gain grows in the limit of small data sets. Retina U-Net yields strong detection performance only reached by its more complex two-staged counterparts. Our framework including all methods implemented for operation on 2D and 3D images is available at github.com/pfjaeger/medicaldetectiontoolkit.

A divide and conquer approach to maximise deep learning mammography classification accuracies

  • Jaamour, A.
  • Myles, C.
  • Patel, A.
  • Chen, S. J.
  • McMillan, L.
  • Harris-Birtill, D.
PLoS One 2023 Journal Article, cited 0 times
Website
Breast cancer claims 11,400 lives on average every year in the UK, making it one of the deadliest diseases. Mammography is the gold standard for detecting early signs of breast cancer, which can help cure the disease during its early stages. However, incorrect mammography diagnoses are common and may harm patients through unnecessary treatments and operations (or a lack of treatment). Therefore, systems that can learn to detect breast cancer on their own could help reduce the number of incorrect interpretations and missed cases. Various deep learning techniques, which can be used to implement a system that learns how to detect instances of breast cancer in mammograms, are explored throughout this paper. Convolutional Neural Networks (CNNs) are used as part of a pipeline based on deep learning techniques. A divide and conquer approach is followed to analyse the effects on performance and efficiency when utilising diverse deep learning techniques such as varying network architectures (VGG19, ResNet50, InceptionV3, DenseNet121, MobileNetV2), class weights, input sizes, image ratios, pre-processing techniques, transfer learning, dropout rates, and types of mammogram projections. This approach serves as a starting point for model development in mammography classification tasks. Practitioners can benefit from this work by using the divide and conquer results to select the most suitable deep learning techniques for their case out-of-the-box, thus reducing the need for extensive exploratory experimentation. Multiple techniques are found to provide accuracy gains relative to a general baseline (a VGG19 model using uncropped 512 × 512 pixel input images with a dropout rate of 0.2 and a learning rate of 1 × 10^-3) on the Curated Breast Imaging Subset of DDSM (CBIS-DDSM) dataset. These techniques involve transferring pre-trained ImageNet weights to a MobileNetV2 architecture, with pre-trained weights from a binarised version of the mini Mammography Image Analysis Society (mini-MIAS) dataset applied to the fully connected layers of the model, coupled with class weights to alleviate class imbalance, and splitting CBIS-DDSM samples between images of masses and calcifications. Using these techniques, a 5.6% gain in accuracy over the baseline model was accomplished. Other deep learning techniques from the divide and conquer approach, such as larger image sizes, do not yield increased accuracies without the use of image pre-processing techniques such as Gaussian filtering, histogram equalisation and input cropping.
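A minimal Keras sketch of the kind of transfer-learning setup described above: ImageNet weights in a MobileNetV2 backbone, a new dense head, and class weights passed at fit time. The input size, dropout rate, head design, and weight ratio are assumptions, and the paper's additional mini-MIAS pre-training of the dense layers is not reproduced here.

```python
import tensorflow as tf

# MobileNetV2 backbone with ImageNet weights, new classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                           # freeze backbone features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # benign vs malignant
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])

# Class weights counteract class imbalance during training, e.g.:
# model.fit(train_ds, validation_data=val_ds, epochs=20,
#           class_weight={0: 1.0, 1: 2.5})       # illustrative ratio
```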

NextMed, Augmented and Virtual Reality platform for 3D medical imaging visualization: Explanation of the software platform developed for 3D models visualization related with medical images using Augmented and Virtual Reality technology

  • Izard, Santiago González
  • Plaza, Óscar Alonso
  • Torres, Ramiro Sánchez
  • Méndez, Juan Antonio Juanes
  • García-Peñalvo, Francisco José
2019 Conference Proceedings, cited 0 times
Website
The visualization of radiological results with more advanced techniques than the current ones, such as Augmented Reality and Virtual Reality technologies, represents a great advance for medical professionals, as it removes the need to mentally reconstruct three-dimensional anatomy as a prerequisite for understanding medical images. The problem is that this requires segmenting the anatomical areas of interest, which currently involves human intervention. The Nextmed project is presented as a complete solution that includes DICOM image import, automatic segmentation of certain anatomical structures, 3D mesh generation of the segmented area, and a visualization engine with Augmented Reality and Virtual Reality, all thanks to different software platforms that have been implemented and detailed, including results obtained from real patients. We focus on the visualization platform, which uses both Augmented and Virtual Reality technologies to allow medical professionals to work with 3D model representations of medical images in a new way, taking advantage of these technologies.

Heuristic Oncological Prognosis Evaluator (HOPE): Deep-Learning Framework to Detect Multiple Cancers

  • Iyer, Anu
  • Conrad, Lee
  • Prior, Fred
Journal of Student Research 2021 Journal Article, cited 0 times
Website
Cancer is the common name used to categorize a collection of diseases. In the United States, there were an estimated 1.8 million new cancer cases and 600,000 cancer deaths in 2020. Though it has been proven that an early diagnosis can significantly reduce cancer mortality, cancer screening is inaccessible to much of the world's population. Machine learning approaches are increasingly successful in image-based diagnosis, disease prognosis, and risk assessment. A literature search of the Google Scholar and PubMed databases from January 2020 to June 2021 determined that currently no machine learning model (n=0/417) has an accuracy of 90% or higher in diagnosing multiple cancers. We propose our model HOPE, the Heuristic Oncological Prognosis Evaluator, a transfer learning diagnostic tool for the screening of patients with common cancers. By applying this approach to magnetic resonance imaging (MRI) and digital whole slide pathology images, HOPE 2.0 demonstrates an overall accuracy of 95.52% in classifying brain, breast, colorectal, and lung cancer. HOPE 2.0 is a unique state-of-the-art model, as it possesses the ability to analyze multiple types of image data (radiology and pathology) and has an accuracy higher than existing models. HOPE 2.0 may ultimately aid in accelerating the diagnosis of multiple cancer types, resulting in improved clinical outcomes compared to previous research that focused on singular cancer diagnosis.

Automated detection and segmentation of thoracic lymph nodes from CT using 3D foveal fully convolutional neural networks

  • Iuga, A. I.
  • Carolus, H.
  • Hoink, A. J.
  • Brosch, T.
  • Klinder, T.
  • Maintz, D.
  • Persigehl, T.
  • Baessler, B.
  • Pusken, M.
BMC Med Imaging 2021 Journal Article, cited 0 times
Website
BACKGROUND: In oncology, the correct determination of nodal metastatic disease is essential for patient management, as patient treatment and prognosis are closely linked to the stage of the disease. The aim of the study was to develop a tool for automatic 3D detection and segmentation of lymph nodes (LNs) in computed tomography (CT) scans of the thorax using a fully convolutional neural network based on 3D foveal patches. METHODS: The training dataset was collected from the Computed Tomography Lymph Nodes Collection of The Cancer Imaging Archive, containing 89 contrast-enhanced CT scans of the thorax. A total of 4275 LNs were segmented semi-automatically by a radiologist, assessing the entire 3D volume of the LNs. Using this data, a fully convolutional neural network based on 3D foveal patches was trained with fourfold cross-validation. Testing was performed on an unseen dataset containing 15 contrast-enhanced CT scans of patients who were referred upon suspicion or for staging of bronchial carcinoma. RESULTS: The algorithm achieved a good overall performance with a total detection rate of 76.9% for enlarged LNs during fourfold cross-validation in the training dataset, with 10.3 false positives per volume, and of 69.9% in the unseen testing dataset. In the training dataset a better detection rate was observed for enlarged LNs compared to smaller LNs, the detection rates for LNs with a short-axis diameter (SAD) ≥ 20 mm and SAD 5-10 mm being 91.6% and 62.2% (p < 0.001), respectively. The best detection rates were obtained for LNs located in Level 4R (83.6%) and Level 7 (80.4%). CONCLUSIONS: The proposed 3D deep learning approach achieves an overall good performance in the automatic detection and segmentation of thoracic LNs and shows reasonable generalizability, yielding the potential to facilitate detection during routine clinical work and to enable radiomics research without observer bias.
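For reference, the two headline numbers above (detection rate and false positives per volume) reduce to simple ratios. The sketch below computes them from hypothetical counts, which are not the study's tallies.

```python
def detection_metrics(n_true_nodes, n_detected, n_false_pos, n_volumes):
    """Sensitivity and false positives per volume for lymph node detection."""
    return n_detected / n_true_nodes, n_false_pos / n_volumes

# Hypothetical counts chosen only to illustrate the computation.
sens, fp_per_vol = detection_metrics(n_true_nodes=100, n_detected=77,
                                     n_false_pos=1030, n_volumes=100)
print(f"detection rate {sens:.1%}, {fp_per_vol:.1f} FPs/volume")
```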

Magnetic resonance image features identify glioblastoma phenotypic subtypes with distinct molecular pathway activities

  • Itakura, Haruka
  • Achrol, Achal S
  • Mitchell, Lex A
  • Loya, Joshua J
  • Liu, Tiffany
  • Westbroek, Erick M
  • Feroze, Abdullah H
  • Rodriguez, Scott
  • Echegaray, Sebastian
  • Azad, Tej D
Science Translational Medicine 2015 Journal Article, cited 90 times
Website

EfficientNet and multi-path convolution with multi-head attention network for brain tumor grade classification

  • Isunuri, B. Venkateswarlu
  • Kakarla, Jagadeesh
2023 Journal Article, cited 0 times
Website
Grade classification is a challenging task in brain tumor image classification. Contemporary models employ transfer learning to attain better performance, but existing models ignore the semantic features of a tumor when making classification decisions. Moreover, contemporary research requires an optimized model that performs well on larger datasets. Thus, we propose an EfficientNet and multi-path convolution with multi-head attention network for grade classification. We use a pre-trained EfficientNetB4 in the feature extraction phase; a multi-path convolution with multi-head attention network then performs feature enhancement. Finally, the features obtained from the above step are classified using a fully connected double dense network. We utilize TCIA repository datasets to generate a three-class (normal/low-grade/high-grade) classification dataset. Our model achieves 98.35% accuracy and a 97.32% Jaccard coefficient, outperforming its competing models in all key metrics. Further, we achieve similar performance on a noisy dataset.

Lung Cancer Detection and Classification using Machine Learning Algorithm

  • Ismail, Meraj Begum Shaikh
Turkish Journal of Computer and Mathematics Education (TURCOMAT) 2021 Journal Article, cited 0 times
Website
The main objective of this research paper is to detect lung cancer at an early stage and to explore the accuracy levels of various machine learning algorithms. After a systematic literature study, we found that some classifiers have low accuracy while others achieve higher accuracy, though none come close to 100%; low accuracy and high implementation cost often stem from improper handling of DICOM images. Many different types of images are used in medical image processing, but computed tomography (CT) scans are generally preferred because they contain less noise. Deep learning has proven to be an excellent method for medical image processing, lung nodule detection and classification, feature extraction, and lung cancer stage prediction. In the first stage of this system, image processing techniques are used to extract lung regions. Segmentation is done using K-means. Features are extracted from the segmented images and classification is performed using various machine learning algorithms. The performance of the proposed approaches is evaluated based on accuracy, sensitivity, specificity, and classification time.

Fully automated deep-learning section-based muscle segmentation from CT images for sarcopenia assessment

  • Islam, S.
  • Kanavati, F.
  • Arain, Z.
  • Da Costa, O. F.
  • Crum, W.
  • Aboagye, E. O.
  • Rockall, A. G.
Clin Radiol 2022 Journal Article, cited 0 times
Website
AIM: To develop a fully automated deep-learning-based approach to measure muscle area for assessing sarcopenia on standard-of-care computed tomography (CT) of the abdomen without any case exclusion criteria, for opportunistic screening for frailty. MATERIALS AND METHODS: This ethically approved retrospective study used publicly available and institutional unselected abdominal CT images (n=1,070 training, n=31 testing). The method consisted of two sequential steps: section detection from the CT volume followed by muscle segmentation on a single section. Both stages used fully convolutional neural networks (FCNN), based on a UNet-like architecture. Input data consisted of CT volumes with a variety of fields of view, section thicknesses, occlusions, artefacts, and anatomical variations. Output consisted of the segmented muscle area on a CT section at the L3 vertebral level. The muscle was segmented into erector spinae, psoas, and rectus abdominis muscle groups. Output was tested against expert manual segmentation. RESULTS: Threefold cross-validation was used to evaluate the model. Section detection cross-validation error was 1.41 +/- 5.02 (in sections). Segmentation cross-validation Dice overlaps were 0.97 +/- 0.02, 0.95 +/- 0.04, and 0.94 +/- 0.04 for erector spinae, psoas, and rectus abdominis, respectively, and 0.96 +/- 0.02 for the combined muscle area, with R^2 = 0.95/0.98 for muscle attenuation/area in 28/31 hold-out test cases. No statistical difference was found between the automated output and a second annotator. Fully automated processing took <1 second per CT examination. CONCLUSIONS: An FCNN pipeline accurately and efficiently automates muscle segmentation at the L3 vertebral level from unselected abdominal CT volumes, with no manual processing step. This approach is promising as a generalisable tool for opportunistic screening for frailty on standard-of-care CT.
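The Dice overlaps quoted above follow the standard definition 2|A∩B|/(|A|+|B|). A minimal version for binary masks is sketched below; the convention of returning 1.0 for two empty masks is an assumption.

```python
import numpy as np

def dice(pred, truth):
    """Dice overlap between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

# The combined muscle area is the union of the three muscle-group masks;
# dice(model_mask, expert_mask) reproduces the overlap metric above.
```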

Brain Tumor Segmentation and Survival Prediction Using 3D Attention UNet

  • Islam, Mobarakol
  • Vibashan, V. S.
  • Jose, V. Jeya Maria
  • Wijethilake, Navodini
  • Utkarsh, Uppal
  • Ren, Hongliang
2020 Book Section, cited 0 times
In this work, we develop an attention convolutional neural network (CNN) to segment brain tumors from Magnetic Resonance Images (MRI). Further, we predict the survival rate using various machine learning methods. We adopt a 3D UNet architecture and integrate channel and spatial attention with the decoder network to perform segmentation. For survival prediction, we extract some novel radiomic features based on geometry, location, the shape of the segmented tumor and combine them with clinical information to estimate the survival duration for each patient. We also perform extensive experiments to show the effect of each feature for overall survival (OS) prediction. The experimental results infer that radiomic features such as histogram, location, and shape of the necrosis region and clinical features like age are the most critical parameters to estimate the OS.

A rotation and translation invariant method for 3D organ image classification using deep convolutional neural networks

  • Islam, Kh Tohidul
  • Wijewickrema, Sudanthi
  • O’Leary, Stephen
PeerJ Computer Science 2019 Journal Article, cited 0 times
Website
Three-dimensional (3D) medical image classification is useful in applications such as disease diagnosis and content-based medical image retrieval. It is a challenging task due to several reasons. First, image intensity values are vastly different depending on the image modality. Second, intensity values within the same image modality may vary depending on the imaging machine and artifacts may also be introduced in the imaging process. Third, processing 3D data requires high computational power. In recent years, significant research has been conducted in the field of 3D medical image classification. However, most of these make assumptions about patient orientation and imaging direction to simplify the problem and/or work with the full 3D images. As such, they perform poorly when these assumptions are not met. In this paper, we propose a method of classification for 3D organ images that is rotation and translation invariant. To this end, we extract a representative two-dimensional (2D) slice along the plane of best symmetry from the 3D image. We then use this slice to represent the 3D image and use a 20-layer deep convolutional neural network (DCNN) to perform the classification task. We show experimentally, using multi-modal data, that our method is comparable to existing methods when the assumptions of patient orientation and viewing direction are met. Notably, it shows similarly high accuracy even when these assumptions are violated, where other methods fail. We also explore how this method can be used with other DCNN models as well as conventional classification approaches.

X-ray CT scatter correction by a physics-motivated deep neural network

  • Iskender, Berk
2020 Thesis, cited 0 times
Website
A fundamental problem in X-ray Computed Tomography (CT) is the scatter occurring due to the interaction of photons with the imaged object. Unless it is corrected, this phenomenon manifests itself as degradations in the reconstructions in the form of various artifacts. This makes scatter correction a critical step to obtain the desired reconstruction quality. Scatter correction methods consist of two groups: hardware-based and software-based. Despite success in specific settings, hardware-based methods require modification in the hardware or an increase in the scan time or dose. This makes software-based methods attractive. In this context, Monte-Carlo based scatter estimation, analytical-numerical and kernel-based methods were developed. Furthermore, the capacity of data-driven approaches to tackle this problem was recently demonstrated. In this thesis, two novel physics-motivated deep-learning-based methods are proposed. The methods estimate and correct for the scatter in the obtained projection measurements. They incorporate both an initial reconstruction of the object of interest and the scatter-corrupted measurements related to it. They use a common specific deep neural network architecture and a cost function adapted to the problem. Numerical experiments with data obtained by Monte-Carlo simulations of the imaging of phantoms reveal noticeable improvement over a recent projection-domain deep neural network correction method.

nnU-Net for Brain Tumor Segmentation

  • Isensee, Fabian
  • Jäger, Paul F.
  • Full, Peter M.
  • Vollmuth, Philipp
  • Maier-Hein, Klaus H.
2021 Book Section, cited 0 times
We apply nnU-Net to the segmentation task of the BraTS 2020 challenge. The unmodified nnU-Net baseline configuration already achieves a respectable result. By incorporating BraTS-specific modifications regarding postprocessing, region-based training, a more aggressive data augmentation as well as several minor modifications to the nnU-Net pipeline, we are able to improve its segmentation performance substantially. We furthermore re-implement the BraTS ranking scheme to determine which of our nnU-Net variants best fits the requirements imposed by it. Our method took the first place in the BraTS 2020 competition with Dice scores of 88.95, 85.06 and 82.03 and HD95 values of 8.498, 17.337 and 17.805 for whole tumor, tumor core and enhancing tumor, respectively.
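The whole-tumor, tumor-core, and enhancing-tumor scores reported above are computed on nested label regions rather than on the raw classes. A sketch of that mapping, assuming the usual BraTS label convention (1 = necrotic/non-enhancing core, 2 = edema, 4 = enhancing tumor), is given below.

```python
import numpy as np

def brats_regions(label_map):
    """Map BraTS labels (1=necrotic core, 2=edema, 4=enhancing) to the
    nested evaluation regions used for scoring."""
    wt = label_map > 0                   # whole tumor: all tumor labels
    tc = np.isin(label_map, (1, 4))      # tumor core: necrosis + enhancing
    et = label_map == 4                  # enhancing tumor
    return wt, tc, et
```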

Improved Abdominal Multi-Organ Segmentation via 3D Boundary-Constrained Deep Neural Networks

  • Irshad, Samra
  • Gomes, Douglas P. S.
  • Kim, Seong Tae
IEEE Access 2023 Journal Article, cited 0 times
Quantitative assessment of the abdominal region from CT scans requires the accurate delineation of abdominal organs. Therefore, automatic abdominal image segmentation has been the subject of intensive research for the past two decades. Recently, deep learning-based methods have resulted in state-of-the-art performance for 3D abdominal CT segmentation. However, the complex characterization of abdominal organs with weak boundaries prevents deep learning methods from segmenting accurately. Specifically, voxels on the boundary of organs are more vulnerable to misprediction due to the highly-varying intensities. This paper proposes a method for improved abdominal image segmentation by leveraging organ-boundary prediction as a complementary task. We train 3D encoder-decoder networks to simultaneously segment the abdominal organs and their boundaries via multi-task learning. We explore two network topologies based on the extent of weights shared between the two tasks within a unified multi-task framework. In the first topology, the whole-organ prediction task and the boundary detection task share all the layers in the network except for the last task-specific layers. The second topology employs a single shared encoder but two separate task-specific decoders. The effectiveness of utilizing the organs' boundary information for abdominal multi-organ segmentation is evaluated on two publicly available abdominal CT datasets: Pancreas-CT and the BTCV dataset. The improvements shown in the segmentation results reveal the advantage of multi-task training, which forces the network to pay attention to ambiguous organ boundaries. A maximum relative improvement of 3.5% and 3.6% is observed in Mean Dice Score for the Pancreas-CT and BTCV datasets, respectively.
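A minimal PyTorch sketch of the joint objective such a multi-task setup might optimize: a segmentation loss on the organ labels plus a weighted auxiliary boundary loss. The loss functions and the weighting factor alpha are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def multitask_loss(organ_logits, boundary_logits,
                   organ_target, boundary_target, alpha=0.5):
    """Joint objective: organ segmentation plus an auxiliary boundary
    detection task, weighted by alpha (illustrative weighting)."""
    # organ_logits: (N, C, D, H, W); organ_target: (N, D, H, W) class ids
    seg_loss = F.cross_entropy(organ_logits, organ_target)
    # boundary_logits/target: (N, 1, D, H, W) with binary boundary labels
    boundary_loss = F.binary_cross_entropy_with_logits(
        boundary_logits, boundary_target)
    return seg_loss + alpha * boundary_loss
```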

Multi-Classification of Brain Tumor MRI Images Using Deep Convolutional Neural Network with Fully Optimized Framework

  • Irmak, Emrah
Iranian Journal of Science and Technology, Transactions of Electrical Engineering 2021 Journal Article, cited 0 times
Website
Brain tumor diagnosis and classification still rely on histopathological analysis of biopsy specimens today. The current method is invasive, time-consuming and prone to manual errors. These disadvantages show how essential a fully automated deep-learning method for multi-classification of brain tumors is. This paper aims at multi-classification of brain tumors for early diagnosis purposes using convolutional neural networks (CNNs). Three different CNN models are proposed for three different classification tasks. Brain tumor detection is achieved with 99.33% accuracy using the first CNN model. The second CNN model can classify brain tumors into five types (normal, glioma, meningioma, pituitary and metastatic) with an accuracy of 92.66%. The third CNN model can classify brain tumors into three grades (Grade II, Grade III and Grade IV) with an accuracy of 98.14%. All the important hyper-parameters of the CNN models are automatically designated using the grid search optimization algorithm. To the best of the author's knowledge, this is the first study for multi-classification of brain tumor MRI images using CNNs whose hyper-parameters are almost all tuned by a grid search optimizer. The proposed CNN models are compared with other popular state-of-the-art CNN models such as AlexNet, Inceptionv3, ResNet-50, VGG-16 and GoogleNet. Satisfactory classification results are obtained using large and publicly available clinical datasets. The proposed CNN models can be employed to assist physicians and radiologists in validating their initial screening for brain tumor multi-classification purposes.

Improving the Robustness and Quality of Biomedical CNN Models through Adaptive Hyperparameter Tuning

  • Iqbal, S.
  • Qureshi, A. N.
  • Ullah, A.
  • Li, J. Q.
  • Mahmood, T.
2022 Journal Article, cited 0 times
Website
Deep learning is an obvious method for disease detection and medical image analysis, and many researchers have looked into it. However, the performance of deep learning algorithms is frequently influenced by hyperparameter selection, so the question of which combination of hyperparameters is best emerges. To address this challenge, we proposed a novel algorithm for Adaptive Hyperparameter Tuning (AHT) that automates the selection of optimal hyperparameters for Convolutional Neural Network (CNN) training. All of the optimal hyperparameters for the CNN models were instantaneously selected and allocated using AHT, which enables CNN models to choose optimal hyperparameters autonomously when classifying medical images into various classes. The CNN model (Deep-Hist) categorizes medical images into the basic classes malignant and benign with an accuracy of 95.71%. The most dominant CNN models, such as ResNet, DenseNet, and MobileNetV2, are compared to the proposed CNN model (Deep-Hist). Plausible classification results were obtained using large, publicly available clinical datasets such as BreakHis, BraTS, NIH-Xray and COVID-19 X-ray. Medical practitioners and clinicians can utilize the CNN model to corroborate their first malignant and benign classification assessments. The high F1 score and precision of the AHT-tuned model, as well as its excellent generalization and accuracy, imply that it might be used to build a pathologist's aid tool.

Brain tumor segmentation in multi‐spectral MRI using convolutional neural networks (CNN)

  • Iqbal, Sajid
  • Ghani, M Usman
  • Saba, Tanzila
  • Rehman, Amjad
Microscopy Research and Technique 2018 Journal Article, cited 8 times
Website

Towards Efficient Segmentation and Classification of White Blood Cell Cancer Using Deep Learning

  • Iqbal, Ashik
  • Ahmed, Md Faysal
  • Suvon, Md Naimul Islam
  • Shuvho, Sourav Das
  • Fahmin, Ahmed
2021 Conference Paper, cited 0 times
Website
White blood cell cancer is a plasma cell cancer that starts in the bone marrow and leads to the formation of abnormal plasma cells. Medical examiners must be exceedingly careful when diagnosing myeloma cells. Moreover, because the final judgment depends on human perception and judgment, there is a chance that the conclusion may be incorrect. This study is noteworthy because it creates a software-assisted way of recognizing and identifying myeloma cells in bone marrow scans. A Mask Region-based Convolutional Neural Network (Mask R-CNN) has been utilized for recognition, while EfficientNet-B3 has been used for detection. The mean Average Precision (mAP) of the Mask R-CNN is 93%, whereas EfficientNet-B3 is 95% accurate. According to the findings of this study, the Mask R-CNN model can identify multiple myeloma, and EfficientNet-B3 can distinguish between myeloma and non-myeloma cells.

Automatic head computed tomography image noise quantification with deep learning

  • Inkinen, S. I.
  • Makela, T.
  • Kaasalainen, T.
  • Peltonen, J.
  • Kangasniemi, M.
  • Kortesniemi, M.
Phys Med 2022 Journal Article, cited 0 times
Website
PURPOSE: Computed tomography (CT) image noise is usually determined by the standard deviation (SD) of pixel values from uniform image regions. This study investigates how deep learning (DL) could be applied in head CT image noise estimation. METHODS: Two approaches were investigated for noise image estimation of a single acquisition image: direct noise image estimation using a supervised DnCNN convolutional neural network (CNN) architecture, and subtraction of a denoised image estimated with a denoising UNet-CNN, experimented with supervised and unsupervised noise2noise training approaches. Noise was assessed with local SD maps using 3D- and 2D-CNN architectures. An anthropomorphic phantom CT image dataset (N = 9 scans, 3 repetitions) was used for DL-model comparisons. Mean square error (MSE) and mean absolute percentage error (MAPE) of SD values were determined using the SD values of subtraction images as ground truth. An open-source clinical low-dose head CT dataset (N(train) = 37, N(test) = 10 subjects) was used to demonstrate DL applicability in noise estimation from manually labeled uniform regions and in automated noise and contrast assessment. RESULTS: Direct SD estimation using the 3D-CNN was the most accurate assessment method on the phantom dataset (MAPE = 15.5%, MSE = 6.3 HU). The unsupervised noise2noise approach provided only slightly inferior results (MAPE = 20.2%, MSE = 13.7 HU). The 2D-CNN and unsupervised UNet models provided the smallest MSE on clinical labeled uniform regions. CONCLUSIONS: DL-based clinical image assessment is feasible and provides acceptable accuracy as compared to true image noise. The noise2noise approach may be feasible in clinical use where no ground truth data are available. Noise estimation combined with tissue segmentation may enable more comprehensive image quality characterization.
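
For context, a local SD map of the kind used as the assessment target above can be computed in a few lines; the window size and the synthetic slice below are arbitrary assumptions:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_sd_map(img, size=7):
        # Sliding-window SD via sqrt(E[x^2] - E[x]^2).
        mean = uniform_filter(img, size)
        mean_sq = uniform_filter(img * img, size)
        var = np.clip(mean_sq - mean * mean, 0, None)  # clip tiny negatives
        return np.sqrt(var)

    ct_slice = np.random.normal(40, 10, (128, 128))  # synthetic slice in HU
    print(local_sd_map(ct_slice).mean())             # close to the injected SD of 10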

Brain tumor segmentation in MRI images using nonparametric localization and enhancement methods with U-net

  • Ilhan, A.
  • Sekeroglu, B.
  • Abiyev, R.
Int J Comput Assist Radiol Surg 2022 Journal Article, cited 2 times
Website
PURPOSE: Segmentation is one of the critical steps in analyzing medical images since it provides meaningful information for the diagnosis, monitoring, and treatment of brain tumors. In recent years, several artificial intelligence-based systems have been developed to perform this task accurately. However, the unobtrusive or low-contrast occurrence of some tumors and their similarity to healthy brain tissues make the segmentation task challenging. This has led researchers to develop new methods for preprocessing the images and improving their segmentation ability. METHODS: This study proposes an efficient system for the segmentation of complete brain tumors from MRI images based on tumor localization and enhancement methods with a deep learning architecture named U-net. Initially, the histogram-based nonparametric tumor localization method is applied to localize the tumorous regions, and the proposed tumor enhancement method is used to modify the localized regions to increase the visual appearance of indistinct or low-contrast tumors. The resultant images are fed to the original U-net architecture to segment the complete brain tumors. RESULTS: The performance of the proposed tumor localization and enhancement methods with the U-net was tested on benchmark datasets, achieving Dice scores of 0.94 and 0.85 for the BRATS 2012 HGG and LGG sets and 0.87 and 0.88 for the BRATS 2019 and BRATS 2020 datasets, respectively. CONCLUSION: The results and comparisons showed how the proposed methods improve the segmentation ability of deep learning models and provide high-accuracy, low-cost segmentation of complete brain tumors in MRI images. The results might support the adoption of the proposed methods in segmentation tasks of different medical fields.

Multi-View Attention-based Late Fusion (MVALF) CADx system for breast cancer using deep learning

  • Iftikhar, Hina
  • Shahid, Ahmad Raza
  • Raza, Basit
  • Khan, Hasan Nasir
Machine Graphics & Vision 2020 Journal Article, cited 0 times
Website
Breast cancer is a leading cause of death among women. Early detection can significantly reduce the mortality rate among women and improve their prognosis. Mammography is the first-line procedure for early diagnosis. In the early era, conventional Computer-Aided Diagnosis (CADx) systems for breast lesion diagnosis were based on single-view information only. The last decade evidenced the use of two mammographic views, Medio-Lateral Oblique (MLO) and Cranio-Caudal (CC), for CADx systems. Most recent studies show the effectiveness of four mammographic views for training CADx systems with a feature fusion strategy for the classification task. In this paper, we propose an end-to-end Multi-View Attention-based Late Fusion (MVALF) CADx system that fuses the predictions of four view models, each trained separately. These separate models have different predictive ability for each class. An appropriate fusion of multi-view models can achieve better diagnostic performance, so it is necessary to assign proper weights to the multi-view classification models. To resolve this issue, an attention-based weighting mechanism is adopted to assign proper weights to the trained models for the fusion strategy. The proposed methodology is used to classify mammograms into normal, mass, calcification, malignant mass and benign mass. The publicly available CBIS-DDSM and mini-MIAS datasets are used for experimentation. The results show that our proposed system achieved an AUC of 0.996 for normal vs. abnormal, 0.922 for mass vs. calcification and 0.896 for malignant vs. benign masses. Superior results are seen for the classification of malignant vs. benign masses with our proposed approach, higher than the results using single-view, two-view and four-view early fusion-based systems. The overall results at each level show the potential of multi-view late fusion with transfer learning in the diagnosis of breast cancer.
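
A minimal sketch of attention-weighted late fusion in the spirit of MVALF; the exact weighting network is not specified in the abstract, so the single learnable score per view below is a simplifying assumption:

    import torch
    import torch.nn as nn

    class AttentionLateFusion(nn.Module):
        # One learnable attention score per view; fuses per-view class
        # probabilities as a softmax-weighted average.
        def __init__(self, n_views):
            super().__init__()
            self.scores = nn.Parameter(torch.zeros(n_views))

        def forward(self, view_probs):             # (batch, n_views, n_classes)
            w = torch.softmax(self.scores, dim=0)  # weights sum to 1
            return (view_probs * w[None, :, None]).sum(dim=1)

    # Four views (e.g. left/right CC and MLO), five classes as in the abstract.
    probs = torch.softmax(torch.randn(8, 4, 5), dim=-1)
    fused = AttentionLateFusion(4)(probs)
    print(fused.sum(dim=-1))  # each fused row is still a probability distribution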

Application of Magnetic Resonance Radiomics Platform (MRP) for Machine Learning Based Features Extraction from Brain Tumor Images

  • Idowu, B.A.
  • Dada, O. M.
  • Awojoyogbe, O.B.
Journal of Science, Technology, Mathematics and Education (JOSTMED) 2021 Journal Article, cited 0 times
Website
This study investigated the implementation of the magnetic resonance radiomics platform (MRP) for machine-learning-based feature extraction from brain tumor images. Magnetic resonance imaging data publicly available in The Cancer Imaging Archive (TCIA) were downloaded and used to perform image coregistration, multi-modality image interpolation, morphology analysis and extraction of radiomic features with MRP tools. Radiomics analyses were then applied to the data (containing AX-T1-POST, diffusion-weighted, AX-T2-FSE and AX-T2-FLAIR sequences) using wavelet decomposition principles. The results, employing different configurations of low-pass and high-pass filters, were exported to Microsoft Excel data sheets. The exported data were visualized using MATLAB's Classification Learner tool. These exported data and visualizations provide a new way of deeply assessing image data as well as easier interpretation of image scans. Findings from this study revealed that the machine learning radiomics platform is important for characterizing and visualizing a brain tumor and gives adequate information about it.

Multi-Graph Convolutional Neural Network for Breast Cancer Multi-task Classification

  • Ibrahim, Mohamed
  • Henna, Shagufta
  • Cullen, Gary
2023 Book Section, cited 0 times
Mammography is a popular diagnostic imaging procedure for detecting breast cancer at an early stage. Various deep-learning approaches to breast cancer detection incur high costs and are error-prone, making them unreliable for medical practitioners. Specifically, these approaches do not exploit complex texture patterns and interactions. They also require labelled data to enable learning, which limits their scalability when labelled datasets are insufficient, and they lack the capability to generalise to newly synthesised patterns/textures. To address these problems, we first design a graph model to transform the mammogram images into a highly correlated multigraph that encodes rich structural relations and high-level texture features. Next, we integrate a pre-training self-supervised learning multigraph encoder (SSL-MG) to improve feature representations, especially under limited labelled data constraints. Then, we design a semi-supervised mammogram multigraph convolution neural network downstream model (MMGCN) to perform multi-classification of mammogram segments encoded in the multigraph nodes. Our proposed frameworks, SSL-MMGCN and MMGCN, reduce the need for annotated data to 40% and 60%, respectively, in contrast to conventional methods that require more than 80% of the data to be labelled. Finally, we evaluate the classification performance of MMGCN independently and integrated with SSL-MG in a model called SSL-MMGCN over multiple training settings. Our evaluation results on DDSM, one of the recent public datasets, demonstrate the efficient learning performance of SSL-MMGCN and MMGCN with 0.97 and 0.98 AUC classification accuracy, in contrast to the multitask deep graph convolutional network (GCN) method of Hao Du et al. (2021) with 0.81 AUC accuracy.

The application of a workflow integrating the variable reproducibility and harmonizability of radiomic features on a phantom dataset

  • Ibrahim, Abdalla
  • Refaee, Turkey
  • Leijenaar, Ralph TH
  • Primakov, Sergey
  • Hustinx, Roland
  • Mottaghy, Felix M
  • Woodruff, Henry C
  • Maidment, Andrew DA
  • Lambin, Philippe
PLoS One 2021 Journal Article, cited 2 times
Website

MaasPenn radiomics reproducibility score: A novel quantitative measure for evaluating the reproducibility of CT-based handcrafted radiomic features

  • Ibrahim, Abdalla
  • Barufaldi, Bruno
  • Refaee, Turkey
  • Silva Filho, Telmo M
  • Acciavatti, Raymond J
  • Salahuddin, Zohaib
  • Hustinx, Roland
  • Mottaghy, Felix M
  • Maidment, Andrew DA
  • Lambin, Philippe
Cancers 2022 Journal Article, cited 1 times
Website

Automatic MRI Breast tumor Detection using Discrete Wavelet Transform and Support Vector Machines

  • Ibraheem, Amira Mofreh
  • Rahouma, Kamel Hussein
  • Hamed, Hesham F. A.
2019 Conference Paper, cited 0 times
Website
Every human has the right to live a healthy life free of serious diseases. Cancer is among the most serious diseases facing humans and can lead to death, so definitive means of eliminating these diseases and protecting humans from them are needed. Breast cancer is considered one of the most dangerous cancers facing women in particular. Early examination should be done periodically, and diagnosis must be sensitive and effective to preserve women's lives. There are various types of breast cancer images, but magnetic resonance imaging (MRI) has become one of the important modalities in breast cancer detection. In this work, a new method is presented to detect breast cancer using MRI images preprocessed with a 2D median filter. Features are extracted from the images using the discrete wavelet transform (DWT) and reduced to 13 features. Then, a support vector machine (SVM) is used to detect whether a tumor is present. Simulation results have been obtained using MRI datasets extracted from the standard breast MRI database known as the "Reference Image Database to Evaluate Response (RIDER)". The proposed method achieved an accuracy of 98.03% on the available MRI database, and the processing time for all stages was 0.894 seconds. The obtained results demonstrate the superiority of the proposed system over those available in the literature.
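
A condensed sketch of the DWT-plus-SVM pipeline described above; the Haar wavelet, the 12 summary statistics and the synthetic blob data are illustrative assumptions (the paper uses a 13-feature set and real RIDER MRIs):

    import numpy as np
    import pywt
    from sklearn.svm import SVC

    def dwt_features(img):
        # Single-level 2D DWT; summary statistics of each sub-band as features.
        cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
        return np.array([f(b) for b in (cA, cH, cV, cD)
                         for f in (np.mean, np.std, np.max)])

    rng = np.random.default_rng(0)
    imgs = rng.normal(size=(40, 64, 64))
    labels = np.repeat([0, 1], 20)
    imgs[20:, 24:40, 24:40] += 3.0       # simulated bright "tumor" in class 1

    X = np.stack([dwt_features(im) for im in imgs])
    clf = SVC(kernel="rbf").fit(X, labels)
    print("training accuracy:", clf.score(X, labels))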

Squeeze-and-Excitation Normalization for Brain Tumor Segmentation

  • Iantsen, Andrei
  • Jaouen, Vincent
  • Visvikis, Dimitris
  • Hatt, Mathieu
2021 Book Section, cited 0 times
In this paper we described our approach for glioma segmentation in multi-sequence magnetic resonance imaging (MRI) in the context of the MICCAI 2020 Brain Tumor Segmentation Challenge (BraTS). We proposed an architecture based on U-Net with a new computational unit termed “SE Norm” that brought significant improvements in segmentation quality. Our approach obtained competitive results on the validation (Dice scores of 0.780, 0.911, 0.863) and test (Dice scores of 0.805, 0.887, 0.843) sets for the enhanced tumor, whole tumor and tumor core sub-regions. The full implementation and trained models are available at https://github.com/iantsen/brats.
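
For orientation, a vanilla squeeze-and-excitation block is shown below; the paper's "SE Norm" unit combines this gating idea with normalization, which this plain sketch does not reproduce:

    import torch
    import torch.nn as nn

    class SEBlock(nn.Module):
        # Channel-wise gating: global pool -> bottleneck MLP -> rescale.
        def __init__(self, channels, reduction=8):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(),
                nn.Linear(channels // reduction, channels), nn.Sigmoid(),
            )

        def forward(self, x):                        # x: (B, C, D, H, W) volumes
            s = x.mean(dim=(2, 3, 4))                # squeeze: global average pool
            w = self.fc(s)[:, :, None, None, None]   # excitation: per-channel gate
            return x * w

    x = torch.randn(2, 16, 8, 16, 16)
    print(SEBlock(16)(x).shape)  # torch.Size([2, 16, 8, 16, 16])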

Encoder-Decoder Network for Brain Tumor Segmentation on Multi-sequence MRI

  • Iantsen, Andrei
  • Jaouen, Vincent
  • Visvikis, Dimitris
  • Hatt, Mathieu
2020 Book Section, cited 0 times
In this paper we describe our approach based on convolutional neural networks for medical image segmentation in the context of the BraTS 2019 challenge. We use the conventional encoder-decoder architecture enhanced with residual blocks, as well as spatial and channel squeeze & excitation modules. The present paper describes the general pipeline, including the data pre-processing, the choices regarding the model architecture, the training procedure and the chosen data augmentation techniques. Our final results in the BraTS 2019 segmentation challenge are Dice scores of 0.76, 0.87 and 0.80 for the enhanced tumor, whole tumor and tumor core sub-regions, respectively.

Advanced MRI Techniques in the Monitoring of Treatment of Gliomas

  • Hyare, Harpreet
  • Thust, Steffi
  • Rees, Jeremy
Current Treatment Options in Neurology 2017 Journal Article, cited 11 times
Website
OPINION STATEMENT: With advances in treatments and survival of patients with glioblastoma (GBM), it has become apparent that conventional imaging sequences have significant limitations both in terms of assessing response to treatment and monitoring disease progression. Both 'pseudoprogression' after chemoradiation for newly diagnosed GBM and 'pseudoresponse' after anti-angiogenesis treatment for relapsed GBM are well-recognised radiological entities. This in turn has led to revision of response criteria away from the standard MacDonald criteria, which depend on the two-dimensional measurement of contrast-enhancing tumour, and which have been the primary measure of radiological response for over three decades. A working party of experts published RANO (Response Assessment in Neuro-oncology Working Group) criteria in 2010 which take into account signal change on T2/FLAIR sequences as well as the contrast-enhancing component of the tumour. These have recently been modified for immune therapies, which are associated with specific issues related to the timing of radiological response. There has been increasing interest in quantification and validation of physiological and metabolic parameters in GBM over the last 10 years utilising the wide range of advanced imaging techniques available on standard MRI platforms. Previously, MRI would provide structural information only on the anatomical location of the tumour and the presence or absence of a disrupted blood-brain barrier. Advanced MRI sequences include proton magnetic resonance spectroscopy (MRS), vascular imaging (perfusion/permeability) and diffusion imaging (diffusion weighted imaging/diffusion tensor imaging) and are now routinely available. They provide biologically relevant functional, haemodynamic, cellular, metabolic and cytoarchitectural information and are being evaluated in clinical trials to determine whether they offer superior biomarkers of early treatment response than conventional imaging, when correlated with hard survival endpoints. Multiparametric imaging, incorporating different combinations of these modalities, improves accuracy over single imaging modalities but has not been widely adopted due to the amount of post-processing analysis required, lack of clinical trial data, lack of radiology training and wide variations in threshold values. New techniques including diffusion kurtosis and radiomics will offer a higher level of quantification but will require validation in clinical trial settings. Given all these considerations, it is clear that there is an urgent need to incorporate advanced techniques into clinical trial design to avoid the problems of under or over assessment of treatment response.

Fully Automated Segmentation Models of Supratentorial Meningiomas Assisted by Inclusion of Normal Brain Images

  • Hwang, Kihwan
  • Park, Juntae
  • Kwon, Young-Jae
  • Cho, Se Jin
  • Choi, Byung Se
  • Kim, Jiwon
  • Kim, Eunchong
  • Jang, Jongha
  • Ahn, Kwang-Sung
  • Kim, Sangsoo
  • Kim, Chae-Yong
2022 Journal Article, cited 0 times
Website
To train an automatic brain tumor segmentation model, a large amount of data is required. In this paper, we proposed a strategy to overcome the limited amount of clinically collected magnetic resonance image (MRI) data regarding meningiomas by pre-training a model using a larger public dataset of MRIs of gliomas and augmenting our meningioma training set with normal brain MRIs. Pre-operative MRIs of 91 meningioma patients (171 MRIs) and 10 non-meningioma patients (normal brains) were collected between 2016 and 2019. Three-dimensional (3D) U-Net was used as the base architecture. The model was pre-trained with BraTS 2019 data, then fine-tuned with our datasets consisting of 154 meningioma MRIs and 10 normal brain MRIs. To increase the utility of the normal brain MRIs, a novel balanced Dice loss (BDL) function was used instead of the conventional soft Dice loss function. The model performance was evaluated using the Dice scores across the remaining 17 meningioma MRIs. The segmentation performance of the model was sequentially improved via the pre-training and inclusion of normal brain images. The Dice scores improved from 0.72 to 0.76 when the model was pre-trained. The inclusion of normal brain MRIs to fine-tune the model improved the Dice score; it increased to 0.79. When employing BDL as the loss function, the Dice score reached 0.84. The proposed learning strategy for U-net showed potential for use in segmenting meningioma lesions.
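
The abstract does not spell out the balanced Dice loss (BDL), but the failure mode it targets is clear: a plain soft Dice is ill-behaved on the added normal-brain images, whose masks are empty. The sketch below is only a generic smoothed soft Dice that stays finite on empty masks, not the authors' BDL:

    import torch

    def soft_dice_loss(pred, target, eps=1.0):
        # Smoothing term eps keeps the loss defined when both the
        # prediction and the mask are empty (normal brains).
        p = torch.sigmoid(pred).flatten(1)
        t = target.flatten(1).float()
        inter = (p * t).sum(1)
        dice = (2 * inter + eps) / (p.sum(1) + t.sum(1) + eps)
        return 1 - dice.mean()

    pred = torch.randn(2, 1, 32, 32)    # logits
    target = torch.zeros(2, 1, 32, 32)  # e.g. a normal brain, no tumor voxels
    print(soft_dice_loss(pred, target))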

An Investigation into Race Bias in Random Forest Models Based on Breast DCE-MRI Derived Radiomics Features

  • Huti, Mohamed
  • Lee, Tiarna
  • Sawyer, Elinor
  • King, Andrew P.
2023 Book Section, cited 0 times
Recent research has shown that artificial intelligence (AI) models can exhibit bias in performance when trained using data that are imbalanced by protected attribute(s). Most work to date has focused on deep learning models, but classical AI techniques that make use of hand-crafted features may also be susceptible to such bias. In this paper we investigate the potential for race bias in random forest (RF) models trained using radiomics features. Our application is prediction of tumour molecular subtype from dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) of breast cancer patients. Our results show that radiomics features derived from DCE-MRI data do contain race-identifiable information, and that RF models can be trained to predict White and Black race from these data with 60–70% accuracy, depending on the subset of features used. Furthermore, RF models trained to predict tumour molecular subtype using race-imbalanced data seem to produce biased behaviour, exhibiting better performance on test data from the race on which they were trained.

Active deep learning from a noisy teacher for semi-supervised 3D image segmentation: Application to COVID-19 pneumonia infection in CT

  • Hussain, M. A.
  • Mirikharaji, Z.
  • Momeny, M.
  • Marhamati, M.
  • Neshat, A. A.
  • Garbi, R.
  • Hamarneh, G.
Comput Med Imaging Graph 2022 Journal Article, cited 0 times
Website
Supervised deep learning has become a standard approach to solving medical image segmentation tasks. However, serious difficulties in attaining pixel-level annotations for sufficiently large volumetric datasets in real-life applications have highlighted the critical need for alternative approaches, such as semi-supervised learning, where model training can leverage small expert-annotated datasets to enable learning from much larger datasets without laborious annotation. Most of the semi-supervised approaches combine expert annotations and machine-generated annotations with equal weights within deep model training, despite the latter annotations being relatively unreliable and likely to affect model optimization negatively. To overcome this, we propose an active learning approach that uses an example re-weighting strategy, where machine-annotated samples are weighted (i) based on the similarity of their gradient directions of descent to those of expert-annotated data, and (ii) based on the gradient magnitude of the last layer of the deep model. Specifically, we present an active learning strategy with a query function that enables the selection of reliable and more informative samples from machine-annotated batch data generated by a noisy teacher. When validated on clinical COVID-19 CT benchmark data, our method improved the performance of pneumonia infection segmentation compared to the state of the art.

Learnable image histograms-based deep radiomics for renal cell carcinoma grading and staging

  • Hussain, M. A.
  • Hamarneh, G.
  • Garbi, R.
Comput Med Imaging Graph 2021 Journal Article, cited 0 times
Website
Fuhrman cancer grading and tumor-node-metastasis (TNM) cancer staging systems are typically used by clinicians in the treatment planning of renal cell carcinoma (RCC), a common cancer in men and women worldwide. Pathologists typically use percutaneous renal biopsy for RCC grading, while staging is performed by volumetric medical image analysis before renal surgery. Recent studies suggest that clinicians can effectively perform these classification tasks non-invasively by analyzing image texture features of RCC from computed tomography (CT) data. However, image feature identification for RCC grading and staging often relies on laborious manual processes, which is error prone and time-intensive. To address this challenge, this paper proposes a learnable image histogram in the deep neural network framework that can learn task-specific image histograms with variable bin centers and widths. The proposed approach enables learning statistical context features from raw medical data, which cannot be performed by a conventional convolutional neural network (CNN). The linear basis function of our learnable image histogram is piece-wise differentiable, enabling back-propagating errors to update the variable bin centers and widths during training. This novel approach can segregate the CT textures of an RCC in different intensity spectra, which enables efficient Fuhrman low (I/II) and high (III/IV) grading as well as RCC low (I/II) and high (III/IV) staging. The proposed method is validated on a clinical CT dataset of 159 patients from The Cancer Imaging Archive (TCIA) database, and it demonstrates 80% and 83% accuracy in RCC grading and staging, respectively.
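
The learnable histogram idea can be sketched directly from the abstract: each bin is a triangular (piecewise-linear) basis whose center and width are trainable, so gradients flow to both. The bin count and initialization below are assumptions:

    import torch
    import torch.nn as nn

    class LearnableHistogram(nn.Module):
        # Differentiable histogram with trainable bin centers and widths.
        def __init__(self, n_bins=16):
            super().__init__()
            self.centers = nn.Parameter(torch.linspace(0, 1, n_bins))
            self.widths = nn.Parameter(torch.full((n_bins,), 1.0 / n_bins))

        def forward(self, x):  # x: (B, N) voxel intensities
            d = (x[:, :, None] - self.centers).abs()
            votes = torch.relu(1 - d / self.widths.abs().clamp_min(1e-4))
            return votes.mean(dim=1)  # (B, n_bins) soft bin counts

    h = LearnableHistogram()
    feats = h(torch.rand(4, 1024))
    print(feats.shape)  # torch.Size([4, 16]); gradients reach centers and widths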

Collage CNN for Renal Cell Carcinoma Detection from CT

  • Hussain, Mohammad Arafat
  • Amir-Khalili, Alborz
  • Hamarneh, Ghassan
  • Abugharbieh, Rafeef
2017 Conference Proceedings, cited 0 times
Website

Serum Procalcitonin as a Predictive Biomarker in COVID-19: A Retrospective Cohort Analysis

  • Hussain, Aaiz
  • Singh, Lavi
  • McAlister III, James
  • Jo, Yongho
  • Makaryan, Tadevos T.
  • Hussain, Shaheer
  • Trenschel, Robert W.
  • Kesselman, Marc M.
2022 Journal Article, cited 0 times
Website
Introduction: Since the onset of COVID-19, physicians and scientists have been working to further understand biomarkers associated with the infection, so that patients who have contracted the virus can be treated. Although COVID-19 is a complex virus that affects patients differently, current research suggests that COVID-19 infections have been associated with increased procalcitonin, a biomarker traditionally indicative of bacterial infections. This paper aims to investigate the relationship between COVID-19 infection severity and procalcitonin levels in the hope of aiding the management of patients with COVID-19 infections. Methods: Patient data were obtained from the Renaissance School of Medicine at Stony Brook University. The data of the patients who had tested positive for COVID-19 and had an associated procalcitonin value (n=1046) were divided into age splits of 18-59, 59-74, and 74-90. Multiple factors were analyzed to determine the severity of each patient's infection, and patients were divided into low, medium, and high severity groups accordingly. A one-way analysis of variance (ANOVA) was done for each age split to compare procalcitonin values of the severity groups within the respective age split. Next, post hoc analysis was done for the severity groups in each age split to further compare the groups against each other. Results: One-way ANOVA testing of the three age splits all had a resulting p<0.0001, so the null hypothesis was rejected. In the post hoc analysis, however, the test failed to reject the null hypothesis when comparing the medium and high severity groups against each other in the 59-74 and 74-90 age splits. The null hypothesis was rejected in all pairwise comparisons in the 18-59 age split. We determined that a procalcitonin value of greater than 0.24 ng/mL would characterize a more severe COVID-19 infection when considering patient factors and comorbidities. Conclusion: The analysis of the data concluded that elevated procalcitonin levels correlate with the severity of COVID-19 infections. This finding can be used to assist medical providers in the management of COVID-19 patients.

Radiomics of NSCLC: Quantitative CT Image Feature Characterization and Tumor Shrinkage Prediction

  • Hunter, Luke
2013 Thesis, cited 4 times
Website

Neuro-evolutional based computer aided detection system on computed tomography for the early detection of lung cancer

  • Huidrom, R.
  • Chanu, Y. J.
  • Singh, K. M.
Multimedia Tools and Applications 2022 Journal Article, cited 0 times
Website
Lung cancer is one of the deadliest diseases, yet it can be treated effectively in its early stage. Computer-aided detection (CADe) can detect pulmonary nodules of lung cancer more accurately and faster than manual detection. This paper presents a new CADe system using a neuro-evolutionary approach. The proposed method focuses on the machine learning algorithm, which is a crucial component of the system. The CADe system extracts lung regions from computed tomography images and detects pulmonary nodules within them. False positive reduction is performed using a new neuro-evolutionary approach consisting of a feed-forward neural network and a combination of the cuckoo search algorithm and particle swarm optimization. The performance of the proposed method is further improved by using regularized discriminant features, achieving 95.8% sensitivity, 95.3% specificity and 95.5% accuracy.

Pulmonary nodule detection on computed tomography using neuro-evolutionary scheme

  • Huidrom, Ratishchandra
  • Chanu, Yambem Jina
  • Singh, Khumanthem Manglem
Signal, Image and Video Processing 2018 Journal Article, cited 0 times
Website

A longitudinal four‐dimensional computed tomography and cone beam computed tomography dataset for image‐guided radiation therapy research in lung cancer

  • Hugo, Geoffrey D
  • Weiss, Elisabeth
  • Sleeman, William C
  • Balik, Salim
  • Keall, Paul J
  • Lu, Jun
  • Williamson, Jeffrey F
Medical Physics 2017 Journal Article, cited 8 times
Website
PURPOSE: To describe in detail a dataset consisting of serial four-dimensional computed tomography (4DCT) and 4D cone beam CT (4DCBCT) images acquired during chemoradiotherapy of 20 locally advanced, nonsmall cell lung cancer patients we have collected at our institution and shared publicly with the research community. ACQUISITION AND VALIDATION METHODS: As part of an NCI-sponsored research study 82 4DCT and 507 4DCBCT images were acquired in a population of 20 locally advanced nonsmall cell lung cancer patients undergoing radiation therapy. All subjects underwent concurrent radiochemotherapy to a total dose of 59.4-70.2 Gy using daily 1.8 or 2 Gy fractions. Audio-visual biofeedback was used to minimize breathing irregularity during all fractions, including acquisition of all 4DCT and 4DCBCT acquisitions in all subjects. Target, organs at risk, and implanted fiducial markers were delineated by a physician in the 4DCT images. Image coordinate system origins between 4DCT and 4DCBCT were manipulated in such a way that the images can be used to simulate initial patient setup in the treatment position. 4DCT images were acquired on a 16-slice helical CT simulator with 10 breathing phases and 3 mm slice thickness during simulation. In 13 of the 20 subjects, 4DCTs were also acquired on the same scanner weekly during therapy. Every day, 4DCBCT images were acquired on a commercial onboard CBCT scanner. An optically tracked external surrogate was synchronized with CBCT acquisition so that each CBCT projection was time stamped with the surrogate respiratory signal through in-house software and hardware tools. Approximately 2500 projections were acquired over a period of 8-10 minutes in half-fan mode with the half bow-tie filter. Using the external surrogate, the CBCT projections were sorted into 10 breathing phases and reconstructed with an in-house FDK reconstruction algorithm. Errors in respiration sorting, reconstruction, and acquisition were carefully identified and corrected. DATA FORMAT AND USAGE NOTES: 4DCT and 4DCBCT images are available in DICOM format and structures through DICOM-RT RTSTRUCT format. All data are stored in the Cancer Imaging Archive (TCIA, http://www.cancerimagingarchive.net/) as collection 4D-Lung and are publicly available. DISCUSSION: Due to high temporal frequency sampling, redundant (4DCT and 4DCBCT) data at similar timepoints, oversampled 4DCBCT, and fiducial markers, this dataset can support studies in image-guided and image-guided adaptive radiotherapy, assessment of 4D voxel trajectory variability, and development and validation of new tools for image registration and motion management.

Conditional generative adversarial network driven radiomic prediction of mutation status based on magnetic resonance imaging of breast cancer

  • Huang, Z. H.
  • Chen, L.
  • Sun, Y.
  • Liu, Q.
  • Hu, P.
2024 Journal Article, cited 0 times
Website
BACKGROUND: Breast Cancer (BC) is a highly heterogeneous and complex disease. Personalized treatment options require the integration of multi-omic data and consideration of phenotypic variability. Radiogenomics aims to merge medical images with genomic measurements but encounters challenges due to unpaired data consisting of imaging, genomic, or clinical outcome data. In this study, we propose the utilization of a well-trained conditional generative adversarial network (cGAN) to address the unpaired data issue in radiogenomic analysis of BC. The generated images are then used to predict the mutation status of key driver genes and BC subtypes. METHODS: We integrated the paired MRI and multi-omic (mRNA gene expression, DNA methylation, and copy number variation) profiles of 61 BC patients from The Cancer Imaging Archive (TCIA) and The Cancer Genome Atlas (TCGA). To facilitate this integration, we employed a Bayesian Tensor Factorization approach to factorize the multi-omic data into 17 latent features. Subsequently, a cGAN model was trained based on the matched side-view patient MRIs and their corresponding latent features to predict MRIs for BC patients who lack MRIs. Model performance was evaluated by calculating the distance between real and generated images using the Frechet Inception Distance (FID) metric. BC subtype and mutation status of driver genes were obtained from the cBioPortal platform, where 3 genes were selected based on the number of mutated patients. A convolutional neural network (CNN) was constructed and trained using the generated MRIs for mutation status prediction. Receiver operating characteristic area under curve (ROC-AUC) and precision-recall area under curve (PR-AUC) were used to evaluate the performance of the CNN models for mutation status prediction. Precision, recall and F1 score were used to evaluate the performance of the CNN model in subtype classification. RESULTS: The FID of the images from the well-trained cGAN model on the test set is 1.31. The CNNs for TP53, PIK3CA, and CDH1 mutation prediction yielded ROC-AUC values of 0.9508, 0.7515, and 0.8136 and PR-AUC values of 0.9009, 0.7184, and 0.5007, respectively, for the three genes. Multi-class subtype prediction achieved precision, recall and F1 scores of 0.8444, 0.8435 and 0.8336, respectively. The source code and related data implementing the algorithms can be found in the project's GitHub repository at https://github.com/mattthuang/BC_RadiogenomicGAN . CONCLUSION: Our study establishes the cGAN as a viable tool for generating synthetic BC MRIs for mutation status prediction and subtype classification to better characterize the heterogeneity of BC in patients. The synthetic images also have the potential to significantly augment existing MRI data and circumvent issues surrounding data sharing and patient privacy for future BC machine learning studies.

Medical Image Classification Using a Light-Weighted Hybrid Neural Network Based on PCANet and DenseNet

  • Huang, Zhiwen
  • Zhu, Xingxing
  • Ding, Mingyue
  • Zhang, Xuming
IEEE Access 2020 Journal Article, cited 23 times
Website
Medical image classification plays an important role in disease diagnosis since it can provide important reference information for doctors. Supervised convolutional neural networks (CNNs) such as DenseNet provide versatile and effective methods for medical image classification tasks, but they require large amounts of labeled data and involve a complex and time-consuming training process. Unsupervised CNNs such as the principal component analysis network (PCANet) need no labels for training but cannot provide desirable classification accuracy. To realize accurate medical image classification in the case of a small training dataset, we have proposed a light-weighted hybrid neural network which consists of a modified PCANet cascaded with a simplified DenseNet. The modified PCANet has two stages, in which the network produces effective feature maps at each stage by convolving inputs with various learned kernels. The following simplified DenseNet with a small number of weights takes all feature maps produced by the PCANet as inputs and employs dense shortcut connections to realize accurate medical image classification. To appreciate the performance of the proposed method, some experiments have been done on mammography and osteosarcoma histology images. Experimental results show that the proposed hybrid neural network is easy to train and outperforms such popular CNN models as PCANet, ResNet and DenseNet in terms of classification accuracy, sensitivity and specificity.

Batch Similarity Based Triplet Loss Assembled into Light-Weighted Convolutional Neural Networks for Medical Image Classification

  • Huang, Z.
  • Zhou, Q.
  • Zhu, X.
  • Zhang, X.
Sensors (Basel) 2021 Journal Article, cited 0 times
Website
In many medical image classification tasks, there is insufficient image data for deep convolutional neural networks (CNNs) to overcome the over-fitting problem. The light-weighted CNNs are easy to train but they usually have relatively poor classification performance. To improve the classification ability of light-weighted CNN models, we have proposed a novel batch similarity-based triplet loss to guide the CNNs to learn the weights. The proposed loss utilizes the similarity among multiple samples in the input batches to evaluate the distribution of training data. Reducing the proposed loss can increase the similarity among images of the same category and reduce the similarity among images of different categories. Besides this, it can be easily assembled into regular CNNs. To appreciate the performance of the proposed loss, some experiments have been done on chest X-ray images and skin rash images to compare it with several losses based on such popular light-weighted CNN models as EfficientNet, MobileNet, ShuffleNet and PeleeNet. The results demonstrate the applicability and effectiveness of our method in terms of classification accuracy, sensitivity and specificity.
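
A compact sketch of a batch-similarity loss in the spirit described above; the exact formulation in the paper may differ, and the cosine-similarity form and margin below are assumptions:

    import torch
    import torch.nn.functional as F

    def batch_similarity_loss(emb, labels, margin=0.3):
        # Pairwise cosine similarities within the batch: pull same-class
        # pairs together, push different-class pairs below the margin.
        z = F.normalize(emb, dim=1)
        sim = z @ z.T
        same = labels[:, None] == labels[None, :]
        eye = torch.eye(len(labels), dtype=torch.bool)
        pos = (1 - sim)[same & ~eye]        # batch must contain same-class pairs
        neg = F.relu(sim - margin)[~same]
        return pos.mean() + neg.mean()

    emb = torch.randn(8, 32, requires_grad=True)
    labels = torch.tensor([0, 0, 1, 1, 0, 1, 0, 1])
    print(batch_similarity_loss(emb, labels))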

N6-methyladenosine-related lncRNAs in combination with computational histopathology and radiomics predict the prognosis of bladder cancer

  • Huang, Z.
  • Wang, G.
  • Wu, Y.
  • Yang, T.
  • Shao, L.
  • Yang, B.
  • Li, P.
  • Li, J.
Translational Oncology 2022 Journal Article, cited 0 times
Website
OBJECTIVES: Identification of m6A-related lncRNAs associated with BC diagnosis and prognosis. METHODS: From the TCGA database, we obtained transcriptome data and corresponding clinical information (including histopathological and CT imaging data) for 408 patients, and bioinformatics, computational histopathology, and radiomics were used to identify and analyze diagnostic and prognostic biomarkers of m6A-related lncRNAs in BC. RESULTS: Three highly expressed m6A-related lncRNAs were significantly associated with the prognosis of BC. The BC samples were divided into two subgroups based on the expression of the 3 lncRNAs. The overall survival of patients in cluster 2 was significantly lower than that in cluster 1. The immune landscape results showed that the expression of PD-L1, T cells follicular helper, NK cells resting, and mast cells activated in cluster 2 was significantly higher, while naive B cells, plasma cells, T cells regulatory (Tregs), and mast cells resting were significantly lower. Computational histopathology results showed a significantly higher percentage of tumor-infiltrating lymphocytes (TILs) in cluster 2. The radiomics results show that the 3 feature values of diagnostics image-original minimum, diagnostics image-original maximum, and original GLCM inverse variance are significantly higher in cluster 2. High expression of 2 bridge genes in the PPI network of 30 key immune genes predicts poorer disease-free survival, while immunohistochemistry showed that their expression levels were significantly higher in high-grade BC than in low-grade BC and normal tissue. CONCLUSION: Based on the results of the immune landscape, computational histopathology, and radiomics, these 3 m6A-related lncRNAs may be diagnostic and prognostic biomarkers for BC.

GammaNet: An intensity-invariance deep neural network for computer-aided brain tumor segmentation

  • Huang, Zheng
  • Liu, Yunhui
  • Song, Guoli
  • Zhao, Yiwen
Optik 2021 Journal Article, cited 0 times
Website
Due to the wide variety in location, appearance, size and intensity distribution of brain tumors, automatic and precise brain tumor segmentation is a challenging task. To address this issue, a computer-aided brain tumor segmentation system based on an adaptive gamma correction neural network (GammaNet) is proposed in this paper. Inspired by conventional gamma correction, an adaptive gamma correction (AGC) block is proposed to realize intensity invariance and force the network to focus on significant regions. In addition, to adaptively adjust the intensity distributions of local regions, the feature maps are divided into several proposal regions and local image characteristics are emphasized. Furthermore, to enlarge the receptive field without information loss and improve segmentation performance, a dense atrous spatial pyramid pooling (Dense-ASPP) module is combined with the AGC blocks to construct the GammaNet. The experimental results show that the Dice similarity coefficient (DSC), sensitivity and intersection over union (IoU) of GammaNet are 85.8%, 87.8% and 80.31%, respectively, and that the AGC blocks and the Dense-ASPP improve the DSC by 3.69% and 1.11%, respectively, which indicates that GammaNet can achieve state-of-the-art performance.
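
Conventional gamma correction, the starting point for the AGC block, picks an exponent and remaps normalized intensities as x**gamma. The hand-crafted heuristic below (mapping the mean intensity to mid-gray) is only a stand-in for the learned, region-wise correction in GammaNet:

    import numpy as np

    def adaptive_gamma(img, eps=1e-6):
        # Choose gamma so the mean intensity maps to 0.5: mean**gamma = 0.5.
        # Dark patches get gamma < 1 (brightening), bright ones gamma > 1.
        x = np.clip(img, eps, 1.0)  # intensities assumed already in [0, 1]
        gamma = np.log(0.5) / np.log(x.mean())
        return x ** gamma

    img = np.random.rand(64, 64) * 0.3  # a dim, low-contrast patch
    out = adaptive_gamma(img)
    print(round(img.mean(), 3), round(out.mean(), 3))  # mean pulled toward 0.5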

Fast and Fully-Automated Detection and Segmentation of Pulmonary Nodules in Thoracic CT Scans Using Deep Convolutional Neural Networks

  • Huang, X.
  • Sun, W.
  • Tseng, T. B.
  • Li, C.
  • Qian, W.
Computerized Medical Imaging and Graphics 2019 Journal Article, cited 0 times
Website
Deep learning techniques have been extensively used in computerized pulmonary nodule analysis in recent years. Many reported studies still utilized hybrid methods for diagnosis, in which convolutional neural networks (CNNs) are used only as one part of the pipeline, and the whole system still needs either traditional image processing modules or human intervention to obtain final results. In this paper, we introduced a fast and fully-automated end-to-end system that can efficiently segment precise lung nodule contours from raw thoracic CT scans. Our proposed system has four major modules: candidate nodule detection with Faster regional-CNN (R-CNN), candidate merging, false positive (FP) reduction with CNN, and nodule segmentation with customized fully convolutional neural network (FCN). The entire system has no human interaction or database specific design. The average runtime is about 16 s per scan on a standard workstation. The nodule detection accuracy is 91.4% and 94.6% with an average of 1 and 4 false positives (FPs) per scan. The average dice coefficient of nodule segmentation compared to the groundtruth is 0.793.

Variations of dynamic contrast-enhanced magnetic resonance imaging in evaluation of breast cancer therapy response: a multicenter data analysis challenge

  • Huang, W.
  • Li, X.
  • Chen, Y.
  • Li, X.
  • Chang, M. C.
  • Oborski, M. J.
  • Malyarenko, D. I.
  • Muzi, M.
  • Jajamovich, G. H.
  • Fedorov, A.
  • Tudorica, A.
  • Gupta, S. N.
  • Laymon, C. M.
  • Marro, K. I.
  • Dyvorne, H. A.
  • Miller, J. V.
  • Barboriak, D. P.
  • Chenevert, T. L.
  • Yankeelov, T. E.
  • Mountz, J. M.
  • Kinahan, P. E.
  • Kikinis, R.
  • Taouli, B.
  • Fennessy, F.
  • Kalpathy-Cramer, J.
Translational Oncology 2014 Journal Article, cited 60 times
Website
Pharmacokinetic analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) time-course data allows estimation of quantitative parameters such as K^trans (rate constant for plasma/interstitium contrast agent transfer), v_e (extravascular extracellular volume fraction), and v_p (plasma volume fraction). A plethora of factors in DCE-MRI data acquisition and analysis can affect accuracy and precision of these parameters and, consequently, the utility of quantitative DCE-MRI for assessing therapy response. In this multicenter data analysis challenge, DCE-MRI data acquired at one center from 10 patients with breast cancer before and after the first cycle of neoadjuvant chemotherapy were shared and processed with 12 software tools based on the Tofts model (TM), extended TM, and Shutter-Speed model. Inputs of tumor region of interest definition, pre-contrast T1, and arterial input function were controlled to focus on the variations in parameter value and response prediction capability caused by differences in models and associated algorithms. Considerable parameter variations were observed, with the within-subject coefficient of variation (wCV) values for K^trans and v_p being as high as 0.59 and 0.82, respectively. Parameter agreement improved when only algorithms based on the same model were compared, e.g., the K^trans intraclass correlation coefficient increased to as high as 0.84. Agreement in parameter percentage change was much better than that in absolute parameter value, e.g., the pairwise concordance correlation coefficient improved from 0.047 (for K^trans) to 0.92 (for K^trans percentage change) in comparing two TM algorithms. Nearly all algorithms provided good to excellent (univariate logistic regression c-statistic value ranging from 0.8 to 1.0) early prediction of therapy response using the metrics of mean tumor K^trans and k_ep (= K^trans/v_e, intravasation rate constant) after the first therapy cycle and the corresponding percentage changes. The results suggest that the interalgorithm parameter variations are largely systematic, and are not likely to significantly affect the utility of DCE-MRI for assessment of therapy response.
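
For reference, the standard Tofts model underlying most of the compared algorithms is C_t(t) = K^trans * integral_0^t C_p(tau) exp(-k_ep (t - tau)) dtau, with k_ep = K^trans / v_e. A direct numerical sketch follows; the arterial input function and parameter values are toy assumptions:

    import numpy as np

    def tofts_ct(t, cp, ktrans, ve):
        # Discrete causal convolution of the AIF with an exponential kernel.
        dt = t[1] - t[0]
        kep = ktrans / ve
        kernel = np.exp(-kep * t)
        return ktrans * np.convolve(cp, kernel)[: len(t)] * dt

    t = np.arange(0.0, 300.0, 1.0)                  # seconds
    cp = 5.0 * t * np.exp(-t / 60.0) / 60.0         # toy arterial input function
    ct = tofts_ct(t, cp, ktrans=0.25 / 60, ve=0.3)  # K^trans given per second
    print(ct.max())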

The Impact of Arterial Input Function Determination Variations on Prostate Dynamic Contrast-Enhanced Magnetic Resonance Imaging Pharmacokinetic Modeling: A Multicenter Data Analysis Challenge

  • Huang, Wei
  • Chen, Yiyi
  • Fedorov, Andriy
  • Li, Xia
  • Jajamovich, Guido H
  • Malyarenko, Dariya I
  • Aryal, Madhava P
  • LaViolette, Peter S
  • Oborski, Matthew J
  • O'Sullivan, Finbarr
Tomography: A Journal for Imaging Research 2016 Journal Article, cited 21 times
Website

An ensemble-acute lymphoblastic leukemia model for acute lymphoblastic leukemia image classification

  • Huang, M. L.
  • Huang, Z. B.
Math Biosci Eng 2024 Journal Article, cited 0 times
Website
The timely diagnosis of acute lymphoblastic leukemia (ALL) is of paramount importance for enhancing the treatment efficacy and the survival rates of patients. In this study, we seek to introduce an ensemble-ALL model for the image classification of ALL, with the goal of enhancing early diagnostic capabilities and streamlining the diagnostic and treatment processes for medical practitioners. In this study, a publicly available dataset is partitioned into training, validation, and test sets. A diverse set of convolutional neural networks, including InceptionV3, EfficientNetB4, ResNet50, CONV_POOL-CNN, ALL-CNN, Network in Network, and AlexNet, are employed for training. The top-performing four individual models are meticulously chosen and integrated with the squeeze-and-excitation (SE) module. Furthermore, the two most effective SE-embedded models are harmoniously combined to create the proposed ensemble-ALL model. This model leverages the Bayesian optimization algorithm to enhance its performance. The proposed ensemble-ALL model attains remarkable accuracy, precision, recall, F1-score, and kappa scores, registering at 96.26, 96.26, 96.26, 96.25, and 91.36%, respectively. These results surpass the benchmarks set by state-of-the-art studies in the realm of ALL image classification. This model represents a valuable contribution to the field of medical image recognition, particularly in the diagnosis of acute lymphoblastic leukemia, and it offers the potential to enhance the efficiency and accuracy of medical professionals in the diagnostic and treatment processes.

A Semiautomated Deep Learning Approach for Pancreas Segmentation

  • Huang, M.
  • Huang, C.
  • Yuan, J.
  • Kong, D.
2021 Journal Article, cited 1 times
Website
Accurate pancreas segmentation from 3D CT volumes is important for the therapy of pancreatic diseases. It is challenging to accurately delineate the pancreas due to the poor intensity contrast and intrinsically large variations in volume, shape, and location. In this paper, we propose a semiautomated deformable U-Net, i.e., DUNet, for pancreas segmentation. The key innovation of our proposed method is a deformable convolution module, which adaptively adds learned offsets to each sampling position of the 2D convolutional kernel to enhance feature representation. Combining the deformable convolution module with U-Net enables our DUNet to flexibly capture pancreatic features and improve the geometric modeling capability of U-Net. Moreover, a nonlinear Dice-based loss function is designed to tackle the class-imbalance problem in pancreas segmentation. Experimental results show that our proposed method outperforms all comparison methods on the same NIH dataset.

A reversible data hiding method by histogram shifting in high quality medical images

  • Huang, Li-Chin
  • Tseng, Lin-Yu
  • Hwang, Min-Shiang
Journal of Systems and Software 2013 Journal Article, cited 60 times
Website
High-quality medical images, in which each pixel is expressed at 16-bit depth, are in great demand for recognizing complicated anatomical structures. However, most data hiding algorithms are still applied to 8-bit depth medical images. We propose a histogram shifting method for reversible data hiding, tested on high-bit-depth medical images. We exploit the high correlation among pixels in local image blocks, which arises from the smooth surfaces of anatomical structures in medical images, and compute a difference value for each block of pixels to produce a difference histogram in which secret bits are embedded. During the data embedding stage, the image blocks are divided into two categories with two corresponding embedding strategies. Via an inverse histogram shifting mechanism, the original image is accurately recovered after the hidden data extraction. Based on the requirements of medical images for data hiding, we propose six criteria: (1) well-suited to high-quality medical images, (2) no salt-and-pepper artifacts, (3) applicable to medical images with smooth surfaces, (4) well-suited to sparse histograms of intensity levels, (5) no location map, and (6) the ability to adjust data embedding capacity, PSNR and inter-slice PSNR. We propose a data hiding method satisfying all six criteria.
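
The core histogram-shifting move is easy to show on an 8-bit image; note the paper operates on block-difference histograms of 16-bit images and handles overflow, which this simplified sketch does not:

    import numpy as np

    def embed_bits(img, bits):
        # Shift gray levels above the histogram peak by one to free the
        # adjacent bin, then embed one bit at each peak-valued pixel.
        hist = np.bincount(img.ravel(), minlength=256)
        peak = int(hist.argmax())
        out = img.astype(np.int32)          # copy; assumes no overflow at 255
        out[out > peak] += 1
        flat, it = out.ravel(), iter(bits)
        for i, v in enumerate(flat):
            if v == peak:
                try:
                    flat[i] += next(it)
                except StopIteration:
                    break
        return flat.reshape(img.shape), peak

    img = np.random.randint(0, 200, (64, 64), dtype=np.uint8)
    stego, peak = embed_bits(img, [1, 0, 1, 1])
    # Extraction reads peak -> 0 and peak+1 -> 1, then shifts levels back.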

The Study on Data Hiding in Medical Images

  • Huang, Li-Chin
  • Tseng, Lin-Yu
  • Hwang, Min-Shiang
International Journal of Network Security 2012 Journal Article, cited 25 times
Website
Reversible data hiding plays an important role in medical image systems. Many hospitals have already applied electronic medical information in healthcare systems, and reversible data hiding is one of the feasible methodologies to protect individual privacy and confidential information. With the application of several high-quality medical devices, disease detection and treatment have improved at the early stage. Demands have been rising for recognizing complicated anatomical structures in high-quality images. However, most data hiding methods are still applied to 8-bit depth medical images with 255 intensity levels. This paper summarizes the existing reversible data hiding algorithms and introduces basic knowledge of medical images.

Assessment of a radiomic signature developed in a general NSCLC cohort for predicting overall survival of ALK-positive patients with different treatment types

  • Huang, Lyu
  • Chen, Jiayan
  • Hu, Weigang
  • Xu, Xinyan
  • Liu, Di
  • Wen, Junmiao
  • Lu, Jiayu
  • Cao, Jianzhao
  • Zhang, Junhua
  • Gu, Yu
  • Wang, Jiazhou
  • Fan, Min
Clinical Lung Cancer 2019 Journal Article, cited 0 times
Website
OBJECTIVES: To investigate the potential of a radiomic signature developed in a general NSCLC cohort for predicting the overall survival of ALK-positive patients with different treatment types. METHODS: After test-retest analysis in the RIDER dataset, 132 features (ICC > 0.9) were selected in a LASSO Cox regression model with leave-one-out cross-validation. The NSCLC Radiomics collection from TCIA was randomly divided into a training set (N = 254) and a validation set (N = 63) to develop a general radiomic signature for NSCLC. In our ALK-positive set, 35 patients received targeted therapy and 19 patients received non-targeted therapy. The developed signature was then tested in this ALK-positive set. Performance of the signature was evaluated with the C-index and stratification analysis. RESULTS: The general signature performed well (C-index > 0.6, log-rank p-value < 0.05) in the NSCLC Radiomics collection. It includes five features: Geom_va_ratio, W_GLCM_LH_Std, W_GLCM_LH_DV, W_GLCM_HH_IM2 and W_his_HL_mean (Supplementary Table S2). Its accuracy in predicting overall survival in the ALK-positive set was 0.649 (95% CI = 0.640-0.658). Nonetheless, impaired performance was observed in the targeted therapy group (C-index = 0.573, 95% CI = 0.556-0.589), while significantly improved performance was observed in the non-targeted therapy group (C-index = 0.832, 95% CI = 0.832-0.852). Stratification analysis also showed that the general signature could only identify high- and low-risk patients in the non-targeted therapy group (log-rank p-value = 0.00028). CONCLUSIONS: This preliminary study suggests that the applicability of a general signature to ALK-positive patients is limited. The general radiomic signature seems to be applicable only to ALK-positive patients who have received non-targeted therapy, which indicates that developing dedicated radiomic signatures for patients treated with tyrosine-kinase inhibitors (TKIs) might be necessary.
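
A sketch of LASSO-penalized Cox fitting of the kind used to build such a signature; the toy features and the lifelines penalizer value are assumptions (the study used 132 test-retest-stable features with leave-one-out cross-validation):

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(1)
    df = pd.DataFrame(rng.normal(size=(80, 5)),
                      columns=[f"feat{i}" for i in range(5)])
    df["T"] = rng.exponential(24, 80)  # survival time in months (synthetic)
    df["E"] = rng.integers(0, 2, 80)   # event indicator (synthetic)

    # l1_ratio=1.0 makes the penalty pure LASSO, shrinking weak features to zero.
    cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
    cph.fit(df, duration_col="T", event_col="E")
    print(cph.params_)  # sparse coefficient vector defines the signature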

Local-Whole-Focus: Identifying Breast Masses and Calcified Clusters on Full-Size Mammograms

  • Huang, Jun
  • Xiao, He
  • Wang, Qingfeng
  • Liu, Zhiqin
  • Chen, Bo
  • Wang, Yaobin
  • Zhang, Ping
  • Zhou, Ying
2022 Conference Paper, cited 0 times
The detection of breast masses and calcified clusters on mammograms is critical for early diagnosis and treatment to improve the survival of breast cancer patients. In this study, we propose a local-whole-focus pipeline to automatically identify breast masses and calcified clusters on full-size mammograms, proceeding from local breast tissues to the whole mammograms and then focusing on the lesion areas. We first train a deep model to learn the fine features of breast masses and calcified clusters on local breast tissues, and then transfer the well-trained deep model to identify breast masses and calcified clusters on full-size mammograms with image-level annotations. We also highlight the areas of the breast masses and calcified clusters in mammograms to visualize the identification results. We evaluated the proposed local-whole-focus pipeline on a public dataset, CBIS-DDSM (Curated Breast Imaging Subset of the Digital Database for Screening Mammography), and a private dataset, MY-Mammo (Mianyang central hospital mammograms). The experimental results showed that a DenseNet embedded with squeeze-and-excitation (SE) blocks achieved competitive results in the identification of breast masses and calcified clusters on full-size mammograms. The highlighted areas of the breast masses and calcified clusters on the entire mammograms can also explain model decision making, which is important in practical medical applications.

CDDnet: Cross-domain denoising network for low-dose CT image via local and global information alignment

  • Huang, Jiaxin
  • Chen, Kecheng
  • Ren, Yazhou
  • Sun, Jiayu
  • Wang, Yanmei
  • Tao, Tao
  • Pu, Xiaorong
2023 Journal Article, cited 0 times
Website
The domain shift problem has emerged as a challenge in cross-domain low-dose CT (LDCT) image denoising task, where the acquisition of a sufficient number of medical images from multiple sources may be constrained by privacy concerns. In this study, we propose a novel cross-domain denoising network (CDDnet) that incorporates both local and global information of CT images. To address the local component, a local information alignment module has been proposed to regularize the similarity between extracted target and source features from selected patches. To align the general information of the semantic structure from a global perspective, an autoencoder is adopted to learn the latent correlation between the source label and the estimated target label generated by the pre-trained denoiser. Experimental results demonstrate that our proposed CDDnet effectively alleviates the domain shift problem, outperforming other deep learning-based and domain adaptation-based methods under cross-domain scenarios.

Open-source algorithm and software for computed tomography-based virtual pancreatoscopy and other applications

  • Huang, H.
  • Yu, X.
  • Tian, M.
  • He, W.
  • Li, S. X.
  • Liang, Z.
  • Gao, Y.
2022 Journal Article, cited 0 times
Website
Pancreatoscopy plays a significant role in the diagnosis and treatment of pancreatic diseases. However, the risk of pancreatoscopy is remarkably greater than that of other endoscopic procedures, such as gastroscopy and bronchoscopy, owing to its severe invasiveness. In comparison, virtual pancreatoscopy (VP) has shown notable advantages. However, because of the low resolution of current computed tomography (CT) technology and the small diameter of the pancreatic duct, VP has limited clinical use. In this study, an optimal path algorithm and super-resolution technique are investigated for the development of an open-source software platform for VP based on 3D Slicer. The proposed segmentation of the pancreatic duct from the abdominal CT images reached an average Dice coefficient of 0.85 with a standard deviation of 0.04. Owing to the excellent segmentation performance, a fly-through visualization of both the inside and outside of the duct was successfully reconstructed, thereby demonstrating the feasibility of VP. In addition, a quantitative analysis of the wall thickness and topology of the duct provides more insight into pancreatic diseases than a fly-through visualization. The entire VP system developed in this study is available at https://github.com/gaoyi/VirtualEndoscopy.git .

Assigning readers to cases in imaging studies using balanced incomplete block designs

  • Huang, Erich P
  • Shih, Joanna H
Stat Methods Med Res 2021 Journal Article, cited 0 times
Website
In many imaging studies, each case is reviewed by human readers and characterized according to one or more features. Often, the inter-reader agreement of the feature indications is of interest in addition to their diagnostic accuracy or association with clinical outcomes. Complete designs in which all participating readers review all cases maximize efficiency and guarantee estimability of agreement metrics for all pairs of readers but often involve a heavy reading burden. Assigning readers to cases using balanced incomplete block designs substantially reduces reading burden by having each reader review only a subset of cases, while still maintaining estimability of inter-reader agreement for all pairs of readers. Methodology for data analysis and power and sample size calculations under balanced incomplete block designs is presented and applied to simulation studies and an actual example. Simulation study results suggest that such designs may reduce reading burdens by >40% while in most scenarios incurring a <20% increase in the standard errors and a <8% and <20% reduction in power to detect between-modality differences in diagnostic accuracy and kappa statistics, respectively.
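To make the design concrete: in a balanced incomplete block design every reader sees the same number of cases and every pair of readers co-reads the same number of cases (lambda), which keeps all pairwise agreements estimable. A small Python sketch with the classic (7, 3, 1) design; the reader count and block size are illustrative, not taken from the paper.

```python
# Sketch: assigning readers to cases with a (v=7, k=3, lambda=1) balanced
# incomplete block design (the Fano plane); the paper's designs may differ.
from itertools import combinations

blocks = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5),
          (1, 4, 6), (2, 3, 6), (2, 4, 5)]  # reader triples

# Check the BIBD property: every pair of readers co-reads exactly lambda=1
# block type, so inter-reader agreement stays estimable for all pairs.
pair_counts = {p: 0 for p in combinations(range(7), 2)}
for blk in blocks:
    for p in combinations(sorted(blk), 2):
        pair_counts[p] += 1
assert set(pair_counts.values()) == {1}

# Assign 70 cases to blocks cyclically: each case is read by 3 of the 7 readers.
assignment = {case: blocks[case % len(blocks)] for case in range(70)}
print(assignment[0], assignment[8])  # (0, 1, 2) (0, 3, 4)
```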

Image Super-Resolution Algorithm Based on an Improved Sparse Autoencoder

  • Huang, Detian
  • Huang, Weiqin
  • Yuan, Zhenguo
  • Lin, Yanming
  • Zhang, Jian
  • Zheng, Lixin
Information 2018 Journal Article, cited 0 times
Website
Due to the limitations of the resolution of the imaging system and the influence of scene changes and other factors, sometimes only low-resolution images can be acquired, which cannot satisfy the practical application’s requirements. To improve the quality of low-resolution images, a novel super-resolution algorithm based on an improved sparse autoencoder is proposed. Firstly, in the training set preprocessing stage, the high- and low-resolution image training sets are constructed, respectively, by using high-frequency information of the training samples as the characterization, and then the zero-phase component analysis whitening technique is utilized to decorrelate the formed joint training set to reduce its redundancy. Secondly, a constructed sparse regularization term is added to the cost function of the traditional sparse autoencoder to further strengthen the sparseness constraint on the hidden layer. Finally, in the dictionary learning stage, the improved sparse autoencoder is adopted to achieve unsupervised dictionary learning to improve the accuracy and stability of the dictionary. Experimental results validate that the proposed algorithm outperforms the existing algorithms both in terms of the subjective visual perception and the objective evaluation indices, including the peak signal-to-noise ratio and the structural similarity measure.
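One self-contained step of this pipeline, the zero-phase component analysis (ZCA) whitening used to decorrelate the joint training set, is shown below as a minimal NumPy sketch; the patch sizes are illustrative.

```python
# Sketch of ZCA whitening for decorrelating a joint patch training set.
import numpy as np

def zca_whiten(X: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """X: (n_samples, n_features). Returns data with ~identity covariance."""
    Xc = X - X.mean(axis=0)                        # center each feature
    cov = Xc.T @ Xc / Xc.shape[0]
    U, S, _ = np.linalg.svd(cov)                   # eigendecomposition of covariance
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T  # ZCA stays in the input space
    return Xc @ W

patches = np.random.rand(1000, 64)                 # e.g. flattened 8x8 patches
white = zca_whiten(patches)
print(np.round(np.cov(white, rowvar=False)[:3, :3], 2))  # ~identity block
```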

Development and validation of radiomic signatures of head and neck squamous cell carcinoma molecular features and subtypes

  • Huang, Chao
  • Cintra, Murilo
  • Brennan, Kevin
  • Zhou, Mu
  • Colevas, A Dimitrios
  • Fischbein, Nancy
  • Zhu, Shankuan
  • Gevaert, Olivier
EBioMedicine 2019 Journal Article, cited 1 times
Website
BACKGROUND: Radiomics-based non-invasive biomarkers are promising to facilitate the translation of therapeutically related molecular subtypes for treatment allocation of patients with head and neck squamous cell carcinoma (HNSCC). METHODS: We included 113 HNSCC patients from The Cancer Genome Atlas (TCGA-HNSCC) project. Molecular phenotypes analyzed were RNA-defined HPV status, five DNA methylation subtypes, four gene expression subtypes and five somatic gene mutations. A total of 540 quantitative image features were extracted from pre-treatment CT scans. Features were selected and used in a regularized logistic regression model to build binary classifiers for each molecular subtype. Models were evaluated using the average area under the Receiver Operator Characteristic curve (AUC) of a stratified 10-fold cross-validation procedure repeated 10 times. Next, an HPV model was trained with the TCGA-HNSCC cohort, and tested on a Stanford cohort (N=53). FINDINGS: Our results show that quantitative image features are capable of distinguishing several molecular phenotypes. We obtained significant predictive performance for RNA-defined HPV+ (AUC=0.73), DNA methylation subtypes MethylMix HPV+ (AUC=0.79), non-CIMP-atypical (AUC=0.77) and Stem-like-Smoking (AUC=0.71), and mutation of NSD1 (AUC=0.73). We externally validated the HPV prediction model (AUC=0.76) on the Stanford cohort. When compared to clinical models, radiomic models were superior for subtypes such as NOTCH1 mutation and DNA methylation subtype non-CIMP-atypical, while being inferior for DNA methylation subtype CIMP-atypical and NSD1 mutation. INTERPRETATION: Our study demonstrates that radiomics can potentially serve as a non-invasive tool to identify treatment-relevant subtypes of HNSCC, opening up the possibility for patient stratification, treatment allocation and inclusion in clinical trials. FUND: Dr. Gevaert reports grants from National Institute of Dental & Craniofacial Research (NIDCR) U01 DE025188, grants from National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health (NIBIB), R01 EB020527, grants from National Cancer Institute (NCI), U01 CA217851, during the conduct of the study; Dr. Huang and Dr. Zhu report grants from China Scholarship Council (Grant NO:201606320087), grants from China Medical Board Collaborating Program (Grant NO:15-216), the Cyrus Tang Foundation, and the Zhejiang University Education Foundation during the conduct of the study; Dr. Cintra reports grants from Sao Paulo State Foundation for Teaching and Research (FAPESP), during the conduct of the study.

MIL normalization -- prerequisites for accurate MRI radiomics analysis

  • Hu, Z.
  • Zhuang, Q.
  • Xiao, Y.
  • Wu, G.
  • Shi, Z.
  • Chen, L.
  • Wang, Y.
  • Yu, J.
Comput Biol Med 2021 Journal Article, cited 0 times
Website
The quality of magnetic resonance (MR) images obtained with different instruments and imaging parameters varies greatly. A large number of heterogeneous images are collected, and they suffer from acquisition variation. Such imaging quality differences will have a great impact on radiomics analysis. The main differences in MR images include modality mismatch (M), intensity distribution variance (I), and layer-spacing differences (L), which are referred to as MIL differences in this paper for convenience. An MIL normalization system is proposed to reconstruct uneven MR images into high-quality data with complete modality, a uniform intensity distribution and consistent layer spacing. Three radiomics tasks, including tumor segmentation, pathological grading and genetic diagnosis of glioma, were used to verify the effect of MIL normalization on radiomics analysis. Three retrospective glioma datasets were analyzed in this study: BraTs (285 cases), TCGA (112 cases) and HuaShan (403 cases), and they were used to test the effect of MIL normalization on these tasks. MIL normalization included three components: multimodal synthesis based on an encoder-decoder network, intensity normalization based on CycleGAN, and layer-spacing unification based on Statistical Parametric Mapping (SPM). The Dice similarity coefficient, areas under the curve (AUC) and six other indicators were calculated and compared after different normalization steps. The MIL normalization system improved the Dice coefficient of segmentation by 9% (P < .001), the AUC of pathological grading by 32% (P < .001), and IDH1 status prediction by 25% (P < .001) when compared to non-normalization. The proposed MIL normalization system provides high-quality standardized data, which is a prerequisite for accurate radiomics analysis.

A neural network approach to lung nodule segmentation

  • Hu, Yaoxiu
  • Menon, Prahlad G
2016 Conference Proceedings, cited 1 times
Website
Computed tomography (CT) imaging is a sensitive and specific lung cancer screening tool for the high-risk population and has shown promise for detection of lung cancer. This study proposes an automatic methodology for detecting and segmenting lung nodules from CT images. The proposed method begins with thorax segmentation, lung extraction and reconstruction of the original shape of the parenchyma using morphology operations. Next, a multi-scale Hessian-based vesselness filter is applied to extract the lung vasculature. The lung vasculature mask is subtracted from the lung region segmentation mask to extract 3D regions representing candidate pulmonary nodules. Finally, the remaining structures are classified as nodules through shape and intensity features, which are together used to train an artificial neural network. Up to 75% sensitivity and 98% specificity were achieved for detection of lung nodules in our testing dataset, with an overall accuracy of 97.62%±0.72% using 11 selected features as input to the neural network classifier, based on 4-fold cross-validation studies. Receiver operating characteristic analysis for identifying nodules revealed an area under the curve of 0.9476.
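The vessel-removal step, a multi-scale Hessian-based vesselness filter, has a readily available counterpart in scikit-image's Frangi filter. A hedged sketch on a placeholder volume; the scales and threshold are illustrative, and the paper's exact filter may differ.

```python
# Sketch: multi-scale Hessian-based vesselness (Frangi filter) to build a
# vasculature mask from an already-segmented lung volume (placeholder data).
import numpy as np
from skimage.filters import frangi

lung = np.random.rand(32, 64, 64)  # stand-in for a segmented lung CT volume
# black_ridges=False: vessels are bright structures on lung CT
vesselness = frangi(lung, sigmas=range(1, 6), black_ridges=False)
vessel_mask = vesselness > 0.05    # illustrative threshold
candidates = lung * ~vessel_mask   # suppress vessels, keep nodule candidates
print(vessel_mask.mean())
```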

MLLCD: A Meta Learning-based Method for Lung Cancer Diagnosis Using Histopathology Images

  • Hu, Xiangjun
  • Wang, Suixue
  • Li, Hang
  • Zhang, Qingchen
2023 Conference Paper, cited 0 times
Website
Lung cancer is a leading cause of death. An accurate early lung cancer diagnosis can improve a patient's survival chances. Histopathological images are essential for cancer diagnosis. With the development of deep learning in the past decade, many scholars have used deep learning to learn the features of histopathological images and achieve lung cancer classification. However, deep learning requires a large quantity of annotated data to train a model to a good classification effect, and collecting many annotated pathological images is time-consuming and expensive. Faced with the scarcity of pathological data, we present a meta-learning method for lung cancer diagnosis (called MLLCD). In detail, the MLLCD works in three steps. First, we preprocess all data using bilinear interpolation. We then design the base learner, which unites a convolutional neural network (CNN) and a transformer to distill local and global features of pathology images at different resolutions. Finally, we train and update the base learner with a model-agnostic meta-learning (MAML) algorithm. Clinical Proteomic Tumor Analysis Consortium (CPTAC) cancer patient data demonstrate that our proposed model achieves a receiver operating characteristic (ROC) value of 0.94 for lung cancer diagnosis.

Domain and Content Adaptive Convolution based Multi-Source Domain Generalization for Medical Image Segmentation

  • Hu, S.
  • Liao, Z.
  • Zhang, J.
  • Xia, Y.
IEEE Trans Med Imaging 2022 Journal Article, cited 0 times
Website
The domain gap caused mainly by variable medical image quality poses a major obstacle on the path between training a segmentation model in the lab and applying the trained model to unseen clinical data. To address this issue, domain generalization methods have been proposed, which however usually use static convolutions and are less flexible. In this paper, we propose a multi-source domain generalization model based on the domain and content adaptive convolution (DCAC) for the segmentation of medical images across different modalities. Specifically, we design the domain adaptive convolution (DAC) module and content adaptive convolution (CAC) module and incorporate both into an encoder-decoder backbone. In the DAC module, a dynamic convolutional head is conditioned on the predicted domain code of the input to make our model adapt to the unseen target domain. In the CAC module, a dynamic convolutional head is conditioned on the global image features to make our model adapt to the test image. We evaluated the DCAC model against the baseline and four state-of-the-art domain generalization methods on the prostate segmentation, COVID-19 lesion segmentation, and optic cup/optic disc segmentation tasks. Our results not only indicate that the proposed DCAC model outperforms all competing methods on each segmentation task but also demonstrate the effectiveness of the DAC and CAC modules. Code is available at https://git.io/DCAC.

Brain Tumor Segmentation Using Multi-Cascaded Convolutional Neural Networks and Conditional Random Field

  • Hu, Kai
  • Gan, Qinghai
  • Zhang, Yuan
  • Deng, Shuhua
  • Xiao, Fen
  • Huang, Wei
  • Cao, Chunhong
  • Gao, Xieping
IEEE Access 2019 Journal Article, cited 2 times
Website
Accurate segmentation of brain tumor is an indispensable component for cancer diagnosis and treatment. In this paper, we propose a novel brain tumor segmentation method based on multicascaded convolutional neural network (MCCNN) and fully connected conditional random fields (CRFs). The segmentation process mainly includes the following two steps. First, we design a multi-cascaded network architecture by combining the intermediate results of several connected components to take the local dependencies of labels into account and make use of multi-scale features for the coarse segmentation. Second, we apply CRFs to consider the spatial contextual information and eliminate some spurious outputs for the fine segmentation. In addition, we use image patches obtained from axial, coronal, and sagittal views to respectively train three segmentation models, and then combine them to obtain the final segmentation result. The validity of the proposed method is evaluated on three publicly available databases. The experimental results show that our method achieves competitive performance compared with the state-of-the-art approaches.

An End-to-end Image Feature Representation Model of Pulmonary Nodules

  • Hu, Jinqiao
2022 Conference Paper, cited 0 times
Lung cancer is a cancer with a high mortality rate. If lung cancer can be detected early, the mortality rate can be greatly reduced. Lung nodule detection based on CT or MRI equipment is a common method to detect early lung cancer. Computer vision technology is widely used for image processing and classification of pulmonary nodules, but because the distinction between pulmonary nodule areas and surrounding non-nodule areas is not obvious, general image processing methods can only extract superficial image features of pulmonary nodules, so the detection accuracy cannot be further improved. In this paper, we propose an end-to-end model for constructing feature representations for lung nodule image classification based on local and global features. First, local patch regions are selected and associated with relatively intact tissue, and then local and global features are extracted from each region. The deep model learns high-level abstract representations that describe image objects. The test results on standard datasets show that the method proposed in this paper has advantages on several evaluation metrics.

Classification of Prostate Transitional Zone Cancer and Hyperplasia Using Deep Transfer Learning From Disease-Related Images

  • Hu, B.
  • Yan, L. F.
  • Yang, Y.
  • Yu, Y.
  • Sun, Q.
  • Zhang, J.
  • Nan, H. Y.
  • Han, Y.
  • Hu, Y. C.
  • Sun, Y. Z.
  • Xiao, G.
  • Tian, Q.
  • Yue, C.
  • Feng, J. H.
  • Zhai, L. H.
  • Zhao, D.
  • Cui, G. B.
  • Lockhart Welch, V.
  • Cornett, E. M.
  • Urits, I.
  • Viswanath, O.
  • Varrassi, G.
  • Kaye, A. D.
  • Wang, W.
2021 Journal Article, cited 2 times
Website
Purpose The diagnosis of prostate transition zone cancer (PTZC) remains a clinical challenge due to its similarity to benign prostatic hyperplasia (BPH) on MRI. Deep Convolutional Neural Networks (DCNNs) have shown high efficacy in diagnosing PTZC on medical imaging but are limited by small data sizes. A transfer learning (TL) method was combined with deep learning to overcome this challenge. Materials and methods A retrospective investigation was conducted on 217 patients enrolled from our hospital database (208 patients) and The Cancer Imaging Archive (nine patients). Using T2-weighted images (T2WIs) and apparent diffusion coefficient (ADC) maps, DCNN models were trained and compared between different TL databases (ImageNet vs. disease-related images) and protocols (from scratch, fine-tuning, or transductive transferring). Results PTZC and BPH could be classified by a traditional DCNN. The efficacy of TL from natural images was limited but improved by transferring knowledge from the disease-related images. Furthermore, transductive TL from disease-related images had comparable efficacy to the fine-tuning method. Limitations include the retrospective design and a relatively small sample size. Conclusion Deep TL from disease-related images is a powerful tool for an automated PTZC diagnostic system. In developing regions where only conventional MR scans are available, the accurate diagnosis of PTZC can be achieved via transductive deep TL from disease-related images.
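The fine-tuning protocol compared above follows a standard pattern: load pretrained weights, freeze the generic features, and retrain a task-specific head. A minimal torchvision sketch; the backbone choice and freezing scheme are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of a fine-tuning transfer-learning protocol (illustrative backbone).
import torch.nn as nn
from torchvision import models

net = models.resnet50(weights="IMAGENET1K_V1")  # source-domain pretrained weights
for p in net.parameters():
    p.requires_grad = False                     # freeze generic early features
net.fc = nn.Linear(net.fc.in_features, 2)       # new trainable PTZC-vs-BPH head
# Full fine-tuning would also unfreeze later blocks; transductive transfer
# instead adapts on unlabeled target data. Neither variant is reproduced here.
```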

Effect of a computer-aided diagnosis system on radiologists' performance in grading gliomas with MRI

  • Hsieh, Kevin Li-Chun
  • Tsai, Ruei-Je
  • Teng, Yu-Chuan
  • Lo, Chung-Ming
PLoS One 2017 Journal Article, cited 0 times
Website
The effects of a computer-aided diagnosis (CAD) system based on quantitative intensity features with magnetic resonance (MR) imaging (MRI) were evaluated by examining radiologists' performance in grading gliomas. The acquired MRI database included 71 lower-grade gliomas and 34 glioblastomas. Quantitative image features were extracted from the tumor area and combined in a CAD system to generate a prediction model. The effect of the CAD system was evaluated in a two-stage procedure. First, a radiologist performed a conventional reading. A sequential second reading was determined with a malignancy estimation by the CAD system. Each MR image was regularly read by one radiologist out of a group of three radiologists. The CAD system achieved an accuracy of 87% (91/105), a sensitivity of 79% (27/34), a specificity of 90% (64/71), and an area under the receiver operating characteristic curve (Az) of 0.89. In the evaluation, the radiologists' Az values significantly improved from 0.81, 0.87, and 0.84 to 0.90, 0.90, and 0.88 with p = 0.0011, 0.0076, and 0.0167, respectively. Based on the MR image features, the proposed CAD system not only performed well in distinguishing glioblastomas from lower-grade gliomas but also provided suggestions about glioma grading to reinforce radiologists' confidence rating.

Computer-aided grading of gliomas based on local and global MRI features

  • Hsieh, Kevin Li-Chun
  • Lo, Chung-Ming
  • Hsiao, Chih-Jou
Computer Methods and Programs in Biomedicine 2016 Journal Article, cited 13 times
Website
BACKGROUND AND OBJECTIVES: A computer-aided diagnosis (CAD) system based on quantitative magnetic resonance imaging (MRI) features was developed to evaluate the malignancy of diffuse gliomas, which are central nervous system tumors. METHODS: The acquired image database for the CAD performance evaluation was composed of 34 glioblastomas and 73 diffuse lower-grade gliomas. In each case, tissues enclosed in a delineated tumor area were analyzed according to their gray-scale intensities on MRI scans. Four histogram moment features describing the global gray-scale distributions of gliomas tissues and 14 textural features were used to interpret local correlations between adjacent pixel values. With a logistic regression model, the individual feature set and a combination of both feature sets were used to establish the malignancy prediction model. RESULTS: Performances of the CAD system using global, local, and the combination of both image feature sets achieved accuracies of 76%, 83%, and 88%, respectively. Compared to global features, the combined features had significantly better accuracy (p = 0.0213). With respect to the pathology results, the CAD classification obtained substantial agreement kappa = 0.698, p < 0.001. CONCLUSIONS: Numerous proposed image features were significant in distinguishing glioblastomas from lower-grade gliomas. Combining them further into a malignancy prediction model would be promising in providing diagnostic suggestions for clinical use.

Quantitative glioma grading using transformed gray-scale invariant textures of MRI

  • Hsieh, Kevin Li-Chun
  • Chen, Cheng-Yu
  • Lo, Chung-Ming
2017 Journal Article, cited 8 times
Website
Background: A computer-aided diagnosis (CAD) system based on intensity-invariant magnetic resonance (MR) imaging features was proposed to grade gliomas for general application to various scanning systems and settings. Method: In total, 34 glioblastomas and 73 lower-grade gliomas comprised the image database to evaluate the proposed CAD system. For each case, the local texture on MR images was transformed into a local binary pattern (LBP) which was intensity-invariant. From the LBP, quantitative image features, including the histogram moment and textures, were extracted and combined in a logistic regression classifier to establish a malignancy prediction model. The performance was compared to conventional texture features to demonstrate the improvement. Results: The performance of the CAD system based on LBP features achieved an accuracy of 93% (100/107), a sensitivity of 97% (33/34), a negative predictive value of 99% (67/68), and an area under the receiver operating characteristic curve (Az) of 0.94, which were significantly better than the conventional texture features: an accuracy of 84% (90/107), a sensitivity of 76% (26/34), a negative predictive value of 89% (64/72), and an Az of 0.89 with respective p values of 0.0303, 0.0122, 0.0201, and 0.0334. Conclusions: More-robust texture features were extracted from MR images and combined into a significantly better CAD system for distinguishing glioblastomas from lower-grade gliomas. The proposed CAD system would be more practical in clinical use with various imaging systems and settings.
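The LBP transform at the heart of this CAD system is available in scikit-image. A minimal sketch on a placeholder region of interest; the neighborhood parameters are illustrative.

```python
# Sketch: intensity-invariant local binary pattern (LBP) texture features
# from a tumor region, using scikit-image; P/R settings are illustrative.
import numpy as np
from skimage.feature import local_binary_pattern

roi = np.random.rand(64, 64)   # placeholder tumor region from an MR slice
P, R = 8, 1                    # 8 neighbors at radius 1
lbp = local_binary_pattern(roi, P, R, method="uniform")

# Normalized histogram of uniform patterns: a rotation- and
# intensity-robust feature vector for the downstream classifier.
hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
print(hist.round(3))
```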

Performance of sparse-view CT reconstruction with multi-directional gradient operators

  • Hsieh, C. J.
  • Jin, S. C.
  • Chen, J. C.
  • Kuo, C. W.
  • Wang, R. T.
  • Chu, W. C.
PLoS One 2019 Journal Article, cited 0 times
Website
To further reduce the noise and artifacts in the reconstructed image of sparse-view CT, we have modified the traditional total variation (TV) methods, which only calculate the gradient variations in x and y directions, and have proposed 8- and 26-directional (the multi-directional) gradient operators for TV calculation to improve the quality of reconstructed images. Different from traditional TV methods, the proposed 8- and 26-directional gradient operators additionally consider the diagonal directions in TV calculation. The proposed method preserves more information from original tomographic data in the step of gradient transform to obtain better reconstruction image qualities. Our algorithms were tested using two-dimensional Shepp-Logan phantom and three-dimensional clinical CT images. Results were evaluated using the root-mean-square error (RMSE), peak signal-to-noise ratio (PSNR), and universal quality index (UQI). All the experiment results show that the sparse-view CT images reconstructed using the proposed 8- and 26-directional gradient operators are superior to those reconstructed by traditional TV methods. Qualitative and quantitative analyses indicate that the more number of directions that the gradient operator has, the better images can be reconstructed. The 8- and 26-directional gradient operators we proposed have better capability to reduce noise and artifacts than traditional TV methods, and they are applicable to be applied to and combined with existing CT reconstruction algorithms derived from CS theory to produce better image quality in sparse-view reconstruction.
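The paper's key idea, adding diagonal neighbors to the TV term, can be sketched directly. A minimal NumPy version of an 8-directional TV penalty; note that np.roll wraps at the borders, which production code would handle explicitly.

```python
# Sketch of an 8-directional total-variation term: besides x/y differences,
# diagonal neighbor differences are included, following the paper's idea.
import numpy as np

def tv_8dir(img: np.ndarray) -> float:
    shifts = [(0, 1), (1, 0), (1, 1), (1, -1),      # right, down, two diagonals
              (0, -1), (-1, 0), (-1, -1), (-1, 1)]  # and their opposites
    total = 0.0
    for dy, dx in shifts:
        diff = img - np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        total += np.abs(diff).sum()                  # L1 norm of each difference image
    return total

phantom = np.zeros((64, 64))
phantom[16:48, 16:48] = 1.0                          # piecewise-constant test image
print(tv_8dir(phantom))
```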

Publishing descriptions of non-public clinical datasets: proposed guidance for researchers, repositories, editors and funding organisations

  • Hrynaszkiewicz, Iain
  • Khodiyar, Varsha
  • Hufton, Andrew L
  • Sansone, Susanna-Assunta
Research Integrity and Peer Review 2016 Journal Article, cited 8 times
Website
Sharing of experimental clinical research data usually happens between individuals or research groups rather than via public repositories, in part due to the need to protect research participant privacy. This approach to data sharing makes it difficult to connect journal articles with their underlying datasets and is often insufficient for ensuring access to data in the long term. Voluntary data sharing services such as the Yale Open Data Access (YODA) and Clinical Study Data Request (CSDR) projects have increased accessibility to clinical datasets for secondary uses while protecting patient privacy and the legitimacy of secondary analyses, but these resources are generally disconnected from journal articles, where researchers typically search for reliable information to inform future research. New scholarly journal and article types dedicated to increasing accessibility of research data have emerged in recent years and, in general, journals are developing stronger links with data repositories. There is a need for increased collaboration between journals, data repositories, researchers, funders, and voluntary data sharing services to increase the visibility and reliability of clinical research. Using the journal Scientific Data as a case study, we propose and show examples of changes to the format and peer-review process for journal articles to more robustly link them to data that are only available on request. We also propose additional features for data repositories to better accommodate non-public clinical datasets, including Data Use Agreements (DUAs).

Learning-based parameter prediction for quality control in three-dimensional medical image compression

  • Hou, Y. X.
  • Ren, Z.
  • Tao, Y. B.
  • Chen, W.
Frontiers of Information Technology & Electronic Engineering 2021 Journal Article, cited 0 times
Website
Quality control is of vital importance in compressing three-dimensional (3D) medical imaging data. Optimal compression parameters need to be determined based on the specific quality requirement. In high efficiency video coding (HEVC), regarded as the state-of-the-art compression tool, the quantization parameter (QP) plays a dominant role in controlling quality. The direct application of a video-based scheme in predicting the ideal parameters for 3D medical image compression cannot guarantee satisfactory results. In this paper we propose a learning-based parameter prediction scheme to achieve efficient quality control. Its kernel is a support vector regression (SVR) based learning model that is capable of predicting the optimal QP from both video-based and structural image features extracted directly from raw data, avoiding time-consuming processes such as pre-encoding and iteration, which are often needed in existing techniques. Experimental results on several datasets verify that our approach outperforms current video-based quality control methods.
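The kernel of the scheme, an SVR regressor mapping image features to an optimal QP, is straightforward to sketch with scikit-learn. The features and targets below are synthetic stand-ins, not the paper's feature set.

```python
# Sketch: SVR predicting the optimal quantization parameter (QP) from image
# features; data here are synthetic placeholders for illustration only.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((300, 12))                        # video-based + structural features
qp = 20 + 15 * X[:, 0] + rng.normal(0, 1, 300)   # synthetic "optimal QP" targets

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X[:250], qp[:250])                     # train on the first 250 samples
pred = model.predict(X[250:])
print(f"MAE = {np.abs(pred - qp[250:]).mean():.2f} QP steps")
```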

Brain Tumor Segmentation based on Knowledge Distillation and Adversarial Training

  • Hou, Yaqing
  • Li, Tianbo
  • Zhang, Qiang
  • Yu, Hua
  • Ge, Hongwei
2021 Conference Paper, cited 0 times
Website
3D MRI brain tumor segmentation is a reliable method for disease diagnosis and future treatment planning. Historically, the segmentation of brain tumors was mostly done manually. However, manual segmentation of 3D MRI brain tumors requires professional anatomical knowledge and may be inaccurate. In this paper, we propose a 3D MRI brain tumor segmentation architecture based on the encoder-decoder structure. Specifically, we introduce knowledge distillation and adversarial training methods, which compress the model and improve its accuracy and robustness. Furthermore, we obtain soft targets by training multiple teacher networks and then apply them to the student network. Finally, we evaluate our method on the challenging BraTS dataset. As a result, the performance of our proposed model is superior to state-of-the-art methods.

A Pipeline for Lung Tumor Detection and Segmentation from CT Scans Using Dilated Convolutional Neural Networks

  • Hossain, S
  • Najeeb, S
  • Shahriyar, A
  • Abdullah, ZR
  • Haque, MA
2019 Conference Proceedings, cited 0 times
Website
Lung cancer is the most prevalent cancer worldwide with about 230,000 new cases every year. Most cases go undiagnosed until it’s too late, especially in developing countries and remote areas. Early detection is key to beating cancer. Towards this end, the work presented here proposes an automated pipeline for lung tumor detection and segmentation from 3D lung CT scans from the NSCLC-Radiomics Dataset. It also presents a new dilated hybrid-3D convolutional neural network architecture for tumor segmentation. First, a binary classifier chooses CT scan slices that may contain parts of a tumor. To segment the tumors, the selected slices are passed to the segmentation model which extracts feature maps from each 2D slice using dilated convolutions and then fuses the stacked maps through 3D convolutions - incorporating the 3D structural information present in the CT scan volume into the output. Lastly, the segmentation masks are passed through a post-processing block which cleans them up through morphological operations. The proposed segmentation model outperformed other contemporary models like LungNet and U-Net. The average and median dice coefficient on the test set for the proposed model were 65.7% and 70.39% respectively. The next best model, LungNet had dice scores of 62.67% and 66.78%.
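The hybrid idea, dilated 2D feature extraction per slice followed by 3D fusion across the stack, can be sketched in PyTorch. The layer sizes below are illustrative, not the published architecture.

```python
# Sketch of a dilated hybrid-3D layout: per-slice 2D feature maps extracted
# with dilated convolutions, then stacked and fused with a 3D convolution.
import torch
import torch.nn as nn

conv2d = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=2, dilation=2),  # enlarged receptive field
    nn.ReLU(inplace=True),
)
conv3d = nn.Conv3d(16, 1, kernel_size=3, padding=1)          # fuse across slices

slices = torch.randn(8, 1, 128, 128)             # 8 CT slices of one scan
feats = conv2d(slices)                           # (8, 16, 128, 128) per-slice features
volume = feats.permute(1, 0, 2, 3).unsqueeze(0)  # (1, 16, 8, 128, 128)
mask_logits = conv3d(volume)                     # (1, 1, 8, 128, 128) segmentation logits
print(mask_logits.shape)
```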

Renal Cancer Cell Nuclei Detection from Cytological Images Using Convolutional Neural Network for Estimating Proliferation Rate

  • Hossain, Shamim
  • Jalab, Hamid A.
  • Zulfiqar, Fariha
  • Pervin, Mahfuza
Journal of Telecommunication, Electronic and Computer Engineering 2019 Journal Article, cited 0 times
Website
Cytological images play an essential role in monitoring the progress of cancer cell mutation. The proliferation rate of the cancer cells is a prerequisite for cancer treatment. It is hard to quickly and accurately identify the nuclei of abnormal cells and determine the correct proliferation rate, since this requires in-depth manual examination, observation and cell counting, which are very tedious and time-consuming. The proposed method starts with segmentation to separate the background and object regions with K-means clustering. Small candidate regions containing cells are then detected automatically based on the output of a support vector machine. The sets of cell regions are marked with selective search according to the local distance between the nucleus and cell boundary, whether they are overlapping or non-overlapping cell regions. After that, the selectively segmented cell features are used to learn the normal and abnormal cell nuclei separately with a regional convolutional neural network. Finally, the proliferation rate in the invasive cancer area is calculated based on the number of abnormal cells. A set of renal cancer cell cytological images was taken from the National Cancer Institute, USA, and this data set is available for research work. Quantitative evaluation of this method is performed by comparing its accuracy with the accuracy of other state-of-the-art cancer cell nuclei detection methods. Qualitative assessment is done based on human observation. The proposed method is able to detect renal cancer cell nuclei accurately and provide an automatic proliferation rate.

Clinical validation of deep learning algorithms for radiotherapy targeting of non-small-cell lung cancer: an observational study

  • Hosny, Ahmed
  • Bitterman, Danielle S.
  • Guthier, Christian V.
  • Qian, Jack M.
  • Roberts, Hannah
  • Perni, Subha
  • Saraf, Anurag
  • Peng, Luke C.
  • Pashtan, Itai
  • Ye, Zezhong
  • Kann, Benjamin H.
  • Kozono, David E.
  • Christiani, David
  • Catalano, Paul J.
  • Aerts, Hugo J. W. L.
  • Mak, Raymond H.
2022 Journal Article, cited 0 times
Website
Background Artificial intelligence (AI) and deep learning have shown great potential in streamlining clinical tasks. However, most studies remain confined to in silico validation in small internal cohorts, without external validation or data on real-world clinical utility. We developed a strategy for the clinical validation of deep learning models for segmenting primary non-small-cell lung cancer (NSCLC) tumours and involved lymph nodes in CT images, which is a time-intensive step in radiation treatment planning, with large variability among experts. Methods In this observational study, CT images and segmentations were collected from eight internal and external sources from the USA, the Netherlands, Canada, and China, with patients from the Maastro and Harvard-RT1 datasets used for model discovery (segmented by a single expert). Validation consisted of interobserver and intraobserver benchmarking, primary validation, functional validation, and end-user testing on the following datasets: multi-delineation, Harvard-RT1, Harvard-RT2, RTOG-0617, NSCLC-radiogenomics, Lung-PET-CT-Dx, RIDER, and thorax phantom. Primary validation consisted of stepwise testing on increasingly external datasets using measures of overlap including volumetric dice (VD) and surface dice (SD). Functional validation explored dosimetric effect, model failure modes, test-retest stability, and accuracy. End-user testing with eight experts assessed automated segmentations in a simulated clinical setting. Findings We included 2208 patients imaged between 2001 and 2015, with 787 patients used for model discovery and 1421 for model validation, including 28 patients for end-user testing. Models showed an improvement over the interobserver benchmark (multi-delineation dataset; VD 0·91 [IQR 0·83–0·92], p=0·0062; SD 0·86 [0·71–0·91], p=0·0005), and were within the intraobserver benchmark. For primary validation, AI performance on internal Harvard-RT1 data (segmented by the same expert who segmented the discovery data) was VD 0·83 (IQR 0·76–0·88) and SD 0·79 (0·68–0·88), within the interobserver benchmark. Performance on internal Harvard-RT2 data segmented by other experts was VD 0·70 (0·56–0·80) and SD 0·50 (0·34–0·71). Performance on RTOG-0617 clinical trial data was VD 0·71 (0·60–0·81) and SD 0·47 (0·35–0·59), with similar results on diagnostic radiology datasets NSCLC-radiogenomics and Lung-PET-CT-Dx. Despite these geometric overlap results, models yielded target volumes with equivalent radiation dose coverage to those of experts. We also found non-significant differences between de novo expert and AI-assisted segmentations. AI assistance led to a 65% reduction in segmentation time (5·4 min; p<0·0001) and a 32% reduction in interobserver variability (SD; p=0·013). Interpretation We present a clinical validation strategy for AI models. We found that in silico geometric segmentation metrics might not correlate with clinical utility of the models. Experts' segmentation style and preference might affect model performance. Funding US National Institutes of Health and EU European Research Council.
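Volumetric Dice (VD), the primary overlap measure reported in this study, has a one-line definition: twice the intersection over the sum of the two volumes. A minimal NumPy sketch with placeholder masks:

```python
# Sketch: volumetric Dice (VD) between an expert mask and a model mask.
# The masks below are illustrative placeholders, not study data.
import numpy as np

def volumetric_dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

expert = np.zeros((32, 64, 64), bool)
expert[10:20, 20:40, 20:40] = True
model = np.zeros_like(expert)
model[11:21, 22:42, 20:40] = True     # slightly shifted prediction
print(f"VD = {volumetric_dice(expert, model):.3f}")
```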

Improved generalized ComBat methods for harmonization of radiomic features

  • Horng, H.
  • Singh, A.
  • Yousefi, B.
  • Cohen, E. A.
  • Haghighi, B.
  • Katz, S.
  • Noel, P. B.
  • Kontos, D.
  • Shinohara, R. T.
2022 Journal Article, cited 0 times
Website
Radiomic approaches in precision medicine are promising, but variation associated with image acquisition factors can result in severe biases and low generalizability. Multicenter datasets used in these studies are often heterogeneous in multiple imaging parameters and/or have missing information, resulting in multimodal radiomic feature distributions. ComBat is a promising harmonization tool, but it only harmonizes by single/known variables and assumes standardized input data are normally distributed. We propose a procedure that sequentially harmonizes for multiple batch effects in an optimized order, called OPNested ComBat. Furthermore, we propose to address bimodality by employing a Gaussian Mixture Model (GMM) grouping considered as either a batch variable (OPNested + GMM) or as a protected clinical covariate (OPNested - GMM). Methods were evaluated on features extracted with CapTK and PyRadiomics from two public lung computed tomography (CT) datasets. We found that OPNested ComBat improved harmonization performance over standard ComBat. OPNested + GMM ComBat exhibited the best harmonization performance but the lowest predictive performance, while OPNested - GMM ComBat showed poorer harmonization performance, but the highest predictive performance. Our findings emphasize that improved harmonization performance is no guarantee of improved predictive performance, and that these methods show promise for superior standardization of datasets heterogeneous in multiple or unknown imaging parameters and greater generalizability.
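The GMM grouping step is the most self-contained part of this method: fit a two-component mixture to a bimodal feature and use the component labels as a derived batch variable. A scikit-learn sketch with synthetic data; the OPNested ordering and the ComBat call itself are not reproduced here.

```python
# Sketch: derive a batch variable from a bimodal radiomic feature with a
# two-component Gaussian mixture (synthetic data for illustration).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
feature = np.concatenate([rng.normal(0, 1, 200), rng.normal(5, 1, 200)])  # bimodal

gmm = GaussianMixture(n_components=2, random_state=0)
batch_labels = gmm.fit_predict(feature.reshape(-1, 1))  # 0/1 label per scan

# 'batch_labels' can then be handed to a ComBat implementation (e.g. the
# neuroCombat package) as a batch variable, or protected as a covariate.
print(np.bincount(batch_labels))
```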

The Auto-Lindberg Project: Standardized Target Nomenclature in Radiation Oncology Enables Real-World Data Extraction From Radiation Treatment Plans

  • Hope, A.
  • Kim, J. W.
  • Kazmierski, M.
  • Welch, M.
  • Marsilla, J.
  • Huang, S. H.
  • Hosni, A.
  • Tadic, T.
  • Patel, T.
  • Haibe-Kains, B.
  • Waldron, J.
  • O'Sullivan, B.
  • Bratman, S.
2024 Journal Article, cited 0 times
Website
Treatment plan archives contain vast quantities of patient-specific data in a digital format, but are underused due to challenges in storage, retrieval, and analysis methodology. With standardized nomenclature and careful patient outcomes monitoring, treatment plans can be rich sources of data to explore relevant clinical questions. Even without outcomes, treatment plan archives contain data to address questions such as pretreatment disease distribution or institutional treatment strategies. A comprehensive understanding of cancer's natural history and lymph node (LN) distribution is critical to management of each patient's disease. Macroscopic tumor location has important implications for adjacent LN regions that may also harbor microscopic cancer involvement. Lindberg et al. demonstrated from large patient data sets that different head and neck cancer subsites had different distributions of involved LNs [1]. Similar population-based data are rare in the modern era [2], barring some surgical studies [3-5]. Nodal involvement risk estimates can help select patients for elective neck irradiation, including choices of ipsilateral versus bilateral treatment (e.g., oropharyngeal carcinoma [OPC]) [6]. In this study, an algorithm automatically extracted LN data from a large data set of treatment plans for patients with head and neck cancer. Further programmatic methods generated representative example “AutoLindberg” diagrams and summary tables regarding the extent of cervical LN involvement for clinically relevant patient subsets.

Approaches to uncovering cancer diagnostic and prognostic molecular signatures

  • Hong, Shengjun
  • Huang, Yi
  • Cao, Yaqiang
  • Chen, Xingwei
  • Han, Jing-Dong J
Molecular & Cellular Oncology 2014 Journal Article, cited 2 times
Website
The recent rapid development of high-throughput technology enables the study of molecular signatures for cancer diagnosis and prognosis at multiple levels, from genomic and epigenomic to transcriptomic. These unbiased large-scale scans provide important insights into the detection of cancer-related signatures. In addition to single-layer signatures, such as gene expression and somatic mutations, integrating data from multiple heterogeneous platforms using a systematic approach has been proven to be particularly effective for the identification of classification markers. This approach not only helps to uncover essential driver genes and pathways in the cancer network that are responsible for the mechanisms of cancer development, but will also lead us closer to the ultimate goal of personalized cancer therapy.

Modulation of Nogo receptor 1 expression orchestrates myelin-associated infiltration of glioblastoma

  • Hong, J. H.
  • Kang, S.
  • Sa, J. K.
  • Park, G.
  • Oh, Y. T.
  • Kim, T. H.
  • Yin, J.
  • Kim, S. S.
  • D'Angelo, F.
  • Koo, H.
  • You, Y.
  • Park, S.
  • Kwon, H. J.
  • Kim, C. I.
  • Ryu, H.
  • Lin, W.
  • Park, E. J.
  • Kim, Y. J.
  • Park, M. J.
  • Kim, H.
  • Kim, M. S.
  • Chung, S.
  • Park, C. K.
  • Park, S. H.
  • Kang, Y. H.
  • Kim, J. H.
  • Saya, H.
  • Nakano, I.
  • Gwak, H. S.
  • Yoo, H.
  • Lee, J.
  • Hur, E. M.
  • Shi, B.
  • Nam, D. H.
  • Iavarone, A.
  • Lee, S. H.
  • Park, J. B.
BRAIN 2021 Journal Article, cited 1 times
Website
As the clinical failure of glioblastoma treatment is attributed by multiple components, including myelin-associated infiltration, assessment of the molecular mechanisms underlying such process and identification of the infiltrating cells have been the primary objectives in glioblastoma research. Here, we adopted radiogenomic analysis to screen for functionally relevant genes that orchestrate the process of glioma cell infiltration through myelin and promote glioblastoma aggressiveness. The receptor of the Nogo ligand (NgR1) was selected as the top candidate through Differentially Expressed Genes (DEG) and Gene Ontology (GO) enrichment analysis. Gain and loss of function studies on NgR1 elucidated its underlying molecular importance in suppressing myelin-associated infiltration in vitro and in vivo. The migratory ability of glioblastoma cells on myelin is reversibly modulated by NgR1 during differentiation and dedifferentiation process through deubiquitinating activity of USP1, which inhibits the degradation of ID1 to downregulate NgR1 expression. Furthermore, pimozide, a well-known antipsychotic drug, upregulates NgR1 by post-translational targeting of USP1, which sensitizes glioma stem cells to myelin inhibition and suppresses myelin-associated infiltration in vivo. In primary human glioblastoma, downregulation of NgR1 expression is associated with highly infiltrative characteristics and poor survival. Together, our findings reveal that loss of NgR1 drives myelin-associated infiltration of glioblastoma and suggest that novel therapeutic strategies aimed at reactivating expression of NgR1 will improve the clinical outcome of glioblastoma patients.

Prostate Segmentation according to the PI-RADS standard using a 3D CNN

  • Holmlund, William
2022 Thesis, cited 0 times
Website
Segmentation of the prostate and its internal anatomical zones in magnetic resonance images is an important step in many diagnostic applications. This task can be time consuming, and is therefore a good candidate for introducing an automated method. The aim of this thesis has been to train a three-dimensional Convolutional Neural Network (CNN) that segments the prostate and its four anatomical zones, according to the global PI-RADS standard, for use as decision support in the delineation process. This was performed on a publicly available data set that included images for training (n=78) and validation (n=20). For the evaluation, an internal data set from the University Hospital of Umeå consisting of forty patients was used to test the generalization capability of the model. Prior to training, the delineations of the anterior fibromuscular stroma (AFS), the peripheral (PZ), central (CZ) and transitional (TZ) zones, as well as the prostatic urethra, were validated in collaboration with an experienced radiologist. On the test dataset, the Dice score for the segmentation of the prostate was 0.88, and for the internal zones: PZ: 0.72, CZ: 0.40, TZ: 0.72, U: 0.05, and AFS: 0.34. Accurate segmentation of the urethra was challenging due to structural differences between the data sets, so its results should be viewed as less relevant when reviewing the structures. In conclusion, the trained CNN can be used as decision support for prostate zone delineation.

Artificial CT images can enhance variation of case images in diagnostic radiology skills training

  • Hofmeijer, E. I. S.
  • Wu, S. C.
  • Vliegenthart, R.
  • Slump, C. H.
  • van der Heijden, F.
  • Tan, C. O.
Insights Imaging 2023 Journal Article, cited 0 times
Website
OBJECTIVES: We sought to investigate if artificial medical images can blend with original ones and whether they adhere to the variable anatomical constraints provided. METHODS: Artificial images were generated with a generative model trained on publicly available standard and low-dose chest CT images (805 scans; 39,803 2D images), of which 17% contained evidence of pathological formations (lung nodules). The test set (90 scans; 5121 2D images) was used to assess if artificial images (512 x 512 primary and control image sets) blended in with original images, using both quantitative metrics and expert opinion. We further assessed if pathology characteristics in the artificial images can be manipulated. RESULTS: Primary and control artificial images attained an average objective similarity of 0.78 +/- 0.04 (ranging from 0 [entirely dissimilar] to 1 [identical]) and 0.76 +/- 0.06, respectively. Five radiologists with experience in chest and thoracic imaging provided a subjective measure of image quality; they rated artificial images as 3.13 +/- 0.46 (range of 1 [unrealistic] to 4 [almost indistinguishable from the original image]), close to their rating of the original images (3.73 +/- 0.31). Radiologists clearly distinguished images in the control sets (2.32 +/- 0.48 and 1.07 +/- 0.19). In almost a quarter of the scenarios, they were not able to distinguish primary artificial images from the original ones. CONCLUSION: Artificial images can be generated in a way such that they blend in with original images and adhere to anatomical constraints, which can be manipulated to augment the variability of cases. CRITICAL RELEVANCE STATEMENT: Artificial medical images can be used to enhance the availability and variety of medical training images by creating new but comparable images that can blend in with original images. KEY POINTS: * Artificial images, similar to original ones, can be created using generative networks. * Pathological features of artificial images can be adjusted through guiding the network. * Artificial images proved viable to augment the depth and broadening of diagnostic training.
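The abstract does not name its objective similarity metric; SSIM is one common 0-to-1 choice for comparing CT slices and serves here purely as an illustration.

```python
# Sketch: SSIM as an example 0-to-1 objective similarity between a real and
# a synthetic CT slice (placeholder arrays; not the paper's exact metric).
import numpy as np
from skimage.metrics import structural_similarity

real = np.random.rand(512, 512).astype(np.float32)
synthetic = real + 0.05 * np.random.randn(512, 512).astype(np.float32)

score = structural_similarity(real, synthetic,
                              data_range=real.max() - real.min())
print(f"SSIM = {score:.3f}")  # 1.0 would mean identical images
```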

Deep learning models for classifying cancer and COVID-19 lung diseases

  • Hişam, Dua
  • Hişam, Enes
2021 Conference Paper, cited 0 times
Website
The use of Computed Tomography (CT) images for detecting lung diseases is both hard and time-consuming for humans. In the past few years, Artificial Intelligence (AI), especially deep learning models, has provided impressive results versus classical methods in many different fields. Nowadays, many researchers are trying to develop different deep learning mechanisms to improve the performance of lung disease screening systems with CT images. In this work, different deep learning-based models such as DarkNet-53 (the backbone of YOLO-v3), ResNet50, and VGG19 were applied to classify CT images of patients having coronavirus disease (COVID-19) or lung cancer. Each model's performance is presented, analyzed, and compared. The dataset used in the study came from two different sources: the large-scale CT dataset for lung cancer diagnosis (Lung-PET-CT-Dx) for lung cancer CT images, and the International COVID-19 Open Radiology Dataset (RICORD) for COVID-19 CT images. As a result, DarkNet-53 outperformed the other models by achieving 100% accuracy, while the accuracies for ResNet50 and VGG19 were 80% and 77%, respectively.

Gross tumour volume radiomics for prognostication of recurrence & death following radical radiotherapy for NSCLC

  • Hindocha, S.
  • Charlton, T. G.
  • Linton-Reid, K.
  • Hunter, B.
  • Chan, C.
  • Ahmed, M.
  • Greenlay, E. J.
  • Orton, M.
  • Bunce, C.
  • Lunn, J.
  • Doran, S. J.
  • Ahmad, S.
  • McDonald, F.
  • Locke, I.
  • Power, D.
  • Blackledge, M.
  • Lee, R. W.
  • Aboagye, E. O.
NPJ Precis Oncol 2022 Journal Article, cited 0 times
Website
Recurrence occurs in up to 36% of patients treated with curative-intent radiotherapy for NSCLC. Identifying patients at higher risk of recurrence for more intensive surveillance may facilitate the earlier introduction of the next line of treatment. We aimed to use radiotherapy planning CT scans to develop radiomic classification models that predict overall survival (OS), recurrence-free survival (RFS) and recurrence two years post-treatment for risk-stratification. A retrospective multi-centre study of >900 patients receiving curative-intent radiotherapy for stage I-III NSCLC was undertaken. Models using radiomic and/or clinical features were developed, compared with 10-fold cross-validation and an external test set, and benchmarked against TNM-stage. Respective validation and test set AUCs (with 95% confidence intervals) for the radiomic-only models were: (1) OS: 0.712 (0.592-0.832) and 0.685 (0.585-0.784), (2) RFS: 0.825 (0.733-0.916) and 0.750 (0.665-0.835), (3) Recurrence: 0.678 (0.554-0.801) and 0.673 (0.577-0.77). For the combined models: (1) OS: 0.702 (0.583-0.822) and 0.683 (0.586-0.78), (2) RFS: 0.805 (0.707-0.903) and 0.755 (0.672-0.838), (3) Recurrence: 0.637 (0.51-0.765) and 0.738 (0.649-0.826). Kaplan-Meier analyses demonstrate OS and RFS differences of >300 and >400 days, respectively, between low and high-risk groups. We have developed validated and externally tested radiomic-based prediction models. Such models could be integrated into the routine radiotherapy workflow, thus informing a personalised surveillance strategy at the point of treatment. Our work lays the foundations for future prospective clinical trials for quantitative personalised risk-stratification for surveillance following curative-intent radiotherapy for NSCLC.
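The Kaplan-Meier risk-group comparison described above pairs naturally with a log-rank test. A lifelines sketch on synthetic data; the median-split cut-off and the survival model are illustrative, not the study's.

```python
# Sketch: Kaplan-Meier stratification of model-defined risk groups plus a
# log-rank test, on synthetic data (illustrative cut-off and survival times).
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
score = rng.random(200)                          # model risk scores
time = rng.exponential(400 / (1 + score), 200)   # higher score -> shorter survival
event = rng.random(200) < 0.7                    # ~70% observed events
high = score > np.median(score)                  # median split into risk groups

km_high = KaplanMeierFitter().fit(time[high], event[high], label="high risk")
km_low = KaplanMeierFitter().fit(time[~high], event[~high], label="low risk")
print(km_high.median_survival_time_, km_low.median_survival_time_)

res = logrank_test(time[high], time[~high], event[high], event[~high])
print(f"log-rank p = {res.p_value:.4f}")
```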

Artificial Intelligence for Colorectal Polyps Classification Using 3D CNN

  • Hicham, Khadija
  • Laghmati, Sara
  • Tmiri, Amal
2023 Book Section, cited 0 times
Website
Convolutional Neural Networks (CNNs) have made remarkable progress in the medical field. CNNs are widely used to extract highly representative characteristics in the case of acute medical pathology. Through their fully connected layers, CNNs allow classification of the data: features are filtered and selected across the network layers and applied at the last layers. CNNs can support better prognosis, especially in the case of colorectal cancer (CRC) prevention. CRC develops from cells that line the inner lining of the colon. Mostly, it arises from a benign tumor, called a polyp, which slowly grows over time and can develop into malignant cells. Classification of 3D abdominal scans based on the presence or absence of polyps is therefore necessary to increase the chance of early detection of the disease and thus guide patients to appropriate treatment. In this work, we present and study a 3D CNN model for the processing and classification of polyps. The results show promising performance for a 12-layer 3D CNN model.

Automated Muscle Segmentation from Clinical CT using Bayesian U-Net for Personalized Musculoskeletal Modeling

  • Hiasa, Yuta
  • Otake, Yoshito
  • Takao, Masaki
  • Ogawa, Takeshi
  • Sugano, Nobuhiko
  • Sato, Yoshinobu
IEEE Trans Med Imaging 2019 Journal Article, cited 2 times
Website
We propose a method for automatic segmentation of individual muscles from a clinical CT. The method uses Bayesian convolutional neural networks with the U-Net architecture, using Monte Carlo dropout that infers an uncertainty metric in addition to the segmentation label. We evaluated the performance of the proposed method using two data sets: 20 fully annotated CTs of the hip and thigh regions and 18 partially annotated CTs that are publicly available from The Cancer Imaging Archive (TCIA) database. The experiments showed a Dice coefficient (DC) of 0.891+/-0.016 (mean+/-std) and an average symmetric surface distance (ASD) of 0.994+/-0.230 mm over 19 muscles in the set of 20 CTs. These results were statistically significant improvements compared to the state-of-the-art hierarchical multi-atlas method, which resulted in 0.845+/-0.031 DC and 1.556+/-0.444 mm ASD. We evaluated the validity of the uncertainty metric in the multi-class organ segmentation problem and demonstrated a correlation between the pixels with high uncertainty and the segmentation failure. One application of the uncertainty metric in active learning is demonstrated, and the proposed query pixel selection method considerably reduced the manual annotation cost for expanding the training data set. The proposed method allows an accurate patient-specific analysis of individual muscle shapes in a clinical routine. This would open up various applications including personalization of biomechanical simulation and quantitative evaluation of muscle atrophy.
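Monte Carlo dropout, the uncertainty mechanism used here, keeps dropout active at inference and treats the spread over repeated stochastic passes as the uncertainty map. A minimal PyTorch sketch with a toy network, not the paper's Bayesian U-Net:

```python
# Sketch of Monte Carlo dropout at test time: dropout layers stay active and
# the variance over T stochastic forward passes serves as the uncertainty map.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Dropout2d(p=0.5), nn.Conv2d(8, 2, 1))  # toy 2-class head

model.eval()
for m in model.modules():                  # re-enable only the dropout layers
    if isinstance(m, nn.Dropout2d):
        m.train()

x = torch.randn(1, 1, 64, 64)
with torch.no_grad():
    probs = torch.stack([model(x).softmax(dim=1) for _ in range(20)])  # T=20 passes

mean_prob = probs.mean(dim=0)              # segmentation prediction
uncertainty = probs.var(dim=0).sum(dim=1)  # high where the passes disagree
print(mean_prob.shape, uncertainty.shape)  # (1, 2, 64, 64) (1, 64, 64)
```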

Design of a Patient-Specific Radiotherapy Treatment Target

  • Heyns, Michael
  • Breseman, Kelsey
  • Lee, Christopher
  • Bloch, B Nicholas
  • Jaffe, Carl
  • Xiang, Hong
2013 Conference Proceedings, cited 3 times
Website
This paper describes the design of a patient-specific, radiotherapy quality assurance target that can be used to verify a treatment plan by measurement of actual dosage. Starting from a patient's (segmented) MR images, a physical model containing insertable cartridges for holding dosimeters is printed in 3D. Dosimeters can be located at specific locations of interest (e.g., tumor, nerve bundles, urethra). The model (dosimeter insert) can be placed into a pelvis 'shell' and subjected to a specified treatment plan. A design for the dosimeter insert can be efficiently fabricated using rapid prototyping techniques.

Convolutional Neural Networks for Multi-scale Lung Nodule Classification in CT: Influence of Hyperparameter Tuning on Performance

  • Hernández-Rodríguez, Jorge
  • Cabrero-Fraile, Francisco-Javier
  • Rodríguez-Conde, María-José
TEM Journal 2022 Journal Article, cited 0 times
Website
In this study, a system based on Convolutional Neural Networks for differentiating lung nodules and non-nodules in Computed Tomography is developed. Multi-scale patches, extracted from the LIDC-IDRI database, are used to train different CNN models. Adjustable hyperparameters are modified sequentially to study their influence, evaluate the learning process, and find the best-performing network for each patch size. Classification accuracies obtained are above 87% for all sizes, with areas under the Receiver Operating Characteristic curve in the interval 0.936-0.951. Trained models are tested with nodules from an independent database, providing sensitivities above 96%. The performance of the trained models is similar to that of other published articles and shows good classification capability. As a basis for developing CAD systems, recommendations regarding hyperparameter tuning are provided.
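The sequential hyperparameter study described here amounts to a sweep over a small grid; a schematic loop in which build_cnn, train, and evaluate_auc are hypothetical helpers standing in for the paper's pipeline:

```python
from itertools import product

learning_rates = [1e-2, 1e-3, 1e-4]
batch_sizes = [32, 64]
dropout_rates = [0.25, 0.5]

best = None
for lr, bs, dr in product(learning_rates, batch_sizes, dropout_rates):
    model = build_cnn(dropout=dr)                   # hypothetical model factory
    train(model, lr=lr, batch_size=bs)              # hypothetical training loop
    auc = evaluate_auc(model)                       # hypothetical ROC-AUC eval
    if best is None or auc > best[0]:
        best = (auc, dict(lr=lr, batch_size=bs, dropout=dr))
print("best AUC %.3f with %s" % best)
```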

Transfer learning with multiple convolutional neural networks for soft tissue sarcoma MRI classification

  • Hermessi, Haithem
  • Mourali, Olfa
  • Zagrouba, Ezzeddine
2019 Conference Proceedings, cited 1 times
Website

Deep Feature Learning For Soft Tissue Sarcoma Classification In MR Images Via Transfer Learning

  • Hermessi, Haithem
  • Mourali, Olfa
  • Zagrouba, Ezzeddine
Expert Systems with Applications 2018 Journal Article, cited 0 times
Website

Brain Tumor Segmentation with Self-ensembled, Deeply-Supervised 3D U-Net Neural Networks: A BraTS 2020 Challenge Solution

  • Henry, Théophraste
  • Carré, Alexandre
  • Lerousseau, Marvin
  • Estienne, Théo
  • Robert, Charlotte
  • Paragios, Nikos
  • Deutsch, Eric
2021 Book Section, cited 0 times
Brain tumor segmentation is a critical task for a patient's disease management. In order to automate and standardize this task, we trained multiple U-net like neural networks, mainly with deep supervision and stochastic weight averaging, on the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2020 training dataset. Two independent ensembles of models from two different training pipelines were trained, and each produced a brain tumor segmentation map. These two labelmaps per patient were then merged, taking into account the performance of each ensemble for specific tumor subregions. Our performance on the online validation dataset with test time augmentation was as follows: Dice of 0.81, 0.91 and 0.85; Hausdorff (95%) of 20.6, 4.3 and 5.7 mm for the enhancing tumor, whole tumor and tumor core, respectively. Similarly, our solution achieved a Dice of 0.79, 0.89 and 0.84, as well as Hausdorff (95%) of 20.4, 6.7 and 19.5 mm on the final test dataset, ranking us among the top ten teams. More complicated training schemes and neural network architectures were investigated without significant performance gain at the cost of greatly increased training time. Overall, our approach yielded good and balanced performance for each tumor subregion. Our solution is open sourced at https://github.com/lescientifik/open_brats2020.
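Stochastic weight averaging, one ingredient of the training pipelines above, is available directly in PyTorch; a minimal sketch with a stand-in network and dummy data (epoch counts and learning rates are illustrative):

```python
import torch
import torch.nn as nn
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn

model = nn.Sequential(nn.Conv3d(4, 8, 3, padding=1), nn.BatchNorm3d(8),
                      nn.ReLU(), nn.Conv3d(8, 3, 1))   # stand-in network
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
swa_model = AveragedModel(model)            # maintains the weight average
swa_scheduler = SWALR(optimizer, swa_lr=5e-3)

data = [(torch.randn(2, 4, 16, 16, 16),) for _ in range(4)]  # dummy loader
for epoch in range(10):
    for (x,) in data:
        loss = model(x).mean()              # stand-in loss
        optimizer.zero_grad(); loss.backward(); optimizer.step()
    if epoch >= 7:                          # start averaging late in training
        swa_model.update_parameters(model)
        swa_scheduler.step()

update_bn(data, swa_model)                  # refresh batch-norm statistics
```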

Deep learning for the detection of benign and malignant pulmonary nodules in non-screening chest CT scans

  • Hendrix, W.
  • Hendrix, N.
  • Scholten, E. T.
  • Mourits, M.
  • Trap-de Jong, J.
  • Schalekamp, S.
  • Korst, M.
  • van Leuken, M.
  • van Ginneken, B.
  • Prokop, M.
  • Rutten, M.
  • Jacobs, C.
2023 Journal Article, cited 0 times
Website
BACKGROUND: Outside a screening program, early-stage lung cancer is generally diagnosed after the detection of incidental nodules in clinically ordered chest CT scans. Despite the advances in artificial intelligence (AI) systems for lung cancer detection, clinical validation of these systems is lacking in a non-screening setting. METHOD: We developed a deep learning-based AI system and assessed its performance for the detection of actionable benign nodules (requiring follow-up), small lung cancers, and pulmonary metastases in CT scans acquired in two Dutch hospitals (internal and external validation). A panel of five thoracic radiologists labeled all nodules, and two additional radiologists verified the nodule malignancy status and searched for any missed cancers using data from the national Netherlands Cancer Registry. The detection performance was evaluated by measuring the sensitivity at predefined false positive rates on a free receiver operating characteristic curve and was compared with the panel of radiologists. RESULTS: On the external test set (100 scans from 100 patients), the sensitivity of the AI system for detecting benign nodules, primary lung cancers, and metastases is respectively 94.3% (82/87, 95% CI: 88.1-98.8%), 96.9% (31/32, 95% CI: 91.7-100%), and 92.0% (104/113, 95% CI: 88.5-95.5%) at a clinically acceptable operating point of 1 false positive per scan (FP/s). These sensitivities are comparable to or higher than those of the radiologists, albeit with a slightly higher FP/s (average difference of 0.6). CONCLUSIONS: The AI system reliably detects benign and malignant pulmonary nodules in clinically indicated CT scans and can potentially assist radiologists in this setting. Early-stage lung cancer can be diagnosed after identifying an abnormal spot on a chest CT scan ordered for other medical reasons. These spots or lung nodules can be overlooked by radiologists, as they are not necessarily the focus of an examination and can be as small as a few millimeters. Software using Artificial Intelligence (AI) technology has proven to be successful for aiding radiologists in this task, but its performance is understudied outside a lung cancer screening setting. We therefore developed and validated AI software for the detection of cancerous nodules or non-cancerous nodules that would need attention. We show that the software can reliably detect these nodules in a non-screening setting and could potentially aid radiologists in daily clinical practice.

Multiparametric MRI of prostate cancer: An update on state‐of‐the‐art techniques and their performance in detecting and localizing prostate cancer

  • Hegde, John V
  • Mulkern, Robert V
  • Panych, Lawrence P
  • Fennessy, Fiona M
  • Fedorov, Andriy
  • Maier, Stephan E
  • Tempany, Clare
Journal of Magnetic Resonance Imaging 2013 Journal Article, cited 164 times
Website
Magnetic resonance (MR) examinations of men with prostate cancer are most commonly performed for detecting, characterizing, and staging the extent of disease to best determine diagnostic or treatment strategies, which range from biopsy guidance to active surveillance to radical prostatectomy. Given both the exam's importance to individual treatment plans and the time constraints present for its operation at most institutions, it is essential to perform the study effectively and efficiently. This article reviews the most commonly employed modern techniques for prostate cancer MR examinations, exploring the relevant signal characteristics from the different methods discussed and relating them to intrinsic prostate tissue properties. Also, a review of recent articles using these methods to enhance clinical interpretation and assess clinical performance is provided.

Multi-class classification of breast cancer abnormalities using Deep Convolutional Neural Network (CNN)

  • Heenaye-Mamode Khan, M.
  • Boodoo-Jahangeer, N.
  • Dullull, W.
  • Nathire, S.
  • Gao, X.
  • Sinha, G. R.
  • Nagwanshi, K. K.
PLoS One 2021 Journal Article, cited 0 times
Website
The real cause of breast cancer is very challenging to determine, and therefore early detection of the disease is necessary for reducing the death rate due to risks of breast cancer. Early detection can boost the chance of survival by up to 8%. Primarily, breast images from mammograms, X-rays or MRI are analyzed by radiologists to detect abnormalities. However, even experienced radiologists face problems in identifying features like micro-calcifications, lumps and masses, leading to high false-positive and false-negative rates. Recent advancements in image processing and deep learning offer hope for devising more enhanced applications that can be used for the early detection of breast cancer. In this work, we have developed a Deep Convolutional Neural Network (CNN) to segment and classify the various types of breast abnormalities, such as calcifications, masses, asymmetry and carcinomas, unlike existing research work, which mainly classified the cancer into benign and malignant, leading to improved disease management. Firstly, transfer learning was carried out on our dataset using the pre-trained model ResNet50. Along similar lines, we have developed an enhanced deep learning model, in which the learning rate is considered one of the most important attributes while training the neural network. The learning rate is set adaptively in our proposed model based on changes in the error curves during the learning process. The proposed deep learning model has achieved a performance of 88% in the classification of these four types of breast cancer abnormalities: masses, calcifications, carcinomas and asymmetry mammograms.
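Adaptive learning-rate control of the kind described here (reducing the rate when the error curve stalls) is commonly implemented with a plateau scheduler; a minimal sketch, with a stand-in model and placeholder validation loss rather than the authors' exact scheme:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 4)                      # stand-in for the CNN
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=3)  # halve LR when loss stalls

for epoch in range(30):
    val_loss = torch.rand(1).item()           # placeholder validation loss
    scheduler.step(val_loss)                  # LR adapts to the error curve
```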

A Comparison of the Efficiency of Using a Deep CNN Approach with Other Common Regression Methods for the Prediction of EGFR Expression in Glioblastoma Patients

  • Hedyehzadeh, Mohammadreza
  • Maghooli, Keivan
  • MomenGharibvand, Mohammad
  • Pistorius, Stephen
J Digit Imaging 2020 Journal Article, cited 0 times
Website
To estimate the epidermal growth factor receptor (EGFR) expression level in glioblastoma (GBM) patients using radiogenomic analysis of magnetic resonance images (MRI). A comparative study using a deep convolutional neural network (CNN)-based regression, a deep neural network, least absolute shrinkage and selection operator (LASSO) regression, elastic net regression, and linear regression with no regularization was carried out to estimate the EGFR expression of 166 GBM patients. Except for the deep CNN case, overfitting was prevented by using feature selection, and loss values for each method were compared. The loss values in the training phase for the deep CNN, deep neural network, elastic net, LASSO, and linear regression with no regularization were 2.90, 8.69, 7.13, 14.63, and 21.76, respectively, while in the test phase, the loss values were 5.94, 10.28, 13.61, 17.32, and 24.19, respectively. These results illustrate that the efficiency of the deep CNN approach is better than that of the other methods, including LASSO regression, a regression method known for its advantage in high-dimensional cases. A comparison between the deep CNN, deep neural network, and three other common regression methods was carried out, and the efficiency of the deep CNN learning approach, in comparison with the other regression models, was demonstrated.
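The regression baselines compared here are all available in scikit-learn; a minimal comparison sketch on synthetic high-dimensional data, where the random feature matrix stands in for image-derived features and mean squared error stands in for the paper's loss:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso, ElasticNet
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(166, 500))              # 166 cases, 500 image features
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=166)  # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, reg in [("linear", LinearRegression()),
                  ("lasso", Lasso(alpha=0.1)),
                  ("elastic net", ElasticNet(alpha=0.1, l1_ratio=0.5))]:
    reg.fit(X_tr, y_tr)
    print(name, mean_squared_error(y_te, reg.predict(X_te)))
```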

Fast Super-Resolution in MRI Images Using Phase Stretch Transform, Anchored Point Regression and Zero-Data Learning

  • He, Sifeng
  • Jalali, Bahram
2019 Conference Proceedings, cited 0 times
Website
Medical imaging is fundamentally challenging due to absorption and scattering in tissues and by the need to minimize illumination of the patient with harmful radiation. Common problems are low spatial resolution, limited dynamic range and low contrast. These predicaments have fueled interest in enhancing medical images using digital post processing. In this paper, we propose and demonstrate an algorithm for real-time inference that is suitable for edge computing. Our locally adaptive learned filtering technique named Phase Stretch Anchored Regression (PhSAR) combines the Phase Stretch Transform for local features extraction in visually impaired images with clustered anchored points to represent image feature space and fast regression based learning. In contrast with the recent widely-used deep neural network for image super-resolution, our algorithm achieves significantly faster inference and less hallucination on image details and is interpretable. Tests on brain MRI images using zero-data learning reveal its robustness with explicit PSNR improvement and lower latency compared to relevant benchmarks.

Feasibility study of a multi-criteria decision-making based hierarchical model for multi-modality feature and multi-classifier fusion: Applications in medical prognosis prediction

  • He, Qiang
  • Li, Xin
  • Kim, DW Nathan
  • Jia, Xun
  • Gu, Xuejun
  • Zhen, Xin
  • Zhou, Linghong
Information Fusion 2020 Journal Article, cited 0 times
Website

MTF1 has the potential as a diagnostic and prognostic marker for gastric cancer and is associated with good prognosis

  • He, J.
  • Jiang, X.
  • Yu, M.
  • Wang, P.
  • Fu, L.
  • Zhang, G.
  • Cai, H.
2023 Journal Article, cited 0 times
Website
PURPOSE: Metal Regulatory Transcription Factor 1 (MTF1) is an essential transcription factor for the heavy-metal response in cells and can also reduce oxidative and hypoxic stress. However, current research on MTF1 in gastric cancer is lacking. METHODS: Bioinformatics techniques were used to perform expression analysis, prognostic analysis, enrichment analysis, tumor microenvironment correlation analysis, immunotherapy Immune Cell Proportion Score (IPS) correlation analysis and drug sensitivity correlation analysis of MTF1 in gastric cancer. qRT-PCR was used to verify MTF1 expression in gastric cancer cells and tissues. RESULTS: MTF1 showed low expression in gastric cancer cells and tissues, and lower expression in T3 stage compared with T1 stage. KM prognostic analysis showed that high expression of MTF1 was significantly associated with longer overall survival (OS), first progression (FP) and post-progression survival (PPS) in gastric cancer patients. Cox regression analysis showed that MTF1 was an independent prognostic factor and a protective factor in gastric cancer patients. MTF1 is involved in pathways in cancer, and high expression of MTF1 is negatively correlated with the half-maximal inhibitory concentration (IC50) of common chemotherapeutic drugs. CONCLUSION: MTF1 is expressed at relatively low levels in gastric cancer. MTF1 is also an independent prognostic factor for gastric cancer patients and is associated with good prognosis. It has the potential to be a diagnostic and prognostic marker for gastric cancer.

A biomarker basing on radiomics for the prediction of overall survival in non–small cell lung cancer patients

  • He, Bo
  • Zhao, Wei
  • Pi, Jiang-Yuan
  • Han, Dan
  • Jiang, Yuan-Ming
  • Zhang, Zhen-Guang
Respiratory research 2018 Journal Article, cited 0 times
Website

Descriptions and evaluations of methods for determining surface curvature in volumetric data

  • Hauenstein, Jacob D.
  • Newman, Timothy S.
Computers & Graphics 2020 Journal Article, cited 0 times
Website
Highlights:
  • Methods using convolution or fitting are often the most accurate.
  • The existing TE method is fast and accurate on noise-free data.
  • The OP method is faster than existing, similarly accurate methods on real data.
  • Even modest errors in curvature notably impact curvature-based renderings.
  • On real data, GSTH, GSTI, and OP produce the best curvature-based renderings.
Three methods developed for determining surface curvature in volumetric data are described, including one convolution-based method, one fitting-based method, and one method that uses normal estimates to directly determine curvature. Additionally, a study of the accuracy and computational performance of these methods and prior methods is presented. The study considers synthetic data, noise-added synthetic data, and real data. Sample volume renderings using curvature-based transfer functions, where curvatures were determined with the methods, are also exhibited.

Fully Automated MR Based Virtual Biopsy of Cerebral Gliomas

  • Haubold, Johannes
  • Hosch, René
  • Parmar, Vicky
  • Glas, Martin
  • Guberina, Nika
  • Catalano, Onofrio Antonio
  • Pierscianek, Daniela
  • Wrede, Karsten
  • Deuschl, Cornelius
  • Forsting, Michael
  • Nensa, Felix
  • Flaschel, Nils
  • Umutlu, Lale
Cancers 2021 Journal Article, cited 0 times
Website
OBJECTIVE: The aim of this study was to investigate the diagnostic accuracy of a radiomics analysis based on a fully automated segmentation and a simplified and robust MR imaging protocol to provide a comprehensive analysis of the genetic profile and grading of cerebral gliomas for everyday clinical use. METHODS: MRI examinations of 217 therapy-naive patients with cerebral gliomas, each comprising a non-contrast T1-weighted, FLAIR and contrast-enhanced T1-weighted sequence, were included in the study. In addition, clinical and laboratory parameters were incorporated into the analysis. The BraTS 2019 pretrained DeepMedic network was used for automated segmentation. The segmentations generated by DeepMedic were evaluated with 200 manual segmentations with a DICE score of 0.8082 +/- 0.1321. Subsequently, the radiomics signatures were utilized to predict the genetic profile of ATRX, IDH1/2, MGMT and 1p19q co-deletion, as well as differentiating low-grade glioma from high-grade glioma. RESULTS: The network provided an AUC (validation/test) for the differentiation between low-grade gliomas vs. high-grade gliomas of 0.981 +/- 0.015/0.885 +/- 0.02. The best results were achieved for the prediction of the ATRX expression loss with AUCs of 0.979 +/- 0.028/0.923 +/- 0.045, followed by 0.929 +/- 0.042/0.861 +/- 0.023 for the prediction of IDH1/2. The prediction of 1p19q and MGMT achieved moderate results, with AUCs of 0.999 +/- 0.005/0.711 +/- 0.128 for 1p19q and 0.854 +/- 0.046/0.742 +/- 0.050 for MGMT. CONCLUSION: This fully automated approach utilizing simplified MR protocols to predict the genetic profile and grading of cerebral gliomas provides an easy and efficient method for non-invasive tumor decoding. SIMPLE SUMMARY: Over the past few years, radiomics-based tissue characterization has demonstrated its potential for non-invasive prediction of the genetic profile and grading in cerebral gliomas using multiparametric MRI. The aim of our study was to investigate the feasibility and diagnostic accuracy of a fully automated radiomics analysis based on a simplified MR protocol derived from various scanner systems to prospectively ease the transition of radiomics-based non-invasive tissue sampling into clinical practice. Using an MRI with non-contrast and post-contrast T1-weighted sequences and FLAIR, our workflow automatically predicts the IDH1/2 mutation, the ATRX expression loss, the 1p19q co-deletion and the MGMT methylation status. It also effectively differentiates low-grade from high-grade gliomas. In summary, the present study demonstrated that a fully automated prediction of grading and the genetic profile of cerebral gliomas could be performed with our proposed method using a simplified MRI protocol that is robust to variations in scanner systems, imaging parameters and field strength.

Centerline detection and estimation of pancreatic duct from abdominal CT images

  • Hattori, Chihiro
  • Furukawa, Daisuke
  • Yamazaki, Fukashi
  • Fujisawa, Yasuko
  • Sakaguchi, Takuya
  • Išgum, Ivana
  • Colliot, Olivier
2022 Conference Paper, cited 1 times
Website
Purpose: The aim of this work is to automatically and accurately detect and estimate the centerline of the pancreatic duct. The proposed method uses four different algorithms for tracking the pancreatic duct, one for each of four pancreatic zones. Method: The pancreatic duct was divided into four zones: Zone A has a clearly delineated pancreatic duct, Zone B is obscured, Zone C runs from the visible segment to the pancreas' tail, and Zone D extends from the head of the pancreas to the first visible point. The pancreatic duct is obscured in regions of lengths from 10-40 mm. The proposed method combines a deep learning CNN for duct segmentation with Dijkstra's routing algorithm for estimation of the centerline in Zones A and B. In Zones C and D, the centerline was estimated using geometric information. The reference standard for the pancreatic duct was determined from non-obscured data by skilled technologists. Results: Zone A, which used the neural network method, had a success rate of 94%. In Zone B, the difference was <3 mm when the obscured interval was 10-40 mm. In Zones C and D, the distance between the computer-estimated pancreas head and tail points and the operator-determined anatomical points was 10 mm and 19 mm, respectively. Optimal characteristic cost functions for each zone allow the natural centerline to be estimated even in obscured regions. The new algorithms increased the average visible centerline length by 146% with a calculation time of <40 seconds.
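Dijkstra's algorithm, used here to route the centerline, finds a minimum-cost path through a cost map that is low inside the predicted duct; a minimal 2D illustration in which the cost grid is a stand-in for the CNN's duct probability:

```python
import heapq
import numpy as np

def dijkstra_path(cost, start, goal):
    """Minimum-cost 4-connected path over a 2D cost grid."""
    h, w = cost.shape
    dist, prev, heap = {start: 0.0}, {}, [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        y, x = node
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + cost[ny, nx]          # cheap to step along the duct
                if nd < dist.get((ny, nx), float("inf")):
                    dist[(ny, nx)] = nd
                    prev[(ny, nx)] = node
                    heapq.heappush(heap, (nd, (ny, nx)))
    path, node = [], goal
    while node != start:                        # walk predecessors back
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]

cost = 1.0 - np.eye(8)   # cheap diagonal band standing in for the duct
print(dijkstra_path(cost, (0, 0), (7, 7)))
```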

Segmentation of Kidney Tumors on Non-Contrast CT Images Using Protuberance Detection Network

  • Hatsutani, Taro
  • Ichinose, Akimichi
  • Nakamura, Keigo
  • Kitamura, Yoshiro
2023 Book Section, cited 0 times
Many renal cancers are incidentally found on non-contrast CT (NCCT) images. On contrast-enhanced CT (CECT) images, most kidney tumors, especially renal cancers, have different intensity values compared to normal tissues. However, on NCCT images, some tumors called isodensity tumors, have similar intensity values to the surrounding normal tissues, and can only be detected through a change in organ shape. Several deep learning methods which segment kidney tumors from CECT images have been proposed and showed promising results. However, these methods fail to capture such changes in organ shape on NCCT images. In this paper, we present a novel framework, which can explicitly capture protruded regions in kidneys to enable a better segmentation of kidney tumors. We created a synthetic mask dataset that simulates a protuberance, and trained a segmentation network to separate the protruded regions from the normal kidney regions. To achieve the segmentation of whole tumors, our framework consists of three networks. The first network is a conventional semantic segmentation network which extracts a kidney region mask and an initial tumor region mask. The second network, which we name protuberance detection network, identifies the protruded regions from the kidney region mask. Given the initial tumor region mask and the protruded region mask, the last network fuses them and predicts the final kidney tumor mask accurately. The proposed method was evaluated on a publicly available KiTS19 dataset, which contains 108 NCCT images, and showed that our method achieved a higher dice score of 0.615 (+0.097) and sensitivity of 0.721 (+0.103) compared to 3D-UNet. To the best of our knowledge, this is the first deep learning method that is specifically designed for kidney tumor segmentation on NCCT images.

Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images

  • Hatamizadeh, Ali
  • Nath, Vishwesh
  • Tang, Yucheng
  • Yang, Dong
  • Roth, Holger R.
  • Xu, Daguang
2022 Book Section, cited 0 times
Semantic segmentation of brain tumors is a fundamental medical image analysis task involving multiple MRI imaging modalities that can assist clinicians in diagnosing the patient and successively studying the progression of the malignant entity. In recent years, Fully Convolutional Neural Networks (FCNNs) approaches have become the de facto standard for 3D medical image segmentation. The popular “U-shaped” network architecture has achieved state-of-the-art performance benchmarks on different 2D and 3D semantic segmentation tasks and across various imaging modalities. However, due to the limited kernel size of convolution layers in FCNNs, their performance of modeling long-range information is sub-optimal, and this can lead to deficiencies in the segmentation of tumors with variable sizes. On the other hand, transformer models have demonstrated excellent capabilities in capturing such long-range information in multiple domains, including natural language processing and computer vision. Inspired by the success of vision transformers and their variants, we propose a novel segmentation model termed Swin UNEt TRansformers (Swin UNETR). Specifically, the task of 3D brain tumor semantic segmentation is reformulated as a sequence to sequence prediction problem wherein multi-modal input data is projected into a 1D sequence of embedding and used as an input to a hierarchical Swin transformer as the encoder. The swin transformer encoder extracts features at five different resolutions by utilizing shifted windows for computing self-attention and is connected to an FCNN-based decoder at each resolution via skip connections. We have participated in BraTS 2021 segmentation challenge, and our proposed model ranks among the top-performing approaches in the validation phase. Code: https://monai.io/research/swin-unetr.
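The Swin UNETR implementation ships with MONAI; a minimal usage sketch following the four-modality BraTS setup described above (constructor details such as img_size vary across MONAI releases, so this assumes a version where it is accepted):

```python
import torch
from monai.networks.nets import SwinUNETR

# Four MRI modalities in, three tumor subregions out (BraTS-style setup)
model = SwinUNETR(img_size=(128, 128, 128),  # required in older MONAI releases
                  in_channels=4, out_channels=3, feature_size=48)
x = torch.randn(1, 4, 128, 128, 128)         # one multi-modal volume
logits = model(x)                            # (1, 3, 128, 128, 128)
```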

Breast cancer masses classification using deep convolutional neural networks and transfer learning

  • Hassan, Shayma’a A.
  • Sayed, Mohammed S.
  • Abdalla, Mahmoud I.
  • Rashwan, Mohsen A.
Multimedia Tools and Applications 2020 Journal Article, cited 0 times
Website
With the recent advances in the deep learning field, the use of deep convolutional neural networks (DCNNs) in biomedical image processing becomes very encouraging. This paper presents a new classification model for breast cancer masses based on DCNNs. We investigated the use of transfer learning from AlexNet and GoogleNet pre-trained models to suit this task. We experimentally determined the best DCNN model for accurate classification by comparing different models, which vary according to the design and hyper-parameters. The effectiveness of these models were demonstrated using four mammogram databases. All models were trained and tested using a mammographic dataset from CBIS-DDSM and INbreast databases to select the best AlexNet and GoogleNet models. The performance of the two proposed models was further verified using images from Egyptian National Cancer Institute (NCI) and MIAS database. When tested on CBIS-DDSM and INbreast databases, the proposed AlexNet model achieved an accuracy of 100% for both databases. While, the proposed GoogleNet model achieved accuracy of 98.46% and 92.5%, respectively. When tested on NCI images and MIAS databases, AlexNet achieved an accuracy of 97.89% with AUC of 98.32%, and accuracy of 98.53% with AUC of 98.95%, respectively. GoogleNet achieved an accuracy of 91.58% with AUC of 96.5%, and accuracy of 88.24% with AUC of 94.65%, respectively. These results suggest that AlexNet has better performance and more robustness than GoogleNet. To the best of our knowledge, the proposed AlexNet model outperformed the latest methods. It achieved the highest accuracy and AUC score and the lowest testing time reported on CBIS-DDSM, INbreast and MIAS databases.

Prostate cancer classification from ultrasound and MRI images using deep learning based Explainable Artificial Intelligence

  • Hassan, Md Rafiul
  • Islam, Md Fakrul
  • Uddin, Md Zia
  • Ghoshal, Goutam
  • Hassan, Mohammad Mehedi
  • Huda, Shamsul
  • Fortino, Giancarlo
Future Generation Computer Systems 2022 Journal Article, cited 0 times
Website

Classification of Covid-19 Coronavirus, Pneumonia and Healthy Lungs in CT Scans Using Q-Deformed Entropy and Deep Learning Features

  • Hasan, Ali M.
  • Al-Jawad, Mohammed M.
  • Jalab, Hamid A.
  • Shaiba, Hadil
  • Ibrahim, Rabha W.
  • Al-Shamasneh, Ala’a R.
Entropy 2020 Journal Article, cited 0 times
Website
Many health systems around the world have collapsed due to limited capacity and a dramatic increase in suspected COVID-19 cases. What has emerged is the need for an efficient, quick and accurate method to reduce radiologists' workload in diagnosing the suspected cases. This study presents the combination of deep-learned features with Q-deformed entropy handcrafted features for discriminating between COVID-19 coronavirus, pneumonia and healthy computed tomography (CT) lung scans. In this study, pre-processing is used to reduce the effect of intensity variations between CT slices. Then histogram thresholding is used to isolate the background of the CT lung scan. Each CT lung scan undergoes feature extraction involving deep learning and a Q-deformed entropy algorithm. The obtained features are classified using a long short-term memory (LSTM) neural network classifier. Subsequently, combining all extracted features significantly improves the performance of the LSTM network to precisely discriminate between COVID-19, pneumonia and healthy cases. The maximum achieved accuracy for classifying the collected dataset, comprising 321 patients, is 99.68%.

QGFormer: Queries-guided transformer for flexible medical image synthesis with domain missing

  • Hao, Huaibo
  • Xue, Jie
  • Huang, Pu
  • Ren, Liwen
  • Li, Dengwang
Expert Systems with Applications 2024 Journal Article, cited 0 times
Website
Domain missing poses a common challenge in medical clinical practice, limiting diagnostic accuracy compared to the complete multi-domain images that provide complementary information. We propose QGFormer to address this issue by flexibly imputing missing domains from any available source domain using a single model, which is challenging due to (1) the inherent limitation of CNNs to capture long-range dependencies, (2) the difficulty in modeling the inter- and intra-domain dependencies of multi-domain images, and (3) inefficiencies in fusing domain-specific features associated with missing domains. To tackle these challenges, we introduce two spatial-domanial attentions (SDAs), which establish intra-domain (spatial dimension) and inter-domain (domain dimension) dependencies independently or jointly. QGFormer, constructed based on SDAs, comprises three components: Encoder, Decoder and Fusion. The Encoder and Decoder form the backbone, modeling contextual dependencies to create a hierarchical representation of features. The QGFormer Fusion then adaptively aggregates these representations to synthesize specific missing domains from coarse to fine, guided by learnable domain queries. This process is interpretable because the attention scores in Fusion indicate how much attention the target domains pay to different inputs and regions. In addition, the scalable architecture enables QGFormer to segment tumors with domain missing by replacing domain queries with segment queries. Extensive experiments demonstrate that our approach achieves consistent improvements in multi-domain imputation, cross-domain image translation and multitask of synthesis and segmentation.

Revisiting Iterative Highly Efficient Optimisation Schemes in Medical Image Registration

  • Hansen, Lasse
  • Heinrich, Mattias P
2021 Conference Proceedings, cited 0 times
Website

Classification of Lung Nodule from CT and PET/CT Images Using Artificial Neural Network

  • Hansdah, Malho
  • Singh, Koushlendra Kumar
2023 Book Section, cited 0 times
Website
This work aims to design and develop an artificial neural network (ANN) architecture for the classification of cancerous tissue in the lung. A sequential model is used for the machine learning process, with ReLU and sigmoid activation functions in the network layers. The present work encompasses detecting and classifying the tumor cells into four categories. The four types of lung cancer nodules are adenocarcinoma, squamous-cell carcinoma, large-cell carcinoma, and small-cell carcinoma. Computed tomography (CT) and positron emission tomography (PET) scan DICOM images are used for the classification. The proposed approach has been validated on a subset of the original dataset. A total of 6500 images have been taken in the experiment. The approach is to feed the CT scan images into the ANN and classify each image as the correct type. The dataset is provided by The Cancer Imaging Archive (TCIA) and is titled "A Large-Scale CT and PET/CT Dataset for Lung Cancer Diagnosis." The tumor cells are classified using the ANN architecture with 99.6% validation accuracy and 4.35% loss.
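The sequential ANN described here can be sketched in Keras; a minimal illustration in which the layer sizes and the 64x64 patch shape are assumptions, with ReLU hidden layers and a softmax head over the four nodule types:

```python
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(64, 64)),                    # CT image patch
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(4, activation="softmax"),    # four carcinoma classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

x = np.random.rand(32, 64, 64).astype("float32")    # dummy patches
y = np.random.randint(0, 4, size=32)                # dummy class labels
model.fit(x, y, epochs=1, verbose=0)
```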

Stability and Reproducibility of Radiomic Features Based Various Segmentation Technique on MR Images of Hepatocellular Carcinoma (HCC)

  • Haniff, N. S. M.
  • Abdul Karim, M. K.
  • Osman, N. H.
  • Saripan, M. I.
  • Che Isa, I. N.
  • Ibahim, M. J.
Diagnostics (Basel) 2021 Journal Article, cited 1 times
Website
Hepatocellular carcinoma (HCC) is a complex liver disease with the eighth-highest mortality rate and a prevalence of 2.4% in Malaysia. Magnetic resonance imaging (MRI) has been acknowledged for its advantages as a gold-standard technique for diagnosing HCC, yet false-negative diagnoses from the examinations are inevitable. In this study, 30 MR images from patients diagnosed with HCC are used to evaluate the robustness of semi-automatic segmentation using the flood-fill algorithm for quantitative feature extraction. The relevant features were extracted from the segmented MR images of HCC. Four types of feature extraction were used in this study: tumour intensity, shape features, textural features and wavelet features. A total of 662 radiomic features were extracted from manual and semi-automatic segmentations and compared using the intraclass correlation coefficient (ICC). Radiomic features extracted using semi-automatic segmentation with the flood-fill algorithm in 3D Slicer had significantly higher reproducibility (average ICC = 0.952 +/- 0.009, p < 0.05) compared with features extracted from manual segmentation (average ICC = 0.897 +/- 0.011, p > 0.05). Moreover, features extracted from semi-automatic segmentation were more robust than those from manual segmentation. This study shows that semi-automatic segmentation in 3D Slicer is a better alternative to manual segmentation, as it can produce more robust and reproducible radiomic features.
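Per-feature agreement between manual and semi-automatic segmentations, as quantified here, is typically computed with an ICC; a minimal sketch using the pingouin package on toy long-format data (the column layout is an assumption about how the measurements would be arranged):

```python
import pandas as pd
import pingouin as pg

# One radiomic feature measured on 30 patients under two segmentations
df = pd.DataFrame({
    "patient": list(range(30)) * 2,
    "segmentation": ["manual"] * 30 + ["semi_auto"] * 30,
    "value": [0.50 + 0.01 * i for i in range(30)] +
             [0.51 + 0.01 * i for i in range(30)],
})
icc = pg.intraclass_corr(data=df, targets="patient",
                         raters="segmentation", ratings="value")
print(icc[icc["Type"] == "ICC1"])   # single-rater ICC(1,1) row
```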

Predictive capabilities of statistical learning methods for lung nodule malignancy classification using diagnostic image features: an investigation using the Lung Image Database Consortium dataset

  • Hancock, Matthew C
  • Magnan, Jerry F
2017 Conference Proceedings, cited 0 times
Website

Exploration of a noninvasive radiomics classifier for breast cancer tumor microenvironment categorization and prognostic outcome prediction

  • Han, X.
  • Gong, Z.
  • Guo, Y.
  • Tang, W.
  • Wei, X.
Eur J Radiol 2024 Journal Article, cited 0 times
Website
RATIONALE AND OBJECTIVES: Breast cancer progression and treatment response are significantly influenced by the tumor microenvironment (TME). Traditional methods for assessing the TME are invasive, posing a challenge for patient care. This study introduces a non-invasive approach to TME classification by integrating radiomics and machine learning, aiming to predict the TME status using imaging data, thereby aiding in prognostic outcome prediction. MATERIALS AND METHODS: Utilizing multi-omics data from The Cancer Genome Atlas (TCGA) and The Cancer Imaging Archive (TCIA), this study employed the CIBERSORT and MCP-counter algorithms to analyze immune infiltration in breast cancer. A radiomics classifier was developed using a random forest algorithm, leveraging quantitative features extracted from intratumoral and peritumoral regions of Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) scans. The classifier's ability to predict diverse TME states, and the prognostic implications of those states, was evaluated using Kaplan-Meier survival curves. RESULTS: Three distinct TME states were identified using RNA-Seq data, each displaying unique prognostic and biological characteristics. Notably, patients with increased immune cell infiltration showed significantly improved prognoses (P < 0.05). The classifier, comprising 24 radiomic features, demonstrated high predictive accuracy (AUC of training set = 0.960, 95 % CI: 0.922, 0.997; AUC of testing set = 0.853, 95 % CI: 0.687, 1.000) in differentiating these TME states. Predictions from the classifier also correlated significantly with overall patient survival (P < 0.05). CONCLUSION: This study offers a detailed analysis of the complex TME states in breast cancer and presents a reliable, noninvasive radiomics classifier for TME assessment. The classifier's accurate prediction of TME status and its correlation with prognosis highlight its potential as a tool in personalized breast cancer treatment, paving the way for more individualized and less invasive therapeutic strategies.
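The two analysis steps described here (a random-forest radiomics classifier followed by Kaplan-Meier comparison of the predicted groups) map onto standard scikit-learn and lifelines calls; a minimal sketch on synthetic data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 24))              # 24 radiomic features per patient
y = (X[:, 0] > 0).astype(int)               # stand-in TME state label
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
group = clf.predict(X)                      # predicted TME group

time = rng.exponential(scale=np.where(group == 1, 60, 30))  # months
event = rng.integers(0, 2, size=200)        # 1 = death observed
for g in (0, 1):
    km = KaplanMeierFitter()
    km.fit(time[group == g], event[group == g], label=f"TME group {g}")
    print(km.median_survival_time_)
```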

Multimodal Brain Image Segmentation and Analysis with Neuromorphic Attention-Based Learning

  • Han, Woo-Sup
  • Han, Il Song
2020 Book Section, cited 0 times
Website
Automated image analysis of brain tumors from 3D Magnetic Resonance Imaging (MRI) is necessary for the diagnosis and treatment planning of the disease, because manual practices of segmenting tumors are time-consuming, expensive and can be subject to clinician diagnostic error. We propose a novel neuromorphic attention-based learner (NABL) model to train the deep neural network for tumor segmentation, a task challenged by typically small datasets and the difficulty of determining exact segmentation classes. The core idea is to introduce neuromorphic attention to guide the learning process of the deep neural network architecture, providing the highlighted region of interest for tumor segmentation. Neuromorphic convolution filters mimicking visual cortex neurons are adopted for the neuromorphic attention generation, transferred from neuromorphic convolutional neural networks (CNNs) pre-trained for adversarial imagery environments. Our pre-trained neuromorphic CNN has feature-extraction ability applicable to brain MRI data, verified by overall survival prediction without tumor segmentation training at the Brain Tumor Segmentation (BraTS) Challenge 2018. NABL provides an affordable solution for more accurate and faster image analysis of brain tumor segmentation by incorporating the typical encoder-decoder U-net architecture of CNNs. Experimental results illustrated the effectiveness and feasibility of our proposed method with flexible requirements of clinical diagnostic decision data, from segmentation to overall survival prediction. The overall survival prediction accuracy is 55% for predicting the overall survival period in days based on the BraTS 2019 validation dataset, and 48.6% based on the BraTS 2019 test dataset.

Deep Transfer Learning and Radiomics Feature Prediction of Survival of Patients with High-Grade Gliomas

  • Han, W.
  • Qin, L.
  • Bay, C.
  • Chen, X.
  • Yu, K. H.
  • Miskin, N.
  • Li, A.
  • Xu, X.
  • Young, G.
AJNR Am J Neuroradiol 2020 Journal Article, cited 16 times
Website
BACKGROUND AND PURPOSE: Patient survival in high-grade glioma remains poor, despite the recent developments in cancer treatment. As new chemo-, targeted molecular, and immune therapies emerge and show promising results in clinical trials, image-based methods for early prediction of treatment response are needed. Deep learning models that incorporate radiomics features promise to extract information from brain MR imaging that correlates with response and prognosis. We report initial production of a combined deep learning and radiomics model to predict overall survival in a clinically heterogeneous cohort of patients with high-grade gliomas. MATERIALS AND METHODS: Fifty patients with high-grade gliomas from our hospital and 128 patients with high-grade glioma from The Cancer Genome Atlas were included. For each patient, we calculated 348 hand-crafted radiomics features and 8192 deep features generated by a pretrained convolutional neural network. We then applied feature selection and Elastic Net-Cox modeling to differentiate patients into long- and short-term survivors. RESULTS: In the 50 patients with high-grade gliomas from our institution, the combined feature analysis framework classified the patients into long- and short-term survivor groups with a log-rank test P value < .001. In the 128 patients from The Cancer Genome Atlas, the framework classified patients into long- and short-term survivors with a log-rank test P value of .014. For the mixed cohort of 50 patients from our institution and 58 patients from The Cancer Genome Atlas, it yielded a log-rank test P value of .035. CONCLUSIONS: A deep learning model combining deep and radiomics features can dichotomize patients with high-grade gliomas into long- and short-term survivors.
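Elastic Net-Cox modeling of the kind used here can be approximated with lifelines' penalized Cox regression; a minimal sketch on synthetic data (penalty strengths are illustrative and the paper's feature-selection pipeline is not reproduced):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(178, 20)),
                  columns=[f"feat_{i}" for i in range(20)])
df["duration"] = rng.exponential(scale=24, size=178)   # months of follow-up
df["event"] = rng.integers(0, 2, size=178)             # 1 = death observed

# penalizer > 0 with 0 < l1_ratio < 1 gives an elastic-net penalty
cph = CoxPHFitter(penalizer=0.1, l1_ratio=0.5)
cph.fit(df, duration_col="duration", event_col="event")
print(cph.summary[["coef"]].head())
```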

MRI to MGMT: predicting methylation status in glioblastoma patients using convolutional recurrent neural networks

  • Han, Lichy
  • Kamdar, Maulik R.
2018 Conference Paper, cited 5 times
Website
Glioblastoma Multiforme (GBM), a malignant brain tumor, is among the most lethal of all cancers. Temozolomide is the primary chemotherapy treatment for patients diagnosed with GBM. The methylation status of the promoter or the enhancer regions of the O6− methylguanine methyltransferase (MGMT) gene may impact the efficacy and sensitivity of temozolomide, and hence may affect overall patient survival. Microscopic genetic changes may manifest as macroscopic morphological changes in the brain tumors that can be detected using magnetic resonance imaging (MRI), which can serve as noninvasive biomarkers for determining methylation of MGMT regulatory regions. In this research, we use a compendium of brain MRI scans of GBM patients collected from The Cancer Imaging Archive (TCIA) combined with methylation data from The Cancer Genome Atlas (TCGA) to predict the methylation state of the MGMT regulatory regions in these patients. Our approach relies on a bi-directional convolutional recurrent neural network architecture (CRNN) that leverages the spatial aspects of these 3-dimensional MRI scans. Our CRNN obtains an accuracy of 67% on the validation data and 62% on the test data, with precision and recall both at 67%, suggesting the existence of MRI features that may complement existing markers for GBM patient stratification and prognosis. We have additionally presented our model via a novel neural network visualization platform, which we have developed to improve interpretability of deep learning MRI-based classification models.

Locoregional Recurrence Prediction Using a Deep Neural Network of Radiological and Radiotherapy Images

  • Han, K.
  • Joung, J. F.
  • Han, M.
  • Sung, W.
  • Kang, Y. N.
J Pers Med 2022 Journal Article, cited 1 times
Website
Radiation therapy (RT) is an important and potentially curative modality for head and neck squamous cell carcinoma (HNSCC). Locoregional recurrence (LR) of HNSCC after RT ranges from 15% to 50%, depending on the primary site and stage. In addition, the 5-year survival rate of patients with LR is low. To classify high-risk patients who might develop LR, a deep learning model for predicting LR needs to be established. In this work, 157 patients with HNSCC who underwent RT were analyzed. Based on the National Cancer Institute's multi-institutional TCIA data set containing FDG-PET/CT/dose, a 3D deep learning model was proposed to predict LR without time-consuming segmentation or feature extraction. Our model achieved an average area under the curve (AUC) of 0.856. Adding clinical factors into the model improved the AUC to an average of 0.892, with the highest AUC of up to 0.974. The 3D deep learning model could perform individualized risk quantification of LR in patients with HNSCC without time-consuming tumor segmentation.

Multimodal Brain Image Analysis and Survival Prediction Using Neuromorphic Attention-Based Neural Networks

  • Han, Il Song
2021 Book Section, cited 0 times
Accurate analysis of brain tumors from 3D Magnetic Resonance Imaging (MRI) is necessary for diagnosis and treatment planning, and the recent developments using deep neural networks have become of great clinical importance because of their effective and accurate performance. The 3D nature of multimodal MRI demands large-scale memory and computation, while variants of the 3D U-net are widely adopted for medical image segmentation. In this study, a 2D U-net is applied to tumor segmentation and survival period prediction, inspired by the neuromorphic neural network. The new method introduces a neuromorphic saliency map for enhancing the image analysis. By mimicking the visual cortex and implementing neuromorphic preprocessing, the map of attention and saliency is generated and applied to improve accurate and fast medical image analysis performance. Through the BraTS 2020 challenge, the performance of the renewed neuromorphic algorithm is evaluated, and an overall review is conducted of the previous neuromorphic processing and other approaches. The overall survival prediction accuracy is 55.2% for the validation data and 43% for the test data.

A novel computer-aided detection system for pulmonary nodule identification in CT images

  • Han, Hao
  • Li, Lihong
  • Wang, Huafeng
  • Zhang, Hao
  • Moore, William
  • Liang, Zhengrong
2014 Conference Proceedings, cited 5 times
Website
Computer-aided detection (CADe) of pulmonary nodules from computed tomography (CT) scans is critical for assisting radiologists to identify lung lesions at an early stage. In this paper, we propose a novel approach for CADe of lung nodules using a two-stage vector quantization (VQ) scheme. The first-stage VQ aims to extract the lung from the chest volume, while the second-stage VQ is designed to extract initial nodule candidates (INCs) within the lung volume. Then rule-based expert filtering is employed to prune obvious false positives (FPs) from the INCs, and the commonly-used support vector machine (SVM) classifier is adopted to further reduce the FPs. The proposed system was validated on 100 CT scans randomly selected from the 262 scans that have at least one juxta-pleural nodule annotation in the publicly available database - the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). The two-stage VQ missed only 2 out of the 207 nodules at agreement level 1, and the INC detection for each scan took about 30 seconds on average. Expert filtering reduced FPs by more than 18 times while maintaining a sensitivity of 93.24%. As it is trivial to distinguish INCs attached to the pleural wall from those that are not, we investigated the feasibility of training different SVM classifiers to further reduce FPs from these two kinds of INCs. Experimental results indicated that SVM classification over the entire set of INCs was favored; the optimal operating point of our CADe system achieved a sensitivity of 89.4% at a specificity of 86.8%.
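Vector quantization over intensity-based feature vectors, the core of the two-stage scheme, can be emulated with k-means; a minimal sketch in which the feature construction is a simplification of the authors' self-adaptive VQ:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
ct_slice = rng.normal(size=(128, 128))         # stand-in CT slice (HU-like)

# Each pixel becomes a small feature vector: intensity + 3x3 neighborhood mean
pad = np.pad(ct_slice, 1, mode="edge")
neigh = sum(pad[dy:dy + 128, dx:dx + 128]
            for dy in range(3) for dx in range(3)) / 9
features = np.stack([ct_slice.ravel(), neigh.ravel()], axis=1)

# Stage-1-style VQ: cluster pixels into a few intensity classes
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
tissue_map = labels.reshape(128, 128)          # e.g., air / lung / soft tissue
```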

Vector quantization-based automatic detection of pulmonary nodules in thoracic CT images

  • Han, Hao
  • Li, Lihong
  • Han, Fangfang
  • Zhang, Hao
  • Moore, William
  • Liang, Zhengrong
2013 Conference Paper, cited 8 times
Website
Computer-aided detection (CADe) of pulmonary nodules from computed tomography (CT) scans is critical for assisting radiologists to identify lung lesions at an early stage. In this paper, we propose a novel CADe system for lung nodule detection based on a vector quantization (VQ) approach. Compared to existing CADe systems, the extraction of the lungs from the chest CT image is fully automatic, and the detection and segmentation of initial nodule candidates (INCs) within the lung volume is fast and accurate due to the self-adaptive nature of the VQ algorithm. False positives in the detected INCs are reduced by rule-based pruning in combination with a feature-based support vector machine classifier. We validate the proposed approach on 60 CT scans from a publicly available database. Preliminary results show that our CADe system is effective in detecting nodules, with a sensitivity of 90.53% at a specificity level of 86.00%.

Semi-supervised learning for an improved diagnosis of COVID-19 in CT images

  • Han, C. H.
  • Kim, M.
  • Kwak, J. T.
PLoS One 2021 Journal Article, cited 0 times
Website
Coronavirus disease 2019 (COVID-19) has been spread out all over the world. Although a real-time reverse-transcription polymerase chain reaction (RT-PCR) test has been used as a primary diagnostic tool for COVID-19, the utility of CT based diagnostic tools have been suggested to improve the diagnostic accuracy and reliability. Herein we propose a semi-supervised deep neural network for an improved detection of COVID-19. The proposed method utilizes CT images in a supervised and unsupervised manner to improve the accuracy and robustness of COVID-19 diagnosis. Both labeled and unlabeled CT images are employed. Labeled CT images are used for supervised learning. Unlabeled CT images are utilized for unsupervised learning in a way that the feature representations are invariant to perturbations in CT images. To systematically evaluate the proposed method, two COVID-19 CT datasets and three public CT datasets with no COVID-19 CT images are employed. In distinguishing COVID-19 from non-COVID-19 CT images, the proposed method achieves an overall accuracy of 99.83%, sensitivity of 0.9286, specificity of 0.9832, and positive predictive value (PPV) of 0.9192. The results are consistent between the COVID-19 challenge dataset and the public CT datasets. For discriminating between COVID-19 and common pneumonia CT images, the proposed method obtains 97.32% accuracy, 0.9971 sensitivity, 0.9598 specificity, and 0.9326 PPV. Moreover, the comparative experiments with respect to supervised learning and training strategies demonstrate that the proposed method is able to improve the diagnostic accuracy and robustness without exhaustive labeling. The proposed semi-supervised method, exploiting both supervised and unsupervised learning, facilitates an accurate and reliable diagnosis for COVID-19, leading to an improved patient care and management.
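The unsupervised branch described here rewards predictions that stay stable under perturbations of the CT input; one common formulation is a consistency loss between two augmented views of the same unlabeled batch. A minimal sketch, with a stand-in network and augmentation rather than the authors' exact formulation:

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, unlabeled, augment):
    """MSE between predictions on two random perturbations of one CT batch."""
    p1 = torch.softmax(model(augment(unlabeled)), dim=1)
    p2 = torch.softmax(model(augment(unlabeled)), dim=1)
    return F.mse_loss(p1, p2)

# Stand-in classifier and noise augmentation for 8x8 single-channel patches
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64, 2))
augment = lambda x: x + 0.05 * torch.randn_like(x)
loss = consistency_loss(model, torch.randn(8, 1, 8, 8), augment)
# total loss = supervised cross-entropy + lambda * consistency on unlabeled CTs
```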

Automatic zonal segmentation of the prostate from 2D and 3D T2-weighted MRI and evaluation for clinical use

  • Hamzaoui, D.
  • Montagne, S.
  • Renard-Penna, R.
  • Ayache, N.
  • Delingette, H.
J Med Imaging (Bellingham) 2022 Journal Article, cited 0 times
Website
Purpose: An accurate zonal segmentation of the prostate is required for prostate cancer (PCa) management with MRI. Approach: The aim of this work is to present UFNet, a deep learning-based method for automatic zonal segmentation of the prostate from T2-weighted (T2w) MRI. It takes into account the image anisotropy, includes both spatial and channelwise attention mechanisms, and uses loss functions to enforce prostate partition. The method was applied on a private multicentric three-dimensional T2w MRI dataset and on the public two-dimensional T2w MRI dataset ProstateX. To assess the model performance, the structures segmented by the algorithm on the private dataset were compared with those obtained by seven radiologists of various experience levels. Results: On the private dataset, we obtained a Dice score (DSC) of 93.90 +/- 2.85 for the whole gland (WG), 91.00 +/- 4.34 for the transition zone (TZ), and 79.08 +/- 7.08 for the peripheral zone (PZ). Results were significantly better than those of other compared networks (p-value < 0.05). On ProstateX, we obtained a DSC of 90.90 +/- 2.94 for WG, 86.84 +/- 4.33 for TZ, and 78.40 +/- 7.31 for PZ. These results are similar to state-of-the-art results and, on the private dataset, are coherent with those obtained by radiologists. Zonal locations and sectorial positions of lesions annotated by radiologists were also preserved. Conclusions: Deep learning-based methods can provide an accurate zonal segmentation of the prostate, leading to a consistent zonal location and sectorial position of lesions, and therefore can be used as a helping tool for PCa diagnosis.

4D dosimetric-blood flow model: impact of prolonged fraction delivery times of IMRT on the dose to the circulating lymphocytes

  • Hammi, A.
Phys Med Biol 2023 Journal Article, cited 0 times
Website
To investigate the impact of prolonged fraction delivery of modern intensity-modulated radiotherapy (IMRT) on the accumulated dose to the circulating blood during the course of fractionated radiation therapy. We have developed a 4D dosimetric blood flow model (d-BFM) capable of continuously simulating the blood flow through the entire body of the cancer patient and scoring the accumulated dose to blood particles (BPs). We developed a semi-automatic approach that enables us to map the tortuous blood vessels of the surficial brain of individual patients directly from standard magnetic resonance imaging data of the patient. For the rest of the body, we developed a fully-fledged dynamic blood flow transfer model according to the International Commission on Radiological Protection human reference. We proposed a methodology enabling us to design a personalized d-BFM, such that it can be tailored for individual patients by adopting intra- and inter-subject variations. The entire circulatory model tracks over 43 million BPs and has a time resolution of Δt = 10^-3 s. A dynamic dose delivery model was implemented to emulate the spatial and temporal time-varying pattern of the dose rate during the step-and-shoot mode of IMRT. We evaluated how different configurations of the dose rate delivery, and a time prolongation of fraction delivery, may impact the dose received by the circulating blood (CB). Our calculations indicate that prolonging the fraction treatment time from 7 to 18 min will increase the blood volume receiving any dose (V_D>0Gy) from 36.1% to 81.5% during one single fraction. The results indicate that increasing the segment number has only a negligible effect on the irradiated blood volume when the fraction time is kept identical. We developed a novel concept of a customized 4D d-BFM that can be tailored to the hemodynamics of individual patients to quantify dose to the CB during fractionated radiotherapy. The prolonged fraction delivery and the variability of the instantaneous dose rate have a significant impact on the accumulated dose distribution during IMRT treatments. This impact should be considered during IMRT treatment design to reduce RT-induced immunosuppressive effects.

Glioma Classification Using Multimodal Radiology and Histology Data

  • Hamidinekoo, Azam
  • Pieciak, Tomasz
  • Afzali, Maryam
  • Akanyeti, Otar
  • Yuan, Yinyin
2021 Book Section, cited 0 times
Gliomas are brain tumours with a high mortality rate. There are various grades and sub-types of this tumour, and the treatment procedure varies accordingly. Clinicians and oncologists diagnose and categorise these tumours based on visual inspection of radiology and histology data. However, this process can be time-consuming and subjective. Computer-assisted methods can help clinicians to make better and faster decisions. In this paper, we propose a pipeline for automatic classification of gliomas into three sub-types: oligodendroglioma, astrocytoma, and glioblastoma, using both radiology and histopathology images. The proposed approach implements distinct classification models for the radiographic and histologic modalities and combines them through an ensemble method. The classification algorithm initially carries out tile-level (for histology) and slice-level (for radiology) classification via a deep learning method; then tile/slice-level latent features are combined for a whole-slide and whole-volume sub-type prediction. The classification algorithm was evaluated using the data set provided in the CPM-RadPath 2020 challenge. The proposed pipeline achieved an F1-score of 0.886, a Cohen's kappa score of 0.811 and a balanced accuracy of 0.860. The ability of the proposed model for end-to-end learning of diverse features enables it to give a comparable prediction of glioma tumour sub-types.

A computational model for texture analysis in images with a reaction-diffusion based filter

  • Hamid, Lefraich
  • Fahim, Houda
  • Zirhem, Mariam
  • Alaa, Nour Eddine
Journal of Mathematical Modeling 2021 Journal Article, cited 0 times
Website
As one of the most important tasks in image processing, texture analysis is related to a class of mathematical models that characterize the spatial variations of an image. In this paper, in order to extract features of interest, we propose a reaction-diffusion-based model that uses a variational approach. We first describe the mathematical model and then suggest an efficient numerical scheme to simulate it accurately. We then compare our method with findings from the literature, and conclude our analysis with a number of experimental results showing the robustness and performance of our algorithm.

Brain Tumor Synthetic Segmentation in 3D Multimodal MRI Scans

  • Hamghalam, Mohammad
  • Lei, Baiying
  • Wang, Tianfu
2020 Book Section, cited 0 times
The magnetic resonance (MR) analysis of brain tumors is widely used for diagnosis and examination of tumor subregions. The overlapping area among the intensity distribution of healthy, enhancing, non-enhancing, and edema regions makes the automatic segmentation a challenging task. Here, we show that a convolutional neural network trained on high-contrast images can transform the intensity distribution of brain lesions in its internal subregions. Specifically, a generative adversarial network (GAN) is extended to synthesize high-contrast images. A comparison of these synthetic images and real images of brain tumor tissue in MR scans showed significant segmentation improvement and decreased the number of real channels for segmentation. The synthetic images are used as a substitute for real channels and can bypass real modalities in the multimodal brain tumor segmentation framework. Segmentation results on BraTS 2019 dataset demonstrate that our proposed approach can efficiently segment the tumor areas. In the end, we predict patient survival time based on volumetric features of the tumor subregions as well as the age of each case through several regression models.

Convolutional 3D to 2D Patch Conversion for Pixel-Wise Glioma Segmentation in MRI Scans

  • Hamghalam, Mohammad
  • Lei, Baiying
  • Wang, Tianfu
2020 Book Section, cited 0 times
Structural magnetic resonance imaging (MRI) has been widely utilized for analysis and diagnosis of brain diseases. Automatic segmentation of brain tumors is a challenging task for computer-aided diagnosis due to low-tissue contrast in the tumor subregions. To overcome this, we devise a novel pixel-wise segmentation framework through a convolutional 3D to 2D MR patch conversion model to predict class labels of the central pixel in the input sliding patches. Precisely, we first extract 3D patches from each modality to calibrate slices through the squeeze and excitation (SE) block. Then, the output of the SE block is fed directly into subsequent bottleneck layers to reduce the number of channels. Finally, the calibrated 2D slices are concatenated to obtain multimodal features through a 2D convolutional neural network (CNN) for prediction of the central pixel. In our architecture, both local inter-slice and global intra-slice features are jointly exploited to predict class label of the central voxel in a given patch through the 2D CNN classifier. We implicitly apply all modalities through trainable parameters to assign weights to the contributions of each sequence for segmentation. Experimental results on the segmentation of brain tumors in multimodal MRI scans (BraTS’19) demonstrate that our proposed method can efficiently segment the tumor regions.
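The slice-recalibration step the abstract describes rests on a squeeze-and-excitation block. Below is a minimal PyTorch sketch in which the depth of a 3D patch is treated as the channel axis so each slice receives a learned weight; layer sizes and the reduction ratio are assumptions, not the authors' exact architecture.

    import torch
    import torch.nn as nn

    class SEBlock(nn.Module):
        def __init__(self, channels: int, reduction: int = 4):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x):                  # x: (batch, channels, H, W)
            w = x.mean(dim=(2, 3))             # squeeze: global average pool
            w = self.fc(w)                     # excitation: per-channel weights
            return x * w[:, :, None, None]     # recalibrate feature maps

    # Treat the depth of a 3D patch as channels so each slice gets a weight:
    patch = torch.randn(8, 16, 64, 64)         # 16 slices of a 64x64 patch
    print(SEBlock(16)(patch).shape)            # torch.Size([8, 16, 64, 64])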

Multi-faceted computational assessment of risk and progression in oligodendroglioma implicates NOTCH and PI3K pathways

  • Halani, Sameer H
  • Yousefi, Safoora
  • Vega, Jose Velazquez
  • Rossi, Michael R
  • Zhao, Zheng
  • Amrollahi, Fatemeh
  • Holder, Chad A
  • Baxter-Stoltzfus, Amelia
  • Eschbacher, Jennifer
  • Griffith, Brent
NPJ precision oncology 2018 Journal Article, cited 0 times
Website

Impact of harmonization on the reproducibility of MRI radiomic features when using different scanners, acquisition parameters, and image pre-processing techniques: a phantom study

  • Hajianfar, G.
  • Hosseini, S. A.
  • Bagherieh, S.
  • Oveisi, M.
  • Shiri, I.
  • Zaidi, H.
2024 Journal Article, cited 0 times
Website
This study investigated the impact of ComBat harmonization on the reproducibility of radiomic features extracted from magnetic resonance images (MRI) acquired on different scanners, using various data acquisition parameters and multiple image pre-processing techniques on a dedicated MRI phantom. Four scanners were used to acquire MRI of a nonanatomic phantom as part of the TCIA RIDER database. In fast spin-echo inversion recovery (IR) sequences, several inversion times were employed: 50, 100, 250, 500, 750, 1000, 1500, 2000, 2500, and 3000 ms. In addition, a 3D fast spoiled gradient recalled echo (FSPGR) sequence was used to investigate several flip angles (FA): 2, 5, 10, 15, 20, 25, and 30 degrees. Nineteen phantom compartments were manually segmented. Different approaches were used to pre-process each image: bin discretization, wavelet filter, Laplacian of Gaussian, logarithm, square, square root, and gradient. Overall, 92 first-, second-, and higher-order statistical radiomic features were extracted. ComBat harmonization was then applied to the extracted radiomic features. Finally, the intraclass correlation coefficient (ICC) and Kruskal-Wallis (KW) tests were implemented to assess the robustness of the radiomic features. Before and after ComBat harmonization, across the different image pre-processing techniques, the number of non-significant features in the KW test ranged between 0-5 and 29-74 for the various scanners, 31-91 and 37-92 for the repeated (test-retest) acquisitions, 0-33 and 34-90 for the FAs, and 3-68 and 65-89 for the IRs, respectively. The number of features with ICC over 90% ranged between 0-8 and 6-60 for the various scanners, 11-75 and 17-80 for the repeated acquisitions, 3-83 and 9-84 for the FAs, and 3-49 and 3-63 for the IRs, respectively. The use of various scanners, IRs, and FAs has a great impact on radiomic features; however, the majority of scanner-robust features are also robust to IR and FA. Among the effective parameters in MR images, repeated tests on a single scanner have a negligible impact on radiomic features. Different scanners and acquisition parameters with various image pre-processing techniques might affect radiomic features to a large extent. ComBat harmonization might significantly impact the reproducibility of MRI radiomic features.
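The robustness criterion used here, ICC(1,1), comes from a one-way random-effects ANOVA. A small numpy sketch, assuming rows are phantom compartments and columns are repeated measurements (e.g., scanners):

    import numpy as np

    def icc_1_1(x: np.ndarray) -> float:
        n, k = x.shape                     # n targets, k measurements each
        grand = x.mean()
        ms_between = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)
        ms_within = np.sum((x - x.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
        return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

    rng = np.random.default_rng(0)
    feature = rng.normal(size=(19, 4))     # 19 compartments, 4 scanners
    feature += np.arange(19)[:, None]      # strong compartment effect
    print(f"ICC(1,1) = {icc_1_1(feature):.3f}")   # close to 1 => robust feature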

Time-to-event overall survival prediction in glioblastoma multiforme patients using magnetic resonance imaging radiomics

  • Hajianfar, G.
  • Haddadi Avval, A.
  • Hosseini, S. A.
  • Nazari, M.
  • Oveisi, M.
  • Shiri, I.
  • Zaidi, H.
2023 Journal Article, cited 0 times
Website
PURPOSE: Glioblastoma Multiforme (GBM) represents the predominant aggressive primary tumor of the brain with short overall survival (OS) time. We aim to assess the potential of radiomic features in predicting the time-to-event OS of patients with GBM using machine learning (ML) algorithms. MATERIALS AND METHODS: One hundred nineteen patients with GBM, who had T1-weighted contrast-enhanced and T2-FLAIR MRI sequences, along with clinical data and survival time, were enrolled. Image preprocessing methods included 64 bin discretization, Laplacian of Gaussian (LOG) filters with three Sigma values and eight variations of Wavelet Transform. Images were then segmented, followed by the extraction of 1212 radiomic features. Seven feature selection (FS) methods and six time-to-event ML algorithms were utilized. The combination of preprocessing, FS, and ML algorithms (12 x 7 x 6 = 504 models) was evaluated by multivariate analysis. RESULTS: Our multivariate analysis showed that the best prognostic FS/ML combinations are the Mutual Information (MI)/Cox Boost, MI/Generalized Linear Model Boosting (GLMB) and MI/Generalized Linear Model Network (GLMN), all of which were done via the LOG (Sigma = 1 mm) preprocessing method (C-index = 0.77). The LOG filter with Sigma = 1 mm preprocessing method, MI, GLMB and GLMN achieved significantly higher C-indices than other preprocessing, FS, and ML methods (all p values < 0.05, mean C-indices of 0.65, 0.70, and 0.64, respectively). CONCLUSION: ML algorithms are capable of predicting the time-to-event OS of patients using MRI-based radiomic and clinical features. MRI-based radiomics analysis in combination with clinical variables might appear promising in assisting clinicians in the survival prediction of patients with GBM. Further research is needed to establish the applicability of radiomics in the management of GBM in the clinic.
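Harrell's C-index, the metric used to compare the FS/ML combinations, can be computed with lifelines; in this hedged sketch the survival times and model output are simulated stand-ins.

    import numpy as np
    from lifelines.utils import concordance_index

    rng = np.random.default_rng(1)
    survival_days = rng.exponential(400, size=119)    # observed OS times
    event_observed = rng.integers(0, 2, size=119)     # 1 = death, 0 = censored
    predicted = survival_days + rng.normal(0, 200, 119)   # toy model output

    # concordance_index expects scores where higher means longer survival.
    print(concordance_index(survival_days, predicted, event_observed))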

Potential Added Value of PET/CT Radiomics for Survival Prognostication beyond AJCC 8th Edition Staging in Oropharyngeal Squamous Cell Carcinoma

  • Haider, S. P.
  • Zeevi, T.
  • Baumeister, P.
  • Reichel, C.
  • Sharaf, K.
  • Forghani, R.
  • Kann, B. H.
  • Judson, B. L.
  • Prasad, M. L.
  • Burtness, B.
  • Mahajan, A.
  • Payabvash, S.
Cancers (Basel) 2020 Journal Article, cited 2 times
Website
Accurate risk-stratification can facilitate precision therapy in oropharyngeal squamous cell carcinoma (OPSCC). We explored the potential added value of baseline positron emission tomography (PET)/computed tomography (CT) radiomic features for prognostication and risk stratification of OPSCC beyond the American Joint Committee on Cancer (AJCC) 8th edition staging scheme. Using institutional and publicly available datasets, we included OPSCC patients with known human papillomavirus (HPV) status, without baseline distant metastasis and treated with curative intent. We extracted 1037 PET and 1037 CT radiomic features quantifying lesion shape, imaging intensity, and texture patterns from primary tumors and metastatic cervical lymph nodes. Utilizing random forest algorithms, we devised novel machine-learning models for OPSCC progression-free survival (PFS) and overall survival (OS) using "radiomics" features, "AJCC" variables, and the "combined" set as input. We designed both single- (PET or CT) and combined-modality (PET/CT) models. Harrell's C-index quantified survival model performance; risk stratification was evaluated in Kaplan-Meier analysis. A total of 311 patients were included. In HPV-associated OPSCC, the best "radiomics" model achieved an average C-index +/- standard deviation of 0.62 +/- 0.05 (p = 0.02) for PFS prediction, compared to 0.54 +/- 0.06 (p = 0.32) utilizing "AJCC" variables. Radiomics-based risk-stratification of HPV-associated OPSCC was significant for PFS and OS. Similar trends were observed in HPV-negative OPSCC. In conclusion, radiomics imaging features extracted from pre-treatment PET/CT may provide complementary information to the current AJCC staging scheme for survival prognostication and risk-stratification of HPV-associated OPSCC.

Prediction of post-radiotherapy locoregional progression in HPV-associated oropharyngeal squamous cell carcinoma using machine-learning analysis of baseline PET/CT radiomics

  • Haider, S. P.
  • Sharaf, K.
  • Zeevi, T.
  • Baumeister, P.
  • Reichel, C.
  • Forghani, R.
  • Kann, B. H.
  • Petukhova, A.
  • Judson, B. L.
  • Prasad, M. L.
  • Liu, C.
  • Burtness, B.
  • Mahajan, A.
  • Payabvash, S.
Translational Oncology 2020 Journal Article, cited 0 times
Website
Locoregional failure remains a therapeutic challenge in oropharyngeal squamous cell carcinoma (OPSCC). We aimed to devise novel objective imaging biomarkers for prediction of locoregional progression in HPV-associated OPSCC. Following manual lesion delineation, 1037 PET and 1037 CT radiomic features were extracted from each primary tumor and metastatic cervical lymph node on baseline PET/CT scans. Applying random forest machine-learning algorithms, we generated radiomic models for censoring-aware locoregional progression prognostication (evaluated by Harrell's C-index) and risk stratification (evaluated in Kaplan-Meier analysis). A total of 190 patients were included; an optimized model yielded a median (interquartile range) C-index of 0.76 (0.66-0.81; p=0.01) in prognostication of locoregional progression, using combined PET/CT radiomic features from primary tumors. Radiomics-based risk stratification reliably identified patients at risk for locoregional progression within 2-, 3-, 4-, and 5-year follow-up intervals, with log-rank p-values of p=0.003, p=0.001, p=0.02, p=0.006 in Kaplan-Meier analysis, respectively. Our results suggest PET/CT radiomic biomarkers can predict post-radiotherapy locoregional progression in HPV-associated OPSCC. Pending validation in large, independent cohorts, such objective biomarkers may improve patient selection for treatment de-intensification trials in this prognostically favorable OPSCC entity, and eventually facilitate personalized therapy.
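The risk-stratification evaluation described here, Kaplan-Meier curves for median-split risk groups compared with a log-rank test, can be sketched with lifelines (scores and outcomes below are simulated):

    import numpy as np
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import logrank_test

    rng = np.random.default_rng(2)
    score = rng.normal(size=190)                    # toy radiomic risk scores
    high = score > np.median(score)                 # median split into groups
    time = rng.exponential(np.where(high, 700.0, 1000.0))  # riskier fails earlier
    event = rng.integers(0, 2, size=190).astype(bool)

    kmf_hi, kmf_lo = KaplanMeierFitter(), KaplanMeierFitter()
    kmf_hi.fit(time[high], event[high], label="high risk")
    kmf_lo.fit(time[~high], event[~high], label="low risk")
    print(kmf_hi.median_survival_time_, kmf_lo.median_survival_time_)

    res = logrank_test(time[high], time[~high], event[high], event[~high])
    print(f"log-rank p = {res.p_value:.4g}")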

PET/CT radiomics signature of human papilloma virus association in oropharyngeal squamous cell carcinoma

  • Haider, S. P.
  • Mahajan, A.
  • Zeevi, T.
  • Baumeister, P.
  • Reichel, C.
  • Sharaf, K.
  • Forghani, R.
  • Kucukkaya, A. S.
  • Kann, B. H.
  • Judson, B. L.
  • Prasad, M. L.
  • Burtness, B.
  • Payabvash, S.
Eur J Nucl Med Mol Imaging 2020 Journal Article, cited 1 times
Website
PURPOSE: To devise, validate, and externally test PET/CT radiomics signatures for human papillomavirus (HPV) association in primary tumors and metastatic cervical lymph nodes of oropharyngeal squamous cell carcinoma (OPSCC). METHODS: We analyzed 435 primary tumors (326 for training, 109 for validation) and 741 metastatic cervical lymph nodes (518 for training, 223 for validation) using FDG-PET and non-contrast CT from a multi-institutional and multi-national cohort. Utilizing 1037 radiomics features per imaging modality and per lesion, we trained, optimized, and independently validated machine-learning classifiers for prediction of HPV association in primary tumors, lymph nodes, and combined "virtual" volumes of interest (VOI). PET-based models were additionally validated in an external cohort. RESULTS: Single-modality PET and CT final models yielded similar classification performance without significant difference in independent validation; however, models combining PET and CT features outperformed single-modality PET- or CT-based models, with receiver operating characteristic area under the curve (AUC) of 0.78, and 0.77 for prediction of HPV association using primary tumor lesion features, in cross-validation and independent validation, respectively. In the external PET-only validation dataset, final models achieved an AUC of 0.83 for a virtual VOI combining primary tumor and lymph nodes, and an AUC of 0.73 for a virtual VOI combining all lymph nodes. CONCLUSION: We found that PET-based radiomics signatures yielded similar classification performance to CT-based models, with potential added value from combining PET- and CT-based radiomics for prediction of HPV status. While our results are promising, radiomics signatures may not yet substitute tissue sampling for clinical decision-making.

Radiomics feature reproducibility under inter-rater variability in segmentations of CT images

  • Haarburger, C.
  • Muller-Franzes, G.
  • Weninger, L.
  • Kuhl, C.
  • Truhn, D.
  • Merhof, D.
2020 Journal Article, cited 0 times
Website
Identifying image features that are robust with respect to segmentation variability is a tough challenge in radiomics. So far, this problem has mainly been tackled in test-retest analyses. In this work we analyse radiomics feature reproducibility in two phases: first with manual segmentations provided by four expert readers and second with probabilistic automated segmentations using a recently developed neural network (PHiseg). We test feature reproducibility on three publicly available datasets of lung, kidney and liver lesions. We find consistent results both over manual and automated segmentations in all three datasets and show that there are subsets of radiomic features which are robust against segmentation variability and other radiomic features which are prone to poor reproducibility under differing segmentations. By providing a detailed analysis of robustness of the most common radiomics features across several datasets, we envision that more reliable and reproducible radiomic models can be built in the future based on this work.

A Fully Automatic Procedure for Brain Tumor Segmentation from Multi-Spectral MRI Records Using Ensemble Learning and Atlas-Based Data Enhancement

  • Győrfi, Ágnes
  • Szilágyi, László
  • Kovács, Levente
Applied Sciences 2021 Journal Article, cited 0 times
Website
The accurate and reliable segmentation of gliomas from magnetic resonance image (MRI) data has an important role in diagnosis, intervention planning, and monitoring the tumor's evolution during and after therapy. Segmentation faces serious anatomical obstacles, such as the great variety in tumor location, size, shape, and appearance, and the displacement of normal tissues. Other phenomena, such as intensity inhomogeneity and the lack of a standard intensity scale in MRI data, represent further difficulties. This paper proposes a fully automatic brain tumor segmentation procedure that attempts to handle all the above problems. Built on the MRI data provided by the MICCAI Brain Tumor Segmentation (BraTS) Challenges, the procedure consists of three main phases. The first, pre-processing phase prepares the MRI data for supervised classification by attempting to fix missing data, suppressing intensity inhomogeneity, normalizing the histograms of the observed data channels, generating additional morphological, gradient-based, and Gabor-wavelet features, and optionally applying atlas-based data enhancement. The second phase accomplishes the main classification process using ensembles of binary decision trees and provides an initial, intermediary label for each pixel of the test records. The last phase re-evaluates these intermediary labels using a random forest classifier, then deploys a spatial region-growing-based structural validation of suspected tumors, thus achieving a high-quality final segmentation result. The accuracy of the procedure is evaluated using the multi-spectral MRI records of the BraTS 2015 and BraTS 2019 training data sets. The procedure achieves high-quality segmentation results, characterized by average Dice similarity scores of up to 86%.

Optimising delineation accuracy of tumours in PET for radiotherapy planning using blind deconvolution

  • Guvenis, A
  • Koc, A
Radiation Protection Dosimetry 2015 Journal Article, cited 3 times
Website
Positron emission tomography (PET) imaging has been proven to be useful in radiotherapy planning for the determination of the metabolically active regions of tumours. Delineation of tumours, however, is a difficult task in part due to high noise levels and the partial volume effects originating mainly from the low camera resolution. The goal of this work is to study the effect of blind deconvolution on tumour volume estimation accuracy for different computer-aided contouring methods. The blind deconvolution estimates the point spread function (PSF) of the imaging system in an iterative manner in a way that the likelihood of the given image being the convolution output is maximised. In this way, the PSF of the imaging system does not need to be known. Data were obtained from a NEMA NU-2 IQ-based phantom with a GE DSTE-16 PET/CT scanner. The artificial tumour diameters were 13, 17, 22, 28 and 37 mm with a target/background ratio of 4:1. The tumours were delineated before and after blind deconvolution. Student's two-tailed paired t-test showed a significant decrease in volume estimation error (p < 0.001) when blind deconvolution was used in conjunction with computer-aided delineation methods. A manual delineation confirmation demonstrated an improvement from 26 to 16 % for the artificial tumour of size 37 mm while an improvement from 57 to 15 % was noted for the small tumour of 13 mm. Therefore, it can be concluded that blind deconvolution of reconstructed PET images may be used to increase tumour delineation accuracy.
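Blind deconvolution of this kind can be sketched as an alternating Richardson-Lucy scheme that updates the image and the PSF in turn so that their convolution explains the observed image; the PSF size and iteration counts below are illustrative assumptions, not the paper's implementation.

    import numpy as np
    from scipy.signal import fftconvolve

    def _center_crop(a, shape):
        # crop the central `shape` window out of array `a`
        return a[tuple(slice((n - m) // 2, (n - m) // 2 + m)
                       for n, m in zip(a.shape, shape))]

    def blind_richardson_lucy(observed, psf_size=9, n_iter=25):
        img = np.full(observed.shape, observed.mean())
        psf = np.ones((psf_size, psf_size)) / psf_size ** 2
        for _ in range(n_iter):
            # multiplicative PSF update with the current image estimate
            ratio = observed / (fftconvolve(img, psf, mode="same") + 1e-12)
            psf *= _center_crop(
                fftconvolve(ratio, img[::-1, ::-1], mode="same"), psf.shape)
            psf = np.clip(psf, 0, None)
            psf /= psf.sum()
            # multiplicative image update with the current PSF estimate
            ratio = observed / (fftconvolve(img, psf, mode="same") + 1e-12)
            img *= fftconvolve(ratio, psf[::-1, ::-1], mode="same")
        return img, psf

    blurred = np.random.rand(64, 64)       # stand-in for a reconstructed PET slice
    restored, psf_hat = blind_richardson_lucy(blurred)
    print(restored.shape, psf_hat.sum())   # image restored; PSF normalized to 1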

Somatic mutations associated with MRI-derived volumetric features in glioblastoma

  • Gutman, David A
  • Dunn Jr, William D
  • Grossmann, Patrick
  • Cooper, Lee AD
  • Holder, Chad A
  • Ligon, Keith L
  • Alexander, Brian M
  • Aerts, Hugo JWL
Neuroradiology 2015 Journal Article, cited 45 times
Website
INTRODUCTION: MR imaging can noninvasively visualize tumor phenotype characteristics at the macroscopic level. Here, we investigated whether somatic mutations are associated with and can be predicted by MRI-derived tumor imaging features of glioblastoma (GBM). METHODS: Seventy-six GBM patients were identified from The Cancer Imaging Archive for whom preoperative T1-contrast (T1C) and T2-FLAIR MR images were available. For each tumor, a set of volumetric imaging features and their ratios were measured, including necrosis, contrast enhancing, and edema volumes. Imaging genomics analysis assessed the association of these features with mutation status of nine genes frequently altered in adult GBM. Finally, area under the curve (AUC) analysis was conducted to evaluate the predictive performance of imaging features for mutational status. RESULTS: Our results demonstrate that MR imaging features are strongly associated with mutation status. For example, TP53-mutated tumors had significantly smaller contrast enhancing and necrosis volumes (p = 0.012 and 0.017, respectively) and RB1-mutated tumors had significantly smaller edema volumes (p = 0.015) compared to wild-type tumors. MRI volumetric features were also found to significantly predict mutational status. For example, AUC analysis results indicated that TP53, RB1, NF1, EGFR, and PDGFRA mutations could each be significantly predicted by at least one imaging feature. CONCLUSION: MRI-derived volumetric features are significantly associated with and predictive of several cancer-relevant, drug-targetable DNA mutations in glioblastoma. These results may shed insight into unique growth characteristics of individual tumors at the macroscopic level resulting from molecular events as well as increase the use of noninvasive imaging in personalized medicine.
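The AUC analysis described here amounts to testing how well a single volumetric feature separates mutant from wild-type tumors; a scikit-learn sketch with simulated feature values and labels:

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(3)
    mutated = rng.integers(0, 2, size=76)            # e.g., TP53 status (toy)
    enhancing_volume = rng.lognormal(3.0, 0.5, 76) * (1 - 0.3 * mutated)

    # smaller enhancing volume in mutants => invert the feature so AUC > 0.5
    print(roc_auc_score(mutated, -enhancing_volume))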

Web based tools for visualizing imaging data and development of XNATView, a zero footprint image viewer

  • Gutman, David A
  • Dunn Jr, William D
  • Cobb, Jake
  • Stoner, Richard M
  • Kalpathy-Cramer, Jayashree
  • Erickson, Bradley
Frontiers in Neuroinformatics 2014 Journal Article, cited 12 times
Website
Advances in web technologies now allow direct visualization of imaging data sets without necessitating the download of large file sets or the installation of software. This allows centralization of file storage and facilitates image review and analysis. XNATView is a lightweight framework recently developed in our lab to visualize DICOM images stored in The Extensible Neuroimaging Archive Toolkit (XNAT). It consists of a PyXNAT-based framework to wrap around the REST application programming interface (API) and query the data in XNAT. XNATView was developed to simplify quality assurance, help organize imaging data, and facilitate data sharing for intra- and inter-laboratory collaborations. Its zero-footprint design allows the user to connect to XNAT from a web browser, navigate through projects, experiments, and subjects, and view DICOM images with accompanying metadata all within a single viewing instance.
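A hedged sketch of the PyXNAT pattern such a framework wraps, assuming the standard Interface entry point; the server URL, credentials, and project ID are placeholders:

    from pyxnat import Interface

    # Connect to an XNAT server over its REST API (placeholder credentials).
    central = Interface(server="https://central.xnat.org",
                        user="USERNAME", password="PASSWORD")

    # Walk the project hierarchy the way a viewer would.
    for project_id in central.select.projects().get():
        print(project_id)

    # Drill down to one project's subjects:
    subject_ids = central.select.project("PROJECT_ID").subjects().get()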

MR Imaging Predictors of Molecular Profile and Survival: Multi-institutional Study of the TCGA Glioblastoma Data Set

  • Gutman, David A
  • Cooper, Lee A D
  • Hwang, Scott N
  • Holder, Chad A
  • Gao, Jingjing
  • Aurora, Tarun D
  • Dunn, William D Jr
  • Scarpace, Lisa
  • Mikkelsen, Tom
  • Jain, Rajan
  • Wintermark, Max
  • Jilwan, Manal
  • Raghavan, Prashant
  • Huang, Erich
  • Clifford, Robert J
  • Mongkolwat, Pattanasak
  • Kleper, Vladimir
  • Freymann, John
  • Kirby, Justin
  • Zinn, Pascal O
  • Moreno, Carlos S
  • Jaffe, Carl
  • Colen, Rivka
  • Rubin, Daniel L
  • Saltz, Joel
  • Flanders, Adam
  • Brat, Daniel J
Radiology 2013 Journal Article, cited 217 times
Website
PURPOSE: To conduct a comprehensive analysis of radiologist-made assessments of glioblastoma (GBM) tumor size and composition by using a community-developed controlled terminology of magnetic resonance (MR) imaging visual features as they relate to genetic alterations, gene expression class, and patient survival. MATERIALS AND METHODS: Because all study patients had been previously deidentified by the Cancer Genome Atlas (TCGA), a publicly available data set that contains no linkage to patient identifiers and that is HIPAA compliant, no institutional review board approval was required. Presurgical MR images of 75 patients with GBM with genetic data in the TCGA portal were rated by three neuroradiologists for size, location, and tumor morphology by using a standardized feature set. Interrater agreements were analyzed by using the Krippendorff alpha statistic and intraclass correlation coefficient. Associations between survival, tumor size, and morphology were determined by using multivariate Cox regression models; associations between imaging features and genomics were studied by using the Fisher exact test. RESULTS: Interrater analysis showed significant agreement in terms of contrast material enhancement, nonenhancement, necrosis, edema, and size variables. Contrast-enhanced tumor volume and longest axis length of tumor were strongly associated with poor survival (respectively, hazard ratio: 8.84, P = .0253, and hazard ratio: 1.02, P = .00973), even after adjusting for Karnofsky performance score (P = .0208). Proneural class GBM had significantly lower levels of contrast enhancement (P = .02) than other subtypes, while mesenchymal GBM showed lower levels of nonenhanced tumor (P < .01). CONCLUSION: This analysis demonstrates a method for consistent image feature annotation capable of reproducibly characterizing brain tumors; this study shows that radiologists' estimations of macroscopic imaging features can be combined with genetic alterations and gene expression subtypes to provide deeper insight to the underlying biologic properties of GBM subsets.
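The multivariate Cox regression used to relate imaging features to survival can be sketched with lifelines; the data frame below is simulated stand-in data, not the TCGA cohort.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(4)
    df = pd.DataFrame({
        "survival_days": rng.exponential(400, 75),
        "event": rng.integers(0, 2, 75),
        "enhancing_volume": rng.lognormal(3, 0.5, 75),
        "longest_axis_mm": rng.normal(45, 10, 75),
        "kps": rng.choice([60, 70, 80, 90, 100], 75),   # Karnofsky score
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="survival_days", event_col="event")
    cph.print_summary()    # hazard ratios and p-values per covariate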

Cancer Digital Slide Archive: an informatics resource to support integrated in silico analysis of TCGA pathology data

  • Gutman, David A
  • Cobb, Jake
  • Somanna, Dhananjaya
  • Park, Yuna
  • Wang, Fusheng
  • Kurc, Tahsin
  • Saltz, Joel H
  • Brat, Daniel J
  • Cooper, Lee AD
  • Kong, Jun
Journal of the American Medical Informatics Association 2013 Journal Article, cited 70 times
Website
BACKGROUND: The integration and visualization of multimodal datasets is a common challenge in biomedical informatics. Several recent studies of The Cancer Genome Atlas (TCGA) data have illustrated important relationships between morphology observed in whole-slide images, outcome, and genetic events. The pairing of genomics and rich clinical descriptions with whole-slide imaging provided by TCGA presents a unique opportunity to perform these correlative studies. However, better tools are needed to integrate the vast and disparate data types. OBJECTIVE: To build an integrated web-based platform supporting whole-slide pathology image visualization and data integration. MATERIALS AND METHODS: All images and genomic data were directly obtained from the TCGA and National Cancer Institute (NCI) websites. RESULTS: The Cancer Digital Slide Archive (CDSA) produced is accessible to the public (http://cancer.digitalslidearchive.net) and currently hosts more than 20,000 whole-slide images from 22 cancer types. DISCUSSION: The capabilities of CDSA are demonstrated using TCGA datasets to integrate pathology imaging with associated clinical, genomic and MRI measurements in glioblastomas and can be extended to other tumor types. CDSA also allows URL-based sharing of whole-slide images, and has preliminary support for directly sharing regions of interest and other annotations. Images can also be selected on the basis of other metadata, such as mutational profile, patient age, and other relevant characteristics. CONCLUSIONS: With the increasing availability of whole-slide scanners, analysis of digitized pathology images will become increasingly important in linking morphologic observations with genomic and clinical endpoints.

The REMBRANDT study, a large collection of genomic data from brain cancer patients

  • Gusev, Yuriy
  • Bhuvaneshwar, Krithika
  • Song, Lei
  • Zenklusen, Jean-Claude
  • Fine, Howard
  • Madhavan, Subha
Scientific data 2018 Journal Article, cited 1 times
Website

Appropriate Contrast Enhancement Measures for Brain and Breast Cancer Images

  • Gupta, Suneet
  • Porwal, Rabins
International Journal of Biomedical Imaging 2016 Journal Article, cited 10 times
Website
Medical imaging systems often produce images with poor contrast that require enhancement before they are examined by medical professionals. This is necessary for proper diagnosis and subsequent treatment. Various enhancement algorithms improve medical images to different extents, and various quantitative metrics or measures evaluate the quality of an image. This paper suggests the most appropriate measures for two types of medical images, namely brain cancer images and breast cancer images.
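Two common quantitative contrast measures of the kind such comparisons involve, Michelson and RMS contrast, sketched with numpy (the specific measures the paper recommends may differ):

    import numpy as np

    def michelson_contrast(img: np.ndarray) -> float:
        lo, hi = img.min(), img.max()
        return (hi - lo) / (hi + lo + 1e-12)

    def rms_contrast(img: np.ndarray) -> float:
        return float(img.std())

    img = np.random.rand(256, 256) * 0.5 + 0.25    # stand-in for an MR slice in [0, 1]
    print(michelson_contrast(img), rms_contrast(img))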

C-NMC: B-lineage acute lymphoblastic leukaemia: A blood cancer dataset

  • Gupta, Ritu
  • Gehlot, Shiv
  • Gupta, Anubha
Medical engineering & physics 2022 Journal Article, cited 0 times
Website
Development of computer-aided cancer diagnostic tools is an active research area owing to advancements in the deep-learning domain. Such technological solutions provide affordable and easily deployable diagnostic tools. Leukaemia, or blood cancer, is one of the leading cancers, causing more than 0.3 million deaths every year. To aid the development of such an AI-enabled tool, we collected and curated a microscopic image dataset, C-NMC, of more than 15,000 very-high-resolution cell images of B-lineage acute lymphoblastic leukaemia (B-ALL). The dataset is prepared at the subject level and contains images of both healthy subjects and cancer patients. To date, this is the largest curated dataset on B-ALL cancer in the public domain. C-NMC is available at The Cancer Imaging Archive (TCIA), USA, and can be helpful to the research community worldwide for the development of B-ALL cancer diagnostic tools. This dataset was utilized in an international medical imaging challenge held at the ISBI 2019 conference in Venice, Italy. In this paper, we present a detailed description and the challenges of this dataset. We also present benchmarking results of all the methods applied so far on this dataset.

Brain Tumor Detection using Curvelet Transform and Support Vector Machine

  • Gupta, Bhawna
  • Tiwari, Shamik
International Journal of Computer Science and Mobile Computing 2014 Journal Article, cited 8 times
Website

A tool for lung nodules analysis based on segmentation and morphological operation

  • Gupta, Anindya
  • Martens, Olev
  • Le Moullec, Yannick
  • Saar, Tonis
2015 Conference Proceedings, cited 4 times
Website

Multi-branch Learning Framework with Different Receptive Fields Ensemble for Brain Tumor Segmentation

  • Guohua, Cheng
  • Mengyan, Luo
  • Linyang, He
  • Lingqiang, Mo
2020 Book Section, cited 0 times
Segmentation of brain tumors from 3D magnetic resonance images (MRIs) is one of the key elements of diagnosis and treatment. Segmentation often depends on manual delineation, which is time-consuming and subjective. In this paper, we propose a robust method for automatic brain tumor segmentation that fully exploits the complementarity between models and training schemes with different structures. Because of the significant size differences among brain tumors, a model with a single receptive field is not robust. To solve this problem, we propose our own method: i) a cascade model with a 3D U-Net-like architecture, whose small receptive field focuses on local details; ii) a 3D U-Net model combined with a VAE module, whose large receptive field focuses on global information; iii) a redesigned multi-branch network with a cascade attention network, which provides different receptive fields for different types of brain tumors, accommodating scale differences between tumors and making full use of prior knowledge of the task. The ensemble of all these models further improves the overall performance on BraTS2019 [10] image segmentation. We evaluate the proposed methods on the validation dataset of the BraTS2019 segmentation challenge and achieve Dice coefficients of 0.91, 0.83 and 0.79 for the whole tumor, tumor core and enhancing tumor core, respectively. Our experiments indicate that the proposed methods have promising potential in the field of brain tumor segmentation.

Efficient Transfer Learning using Pre-trained Models on CT/MRI

  • Guobadia, Nicole
2023 Thesis, cited 0 times
Website
The medical imaging field has unique obstacles to face when performing computer vision classification tasks. The retrieval of the data, be it CT scans or MRI, is not only expensive but also limited due to the lack of publicly available labeled data. In spite of this, clinicians often need this medical imaging data to perform diagnosis and recommendations for treatment. This motivates the use of efficient transfer learning techniques to not only condense the complexity of the often-volumetric data, but also to achieve better results faster through established machine learning techniques like transfer learning, fine-tuning, and shallow deep learning. In this paper, we introduce a three-step process to perform classification using CT scans and MRI data. The process makes use of fine-tuning to align the pretrained model with the target class, feature extraction to preserve learned information for downstream classification tasks, and shallow deep learning to perform subsequent training. Experiments compare the performance of the proposed methodology as well as the time-cost trade-offs of our technique against other baseline methods. Through these experiments we find that our proposed method outperforms all other baselines while achieving a substantial speed-up in overall training time.
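The middle step, feature extraction, can be sketched by running slices through a frozen ImageNet backbone and keeping the pooled features for a shallow downstream classifier; the model choice and input shapes are assumptions, not the thesis's exact setup.

    import torch
    from torchvision import models

    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()       # drop the classification head
    backbone.eval()

    slices = torch.randn(32, 3, 224, 224)   # CT/MRI slices replicated to 3 channels
    with torch.no_grad():
        features = backbone(slices)         # (32, 512) feature vectors
    print(features.shape)                   # feed these to a shallow classifier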

Novel computer‐aided lung cancer detection based on convolutional neural network‐based and feature‐based classifiers using metaheuristics

  • Guo, Z. Q.
  • Xu, L. A.
  • Si, Y. J.
  • Razmjooy, N.
International Journal of Imaging Systems and Technology 2021 Journal Article, cited 1 times
Website
This study proposes a lung cancer diagnosis system based on computed tomography (CT) scan images. The proposed method uses a sequential approach: two well-organized classifiers, a convolutional neural network (CNN) and a feature-based method, are applied in turn. In the first step, the CNN classifier is optimized using a newly designed optimization method called the improved Harris hawk optimizer, and classification of the dataset is performed. If the disease cannot be detected via this method, the case is conveyed to the second classifier, the feature-based method, which uses Haralick and LBP features. Finally, if the feature-based method also does not detect cancer, the case is classified as healthy; otherwise, it is classified as cancerous.
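The hand-crafted features named for the second-stage classifier can be sketched with scikit-image: local binary patterns and Haralick-style GLCM texture statistics computed on a (here simulated) CT slice.

    import numpy as np
    from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

    slice_8bit = (np.random.rand(128, 128) * 255).astype(np.uint8)

    # LBP histogram as a texture descriptor
    lbp = local_binary_pattern(slice_8bit, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, density=True)

    # Haralick-style statistics from a gray-level co-occurrence matrix
    glcm = graycomatrix(slice_8bit, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    haralick_like = [graycoprops(glcm, p)[0, 0]
                     for p in ("contrast", "homogeneity", "energy", "correlation")]
    print(lbp_hist, haralick_like)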

Brain Tumor Segmentation Based on Attention Mechanism and Multi-model Fusion

  • Guo, Xutao
  • Yang, Chushu
  • Ma, Ting
  • Zhou, Pengzheng
  • Lu, Shangfeng
  • Ji, Nan
  • Li, Deling
  • Wang, Tong
  • Lv, Haiyan
2020 Book Section, cited 0 times
Brain tumors are uncontrolled, abnormal cell growths in the brain. The incidence and mortality of brain tumors are very high. Among them, gliomas are the most common primary malignant tumors, with differing degrees of invasion. The segmentation of brain tumors is a prerequisite for disease diagnosis, surgical planning, and prognosis. According to the characteristics of brain tumor data, we designed an automatic multi-model-fusion brain tumor segmentation algorithm based on an attention mechanism [1]. Our network architecture is slightly modified from 3D U-Net [2], with the attention mechanism added to the 3D U-Net model. Four independent networks are designed according to the patch size and attention mechanism used in training: we use 64 × 64 × 64 and 128 × 128 × 128 patch sizes to train different sub-networks. Finally, the results of the four models at the label layer are combined to obtain the final segmentation. This multi-model fusion method effectively improves the robustness of the algorithm, while the attention method improves the feature-extraction ability of the network and the segmentation accuracy. Our experimental study on the newly released BraTS dataset (BraTS 2019) shows that our method accurately delineates brain tumors.

Domain Knowledge Based Brain Tumor Segmentation and Overall Survival Prediction

  • Guo, Xiaoqing
  • Yang, Chen
  • Lam, Pak Lun
  • Woo, Peter Y. M.
  • Yuan, Yixuan
2020 Book Section, cited 0 times
Automatically segmenting sub-regions of gliomas (necrosis, edema and enhancing tumor) and accurately predicting overall survival (OS) time from multimodal MRI sequences have important clinical significance in the diagnosis, prognosis and treatment of gliomas. However, due to the high degree of variation in heterogeneous appearance and individual physical state, the segmentation of sub-regions and OS prediction are very challenging. To deal with these challenges, we utilize a 3D dilated multi-fiber network (DMFNet) with weighted Dice loss for brain tumor segmentation, which incorporates prior volume statistic knowledge and obtains a balance between small and large objects in MRI scans. For OS prediction, we propose a DenseNet-based 3D neural network with a position encoding convolutional layer (PECL) to extract meaningful features from T1-contrast MRI, T2 MRI and previously segmented sub-regions. Both labeled data and unlabeled data are utilized to prevent over-fitting in semi-supervised learning. The learned deep features, along with handcrafted features (such as age and tumor volume) and position-encoded segmentation features, are fed to a Gradient Boosting Decision Tree (GBDT) to predict a specific OS day.

Prediction of clinical phenotypes in invasive breast carcinomas from the integration of radiomics and genomics data

  • Guo, Wentian
  • Li, Hui
  • Zhu, Yitan
  • Lan, Li
  • Yang, Shengjie
  • Drukker, Karen
  • Morris, Elizabeth
  • Burnside, Elizabeth
  • Whitman, Gary
  • Giger, Maryellen L
  • Ji, Y.
  • TCGA Breast Phenotype Research Group
Journal of Medical Imaging 2015 Journal Article, cited 57 times
Website
Genomic and radiomic imaging profiles of invasive breast carcinomas from The Cancer Genome Atlas and The Cancer Imaging Archive were integrated and a comprehensive analysis was conducted to predict clinical outcomes using the radiogenomic features. Variable selection via LASSO and logistic regression were used to select the most-predictive radiogenomic features for the clinical phenotypes, including pathological stage, lymph node metastasis, and status of estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2). Cross-validation with receiver operating characteristic (ROC) analysis was performed and the area under the ROC curve (AUC) was employed as the prediction metric. Higher AUCs were obtained in the prediction of pathological stage, ER, and PR status than for lymph node metastasis and HER2 status. Overall, the prediction performances by genomics alone, radiomics alone, and combined radiogenomics features showed statistically significant correlations with clinical outcomes; however, improvement on the prediction performance by combining genomics and radiomics data was not found to be statistically significant, most likely due to the small sample size of 91 cancer cases with 38 radiomic features and 144 genomic features.
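The variable-selection step can be sketched with scikit-learn: an L1-penalized (LASSO) logistic regression selects a sparse feature subset and is scored by cross-validated AUC; the data below are simulated, and the regularization strength is an assumption.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)
    X = rng.normal(size=(91, 182))       # 91 cases, radiomic + genomic features
    y = (X[:, 0] - X[:, 3] + rng.normal(0, 1, 91) > 0).astype(int)  # e.g., ER status

    lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    auc = cross_val_score(lasso, X, y, cv=5, scoring="roc_auc")
    print(f"CV AUC = {auc.mean():.2f}")
    print("selected features:", np.flatnonzero(lasso.fit(X, y).coef_))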

Comparison of performances of conventional and deep learning-based methods in segmentation of lung vessels and registration of chest radiographs

  • Guo, W.
  • Gu, X.
  • Fang, Q.
  • Li, Q.
Radiol Phys Technol 2020 Journal Article, cited 0 times
Website
Conventional machine learning-based methods have been effective in assisting physicians in making accurate decisions and utilized in computer-aided diagnosis for more than 30 years. Recently, deep learning-based methods, and convolutional neural networks in particular, have rapidly become preferred options in medical image analysis because of their state-of-the-art performance. However, the performances of conventional and deep learning-based methods cannot be compared reliably because of their evaluations on different datasets. Hence, we developed both conventional and deep learning-based methods for lung vessel segmentation and chest radiograph registration, and subsequently compared their performances on the same datasets. The results strongly indicated the superiority of deep learning-based methods over their conventional counterparts.

Texture synthesis for generating realistic-looking bronchoscopic videos

  • Guo, L.
  • Nahm, W.
Int J Comput Assist Radiol Surg 2023 Journal Article, cited 2 times
Website
PURPOSE: Synthetic realistic-looking bronchoscopic videos are needed to develop and evaluate depth estimation methods as part of investigating vision-based bronchoscopic navigation system. To generate these synthetic videos under the circumstance where access to real bronchoscopic images/image sequences is limited, we need to create various realistic-looking image textures of the airway inner surface with large size using a small number of real bronchoscopic image texture patches. METHODS: A generative adversarial networks-based method is applied to create realistic-looking textures of the airway inner surface by learning from a limited number of small texture patches from real bronchoscopic images. By applying a purely convolutional architecture without any fully connected layers, this method allows the production of textures with arbitrary size. RESULTS: Authentic image textures of airway inner surface are created. An example of the synthesized textures and two frames of the thereby generated bronchoscopic video are shown. The necessity and sufficiency of the generated textures as image features for further depth estimation methods are demonstrated. CONCLUSIONS: The method can generate textures of the airway inner surface that meet the requirements for the texture itself and for the thereby generated bronchoscopic videos, including "realistic-looking," "long-term temporal consistency," "sufficient image features for depth estimation," and "large size and variety of synthesized textures." Besides, it also shows advantages with respect to the easy accessibility to required data source. A further validation of this approach is planned by utilizing the realistic-looking bronchoscopic videos with textures generated by this method as training and test data for some depth estimation networks.

Cascaded Global Context Convolutional Neural Network for Brain Tumor Segmentation

  • Guo, Dong
  • Wang, Lu
  • Song, Tao
  • Wang, Guotai
2020 Book Section, cited 9 times
Website
A cascade of global context convolutional neural networks is proposed to segment multi-modality MR images with brain tumor into three subregions: enhancing tumor, whole tumor and tumor core. Each network is a modification of the 3D U-Net consisting of residual connection, group normalization and deep supervision. In addition, we apply Global Context (GC) block to capture long-range dependency and inter-channel dependency. We use a combination of logarithmic Dice loss and weighted cross entropy loss to focus on less accurate voxels and improve the accuracy. Experiments with BraTS 2019 validation set show the proposed method achieved average Dice scores of 0.77338, 0.90712, 0.83911 for enhancing tumor, whole tumor and tumor core, respectively. The corresponding values for BraTS 2019 testing set were 0.79303, 0.87962, 0.82887 for enhancing tumor, whole tumor and tumor core, respectively.
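The combined loss the paper describes, a logarithmic Dice term plus weighted cross entropy, can be sketched in PyTorch; the class weights and the exact formulation are assumptions, not the authors' code.

    import torch
    import torch.nn.functional as F

    def log_dice_wce_loss(logits, target, class_weights, eps=1e-5):
        # logits: (N, C, D, H, W); target: (N, D, H, W) integer labels
        probs = torch.softmax(logits, dim=1)
        onehot = F.one_hot(target, logits.shape[1]).permute(0, 4, 1, 2, 3).float()
        inter = (probs * onehot).sum(dim=(0, 2, 3, 4))
        union = (probs + onehot).sum(dim=(0, 2, 3, 4))
        dice = (2 * inter + eps) / (union + eps)
        log_dice = (-torch.log(dice)).mean()       # penalizes low-Dice classes harder
        wce = F.cross_entropy(logits, target, weight=class_weights)
        return log_dice + wce

    logits = torch.randn(1, 4, 8, 32, 32)          # 4 classes, tiny toy volume
    target = torch.randint(0, 4, (1, 8, 32, 32))
    print(log_dice_wce_loss(logits, target, torch.tensor([1., 2., 2., 4.])))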

Image Recovery from Synthetic Noise Artifacts in CT Scans Using Modified U-Net

  • Gunawan, Rudy
  • Tran, Yvonne
  • Zheng, Jinchuan
  • Nguyen, Hung
  • Chai, Rifai
Sensors (Basel) 2022 Journal Article, cited 0 times
Website
Computed Tomography (CT) is commonly used for cancer screening as it utilizes low radiation for the scan. One problem with low-dose scans is the noise artifacts associated with low photon count, which can reduce the success rate of cancer detection during radiologist assessment. The noise must be removed to restore detail clarity. We propose a noise removal method using a new convolutional neural network (CNN) model. Even though the network training time is long, the result is better than other CNN models in both quality score and visual observation. The proposed CNN model uses a stacked modified U-Net with a specific number of feature maps per layer to improve image quality, as observed in the average PSNR improvement over 174 images. The next best model scores 0.54 points lower on average. The score difference is less than 1 point, but the resulting image is closer to the full-dose scan. We used separate testing data to verify that the model can handle different noise densities. Besides comparing CNN configurations, we discuss the denoising quality of CNNs compared with classical denoising, in which the noise characteristics affect quality.
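The PSNR score used to rank the models can be computed with scikit-image; the arrays below stand in for a full-dose reference and a denoised low-dose reconstruction.

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio

    reference = np.random.rand(512, 512)     # stand-in for a full-dose image
    denoised = np.clip(reference + np.random.normal(0, 0.02, (512, 512)), 0, 1)
    print(peak_signal_noise_ratio(reference, denoised, data_range=1.0))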

AIR-Net: A novel multi-task learning method with auxiliary image reconstruction for predicting EGFR mutation status on CT images of NSCLC patients

  • Gui, D.
  • Song, Q.
  • Song, B.
  • Li, H.
  • Wang, M.
  • Min, X.
  • Li, A.
Comput Biol Med 2022 Journal Article, cited 0 times
Website
Automated and accurate EGFR mutation status prediction using computed tomography (CT) imagery is of great value for tailoring optimal treatments to non-small cell lung cancer (NSCLC) patients. However, existing deep learning based methods usually adopt a single-task learning strategy to design and train EGFR mutation status prediction models with limited training data, which may be insufficient to learn distinguishable representations for promoting prediction performance. In this paper, a novel multi-task learning method named AIR-Net is proposed to precisely predict EGFR mutation status on CT images. First, an auxiliary image reconstruction task is effectively integrated with EGFR mutation status prediction, providing extra supervision at the training phase. In particular, we employ multi-level information in a shared encoder to generate more comprehensive representations of tumors. Second, a powerful feature consistency loss is further introduced to constrain the semantic consistency of original and reconstructed images, which contributes to enhanced image reconstruction and offers more effective regularization to AIR-Net during training. Performance analysis of AIR-Net indicates that auxiliary image reconstruction plays an essential role in identifying EGFR mutation status. Furthermore, extensive experimental results demonstrate that our method achieves favorable performance against other competitive prediction methods. All the results of this study suggest the effectiveness and superiority of AIR-Net in precisely predicting the EGFR mutation status of NSCLC patients.

A generalized graph reduction framework for interactive segmentation of large images

  • Gueziri, Houssem-Eddine
  • McGuffin, Michael J
  • Laporte, Catherine
Computer Vision and Image Understanding 2016 Journal Article, cited 5 times
Website
The speed of graph-based segmentation approaches, such as random walker (RW) and graph cut (GC), depends strongly on image size. For high-resolution images, the time required to compute a segmentation based on user input renders interaction tedious. We propose a novel method, using an approximate contour sketched by the user, to reduce the graph before passing it on to a segmentation algorithm such as RW or GC. This enables a significantly faster feedback loop. The user first draws a rough contour of the object to segment. Then, the pixels of the image are partitioned into "layers" (corresponding to different scales) based on their distance from the contour. The thickness of these layers increases with distance to the contour according to a Fibonacci sequence. An initial segmentation result is rapidly obtained after automatically generating foreground and background labels according to a specifically selected layer; all vertices beyond this layer are eliminated, restricting the segmentation to regions near the drawn contour. Further foreground background labels can then be added by the user to refine the segmentation. All iterations of the graph-based segmentation benefit from a reduced input graph, while maintaining full resolution near the object boundary. A user study with 16 participants was carried out for RW segmentation of a multi-modal dataset of 22 medical images, using either a standard mouse or a stylus pen to draw the contour. Results reveal that our approach significantly reduces the overall segmentation time compared with the status quo approach (p < 0.01). The study also shows that our approach works well with both input devices. Compared to super-pixel graph reduction, our approach provides full resolution accuracy at similar speed on a high-resolution benchmark image with both RW and GC segmentation methods. However, graph reduction based on super-pixels does not allow interactive correction of clustering errors. Finally, our approach can be combined with super-pixel clustering methods for further graph reduction, resulting in even faster segmentation. (C) 2016 Elsevier Inc. All rights reserved.
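The layering idea can be sketched with a distance transform: partition pixels into bands of Fibonacci-increasing thickness by distance from the user's rough contour, keeping the graph dense near the boundary and coarse far away. The contour mask below is a stand-in for a user's sketch.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    contour = np.zeros((256, 256), dtype=bool)
    contour[64:192, 64] = contour[64:192, 192] = True   # two rough contour strokes
    dist = distance_transform_edt(~contour)             # distance to nearest contour pixel

    # Band edges with Fibonacci-growing thicknesses: 1, 2, 3, 5, 8, ...
    a, b = 1.0, 2.0
    edges = [0.0]
    while edges[-1] < dist.max():
        edges.append(edges[-1] + a)
        a, b = b, a + b

    layers = np.digitize(dist, edges)     # layer index per pixel; low = near contour
    print(np.bincount(layers.ravel()))    # coarser (larger) layers farther away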

User-guided graph reduction for fast image segmentation

  • Gueziri, Houssem-Eddine
  • McGuffin, Michael J
  • Laporte, Catherine
2015 Conference Proceedings, cited 2 times
Website
Graph-based segmentation methods such as the random walker (RW) are known to be computationally expensive. For high resolution images, user interaction with the algorithm is significantly affected. This paper introduces a novel seeding approach for graph-based segmentation that reduces computation time. Instead of marking foreground and background pixels, the user roughly marks the object boundary forming separate regions. The image pixels are then grouped into a hierarchy of increasingly large layers based on their distance from these markings. Next, foreground and background seeds are automatically generated according to the hierarchical layers of each region. The highest layers of the hierarchy are ignored leading to a significant graph reduction. Finally, validation experiments based on multiple automatically generated input seeds were carried out on a variety of medical images. Results show a significant gain in time for high resolution images using the new approach.

User-centered design and evaluation of interactive segmentation methods for medical images

  • Gueziri, Houssem-Eddine
2017 Thesis, cited 1 times
Website
Segmentation of medical images is a challenging task that aims to identify a particular structure present on the image. Among the existing methods involving the user at different levels, from a fully-manual to a fully-automated task, interactive segmentation methods provide assistance to the user during the task to reduce the variability in the results and allow occasional corrections of segmentation failures. Therefore, they offer a compromise between the segmentation efficiency and the accuracy of the results. It is the user who judges whether the results are satisfactory and how to correct them during the segmentation, making the process subject to human factors. Despite the strong influence of the user on the outcomes of a segmentation task, the impact of such factors has received little attention, with the literature focusing the assessment of segmentation processes on computational performance. Yet, involving the user performance in the analysis is more representative of a realistic scenario. Our goal is to explore the user behaviour in order to improve the efficiency of interactive image segmentation processes. This is achieved through three contributions. First, we developed a method which is based on a new user interaction mechanism to provide hints as to where to concentrate the computations. This significantly improves the computation efficiency without sacrificing the quality of the segmentation. The benefits of using such hints are twofold: (i) because our contribution is based on user interaction, it generalizes to a wide range of segmentation methods, and (ii) it gives comprehensive indications about where to focus the segmentation search. The latter advantage is used to achieve the second contribution. We developed an automated method based on a multi-scale strategy to (i) reduce the user's workload and (ii) improve the computational time up to tenfold, allowing real-time segmentation feedback. Third, we have investigated the effects of such improvements in computations on the user's performance. We report an experiment that manipulates the delay induced by the computation time while performing an interactive segmentation task. Results reveal that the influence of this delay can be significantly reduced with an appropriate interaction mechanism design. In conclusion, this project provides an effective image segmentation solution that has been developed in compliance with user performance requirements. We validated our approach through multiple user studies that provided a step forward into understanding the user behaviour during interactive image segmentation.

External validation of a CT-based radiomics signature in oropharyngeal cancer: Assessing sources of variation

  • Guevorguian, P.
  • Chinnery, T.
  • Lang, P.
  • Nichols, A.
  • Mattonen, S. A.
Radiother Oncol 2022 Journal Article, cited 0 times
Website
BACKGROUND AND PURPOSE: Radiomics is a high-throughput approach that allows for quantitative analysis of imaging data for prognostic applications. Medical images are used in oropharyngeal cancer (OPC) diagnosis and treatment planning and these images may contain prognostic information allowing for treatment personalization. However, the lack of validated models has been a barrier to the translation of radiomic research to the clinic. We hypothesize that a previously developed radiomics model for risk stratification in OPC can be validated in a local dataset. MATERIALS AND METHODS: The radiomics signature predicting overall survival incorporates features derived from the primary gross tumor volume of OPC patients treated with radiation +/- chemotherapy at a single institution (n = 343). Model fit, calibration, discrimination, and utility were evaluated. The signature was compared with a clinical model using overall stage and a model incorporating both radiomics and clinical data. A model detecting dental artifacts on computed tomography images was also validated. RESULTS: The radiomics signature had a Concordance index (C-index) of 0.66 comparable to the clinical model's C-index of 0.65. The combined model significantly outperformed (C-index of 0.69, p = 0.024) the clinical model, suggesting that radiomics provides added value. The dental artifact model demonstrated strong ability in detecting dental artifacts with an area under the curve of 0.87. CONCLUSION: This work demonstrates model performance comparable to previous validation work and provides a framework for future independent and multi-center validation efforts. With sufficient validation, radiomic models have the potential to improve traditional systems of risk stratification, treatment personalization and patient outcomes.
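
As a concrete illustration of the discrimination metric reported above, here is a minimal Python sketch (not the authors' code) of scoring a prognostic signature with Harrell's concordance index via the lifelines package; the data, noise model, and risk score are synthetic stand-ins.

```python
# Hedged sketch: evaluating a survival signature with Harrell's C-index,
# as in the validation study above. Requires numpy and lifelines.
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n = 343                                           # cohort size from the abstract
time = rng.exponential(scale=60.0, size=n)        # follow-up in months (synthetic)
event = rng.integers(0, 2, size=n)                # 1 = death observed
risk = -time + rng.normal(0.0, 30.0, size=n)      # stand-in radiomic risk score

# lifelines expects scores where higher means longer predicted survival,
# so a risk score (higher = worse) is negated before scoring.
print(f"C-index: {concordance_index(time, -risk, event):.2f}")
```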

FFCAEs : An efficient feature fusion framework using cascaded autoencoders for the identification of gliomas

  • Gudigar, Anjan
  • Raghavendra, U.
  • Rao, Tejaswi N.
  • Samanth, Jyothi
  • Rajinikanth, Venkatesan
  • Satapathy, Suresh Chandra
  • Ciaccio, Edward J.
  • Wai Yee, Chan
  • Acharya, U. Rajendra
International Journal of Imaging Systems and Technology 2022 Journal Article, cited 0 times
Website
Intracranial tumors arise from constituents of the brain and its meninges. Glioblastoma (GBM) is the most common adult primary intracranial neoplasm and is categorized as high-grade astrocytoma according to the World Health Organization (WHO). The survival rate for 5 and 10 years after diagnosis is under 10%, contributing to its grave prognosis. Early detection of GBM enables early intervention, prognostication, and treatment monitoring. Computer-aided diagnostics (CAD) is a computerized process that helps to differentiate between GBM and low-grade gliomas (LGG) using perceptible analysis of magnetic resonance (MR) images of the brain. This study proposes a framework consisting of a feature fusion algorithm with cascaded autoencoders (CAEs), referred to as FFCAEs. Here, we utilized two CAEs and extracted the relevant features from multiple CAEs. Inspired by existing work on fusion algorithms, the obtained features are then fused using a novel fusion algorithm. Finally, the resultant fused features are classified with the Softmax classifier to arrive at an average classification accuracy of 96.7%, which is 2.45% higher than that of the previously best-performing model. The method is shown to be efficacious; thus, it can be useful as a utility program for doctors.

Glioma Grade Classification via Omics Imaging

  • Guarracino, Mario
  • Manzo, Mario
  • Manipur, Ichcha
  • Granata, Ilaria
  • Maddalena, Lucia
2020 Conference Paper, cited 0 times
Omics imaging is an emerging interdisciplinary field concerned with the integration of data collected from biomedical images and omics experiments. Bringing together information coming from different sources, it permits to reveal hidden genotype-phenotype relationships, with the aim of better understanding the onset and progression of many diseases, and identifying new diagnostic and prognostic biomarkers. In this work, we present an omics imaging approach to the classification of different grades of gliomas, which are primary brain tumors arising from glial cells, as this is of critical clinical importance for making decisions regarding initial and subsequent treatment strategies. Imaging data come from analyses available in The Cancer Imaging Archive, while omics attributes are extracted by integrating metabolic models with transcriptomic data available from the Genomic Data Commons portal. We investigate the results of feature selection for the two types of data separately, as well as for the integrated data, providing hints on the most distinctive ones that can be exploited as biomarkers for glioma grading. Moreover, we show how the integrated data can provide additional clinical information as compared to the two types of data separately, leading to higher performance. We believe our results can be valuable to clinical tests in practice.

Unpaired Cross-Modal Interaction Learning for COVID-19 Segmentation on Limited CT Images

  • Guan, Qingbiao
  • Xie, Yutong
  • Yang, Bing
  • Zhang, Jianpeng
  • Liao, Zhibin
  • Wu, Qi
  • Xia, Yong
2023 Book Section, cited 0 times
Accurate automated segmentation of infected regions in CT images is crucial for predicting COVID-19’s pathological stage and treatment response. Although deep learning has shown promise in medical image segmentation, the scarcity of pixel-level annotations due to their expense and time-consuming nature limits its application in COVID-19 segmentation. In this paper, we propose utilizing large-scale unpaired chest X-rays with classification labels as a means of compensating for the limited availability of densely annotated CT scans, aiming to learn robust representations for accurate COVID-19 segmentation. To achieve this, we design an Unpaired Cross-modal Interaction (UCI) learning framework. It comprises a multi-modal encoder, a knowledge condensation (KC) and knowledge-guided interaction (KI) module, and task-specific networks for final predictions. The encoder is built to capture optimal feature representations for both CT and X-ray images. To facilitate information interaction between unpaired cross-modal data, we propose the KC that introduces a momentum-updated prototype learning strategy to condense modality-specific knowledge. The condensed knowledge is fed into the KI module for interaction learning, enabling the UCI to capture critical features and relationships across modalities and enhance its representation ability for COVID-19 segmentation. The results on the public COVID-19 segmentation benchmark show that our UCI with the inclusion of chest X-rays can significantly improve segmentation performance, outperforming advanced segmentation approaches including nnUNet, CoTr, nnFormer, and Swin UNETR. Code is available at: https://github.com/GQBBBB/UCI.

Automatic Colorectal Segmentation with Convolutional Neural Network

  • Guachi, Lorena
  • Guachi, Robinson
  • Bini, Fabiano
  • Marinozzi, Franco
Computer-Aided Design and Applications 2019 Journal Article, cited 3 times
Website
This paper presents a new method for colon tissue segmentation on Computed Tomography images which takes advantage of deep, hierarchical learning of colon features through Convolutional Neural Networks (CNN). The proposed method works robustly, reducing misclassified colon tissue pixels that are introduced by the presence of noise, artifacts, unclear edges, and other organs or areas characterized by the same intensity value as the colon. Patch analysis is exploited to allow the classification of each center pixel as a colon tissue or background pixel. Experimental results demonstrate that the proposed method achieves higher effectiveness in terms of sensitivity and specificity with respect to three state-of-the-art methods.

Automatic lung nodule detection using multi-scale dot nodule-enhancement filter and weighted support vector machines in chest computed tomography

  • Gu, Y.
  • Lu, X.
  • Zhang, B.
  • Zhao, Y.
  • Yu, D.
  • Gao, L.
  • Cui, G.
  • Wu, L.
  • Zhou, T.
PLoS One 2019 Journal Article, cited 0 times
Website
A novel CAD scheme for automated lung nodule detection is proposed to assist radiologists with the detection of lung cancer on CT scans. The proposed scheme is composed of four major steps: (1) lung volume segmentation, (2) nodule candidate extraction and grouping, (3) false positive reduction for the non-vessel tree group, and (4) classification for the vessel tree group. Lung segmentation is performed first. Then, 3D labeling technology is used to divide nodule candidates into two groups. For the non-vessel tree group, nodule candidates are classified as true nodules at the false positive reduction stage if the candidates survive the rule-based classifier and are not screened out by the dot filter. For the vessel tree group, nodule candidates are extracted using the dot filter. Next, RSFS feature selection is used to select the most discriminating features for classification. Finally, WSVM with an undersampling approach is adopted to discriminate true nodules from vessel bifurcations in the vessel tree group. The proposed method was evaluated on 154 thin-slice scans with 204 nodules in the LIDC database. The performance of the proposed CAD scheme yielded a high sensitivity (87.81%) while maintaining a low false-positive rate (1.057 FPs/scan). The experimental results indicate the performance of our method may be better than that of existing methods.
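
The "dot filter" used for candidate extraction above is, in its usual formulation, a Hessian-eigenvalue blob enhancer. The sketch below shows one common multi-scale variant in Python; the exact response function and scales used in the paper are assumptions here, not the authors' implementation.

```python
# Hedged sketch of a multi-scale Hessian-based dot (blob) enhancement filter
# of the kind used for nodule candidate extraction; the response function in
# the paper may differ. Requires numpy and scipy.
import numpy as np
from scipy import ndimage

def dot_filter(volume, sigmas=(1.0, 2.0, 4.0)):
    """Maximum dot-enhancement response over a set of Gaussian scales."""
    response = np.zeros(volume.shape)
    for s in sigmas:
        # Build the scale-normalised Hessian from Gaussian second derivatives.
        H = np.empty(volume.shape + (3, 3))
        for i in range(3):
            for j in range(i, 3):
                order = [0, 0, 0]
                order[i] += 1
                order[j] += 1
                d = ndimage.gaussian_filter(volume, sigma=s, order=order) * s**2
                H[..., i, j] = H[..., j, i] = d
        eig = np.linalg.eigvalsh(H)            # ascending: eig[...,0] most negative
        lam1, lam3 = eig[..., 0], eig[..., 2]  # |lam1| >= |lam3| for bright blobs
        # Bright blob: all eigenvalues negative; response |lam3|^2 / |lam1|.
        blob = np.where(lam3 < 0, lam3**2 / np.maximum(np.abs(lam1), 1e-9), 0.0)
        response = np.maximum(response, blob)
    return response

volume = np.random.rand(32, 32, 32)            # stand-in CT sub-volume
print(dot_filter(volume).max())
```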

Multi-View Radiomics Feature Fusion Reveals Distinct Immuno-Oncological Characteristics and Clinical Prognoses in Hepatocellular Carcinoma

  • Gu, Yu
  • Huang, Hao
  • Tong, Qi
  • Cao, Meng
  • Ming, Wenlong
  • Zhang, Rongxin
  • Zhu, Wenyong
  • Wang, Yuqi
  • Sun, Xiao
Cancers 2023 Journal Article, cited 0 times
Website
Hepatocellular carcinoma (HCC) is one of the most prevalent malignancies worldwide, and the pronounced intra- and inter-tumor heterogeneity restricts clinical benefits. Dissecting molecular heterogeneity in HCC is commonly explored by endoscopic biopsy or surgical forceps, but invasive tissue sampling and possible complications limit their broader adoption. The radiomics framework is a promising non-invasive strategy for tumor heterogeneity decoding, and the linkage between radiomics and immuno-oncological characteristics is worth further in-depth study. In this study, we extracted multi-view imaging features from contrast-enhanced CT (CE-CT) scans of HCC patients, followed by developing a fused imaging feature subtyping (FIFS) model to identify two distinct radiomics subtypes. We observed two subtypes of patients with distinct texture-dominated radiomics profiles and prognostic outcomes, and the radiomics subtype identified by the FIFS model was an independent prognostic factor. The heterogeneity was mainly attributed to inflammatory pathway activity and the tumor immune microenvironment. The predominant radiogenomics association was identified between texture-related features and immune-related pathways by integrating network analysis, and was validated in two independent cohorts. Collectively, this work described the close connections between multi-view radiomics features and immuno-oncological characteristics in HCC, and our integrative radiogenomics analysis strategy may provide clues to non-invasive inflammation-based risk stratification.

CycleGAN denoising of extreme low-dose cardiac CT using wavelet-assisted noise disentanglement

  • Gu, J.
  • Yang, T. S.
  • Ye, J. C.
  • Yang, D. H.
Med Image Anal 2021 Journal Article, cited 1 times
Website
In electrocardiography (ECG) gated cardiac CT angiography (CCTA), multiple images covering the entire cardiac cycle are taken continuously, so reduction of the accumulated radiation dose could be an important issue for patient safety. Although ECG-gated dose modulation (so-called ECG pulsing) is used to acquire many phases of CT images at a low dose, the reduction of the radiation dose introduces noise into the image reconstruction. To address this, we developed a high performance unsupervised deep learning method using noise disentanglement that can effectively learn the noise patterns even from extreme low dose CT images. For noise disentanglement, we use a wavelet transform to extract the high-frequency signals that contain the most noise. Since matched low-dose and high-dose cardiac CT data are impossible to obtain in practice, our neural network was trained in an unsupervised manner using cycleGAN for the extracted high frequency signals from the low-dose and unpaired high-dose CT images. Once the network is trained, denoised images are obtained by subtracting the estimated noise components from the input images. Image quality evaluation of the denoised images from only 4% dose CT images was performed by experienced radiologists for several anatomical structures. Visual grading analysis was conducted according to the sharpness level, noise level, and structural visibility. Also, the signal-to-noise ratio was calculated. The evaluation results showed that the quality of the images produced by the proposed method is much improved compared to low-dose CT images and to the baseline cycleGAN results. The proposed noise-disentangled cycleGAN with wavelet transform effectively removed noise from extreme low-dose CT images compared to the existing baseline algorithms. It can be an important denoising platform for low-dose CT.
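
The noise-disentanglement step described above rests on a standard wavelet decomposition: the high-frequency sub-bands carry most of the CT noise, so they can be processed separately from the low-frequency content. The following minimal sketch, assuming PyWavelets and a Haar wavelet (both illustrative choices, not the paper's exact settings), shows the separation and recombination; the "denoiser" is a placeholder.

```python
# Hedged sketch of wavelet-assisted noise disentanglement: split a CT slice
# into low- and high-frequency sub-bands, process only the detail bands,
# then reconstruct. Requires numpy and PyWavelets (pywt).
import numpy as np
import pywt

image = np.random.rand(256, 256)  # stand-in for a low-dose CT slice

# Single-level 2D DWT: cA is the low-frequency approximation; the detail
# bands (horizontal, vertical, diagonal) hold the high-frequency signal.
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")

# In the paper, a cycleGAN learns noise on the high-frequency signal; here a
# placeholder attenuation stands in for that learned denoiser.
denoised_details = (cH * 0.5, cV * 0.5, cD * 0.5)
reconstructed = pywt.idwt2((cA, denoised_details), "haar")
print(reconstructed.shape)
```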

Development and verification of radiomics framework for computed tomography image segmentation

  • Gu, Jiabing
  • Li, Baosheng
  • Shu, Huazhong
  • Zhu, Jian
  • Qiu, Qingtao
  • Bai, Tong
Medical Physics 2022 Journal Article, cited 0 times
Website

Exploit fully automatic low-level segmented PET data for training high-level deep learning algorithms for the corresponding CT data

  • Gsaxner, Christina
  • Roth, Peter M
  • Wallner, Jurgen
  • Egger, Jan
PLoS One 2019 Journal Article, cited 0 times
Website
We present an approach for fully automatic urinary bladder segmentation in CT images with artificial neural networks in this study. Automatic medical image analysis has become an invaluable tool in the different treatment stages of diseases. Especially medical image segmentation plays a vital role, since segmentation is often the initial step in an image analysis pipeline. Since deep neural networks have made a large impact on the field of image processing in the past years, we use two different deep learning architectures to segment the urinary bladder. Both of these architectures are based on pre-trained classification networks that are adapted to perform semantic segmentation. Since deep neural networks require a large amount of training data, specifically images and corresponding ground truth labels, we furthermore propose a method to generate such a suitable training data set from Positron Emission Tomography/Computed Tomography image data. This is done by applying thresholding to the Positron Emission Tomography data for obtaining a ground truth and by utilizing data augmentation to enlarge the dataset. In this study, we discuss the influence of data augmentation on the segmentation results, and compare and evaluate the proposed architectures in terms of qualitative and quantitative segmentation performance. The results presented in this study allow concluding that deep neural networks can be considered a promising approach to segment the urinary bladder in CT images.
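
The ground-truth generation step described above is simple to state concretely: a fixed threshold on the PET activity yields a binary label volume that supervises segmentation of the co-registered CT. The sketch below is a minimal illustration; the threshold value and array shapes are assumptions, not the study's exact settings.

```python
# Hedged sketch of PET-threshold ground-truth generation plus a trivial
# mirroring augmentation, as described in the abstract. Requires numpy.
import numpy as np

pet = np.random.rand(128, 128, 64) * 10.0   # stand-in PET volume (e.g., SUV)
threshold = 4.0                              # assumed activity cut-off

ground_truth = (pet >= threshold).astype(np.uint8)  # 1 = bladder, 0 = background
mirrored = ground_truth[::-1]                # one simple augmentation sample
print("labelled voxels:", int(ground_truth.sum()), "| augmented copies: 1")
```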

Automatic tumour segmentation in brain images: moving towards clinical implementation

  • Gryska, Emilia
2022 Thesis, cited 0 times
Website
The aim of this thesis was to examine and enhance the scientific groundwork for translating deep learning (DL) algorithms for brain tumour segmentation into clinical decision support tools. Paper II describes a scoping review conducted to map the field of automatic brain lesion segmentation on magnetic resonance (MR) images according to a predefined and peer-reviewed study protocol (Paper I). Insufficient preprocessing description was identified as one factor hindering clinical implementation of the reviewed algorithms. A reproducibility and replicability analysis of two algorithms was described in Paper III. The two algorithms and their validation studies were previously assessed as reproducible. In this experimental investigation, the original validation results were reproduced and replicated for one algorithm. Analysing the reasons for failure to reproduce validation of the second algorithm led to a suggested update to a commonly-used reproducibility checklist; the importance of a thorough description of preprocessing was highlighted. In Paper IV, radiologists' perception of DL-generated brain tumour labels in tumour volume growth assessment was examined. Ten radiologists participated in a reading/questionnaire session of 20 MR examination cases. The readers were confident that the label-derived volume change is more accurate than their visual assessment, even when the inter-rater agreement on the label quality was poor. In Paper V, the broad theme of trust in artificial intelligence (AI) in radiology was explored. A semi-structured interview study with twenty-six AI implementation stakeholders was conducted. Four requirements of the implemented tools and procedures were identified that promote trust in AI: reliability, quality control, transparency, and inter-organisational compatibility. The findings indicate that current strategies to validate DL algorithms do not suffice to assess their accuracy in a clinical setting. Despite the recognition from radiologists that DL algorithms can improve the accuracy of tumour volume assessment, implementation strategies require more work and the involvement of multiple stakeholders.

Generative Models and Feature Extraction on Patient Images and Structure Data in Radiation Therapy

  • Gruselius, Hanna
Mathematics 2018 Thesis, cited 0 times
Website

Smooth extrapolation of unknown anatomy via statistical shape models

  • Grupp, RB
  • Chiang, H
  • Otake, Y
  • Murphy, RJ
  • Gordon, CR
  • Armand, M
  • Taylor, RH
2015 Conference Proceedings, cited 2 times
Website

Using Deep Learning for Pulmonary Nodule Detection & Diagnosis

  • Gruetzemacher, Richard
  • Gupta, Ashish
2016 Conference Paper, cited 0 times

Brain Tumor Segmentation and Associated Uncertainty Evaluation Using Multi-sequences MRI Mixture Data Preprocessing

  • Groza, Vladimir
  • Tuchinov, Bair
  • Amelina, Evgeniya
  • Pavlovskiy, Evgeniy
  • Tolstokulakov, Nikolay
  • Amelin, Mikhail
  • Golushko, Sergey
  • Letyagin, Andrey
2021 Book Section, cited 0 times
Brain tumor segmentation is one of the crucial tasks nowadays among the directions and domains where the daily clinical workflow requires considerable effort in studying computed tomography (CT) or structural magnetic resonance imaging (MRI) scans of patients with various pathologies. MRI is the most common method of primary detection and non-invasive diagnostics, and a source of recommendations for further treatment of brain diseases. The brain is a complex structure, different areas of which have different functional significance. In this paper, we extend previous research work on robust pre-processing methods that consider all available information from MRI scans by composing the T1, T1C, T2 and T2-FLAIR sequences into a single input. This approach enriches the input data for the segmentation process and helps to improve the accuracy of the segmentation and the associated uncertainty evaluation. The method proposed in this paper also demonstrates a strong improvement on the segmentation problem. This conclusion holds with respect to the Dice metric, sensitivity and specificity compared to an identical training/validation procedure based on any single sequence, regardless of the chosen neural network architecture. The obtained results demonstrate significant performance improvement when combining three MRI sequences into a 3-channel, RGB-like image for the considered brain tumor segmentation tasks. In this work we also provide a comparison of various gradient descent optimization methods and of different backbone architectures.
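
The core pre-processing idea above reduces to stacking co-registered, normalised sequences into channels. A minimal sketch follows; which sequences map to which channel, and the percentile-based intensity window, are assumptions for illustration.

```python
# Hedged sketch: compose three co-registered MRI sequences into one
# 3-channel, RGB-like input, as described in the abstract. Requires numpy.
import numpy as np

def to_rgb_like(t1c, t2, flair):
    """Stack three sequences into one HxWx3 float image in [0, 1]."""
    channels = []
    for seq in (t1c, t2, flair):
        lo, hi = np.percentile(seq, (1, 99))              # robust intensity window
        channels.append(np.clip((seq - lo) / (hi - lo + 1e-9), 0.0, 1.0))
    return np.stack(channels, axis=-1)

t1c, t2, flair = (np.random.rand(240, 240) for _ in range(3))  # stand-in slices
print(to_rgb_like(t1c, t2, flair).shape)  # (240, 240, 3)
```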

Quantitative Computed Tomographic Descriptors Associate Tumor Shape Complexity and Intratumor Heterogeneity with Prognosis in Lung Adenocarcinoma

  • Grove, Olya
  • Berglund, Anders E
  • Schabath, Matthew B
  • Aerts, Hugo JWL
  • Dekker, Andre
  • Wang, Hua
  • Velazquez, Emmanuel Rios
  • Lambin, Philippe
  • Gu, Yuhua
  • Balagurunathan, Yoganand
  • Eikman, E.
  • Gatenby, Robert A
  • Eschrich, S
  • Gillies, Robert J
PLoS One 2015 Journal Article, cited 87 times
Website
Two CT features were developed to quantitatively describe lung adenocarcinomas by scoring tumor shape complexity (feature 1: convexity) and intratumor density variation (feature 2: entropy ratio) in routinely obtained diagnostic CT scans. The developed quantitative features were analyzed in two independent cohorts (cohort 1: n = 61; cohort 2: n = 47) of patients diagnosed with primary lung adenocarcinoma, retrospectively curated to include imaging and clinical data. Preoperative chest CTs were segmented semi-automatically. Segmented tumor regions were further subdivided into core and boundary sub-regions, to quantify intensity variations across the tumor. Reproducibility of the features was evaluated in an independent test-retest dataset of 32 patients. The proposed metrics showed a high degree of reproducibility in a repeated experiment (concordance, CCC>/=0.897; dynamic range, DR>/=0.92). Association with overall survival was evaluated by Cox proportional hazard regression, Kaplan-Meier survival curves, and the log-rank test. Both features were associated with overall survival (convexity: p = 0.008; entropy ratio: p = 0.04) in Cohort 1 but not in Cohort 2 (convexity: p = 0.7; entropy ratio: p = 0.8). In both cohorts, these features were found to be descriptive and demonstrated the link between imaging characteristics and patient survival in lung adenocarcinoma.
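
The two descriptors named above can be sketched directly: convexity as the ratio of tumour volume to convex-hull volume, and an entropy ratio between core and boundary sub-regions. The authors' exact definitions (hull estimation, bin counts, erosion depth) may differ in detail; the following is a hedged reimplementation sketch.

```python
# Hedged sketch of "convexity" and "entropy ratio" shape/heterogeneity
# features for a 3D tumour mask. Requires numpy and scipy.
import numpy as np
from scipy.spatial import ConvexHull
from scipy.ndimage import binary_erosion

def convexity(mask):
    points = np.argwhere(mask)              # voxel coordinates of the tumour
    hull = ConvexHull(points)
    # Ratio of voxel count to hull volume: lower means a more complex shape.
    return mask.sum() / hull.volume

def shannon_entropy(values, bins=64):
    p, _ = np.histogram(values, bins=bins)
    p = p[p > 0] / p.sum()
    return -(p * np.log2(p)).sum()

def entropy_ratio(image, mask, erosions=3):
    core = binary_erosion(mask, iterations=erosions)
    boundary = mask & ~core
    return shannon_entropy(image[core]) / shannon_entropy(image[boundary])

mask = np.zeros((40, 40, 40), bool); mask[10:30, 10:30, 10:30] = True
image = np.random.rand(40, 40, 40) * 100    # stand-in CT intensities
print(convexity(mask), entropy_ratio(image, mask))
```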

Machine Learning-based Differentiation of Benign and Premalignant Colorectal Polyps Detected with CT Colonography in an Asymptomatic Screening Population: A Proof-of-Concept Study

  • Grosu, S.
  • Wesp, P.
  • Graser, A.
  • Maurus, S.
  • Schulz, C.
  • Knosel, T.
  • Cyran, C. C.
  • Ricke, J.
  • Ingrisch, M.
  • Kazmierczak, P. M.
Radiology 2021 Journal Article, cited 0 times
Website
Background CT colonography does not enable definite differentiation between benign and premalignant colorectal polyps. Purpose To perform machine learning-based differentiation of benign and premalignant colorectal polyps detected with CT colonography in an average-risk asymptomatic colorectal cancer screening sample with external validation using radiomics. Materials and Methods In this secondary analysis of a prospective trial, colorectal polyps of all size categories and morphologies were manually segmented on CT colonographic images and were classified as benign (hyperplastic polyp or regular mucosa) or premalignant (adenoma) according to the histopathologic reference standard. Quantitative image features characterizing shape (n = 14), gray level histogram statistics (n = 18), and image texture (n = 68) were extracted from segmentations after applying 22 image filters, resulting in 1906 feature-filter combinations. Based on these features, a random forest classification algorithm was trained to predict the individual polyp character. Diagnostic performance was validated in an external test set. Results The random forest model was fitted using a training set consisting of 107 colorectal polyps in 63 patients (mean age, 63 years +/- 8 [standard deviation]; 40 men) comprising 169 segmentations on CT colonographic images. The external test set included 77 polyps in 59 patients comprising 118 segmentations. Random forest analysis yielded an area under the receiver operating characteristic curve of 0.91 (95% CI: 0.85, 0.96), a sensitivity of 82% (65 of 79) (95% CI: 74%, 91%), and a specificity of 85% (33 of 39) (95% CI: 72%, 95%) in the external test set. In two subgroup analyses of the external test set, the area under the receiver operating characteristic curve was 0.87 in the size category of 6-9 mm and 0.90 in the size category of 10 mm or larger. The most important image feature for decision making (relative importance of 3.7%) was quantifying first-order gray level histogram statistics. Conclusion In this proof-of-concept study, machine learning-based image analysis enabled noninvasive differentiation of benign and premalignant colorectal polyps with CT colonography. (c) RSNA, 2021 Online supplemental material is available for this article.
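
The modelling step above maps cleanly onto standard tooling. The sketch below, not the study's code, shows a random forest trained on radiomic feature vectors and scored with ROC AUC on an external test set; the feature matrix is synthetic, with the dimensionality (1906 feature-filter combinations) and sample counts taken from the abstract.

```python
# Hedged sketch: random forest classification of benign vs premalignant
# polyps from radiomic features, evaluated by external-test AUC.
# Requires numpy and scikit-learn.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_features = 1906                      # feature-filter combinations per abstract
X_train, y_train = rng.normal(size=(169, n_features)), rng.integers(0, 2, 169)
X_test, y_test = rng.normal(size=(118, n_features)), rng.integers(0, 2, 118)

model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"external-test AUC: {auc:.2f}")   # random data, so ~0.5 here
```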

Defining the biological and clinical basis of radiomics: towards clinical imaging biomarkers

  • Grossmann, Patrick Benedict Hans Juan
2018 Thesis, cited 0 times
Website

Imaging-genomics reveals driving pathways of MRI derived volumetric tumor phenotype features in Glioblastoma

  • Grossmann, Patrick
  • Gutman, David A
  • Dunn, William D
  • Holder, Chad A
  • Aerts, Hugo JWL
BMC Cancer 2016 Journal Article, cited 21 times
Website
Background Glioblastoma (GBM) tumors exhibit strong phenotypic differences that can be quantified using magnetic resonance imaging (MRI), but the underlying biological drivers of these imaging phenotypes remain largely unknown. An Imaging-Genomics analysis was performed to reveal the mechanistic associations between MRI-derived quantitative volumetric tumor phenotype features and molecular pathways. Methods One hundred forty-one patients with presurgery MRI and survival data were included in our analysis. Volumetric features were defined, including the necrotic core (NE), contrast-enhancement (CE), abnormal tumor volume assessed by post-contrast T1w (tumor bulk or TB), tumor-associated edema based on T2-FLAIR (ED), and total tumor volume (TV), as well as ratios of these tumor components. Based on gene expression where available (n = 91), pathway associations were assessed using a preranked gene set enrichment analysis. These results were put into the context of molecular subtypes in GBM and prognostication. Results Volumetric features were significantly associated with diverse sets of biological processes (FDR < 0.05). While NE and TB were enriched for immune response pathways and apoptosis, CE was associated with signal transduction and protein folding processes. ED was mainly enriched for homeostasis and cell cycling pathways. ED was also the strongest predictor of molecular GBM subtypes (AUC = 0.61). CE was the strongest predictor of overall survival (C-index = 0.6; Noether test, p = 4 x 10^-4). Conclusion GBM volumetric features extracted from MRI are significantly enriched for information about the biological state of a tumor that impacts patient outcomes. Clinical decision-support systems could exploit this information to develop personalized treatment strategies on the basis of noninvasive imaging.

Imaging and clinical data archive for head and neck squamous cell carcinoma patients treated with radiotherapy

  • Grossberg, Aaron J
  • Mohamed, Abdallah SR
  • El Halawani, Hesham
  • Bennett, William C
  • Smith, Kirk E
  • Nolan, Tracy S
  • Williams, Bowman
  • Chamchod, Sasikarn
  • Heukelom, Jolien
  • Kantor, Michael E
Scientific data 2018 Journal Article, cited 0 times
Website

LiverHccSeg: A publicly available multiphasic MRI dataset with liver and HCC tumor segmentations and inter-rater agreement analysis

  • Gross, M.
  • Arora, S.
  • Huber, S.
  • Kucukkaya, A. S.
  • Onofrey, J. A.
2023 Journal Article, cited 0 times
Website
Accurate segmentation of liver and tumor regions in medical imaging is crucial for the diagnosis, treatment, and monitoring of hepatocellular carcinoma (HCC) patients. However, manual segmentation is time-consuming and subject to inter- and intra-rater variability. Therefore, automated methods are necessary but require rigorous validation of high-quality segmentations based on a consensus of raters. To address the need for reliable and comprehensive data in this domain, we present LiverHccSeg, a dataset that provides liver and tumor segmentations on multiphasic contrast-enhanced magnetic resonance imaging from two board-approved abdominal radiologists, along with an analysis of inter-rater agreement. LiverHccSeg provides a curated resource for liver and HCC tumor segmentation tasks. The dataset includes a scientific reading and co-registered contrast-enhanced multiphasic magnetic resonance imaging (MRI) scans with corresponding manual segmentations by two board-approved abdominal radiologists and relevant metadata, and offers researchers a comprehensive foundation for external validation and benchmarking of liver and tumor segmentation algorithms. The dataset also provides an analysis of the agreement between the two sets of liver and tumor segmentations. Through the calculation of appropriate segmentation metrics, we provide insights into the consistency and variability in liver and tumor segmentations among the radiologists. A total of 17 cases were included for liver segmentation and 14 cases for HCC tumor segmentation. Liver segmentations demonstrate high segmentation agreement (mean Dice, 0.95 +/- 0.01 [standard deviation]) and HCC tumor segmentations showed higher variation (mean Dice, 0.85 +/- 0.16 [standard deviation]). The applications of LiverHccSeg can be manifold, ranging from testing machine learning algorithms on public external data to radiomic feature analyses. Leveraging the inter-rater agreement analysis within the dataset, researchers can investigate the impact of variability on segmentation performance and explore methods to enhance the accuracy and robustness of liver and tumor segmentation algorithms in HCC patients. By making this dataset publicly available, LiverHccSeg aims to foster collaborations, facilitate innovative solutions, and ultimately improve patient outcomes in the diagnosis and treatment of HCC.
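
The inter-rater agreement metric used above, the Dice similarity coefficient, is worth stating concretely; the following minimal sketch compares two synthetic binary masks, with the shift between "raters" purely illustrative.

```python
# Minimal sketch of the Dice coefficient between two raters' binary
# segmentations, as used in the dataset's agreement analysis. Requires numpy.
import numpy as np

def dice(a, b):
    """Dice similarity between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

rater1 = np.zeros((64, 64, 32), bool); rater1[20:40, 20:40, 10:20] = True
rater2 = np.roll(rater1, 2, axis=0)       # second rater, slightly shifted
print(f"Dice: {dice(rater1, rater2):.3f}")
```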

Towards Population-Based Histologic Stain Normalization of Glioblastoma

  • Grenko, Caleb M.
  • Viaene, Angela N.
  • Nasrallah, MacLean P.
  • Feldman, Michael D.
  • Akbari, Hamed
  • Bakas, Spyridon
Brainlesion 2020 Journal Article, cited 0 times
Website
Glioblastoma (‘GBM’) is the most aggressive type of primary malignant adult brain tumor, with very heterogeneous radiographic, histologic, and molecular profiles. A growing body of advanced computational analyses are conducted towards further understanding the biology and variation in glioblastoma. To address the intrinsic heterogeneity among different computational studies, reference standards have been established to facilitate both radiographic and molecular analyses, e.g., anatomical atlas for image registration and housekeeping genes, respectively. However, there is an apparent lack of reference standards in the domain of digital pathology, where each independent study uses an arbitrarily chosen slide from their evaluation dataset for normalization purposes. In this study, we introduce a novel stain normalization approach based on a composite reference slide comprised of information from a large population of anatomically annotated hematoxylin and eosin (‘H&E’) whole-slide images from the Ivy Glioblastoma Atlas Project (‘IvyGAP’). Two board-certified neuropathologists manually reviewed and selected annotations in 509 slides, according to the World Health Organization definitions. We computed summary statistics from each of these approved annotations and weighted them based on their percent contribution to overall slide (‘PCOS’), to form a global histogram and stain vectors. Quantitative evaluation of pre- and post-normalization stain density statistics for each annotated region with PCOS>0.05% yielded a significant (largest p=0.001, two-sided Wilcoxon rank sum test) reduction of its intensity variation for both ‘H’ & ‘E’. Subject to further large-scale evaluation, our findings support the proposed approach as a potentially robust population-based reference for stain normalization.
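
The population-weighted reference construction above can be sketched in a few lines: per-annotation stain-density histograms are combined into one global histogram, each weighted by its percent contribution to overall slide (PCOS). The binning and synthetic data below are assumptions; only the region count (509) and the PCOS > 0.05% cut-off come from the abstract.

```python
# Hedged sketch of building a PCOS-weighted global reference histogram
# from per-annotation stain-density histograms. Requires numpy.
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_bins = 509, 100
hists = rng.random((n_regions, n_bins))
hists /= hists.sum(axis=1, keepdims=True)        # per-region density histograms
pcos = rng.random(n_regions); pcos /= pcos.sum() # percent contribution weights

keep = pcos > 0.0005                             # PCOS > 0.05% as in the study
global_hist = np.average(hists[keep], axis=0, weights=pcos[keep])
print(global_hist.sum())                         # ~1.0: a valid reference density
```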

Interoperable encoding and 3D printing of anatomical structures resulting from manual or automated delineation

  • Gregoir, Thibault
2023 Thesis, cited 0 times
Website
The understanding and visualization of the human body have been instrumental in the progress of medical science. Over time, the shift from cumbersome and invasive methods to modern scanners highlights the significance of expertise in retrieving, utilizing, and comprehending the resulting data. 3D rendering and printing of organic structures offer promising applications such as surgical planning and medical education. However, challenges arise as technological advancements generate increasingly vast amounts of data, necessitating seamless manipulation and transfer within the medical field. The goal of this master thesis is to explore interoperability in encoding 3D models and the ability to print models resulting from 3D reconstruction of medical input data. This exploration covers models of structures that were originally segmented either by manual delineation or in an automated way. Different parts of this topic have already been explored in specific ways, such as surface reconstruction or automatic segmentation. The idea here is to combine the different aspects of this thesis in a single tool available and usable by everyone.

Guest Editorial Deep Learning in Medical Imaging: Overview and Future Promise of an Exciting New Technique

  • Greenspan, Hayit
  • van Ginneken, Bram
  • Summers, Ronald M
IEEE Transactions on Medical Imaging 2016 Journal Article, cited 395 times
Website

Relationship between visceral adipose tissue and genetic mutations (VHL and KDM5C) in clear cell renal cell carcinoma

  • Greco, Federico
  • Mallio, Carlo Augusto
La radiologia medica 2021 Journal Article, cited 0 times
Website

Spatial Decomposition For Robust Domain Adaptation In Prostate Cancer Detection

  • Grebenisan, Andrew
  • Sedghi, Alireza
  • Izard, Jason
  • Siemens, Robert
  • Menard, Alexandre
  • Mousavi, Parvin
2021 Conference Paper, cited 0 times
Website
The utility of high-quality imaging of Prostate Cancer (PCa) using 3.0 Tesla MRI (versus 1.5 Tesla) is well established, yet a vast majority of MRI units across many countries are 1.5 Tesla. Recently, Deep Learning has been applied successfully to augment radiological interpretation of medical images. However, training such models requires very large amount of data, and often the models do not generalize well to data with different acquisition parameters. To address this, we introduce domain standardization, a novel method that enables image synthesis between domains by separating anatomy- and modality-related factors of images. Our results show an improved PCa classification with an AUC of 0.75 compared to traditional transfer learning methods. We envision domain standardization to be applied as a promising tool towards enhancing the interpretation of lower resolution MRI images, reducing the barriers of the potential uptake of deep models for jurisdictions with smaller populations.

Making head and neck cancer clinical data Findable-Accessible-Interoperable-Reusable to support multi-institutional collaboration and federated learning

  • Gouthamchand, Varsha
  • Choudhury, Ananya
  • Hoebers, Frank J. P.
  • Wesseling, Frederik W. R.
  • Welch, Mattea
  • Kim, Sejin
  • Kazmierska, Joanna
  • Dekker, Andre
  • Haibe-Kains, Benjamin
  • van Soest, Johan
  • Wee, Leonard
2024 Journal Article, cited 0 times
Objectives Federated learning (FL) is a group of methodologies where statistical modelling can be performed without exchanging identifiable patient data between cooperating institutions. To realize its potential for AI development on clinical data, a number of bottlenecks need to be addressed. One of these is making data Findable-Accessible-Interoperable-Reusable (FAIR). The primary aim of this work is to show that tools making data FAIR allow consortia to collaborate on privacy-aware data exploration, data visualization, and training of models on each other’s original data. Methods We propose a “Schema-on-Read” FAIR-ification method that adapts for different (re)analyses without needing to change the underlying original data. The procedure involves (1) decoupling the contents of the data from its schema and database structure, (2) annotation with semantic ontologies as a metadata layer, and (3) readout using semantic queries. Open-source tools are given as Docker containers to help local investigators prepare their data on-premises. Results We created a federated privacy-preserving visualization dashboard for case mix exploration of 5 distributed datasets with no common schema at the point of origin. We demonstrated robust and flexible prognostication model development and validation, linking together different data sources—clinical risk factors and radiomics. Conclusions Our procedure leads to successful (re)use of data in FL-based consortia without the need to impose a common schema at every point of origin of data. Advances in knowledge This work supports the adoption of FL within the healthcare AI community by sharing means to make data more FAIR.

Privacy-Preserving Dashboard for F.A.I.R Head and Neck Cancer data supporting multi-centered collaborations

  • Gouthamchand, Varsha
  • Choudhury, Ananya
  • Hoebers, Frank
  • Wesseling, Frederik
  • Welch, Mattea
  • Kim, Sejin
  • Haibe-Kains, Benjamin
  • Kazmierska, Joanna
  • Dekker, Andre
  • Van Soest, Johan
  • Wee, Leonard
2023 Conference Paper, cited 0 times
Website
Research in modern healthcare requires vast volumes of data from various healthcare centers across the globe. It is not always feasible to centralize clinical data without compromising privacy. A tool addressing these issues and facilitating reuse of clinical data is the need of the hour. The Federated Learning approach, governed in a set of agreements such as the Personal Health Train (PHT) manages to tackle these concerns by distributing models to the data centers instead of the traditional approach of centralizing datasets. One of the prerequisites of PHT is using semantically interoperable datasets for the models to be able to find them. FAIR (Findable, Accessible, Interoperable, Reusable) principles help in building interoperable and reusable data by adding knowledge representation and providing descriptive metadata. However, the process of making data FAIR is not always easy and straight-forward. Our main objective is to disentangle this process by using domain and technical expertise and get data prepared for federated learning. This paper introduces applications that are easily deployable as Docker containers, which will automate parts of the aforementioned process and significantly simplify the task of creating FAIR clinical data. Our method bypasses the need for clinical researchers to have a high degree of technical skills. We demonstrate the FAIR-ification process by applying it to five Head and Neck cancer datasets (four public and one private). The PHT paradigm is explored by building a distributed visualization dashboard from the aggregated summaries of the FAIR-ified datasets. Using the PHT infrastructure for exchanging only statistical summaries or model coefficients allows researchers to explore data from multiple centers without breaching privacy.

Optimal Statistical incorporation of independent feature Stability information into Radiomics Studies

  • Götz, Michael
  • Maier-Hein, Klaus H
Scientific Reports 2020 Journal Article, cited 0 times
Website
Conducting side experiments, termed robustness experiments, to identify features that are stable with respect to rescans, annotation, or other confounding effects is an important element of radiomics research. However, how to incorporate the findings of these experiments into the model building process still needs to be explored. Three different methods for incorporating prior knowledge into a radiomics modelling process were evaluated: the naive approach (ignoring feature quality), the most common approach consisting of removing unstable features, and a novel approach using data augmentation for information transfer (DAFIT). Multiple experiments were conducted using both synthetic and publicly available real lung imaging patient data. Ignoring the additional information from side experiments resulted in significantly overestimated model performances, meaning the estimated mean area under the curve achieved with a model was increased. Removing unstable features improved the performance estimation, while slightly decreasing the model performance, i.e. decreasing the area under the curve achieved with the model. The proposed approach was superior both in terms of the estimation of the model performance and the actual model performance. Our experiments show that data augmentation can prevent biases in performance estimation and has several advantages over the plain omission of unstable features. The actual gain that can be obtained depends on the quality and applicability of the prior information on the features in the given domain. This will be an important topic of future research.
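
The DAFIT idea described above can be sketched as follows: instead of dropping unstable features, training samples are replicated with per-feature noise whose magnitude reflects each feature's measured instability. The Gaussian noise model and per-feature scales below are assumptions for illustration, not the paper's exact procedure.

```python
# Hedged sketch of augmenting a radiomic feature matrix with
# instability-scaled noise, in the spirit of DAFIT. Requires numpy.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))                    # radiomic feature matrix
instability = rng.uniform(0.0, 0.5, size=20)      # e.g., test-retest spread

def dafit_augment(X, instability, copies=5):
    """Replicate samples, perturbing each feature by its instability scale."""
    reps = [X + rng.normal(scale=instability, size=X.shape) for _ in range(copies)]
    return np.vstack([X] + reps)

X_aug = dafit_augment(X, instability)
print(X_aug.shape)   # (600, 20): original samples plus five noisy copies
```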

Pulmonary Lung Cancer Classification Using Deep Neural Networks

  • Goswami, Jagriti
  • Singh, Koushlendra Kumar
2023 Book Section, cited 0 times
Lung cancer is the leading cause of cancer-related deaths globally. Computer-assisted detection (CAD) systems have previously been used for various disease diagnosis and hence can serve as an efficient tool for lung cancer diagnosis. In this paper, we study the problem of lung cancer classification using chest computed tomography (CT) scans and positron emission tomography–computed tomography (PET-CT). A subset of publicly available Large-Scale CT and PET/CT Dataset for Lung Cancer Diagnosis (Lung-PET-CT-Dx) is used to train four different deep learning models using transfer learning for classifying three different types of lung cancer: Adenocarcinoma, Small Cell Carcinoma and Squamous Cell Carcinoma, by passing raw nodule patches to the network. The models are evaluated on metrics such as accuracy, precision, recall, F1-score and Cohen’s Kappa score. ROC curves and confusion matrices are also presented to provide a graphical representation of the models’ performance.

Radiogenomic analysis: 1p/19q codeletion based subtyping of low-grade glioma by analysing advanced biomedical texture descriptors

  • Gore, Sonal
  • Jagtap, Jayant
Journal of King Saud University - Computer and Information Sciences 2021 Journal Article, cited 1 times
Website
Presurgical discrimination of 1p/19q codeletion status may have prognostic and diagnostic value for glioma patients, enabling immediate personalized treatment. Artificial intelligence-based models have proved effective for computer-aided diagnosis of glioma cancer. The objective of this study is to present an advanced biomedical texture descriptor to perform machine learning-assisted identification of the 1p/19q codeletion status of low-grade glioma (LGG) cancer. The aim is to verify the efficacy of textures extracted using the local binary pattern method and derived from the gray level co-occurrence matrix (GLCM). The proposed study used a random forest-assisted radiomics model to analyse MRI images of 159 subjects. Four different advanced biomedical texture descriptors are proposed by experimenting with different extensions of the LBP method. These variants (I to IV), with 8-bit, 16-bit or 24-bit LBP codes, are applied at different orientations in 5 × 5 and 7 × 7 square neighbourhoods and recorded in LBP histograms. These histogram features are concatenated with GLCM-based textures including energy, correlation, contrast and homogeneity. The texture descriptors performed best with a classification accuracy of 87.50% (AUC: 0.917, sensitivity: 95%, specificity: 75%, f1-score: 90.48%) using the 8-bit LBP variant I. The 10-fold cross-validated accuracy of all four sets ranges from 65.62% to 87.50% using the random forest classifier, and the mean AUC ranges from 0.646 to 0.917.
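
The descriptor construction above, an LBP histogram concatenated with four GLCM statistics, can be sketched with scikit-image. The parameter choices below (8 neighbours, radius 1, one distance/angle) are illustrative, not the paper's exact variants.

```python
# Hedged sketch of an LBP-histogram + GLCM-statistics texture descriptor.
# Requires numpy and scikit-image >= 0.19 (graycomatrix spelling).
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

image = (np.random.rand(128, 128) * 255).astype(np.uint8)  # stand-in MRI slice

P, R = 8, 1                                   # 8-bit LBP code, radius 1
lbp = local_binary_pattern(image, P, R, method="uniform")
hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)

glcm = graycomatrix(image, distances=[1], angles=[0], levels=256, normed=True)
glcm_feats = [graycoprops(glcm, prop)[0, 0]
              for prop in ("energy", "correlation", "contrast", "homogeneity")]

descriptor = np.concatenate([hist, glcm_feats])
print(descriptor.shape)   # (14,) = 10 uniform-LBP bins + 4 GLCM statistics
```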

MRI based genomic analysis of glioma using three pathway deep convolutional neural network for IDH classification

  • Gore, Sonal
  • Jagtap, Jayant
Turkish Journal of Electrical Engineering & Computer Sciences 2021 Journal Article, cited 0 times
Website

IDH-Based Radiogenomic Characterization of Glioma Using Local Ternary Pattern Descriptor Integrated with Radiographic Features and Random Forest Classifier

  • Gore, Sonal
  • Jagtap, Jayant
International Journal of Image and Graphics 2021 Journal Article, cited 0 times
Website
Mutations in family of Isocitrate Dehydrogenase (IDH) gene occur early in oncogenesis, especially with glioma brain tumor. Molecular diagnostic of glioma using machine learning has grabbed attention to some extent from last couple of years. The development of molecular-level predictive approach carries great potential in radiogenomic field. But more focused efforts need to be put to develop such approaches. This study aims to develop an integrative genomic diagnostic method to assess the significant utility of textures combined with other radiographic and clinical features for IDH classification of glioma into IDH mutant and IDH wild type. Random forest classifier is used for classification of combined set of clinical features and radiographic features extracted from axial T2-weighted Magnetic Resonance Imaging (MRI) images of low- and high-grade glioma. Such radiogenomic analysis is performed on The Cancer Genome Atlas (TCGA) data of 74 patients of IDH mutant and 104 patients of IDH wild type. Texture features are extracted using uniform, rotation invariant Local Ternary Pattern (LTP) method. Other features such as shape, first-order statistics, image contrast-based, clinical data like age, histologic grade are combined with LTP features for IDH discrimination. Proposed random forest-assisted model achieved an accuracy of 85.89% with multivariate analysis of integrated set of feature descriptors using Glioblastoma and Low-Grade Glioma dataset available with The Cancer Imaging Archive (TCIA). Such an integrated feature analysis using LTP textures and other descriptors can effectively predict molecular class of glioma as IDH mutant and wild type.

Local Binary Pattern-Based Texture Analysis to Predict IDH Genotypes of Glioma Cancer Using Supervised Machine Learning Classifiers

  • Gore, Sonal
  • Jagtap, Jayant
2020 Conference Proceedings, cited 0 times
Website
Nowadays, machine learning-based quantified assessment of glioma has recently gained more attention by researchers in the field of medical image analysis. Such analysis makes use of either hand-crafted radiographic features with radiomic-based methods or auto-extracted features using deep learning-based methods. Radiomic-based methods cover a wide spectrum of radiographic features including texture, shape, volume, intensity, histogram, etc. The objective of the paper is to demonstrate the discriminative role of textures for molecular categorization of glioma using supervised machine learning techniques. This work aims to make state-of-the-art machine learning solutions available for magnetic resonance imaging (MRI)-based genomic analysis of glioma as a simple and sufficient technique based on single feature type, i.e., textures. The potential of this work demonstrates importance of texture features using simple, computationally efficient local binary pattern (LBP) method for isocitrate dehydrogenase (IDH)-based discrimination of glioma as IDH mutant and IDH wild type. Further, such texture-based discriminative analysis alone can definitely facilitate an immediate recommendation for further diagnostic decisions and personalized treatment plans for glioma patients.

3D Brain Tumor Segmentation and Survival Prediction Using Ensembles of Convolutional Neural Networks

  • González, S. Rosas
  • Zemmoura, I.
  • Tauber, C.
2021 Book Section, cited 0 times
Convolutional Neural Networks (CNNs) are the state of the art in many medical image applications, including brain tumor segmentation. However, no successful studies using CNNs have been reported for survival prediction in glioma patients. In this work, we present two different solutions: one for tumor segmentation and the other for survival prediction. We propose using an ensemble of asymmetric U-Net-like architectures to improve segmentation results in the enhancing tumor region and the use of a DenseNet model for survival prognosis. We quantitatively compare deep learning with classical regression and classification models based on radiomics features and tumor growth model features for survival prediction on the BraTS 2020 database, and we provide insight into the limitations of these models in accurately predicting survival. Our method's current performance on the BraTS 2020 test set is dice scores of 0.80, 0.87, and 0.80 for enhancing tumor, whole tumor, and tumor core, respectively, with an overall dice of 0.82. For the survival prediction task, we obtained an accuracy of 0.57. In addition, we propose a voxel-wise uncertainty estimation of our segmentation method that can be used effectively to improve brain tumor segmentation.

Automatic detection of pulmonary nodules in CT images by incorporating 3D tensor filtering with local image feature analysis

  • Gong, J.
  • Liu, J. Y.
  • Wang, L. J.
  • Sun, X. W.
  • Zheng, B.
  • Nie, S. D.
Physica Medica 2018 Journal Article, cited 4 times
Website

CT-Based COVID-19 triage: Deep multitask learning improves joint identification and severity quantification

  • Goncharov, M.
  • Pisov, M.
  • Shevtsov, A.
  • Shirokikh, B.
  • Kurmukov, A.
  • Blokhin, I.
  • Chernina, V.
  • Solovev, A.
  • Gombolevskiy, V.
  • Morozov, S.
  • Belyaev, M.
Med Image Anal 2021 Journal Article, cited 83 times
Website
The current COVID-19 pandemic overloads healthcare systems, including radiology departments. Though several deep learning approaches were developed to assist in CT analysis, nobody considered study triage directly as a computer science problem. We describe two basic setups: Identification of COVID-19 to prioritize studies of potentially infected patients to isolate them as early as possible; Severity quantification to highlight patients with severe COVID-19, thus direct them to a hospital or provide emergency medical care. We formalize these tasks as binary classification and estimation of affected lung percentage. Though similar problems were well-studied separately, we show that existing methods could provide reasonable quality only for one of these setups. We employ a multitask approach to consolidate both triage approaches and propose a convolutional neural network to leverage all available labels within a single model. In contrast with the related multitask approaches, we show the benefit from applying the classification layers to the most spatially detailed feature map at the upper part of U-Net instead of the less detailed latent representation at the bottom. We train our model on approximately 1500 publicly available CT studies and test it on the holdout dataset that consists of 123 chest CT studies of patients drawn from the same healthcare system, specifically 32 COVID-19 and 30 bacterial pneumonia cases, 30 cases with cancerous nodules, and 31 healthy controls. The proposed multitask model outperforms the other approaches and achieves ROC AUC scores of 0.87+/-0.01 vs. bacterial pneumonia, 0.93+/-0.01 vs. cancerous nodules, and 0.97+/-0.01 vs. healthy controls in Identification of COVID-19, and achieves 0.97+/-0.01 Spearman Correlation in Severity quantification. We have released our code and shared the annotated lesions masks for 32 CT images of patients with COVID-19 from the test dataset.

Pulmonary nodule segmentation in computed tomography with deep learning

  • Gomes, João Henriques Oliveira
2017 Thesis, cited 0 times
Website
Early detection of lung cancer is essential for treating the disease. Lung nodule segmentation systems can be used together with Computer-Aided Detection (CAD) systems, and help doctors diagnose and manage lung cancer. In this work, we create a lung nodule segmentation system based on deep learning. Deep learning is a sub-field of machine learning responsible for state-of-the-art results in several segmentation datasets such as the PASCAL VOC 2012. Our model is a modified 3D U-Net, trained on the LIDC-IDRI dataset, using the intersection over union (IOU) loss function. We show our model works for multiple types of lung nodules. Our model achieves state-of-the-art performance on the LIDC test set, using nodules annotated by at least 3 radiologists and with a consensus truth of 50%.
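
The IOU loss named above has a standard differentiable ("soft") form; a minimal sketch follows, written for a generic NumPy setting rather than the thesis' exact 3D U-Net training code, with the smoothing constant an assumption.

```python
# Hedged sketch of a soft intersection-over-union (IoU) loss for
# probabilistic segmentation outputs. Requires numpy.
import numpy as np

def soft_iou_loss(pred, target, eps=1e-6):
    """1 - IoU for probabilistic predictions in [0, 1]."""
    intersection = (pred * target).sum()
    union = pred.sum() + target.sum() - intersection
    return 1.0 - (intersection + eps) / (union + eps)

pred = np.random.rand(32, 32, 32)                 # network output (probabilities)
target = (np.random.rand(32, 32, 32) > 0.5).astype(float)
print(f"IoU loss: {soft_iou_loss(pred, target):.3f}")
```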

Automated Screening for Abdominal Aortic Aneurysm in CT Scans under Clinical Conditions Using Deep Learning

  • Golla, A. K.
  • Tonnes, C.
  • Russ, T.
  • Bauer, D. F.
  • Froelich, M. F.
  • Diehl, S. J.
  • Schoenberg, S. O.
  • Keese, M.
  • Schad, L. R.
  • Zollner, F. G.
  • Rink, J. S.
Diagnostics (Basel) 2021 Journal Article, cited 0 times
Website
Abdominal aortic aneurysms (AAA) may remain clinically silent until they enlarge and patients present with a potentially lethal rupture. This necessitates early detection and elective treatment. The goal of this study was to develop an easy-to-train algorithm which is capable of automated AAA screening in CT scans and can be applied to an intra-hospital environment. Three deep convolutional neural networks (ResNet, VGG-16 and AlexNet) were adapted for 3D classification and applied to a dataset consisting of 187 heterogenous CT scans. The 3D ResNet outperformed both other networks. Across the five folds of the first training dataset it achieved an accuracy of 0.856 and an area under the curve (AUC) of 0.926. Subsequently, the algorithm's performance was verified on a second data set containing 106 scans, where it ran fully automated and resulted in an accuracy of 0.953 and an AUC of 0.971. A layer-wise relevance propagation (LRP) made the decision process interpretable and showed that the network correctly focused on the aortic lumen. In conclusion, the deep learning-based screening proved to be robust and showed high performance even on a heterogeneous multi-center data set. Integration into hospital workflow and its effect on aneurysm management would be an exciting topic of future research.

Lung nodule detection in CT images using deep convolutional neural networks

  • Golan, Rotem
  • Jacob, Christian
  • Denzinger, Jörg
2016 Conference Proceedings, cited 26 times
Website
Early detection of lung nodules in thoracic Computed Tomography (CT) scans is of great importance for the successful diagnosis and treatment of lung cancer. Due to improvements in screening technologies, and an increased demand for their use, radiologists are required to analyze an ever increasing amount of image data, which can affect the quality of their diagnoses. Computer-Aided Detection (CADe) systems are designed to assist radiologists in this endeavor. Here, we present a CADe system for the detection of lung nodules in thoracic CT images. Our system is based on (1) the publicly available Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) database, which contains 1018 thoracic CT scans with nodules of different shape and size, and (2) a deep Convolutional Neural Network (CNN), which is trained, using the back-propagation algorithm, to extract valuable volumetric features from the input data and detect lung nodules in sub-volumes of CT images. Considering only those test nodules that have been annotated by four radiologists, our CADe system achieves a sensitivity (true positive rate) of 78.9% with 20 false positives (FPs) per scan, or a sensitivity of 71.2% with 10 FPs per scan. This is achieved without using any segmentation or additional FP reduction procedures, both of which are commonly used in other CADe systems. Furthermore, our CADe system is validated on a larger number of lung nodules compared to other studies, which increases the variation in their appearance, and therefore, makes their detection by a CADe system more challenging.

DeepCADe: A Deep Learning Architecture for the Detection of Lung Nodules in CT Scans

  • Golan, Rotem
2018 Thesis, cited 0 times
Website
Early detection of lung nodules in thoracic Computed Tomography (CT) scans is of great importance for the successful diagnosis and treatment of lung cancer. Due to improvements in screening technologies, and an increased demand for their use, radiologists are required to analyze an ever increasing amount of image data, which can affect the quality of their diagnoses. Computer-Aided Detection (CADe) systems are designed to assist radiologists in this endeavor. In this thesis, we present DeepCADe, a novel CADe system for the detection of lung nodules in thoracic CT scans which produces improved results compared to the state-of-the-art in this field of research. CT scans are grayscale images, so the terms scans and images are used interchangeably in this work. DeepCADe was trained with the publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) database, which contains 1018 thoracic CT scans with nodules of different shape and size, and is built on a Deep Convolutional Neural Network (DCNN), which is trained using the backpropagation algorithm to extract volumetric features from the input data and detect lung nodules in sub-volumes of CT images. Considering only lung nodules that have been annotated by at least three radiologists, DeepCADe achieves a 2.1% improvement in sensitivity (true positive rate) over the best result in the current published scientific literature, assuming an equal number of false positives (FPs) per scan. More specifically, it achieves a sensitivity of 89.6% with 4 FPs per scan, or a sensitivity of 92.8% with 10 FPs per scan. Furthermore, DeepCADe is validated on a larger number of lung nodules compared to other studies (Table 5.2). This increases the variation in the appearance of nodules and therefore makes their detection by a CADe system more challenging. We study the application of Deep Convolutional Neural Networks (DCNNs) for the detection of lung nodules in thoracic CT scans. We explore some of the meta parameters that affect the performance of such models, which include: 1. the network architecture, i.e. its structure in terms of convolution layers, fully-connected layers, pooling layers, and activation functions, 2. the receptive field of the network, which defines the dimensions of its input, i.e. how much of the CT scan is processed by the network in a single forward pass, 3. a threshold value, which affects the sliding window algorithm with which the network is used to detect nodules in complete CT scans, and 4. the agreement level, which is used to interpret the independent nodule annotations of four experienced radiologists. Finally, we visualize the shape and location of annotated lung nodules and compare them to the output of DeepCADe. This demonstrates the compactness and flexibility in shape of the nodule predictions made by our proposed CADe system. In addition to the 5-fold cross validation results presented in this thesis, these visual results support the applicability of our proposed CADe system in real-world medical practice.

On the classification of simple and complex biological images using Krawtchouk moments and Generalized pseudo-Zernike moments: a case study with fly wing images and breast cancer mammograms

  • Goh, J. Y.
  • Khang, T. F.
PeerJ Comput Sci 2021 Journal Article, cited 0 times
Website
In image analysis, orthogonal moments are useful mathematical transformations for creating new features from digital images. Moreover, orthogonal moment invariants produce image features that are resistant to translation, rotation, and scaling operations. Here, we show the result of a case study in biological image analysis to help researchers judge the potential efficacy of image features derived from orthogonal moments in a machine learning context. In taxonomic classification of forensically important flies from the Sarcophagidae and the Calliphoridae family (n = 74), we found the GUIDE random forests model was able to completely classify samples from 15 different species correctly based on Krawtchouk moment invariant features generated from fly wing images, with zero out-of-bag error probability. For the more challenging problem of classifying breast masses based solely on digital mammograms from the CBIS-DDSM database (n = 1,151), we found that image features generated from the Generalized pseudo-Zernike moments and the Krawtchouk moments only enabled the GUIDE kernel model to achieve modest classification performance. However, using the predicted probability of malignancy from GUIDE as a feature together with five expert features resulted in a reasonably good model that has mean sensitivity of 85%, mean specificity of 61%, and mean accuracy of 70%. We conclude that orthogonal moments have high potential as informative image features in taxonomic classification problems where the patterns of biological variations are not overly complex. For more complicated and heterogeneous patterns of biological variations such as those present in medical images, relying on orthogonal moments alone to reach strong classification performance is unrealistic, but integrating prediction result using them with carefully selected expert features may still produce reasonably good prediction models.
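To give a flavour of moment-based feature extraction, the sketch below uses ordinary Zernike moments from the mahotas library as a stand-in; the paper's Krawtchouk and Generalized pseudo-Zernike moment invariants have no off-the-shelf implementation assumed here.

```python
import mahotas
import numpy as np

def moment_features(mask: np.ndarray, radius: int = 64) -> np.ndarray:
    """Orthogonal-moment features of a 2D shape image (e.g. a fly wing).

    Ordinary Zernike moments stand in for the Krawtchouk and Generalized
    pseudo-Zernike moments used in the study; radius and degree are
    illustrative choices.
    """
    return mahotas.features.zernike_moments(mask, radius, degree=8)
```

The resulting feature vector can then be fed to any classifier, as the study does with GUIDE random forest and kernel models.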

Planning, guidance, and quality assurance of pelvic screw placement using deformable image registration

  • Goerres, J.
  • Uneri, A.
  • Jacobson, M.
  • Ramsay, B.
  • De Silva, T.
  • Ketcha, M.
  • Han, R.
  • Manbachi, A.
  • Vogt, S.
  • Kleinszig, G.
  • Wolinsky, J. P.
  • Osgood, G.
  • Siewerdsen, J. H.
Phys Med Biol 2017 Journal Article, cited 4 times
Website
Percutaneous pelvic screw placement is challenging due to narrow bone corridors surrounded by vulnerable structures and difficult visual interpretation of complex anatomical shapes in 2D x-ray projection images. To address these challenges, a system for planning, guidance, and quality assurance (QA) is presented, providing functionality analogous to surgical navigation, but based on robust 3D-2D image registration techniques using fluoroscopy images already acquired in routine workflow. Two novel aspects of the system are investigated: automatic planning of pelvic screw trajectories and the ability to account for deformation of surgical devices (K-wire deflection). Atlas-based registration is used to calculate a patient-specific plan of screw trajectories in preoperative CT. 3D-2D registration aligns the patient to CT within the projective geometry of intraoperative fluoroscopy. Deformable known-component registration (dKC-Reg) localizes the surgical device, and the combination of plan and device location is used to provide guidance and QA. A leave-one-out analysis evaluated the accuracy of automatic planning, and a cadaver experiment compared the accuracy of dKC-Reg to rigid approaches (e.g. optical tracking). Surgical plans conformed within the bone cortex by 3-4 mm for the narrowest corridor (superior pubic ramus) and >5 mm for the widest corridor (tear drop). The dKC-Reg algorithm localized the K-wire tip within 1.1 mm and 1.4 degrees and was consistently more accurate than rigid-body tracking (errors up to 9 mm). The system was shown to automatically compute reliable screw trajectories and accurately localize deformed surgical devices (K-wires). Such capability could improve guidance and QA in orthopaedic surgery, where workflow is impeded by manual planning, conventional tool trackers add complexity and cost, rigid tool assumptions are often inaccurate, and qualitative interpretation of complex anatomy from 2D projections is prone to trial-and-error with extended fluoroscopy time.

Real-Time Computed Tomography-based Medical Diagnosis Using Deep Learning

  • Goel, Garvit
2022 Thesis, cited 0 times
Website
Computed tomography has been widely used in medical diagnosis to generate accurate images of the body's internal organs. However, cancer risk is associated with high X-ray dose CT scans, limiting its applicability in medical diagnosis and telemedicine applications. CT scans acquired at low X-ray dose generate low-quality images with noise and streaking artifacts. We therefore develop a deep learning-based CT image enhancement algorithm for improving the quality of low-dose CT images. Our algorithm uses a convolution neural network called DenseNet and Deconvolution network (DDnet) to remove noise and artifacts from the input image. To evaluate its advantages in medical diagnosis, we use DDnet to enhance chest CT scans of COVID-19 patients. We show that image enhancement can improve the accuracy of COVID-19 diagnosis (~5% improvement), using a framework consisting of AI-based tools. For training and inference of the image enhancement AI model, we use a heterogeneous computing platform to accelerate execution and decrease turnaround time. Specifically, we use multiple GPUs in a distributed setup to exploit batch-level parallelism during training. We achieve approximately 7x speedup with 8 GPUs running in parallel compared to training DDnet on a single GPU. For inference, we implement DDnet using OpenCL and evaluate its performance on multi-core CPU, many-core GPU, and FPGA. Our OpenCL implementation is at least 2x faster than an analogous PyTorch implementation on each platform and achieves comparable performance between CPU and FPGA, while the FPGA operates at a much lower frequency.
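The batch-level parallelism described can be sketched in PyTorch; `nn.DataParallel` below is the simplest form of the idea (the thesis describes a distributed multi-GPU setup), and the two-layer model is a placeholder for DDnet.

```python
import torch
import torch.nn as nn

# Placeholder denoising model; the actual DDnet combines DenseNet-style
# blocks with a deconvolution network.
model = nn.Sequential(
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1),
)

if torch.cuda.device_count() > 1:
    # Replicate the model on every visible GPU and split each batch
    # across the replicas: batch-level parallelism in its simplest form.
    model = nn.DataParallel(model)
model = model.to("cuda" if torch.cuda.is_available() else "cpu")
```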

Interpretable Machine Learning Model for Locoregional Relapse Prediction in Oropharyngeal Cancers

  • Giraud, Paul
  • Giraud, Philippe
  • Nicolas, Eliot
  • Boisselier, Pierre
  • Alfonsi, Marc
  • Rives, Michel
  • Bardet, Etienne
  • Calugaru, Valentin
  • Noel, Georges
  • Chajon, Enrique
Cancers 2021 Journal Article, cited 0 times
Website

An Uncertainty-aware Workflow for Keyhole Surgery Planning using Hierarchical Image Semantics

  • Gillmann, Christina
  • Maack, Robin G. C.
  • Post, Tobias
  • Wischgoll, Thomas
  • Hagen, Hans
Visual Informatics 2018 Journal Article, cited 1 times
Website
Keyhole surgeries become increasingly important in clinical daily routine as they help minimize damage to a patient's healthy tissue. The planning of keyhole surgeries is based on medical imaging and is an important factor that influences the surgery's success. Due to the image reconstruction process, medical image data contain uncertainty that exacerbates the planning of a keyhole surgery. In this paper we present a visual workflow that helps clinicians examine and compare different surgery paths as well as visualize the patient's affected tissue. The analysis is based on the concept of hierarchical image semantics, which segment the underlying image data with respect to the input images' uncertainty and the user's understanding of tissue composition. Users can define arbitrary surgery paths that they need to investigate further. The defined paths can be queried by a rating function to identify paths that fulfill user-defined properties. The workflow allows a visual inspection of the affected tissues and their substructures. Therefore, the workflow includes a linked view system indicating the three-dimensional location of selected surgery paths as well as how these paths affect the patient's tissue. To show the effectiveness of the presented approach, we applied it to the planning of a keyhole surgery for a brain tumor removal and a kneecap surgery.

Intuitive Error Space Exploration of Medical Image Data in Clinical Daily Routine

  • Gillmann, Christina
  • Arbeláez, Pablo
  • Peñaloza, José Tiberio Hernández
  • Hagen, Hans
  • Wischgoll, Thomas
2017 Conference Paper, cited 3 times
Website

Radiomics: Images are more than pictures, they are data

  • Gillies, Robert J
  • Kinahan, Paul E
  • Hricak, Hedvig
Radiology 2015 Journal Article, cited 694 times
Website
In the past decade, the field of medical image analysis has grown exponentially, with an increased number of pattern recognition tools and an increase in data set sizes. These advances have facilitated the development of processes for high-throughput extraction of quantitative features that result in the conversion of images into mineable data and the subsequent analysis of these data for decision support; this practice is termed radiomics. This is in contrast to the traditional practice of treating medical images as pictures intended solely for visual interpretation. Radiomic data contain first-, second-, and higher-order statistics. These data are combined with other patient data and are mined with sophisticated bioinformatics tools to develop models that may potentially improve diagnostic, prognostic, and predictive accuracy. Because radiomics analyses are intended to be conducted with standard of care images, it is conceivable that conversion of digital images to mineable data will eventually become routine practice. This report describes the process of radiomics, its challenges, and its potential power to facilitate better clinical decision making, particularly in the care of patients with cancer.

Machine Learning in Medical Imaging

  • Giger, M. L.
J Am Coll Radiol 2018 Journal Article, cited 157 times
Website
Advances in both imaging and computers have synergistically led to a rapid rise in the potential use of artificial intelligence in various radiological imaging tasks, such as risk assessment, detection, diagnosis, prognosis, and therapy response, as well as in multi-omics disease discovery. A brief overview of the field is given here, allowing the reader to recognize the terminology, the various subfields, and components of machine learning, as well as the clinical potential. Radiomics, an expansion of computer-aided diagnosis, has been defined as the conversion of images to minable data. The ultimate benefit of quantitative radiomics is to (1) yield predictive image-based phenotypes of disease for precision medicine or (2) yield quantitative image-based phenotypes for data mining with other -omics for discovery (ie, imaging genomics). For deep learning in radiology to succeed, note that well-annotated large data sets are needed since deep networks are complex, computer software and hardware are evolving constantly, and subtle differences in disease states are more difficult to perceive than differences in everyday objects. In the future, machine learning in radiology is expected to have a substantial clinical impact with imaging examinations being routinely obtained in clinical practice, providing an opportunity to improve decision support in medical image interpretation. The term of note is decision support, indicating that computers will augment human decision making, making it more effective and efficient. The clinical impact of having computers in the routine clinical practice may allow radiologists to further integrate their knowledge with their clinical colleagues in other medical specialties and allow for precision medicine.

Deep Learning Architecture to Improve Edge Accuracy of Auto-Contouring for Head and Neck Radiotherapy

  • Gifford, Ryan C
2022 Thesis, cited 0 times
Website
The manual delineation of the gross tumor volume (GTV) for Head and Neck Cancer (HNC) patients is an essential step in the radiotherapy treatment process. Methods to automate this process have the potential to decrease the amount of time it takes for a clinician to complete a plan, while also decreasing the inter-observer variability between clinicians. Deep learning (DL) methods have shown great promise in auto-segmentation problems. For HNC, we show that DL methods systematically fail at the axial edges of GTV where the segmentation is dependent on both information from the center of the tumor and nearby slices. These failures may decrease trust and usage of proposed Auto-Contouring Systems if not accounted for. In this paper we propose a modified version of the U-Net, a fully convolutional network for image segmentation, which can more accurately process dependence between slices to create a more robust GTV contour. We also show that it can outperform the current proposed methods that capture slice dependencies by leveraging 3D convolutions. Our method uses Convolutional Recurrent Neural Networks throughout the decoder section of the U-Net to capture both spatial and adjacent-slice information when considering a contour. To account for shifts in anatomical structures through adjacent CT slices, we allow an affine transformation to the adjacent feature space using Spatial Transformer Networks. Our proposed model increases accuracy at the edges by 12% inferiorly and 26% superiorly over a baseline 2D U-Net, which has no inherent way to capture information between adjacent slices.

Vessel extraction from breast MR

  • Gierlinger, Marco
  • Brandner, Dinah
  • Zagar, Bernhard G.
2021 Conference Proceedings, cited 0 times
We present an extension of previous work in which a multi-seed region growing (MSRG) algorithm that extracts segments from breast MRI was introduced. Our extended algorithm filters elongated segments from those derived by the MSRG algorithm to obtain vessel-like structures. This filter is a skeletonization-like algorithm that collects useful information about the segments' thickness, length, etc. A model is shown that scans through the solution space of the MSRG algorithm by adjusting its parameters and by providing shape information for the filter. We further elaborate on the usefulness of the algorithm in assisting medical experts in their diagnosis of diseases relevant to angiography.

Projected outcomes using different nodule sizes to define a positive CT lung cancer screening examination

  • Gierada, David S
  • Pinsky, Paul
  • Nath, Hrudaya
  • Chiles, Caroline
  • Duan, Fenghai
  • Aberle, Denise R
2014 Journal Article, cited 74 times
Website
Background: Computed tomography (CT) screening for lung cancer has been associated with a high frequency of false-positive results because of the high prevalence of indeterminate but usually benign small pulmonary nodules. The acceptability of reducing false-positive rates and diagnostic evaluations by increasing the nodule size threshold for a positive screen depends on the projected balance between benefits and risks. Methods: We examined data from the National Lung Screening Trial (NLST) to estimate screening CT performance and outcomes for scans with nodules above the 4 mm NLST threshold used to classify a CT screen as positive. Outcomes assessed included screening results, subsequent diagnostic tests performed, lung cancer histology and stage distribution, and lung cancer mortality. Sensitivity, specificity, positive predictive value, and negative predictive value were calculated for the different nodule size thresholds. All statistical tests were two-sided. Results: In 64% of positive screens (11 598/18 141), the largest nodule was 7 mm or less in greatest transverse diameter. By increasing the threshold, the percentages of lung cancer diagnoses that would have been missed or delayed and of false positives that would have been avoided progressively increased, for example from 1.0% and 15.8% at a 5 mm threshold to 10.5% and 65.8% at an 8 mm threshold, respectively. The projected reductions in postscreening follow-up CT scans and invasive procedures also increased as the threshold was raised. Differences across nodule sizes in lung cancer histology and stage distribution were small but statistically significant. There were no differences across nodule sizes in survival or mortality. Conclusion: Raising the nodule size threshold for a positive screen would substantially reduce false-positive CT screenings and medical resource utilization, with a variable impact on screening outcomes.
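The threshold analysis reduces to recomputing a 2x2 confusion table as the nodule-size cutoff moves; a small sketch with hypothetical variable names follows.

```python
def screening_metrics(nodule_sizes_mm, has_cancer, threshold_mm):
    """Confusion-table metrics when a screen is called positive iff the
    largest nodule is at least `threshold_mm`.

    Inputs are hypothetical per-participant lists: largest nodule size
    in mm, and eventual cancer status (True/False).
    """
    pairs = list(zip(nodule_sizes_mm, has_cancer))
    tp = sum(1 for s, c in pairs if s >= threshold_mm and c)
    fp = sum(1 for s, c in pairs if s >= threshold_mm and not c)
    fn = sum(1 for s, c in pairs if s < threshold_mm and c)
    tn = sum(1 for s, c in pairs if s < threshold_mm and not c)
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn)}
```

Sweeping `threshold_mm` from 4 to 8 reproduces the kind of trade-off curve the study reports.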

Quantitative CT assessment of emphysema and airways in relation to lung cancer risk

  • Gierada, David S
  • Guniganti, Preethi
  • Newman, Blake J
  • Dransfield, Mark T
  • Kvale, Paul A
  • Lynch, David A
  • Pilgram, Thomas K
Radiology 2011 Journal Article, cited 41 times
Website

Towards Image-Guided Pancreas and Biliary Endoscopy: Automatic Multi-organ Segmentation on Abdominal CT with Dense Dilated Networks

  • Gibson, Eli
  • Giganti, Francesco
  • Hu, Yipeng
  • Bonmati, Ester
  • Bandula, Steve
  • Gurusamy, Kurinchi
  • Davidson, Brian R
  • Pereira, Stephen P
  • Clarkson, Matthew J
  • Barratt, Dean C
2017 Conference Proceedings, cited 14 times
Website

Role of Imaging in the Era of Precision Medicine

  • Giardino, Angela
  • Gupta, Supriya
  • Olson, Emmi
  • Sepulveda, Karla
  • Lenchik, Leon
  • Ivanidze, Jana
  • Rakow-Penner, Rebecca
  • Patel, Midhir J
  • Subramaniam, Rathan M
  • Ganeshan, Dhakshinamoorthy
Academic Radiology 2017 Journal Article, cited 12 times
Website
Precision medicine is an emerging approach for treating medical disorders, which takes into account individual variability in genetic and environmental factors. Preventive or therapeutic interventions can then be directed to those who will benefit most from targeted interventions, thereby maximizing benefits and minimizing costs and complications. Precision medicine is gaining increasing recognition by clinicians, healthcare systems, pharmaceutical companies, patients, and the government. Imaging plays a critical role in precision medicine including screening, early diagnosis, guiding treatment, evaluating response to therapy, and assessing likelihood of disease recurrence. The Association of University Radiologists Radiology Research Alliance Precision Imaging Task Force convened to explore the current and future role of imaging in the era of precision medicine and summarized its finding in this article. We review the increasingly important role of imaging in various oncological and non-oncological disorders. We also highlight the challenges for radiology in the era of precision medicine.

When the machine does not know: measuring uncertainty in deep learning models of medical images

  • Ghoshal, Biraja Prasad
2022 Thesis, cited 0 times
Website
Recently, deep learning (DL), which involves powerful black-box predictors, has outperformed human experts in several medical diagnostic problems. However, these methods focus exclusively on improving the accuracy of point predictions without assessing their outputs' quality, and they ignore the asymmetric cost involved in different types of misclassification errors. Neural networks also do not deliver confidence in predictions and suffer from over- and under-confidence, i.e. they are not well calibrated. Knowing how much confidence there is in a prediction is essential for gaining clinicians' trust in the technology. Calibrated uncertainty quantification is a challenging problem as no ground truth is available. To address this, we make two observations: (i) cost-sensitive deep neural networks with DropWeights better quantify calibrated predictive uncertainty, and (ii) estimating uncertainty alongside point predictions in deep ensembles of Bayesian neural networks with DropWeights can lead to more informed decisions and improved prediction quality. This dissertation focuses on quantifying uncertainty using concepts from cost-sensitive neural networks, calibration of confidence, and the DropWeights ensemble method. First, we show how to improve predictive uncertainty with deep ensembles of neural networks with DropWeights, learning an approximate distribution over their weights, in medical image segmentation and its application in active learning. Second, we use the Jackknife resampling technique to correct bias in quantified uncertainty in image classification and propose metrics to measure uncertainty performance. The third part of the thesis is motivated by the discrepancy between the model's predictive error and the objective of quantified uncertainty when costs for misclassification errors are asymmetric or datasets are unbalanced. We develop cost-sensitive modifications of neural networks for disease detection and propose metrics to measure the quality of quantified uncertainty. Finally, we leverage an adaptive binning strategy to measure uncertainty calibration error that directly corresponds to estimated uncertainty performance, addressing problematic evaluation methods. We evaluate the effectiveness of the tools on nuclei image segmentation, multi-class brain MRI image classification, multi-level cell type-specific protein expression prediction in ImmunoHistoChemistry (IHC) images, and cost-sensitive classification for COVID-19 detection from X-ray and CT image datasets. Our approach is thoroughly validated by measuring the quality of uncertainty. It produces equally good or better results and paves the way for future work that addresses practical problems at the intersection of deep learning and Bayesian decision theory. In conclusion, our study highlights the opportunities and challenges of applying estimated uncertainty in deep learning models of medical images, representing the confidence of the model's prediction; the uncertainty quality metrics show a significant improvement when using deep ensembles of Bayesian neural networks with DropWeights.
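As one concrete instance of the ensemble idea, here is a Monte Carlo dropout sketch in PyTorch; it is a stand-in for the DropWeights ensembles studied in the thesis, which perturb weights rather than activations.

```python
import torch
import torch.nn.functional as F

def mc_dropout_predict(model, x, n_samples=30):
    """Monte Carlo dropout: keep dropout stochastic at test time and
    average repeated forward passes.

    Returns the mean class probabilities and the predictive entropy,
    a simple per-sample uncertainty score. The thesis's DropWeights
    variant drops weights instead of activations; this sketch only
    conveys the ensemble-averaging idea.
    """
    model.train()  # keep dropout layers active during inference
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(x), dim=-1)
                             for _ in range(n_samples)])
    mean = probs.mean(0)
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(-1)
    return mean, entropy
```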

Tumor Segmentation in Brain MRI: U-Nets versus Feature Pyramid Network

  • Ghosh, Sourodip
  • Santosh, K.C.
2021 Conference Paper, cited 0 times
Website
Manifestations of brain tumors can trigger various psychiatric symptoms. Brain tumor detection can efficiently solve or reduce chances of occurrences of diseases, such as Alzheimer's disease, dementia-based disorders, multiple sclerosis and bipolar disorder. In this paper, we propose a segmentation-based approach to detect brain tumors in MRI. We provide a comparative study between two different U-Net architectures (U-Net: baseline and U-Net: ResNeXt50 backbone) and a Feature Pyramid Network (FPN) that are trained/validated on the TCGA-LGG dataset of 3,929 images. U-Net architecture with ResNeXt50 backbone achieves the best Dice coefficient of 0.932, while baseline U-Net and FPN separately achieve Dice coefficients of 0.846 and 0.899, respectively. The results obtained from U-Net with ResNeXt50 backbone outperform previous works.

A Deep Learning Framework Integrating the Spectral and Spatial Features for Image-Assisted Medical Diagnostics

  • Ghosh, S.
  • Das, S.
  • Mallipeddi, R.
IEEE Access 2021 Journal Article, cited 0 times
Website
The development of a computer-aided disease detection system to ease the long and arduous manual diagnostic process is an emerging research interest. Living through the recent outbreak of the COVID-19 virus, we propose a machine learning and computer vision algorithms-based automatic diagnostic solution for detecting the COVID-19 infection. Our proposed method applies to chest radiographs and uses readily available infrastructure. No studies in this direction have considered the spectral aspect of the medical images. This motivates us to investigate the role of spectral-domain information of medical images along with the spatial content towards improved disease detection ability. Successful integration of spatial and spectral features is demonstrated on the COVID-19 infection detection task. Our proposed method comprises three stages: feature extraction, dimensionality reduction via projection, and prediction. At first, images are transformed into spectral and spatio-spectral domains by using the Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT), two powerful image processing algorithms. Next, features from the spatial, spectral, and spatio-spectral domains are projected into a lower dimension through a Convolutional Neural Network (CNN), and those three types of projected features are then fed to a Multilayer Perceptron (MLP) for final prediction. The combination of the three types of features yielded superior performance over any of the features used individually. This indicates the presence of complementary information in the spectral domain of the chest radiograph to characterize the considered medical condition. Moreover, saliency maps corresponding to classes representing different medical conditions demonstrate the reliability of the proposed method. The study is further extended to identify different medical conditions using diverse medical image datasets and shows the efficiency of leveraging the combined features. Altogether, the proposed method exhibits potential as a generalized and robust medical image-assisted diagnostic solution.
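The first stage, transforming images into spectral and spatio-spectral domains, can be sketched with standard tools; the Haar wavelet and orthonormal DCT below are assumptions, not the paper's documented settings.

```python
import numpy as np
from scipy.fft import dctn
import pywt

def spectral_views(image: np.ndarray):
    """Return spectral (DCT) and spatio-spectral (DWT) views of a 2D image.

    The DCT view is a global frequency representation; the single-level
    DWT view keeps coarse spatial structure in each sub-band.
    """
    dct = dctn(image, norm="ortho")              # spectral domain
    cA, (cH, cV, cD) = pywt.dwt2(image, "haar")  # spatio-spectral domain
    return dct, np.stack([cA, cH, cV, cD])       # approx. + detail bands
```

Each view would then pass through its own CNN branch before the MLP fusion step described above.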

Brain tumor detection from MRI image: An approach

  • Ghosh, Debjyoti
  • Bandyopadhyay, Samir Kumar
International Journal of Applied Research 2017 Journal Article, cited 0 times
Website
A brain tumor is an abnormal growth of cells within the brain, which can be cancerous or noncancerous (benign). This paper detects different types of tumors and cancerous growth within the brain and other associated areas within the brain by using computerized methods on MRI images of a patient. It is also possible to track the growth patterns of such tumors.

Binary Classification of Mammograms Using Horizontal Visibility Graph

  • Ghosh, Anirban
  • Ranjan, Priya
  • Chilamkurthy, Naga Srinivasarao
  • Gulati, Richa
  • Janardhanan, Rajiv
  • Ramakant, Pooja
2023 Book Section, cited 0 times
Breast carcinoma, the most common cancer in women across the world, now accounts for almost 30% of new malignant tumor cases. Despite the high incidence rate, breast cancer mortality has been maintained under control thanks to recent advances in molecular biology technology and an enhanced level of complete diagnosis and standard therapy. The method strives to overcome the clinical dilemma of undetected and misdiagnosed breast cancer, which results in a poor clinical prognosis. Early computer-aided detection by mammography is an important aspect of the plan. In most of the diagnostic strategies currently in vogue, undue importance has been given to one of the performance metrics instead of a more balanced result. In our present study, we aim to resolve this dogma by first converting the mammograms into their equivalent graphical representation and then finding the network similarity between two such generated graphs. We also elaborate on the use of the horizontal visibility graph (HVG) representation to classify images and use the Hamming-Ipsen-Mikhailov (HIM) network similarity (distance) metric to triage mammograms according to the severity of the disease. Our HVG-HIM metric-based classification of mammograms had an accuracy of 88.37%, specificity of 92%, and sensitivity of 83.33%. We also clearly highlight the trade-off between performance and processing time.
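The horizontal visibility graph itself is easy to state: two samples of a 1-D series are connected if every sample strictly between them lies below both. A brute-force sketch follows; for a mammogram, such series could be taken row-wise, though how the paper serialises images is not reproduced here, and the HIM similarity step is not sketched.

```python
import numpy as np

def horizontal_visibility_edges(series):
    """Edge list of the horizontal visibility graph of a 1-D series.

    Nodes i and j (i < j) are connected iff every sample strictly
    between them is lower than min(series[i], series[j]).
    O(n^2) brute force; faster divide-and-conquer variants exist.
    """
    edges = []
    n = len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if all(series[k] < min(series[i], series[j])
                   for k in range(i + 1, j)):
                edges.append((i, j))
    return edges

print(horizontal_visibility_edges(np.array([3, 1, 2, 4])))
```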

Artificial Intelligence Using Open Source BI-RADS Data Exemplifying Potential Future Use

  • Ghosh, A.
J Am Coll Radiol 2019 Journal Article, cited 0 times
Website
OBJECTIVES: With much hype about artificial intelligence (AI) rendering radiologists redundant, a simple radiologist-augmented AI workflow is evaluated; the premise is that inclusion of a radiologist's opinion into an AI algorithm would make the algorithm achieve better accuracy than an algorithm trained on imaging parameters alone. Open-source BI-RADS data sets were evaluated to see whether inclusion of a radiologist's opinion (in the form of BI-RADS classification) in addition to image parameters improved the accuracy of prediction of histology using three machine learning algorithms vis-a-vis algorithms using image parameters alone. MATERIALS AND METHODS: BI-RADS data sets were obtained from the University of California, Irvine Machine Learning Repository (data set 1) and the Digital Database for Screening Mammography repository (data set 2); three machine learning algorithms were trained using 10-fold cross-validation. Two sets of models were trained: M1, using lesion shape, margin, density, and patient age for data set 1 and image texture parameters for data set 2, and M2, using the previous image parameters and the BI-RADS classification provided by radiologists. The area under the curve and the Gini coefficient for M1 and M2 were compared for the validation data set. RESULTS: The models using the radiologist-provided BI-RADS classification performed significantly better than the models not using them (P < .0001). CONCLUSION: AI and radiologist working together can achieve better results, helping in case-based decision making. Further evaluation of the metrics involved in predictor handling by AI algorithms will provide newer insights into imaging.

Deep Learning for Low-Dose CT Denoising Using Perceptual Loss and Edge Detection Layer

  • Gholizadeh-Ansari, M.
  • Alirezaie, J.
  • Babyn, P.
J Digit Imaging 2019 Journal Article, cited 1 times
Website
Low-dose CT denoising is a challenging task that has been studied by many researchers. Some studies have used deep neural networks to improve the quality of low-dose CT images and achieved fruitful results. In this paper, we propose a deep neural network that uses dilated convolutions with different dilation rates instead of standard convolutions, helping to capture more contextual information in fewer layers. Also, we have employed residual learning by creating shortcut connections to transmit image information from the early layers to later ones. To further improve the performance of the network, we have introduced a non-trainable edge detection layer that extracts edges in horizontal, vertical, and diagonal directions. Finally, we demonstrate that optimizing the network by a combination of mean-square error loss and perceptual loss preserves many structural details in the CT image. This objective function does not suffer from the over-smoothing and blurring effects caused by per-pixel loss, nor from the grid-like artifacts resulting from perceptual loss. The experiments show that each modification to the network improves the outcome while changing the complexity of the network only minimally.
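The non-trainable edge detection layer can be sketched as a fixed-kernel convolution; the Sobel-style kernels below are an assumption, as the paper's exact kernels are not reproduced here.

```python
import torch
import torch.nn as nn

class EdgeDetectionLayer(nn.Module):
    """Non-trainable convolution extracting edges in horizontal, vertical,
    and the two diagonal directions (Sobel-style kernels)."""
    def __init__(self):
        super().__init__()
        k = torch.tensor([
            [[-1., -2., -1.], [0., 0., 0.], [1., 2., 1.]],   # horizontal
            [[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],   # vertical
            [[-2., -1., 0.], [-1., 0., 1.], [0., 1., 2.]],   # diagonal /
            [[0., -1., -2.], [1., 0., -1.], [2., 1., 0.]],   # diagonal \
        ]).unsqueeze(1)                                      # (4, 1, 3, 3)
        self.conv = nn.Conv2d(1, 4, kernel_size=3, padding=1, bias=False)
        self.conv.weight = nn.Parameter(k, requires_grad=False)  # frozen

    def forward(self, x):          # x: (B, 1, H, W) -> (B, 4, H, W)
        return self.conv(x)
```

The four edge maps would be concatenated with the learned feature maps so that later layers see explicit structural cues.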

Medical Imaging Segmentation Assessment via Bayesian Approaches to Fusion, Accuracy and Variability Estimation with Application to Head and Neck Cancer

  • Ghattas, Andrew Emile
2017 Thesis, cited 0 times
Website
With the advancement of technology, medical imaging has become a fast-growing area of research. Some imaging questions require little physician analysis, such as diagnosing a broken bone using a 2-D X-ray image. More complicated questions, using 3-D scans such as computerized tomography (CT), can be much more difficult to answer; for example, estimating tumor growth to evaluate malignancy, which informs whether intervention is necessary. This requires careful delineation of different structures in the image (for example, what is tumor versus normal tissue), which is referred to as segmentation. Currently, the gold standard of segmentation is for a radiologist to manually trace structure edges in the 3-D image; however, this can be extremely time-consuming. Additionally, manual segmentation results can differ drastically between and even within radiologists. A more reproducible, less variable, and more time-efficient segmentation approach would drastically improve medical treatment. This potential, as well as the continued increase in computing power, has led to computationally intensive semiautomated segmentation algorithms. Segmentation algorithms' widespread use is limited due to difficulty in validating their performance. Fusion models, such as STAPLE, have been proposed as a way to combine multiple segmentations into a consensus ground truth; this allows for evaluation of both manual and semiautomated segmentation in relation to the consensus ground truth. Once a consensus ground truth is obtained, a multitude of approaches have been proposed for evaluating different aspects of segmentation performance: segmentation accuracy, and between- and within-reader variability. The focus of this dissertation is threefold. First, a simulation-based tool is introduced to allow for the validation of fusion models. The simulation properties closely follow a real dataset, in order to ensure that they mimic reality. Second, a statistical hierarchical Bayesian fusion model is proposed, in order to estimate a consensus ground truth within a robust statistical framework. The model is validated using the simulation tool and compared to other fusion models, including STAPLE. Additionally, the model is applied to real datasets and the consensus ground truth estimates are compared across different fusion models. Third, a statistical hierarchical Bayesian performance model is proposed in order to estimate segmentation-method-specific accuracy and between- and within-reader variability. An extensive simulation study is performed to validate the model's parameter estimation and coverage properties. Additionally, the model is fit to a real data source and performance estimates are summarized.
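As a concrete baseline for the fusion models discussed, majority voting combines rater masks into a consensus; STAPLE and the proposed Bayesian model replace this simple vote with performance-weighted estimates.

```python
import numpy as np

def majority_vote_fusion(segmentations: np.ndarray) -> np.ndarray:
    """Consensus segmentation by majority vote over R rater masks.

    segmentations: (R, ...) binary array, one mask per rater.
    STAPLE replaces this unweighted vote with an EM estimate that
    weights each rater by estimated sensitivity and specificity.
    """
    return segmentations.mean(axis=0) >= 0.5
```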

FDSR: A new fuzzy discriminative sparse representation method for medical image classification

  • Ghasemi, Majid
  • Kelarestaghi, Manoochehr
  • Eshghi, Farshad
  • Sharifi, Arash
2020 Journal Article, cited 10 times
Website
Recent developments in medical image analysis techniques make them essential tools in medical diagnosis. Medical imaging always involves different kinds of uncertainties. Managing these uncertainties has motivated extensive research on medical image classification methods, particularly over the past decade. Despite being a powerful classification tool, sparse representation suffers from a lack of sufficient discrimination and robustness, which are required to manage the uncertainty and noisiness in medical image classification problems. We try to overcome this deficiency by introducing a new fuzzy discriminative robust sparse representation classifier, which incorporates fuzzy terms in the optimization function of its dictionary learning process. In this work, we present a new medical image classification approach, fuzzy discriminative sparse representation (FDSR). The proposed fuzzy terms increase the inter-class representation difference and the intra-class representation similarity. Also, an adaptive fuzzy dictionary learning approach is used to learn the dictionary atoms. FDSR is applied to Magnetic Resonance Images (MRI) from three medical image databases. The comprehensive experimental results clearly show that our approach outperforms a series of rival techniques in terms of accuracy, sensitivity, specificity, and convergence speed.
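A minimal (non-fuzzy) sparse-representation classifier conveys the backbone that FDSR builds on: learn one dictionary per class and assign a sample to the class whose dictionary reconstructs it with the lowest residual. The fuzzy discriminative terms are omitted in this sketch, and the atom count is an arbitrary choice.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def class_dictionaries(X_by_class, n_atoms=64):
    """Learn one dictionary per class from (n_samples, n_features) data."""
    return {c: MiniBatchDictionaryLearning(n_components=n_atoms,
                                           transform_algorithm="omp").fit(X)
            for c, X in X_by_class.items()}

def classify(x, dicts):
    """Assign x to the class with the smallest reconstruction residual."""
    best, best_err = None, np.inf
    for c, dl in dicts.items():
        code = dl.transform(x[None])                 # sparse code for x
        err = np.linalg.norm(x - code @ dl.components_)
        if err < best_err:
            best, best_err = c, err
    return best
```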

T2-FDL: A robust sparse representation method using adaptive type-2 fuzzy dictionary learning for medical image classification

  • Ghasemi, Majid
  • Kelarestaghi, Manoochehr
  • Eshghi, Farshad
  • Sharifi, Arash
Expert Systems with Applications 2020 Journal Article, cited 0 times
Website
In this paper, a robust sparse representation for medical image classification is proposed based on the adaptive type-2 fuzzy learning (T2-FDL) system. In the proposed method, sparse coding and dictionary learning processes are executed iteratively until a near-optimal dictionary is obtained. The sparse coding step aims at finding a combination of dictionary atoms to represent the input data efficiently, and the dictionary learning step rigorously adjusts a minimum set of dictionary items. The two-step operation helps create an adaptive sparse representation algorithm by involving type-2 fuzzy sets in the design process of image classification. Since the existing image measurements are not made under the same conditions and with the same accuracy, the performance of medical diagnosis is always affected by noise and uncertainty. By introducing an adaptive type-2 fuzzy learning method, a better approximation in an environment with higher degrees of uncertainty and noise is achieved. The experiments are executed over two open-access brain tumor magnetic resonance image databases, REMBRANDT and TCGA-LGG, from The Cancer Imaging Archive (TCIA). The experimental results of a brain tumor classification task show that the proposed T2-FDL method can adequately minimize the negative effects of uncertainty in the input images. The results demonstrate that T2-FDL outperforms other important classification methods in the literature in terms of accuracy, specificity, and sensitivity.

Automated Brain Tumour Segmentation Using Cascaded 3D Densely-Connected U-Net

  • Ghaffari, Mina
  • Sowmya, Arcot
  • Oliver, Ruth
2021 Book Section, cited 0 times
Accurate brain tumour segmentation is a crucial step towards improving disease diagnosis and proper treatment planning. In this paper, we propose a deep-learning based method to segment a brain tumour into its subregions: whole tumour, tumour core and enhancing tumour. The proposed architecture is a 3D convolutional neural network based on a variant of the U-Net architecture of Ronneberger et al. [17] with three main modifications: (i) a heavy encoder, light decoder structure using residual blocks (ii) employment of dense blocks instead of skip connections, and (iii) utilization of self-ensembling in the decoder part of the network. The network was trained and tested using two different approaches: a multitask framework to segment all tumour subregions at the same time, and a three-stage cascaded framework to segment one subregion at a time. An ensemble of the results from both frameworks was also computed. To address the class imbalance issue, appropriate patch extraction was employed in a pre-processing step. Connected component analysis was utilized in the post-processing step to reduce the false positive predictions. Experimental results on the BraTS20 validation dataset demonstrates that the proposed model achieved average Dice Scores of 0.90, 0.83, and 0.78 for whole tumour, tumour core and enhancing tumour respectively.
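The dense connections mentioned above can be sketched as a small 3D dense block, where each layer consumes the concatenation of all preceding feature maps; the layer count, growth rate, and normalisation below are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DenseBlock3D(nn.Module):
    """Minimal 3D dense block: layer k sees the concatenation of the
    input and all earlier layers' outputs, so features are reused
    instead of being passed through plain skip connections."""
    def __init__(self, in_ch: int, growth: int = 16, n_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv3d(ch, growth, 3, padding=1),
                nn.InstanceNorm3d(growth),
                nn.ReLU(inplace=True)))
            ch += growth                      # channels grow per layer

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)        # (B, in_ch + n*growth, ...)
```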

Classification of COVID-19 and Nodule in CT Images using Deep Convolutional Neural Network

  • Ghaemi, Amirhossein
  • Mobarakeh, Seyyed Amir Mousavi
  • Danyali, Habibollah
  • Kazemi, Kamran
2022 Conference Paper, cited 0 times
Website
Distinguishing between coronavirus disease 2019 (COVID-19) and nodules, an early indicator of lung cancer, in Computed Tomography (CT) images has been a challenge that radiologists have faced since COVID-19 was declared a pandemic. The similarity between these two findings is the main reason for diagnostic dilemmas and may lead to misdiagnosis. As a result, manual classification is not as efficient as automated classification. This paper proposes an automated approach to distinguish COVID-19 infections from nodules in CT images. Convolutional Neural Networks (CNNs) have significantly improved automated image classification tasks, particularly for medical images. Accordingly, we propose a refined CNN-based architecture with modified network layers to reduce complexity. Furthermore, to overcome the lack of training data, data augmentation approaches are utilized. In our method, a Multilayer Perceptron (MLP) classifies the feature vectors extracted from denoised input images by convolutional layers into two main classes: COVID-19 infections and nodules. To the best of our knowledge, other state-of-the-art methods can only classify one of the two classes listed above. Compared to these counterparts, our proposed method shows promising performance, with an accuracy of 97.80%.

Non-small cell lung cancer: identifying prognostic imaging biomarkers by leveraging public gene expression microarray data--methods and preliminary results

  • Gevaert, Olivier
  • Xu, Jiajing
  • Hoang, Chuong D
  • Leung, Ann N
  • Xu, Yue
  • Quon, Andrew
  • Rubin, Daniel L
  • Napel, Sandy
  • Plevritis, Sylvia K
Radiology 2012 Journal Article, cited 187 times
Website
PURPOSE: To identify prognostic imaging biomarkers in non-small cell lung cancer (NSCLC) by means of a radiogenomics strategy that integrates gene expression and medical images in patients for whom survival outcomes are not available by leveraging survival data in public gene expression data sets. MATERIALS AND METHODS: A radiogenomics strategy for associating image features with clusters of coexpressed genes (metagenes) was defined. First, a radiogenomics correlation map is created for a pairwise association between image features and metagenes. Next, predictive models of metagenes are built in terms of image features by using sparse linear regression. Similarly, predictive models of image features are built in terms of metagenes. Finally, the prognostic significance of the predicted image features are evaluated in a public gene expression data set with survival outcomes. This radiogenomics strategy was applied to a cohort of 26 patients with NSCLC for whom gene expression and 180 image features from computed tomography (CT) and positron emission tomography (PET)/CT were available. RESULTS: There were 243 statistically significant pairwise correlations between image features and metagenes of NSCLC. Metagenes were predicted in terms of image features with an accuracy of 59%-83%. One hundred fourteen of 180 CT image features and the PET standardized uptake value were predicted in terms of metagenes with an accuracy of 65%-86%. When the predicted image features were mapped to a public gene expression data set with survival outcomes, tumor size, edge shape, and sharpness ranked highest for prognostic significance. CONCLUSION: This radiogenomics strategy for identifying imaging biomarkers may enable a more rapid evaluation of novel imaging modalities, thereby accelerating their translation to personalized medicine.
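The "predictive models of metagenes built in terms of image features by using sparse linear regression" step might look as follows; the data here are synthetic stand-ins with the cohort's dimensions, and the variable names are placeholders.

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Synthetic stand-ins: 26 patients x 180 image features, one metagene.
rng = np.random.default_rng(0)
X_image = rng.normal(size=(26, 180))    # CT/PET image features
y_metagene = rng.normal(size=26)        # metagene expression score

# Sparse (L1) regression selects a small subset of image features
# that jointly predict the metagene's expression pattern.
model = LassoCV(cv=5).fit(X_image, y_metagene)
selected = np.flatnonzero(model.coef_)  # indices of predictive features
```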

Imaging-AMARETTO: An Imaging Genomics Software Tool to Interrogate Multiomics Networks for Relevance to Radiography and Histopathology Imaging Biomarkers of Clinical Outcomes

  • Gevaert, O.
  • Nabian, M.
  • Bakr, S.
  • Everaert, C.
  • Shinde, J.
  • Manukyan, A.
  • Liefeld, T.
  • Tabor, T.
  • Xu, J.
  • Lupberger, J.
  • Haas, B. J.
  • Baumert, T. F.
  • Hernaez, M.
  • Reich, M.
  • Quintana, F. J.
  • Uhlmann, E. J.
  • Krichevsky, A. M.
  • Mesirov, J. P.
  • Carey, V.
  • Pochet, N.
JCO Clin Cancer Inform 2020 Journal Article, cited 1 times
Website
PURPOSE: The availability of increasing volumes of multiomics, imaging, and clinical data in complex diseases such as cancer opens opportunities for the formulation and development of computational imaging genomics methods that can link multiomics, imaging, and clinical data. METHODS: Here, we present the Imaging-AMARETTO algorithms and software tools to systematically interrogate regulatory networks derived from multiomics data within and across related patient studies for their relevance to radiography and histopathology imaging features predicting clinical outcomes. RESULTS: To demonstrate its utility, we applied Imaging-AMARETTO to integrate three patient studies of brain tumors, specifically, multiomics with radiography imaging data from The Cancer Genome Atlas (TCGA) glioblastoma multiforme (GBM) and low-grade glioma (LGG) cohorts and transcriptomics with histopathology imaging data from the Ivy Glioblastoma Atlas Project (IvyGAP) GBM cohort. Our results show that Imaging-AMARETTO recapitulates known key drivers of tumor-associated microglia and macrophage mechanisms, mediated by STAT3, AHR, and CCR2, and neurodevelopmental and stemness mechanisms, mediated by OLIG2. Imaging-AMARETTO provides interpretation of their underlying molecular mechanisms in light of imaging biomarkers of clinical outcomes and uncovers novel master drivers, THBS1 and MAP2, that establish relationships across these distinct mechanisms. CONCLUSION: Our network-based imaging genomics tools serve as hypothesis generators that facilitate the interrogation of known and uncovering of novel hypotheses for follow-up with experimental validation studies. We anticipate that our Imaging-AMARETTO imaging genomics tools will be useful to the community of biomedical researchers for applications to similar studies of cancer and other complex diseases with available multiomics, imaging, and clinical data.

Glioblastoma Multiforme: Exploratory Radiogenomic Analysis by Using Quantitative Image Features

  • Gevaert, Olivier
  • Mitchell, Lex A
  • Achrol, Achal S
  • Xu, Jiajing
  • Echegaray, Sebastian
  • Steinberg, Gary K
  • Cheshier, Samuel H
  • Napel, Sandy
  • Zaharchuk, Greg
  • Plevritis, Sylvia K
Radiology 2014 Journal Article, cited 151 times
Website
Purpose: To derive quantitative image features from magnetic resonance (MR) images that characterize the radiographic phenotype of glioblastoma multiforme (GBM) lesions and to create radiogenomic maps associating these features with various molecular data. Materials and Methods: Clinical, molecular, and MR imaging data for GBMs in 55 patients were obtained from the Cancer Genome Atlas and the Cancer Imaging Archive after local ethics committee and institutional review board approval. Regions of interest (ROIs) corresponding to enhancing necrotic portions of tumor and peritumoral edema were drawn, and quantitative image features were derived from these ROIs. Robust quantitative image features were defined on the basis of an intraclass correlation coefficient of 0.6 for a digital algorithmic modification and a test-retest analysis. The robust features were visualized by using hierarchic clustering and were correlated with survival by using Cox proportional hazards modeling. Next, these robust image features were correlated with manual radiologist annotations from the Visually Accessible Rembrandt Images (VASARI) feature set and GBM molecular subgroups by using nonparametric statistical tests. A bioinformatic algorithm was used to create gene expression modules, defined as a set of coexpressed genes together with a multivariate model of cancer driver genes predictive of the module's expression pattern. Modules were correlated with robust image features by using the Spearman correlation test to create radiogenomic maps and to link robust image features with molecular pathways. Results: Eighteen image features passed the robustness analysis and were further analyzed for the three types of ROIs, for a total of 54 image features. Three enhancement features were significantly correlated with survival, 77 significant correlations were found between robust quantitative features and the VASARI feature set, and seven image features were correlated with molecular subgroups (P < .05 for all). A radiogenomics map was created to link image features with gene expression modules and allowed linkage of 56% (30 of 54) of the image features with biologic processes. Conclusion: Radiogenomic approaches in GBM have the potential to predict clinical and molecular characteristics of tumors noninvasively.

Radiomics features of the primary tumor fail to improve prediction of overall survival in large cohorts of CT- and PET-imaged head and neck cancer patients

  • Ger, Rachel B
  • Zhou, Shouhao
  • Elgohari, Baher
  • Elhalawani, Hesham
  • Mackin, Dennis M
  • Meier, Joseph G
  • Nguyen, Callistus M
  • Anderson, Brian M
  • Gay, Casey
  • Ning, Jing
  • Fuller, Clifton D
  • Li, Heng
  • Howell, Rebecca M
  • Layman, Rick R
  • Mawlawi, Osama
  • Stafford, R Jason
  • Aerts, Hugo JWL
  • Court, Laurence E.
PLoS One 2019 Journal Article, cited 0 times
Website
Radiomics studies require many patients in order to be adequately powered; thus, patients from different institutions, imaged with different protocols, are often combined. Various studies have shown that imaging protocols affect radiomics feature values. We examined whether using data from cohorts with controlled imaging protocols improved patient outcome models. We retrospectively reviewed 726 CT and 686 PET images from head and neck cancer patients, who were divided into training or independent testing cohorts. For each patient, radiomics features with different preprocessing were calculated, and two clinical variables (HPV status and tumor volume) were also included. A Cox proportional hazards model was built on the training data by using bootstrapped Lasso regression to predict overall survival. The effect of controlled imaging protocols on model performance was evaluated by subsetting the original training and independent testing cohorts to include only patients whose images were obtained using the same imaging protocol and vendor. Tumor volume, HPV status, and two radiomics covariates were selected for the CT model, resulting in an AUC of 0.72. However, volume alone produced a higher AUC, whereas adding radiomics features reduced the AUC. HPV status and one radiomics feature were selected as covariates for the PET model, resulting in an AUC of 0.59, but neither covariate was significantly associated with survival. Limiting the training and independent testing to patients with the same imaging protocol reduced the AUC for CT patients to 0.55, and no covariates were selected for PET patients. Radiomics features were not consistently associated with survival in CT or PET images of head and neck patients, even within patients with the same imaging protocol.
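A sketch of the modelling step, an L1-penalised Cox model fitted on bootstrap resamples with selection counting, assuming the lifelines library; the column names and penalty strength are placeholders, not the study's settings.

```python
from lifelines import CoxPHFitter

def bootstrap_lasso_cox(df, duration_col="os_months", event_col="dead",
                        n_boot=100, penalizer=0.1):
    """Count how often each covariate survives L1-penalised Cox fits
    on bootstrap resamples; frequently selected covariates form the
    final model. `df` holds radiomics features plus the two columns."""
    counts = {c: 0 for c in df.columns
              if c not in (duration_col, event_col)}
    for _ in range(n_boot):
        sample = df.sample(frac=1.0, replace=True)
        cph = CoxPHFitter(penalizer=penalizer, l1_ratio=1.0)  # pure Lasso
        cph.fit(sample, duration_col=duration_col, event_col=event_col)
        for name, coef in cph.params_.items():
            if abs(coef) > 1e-8:                # survived the penalty
                counts[name] += 1
    return counts
```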

Synthetic Head and Neck and Phantom Images for Determining Deformable Image Registration Accuracy in Magnetic Resonance Imaging

  • Ger, Rachel B
  • Yang, Jinzhong
  • Ding, Yao
  • Jacobsen, Megan C
  • Cardenas, Carlos E
  • Fuller, Clifton D
  • Howell, Rebecca M
  • Li, Heng
  • Stafford, R Jason
  • Zhou, Shouhao
Medical Physics 2018 Journal Article, cited 0 times
Website

Ultra-Fast 3D GPGPU Region Extractions for Anatomy Segmentation

  • George, Jose
  • Mysoon, N. S.
  • Antony, Nixima
2019 Conference Paper, cited 0 times
Website
Region extractions are ubiquitous in any anatomy segmentation, and region growing is one such method. Starting from an initial seed point, it grows a region of interest until all valid voxels are checked, thereby resulting in an object segmentation. Although widely used, it is computationally expensive because of its sequential approach. In this paper, we present a parallel, high-performance alternative to region growing using GPGPU capability. The idea is to approximate region growing requirements within an algorithm using a parallel connected-component labeling (CCL) solution. To showcase this, we selected a typical lung segmentation problem using region growing. On the CPU, the sequential approach consists of 3D region growing inside a mask that is created after applying a threshold. On the GPU, the parallel alternative is to apply parallel CCL and select the biggest region of interest. We evaluated our approach on 45 clinical chest CT scans in LIDC data from the TCIA repository. Relative to the CPU, our CUDA-based GPU implementation achieved an average speedup of approximately 240×. The speedup is so pronounced that the method can even be applied to 4D lung segmentation at 6 fps.
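A CPU analogue of the CCL-based region selection is easy to sketch with SciPy: label the thresholded mask and keep the largest connected component. (On the GPU, cupyx.scipy.ndimage.label offers a similar interface; the CUDA kernels of the paper are not reproduced here.)

```python
import numpy as np
from scipy import ndimage

def largest_component(mask: np.ndarray) -> np.ndarray:
    """Label a binary mask and keep only its largest connected
    component, e.g. the lung region after thresholding a CT volume."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return np.zeros_like(mask, dtype=bool)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)
```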

A CNN-based unified framework utilizing projection loss in unison with label noise handling for multiple Myeloma cancer diagnosis

  • Gehlot, S.
  • Gupta, A.
  • Gupta, R.
Med Image Anal 2021 Journal Article, cited 0 times
Website
Multiple Myeloma (MM) is a malignancy of plasma cells. Similar to other forms of cancer, it demands prompt diagnosis to reduce the risk of mortality. The conventional diagnostic tools are resource-intensive and hence not easily scalable for extending their reach to the masses. Advancements in deep learning have led to rapid developments in affordable, resource-optimized, easily deployable computer-assisted solutions. This work proposes a unified framework for MM diagnosis using microscopic blood cell imaging data that addresses the key challenges of inter-class visual similarity between healthy and cancer cells and of label noise in the dataset. To extract class-distinctive features, we propose a projection loss that maximizes the projection of a sample's activation onto the respective class vector while imposing orthogonality constraints on the class vectors. This projection loss is used along with the cross-entropy loss to design a dual-branch architecture that achieves improved performance and provides scope for targeting the label noise problem. Based on this architecture, two methodologies have been proposed to correct the noisy labels, and a coupling classifier has been proposed to resolve conflicts in the dual-branch architecture's predictions. We have utilized a large dataset of 72 subjects (26 healthy and 46 MM cancer) containing a total of 74996 images (including 34555 training cell images and 40441 test cell images), so far the most extensive dataset on Multiple Myeloma cancer reported in the literature. An ablation study has also been carried out. The proposed architecture performs best, with a balanced accuracy of 94.17% on binary classification of healthy versus cancer cells, in a comparison with ten state-of-the-art architectures. Extensive experiments on two additional publicly available datasets of two different modalities have also been used to analyze the label noise handling capability of the proposed methodology. The code will be available under https://github.com/shivgahlout/CAD-MM.
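The projection loss idea can be sketched as follows, assuming PyTorch; the cosine normalisation and the 0.1 orthogonality weight are assumptions, as the paper's exact formulation is not reproduced here.

```python
import torch
import torch.nn.functional as F

def projection_loss(feats, labels, class_vectors, ortho_weight=0.1):
    """Encourage each sample's feature vector to project strongly onto
    its own class vector while pushing class vectors towards mutual
    orthogonality.

    feats: (B, D) activations; labels: (B,) class indices;
    class_vectors: (C, D) learnable parameters.
    """
    w = F.normalize(class_vectors, dim=1)
    proj = (F.normalize(feats, dim=1) * w[labels]).sum(dim=1)  # cosine
    gram = w @ w.t()
    ortho = (gram - torch.eye(w.size(0), device=w.device)).pow(2).sum()
    return (1.0 - proj).mean() + ortho_weight * ortho
```

In training this term would be added to the cross-entropy loss of the other branch, as the abstract describes.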

SDCT-AuxNetθ: DCT augmented stain deconvolutional CNN with auxiliary classifier for cancer diagnosis

  • Gehlot, Shiv
  • Gupta, Anubha
  • Gupta, Ritu
Medical Image Analysis 2020 Journal Article, cited 6 times
Website
Acute lymphoblastic leukemia (ALL) is a pervasive pediatric white blood cell cancer across the globe. With the popularity of convolutional neural networks (CNNs), computer-aided diagnosis of cancer has attracted considerable attention. Such tools are easily deployable and cost-effective, and hence can enable extensive coverage of cancer diagnostic facilities. However, the development of such a tool for ALL has so far been challenging due to the non-availability of a large training dataset. The visual similarity between malignant and normal cells adds to the complexity of the problem. This paper discusses the recent release of a large dataset and presents a novel deep learning architecture for the classification of cell images of ALL. The proposed architecture, namely SDCT-AuxNetθ, is a two-module framework that uses a compact CNN as the main classifier in one module and a kernel SVM as the auxiliary classifier in the other. While the CNN classifier uses features obtained through bilinear pooling, the auxiliary classifier uses spectral-averaged features. Further, this CNN is trained on stain-deconvolved quantity images in the optical density domain instead of conventional RGB images. A novel test strategy is proposed that exploits both classifiers for decision making, using the confidence scores of their predicted class labels. Elaborate experiments have been carried out on our recently released public dataset of 15114 images of ALL cancer and healthy cells to establish the validity of the proposed methodology, which is also robust to subject-level variability. A weighted F1 score of 94.8% is obtained, the best so far on this challenging dataset.

Machine Learning Methods for Image Analysis in Medical Applications: From Alzheimer’s Disease, Brain Tumors, to Assisted Living

  • Chenjie Ge
2020 Thesis, cited 0 times
Website
Healthcare has progressed greatly in recent years owing to technological advances, with machine learning playing an important role in processing and analyzing large amounts of medical data. This thesis investigates four healthcare-related problems (Alzheimer's disease detection, glioma classification, human fall detection, and obstacle avoidance in prosthetic vision), where the underlying methodologies are associated with machine learning and computer vision. For Alzheimer's disease (AD) diagnosis, apart from patients' symptoms, Magnetic Resonance Images (MRIs) also play an important role. Inspired by the success of deep learning, a new multi-stream multi-scale Convolutional Neural Network (CNN) architecture is proposed for AD detection from MRIs, where AD features are characterized at both the tissue level and the scale level for improved feature learning. Good classification performance is obtained for AD/NC (normal control) classification, with a test accuracy of 94.74%. In glioma subtype classification, biopsies are usually needed to determine the different molecular-based glioma subtypes. We investigate non-invasive glioma subtype prediction from MRIs by using deep learning. A 2D multi-stream CNN architecture is used to learn the features of gliomas from multi-modal MRIs, where the training dataset is enlarged with synthetic brain MRIs generated by pairwise Generative Adversarial Networks (GANs). A test accuracy of 88.82% has been achieved for IDH mutation (a molecular-based subtype) prediction. A new deep semi-supervised learning method is also proposed to tackle the problem of missing molecular-related labels in training datasets and to improve the performance of glioma classification. In the other two applications, we address video-based human fall detection using co-saliency-enhanced Recurrent Convolutional Networks (RCNs), as well as obstacle avoidance in prosthetic vision by characterizing obstacle-related video features using a Spiking Neural Network (SNN). These investigations can benefit future research, where artificial intelligence/deep learning may open a new way for real medical applications.

Segmentation of colon and removal of opacified fluid for virtual colonoscopy

  • Gayathri, Devi K
  • Radhakrishnan, R
  • Rajamani, Kumar
Pattern Analysis and Applications 2017 Journal Article, cited 0 times
Website
Colorectal cancer (CRC) is the third most common type of cancer. Techniques such as flexible sigmoidoscopy and capsule endoscopy for CRC screening cause physical pain and hardship to patients. To overcome these disadvantages, computed tomography (CT) can be employed to identify polyps or growths while screening for CRC. The proposed approach improves the accuracy and reduces the computation time of segmenting the colon from abdominal CT images, which contain anatomical structures such as the lungs, small bowel, large bowel (colon), ribs, opacified fluid and bones. The segmentation is performed in two major steps. The first step segments the air-filled colon portions by placing suitable seed points and applying modified 3D seeded region growing, which identifies and matches similar voxels using 6-neighbourhood connectivity. The opacified fluid portions are segmented using a fuzzy connectedness approach enhanced with interval thresholding. Membership classes are defined and the voxels are categorized based on their class value. Interval thresholding is performed so that the bones and opacified fluid parts can be extracted. The bones are removed by placing seed points, since the bone region is more continuous across the axial slices. The resulting image containing bones is subtracted from the threshold output to segment the opacified fluid in all axial slices of a dataset. Finally, the opacified fluid is concatenated with the segmented colon for 3D rendering of the segmented colon. This method was applied to 15 datasets downloaded from TCIA and to a real-time dataset, in both supine and prone positions, and the accuracy achieved was 98.73%.
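The 3D seeded region growing with 6-neighbourhood connectivity mentioned in this abstract can be sketched as a simple breadth-first flood fill; the intensity bounds and seed are placeholders, and the authors' specific modifications are not reproduced.

```python
# Illustrative 3D seeded region growing with 6-neighbourhood connectivity.
from collections import deque
import numpy as np

def region_grow_3d(volume, seed, low, high):
    grown = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    while queue:
        z, y, x = queue.popleft()
        if grown[z, y, x] or not (low <= volume[z, y, x] <= high):
            continue
        grown[z, y, x] = True                  # accept voxel into the region
        for dz, dy, dx in offsets:             # visit the 6 face neighbours
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not grown[nz, ny, nx]):
                queue.append((nz, ny, nx))
    return grown
```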

Automatic Segmentation of Colon in 3D CT Images and Removal of Opacified Fluid Using Cascade Feed Forward Neural Network

  • Gayathri Devi, K
  • Radhakrishnan, R
Computational and Mathematical Methods in Medicine 2015 Journal Article, cited 5 times
Website

Benefit of overlapping reconstruction for improving the quantitative assessment of CT lung nodule volume

  • Gavrielides, Marios A
  • Zeng, Rongping
  • Myers, Kyle J
  • Sahiner, Berkman
  • Petrick, Nicholas
Academic Radiology 2013 Journal Article, cited 23 times
Website
RATIONALE AND OBJECTIVES: The aim of this study was to quantify the effect of overlapping reconstruction on the precision and accuracy of lung nodule volume estimates in a phantom computed tomographic (CT) study. MATERIALS AND METHODS: An anthropomorphic phantom was used with a vasculature insert on which synthetic lung nodules were attached. Repeated scans of the phantom were acquired using a 64-slice CT scanner. Overlapping and contiguous reconstructions were performed for a range of CT imaging parameters (exposure, slice thickness, pitch, reconstruction kernel) and a range of nodule characteristics (size, density). Nodule volume was estimated with a previously developed matched-filter algorithm. RESULTS: Absolute percentage bias across all nodule sizes (n = 2880) was significantly lower when overlapping reconstruction was used, with an absolute percentage bias of 6.6% (95% confidence interval [CI], 6.4-6.9), compared to 13.2% (95% CI, 12.7-13.8) for contiguous reconstruction. Overlapping reconstruction also showed a precision benefit, with a lower standard percentage error of 7.1% (95% CI, 6.9-7.2) compared with 15.3% (95% CI, 14.9-15.7) for contiguous reconstructions across all nodules. Both effects were more pronounced for the smaller, subcentimeter nodules. CONCLUSIONS: These results support the use of overlapping reconstruction to improve the quantitative assessment of nodule size with CT imaging.
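For readers unfamiliar with the two metrics reported here, a minimal sketch of how percentage bias and percentage standard error could be computed from repeated volume estimates against a known phantom volume follows; it is an illustration, not the study's analysis code.

```python
# Percentage bias and percentage standard error of repeated volume
# estimates relative to a known true (phantom) nodule volume.
import numpy as np

def percent_bias_and_error(estimates, true_volume):
    est = np.asarray(estimates, dtype=float)
    pct_err = 100.0 * (est - true_volume) / true_volume
    return abs(pct_err.mean()), pct_err.std(ddof=1)
```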

A resource for the assessment of lung nodule size estimation methods: database of thoracic CT scans of an anthropomorphic phantom

  • Gavrielides, Marios A
  • Kinnard, Lisa M
  • Myers, Kyle J
  • Peregoy, Jennifer
  • Pritchard, William F
  • Zeng, Rongping
  • Esparza, Juan
  • Karanian, John
  • Petrick, Nicholas
Optics express 2010 Journal Article, cited 50 times
Website
A number of interrelated factors can affect the precision and accuracy of lung nodule size estimation. To quantify the effect of these factors, we have been conducting phantom CT studies using an anthropomorphic thoracic phantom containing a vasculature insert to which synthetic nodules were inserted or attached. Ten repeat scans were acquired on different multi-detector scanners, using several sets of acquisition and reconstruction protocols and various nodule characteristics (size, shape, density, location). This study design enables both bias and variance analysis for the nodule size estimation task. The resulting database is in the process of becoming publicly available as a resource to facilitate the assessment of lung nodule size estimation methodologies and to enable comparisons between different methods regarding measurement error. This resource complements public databases of clinical data and will contribute towards the development of procedures that will maximize the utility of CT imaging for lung cancer screening and tumor therapy evaluation.

An Improved Mammogram Classification Approach Using Back Propagation Neural Network

  • Gautam, Aman
  • Bhateja, Vikrant
  • Tiwari, Ananya
  • Satapathy, Suresh Chandra
2017 Book Section, cited 16 times
Website
Mammograms are generally contaminated by quantum noise, degrading their visual quality and thereby the performance of classifiers in Computer-Aided Diagnosis (CAD). Hence, enhancement of mammograms is necessary to improve the visual quality and detectability of the anomalies present in the breasts. In this paper, a sigmoid-based non-linear function is applied for contrast enhancement of mammograms. The enhanced mammograms are used to describe the texture of the detected anomaly using Gray Level Co-occurrence Matrix (GLCM) features. A Back Propagation Artificial Neural Network (BP-ANN) is then used as a classification tool to label the mammogram as abnormal or normal. The proposed classification approach is reported to achieve considerably better accuracy than other existing approaches.

An efficient magnetic resonance image data quality screening dashboard

  • Gates, E. D. H.
  • Celaya, A.
  • Suki, D.
  • Schellingerhout, D.
  • Fuentes, D.
J Appl Clin Med Phys 2022 Journal Article, cited 0 times
Website
PURPOSE: Complex data processing and curation for artificial intelligence applications rely on high-quality data sets for training and analysis. Manually reviewing images and their associated annotations is a very laborious task and existing quality control tools for data review are generally limited to raw images only. The purpose of this work was to develop an imaging informatics dashboard for the easy and fast review of processed magnetic resonance (MR) imaging data sets; we demonstrated its ability in a large-scale data review. METHODS: We developed a custom R Shiny dashboard that displays key static snapshots of each imaging study and its annotations. A graphical interface allows the structured entry of review data and download of tabulated review results. We evaluated the dashboard using two large data sets: 1380 processed MR imaging studies from our institution and 285 studies from the 2018 MICCAI Brain Tumor Segmentation Challenge (BraTS). RESULTS: Studies were reviewed at an average rate of 100/h using the dashboard, 10 times faster than using existing data viewers. For data from our institution, 1181 of the 1380 (86%) studies were of acceptable quality. The most commonly identified failure modes were tumor segmentation (9.6% of cases) and image registration (4.6% of cases). Tumor segmentation without visible errors on the dashboard had much better agreement with reference tumor volume measurements (root-mean-square error 12.2 cm³) than did segmentations with minor errors (20.5 cm³) or failed segmentations (27.4 cm³). In the BraTS data, 242 of 285 (85%) studies were acceptable quality after processing. Among the 43 cases that failed review, 14 had unacceptable raw image quality. CONCLUSION: Our dashboard provides a fast, effective tool for reviewing complex processed MR imaging data sets. It is freely available for download at https://github.com/EGates1/MRDQED.

Simultaneous emission and attenuation reconstruction in time-of-flight PET using a reference object

  • Garcia-Perez, P.
  • Espana, S.
EJNMMI Phys 2020 Journal Article, cited 0 times
Website
BACKGROUND: Simultaneous reconstruction of emission and attenuation images in time-of-flight (TOF) positron emission tomography (PET) does not provide a unique solution. In this study, we propose to resolve this limitation by including additional information given by a reference object with known attenuation placed outside the patient. Different configurations of the reference object were studied, including geometry, material composition, and activity, and an optimal configuration was defined. In addition, this configuration was tested for different timing resolutions and noise levels. RESULTS: The proposed strategy was tested in 2D simulations obtained by forward projection of available PET/CT data, with noise included using Monte Carlo techniques. The obtained results suggest that the optimal configuration corresponds to a water cylinder inserted in the patient table and filled with activity. In that case, mean differences between reconstructed and true images were below 10%. However, better results can be obtained by increasing the activity of the reference object. CONCLUSION: This study shows promising results that may make it possible to obtain an accurate attenuation map from pure TOF-PET data without prior knowledge obtained from CT, MRI, or transmission scans.

Regression Approach for Cranioplasty Modeling

  • Garcia, M.G.M.
  • Furuie, S.S.
2022 Conference Paper, cited 0 times
Website
Patient-specific implants provide important advantages for patients and medical professionals. The state of the art in cranioplasty implant production is based on bone structure reconstruction and the use of the patient's own anatomical information to fill the bone defect. The present work proposes a two-dimensional investigation of which dataset, combining points of the bone defect region and points of the healthy contralateral skull hemisphere, yields the polynomial regression closest to a gold standard structure. The similarity measures used to compare datasets are the root mean square error (RMSE) and the Hausdorff distance. The objective is to use the most successful dataset in the future development and testing of a semi-automatic methodology for cranial prosthesis modeling. The present methodology was implemented in Python scripts and uses five series of skull computed tomography images to generate phantoms with small, medium and large bone defects. Results from statistical tests and observations of the mean RMSE and mean Hausdorff distance allow us to determine that the dataset formed by the phantom contour points (counted twice) and the mirrored contour points is the one that significantly improves the similarity measures.

Imaging Biomarker Development for Lower Back Pain Using Machine Learning: How Image Analysis Can Help Back Pain

  • Gaonkar, B.
  • Cook, K.
  • Yoo, B.
  • Salehi, B.
  • Macyszyn, L.
Methods Mol Biol 2022 Journal Article, cited 0 times
Website
State-of-the-art diagnosis of radiculopathy relies on "highly subjective" radiologist interpretation of magnetic resonance imaging of the lower back. Currently, the treatment of lumbar radiculopathy and associated lower back pain lacks coherence due to an absence of reliable, objective diagnostic biomarkers. Using emerging machine learning techniques, the subjectivity of interpretation may be replaced by the objectivity of automated analysis. However, training computer vision methods requires a curated database of imaging data containing anatomical delineations vetted by a team of human experts. In this chapter, we outline our efforts to develop such a database of curated imaging data alongside the required delineations. We detail the processes involved in data acquisition and subsequent annotation. Then we explain how the resulting database can be utilized to develop a machine learning-based objective imaging biomarker. Finally, we present an explanation of how we validate our machine learning-based anatomy delineation algorithms. Ultimately, we hope to allow validated machine learning models to be used to generate objective biomarkers from imaging data-for clinical use to diagnose lumbar radiculopathy and guide associated treatment plans.

Improving the Subtype Classification of Non-small Cell Lung Cancer by Elastic Deformation Based Machine Learning

  • Gao, Yang
  • Song, Fan
  • Zhang, Peng
  • Liu, Jian
  • Cui, Jingjing
  • Ma, Yingying
  • Zhang, Guanglei
  • Luo, Jianwen
J Digit Imaging 2021 Journal Article, cited 0 times
Website
Non-invasive image-based machine learning models have been used to classify subtypes of non-small cell lung cancer (NSCLC). However, classification performance is limited by dataset size, because insufficient data cannot fully represent the characteristics of the tumor lesions. In this work, a data augmentation method based on elastic deformation is proposed to artificially enlarge an image dataset of NSCLC patients with two subtypes (squamous cell carcinoma and large cell carcinoma) comprising 3158 images. Elastic deformation effectively expanded the dataset by generating new images in which tumor lesions undergo elastic shape transformation. To evaluate the proposed method, two classification models were trained on the original and augmented datasets, respectively. Training on the augmented dataset significantly increased classification metrics, including the area under the receiver operating characteristic curve (ROC-AUC), accuracy, sensitivity, specificity, and F1-score, thus improving NSCLC subtype classification performance. These results suggest that elastic deformation can be an effective data augmentation method for NSCLC tumor lesion images, and that classification models built with the help of elastic deformation have the potential to serve clinical lung cancer diagnosis and treatment design.

Contour-aware network with class-wise convolutions for 3D abdominal multi-organ segmentation

  • Gao, H.
  • Lyu, M.
  • Zhao, X.
  • Yang, F.
  • Bai, X.
Med Image Anal 2023 Journal Article, cited 0 times
Website
Accurate delineation of multiple organs is a critical process for various medical procedures, one that can be operator-dependent and time-consuming. Existing organ segmentation methods, mainly inspired by natural image analysis techniques, may not fully exploit the traits of the multi-organ segmentation task and cannot accurately segment organs with various shapes and sizes simultaneously. In this work, the characteristics of multi-organ segmentation are considered: the global count, position and scale of organs are generally predictable, while their local shape and appearance are volatile. Thus, we supplement the region segmentation backbone with a contour localization task to increase the certainty along delicate boundaries. Meanwhile, each organ has exclusive anatomical traits, which motivates us to handle class variability with class-wise convolutions that highlight organ-specific features and suppress irrelevant responses at different fields-of-view. To validate our method with adequate numbers of patients and organs, we constructed a multi-center dataset, which contains 110 3D CT scans with 24,528 axial slices, and provided voxel-level manual segmentations of 14 abdominal organs, adding up to 1,532 3D structures in total. Extensive ablation and visualization studies on it validate the effectiveness of the proposed method. Quantitative analysis shows that we achieve state-of-the-art performance for most abdominal organs, obtaining a 95% Hausdorff distance of 3.63 mm and a Dice similarity coefficient of 83.32% on average.
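One plausible reading of "class-wise convolutions" is a grouped convolution with one filter group per organ class, sketched below; this follows the idea in the abstract under stated assumptions, not the authors' published implementation.

```python
# Hedged sketch: one independent 3D conv branch per organ class via
# grouped convolution, so each class gets its own organ-specific filters.
import torch
import torch.nn as nn

class ClassWiseConv(nn.Module):
    def __init__(self, in_ch, num_classes):
        super().__init__()
        self.num_classes = num_classes
        # groups=num_classes -> each class sees its own copy of the features
        self.conv = nn.Conv3d(in_ch * num_classes, num_classes,
                              kernel_size=3, padding=1, groups=num_classes)

    def forward(self, shared_features):
        x = shared_features.repeat(1, self.num_classes, 1, 1, 1)
        return self.conv(x)  # one logit map per organ class
```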

Transformer based multiple instance learning for WSI breast cancer classification

  • Gao, Chengyang
  • Sun, Qiule
  • Zhu, Wen
  • Zhang, Lizhi
  • Zhang, Jianxin
  • Liu, Bin
  • Zhang, Junxing
Biomedical Signal Processing and Control 2024 Journal Article, cited 0 times
The computer-aided diagnosis method based on deep learning provides pathologists with preliminary diagnostic opinions and improves their work efficiency. Inspired by the widespread use of transformers in computer vision, we explore their effectiveness and potential for classifying breast cancer tissues in WSIs and propose a hybrid multiple instance learning method called HTransMIL. Specifically, its first stage selects informative instances using a hierarchical Swin Transformer, which captures both global and local information in pathological images and helps obtain accurately discriminative instances. The second stage strengthens the correlation between the selected instances via another transformer encoder and produces powerful bag-level features for classification by aggregating the interacted instances. Besides, visualization analysis is used to better understand the weakly supervised classification model for WSIs. Extensive evaluation on one private and two public WSI breast cancer datasets demonstrates the effectiveness and competitiveness of HTransMIL. The code and models are publicly available at https://github.com/Chengyang852/Transformer-for-WSI-classification.

Performance analysis for nonlinear tomographic data processing

  • Gang, Grace J
  • Guo, Xueqi
  • Stayman, J Webster
2019 Conference Proceedings, cited 0 times
Website

Extraction of pulmonary vessels and tumour from plain computed tomography sequence

  • Ganapathy, Sridevi
  • Ashar, Kinnari
  • Kathirvelu, D
2018 Conference Proceedings, cited 0 times
Website

In Silico Approach for the Definition of radiomiRNomic Signatures for Breast Cancer Differential Diagnosis

  • Gallivanone, F.
  • Cava, C.
  • Corsi, F.
  • Bertoli, G.
  • Castiglioni, I.
Int J Mol Sci 2019 Journal Article, cited 2 times
Website
Personalized medicine relies on the integration and consideration of specific characteristics of the patient, such as tumor phenotypic and genotypic profiling. BACKGROUND: Radiogenomics aim to integrate phenotypes from tumor imaging data with genomic data to discover genetic mechanisms underlying tumor development and phenotype. METHODS: We describe a computational approach that correlates phenotype from magnetic resonance imaging (MRI) of breast cancer (BC) lesions with microRNAs (miRNAs), mRNAs, and regulatory networks, developing a radiomiRNomic map. We validated our approach to the relationships between MRI and miRNA expression data derived from BC patients. We obtained 16 radiomic features quantifying the tumor phenotype. We integrated the features with miRNAs regulating a network of pathways specific for a distinct BC subtype. RESULTS: We found six miRNAs correlated with imaging features in Luminal A (miR-1537, -205, -335, -337, -452, and -99a), seven miRNAs (miR-142, -155, -190, -190b, -1910, -3617, and -429) in HER2+, and two miRNAs (miR-135b and -365-2) in Basal subtype. We demonstrate that the combination of correlated miRNAs and imaging features have better classification power of Luminal A versus the different BC subtypes than using miRNAs or imaging alone. CONCLUSION: Our computational approach could be used to identify new radiomiRNomic profiles of multi-omics biomarkers for BC differential diagnosis and prognosis.

Interpretable Medical Image Classification Using Prototype Learning and Privileged Information

  • Gallée, Luisa
  • Beer, Meinrad
  • Götz, Michael
2023 Book Section, cited 0 times
Interpretability is often an essential requirement in medical imaging. Advanced deep learning methods are required to address this need for explainability and high performance. In this work, we investigate whether additional information available during the training process can be used to create an understandable and powerful model. We propose an innovative solution called Proto-Caps that leverages the benefits of capsule networks, prototype learning and the use of privileged information. Evaluating the proposed solution on the LIDC-IDRI dataset shows that it combines increased interpretability with above state-of-the-art prediction performance. Compared to the explainable baseline model, our method achieves more than 6% higher accuracy in predicting both malignancy (93.0%) and mean characteristic features of lung nodules. Simultaneously, the model provides case-based reasoning with prototype representations that allow visual validation of radiologist-defined attributes.

A fast and scalable method for quality assurance of deformable image registration on lung CT scans using convolutional neural networks

  • Galib, Shaikat M
  • Lee, Hyoung K
  • Guy, Christopher L
  • Riblett, Matthew J
  • Hugo, Geoffrey D
Med Phys 2020 Journal Article, cited 1 times
Website
PURPOSE: To develop and evaluate a method to automatically identify and quantify deformable image registration (DIR) errors between lung computed tomography (CT) scans for quality assurance (QA) purposes. METHODS: We propose a deep learning method to flag registration errors. The method involves preparation of a dataset for machine learning model training and testing, design of a three-dimensional (3D) convolutional neural network architecture that classifies registrations into good or poor classes, and evaluation of a metric called the registration error index (REI), which provides a quantitative measure of registration error. RESULTS: Our study shows that, despite the limited number of training images available (10 CT scan pairs for training and 17 CT scan pairs for testing), the method achieves 0.882 AUC-ROC on the test dataset. Furthermore, the combined standard uncertainty of the REI estimated by our model lies within ±0.11 (±11% of the true REI value), with a confidence level of approximately 68%. CONCLUSIONS: We have developed and evaluated our method using original clinical registrations without generating any synthetic/simulated data. Moreover, the test data were acquired from a different environment than the training data, so the method was validated robustly. The results of this study show that our algorithm performs reasonably well in challenging scenarios.

Alternative Tool for the Diagnosis of Diseases Through Virtual Reality

  • Galeano, Sara Daniela Galeano
  • Gonzalez, Miguel Esteban Mora
  • Medina, Ricardo Alonso Espinosa
2021 Conference Paper, cited 0 times
Website
Virtual reality (VR) presents objects or simulated scenes that reproduce situations closely resembling reality. In medicine, processing and 3D reconstruction of medical images is an important step in VR. We propose a methodology for processing medical images, segmenting organs, reconstructing structures in 3D and representing those structures in a VR environment, in order to provide the specialist with an alternative tool for the analysis of medical images. We present an image segmentation method based on area differentiation and other image processing techniques; the 3D reconstruction was performed with the 'isosurface' method. Different studies show the benefits of VR applied to clinical practice, in addition to its uses as an educational tool. A VR environment was created to be visualized with glasses designed for this purpose; this can serve as an alternative tool for the identification and visualization of COVID-19-affected lungs through medical image processing and subsequent 3D reconstruction.

RMTF-Net: Residual Mix Transformer Fusion Net for 2D Brain Tumor Segmentation

  • Gai, D.
  • Zhang, J.
  • Xiao, Y.
  • Min, W.
  • Zhong, Y.
  • Zhong, Y.
2022 Journal Article, cited 0 times
Website
Due to the complexity of medical imaging techniques and the high heterogeneity of glioma surfaces, segmentation of human gliomas is one of the most challenging tasks in medical image analysis. Current methods based on convolutional neural networks concentrate on feature extraction while ignoring the correlation between local and global features. In this paper, we propose a residual mix transformer fusion net, namely RMTF-Net, for brain tumor segmentation. In the feature encoder, a residual mix transformer encoder comprising a mix transformer and a residual convolutional neural network (RCNN) is proposed. The mix transformer provides an overlapping patch embedding mechanism to cope with the loss of patch boundary information. Moreover, a parallel fusion strategy based on the RCNN is used to obtain locally-globally balanced information. In the feature decoder, a global feature integration (GFI) module is applied, which enriches the context with global attention features. Extensive experiments on brain tumor segmentation with LGG, BraTS2019 and BraTS2020 demonstrated that our proposed RMTF-Net is superior to existing state-of-the-art methods in subjective visual performance and objective evaluation.

Optimized U-Net for Brain Tumor Segmentation

  • Futrega, Michał
  • Milesi, Alexandre
  • Marcinkiewicz, Michał
  • Ribalta, Pablo
2022 Book Section, cited 0 times
We propose an optimized U-Net architecture for a brain tumor segmentation task in the BraTS21 challenge. To find the optimal model architecture and the learning schedule, we have run an extensive ablation study to test: deep supervision loss, Focal loss, decoder attention, drop block, and residual connections. Additionally, we have searched for the optimal depth of the U-Net encoder, number of convolutional channels and post-processing strategy. Our method won the validation phase and took third place in the test phase. We have open-sourced the code to reproduce our BraTS21 submission at the NVIDIA Deep Learning Examples GitHub Repository (https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/Segmentation/nnUNet/notebooks/BraTS21.ipynb).

Tuning U-Net for Brain Tumor Segmentation

  • Futrega, Michał
  • Marcinkiewicz, Michał
  • Ribalta, Pablo
2023 Conference Proceedings, cited 0 times
Website
We propose a solution for the BraTS22 challenge that builds on our previous submission, the Optimized U-Net method. This year we focused on improving the model architecture and training schedule. The proposed method further improves scores on both our internal cross-validation and the challenge validation data. The validation mean Dice scores are: ET 0.8381, TC 0.8802, WT 0.9292; the mean Hausdorff95 distances are: ET 14.460, TC 5.840, WT 3.594.

Textural radiomic features and time-intensity curve data analysis by dynamic contrast-enhanced MRI for early prediction of breast cancer therapy response: preliminary data

  • Fusco, Roberta
  • Granata, Vincenza
  • Maio, Francesca
  • Sansone, Mario
  • Petrillo, Antonella
Eur Radiol Exp 2020 Journal Article, cited 1 times
Website
BACKGROUND: To investigate the potential of semiquantitative time-intensity curve parameters, compared to textural radiomic features on arterial phase images, from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) for early prediction of breast cancer neoadjuvant therapy response. METHODS: A retrospective study of 45 patients subjected to DCE-MRI, drawn from public datasets containing examinations performed prior to the start of treatment and after the first treatment cycle ('QIN Breast DCE-MRI' and 'QIN-Breast'), was performed. In total, 11 semiquantitative parameters and 50 texture features were extracted. Non-parametric tests, receiver operating characteristic analysis with area under the curve (ROC-AUC), the Spearman correlation coefficient, and the Kruskal-Wallis test with Bonferroni correction were applied. RESULTS: Fifteen patients with pathological complete response (pCR) and 30 patients without pCR were analysed. Significant differences in median values between pCR and non-pCR patients were found for entropy, long-run emphasis, and busyness among the textural features, and for maximum signal difference, washout slope, washin slope, and standardised index of shape among the dynamic semiquantitative parameters. The standardised index of shape gave the best results, with a ROC-AUC of 0.93 for differentiating pCR from non-pCR patients. CONCLUSIONS: The standardised index of shape could become a clinical tool to differentiate, in the early stages of treatment, responding from non-responding patients.
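Two of the semiquantitative time-intensity curve parameters named above, the washin and washout slopes, can be illustrated with a simplified computation; the peak definition and time handling below are assumptions made for the sketch.

```python
# Simplified washin/washout slope computation from a time-intensity curve.
import numpy as np

def washin_washout_slopes(signal, times):
    s = np.asarray(signal, dtype=float)
    t = np.asarray(times, dtype=float)
    peak = int(np.argmax(s))                   # peak enhancement index
    washin = (s[peak] - s[0]) / (t[peak] - t[0]) if peak > 0 else 0.0
    washout = (s[-1] - s[peak]) / (t[-1] - t[peak]) if peak < len(s) - 1 else 0.0
    return washin, washout
```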

AIGAN: Attention-encoding Integrated Generative Adversarial Network for the reconstruction of low-dose CT and low-dose PET images

  • Fu, Yu
  • Dong, Shunjie
  • Niu, Meng
  • Xue, Le
  • Guo, Hanning
  • Huang, Yanyan
  • Xu, Yuanfan
  • Yu, Tianbai
  • Shi, Kuangyu
  • Yang, Qianqian
  • Shi, Yiyu
  • Zhang, Hong
  • Tian, Mei
  • Zhuo, Cheng
Medical Image Analysis 2023 Journal Article, cited 0 times
Website
X-ray computed tomography (CT) and positron emission tomography (PET) are two of the most commonly used medical imaging technologies for the evaluation of many diseases. Full-dose imaging for CT and PET ensures image quality but usually raises concerns about the potential health risks of radiation exposure. The contradiction between reducing radiation exposure and retaining diagnostic performance can be addressed effectively by reconstructing low-dose CT (L-CT) and low-dose PET (L-PET) images to the same high quality as full-dose ones (F-CT and F-PET). In this paper, we propose an Attention-encoding Integrated Generative Adversarial Network (AIGAN) to achieve efficient and universal full-dose reconstruction for L-CT and L-PET images. AIGAN consists of three modules: the cascade generator, the dual-scale discriminator and the multi-scale spatial fusion module (MSFM). A sequence of consecutive L-CT (L-PET) slices is first fed into the cascade generator, which integrates a generation-encoding-generation pipeline. The generator plays a zero-sum game with the dual-scale discriminator over two stages: the coarse and fine stages. In both stages, the generator generates estimated F-CT (F-PET) images as close to the original F-CT (F-PET) images as possible. After the fine stage, the estimated fine full-dose images are fed into the MSFM, which fully explores inter- and intra-slice structural information, to output the final generated full-dose images. Experimental results show that the proposed AIGAN achieves state-of-the-art performance on commonly used metrics and satisfies the reconstruction needs of clinical standards.

Automatic Detection of Lung Nodules Using 3D Deep Convolutional Neural Networks

  • Fu, Ling
  • Ma, Jingchen
  • Chen, Yizhi
  • Larsson, Rasmus
  • Zhao, Jun
Journal of Shanghai Jiaotong University (Science) 2019 Journal Article, cited 0 times
Website
Lung cancer is the leading cause of cancer deaths worldwide. Accurate early diagnosis is critical to increasing the 5-year survival rate of lung cancer, so efficient and accurate detection of lung nodules, the potential precursors to lung cancer, is paramount. In this paper, a computer-aided lung nodule detection system using 3D deep convolutional neural networks (CNNs) is developed. First, a multi-scale 11-layer 3D fully convolutional network (FCN) is used to screen for all lung nodule candidates. Considering the relatively small sizes of lung nodules and limited memory, the input of the FCN consists of 3D image patches rather than whole images. The candidates are then classified by a second CNN to obtain the final result. The proposed method achieves high performance in the LUNA16 challenge and demonstrates the effectiveness of using 3D deep CNNs for lung nodule detection.
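Because the FCN described above takes 3D patches rather than whole volumes, a sketch of the implied tiling step may be useful; the patch size and stride are illustrative assumptions.

```python
# Tile a CT volume into overlapping 3D patches for patch-based inference.
import numpy as np

def extract_patches(volume, size=64, stride=32):
    patches, coords = [], []
    for z in range(0, volume.shape[0] - size + 1, stride):
        for y in range(0, volume.shape[1] - size + 1, stride):
            for x in range(0, volume.shape[2] - size + 1, stride):
                patches.append(volume[z:z+size, y:y+size, x:x+size])
                coords.append((z, y, x))       # keep origin for stitching
    return np.stack(patches), coords
```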

A novel approach to 2D/3D registration of X-ray images using Grangeat's relation

  • Frysch, R.
  • Pfeiffer, T.
  • Rose, G.
Med Image Anal 2020 Journal Article, cited 0 times
Website
Fast and accurate 2D/3D registration plays an important role in many applications, ranging from scientific and engineering domains all the way to medical care. Today's predominant methods are based on computationally expensive approaches, such as virtual forward or back projections, that limit the real-time applicability of the routines. Here, we present a novel concept that makes use of Grangeat's relation to intertwine information from the 3D volume and the 2D projection space in a way that allows pre-computation of all time-intensive steps. The main effort within actual registration tasks is reduced to simple resampling of the pre-calculated values, which can be executed rapidly on modern GPU hardware. We analyze the applicability of the proposed method on simulated data under various conditions and evaluate the findings on real data from a C-arm CT scanner. Our results show high registration quality in both simulated as well as real data scenarios and demonstrate a reduction in computation time for the crucial computation step by a factor of six to eight when compared to state-of-the-art routines. With minor trade-offs in accuracy, this speed-up can even be increased up to a factor of 100 in particular settings. To our knowledge, this is the first application of Grangeat's relation to the topic of 2D/3D registration. Due to its high computational efficiency and broad range of potential applications, we believe it constitutes a highly relevant approach for various problems dealing with cone beam transmission images.

Supervised Machine-Learning Framework and Classifier Evaluation for Automated Three-dimensional Medical Image Segmentation based on Body MRI

  • Frischmann, Patrick
2013 Thesis, cited 0 times
Website

Classification of COVID-19 in chest radiographs: assessing the impact of imaging parameters using clinical and simulated images

  • Fricks, Rafael
  • Abadi, Ehsan
  • Ria, Francesco
  • Samei, Ehsan
  • Drukker, Karen
  • Mazurowski, Maciej A.
2021 Conference Paper, cited 1 times
Website
As computer-aided diagnostics develop to address new challenges in medical imaging, including emerging diseases such as COVID-19, initial development is hampered by the availability of imaging data. Deep learning algorithms are particularly notorious for performance that tends to improve proportionally to the amount of available data. Simulated images, as available through advanced virtual trials, may present an alternative in data-constrained applications. We begin with our previously trained COVID-19 x-ray classification model (denoted as CVX) that leveraged additional training with existing pre-pandemic chest radiographs to improve classification performance on a set of COVID-19 chest radiographs. The CVX model achieves demonstrably better performance on clinical images compared to an equivalent model that applies standard transfer learning from ImageNet weights. The higher-performing CVX model is then shown to generalize effectively to a set of simulated COVID-19 images, both in quantitative comparisons of AUCs between clinical and simulated image sets and in a qualitative sense, with saliency map patterns that are consistent between sets. We then stratify the classification results on simulated images to examine dependencies on imaging parameters when patient features are held constant. Simulated images show promise for optimizing imaging parameters for accurate classification in data-constrained applications.

Memory Efficient Brain Tumor Segmentation Using an Autoencoder-Regularized U-Net

  • Frey, Markus
  • Nau, Matthias
2020 Book Section, cited 13 times
Website
Early diagnosis and accurate segmentation of brain tumors are imperative for successful treatment. Unfortunately, manual segmentation is time-consuming, costly and, despite extensive human expertise, often inaccurate. Here, we present an MRI-based tumor segmentation framework using an autoencoder-regularized 3D convolutional neural network. We trained the model on manually segmented structural T1, T1ce, T2, and FLAIR MRI images of 335 patients with tumors of variable severity, size and location. We then tested the model using independent data from 125 patients and successfully segmented brain tumors into three subregions: the tumor core (TC), the enhancing tumor (ET) and the whole tumor (WT). We also explored several data augmentations and preprocessing steps to improve segmentation performance. Importantly, our model was implemented on a single NVIDIA GTX1060 graphics unit and hence optimizes tumor segmentation for widely affordable hardware. In sum, we present a memory-efficient and affordable solution to tumor segmentation to support the accurate diagnostics of oncological brain pathologies.
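Autoencoder-regularized training of the kind described here typically combines a Dice segmentation term with a VAE-style reconstruction and KL term on an auxiliary decoder branch; the sketch below shows that general recipe with illustrative weights, not the authors' exact loss.

```python
# Combined loss for an autoencoder-regularized segmentation network:
# Dice term + reconstruction L2 + KL divergence on the VAE branch.
import torch

def combined_loss(seg_pred, seg_target, recon, image, mu, logvar,
                  w_recon=0.1, w_kl=0.1):
    inter = (seg_pred * seg_target).sum()
    dice = 1 - 2 * inter / (seg_pred.sum() + seg_target.sum() + 1e-8)
    recon_l2 = ((recon - image) ** 2).mean()           # reconstruction term
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return dice + w_recon * recon_l2 + w_kl * kl
```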

SABOS-Net: Self-supervised attention based network for automatic organ segmentation of head and neck CT images

  • Francis, S.
  • Pooloth, G.
  • Singam, S. B. S.
  • Puzhakkal, N.
  • Narayanan, P. P.
  • Balakrishnan, J. P.
International Journal of Imaging Systems and Technology 2022 Journal Article, cited 0 times
Website
The segmentation of Organs At Risk (OAR) in Computed Tomography (CT) images is an essential part of the planning phase of radiation treatment, needed to avoid the adverse effects of cancer radiotherapy. Accurate segmentation is a tedious task in the head and neck region due to the large number of small and sensitive organs and the low contrast of CT images. Deep learning-based automatic contouring algorithms can ease this task even when the organs have irregular shapes and size variations. This paper proposes a fully automatic deep learning-based self-supervised 3D Residual UNet architecture with a Convolution Block Attention Mechanism (CBAM) for organ segmentation in head and neck CT images. The Model Genesis structure and image context restoration techniques are used for self-supervision, which helps the network learn image features from unlabeled data and thereby addresses the scarcity of annotated medical data for deep networks. A new loss function integrating Focal loss, Tversky loss, and cross-entropy loss is applied for training. The proposed model outperforms state-of-the-art methods in terms of Dice similarity coefficient in segmenting the organs. Our self-supervised model achieved a 4% increase in the Dice score for the chiasm, a small organ present in only a few CT slices. The proposed model exhibited better accuracy for 5 out of 7 OARs than recent state-of-the-art models, and could segment all seven organs simultaneously in an average time of 0.02 s. The source code of this work is made publicly available.

Breast Lesion Segmentation in DCE- MRI Imaging

  • Frackiewicz, Mariusz
  • Koper, Zuzanna
  • Palus, Henryk
  • Borys, Damian
  • Psiuk-Maksymowicz, Krzysztof
2018 Conference Proceedings, cited 0 times
Website
Breast cancer is one of the most common cancers in women. The disease is typically asymptomatic in its early stages. Breast imaging examinations allow early detection of the cancer, which is associated with increased chances of a complete cure. There are many breast imaging techniques, such as mammography (MM), ultrasound imaging (US), positron-emission tomography (PET), computed tomography (CT), and magnetic resonance imaging (MRI). These imaging techniques differ in terms of effectiveness, price, the underlying physical phenomenon, the impact on the patient and their availability. In this paper, we focus on MRI and compare three breast lesion segmentation algorithms tested on the publicly available QIN Breast DCE-MRI database. The obtained values of the Dice and Jaccard indices favour the segmentation based on the k-means algorithm.
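The Dice and Jaccard indices used for the comparison are standard overlap measures; a minimal sketch follows.

```python
# Dice and Jaccard overlap indices between a predicted and reference mask.
import numpy as np

def dice_jaccard(pred, ref):
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    dice = 2.0 * inter / (pred.sum() + ref.sum() + 1e-8)
    jaccard = inter / (union + 1e-8)
    return dice, jaccard
```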

3D MRI Brain Tumour Segmentation with Autoencoder Regularization and Hausdorff Distance Loss Function

  • Fonov, Vladimir S.
  • Rosa-Neto, Pedro
  • Collins, D. Louis
2022 Conference Paper, cited 0 times
Website
Manual segmentation of glioblastoma is a challenging task for radiologists, yet essential for treatment planning. In recent years, deep convolutional neural networks have been shown to perform exceptionally well; in particular, the winner of the BraTS 2019 challenge uses a 3D U-Net architecture in combination with a variational autoencoder, with the Dice overlap measure as a cost function. In this work, we propose a loss function that approximates the Hausdorff distance metric used to evaluate segmentation performance, in the hope that it will allow better segmentation performance on new data.
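The abstract does not give the loss formula, but one common way to approximate the Hausdorff distance in a differentiable loss is to weight segmentation errors by distance transforms of the prediction and ground truth (after Karimi and Salcudean, 2019); the sketch below illustrates that general idea, not the authors' exact loss.

```python
# Distance-transform-weighted error as a Hausdorff distance surrogate.
import torch
from scipy import ndimage

def hausdorff_dt_loss(pred_probs, target, alpha=2.0):
    # distance transforms are computed outside the autograd graph
    t = target.detach().cpu().numpy().astype(bool)
    p = (pred_probs.detach().cpu().numpy() > 0.5)
    dt_t = torch.from_numpy(ndimage.distance_transform_edt(~t)).to(pred_probs)
    dt_p = torch.from_numpy(ndimage.distance_transform_edt(~p)).to(pred_probs)
    err = (pred_probs - target.float()) ** 2
    return (err * (dt_t ** alpha + dt_p ** alpha)).mean()
```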

Impact of signal intensity normalization of MRI on the generalizability of radiomic-based prediction of molecular glioma subtypes

  • Foltyn-Dumitru, Martha
  • Schell, Marianne
  • Rastogi, Aditya
  • Sahm, Felix
  • Kessler, Tobias
  • Wick, Wolfgang
  • Bendszus, Martin
  • Brugnara, Gianluca
  • Vollmuth, Philipp
European Radiology 2023 Journal Article, cited 0 times
Website
Radiomic features have demonstrated encouraging results for non-invasive detection of molecular biomarkers, but the lack of guidelines for pre-processing MRI data has led to poor generalizability. Here, we assessed the influence of different MRI intensity normalization techniques on the performance of radiomics-based models for predicting molecular glioma subtypes.
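As one concrete example of the normalization techniques such a study compares, z-score normalization within a brain mask is sketched below; the mask handling and epsilon are illustrative assumptions.

```python
# Z-score intensity normalization restricted to a brain mask.
import numpy as np

def zscore_normalize(image, mask):
    vals = image[mask > 0]                     # intensities inside the mask
    return (image - vals.mean()) / (vals.std() + 1e-8)
```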

Computer-aided nodule assessment and risk yield risk management of adenocarcinoma: the future of imaging?

  • Foley, Finbar
  • Rajagopalan, Srinivasan
  • Raghunath, Sushravya M
  • Boland, Jennifer M
  • Karwoski, Ronald A
  • Maldonado, Fabien
  • Bartholmai, Brian J
  • Peikert, Tobias
2016 Conference Proceedings, cited 7 times
Website

Federated Learning Approach with Pre-Trained Deep Learning Models for COVID-19 Detection from Unsegmented CT images

  • Florescu, L. M.
  • Streba, C. T.
  • Serbanescu, M. S.
  • Mamuleanu, M.
  • Florescu, D. N.
  • Teica, R. V.
  • Nica, R. E.
  • Gheonea, I. A.
2022 Journal Article, cited 0 times
Website
(1) Background: Coronavirus disease 2019 (COVID-19) is an infectious disease caused by SARS-CoV-2. Reverse transcription polymerase chain reaction (RT-PCR) remains the current gold standard for detecting SARS-CoV-2 infections in nasopharyngeal swabs. In Romania, the first patient reported to have contracted COVID-19 was officially declared on 26 February 2020. (2) Methods: This study proposes a federated learning approach with pre-trained deep learning models for COVID-19 detection. Three clients were locally deployed, each with its own dataset. The goal of the clients was to collaborate in order to obtain a global model without sharing samples from their datasets. (3) Results: The algorithm we developed was connected to our internal picture archiving and communication system and, after being run retrospectively, it identified chest CT changes suggestive of COVID-19 in a patient investigated in our medical imaging department on 28 January 2020. (4) Conclusions: Based on our results, we recommend using automated AI-assisted software to detect COVID-19 based on lung imaging changes, as an adjuvant diagnostic method to the current gold standard (RT-PCR), in order to greatly enhance the management of these patients and also limit the spread of the disease, not only among the general population but also among healthcare professionals.

The ASNR-ACR-RSNA Common Data Elements Project: What Will It Do for the House of Neuroradiology?

  • Flanders, AE
  • Jordan, JE
American Journal of Neuroradiology 2018 Journal Article, cited 0 times
Website

A Radiogenomic Approach for Decoding Molecular Mechanisms Underlying Tumor Progression in Prostate Cancer

  • Fischer, Sarah
  • Tahoun, Mohamed
  • Klaan, Bastian
  • Thierfelder, Kolja M
  • Weber, Marc-Andre
  • Krause, Bernd J
  • Hakenberg, Oliver
  • Fuellen, Georg
  • Hamed, Mohamed
Cancers (Basel) 2019 Journal Article, cited 0 times
Website
Prostate cancer (PCa) is a genetically heterogeneous cancer entity that causes challenges in pre-treatment clinical evaluation, such as the correct identification of the tumor stage. Conventional clinical tests based on digital rectal examination, Prostate-Specific Antigen (PSA) levels, and Gleason score still lack accuracy for stage prediction. We hypothesize that unraveling the molecular mechanisms underlying PCa staging via integrative analysis of multi-omics data could significantly improve the prediction accuracy for PCa pathological stages. We present a radiogenomic approach comprising clinical, imaging, and two genomic (gene and miRNA expression) datasets for 298 PCa patients. Comprehensive analysis of gene and miRNA expression profiles for two frequent PCa stages (T2c and T3b) unraveled the molecular characteristics for each stage and the corresponding gene regulatory interaction network that may drive tumor upstaging from T2c to T3b. Furthermore, four biomarkers (ANPEP, mir-217, mir-592, mir-6715b) were found to distinguish between the two PCa stages and were highly correlated (average r = ±0.75) with corresponding aggressiveness-related imaging features in both tumor stages. When combined with related clinical features, these biomarkers markedly improved the prediction accuracy for the pathological stage. Our prediction model exhibits high potential to yield clinically relevant results for characterizing PCa aggressiveness.
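The reported biomarker-imaging correlations are rank correlations; a minimal sketch of such a computation with SciPy follows, with illustrative variable names.

```python
# Spearman rank correlation between a biomarker and an imaging feature.
from scipy.stats import spearmanr

def correlate(biomarker_values, imaging_feature_values):
    rho, p_value = spearmanr(biomarker_values, imaging_feature_values)
    return rho, p_value
```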

Prompt tuning for parameter-efficient medical image segmentation

  • Fischer, Marc
  • Bartler, Alexander
  • Yang, Bin
Med Image Anal 2023 Journal Article, cited 1 times
Website
Neural networks pre-trained on a self-supervision scheme have become the standard when operating in data rich environments with scarce annotations. As such, fine-tuning a model to a downstream task in a parameter-efficient but effective way, e.g. for a new set of classes in the case of semantic segmentation, is of increasing importance. In this work, we propose and investigate several contributions to achieve a parameter-efficient but effective adaptation for semantic segmentation on two medical imaging datasets. Relying on the recently popularized prompt tuning approach, we provide a prompt-able UNETR (PUNETR) architecture, that is frozen after pre-training, but adaptable throughout the network by class-dependent learnable prompt tokens. We pre-train this architecture with a dedicated dense self-supervision scheme based on assignments to online generated prototypes (contrastive prototype assignment, CPA) of a student teacher combination. Concurrently, an additional segmentation loss is applied for a subset of classes during pre-training, further increasing the effectiveness of leveraged prompts in the fine-tuning phase. We demonstrate that the resulting method is able to attenuate the gap between fully fine-tuned and parameter-efficiently adapted models on CT imaging datasets. To this end, the difference between fully fine-tuned and prompt-tuned variants amounts to 7.81 pp for the TCIA/BTCV dataset as well as 5.37 and 6.57 pp for subsets of the TotalSegmentator dataset in the mean Dice Similarity Coefficient (DSC, in %) while only adjusting prompt tokens, corresponding to 0.51% of the pre-trained backbone model with 24.4M frozen parameters. The code for this work is available on https://github.com/marcdcfischer/PUNETR.

Computer-aided detection (CADe) and diagnosis (CADx) system for lung cancer with likelihood of malignancy

  • Firmino, Macedo
  • Angelo, Giovani
  • Morais, Higor
  • Dantas, Marcel R
  • Valentim, Ricardo
BioMedical Engineering OnLine 2016 Journal Article, cited 63 times
Website
BACKGROUND: CADe and CADx systems for the detection and diagnosis of lung cancer have been important areas of research in recent decades. However, these areas have been worked on separately. CADe systems do not present the radiological characteristics of tumors, and CADx systems do not detect nodules and do not have good levels of automation. As a result, these systems are not yet widely used in clinical settings. METHODS: The purpose of this article is to develop a new system for the detection and diagnosis of pulmonary nodules on CT images, combining identification and characterization of the nodules in a single system to improve the level of automation. The article also contributes the use of the Watershed and Histogram of Oriented Gradients (HOG) techniques for distinguishing possible nodules from other structures and for feature extraction of pulmonary nodules, respectively. The diagnosis is based on the likelihood of malignancy, providing more support for decision making by radiologists. A rule-based classifier and a Support Vector Machine (SVM) have been used to eliminate false positives. RESULTS: The database used in this research consisted of 420 cases obtained randomly from LIDC-IDRI. The segmentation method achieved an accuracy of 97% and the detection system showed a sensitivity of 94.4% with 7.04 false positives per case. Different types of nodules (isolated, juxtapleural, juxtavascular and ground-glass) with diameters between 3 mm and 30 mm have been detected. For the diagnosis of malignancy, our system presented ROC curves with areas of: 0.91 for nodules highly unlikely to be malignant, 0.80 for nodules moderately unlikely to be malignant, 0.72 for nodules with indeterminate malignancy, 0.67 for nodules moderately suspicious of being malignant and 0.83 for nodules highly suspicious of being malignant. CONCLUSIONS: From our preliminary results, we believe that our system is promising for clinical applications, assisting radiologists in the detection and diagnosis of lung cancer.
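The HOG-plus-SVM false-positive reduction step described above can be sketched as follows; the patch size, HOG parameters and SVM settings are assumptions made for illustration, not the authors' configuration.

```python
# HOG descriptors per candidate patch, classified by an SVM.
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(patches):
    # one HOG descriptor per 2D candidate patch
    return [hog(p, orientations=9, pixels_per_cell=(8, 8),
                cells_per_block=(2, 2)) for p in patches]

clf = SVC(kernel="rbf", probability=True)
# clf.fit(hog_features(train_patches), train_labels)      # fit on labeled candidates
# probs = clf.predict_proba(hog_features(test_patches))   # malignancy likelihoods
```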

LCD-OpenPACS: an integrated teleradiology system with computer-aided diagnosis of pulmonary nodules in computed tomography exams (original title in Portuguese)

  • Firmino Filho, José Macêdo
2015 Thesis, cited 1 times
Website

Generalized Wasserstein Dice Score, Distributionally Robust Deep Learning, and Ranger for Brain Tumor Segmentation: BraTS 2020 Challenge

  • Fidon, Lucas
  • Ourselin, Sébastien
  • Vercauteren, Tom
2021 Book Section, cited 0 times
Training a deep neural network is an optimization problem with four main ingredients: the design of the deep neural network, the per-sample loss function, the population loss function, and the optimizer. However, methods developed to compete in recent BraTS challenges tend to focus only on the design of deep neural network architectures, while paying less attention to the three other aspects. In this paper, we experimented with adopting the opposite approach. We stuck to a generic and state-of-the-art 3D U-Net architecture and experimented with a non-standard per-sample loss function, the generalized Wasserstein Dice loss, a non-standard population loss function, corresponding to distributionally robust optimization, and a non-standard optimizer, Ranger. Those variations were selected specifically for the problem of multi-class brain tumor segmentation. The generalized Wasserstein Dice loss is a per-sample loss function that allows taking advantage of the hierarchical structure of the tumor regions labeled in BraTS. Distributionally robust optimization is a generalization of empirical risk minimization that accounts for the presence of underrepresented subdomains in the training dataset. Ranger is a generalization of the widely used Adam optimizer that is more stable with small batch size and noisy labels. We found that each of those variations of the optimization of deep neural networks for brain tumor segmentation leads to improvements in terms of Dice scores and Hausdorff distances. With an ensemble of three deep neural networks trained with various optimization procedures, we achieved promising results on the validation dataset and the testing dataset of the BraTS 2020 challenge. Our ensemble ranked fourth out of 78 for the segmentation task of the BraTS 2020 challenge with mean Dice scores of 88.9, 84.1, and 81.4, and mean Hausdorff distances at 95% of 6.4, 19.4, and 15.8 for the whole tumor, the tumor core, and the enhancing tumor.

Enhanced Numerical Method for the Design of 3-D-Printed Holographic Acoustic Lenses for Aberration Correction of Single-Element Transcranial Focused Ultrasound

  • Marcelino Ferri
  • José M. Bravo
  • Javier Redondo
  • Juan V. Sánchez-Pérez
Ultrasound in Medicine & Biology 2018 Journal Article, cited 0 times
Website
The correction of transcranial focused ultrasound aberrations is a relevant issue for enhancing various non-invasive medical treatments. The emission through multi-element phased arrays has been the most widely accepted method to improve focusing in recent years; however, the number and size of transducers represent a bottleneck that limits the focusing accuracy of the technique. To overcome this limitation, a new disruptive technology, based on 3-D-printed acoustic lenses, has recently been proposed. As the submillimeter precision of the latest generation of 3-D printers has been proven to overcome the spatial limitations of phased arrays, a new challenge is to improve the accuracy of the numerical simulations required to design this type of ultrasound lens. In the study described here, we evaluated two improvements in the numerical model applied in previous works for the design of 3-D-printed lenses: (i) allowing the propagation of shear waves in the skull by means of its simulation as an isotropic solid and (ii) introduction of absorption into the set of equations that describes the dynamics of the wave in both fluid and solid media. The results of the numerical simulations provide evidence that the inclusion of both s-waves and absorption significantly improves focusing.

On the Evaluation of the Suitability of the Materials Used to 3D Print Holographic Acoustic Lenses to Correct Transcranial Focused Ultrasound Aberrations

  • Ferri, Marcelino
  • Bravo, Jose Maria
  • Redondo, Javier
  • Jimenez-Gambin, Sergio
  • Jimenez, Noe
  • Camarena, Francisco
  • Sanchez-Perez, Juan Vicente
Polymers (Basel) 2019 Journal Article, cited 2 times
Website
The correction of transcranial focused ultrasound aberrations is a relevant topic for enhancing various non-invasive medical treatments. Presently, the most widely accepted method to improve focusing is the emission through multi-element phased arrays; however, a new disruptive technology, based on 3D printed holographic acoustic lenses, has recently been proposed, overcoming the spatial limitations of phased arrays due to the submillimetric precision of the latest generation of 3D printers. This work aims to optimize this recent solution. Particularly, the preferred acoustic properties of the polymers used for printing the lenses are systematically analyzed, paying special attention to the effect of p-wave speed and its relationship to the achievable voxel size of 3D printers. Results from simulations and experiments clearly show that, given a particular voxel size, there are optimal ranges for lens thickness and p-wave speed, fairly independent of the emitted frequency, the transducer aperture, or the transducer-target distance.

Characterization of Pulmonary Nodules Based on Features of Margin Sharpness and Texture

  • Ferreira, José Raniery
  • Oliveira, Marcelo Costa
  • de Azevedo-Marques, Paulo Mazzoncini
2017 Journal Article, cited 1 time
Website

HEVC optimizations for medical environments

  • Fernández, DG
  • Del Barrio, AA
  • Botella, Guillermo
  • García, Carlos
  • Meyer-Baese, Uwe
  • Meyer-Baese, Anke
2016 Conference Proceedings, cited 5 times
Website
HEVC/H.265 is the most interesting and cutting-edge topic in the world of digital video compression, halving the bandwidth required in comparison with the previous H.264 standard. Telemedicine services, and in general any medical video application, can benefit from these advances in video encoding. However, HEVC is computationally expensive to implement. In this paper a method for reducing HEVC complexity in the medical environment is proposed. The sequences typically processed in this context contain several homogeneous regions. By leveraging these regions, it is possible to simplify the HEVC encoding flow while maintaining high quality. In comparison with the HM16.2 reference software, the encoding time is reduced by up to 75%, with a negligible quality loss. Moreover, the algorithm is straightforward to implement on any hardware platform.

Deep Learning Model for Automatic Contouring of Cardiovascular Substructures on Radiotherapy Planning CT Images: Dosimetric Validation and Reader Study based Clinical Acceptability Testing

  • Fernandes, Miguel Garrett
  • Bussink, Johan
  • Stam, Barbara
  • Wijsman, Robin
  • Schinagl, Dominic AX
  • Teuwen, Jonas
  • Monshouwer, René
Radiotherapy and Oncology 2021 Journal Article, cited 0 times
Website

Identifying BAP1 Mutations in Clear-Cell Renal Cell Carcinoma by CT Radiomics: Preliminary Findings

  • Feng, Zhan
  • Zhang, Lixia
  • Qi, Zhong
  • Shen, Qijun
  • Hu, Zhengyu
  • Chen, Feng
Frontiers in Oncology 2020 Journal Article, cited 0 times
Website
This study evaluated the potential application of computed tomography (CT) radiomics for predicting BRCA1-associated protein 1 (BAP1) mutation status in patients with clear-cell renal cell carcinoma (ccRCC). In this retrospective study, clinical and CT imaging data of 54 patients were retrieved from The Cancer Genome Atlas–Kidney Renal Clear Cell Carcinoma database. Among these, 45 patients had wild-type BAP1 and nine patients had BAP1 mutations. The texture features of tumor images were extracted using the Matlab-based IBEX package. To produce class-balanced data and improve the stability of prediction, we performed data augmentation for the BAP1 mutation group during cross-validation. A model to predict BAP1 mutation status was constructed using the Random Forest classification algorithm and evaluated using leave-one-out cross-validation. The Random Forest model predicted BAP1 mutation status with an accuracy of 0.83, a sensitivity of 0.72, a specificity of 0.87, a precision of 0.65, an AUC of 0.77, and an F-score of 0.68. CT radiomics is a potential and feasible method for predicting BAP1 mutation status in patients with ccRCC.
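
A hedged sketch of the stated evaluation protocol (Random Forest assessed with leave-one-out cross-validation): the synthetic features stand in for the IBEX texture features, and class weighting stands in for the paper's augmentation step.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.random((54, 30))             # 54 patients x 30 stand-in texture features
y = np.r_[np.ones(9), np.zeros(45)]  # 1 = BAP1 mutation, 0 = wild type

clf = RandomForestClassifier(n_estimators=200, class_weight='balanced',
                             random_state=0)
pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())  # one fold per patient
print('LOOCV accuracy:', (pred == y).mean())
```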

Brain Tumor Segmentation for Multi-Modal MRI with Missing Information

  • Feng, X.
  • Ghimire, K.
  • Kim, D. D.
  • Chandra, R. S.
  • Zhang, H.
  • Peng, J.
  • Han, B.
  • Huang, G.
  • Chen, Q.
  • Patel, S.
  • Bettagowda, C.
  • Sair, H. I.
  • Jones, C.
  • Jiao, Z.
  • Yang, L.
  • Bai, H.
J Digit Imaging 2023 Journal Article, cited 0 times
Website
Deep convolutional neural networks (DCNNs) have shown promise in brain tumor segmentation from multi-modal MRI sequences, accommodating heterogeneity in tumor shape and appearance. The fusion of multiple MRI sequences allows networks to explore complementary tumor information for segmentation. However, developing a network that maintains clinical relevance in situations where certain MRI sequence(s) might be unavailable or unusual poses a significant challenge. While one solution is to train multiple models with different MRI sequence combinations, it is impractical to train every model from all possible sequence combinations. In this paper, we propose a DCNN-based brain tumor segmentation framework incorporating a novel sequence dropout technique in which networks are trained to be robust to missing MRI sequences while employing all other available sequences. Experiments were performed on the RSNA-ASNR-MICCAI BraTS 2021 Challenge dataset. When all MRI sequences were available, there were no significant differences in the performance of the model with and without dropout for enhancing tumor (ET), tumor core (TC), and whole tumor (WT) (p-values 1.000, 1.000, and 0.799, respectively), demonstrating that the addition of dropout improves robustness without hindering overall performance. When key sequences were unavailable, the network with sequence dropout performed significantly better. For example, when tested on only T1, T2, and FLAIR sequences together, DSC for ET, TC, and WT increased from 0.143 to 0.486, 0.431 to 0.680, and 0.854 to 0.901, respectively. Sequence dropout represents a relatively simple yet effective approach for brain tumor segmentation with missing MRI sequences.
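
The sequence-dropout idea can be sketched as randomly zeroing whole input channels during training. The function below is an assumption-laden illustration, not the authors' code; the drop probability and the minimum-keep rule are my choices.

```python
import torch

def sequence_dropout(x, p=0.25, min_keep=1):
    """x: (batch, sequences, D, H, W). Zero each MRI sequence with
    probability p, always keeping at least `min_keep` sequences."""
    b, s = x.shape[:2]
    keep = torch.rand(b, s, device=x.device) > p
    for i in range(b):  # guarantee at least one sequence per sample
        if keep[i].sum() < min_keep:
            keep[i, torch.randint(s, (1,))] = True
    return x * keep.view(b, s, 1, 1, 1).to(x.dtype)

x = torch.randn(2, 4, 8, 8, 8)  # e.g. T1, T1c, T2, FLAIR channels
out = sequence_dropout(x)       # applied only at training time
```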

Brain Tumor Segmentation with Uncertainty Estimation and Overall Survival Prediction

  • Feng, Xue
  • Dou, Quan
  • Tustison, Nicholas
  • Meyer, Craig
2020 Book Section, cited 16 times
Website
Accurate segmentation of the different sub-regions of gliomas, including peritumoral edema, necrotic core, and enhancing and non-enhancing tumor core, from multimodal MRI scans has important clinical relevance in the diagnosis, prognosis and treatment of brain tumors. However, due to their highly heterogeneous appearance and shape, segmentation of the sub-regions is very challenging. Recent developments using deep learning models have proved effective in past brain segmentation challenges as well as other semantic and medical image segmentation problems. Most models in brain tumor segmentation use a 2D/3D patch to predict the class label for the center voxel, with varying patch sizes and scales used to improve model performance. However, this approach has low computational efficiency and a limited receptive field. U-Net is a widely used network structure for end-to-end segmentation that can be applied to the entire image or to extracted patches to provide classification labels for all input voxels, so it is more efficient and is expected to yield better performance with larger input sizes. In this paper we developed a deep-learning-based segmentation method using an ensemble of 3D U-Nets with different hyper-parameters. Furthermore, we estimated the uncertainty of the segmentation from the probabilistic outputs of each network and studied the correlation between the uncertainty and the performance. Preliminary results showed the effectiveness of the segmentation model. Finally, we developed a linear model for survival prediction using extracted imaging and non-imaging features, which, despite its simplicity, can effectively reduce overfitting and regression errors.

Brain Tumor Segmentation with Patch-Based 3D Attention UNet from Multi-parametric MRI

  • Feng, Xue
  • Bai, Harrison
  • Kim, Daniel
  • Maragkos, Georgios
  • Machaj, Jan
  • Kellogg, Ryan
2022 Book Section, cited 0 times
Website
Accurate segmentation of the different sub-regions of gliomas, including peritumoral edema, necrotic core, and enhancing and non-enhancing tumor core, from multiparametric MRI scans has important clinical relevance in the diagnosis, prognosis and treatment of brain tumors. However, due to their highly heterogeneous appearance and shape, segmentation of the sub-regions is very challenging. Recent developments using deep learning models have proved effective in past brain segmentation challenges as well as other semantic and medical image segmentation problems. In this paper we developed a deep-learning-based segmentation method using a patch-based 3D UNet with an attention block. Hyper-parameter tuning and training- and testing-time augmentation were applied to increase model performance. Preliminary results showed the effectiveness of the segmentation model, which achieved mean Dice scores of 0.806 (ET), 0.863 (TC) and 0.918 (WT) on the validation dataset.

Comparison of methods for sensitivity correction in Talbot-Lau computed tomography

  • Felsner, L.
  • Roser, P.
  • Maier, A.
  • Riess, C.
Int J Comput Assist Radiol Surg 2021 Journal Article, cited 0 times
Website
PURPOSE: In Talbot-Lau X-ray phase contrast imaging, the measured phase value depends on the position of the object in the measurement setup. When imaging large objects, this may lead to inhomogeneous phase contributions within the object. These inhomogeneities introduce artifacts in tomographic reconstructions of the object. METHODS: In this work, we compare recently proposed approaches to correct such reconstruction artifacts. We compare an iterative reconstruction algorithm, a known operator network and a U-net. The methods are qualitatively and quantitatively compared on the Shepp-Logan phantom and on the anatomy of a human abdomen. We also perform a dedicated experiment on the noise behavior of the methods. RESULTS: All methods were able to reduce the specific artifacts in the reconstructions for the simulated and virtual real anatomy data. The results show method-specific residual errors that are indicative of the inherently different correction approaches. While all methods were able to correct the artifacts, we report a different noise behavior. CONCLUSION: The iterative reconstruction performs very well, but at the cost of a high runtime. The known operator network consistently shows very competitive performance. The U-net performs slightly worse, but has the benefit that it is a general-purpose network that does not require special application knowledge.

An annotated test-retest collection of prostate multiparametric MRI

  • Fedorov, Andriy
  • Schwier, Michael
  • Clunie, David
  • Herz, Christian
  • Pieper, Steve
  • Kikinis, Ron
  • Tempany, Clare
  • Fennessy, Fiona
Scientific data 2018 Journal Article, cited 0 times
Website

A comparison of two methods for estimating DCE-MRI parameters via individual and cohort based AIFs in prostate cancer: A step towards practical implementation

  • Fedorov, Andriy
  • Fluckiger, Jacob
  • Ayers, Gregory D
  • Li, Xia
  • Gupta, Sandeep N
  • Tempany, Clare
  • Mulkern, Robert
  • Yankeelov, Thomas E
  • Fennessy, Fiona M
Magnetic resonance imaging 2014 Journal Article, cited 30 times
Website
Multi-parametric Magnetic Resonance Imaging, and specifically Dynamic Contrast Enhanced (DCE) MRI, play increasingly important roles in detection and staging of prostate cancer (PCa). One of the actively investigated approaches to DCE MRI analysis involves pharmacokinetic (PK) modeling to extract quantitative parameters that may be related to microvascular properties of the tissue. It is well-known that the prescribed arterial blood plasma concentration (or Arterial Input Function, AIF) input can have significant effects on the parameters estimated by PK modeling. The purpose of our study was to investigate such effects in DCE MRI data acquired in a typical clinical PCa setting. First, we investigated how the choice of a semi-automated or fully automated image-based individualized AIF (iAIF) estimation method affects the PK parameter values; and second, we examined the use of method-specific averaged AIF (cohort-based, or cAIF) as a means to attenuate the differences between the two AIF estimation methods. Two methods for automated image-based estimation of individualized (patient-specific) AIFs, one of which was previously validated for brain and the other for breast MRI, were compared. cAIFs were constructed by averaging the iAIF curves over the individual patients for each of the two methods. Pharmacokinetic analysis using the Generalized kinetic model and each of the four AIF choices (iAIF and cAIF for each of the two image-based AIF estimation approaches) was applied to derive the volume transfer rate (K(trans)) and extravascular extracellular volume fraction (ve) in the areas of prostate tumor. Differences between the parameters obtained using iAIF and cAIF for a given method (intra-method comparison) as well as inter-method differences were quantified. The study utilized DCE MRI data collected in 17 patients with histologically confirmed PCa. Comparison at the level of the tumor region of interest (ROI) showed that the two automated methods resulted in significantly different (p<0.05) mean estimates of ve, but not of K(trans). Comparing cAIFs, different estimates were obtained for both ve and K(trans). Intra-method comparison between the iAIF- and cAIF-driven analyses showed the lack of effect on ve, while K(trans) values were significantly different for one of the methods. Our results indicate that the choice of the algorithm used for automated image-based AIF determination can lead to significant differences in the values of the estimated PK parameters. K(trans) estimates are more sensitive to the choice between cAIF/iAIF as compared to ve, leading to potentially significant differences depending on the AIF method. These observations may have practical consequences in evaluating the PK analysis results obtained in a multi-site setting.
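
The "Generalized kinetic model" referenced here is the standard Tofts formulation, which relates the tissue contrast-agent concentration C_t(t) to the arterial input C_p(t) through the two reported parameters:

```latex
C_t(t) = K^{trans} \int_0^t C_p(\tau)\, e^{-k_{ep}(t-\tau)}\, d\tau,
\qquad k_{ep} = \frac{K^{trans}}{v_e}
```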

DICOM for quantitative imaging biomarker development: a standards based approach to sharing clinical data and structured PET/CT analysis results in head and neck cancer research

  • Fedorov, Andriy
  • Clunie, David
  • Ulrich, Ethan
  • Bauer, Christian
  • Wahle, Andreas
  • Brown, Bartley
  • Onken, Michael
  • Riesmeier, Jörg
  • Pieper, Steve
  • Kikinis, Ron
  • Buatti, John
  • Beichel, Reinhard R
PeerJ 2016 Journal Article, cited 20 times
Website
Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with the clinical systems. Comparison with the established outcomes and evaluation tasks motivate integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM®) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed. The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions. Conversion and visualization tools utilizing this toolkit were developed. The encoded objects were validated for consistency and interoperability. The resulting dataset was deposited in the QIN-HEADNECK collection of The Cancer Imaging Archive (TCIA). Supporting tools for data analysis and DICOM conversion were made available as free open-source software. Discussion. We presented a detailed investigation of the development and application of the DICOM model, as well as the supporting open-source tools and toolkits, to accommodate representation of the research data in QI biomarker development. We demonstrated that the DICOM standard can be used to represent the types of data relevant in HNC QI biomarker development, and encode their complex relationships. The resulting annotated objects are amenable to data mining applications, and are interoperable with a variety of systems that support the DICOM standard.

Quantitative Imaging Informatics for Cancer Research

  • Fedorov, Andrey
  • Beichel, Reinhard
  • Kalpathy-Cramer, Jayashree
  • Clunie, David
  • Onken, Michael
  • Riesmeier, Jorg
  • Herz, Christian
  • Bauer, Christian
  • Beers, Andrew
  • Fillion-Robin, Jean-Christophe
  • Lasso, Andras
  • Pinter, Csaba
  • Pieper, Steve
  • Nolden, Marco
  • Maier-Hein, Klaus
  • Herrmann, Markus D
  • Saltz, Joel
  • Prior, Fred
  • Fennessy, Fiona
  • Buatti, John
  • Kikinis, Ron
JCO Clin Cancer Inform 2020 Journal Article, cited 0 times
Website
PURPOSE: We summarize Quantitative Imaging Informatics for Cancer Research (QIICR; U24 CA180918), one of the first projects funded by the National Cancer Institute (NCI) Informatics Technology for Cancer Research program. METHODS: QIICR was motivated by the 3 use cases from the NCI Quantitative Imaging Network. 3D Slicer was selected as the platform for implementation of open-source quantitative imaging (QI) tools. Digital Imaging and Communications in Medicine (DICOM) was chosen for standardization of QI analysis outputs. Support of improved integration with community repositories focused on The Cancer Imaging Archive (TCIA). Priorities included improved capabilities of the standard, toolkits and tools, reference datasets, collaborations, and training and outreach. RESULTS: Fourteen new tools to support head and neck cancer, glioblastoma, and prostate cancer QI research were introduced and downloaded over 100,000 times. DICOM was amended, with over 40 correction proposals addressing QI needs. Reference implementations of the standard in a popular toolkit and standalone tools were introduced. Eight datasets exemplifying the application of the standard and tools were contributed. An open demonstration/connectathon was organized, attracting the participation of academic groups and commercial vendors. Integration of tools with TCIA was improved by implementing programmatic communication interface and by refining best practices for QI analysis results curation. CONCLUSION: Tools, capabilities of the DICOM standard, and datasets we introduced found adoption and utility within the cancer imaging community. A collaborative approach is critical to addressing challenges in imaging informatics at the national and international levels. Numerous challenges remain in establishing and maintaining the infrastructure of analysis tools and standardized datasets for the imaging community. Ideas and technology developed by the QIICR project are contributing to the NCI Imaging Data Commons currently being developed.

B2C3NetF2: Breast cancer classification using an end‐to‐end deep learning feature fusion and satin bowerbird optimization controlled Newton Raphson feature selection

  • Fatima, Mamuna
  • Khan, Muhammad Attique
  • Shaheen, Saima
  • Almujally, Nouf Abdullah
  • Wang, Shui‐Hua
2023 Journal Article, cited 0 times
Website
Currently, improvements in AI are mainly driven by deep learning techniques employed for the classification, identification, and quantification of patterns in clinical images. Deep learning models show more remarkable performance than traditional methods for medical image processing tasks, such as skin cancer, colorectal cancer, brain tumour, cardiac disease, and breast cancer (BrC). Manual diagnosis of medical issues always requires an expert and is also expensive; therefore, developing computer diagnosis techniques based on deep learning is essential. Breast cancer is the most frequently diagnosed cancer in females, with a rapidly growing incidence; the number of patients with BrC is estimated to rise by 70% in the next 20 years. If diagnosed at a later stage, the survival rate of patients with BrC is low. Hence, early detection is essential, increasing the survival rate to 50%. A new framework for BrC classification is presented that utilises deep learning and feature optimization. The significant steps of the presented framework include (i) hybrid contrast enhancement of acquired images, (ii) data augmentation to facilitate better learning of the Convolutional Neural Network (CNN) model, (iii) a pre-trained ResNet-101 model, utilised and modified according to the selected dataset classes, (iv) deep transfer learning based model training for feature extraction, (v) fusion of features using the proposed highly corrected function-controlled canonical correlation analysis approach, and (vi) optimal feature selection using the modified Satin Bowerbird Optimization controlled Newton Raphson algorithm, with final classification by 10 machine learning classifiers. Experiments with the proposed framework were carried out on the most critical and publicly available dataset, CBIS-DDSM, and obtained a best accuracy of 94.5% along with improved computation time. The comparison depicts that the presented method surpasses current state-of-the-art approaches.

Radiogenomic Prediction of MGMT Using Deep Learning with Bayesian Optimized Hyperparameters

  • Farzana, Walia
  • Temtam, Ahmed G.
  • Shboul, Zeina A.
  • Rahman, M. Monibor
  • Sadique, M. Shibly
  • Iftekharuddin, Khan M.
2022 Book Section, cited 0 times
Website
Glioblastoma (GBM) is the most aggressive primary brain tumor. The standard radiotherapeutic treatment for newly diagnosed GBM patients is Temozolomide (TMZ). O6-methylguanine-DNA-methyltransferase (MGMT) gene methylation status is a genetic biomarker for patient response to the treatment and is associated with a longer survival time. The standard method of assessing genetic alteration is surgical resection, which is invasive and time-consuming. Recently, imaging genomics has shown the potential to associate imaging phenotypes with genetic alterations, providing an opportunity for noninvasive assessment of treatment response. Accordingly, we propose a convolutional neural network (CNN) framework with Bayesian-optimized hyperparameters for the prediction of MGMT status from multimodal magnetic resonance imaging (mMRI). The goal of the proposed method is to predict MGMT status noninvasively. Using the RSNA-MICCAI dataset, the proposed framework achieves areas under the curve (AUC) of 0.718 and 0.477 for the validation and testing phases, respectively.

Signal intensity analysis of ecological defined habitat in soft tissue sarcomas to predict metastasis development

  • Farhidzadeh, Hamidreza
  • Chaudhury, Baishali
  • Scott, Jacob G
  • Goldgof, Dmitry B
  • Hall, Lawrence O
  • Gatenby, Robert A
  • Gillies, Robert J
  • Raghavan, Meera
2016 Conference Proceedings, cited 6 times
Website
Magnetic Resonance Imaging (MRI) is the standard of care in the clinic for diagnosis and follow up of Soft Tissue Sarcomas (STS) which presents an opportunity to explore the heterogeneity inherent in these rare tumors. Tumor heterogeneity is a challenging problem to quantify and has been shown to exist at many scales, from genomic to radiomic, existing both within an individual tumor, between tumors from the same primary in the same patient and across different patients. In this paper, we propose a method which focuses on spatially distinct sub-regions or habitats in the diagnostic MRI of patients with STS by using pixel signal intensity. Habitat characteristics likely represent areas of differing underlying biology within the tumor, and delineation of these differences could provide clinically relevant information to aid in selecting a therapeutic regimen (chemotherapy or radiation). To quantify tumor heterogeneity, first we assay intra-tumoral segmentations based on signal intensity and then build a spatial mapping scheme from various MRI modalities. Finally, we predict clinical outcomes, using in this paper the appearance of distant metastasis - the most clinically meaningful endpoint. After tumor segmentation into high and low signal intensities, a set of quantitative imaging features based on signal intensity is proposed to represent variation in habitat characteristics. This set of features is utilized to predict metastasis in a cohort of STS patients. We show that this framework, using only pre-therapy MRI, predicts the development of metastasis in STS patients with 72.41% accuracy, providing a starting point for a number of clinical hypotheses.
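
A minimal sketch of the habitat idea, assuming an Otsu threshold as a stand-in for the paper's signal-intensity split of the tumor ROI; data, mask, and the summary features are illustrative.

```python
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)
tumor = rng.normal(100, 25, (40, 40))   # stand-in MRI signal in the ROI
mask = np.ones_like(tumor, dtype=bool)  # tumor segmentation mask

t = threshold_otsu(tumor[mask])         # split into two intensity habitats
high = mask & (tumor >= t)              # high-signal habitat
low = mask & (tumor < t)                # low-signal habitat
features = {
    'high_fraction': high.sum() / mask.sum(),  # relative habitat volume
    'high_mean': tumor[high].mean(),
    'low_mean': tumor[low].mean(),
}
```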

A study of machine learning and deep learning models for solving medical imaging problems

  • Farhat, Fadi G.
2019 Thesis, cited 0 times
Website
Application of machine learning and deep learning methods to medical imaging aims to create systems that can help in the diagnosis of disease and automate the analysis of medical images in order to facilitate treatment planning. Deep learning methods do well in image recognition, but medical images present unique challenges. The lack of large amounts of data, the image size, and the high class imbalance in most datasets make training a machine learning model to recognize a particular pattern, typically present only in case images, a formidable task. Experiments are conducted to classify breast cancer images as healthy or non-healthy, and to detect lesions in damaged brain MRI (Magnetic Resonance Imaging) scans. Random Forest, Logistic Regression and Support Vector Machine perform competitively in the classification experiments, but in general, deep neural networks beat all conventional methods. Gaussian Naïve Bayes (GNB) and the Lesion Identification with Neighborhood Data Analysis (LINDA) methods produce better lesion detection results than single-path neural networks, but a multi-modal, multi-path deep neural network beats all other methods. The importance of pre-processing training data is also highlighted and demonstrated, especially for medical images, which require extensive preparation to improve classifier and detector performance. Only a more complex and deeper neural network combined with properly pre-processed data can produce the desired accuracy levels that can rival, and maybe exceed, those of human experts.

Recurrent Attention Network for False Positive Reduction in the Detection of Pulmonary Nodules in Thoracic CT Scans

  • M. Mehdi Farhangi
  • Nicholas Petrick
  • Berkman Sahiner
  • Hichem Frigui
  • Amir A. Amini
  • Aria Pezeshk
Med Phys 2020 Journal Article, cited 0 times
Website
PURPOSE: Multi-view 2-D Convolutional Neural Networks (CNNs) and 3-D CNNs have been successfully used for analyzing volumetric data in many state-of-the-art medical imaging applications. We propose an alternative modular framework that analyzes volumetric data with an approach that is analogous to radiologists' interpretation, and apply the framework to reduce false positives that are generated in Computer-Aided Detection (CADe) systems for pulmonary nodules in thoracic CT scans. METHODS: In our approach, a deep network consisting of 2-D CNNs first processes slices individually. The features extracted in this stage are then passed to a Recurrent Neural Network (RNN), thereby modeling consecutive slices as a sequence of temporal data and capturing the contextual information across all three dimensions in the volume of interest. Outputs of the RNN layer are weighed before the final fully connected layer, enabling the network to scale the importance of different slices within a volume of interest in an end-to-end training framework. RESULTS: We validated the proposed architecture on the false positive reduction track of the Lung Nodule Analysis (LUNA) challenge for pulmonary nodule detection in chest CT scans, and obtained competitive results compared to 3-D CNNs. Our results show that the proposed approach can encode the 3-D information in volumetric data effectively by achieving a sensitivity > 0.8 with just 1/8 false positives per scan. CONCLUSIONS: Our experimental results demonstrate the effectiveness of temporal analysis of volumetric images for the application of false positive reduction in chest CT scans and show that state-of-the-art 2-D architectures from the literature can be directly applied to analyzing volumetric medical data. As newer and better 2-D architectures are being developed at a much faster rate compared to 3-D architectures, our approach makes it easy to obtain state-of-the-art performance on volumetric data using new 2-D architectures.
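
A rough PyTorch sketch of the described architecture: a 2D CNN encodes each slice, an RNN (a GRU here) models the slice sequence, and learned weights rescale slice outputs before the classifier. Layer sizes are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SliceRNN(nn.Module):
    def __init__(self, feat=32, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                 # per-slice 2D encoder
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, feat, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.rnn = nn.GRU(feat, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)         # per-slice weight
        self.head = nn.Linear(hidden, 2)          # nodule vs. false positive

    def forward(self, x):                         # x: (B, S, 1, H, W)
        b, s = x.shape[:2]
        f = self.cnn(x.flatten(0, 1)).view(b, s, -1)  # slice features
        h, _ = self.rnn(f)                        # contextual slice states
        w = torch.softmax(self.score(h), dim=1)   # weigh slices end-to-end
        return self.head((w * h).sum(dim=1))

logits = SliceRNN()(torch.randn(2, 9, 1, 32, 32))  # 9 slices per candidate
```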

Computational Challenges and Collaborative Projects in the NCI Quantitative Imaging Network

  • Farahani, Keyvan
  • Kalpathy-Cramer, Jayashree
  • Chenevert, Thomas L
  • Rubin, Daniel L
  • Sunderland, John J
  • Nordstrom, Robert J
  • Buatti, John
  • Hylton, Nola
Tomography 2016 Journal Article, cited 2 times
Website
The Quantitative Imaging Network (QIN) of the National Cancer Institute (NCI) conducts research in the development and validation of imaging tools and methods for predicting and evaluating clinical response to cancer therapy. Members of the network are involved in examining various imaging and image assessment parameters through network-wide cooperative projects. To more effectively use the cooperative power of the network in conducting computational challenges in benchmarking of tools and methods and collaborative projects in analytical assessment of imaging technologies, the QIN Challenge Task Force has developed policies and procedures to enhance the value of these activities by developing guidelines and leveraging NCI resources to help with their administration and manage dissemination of results. Challenges and Collaborative Projects (CCPs) are further divided into technical and clinical CCPs. As the first NCI network to engage in CCPs, we anticipate a variety of CCPs to be conducted by QIN teams in the coming years. These will aim to benchmark advanced software tools for clinical decision support, explore new imaging biomarkers for therapeutic assessment, and establish consensus on a range of methods and protocols in support of the use of quantitative imaging to predict and assess response to cancer therapy.

Hybrid intelligent approach for diagnosis of the lung nodule from CT images using spatial kernelized fuzzy c-means and ensemble learning

  • Farahani, Farzad Vasheghani
  • Ahmadi, Abbas
  • Zarandi, Mohammad Hossein Fazel
Mathematics and Computers in Simulation 2018 Journal Article, cited 1 times
Website

Feature fusion for lung nodule classification

  • Farag, Amal A
  • Ali, Asem
  • Elshazly, Salwa
  • Farag, Aly A
International Journal of Computer Assisted Radiology and Surgery 2017 Journal Article, cited 3 times
Website

Image analysis-based tumor infiltrating lymphocytes measurement predicts breast cancer pathologic complete response in SWOG S0800 neoadjuvant chemotherapy trial

  • Fanucci, Kristina A.
  • Bai, Yalai
  • Pelekanou, Vasiliki
  • Nahleh, Zeina A.
  • Shafi, Saba
  • Burela, Sneha
  • Barlow, William E.
  • Sharma, Priyanka
  • Thompson, Alastair M.
  • Godwin, Andrew K.
  • Rimm, David L.
  • Hortobagyi, Gabriel N.
  • Liu, Yihan
  • Wang, Leona
  • Wei, Wei
  • Pusztai, Lajos
  • Blenman, Kim R. M.
npj Breast Cancer 2023 Journal Article, cited 0 times
Website
We assessed the predictive value of an image analysis-based tumor-infiltrating lymphocytes (TILs) score for pathologic complete response (pCR) and event-free survival in breast cancer (BC). Pretreatment samples from 113 patients with stage IIB-IIIC HER-2-negative BC randomized to neoadjuvant chemotherapy ± bevacizumab were analyzed. TILs quantification was performed on full sections using QuPath open-source software with a convolutional neural network cell classifier (CNN11). We used easTILs% as a digital metric of the TILs score, defined as [sum of lymphocyte area (mm2)/stromal area (mm2)] × 100. The pathologist-read stromal TILs score (sTILs%) was determined following published guidelines. Mean pretreatment easTILs% was significantly higher in cases with pCR compared to residual disease (median 36.1 vs. 14.8%, p < 0.001). We observed a strong positive correlation (r = 0.606, p < 0.0001) between easTILs% and sTILs%. The area under the prediction curve (AUC) was higher for easTILs% than for sTILs%, 0.709 and 0.627, respectively. Image analysis-based TILs quantification is predictive of pCR in BC and had better response discrimination than pathologist-read sTILs%.
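
The easTILs% metric as defined above is straightforward to compute once the two areas have been measured (e.g., in QuPath); a one-function sketch with made-up example values:

```python
def eastils_percent(lymphocyte_area_mm2, stromal_area_mm2):
    """easTILs% = (sum of lymphocyte area / stromal area) x 100."""
    return lymphocyte_area_mm2 / stromal_area_mm2 * 100.0

print(eastils_percent(1.8, 5.0))  # 36.0, in the range reported for pCR cases
```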

Resolution enhancement for lung 4D-CT based on transversal structures by using multiple Gaussian process regression learning

  • Fang, Shiting
  • Hu, Runyue
  • Yuan, Xinrui
  • Liu, Shangqing
  • Zhang, Yuan
Phys Med 2020 Journal Article, cited 0 times
Website
PURPOSE: Four-dimensional computed tomography (4D-CT) plays a useful role in many clinical situations. However, due to hardware limitations, dense sampling along the superior-inferior direction is often not practical. In this paper, we develop a novel multiple Gaussian process regression model to enhance the superior-inferior resolution of lung 4D-CT based on transversal structures. METHODS: The proposed strategy is based on the observation that high-resolution transversal images can recover missing pixels in the superior-inferior direction. Based on this observation, and motivated by the random forest algorithm, we employ a multiple Gaussian process regression model learned from transversal images to improve the superior-inferior resolution. Specifically, we first randomly sample 3 x 3 patches from the original transversal images. The central pixel of these patches and the eight-neighbour pixels of their corresponding degraded versions form the label and input of the training data, respectively. A multiple Gaussian process regression model is then built on the basis of multiple training subsets obtained by random sampling. Finally, the central pixel of each patch is estimated based on the proposed model, with the eight-neighbour pixels of each 3 x 3 patch from the interpolated superior-inferior direction images as inputs. RESULTS: The performance of our method is extensively evaluated using simulated and publicly available datasets. Our experiments show the remarkable performance of the proposed method. CONCLUSIONS: In this paper, we propose a new approach to improve 4D-CT resolution, which does not require any external data or hardware support, and can produce clear coronal/sagittal images for easy viewing.
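
One regressor of the proposed ensemble might be sketched as follows, assuming scikit-learn's Gaussian process regression and, for brevity, training pairs built from clean 3 x 3 patches rather than the degraded versions described in the paper:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
patches = rng.random((200, 3, 3))  # stand-in patches from transversal slices
# Inputs: the eight neighbours (flattened patch without index 4, the center)
X = patches.reshape(200, 9)[:, [0, 1, 2, 3, 5, 6, 7, 8]]
y = patches[:, 1, 1]               # labels: the center pixel of each patch

gpr = GaussianProcessRegressor(kernel=RBF(), alpha=1e-3).fit(X, y)
center_hat = gpr.predict(X[:5])    # reconstruct missing center pixels
```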

EGFR/SRC/ERK-stabilized YTHDF2 promotes cholesterol dysregulation and invasive growth of glioblastoma

  • Fang, Runping
  • Chen, Xin
  • Zhang, Sicong
  • Shi, Hui
  • Ye, Youqiong
  • Shi, Hailing
  • Zou, Zhongyu
  • Li, Peng
  • Guo, Qing
  • Ma, Li
Nature Communications 2021 Journal Article, cited 14 times
Website

Tumour heterogeneity revealed by unsupervised decomposition of dynamic contrast-enhanced magnetic resonance imaging is associated with underlying gene expression patterns and poor survival in breast cancer patients

  • Fan, M.
  • Xia, P.
  • Liu, B.
  • Zhang, L.
  • Wang, Y.
  • Gao, X.
  • Li, L.
Breast Cancer Res 2019 Journal Article, cited 3 times
Website
BACKGROUND: Heterogeneity is a common finding within tumours. We evaluated the imaging features of tumours based on the decomposition of tumoural dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data to identify their prognostic value for breast cancer survival and to explore their biological importance. METHODS: Imaging features (n = 14), such as texture, histogram distribution and morphological features, were extracted to determine their associations with recurrence-free survival (RFS) in patients in the training cohort (n = 61) from The Cancer Imaging Archive (TCIA). The prognostic value of the features was evaluated in an independent dataset of 173 patients (i.e. the reproducibility cohort) from the TCIA I-SPY 1 TRIAL dataset. Radiogenomic analysis was performed in an additional cohort, the radiogenomic cohort (n = 87), using DCE-MRI from TCGA-BRCA and corresponding gene expression data from The Cancer Genome Atlas (TCGA). The MRI tumour area was decomposed by convex analysis of mixtures (CAM), resulting in 3 components that represent plasma input, fast-flow kinetics and slow-flow kinetics. The prognostic MRI features were associated with the gene expression module in which the pathway was analysed. Furthermore, a multigene signature for each prognostic imaging feature was built, and the prognostic value for RFS and overall survival (OS) was confirmed in an additional cohort from TCGA. RESULTS: Three image features (i.e. the maximum probability from the precontrast MR series, the median value from the second postcontrast series and the overall tumour volume) were independently correlated with RFS (p values of 0.0018, 0.0036 and 0.0032, respectively). The maximum probability feature from the fast-flow kinetics subregion was also significantly associated with RFS and OS in the reproducibility cohort. Additionally, this feature had a high correlation with the gene expression module (r = 0.59), and the pathway analysis showed that Ras signalling, a breast cancer-related pathway, was significantly enriched (corrected p value = 0.0044). Gene signatures (n = 43) associated with the maximum probability feature were assessed for associations with RFS (p = 0.035) and OS (p = 0.027) in an independent dataset containing 1010 gene expression samples. Among the 43 gene signatures, Ras signalling was also significantly enriched. CONCLUSIONS: Dynamic pattern deconvolution revealed that tumour heterogeneity was associated with poor survival and cancer-related pathways in breast cancer.

Radiogenomic analysis of cellular tumor-stroma heterogeneity as a prognostic predictor in breast cancer

  • Fan, M.
  • Wang, K.
  • Zhang, Y.
  • Ge, Y.
  • Lu, Z.
  • Li, L.
2023 Journal Article, cited 0 times
Website
BACKGROUND: The tumor microenvironment and intercellular communication between solid tumors and the surrounding stroma play crucial roles in cancer initiation, progression, and prognosis. Radiomics provides clinically relevant information from radiological images; however, its biological implications in uncovering tumor pathophysiology driven by cellular heterogeneity between the tumor and stroma are largely unknown. We aimed to identify radiogenomic signatures of cellular tumor-stroma heterogeneity (TSH) to improve breast cancer management and prognosis analysis. METHODS: This retrospective multicohort study included five datasets. Cell subpopulations were estimated using bulk gene expression data, and the relative difference in cell subpopulations between the tumor and stroma was used as a biomarker to categorize patients into good- and poor-survival groups. A radiogenomic signature-based model utilizing dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) was developed to target TSH, and its clinical significance in relation to survival outcomes was independently validated. RESULTS: The final cohorts of 1330 women were included for cellular TSH biomarker identification (n = 112; mean age, 57.3 ± 14.6 years) and validation (n = 886; mean age, 58.9 ± 13.1 years), radiogenomic signature of TSH identification (n = 91; mean age, 55.5 ± 11.4 years), and prognostic (n = 241) assessments. The cytotoxic lymphocyte biomarker differentiated patients into good- and poor-survival groups (p < 0.0001) and was independently validated (p = 0.014). The good survival group exhibited denser cell interconnections. The radiogenomic signature of TSH was identified and showed a positive association with overall survival (p = 0.038) and recurrence-free survival (p = 3 × 10^-4). CONCLUSION: Radiogenomic signatures provide insights into prognostic factors that reflect the imbalanced tumor-stroma environment, thereby presenting breast cancer-specific biological implications and prognostic significance.

Detection of effective genes in colon cancer: A machine learning approach

  • Fahami, Mohammad Amin
  • Roshanzamir, Mohamad
  • Izadi, Navid Hoseini
  • Keyvani, Vahideh
  • Alizadehsani, Roohallah
Informatics in Medicine Unlocked 2021 Journal Article, cited 0 times
Website
Nowadays, a variety of cancers have become common among humans and are unfortunately the cause of death for many people. Early detection and diagnosis of cancers can have a significant impact on patient survival and treatment cost reduction. Colon cancer is the third and second leading cause of cancer death in women and men worldwide, respectively. Hence, many researchers have been trying to provide new methods for the early diagnosis of colon cancer. In this study, we apply statistical hypothesis tests such as the t-test and Mann–Whitney–Wilcoxon test and machine learning methods such as neural networks, KNN and decision trees to detect the most effective genes in the vital status of colon cancer patients. We normalize the dataset using a new two-step method. In the first step, the genes within each sample (patient) are normalized to have zero mean and unit variance. In the second step, normalization is done for each gene across the whole dataset. Analyzing the results shows that this normalization method is more efficient than the others and improves the overall performance of the research. Afterwards, we apply unsupervised learning methods to find meaningful structures in colon cancer gene expression. In this regard, the dimensionality of the dataset is reduced by employing Principal Component Analysis (PCA). Next, we cluster the patients according to the PCA-extracted features. We then check the labeling results of the unsupervised learning methods using different supervised learning algorithms. Finally, we determine the genes which have a major impact on the colon cancer mortality rate in each cluster. Our study is the first to suggest that colon cancer patients can be categorized into two clusters. In each cluster, 20 effective genes were extracted which can be important for the early diagnosis of colon cancer.
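
The two-step normalization described above reduces to two z-scoring passes; a minimal numpy sketch, assuming rows are patients and columns are genes:

```python
import numpy as np

def two_step_normalize(expr, eps=1e-8):
    # Step 1: per sample (row): zero mean, unit variance across its genes
    expr = ((expr - expr.mean(axis=1, keepdims=True))
            / (expr.std(axis=1, keepdims=True) + eps))
    # Step 2: per gene (column): normalize across the whole dataset
    return (expr - expr.mean(axis=0)) / (expr.std(axis=0) + eps)

expr = np.random.default_rng(0).random((30, 500))  # 30 patients x 500 genes
norm = two_step_normalize(expr)
```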

A Comparison of Three Different Deep Learning-Based Models to Predict the MGMT Promoter Methylation Status in Glioblastoma Using Brain MRI

  • Faghani, S.
  • Khosravi, B.
  • Moassefi, M.
  • Conte, G. M.
  • Erickson, B. J.
J Digit Imaging 2023 Journal Article, cited 0 times
Website
Glioblastoma (GBM) is the most common primary malignant brain tumor in adults. The standard treatment for GBM consists of surgical resection followed by concurrent chemoradiotherapy and adjuvant temozolomide. O-6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status is an important prognostic biomarker that predicts the response to temozolomide and guides treatment decisions. At present, the only reliable way to determine MGMT promoter methylation status is through the analysis of tumor tissue. Considering the complications of tissue-based methods, an imaging-based approach is preferred. This study aimed to compare three different deep learning-based approaches for predicting MGMT promoter methylation status. We obtained 576 T2WIs with their corresponding tumor masks and MGMT promoter methylation status from The Brain Tumor Segmentation (BraTS) 2021 dataset. We developed three different models: voxel-wise, slice-wise, and whole-brain. For voxel-wise classification, methylated and unmethylated MGMT tumor masks were labeled 1 and 2, respectively, with 0 as background. We converted each T2WI into 32 × 32 × 32 patches and trained a 3D-Vnet model for tumor segmentation. After inference, we reconstructed the whole brain volume from the patch coordinates. The final prediction of MGMT methylation status was made by majority voting over the predicted voxel values of the biggest connected component. For slice-wise classification, we trained an object detection model for tumor detection and MGMT methylation status prediction, and used majority voting for the final prediction. For the whole-brain approach, we trained a 3D DenseNet121 for prediction. Whole-brain, slice-wise, and voxel-wise accuracies were 65.42% (SD 3.97%), 61.37% (SD 1.48%), and 56.84% (SD 4.38%), respectively.
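
The voxel-wise decision rule (majority vote over the biggest connected component) can be sketched with scipy; the predicted volume and label convention below are stand-ins mirroring the abstract (1 = methylated, 2 = unmethylated, 0 = background):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
pred = np.zeros((32, 32, 32), dtype=int)
pred[8:16, 8:16, 8:16] = rng.integers(1, 3, (8, 8, 8))  # a predicted tumor

comp, n = ndimage.label(pred > 0)                 # connected components
sizes = ndimage.sum(pred > 0, comp, index=range(1, n + 1))
biggest = comp == (np.argmax(sizes) + 1)          # largest component mask
vals = pred[biggest]
mgmt = 'methylated' if (vals == 1).sum() >= (vals == 2).sum() else 'unmethylated'
```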

Deep Learning-Based Concurrent Brain Registration and Tumor Segmentation

  • Estienne, T.
  • Lerousseau, M.
  • Vakalopoulou, M.
  • Alvarez Andres, E.
  • Battistella, E.
  • Carre, A.
  • Chandra, S.
  • Christodoulidis, S.
  • Sahasrabudhe, M.
  • Sun, R.
  • Robert, C.
  • Talbot, H.
  • Paragios, N.
  • Deutsch, E.
Front Comput Neurosci 2020 Journal Article, cited 15 times
Website
Image registration and segmentation are the two most studied problems in medical image analysis. Deep learning algorithms have recently gained a lot of attention due to their success and state-of-the-art results in variety of problems and communities. In this paper, we propose a novel, efficient, and multi-task algorithm that addresses the problems of image registration and brain tumor segmentation jointly. Our method exploits the dependencies between these tasks through a natural coupling of their interdependencies during inference. In particular, the similarity constraints are relaxed within the tumor regions using an efficient and relatively simple formulation. We evaluated the performance of our formulation both quantitatively and qualitatively for registration and segmentation problems on two publicly available datasets (BraTS 2018 and OASIS 3), reporting competitive results with other recent state-of-the-art methods. Moreover, our proposed framework reports significant amelioration (p < 0.005) for the registration performance inside the tumor locations, providing a generic method that does not need any predefined conditions (e.g., absence of abnormalities) about the volumes to be registered. Our implementation is publicly available online at https://github.com/TheoEst/joint_registration_tumor_segmentation.

Towards Fully Automatic X-Ray to CT Registration

  • Esteban, Javier
  • Grimm, Matthias
  • Unberath, Mathias
  • Zahnd, Guillaume
  • Navab, Nassir
2019 Journal Article, cited 3 times
Website
The main challenge preventing fully automatic X-ray to CT registration is the need for an initialization scheme that brings the X-ray pose within the capture range of existing intensity-based registration methods. By providing such an automatic initialization, the present study introduces the first end-to-end fully automatic registration framework. A network is first trained once on artificial X-rays to extract 2D landmarks resulting from the projection of CT labels. A patient-specific refinement scheme is then carried out: candidate points detected from a new set of artificial X-rays are back-projected onto the patient CT and merged into a refined, meaningful set of landmarks used for network re-training. This network-landmarks combination is finally exploited for intraoperative pose initialization with a runtime of 102 ms. Evaluated on 6 pelvis anatomies (486 images in total), the mean Target Registration Error was 15.0±7.3 mm. When used to initialize the BOBYQA optimizer with normalized cross-correlation, the average (± STD) projection distance was 3.4±2.3 mm, and the registration success rate (projection distance <2.5% of the detector width) greater than 97%.

Comparison of Accuracy of Color Spaces in Cell Features Classification in Images of Leukemia types ALL and MM

  • Espinoza-Del Angel, Cinthia
  • Femat-Diaz, Aurora
2022 Journal Article, cited 0 times
Website
This study presents a methodology for identifying the color space that provides the best performance in an image processing application. When measurements are performed without selecting the appropriate color model, the accuracy of the results is considerably altered. This is especially significant when a diagnosis is based on stained cell microscopy images. This work shows how the proper selection of the color model provides better characterization of two types of cancer, acute lymphoid leukemia and multiple myeloma. The methodology uses images from a public database. First, the nuclei are segmented, and then statistical moments are calculated for class identification. Afterwards, a principal component analysis is performed to reduce the extracted features and identify the most significant ones. Finally, the predictive model is evaluated using the k-nearest neighbor algorithm and a confusion matrix. For the images used, the results showed that the CIE L*a*b color space best characterized the analyzed cancer types, with an average accuracy of 95.52%. The RGB and CMY spaces followed with an accuracy of 91.81%. The HSI and HSV spaces had accuracies of 87.86% and 89.39%, respectively, and the worst performer was grayscale with an accuracy of 55.56%.
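
A hedged sketch of the described pipeline: CIE L*a*b* conversion, per-channel statistical moments, and k-nearest-neighbour classification. The exact moment set and the stand-in data are assumptions, not taken from the paper.

```python
import numpy as np
from skimage.color import rgb2lab
from sklearn.neighbors import KNeighborsClassifier

def moments(rgb_patch):
    lab = rgb2lab(rgb_patch)               # (H, W, 3) in CIE L*a*b*
    feats = []
    for c in range(3):                     # mean, std, skewness per channel
        ch = lab[..., c].ravel()
        sd = ch.std() + 1e-8
        feats += [ch.mean(), sd, ((ch - ch.mean()) ** 3).mean() / sd ** 3]
    return feats

rng = np.random.default_rng(0)
patches = rng.random((40, 32, 32, 3))      # stand-in nucleus crops in [0, 1]
y = rng.integers(0, 2, 40)                 # 0 = ALL, 1 = MM (illustrative)
X = np.array([moments(p) for p in patches])
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
```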

Explainable Artificial Intelligence for Human-Machine Interaction in Brain Tumor Localization

  • Esmaeili, Morteza
  • Vettukattil, Riyas
  • Banitalebi, Hasan
  • Krogh, Nina R
  • Geitung, Jonn Terje
J Pers Med 2021 Journal Article, cited 0 times
Website
Primary malignancies in adult brains are globally fatal. Computer vision, especially recent developments in artificial intelligence (AI), have created opportunities to automatically characterize and diagnose tumor lesions in the brain. AI approaches have provided scores of unprecedented accuracy in different image analysis tasks, including differentiating tumor-containing brains from healthy brains. AI models, however, perform as a black box, concealing the rational interpretations that are an essential step towards translating AI imaging tools into clinical routine. An explainable AI approach aims to visualize the high-level features of trained models or integrate into the training process. This study aims to evaluate the performance of selected deep-learning algorithms on localizing tumor lesions and distinguishing the lesion from healthy regions in magnetic resonance imaging contrasts. Despite a significant correlation between classification and lesion localization accuracy (R = 0.46, p = 0.005), the known AI algorithms, examined in this study, classify some tumor brains based on other non-relevant features. The results suggest that explainable AI approaches can develop an intuition for model interpretability and may play an important role in the performance evaluation of deep learning models. Developing explainable AI approaches will be an essential tool to improve human-machine interactions and assist in the selection of optimal training methods.

Computer-aided detection of Pulmonary Nodules based on SVM in thoracic CT images

  • Eskandarian, Parinaz
  • Bagherzadeh, Jamshid
2015 Conference Proceedings, cited 12 times
Website
Computer-aided diagnosis of solitary pulmonary nodules in X-ray CT images supports the early detection of lung cancer. In this study, a computer-aided system for the detection of pulmonary nodules on CT scans, based on a support vector machine classifier, is provided for the diagnosis of solitary pulmonary nodules. In the first step, the volume of data is reduced by data mining techniques. The chest region is then segmented, suspicious nodule candidates are identified, and finally nodules are detected. Compared with threshold-based methods, the support vector machine classifier describes the lung regions more accurately. In this study, the false positive rate is reduced by combining thresholding with the support vector machine classifier. Experimental results based on data from 147 patients in the LIDC image database show that the proposed system achieves a sensitivity of 89.9% with 3.9 false positives per scan. In comparison to previous systems, the proposed system demonstrates good performance.

New prognostic factor telomerase reverse transcriptase promotor mutation presents without MR imaging biomarkers in primary glioblastoma

  • Ersoy, Tunc F
  • Keil, Vera C
  • Hadizadeh, Dariusch R
  • Gielen, Gerrit H
  • Fimmers, Rolf
  • Waha, Andreas
  • Heidenreich, Barbara
  • Kumar, Rajiv
  • Schild, Hans H
  • Simon, Matthias
Neuroradiology 2017 Journal Article, cited 1 times
Website
PURPOSE: Magnetic resonance (MR) imaging biomarkers can assist in the non-invasive assessment of the genetic status in glioblastomas (GBMs). Telomerase reverse transcriptase (TERT) promoter mutations are associated with a negative prognosis. This study was performed to identify MR imaging biomarkers to forecast the TERT mutation status. METHODS: Pre-operative MRIs of 64/67 genetically confirmed primary GBM patients (51/67 TERT-mutated with rs2853669 polymorphism) were analyzed according to Visually AcceSAble Rembrandt Images (VASARI) ( https://wiki.cancerimagingarchive.net/display/Public/VASARI+Research+Project ) imaging criteria by three radiological raters. TERT mutation and O(6)-methylguanine-DNA methyltransferase (MGMT) hypermethylation data were obtained through direct and pyrosequencing as described in a previous study. Clinical data were derived from a prospectively maintained electronic database. Associations of potential imaging biomarkers and genetic status were assessed by Fisher and Mann-Whitney U tests and stepwise linear regression. RESULTS: No imaging biomarkers could be identified to predict TERT mutational status (alone or in conjunction with TERT promoter polymorphism rs2853669 AA-allele). TERT promoter mutations were more common in patients with tumor-associated seizures as first symptom (26/30 vs. 25/37, p = 0.07); these showed significantly smaller tumors [13.1 (9.0-19.0) vs. 24.0 (16.6-37.5) all cm(3); p = 0.007] and prolonged median overall survival [17.0 (11.5-28.0) vs. 9.0 (4.0-12.0) all months; p = 0.02]. TERT-mutated GBMs were underrepresented in the extended angularis region (p = 0.03), whereas MGMT-methylated GBMs were overrepresented in the corpus callosum (p = 0.03) and underrepresented temporomesially (p = 0.01). CONCLUSION: Imaging biomarkers for prediction of TERT mutation status remain weak and cannot be derived from the VASARI protocol. Tumor-associated seizures are less common in TERT mutated glioblastomas.

Fusing clinical and image data for detecting the severity level of hospitalized symptomatic COVID-19 patients using hierarchical model

  • Ershadi, Mohammad Mahdi
  • Rise, Zeinab Rahimi
2023 Journal Article, cited 0 times
Website
Purpose Based on medical reports, it is hard to determine the severity levels of different hospitalized symptomatic COVID-19 patients from their features in a short time. Besides, there are common and special features for COVID-19 patients at different levels, based on physicians’ knowledge, that make diagnosis difficult. For this purpose, a hierarchical model is proposed in this paper based on experts’ knowledge, fuzzy C-means (FCM) clustering, and an adaptive neuro-fuzzy inference system (ANFIS) classifier. Methods Experts considered a special set of features for different groups of COVID-19 patients to find their treatment plans. Accordingly, the structure of the proposed hierarchical model is designed based on experts’ knowledge. In the proposed model, we applied clustering methods to patients’ data to determine some clusters. Then, we learn classifiers for each cluster in a hierarchical model. Given the different common and special features of patients, FCM is chosen as the clustering method. Besides, ANFIS had better performance than other classification methods. Therefore, FCM and ANFIS were chosen to design the proposed hierarchical model. FCM finds the membership degree of each patient’s data with respect to the common and special features of the different clusters to reinforce the ANFIS classifier. Next, ANFIS identifies whether hospitalized symptomatic COVID-19 patients need the ICU and whether or not they are in the end-stage (mortality target class). Two real datasets about COVID-19 patients are analyzed in this paper using the proposed model. One of these datasets had only clinical features; the other had both clinical and image features, so appropriate features are extracted using image processing and deep learning methods. Results According to the results and a statistical test, the proposed model has the best performance among the other utilized classifiers. Its accuracies based on the clinical features of the first and second datasets are 92% and 90% for the ICU target class. Extracted features of the image data increase the accuracy to 94%. Conclusion The accuracy of this model is even better for detecting the mortality target class than the other classifiers in this paper and in the literature. Besides, this model is compatible with the utilized datasets about COVID-19 patients based on clinical data as well as on combined clinical and image data. Highlights • A new hierarchical model is proposed using ANFIS classifiers and the FCM clustering method (a minimal sketch of the FCM membership step follows this entry). Its structure is designed based on experts’ knowledge and the real medical process. FCM reinforces the ANFIS classification learning phase based on the features of COVID-19 patients. • Two real datasets about COVID-19 patients are studied in this paper. One of these datasets has both clinical and image data; therefore, appropriate features are extracted from its image data and considered together with the available meaningful clinical data. Different levels of hospitalized symptomatic COVID-19 patients are considered in this paper, including whether patients need the ICU and whether or not they are in the end-stage. 
• Well-known classification methods including case-based reasoning (CBR), decision tree, convolutional neural networks (CNN), K-nearest neighbors (KNN), learning vector quantization (LVQ), multi-layer perceptron (MLP), Naive Bayes (NB), radial basis function network (RBF), support vector machine (SVM), recurrent neural networks (RNN), fuzzy type-I inference system (FIS), and adaptive neuro-fuzzy inference system (ANFIS) are designed for these datasets, and their results are analyzed for different random groups of the train and test data. • Given the unbalanced datasets used, different performance measures of the classifiers, including accuracy, sensitivity, specificity, precision, F-score, and G-mean, are compared to find the best classifier. ANFIS classifiers have the best results for both datasets. • To reduce the computational time, the effects of the Principal Component Analysis (PCA) feature reduction method are studied on the performance of the proposed model and classifiers. According to the results and a statistical test, the proposed hierarchical model has the best performance among the utilized classifiers.
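
The FCM membership step that feeds the ANFIS classifiers can be written down compactly. Below is a minimal NumPy sketch of the standard FCM membership update; the cluster count, the fuzzifier m, and the toy feature matrix are illustrative assumptions, not values from the paper.

    # Minimal sketch of the fuzzy C-means membership computation.
    import numpy as np

    def fcm_memberships(X, centers, m=2.0, eps=1e-9):
        # distances of every sample to every cluster centre, shape (n, c)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
        # standard FCM membership: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        power = 2.0 / (m - 1.0)
        return 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** power, axis=2)

    X = np.random.rand(6, 3)        # 6 patients, 3 clinical features (toy data)
    centers = np.random.rand(2, 3)  # 2 clusters (e.g. ICU-like / non-ICU-like)
    print(fcm_memberships(X, centers).round(3))  # each row sums to 1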

Sparse View Deep Differentiated Backprojection for Circular Trajectories in CBCT

  • Ernst, Philipp
  • Rose, Georg
  • Nürnberger, Andreas
2021 Conference Paper, cited 0 times
Website
In this paper, we present a method for removing streak artifacts from reconstructions of sparse cone beam CT (CBCT) projections along circular trajectories. The differentiated backprojection on 2-D planes is combined with convolutional neural networks for both artifact reduction and the ill-posed inversion of the Hilbert transform. Undersampling errors occur at different stages of the algorithm, so the influence of applying the neural networks at these stages is investigated. Spectral blending is used to combine coronal and sagittal planes to a full 3-D reconstruction. Experimental results show that using a neural network to reconstruct a plane-of-interest from the differentiated backprojection of few projections works best by additionally providing FDK reconstructed planes to the network. This approach reduces streaking and cone beam artifacts compared to the direct FDK reconstruction and is also superior to post-processing CNNs.

Sinogram upsampling using Primal-Dual UNet for undersampled CT and radial MRI reconstruction

  • Ernst, P.
  • Chatterjee, S.
  • Rose, G.
  • Speck, O.
  • Nurnberger, A.
2023 Journal Article, cited 6 times
Website
Computed tomography (CT) and magnetic resonance imaging (MRI) are two widely used clinical imaging modalities for non-invasive diagnosis. However, both of these modalities come with certain problems. CT uses harmful ionising radiation, and MRI suffers from slow acquisition speed. Both problems can be tackled by undersampling, such as sparse sampling. However, such undersampled data leads to lower resolution and introduces artefacts. Several techniques, including deep learning based methods, have been proposed to reconstruct such data. However, the undersampled reconstruction problems for these two modalities were always considered as two different problems and tackled separately by different research works. This paper proposes a unified solution for both sparse CT and undersampled radial MRI reconstruction, achieved by applying Fourier transform-based pre-processing on the radial MRI and then finally reconstructing both modalities using sinogram upsampling combined with filtered back-projection. The Primal-Dual network is a deep learning based method for reconstructing sparsely-sampled CT data. This paper introduces Primal-Dual UNet, which improves the Primal-Dual network in terms of accuracy and reconstruction speed. The proposed method resulted in an average SSIM of 0.932+/-0.021 while performing sparse CT reconstruction for fan-beam geometry with a sparsity level of 16, achieving a statistically significant improvement over the previous model, which resulted in 0.919+/-0.016. Furthermore, the proposed model resulted in 0.903+/-0.019 and 0.957+/-0.023 average SSIM while reconstructing undersampled brain and abdominal MRI data with an acceleration factor of 16, respectively - statistically significant improvements over the original model, which resulted in 0.867+/-0.025 and 0.949+/-0.025. Finally, this paper shows that the proposed network not only improves the overall image quality, but also improves the image quality for the regions-of-interest: liver, kidneys, and spleen; as well as generalises better than the baselines in the presence of a needle.
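
The sinogram-upsampling-plus-FBP pipeline described here is easy to prototype in its classical, non-learned form. The sketch below linearly upsamples a sparse parallel-beam sinogram along the angle axis and reconstructs with filtered back-projection; plain interpolation stands in for the learned Primal-Dual UNet upsampler, and the phantom, geometry and sparsity level are illustrative assumptions.

    # Minimal sketch: angle-axis sinogram upsampling followed by FBP.
    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon, resize

    image = shepp_logan_phantom()
    sparse_angles = np.linspace(0.0, 180.0, 23, endpoint=False)  # sparse view set
    sinogram = radon(image, theta=sparse_angles)

    # naive linear upsampling along the angle axis (the paper learns this step)
    dense_n = 360
    sino_up = resize(sinogram, (sinogram.shape[0], dense_n), order=1, mode="edge")
    dense_angles = np.linspace(0.0, 180.0, dense_n, endpoint=False)

    recon = iradon(sino_up, theta=dense_angles, filter_name="ramp")
    print("reconstruction shape:", recon.shape)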

Analysis of Computed Tomography Images of Lung Cancer Patients with The Marker Controlled Based Method

  • Erkoc, Merve
  • Icer, Semra
2022 Conference Paper, cited 0 times
In this study, we aimed to obtain the tumor region from computed tomography images, after a number of pre-processing steps, using Marker-Controlled watershed segmentation. For this purpose, tumor segmentation was performed on four different datasets. Segmentation success was analyzed with the Jaccard index in terms of similarity to the reference images. The index averaged 0.8231 for the RIDER lung CT dataset, 0.8365 for the lung 1 dataset, 0.8578 for the lung 3 dataset and 0.8641 for the LIDC-IDRI dataset. Our current work on the practical and successful segmentation of lung tumors is promising for next steps.
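
The Jaccard index used for evaluation here is a short computation over binary masks. A minimal sketch with synthetic masks:

    # Minimal sketch of the Jaccard (intersection-over-union) overlap measure.
    import numpy as np

    def jaccard(pred, ref):
        pred, ref = pred.astype(bool), ref.astype(bool)
        inter = np.logical_and(pred, ref).sum()
        union = np.logical_or(pred, ref).sum()
        return inter / union if union else 1.0

    a = np.zeros((64, 64), bool); a[10:40, 10:40] = True  # segmentation result
    b = np.zeros((64, 64), bool); b[15:45, 12:42] = True  # reference mask
    print(f"Jaccard index: {jaccard(a, b):.4f}")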

Multisite Image Data Collection and Management Using the RSNA Image Sharing Network

  • Erickson, Bradley J
  • Fajnwaks, Patricio
  • Langer, Steve G
  • Perry, John
Translational Oncology 2014 Journal Article, cited 3 times
Website
The execution of a multisite trial frequently includes image collection. The Clinical Trials Processor (CTP) makes removal of protected health information highly reliable. It also provides reliable transfer of images to a central review site. Trials using central review of imaging should consider using CTP for handling image data when a multisite trial is being designed.

Radiology and Enterprise Medical Imaging Extensions (REMIX)

  • Erdal, Barbaros S
  • Prevedello, Luciano M
  • Qian, Songyue
  • Demirer, Mutlu
  • Little, Kevin
  • Ryu, John
  • O’Donnell, Thomas
  • White, Richard D
2017 Journal Article, cited 1 times
Website

Attention P-Net for Segmentation of Post-operative Glioblastoma in MRI

  • Enlund Åström, Isabelle
2019 Thesis, cited 0 times
Website
Segmentation of post-operative glioblastoma is important for follow-up treatment. In this thesis, Fully Convolutional Networks (FCN) are utilised together with attention modules for segmentation of post-operative glioblastoma in MR images. Attention-based modules help the FCN to focus on relevant features and improve segmentation results. Channel and spatial attention combines both the spatial context and the semantic information in MR images. P-Net is used as a backbone for creating an architecture with existing bottleneck attention modules, named attention P-Net. The proposed network and competing techniques were evaluated on an Uppsala University database containing T1-weighted MR images of the brain from 12 subjects. The proposed framework shows substantial improvement over the existing techniques.
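
A channel-plus-spatial attention block of the kind described can be sketched in a few lines of PyTorch. This is a generic CBAM-style module under assumed layer sizes, not the thesis' exact bottleneck attention module.

    # Minimal sketch of a combined channel + spatial attention block.
    import torch
    import torch.nn as nn

    class ChannelSpatialAttention(nn.Module):
        def __init__(self, channels, reduction=8):
            super().__init__()
            self.mlp = nn.Sequential(                 # channel attention MLP
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
            )
            self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

        def forward(self, x):
            b, c, _, _ = x.shape
            avg = self.mlp(x.mean(dim=(2, 3)))        # pooled channel statistics
            mx = self.mlp(x.amax(dim=(2, 3)))
            x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
            s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
            return x * torch.sigmoid(self.spatial(s))  # spatial attention map

    feat = torch.randn(2, 32, 48, 48)                  # feature map from the FCN
    print(ChannelSpatialAttention(32)(feat).shape)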

4D robust optimization including uncertainties in time structures can reduce the interplay effect in proton pencil beam scanning radiation therapy

  • Engwall, Erik
  • Fredriksson, Albin
  • Glimelius, Lars
Medical Physics 2018 Journal Article, cited 2 times
Website

A Computer Aided Diagnosis System for Lung Cancer Detection Using SVM

  • Emirzade, Erkan
2016 Thesis, cited 1 times
Website
Computer aided diagnosis is starting to be implemented broadly in the diagnosis and detection of many varieties of abnormalities acquired during various imaging procedures. The main aim of CAD systems is to increase the accuracy and decrease the time of diagnosis, while their general goals are to locate nodules and to determine their characteristic features. As lung cancer is one of the most fatal and leading cancer types, there have been plenty of studies on the usage of CAD systems to detect lung cancer. Yet, CAD systems still need considerable development in order to identify the different shapes of nodules, to segment the lungs, and to achieve higher levels of sensitivity, specificity and accuracy. This challenge is the motivation of this study in implementing a CAD system for lung cancer detection. In the study, the LIDC database is used, which comprises an image set of documented thoracic CT scans of lung cancer. The presented CAD system consists of CT image reading, image pre-processing, segmentation, feature extraction and classification steps. To avoid losing important features, the CT images were read in raw form in the DICOM file format. Then, filtration and enhancement techniques were applied as image pre-processing. Otsu’s algorithm, edge detection and morphological operations are applied for the segmentation, followed by the feature extraction step. Finally, a support vector machine with a Gaussian RBF kernel, widely used as a supervised classifier, is utilized for the classification step.
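
The segmentation-and-classification chain named in this abstract (Otsu thresholding, morphological operations, feature extraction, Gaussian-RBF SVM) maps directly onto standard scientific-Python tools. A minimal sketch on a synthetic slice; the two region features and the placeholder labels are illustrative assumptions, not the thesis' feature set.

    # Minimal sketch: Otsu threshold -> morphological clean-up -> features -> RBF SVM.
    import numpy as np
    from skimage.filters import threshold_otsu
    from skimage.measure import label, regionprops
    from skimage.morphology import binary_opening, disk
    from sklearn.svm import SVC

    # synthetic slice with two bright "nodule-like" blobs on a noisy background
    yy, xx = np.mgrid[0:128, 0:128]
    slice_ = (np.exp(-((yy - 40) ** 2 + (xx - 40) ** 2) / 200.0)
              + np.exp(-((yy - 90) ** 2 + (xx - 80) ** 2) / 300.0)
              + 0.1 * np.random.rand(128, 128))

    mask = binary_opening(slice_ > threshold_otsu(slice_), disk(2))
    regions = regionprops(label(mask), intensity_image=slice_)
    X = np.array([(r.area, r.mean_intensity) for r in regions])  # simple features
    y = np.arange(len(X)) % 2                                    # placeholder labels

    clf = SVC(kernel="rbf", gamma="scale").fit(X, y)             # Gaussian RBF SVM
    print("regions found:", len(X), "predictions:", clf.predict(X))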

A Deep Learning Approach to Glioblastoma Radiogenomic Classification Using Brain MRI

  • Emchinov, Aleksandr
2022 Book Section, cited 1 times
Website
A malignant brain tumor known as a glioblastoma is an extremely life-threatening condition. It has been proven that the existence of a specific genetic sequence in the tumor known as MGMT promoter methylation is a favourable prognostic factor and a sign of how well a patient will respond to chemotherapy. Currently, the only way to identify the presence of the MGMT promoter is to perform a genetic analysis that requires surgical intervention. The development of an accurate method for determining the presence of the MGMT promoter using only MRI would help to reduce the number of surgeries. In this work, we developed a method for glioblastoma classification using just MRI by choosing an appropriate loss function, neural network architecture and ensembling trained models. This problem was successfully solved as part of the “RSNA-MICCAI Brain Tumor Radiogenomic Classification” competition, and the proposed algorithm was included in the top 5% of best solutions.

A Novel Hybrid Perceptron Neural Network Algorithm for Classifying Breast MRI Tumors

  • ElNawasany, Amal M
  • Ali, Ahmed Fouad
  • Waheed, Mohamed E
2014 Book Section, cited 3 times
Website
Breast cancer today is the leading cause of death amongst cancer patients afflicting women around the world. Breast cancer is the most common cancer in women worldwide. It is also the principal cause of death from cancer among women globally. Early detection of this disease can greatly enhance the chances of long-term survival of breast cancer victims. Classification of cancer data helps widely in detection of the disease, and it can be achieved using many techniques such as the Perceptron, an Artificial Neural Network (ANN) classification technique. In this paper, we propose a new hybrid algorithm that combines the perceptron algorithm with a feature extraction step based on the Scale Invariant Feature Transform (SIFT) algorithm, in order to classify magnetic resonance imaging (MRI) breast cancer images. The proposed algorithm is called the breast MRI cancer classifier (BMRICC) and it has been tested on 281 MRI breast images (138 abnormal and 143 normal). The numerical results on the general performance of the BMRICC algorithm, and its comparison against 5 benchmark classifiers, show that BMRICC is a promising algorithm and that its performance is better than that of the other algorithms.

Brain Tumor Segmentation Using Deep Capsule Network and Latent-Dynamic Conditional Random Fields

  • Elmezain, M.
  • Mahmoud, A.
  • Mosa, D. T.
  • Said, W.
2022 Journal Article, cited 4 times
Website
Because of the large variabilities in brain tumors, automating segmentation remains a difficult task. We propose an automated method to segment brain tumors by integrating the deep capsule network (CapsNet) and the latent-dynamic conditional random field (LDCRF). The method consists of three main processes to segment the brain tumor: pre-processing, segmentation, and post-processing. In pre-processing, the N4ITK process corrects each MR image's bias field before normalizing the intensity. After that, image patches are used to train CapsNet during the segmentation process. Then, with the CapsNet parameters determined, we employ image slices from an axial view to learn the LDCRF-CapsNet. Finally, we use a simple thresholding method to correct the labels of some pixels and remove small 3D-connected regions from the segmentation outcomes. On the BRATS 2015 and BRATS 2021 datasets, we trained and evaluated our method and discovered that it outperforms and can compete with state-of-the-art methods under comparable conditions.

Trialing U-Net Training Modifications for Segmenting Gliomas Using Open Source Deep Learning Framework

  • Ellis, David G.
  • Aizenberg, Michele R.
2021 Book Section, cited 0 times
Automatic brain segmentation has the potential to save time and resources for researchers and clinicians. We aimed to improve upon previously proposed methods by implementing the U-Net model and trialing various modifications to the training and inference strategies. The trials were performed and tested on the Multimodal Brain Tumor Segmentation dataset that provides MR images of brain tumors along with manual segmentations for hundreds of subjects. The U-Net models were trained on a training set of MR images from 369 subjects and then tested against a validation set of images from 125 subjects. The proposed modifications included predicting the labeled region contours, permutations of the input data via rotation and reflection, grouping labels together, as well as creating an ensemble of models. The ensemble of models provided the best results compared to any of the other methods, but the other modifications did not demonstrate improvement. Future work will look at reducing the level of the training augmentation so that the models are better able to generalize to the validation set. Overall, our open source deep learning framework allowed us to quickly implement and test multiple U-Net training modifications. The code for this project is available at https://github.com/ellisdg/3DUnetCNN.

Diffusion MRI quality control and functional diffusion map results in ACRIN 6677/RTOG 0625: a multicenter, randomized, phase II trial of bevacizumab and chemotherapy in recurrent glioblastoma

  • Ellingson, Benjamin M
  • Kim, Eunhee
  • Woodworth, Davis C
  • Marques, Helga
  • Boxerman, Jerrold L
  • Safriel, Yair
  • McKinstry, Robert C
  • Bokstein, Felix
  • Jain, Rajan
  • Chi, T Linda
  • Sorensen, A Gregory
  • Gilbert, Mark R
  • Barboriak, Daniel P
Int J Oncol 2015 Journal Article, cited 27 times
Website
Functional diffusion mapping (fDM) is a cancer imaging technique that quantifies voxelwise changes in apparent diffusion coefficient (ADC). Previous studies have shown value of fDMs in bevacizumab therapy for recurrent glioblastoma multiforme (GBM). The aim of the present study was to implement explicit criteria for diffusion MRI quality control and independently evaluate fDM performance in a multicenter clinical trial (RTOG 0625/ACRIN 6677). A total of 123 patients were enrolled in the current multicenter trial and signed institutional review board-approved informed consent at their respective institutions. MRI was acquired prior to and 8 weeks following therapy. A 5-point QC scoring system was used to evaluate DWI quality. fDM performance was evaluated according to the correlation of these metrics with PFS and OS at the first follow-up time-point. Results showed ADC variability of 7.3% in NAWM and 10.5% in CSF. A total of 68% of patients had usable DWI data and 47% of patients had high quality DWI data when also excluding patients that progressed before the first follow-up. fDM performance was improved by using only the highest quality DWI. High pre-treatment contrast enhancing tumor volume was associated with shorter PFS and OS. A high volume fraction of increasing ADC after therapy was associated with shorter PFS, while a high volume fraction of decreasing ADC was associated with shorter OS. In summary, DWI in multicenter trials are currently of limited value due to image quality. Improvements in consistency of image quality in multicenter trials are necessary for further advancement of DWI biomarkers.

A genome-wide gain-of-function screen identifies CDKN2C as a HBV host factor

  • Eller, Carla
  • Heydmann, Laura
  • Colpitts, Che C.
  • El Saghire, Houssein
  • Piccioni, Federica
  • Jühling, Frank
  • Majzoub, Karim
  • Pons, Caroline
  • Bach, Charlotte
  • Lucifora, Julie
  • Lupberger, Joachim
  • Nassal, Michael
  • Cowley, Glenn S.
  • Fujiwara, Naoto
  • Hsieh, Sen-Yung
  • Hoshida, Yujin
  • Felli, Emanuele
  • Pessaux, Patrick
  • Sureau, Camille
  • Schuster, Catherine
  • Root, David E.
  • Verrier, Eloi R.
  • Baumert, Thomas F.
Nature Communications 2020 Journal Article, cited 0 times
Website
Chronic HBV infection is a major cause of liver disease and cancer worldwide. Approaches for cure are lacking, and the knowledge of virus-host interactions is still limited. Here, we perform a genome-wide gain-of-function screen using a poorly permissive hepatoma cell line to uncover host factors enhancing HBV infection. Validation studies in primary human hepatocytes identified CDKN2C as an important host factor for HBV replication. CDKN2C is overexpressed in highly permissive cells and HBV-infected patients. Mechanistic studies show a role for CDKN2C in inducing cell cycle G1 arrest through inhibition of CDK4/6 associated with the upregulation of HBV transcription enhancers. A correlation between CDKN2C expression and disease progression in HBV-infected patients suggests a role in HBV-induced liver disease. Taken together, we identify a previously undiscovered clinically relevant HBV host factor, allowing the development of improved infectious model systems for drug discovery and the study of the HBV life cycle.

An Integrative Approach to Drug Development Using Machine Learning

  • Elkhader, Jamal A. 
2022 Thesis, cited 0 times
Website
Despite recent advances in life sciences and technology, the amount of time and money spent in the drug development process remains drastically inflated. Thus, there is a need to rapidly recognize characteristics that will help identify novel therapies. First, we address the increased need for drug repurposing, the approach of identifying new indications for approved or investigational drugs. We present a novel drug repurposing method called Creating A Translational Network for Indication Prediction (CATNIP), which relies solely on biological and chemical drug characteristics to identify disease areas for specific drugs and drug classes. This drug-focused approach allows our method to be used for both FDA approved drugs and investigational drugs. Our method, trained with 2,576 diverse small molecules, is built using easily interpretable features, such as chemical structure and targets, allowing probable drug-disease mechanisms to be discovered from the predictions made. The strength of this approach is demonstrated through a repurposing network that can be utilized to identify drug class candidate opportunities. In order to treat many of these conditions, a drug compound is orally ingested by a patient. One of the major absorption sites for drugs is the small intestine, and drug properties such as permeability are proven important to maximize treatment efforts. Poor absorption of drug candidates is likely to lead to failure in the drug development process, so we propose an innovative approach to predict the permeability of a drug. The Caco-2 cell model is a standard surrogate for predicting in vitro intestinal permeability. We collected one of the largest experimentally based datasets of Caco-2 values to create a computational model. Using an approach called graph convolutional networks that treats molecules as graphs, we are able to take in a line-notation molecular structure and successfully make predictions about a drug compound's permeability. Altogether, this work demonstrates how the integration of diverse datasets can aid in addressing the multitude of challenging problems in the field of drug discovery. Computational approaches such as these, which prioritize applicability and interpretability, have the strong potential to transform and improve upon the drug development pipeline.

Hiding privacy and clinical information in medical images using QR code

  • Elhadad, Ahmed
  • Rashad, Mahmoud
2021 Conference Paper, cited 0 times
Website
This study aims to hide a patient's private details in DICOM files using QR code images of the same size, via a steganographic technique. The proposed method is based on the properties of the discrete cosine transform (DCT) of DICOM images to embed a QR code image. The proposed approach includes two main parts: the embedding of data and the extraction procedure. Moreover, the embedded QR code is reconstructed blindly from the stego DICOM without the presence of the original DICOM file. The performance of the proposed approach was tested using the TCIA COVID-19 Dataset in terms of the Peak Signal to Noise Ratio (PSNR), the Structural Similarity Index (SSIM) and the Bit Error Rate (BER). The simulation results achieved high PSNR values, ranging between 63.47 dB and 81.97 dB, after embedding a QR code image within a DICOM image of the same size.
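
The two measurable pieces here, DCT-domain embedding and the PSNR quality check, can be prototyped quickly. The sketch below forces one mid-frequency DCT coefficient per 8x8 block to carry one bit and then computes PSNR; the block size, coefficient choice, embedding strength and synthetic images are illustrative assumptions, not the paper's exact scheme.

    # Minimal sketch of DCT-coefficient embedding plus a PSNR check.
    import numpy as np
    from scipy.fft import dctn, idctn

    cover = np.random.rand(64, 64) * 255.0        # stand-in for a DICOM slice
    qr_bits = np.random.randint(0, 2, (8, 8))     # stand-in for QR code bits

    stego = cover.copy()
    for i in range(8):
        for j in range(8):
            block = dctn(stego[i*8:(i+1)*8, j*8:(j+1)*8], norm="ortho")
            block[4, 3] = 8.0 if qr_bits[i, j] else -8.0  # mid-band coefficient
            stego[i*8:(i+1)*8, j*8:(j+1)*8] = idctn(block, norm="ortho")

    mse = np.mean((cover - stego) ** 2)
    psnr = 10 * np.log10(255.0 ** 2 / mse)
    print(f"PSNR after embedding: {psnr:.2f} dB")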

The Veterans Affairs Precision Oncology Data Repository, a Clinical, Genomic, and Imaging Research Database

  • Elbers, Danne C.
  • Fillmore, Nathanael R.
  • Sung, Feng-Chi
  • Ganas, Spyridon S.
  • Prokhorenkov, Andrew
  • Meyer, Christopher
  • Hall, Robert B.
  • Ajjarapu, Samuel J.
  • Chen, Daniel C.
  • Meng, Frank
  • Grossman, Robert L.
  • Brophy, Mary T.
  • Do, Nhan V.
Patterns 2020 Journal Article, cited 0 times
Website
The Veterans Affairs Precision Oncology Data Repository (VA-PODR) is a large, nationwide repository of de-identified data on patients diagnosed with cancer at the Department of Veterans Affairs (VA). Data include longitudinal clinical data from the VA's nationwide electronic health record system and the VA Central Cancer Registry, targeted tumor sequencing data, and medical imaging data including computed tomography (CT) scans and pathology slides. A subset of the repository is available at the Genomic Data Commons (GDC) and The Cancer Imaging Archive (TCIA), and the full repository is available through the Veterans Precision Oncology Data Commons (VPODC). By releasing this de-identified dataset, we aim to advance Veterans' health care through enabling translational research on the Veteran population by a wide variety of researchers.

Imaging genomics of glioblastoma: state of the art bridge between genomics and neuroradiology

  • ElBanan, Mohamed G
  • Amer, Ahmed M
  • Zinn, Pascal O
  • Colen, Rivka R
2015 Journal Article, cited 29 times
Website
Glioblastoma (GBM) is the most common and most aggressive primary malignant tumor of the central nervous system. Recently, researchers concluded that the "one-size-fits-all" approach for treatment of GBM is no longer valid and research should be directed toward more personalized and patient-tailored treatment protocols. Identification of the molecular and genomic pathways underlying GBM is essential for achieving this personalized and targeted therapeutic approach. Imaging genomics represents a new era as a noninvasive surrogate for genomic and molecular profile identification. This article discusses the basics of imaging genomics of GBM, its role in treatment decision-making, and its future potential in noninvasive genomic identification.

Extraction of Cancer Section from 2D Breast MRI Slice Using Brain Storm Optimization

  • Elanthirayan, R
  • Kubra, K Sakeenathul
  • Rajinikanth, V
  • Raja, N Sri Madhava
  • Satapathy, Suresh Chandra
2021 Book Section, cited 0 times
Website

Feature Extraction and Analysis for Lung Nodule Classification using Random Forest

  • El-Askary, Nada
  • Salem, Mohammed
  • Roushdy, Mohammed
2019 Conference Paper, cited 0 times
Website

A Content-Based-Image-Retrieval Approach for Medical Image Repositories

  • el Rifai, Diaa
  • Maeder, Anthony
  • Liyanage, Liwan
2015 Conference Paper, cited 2 times
Website

Computer-Aided Classification of Cell Lung Cancer Via PET/CT Images Using Convolutional Neural Network

  • El Hamdi, Dhekra
  • Elouedi, Ines
  • Slim, Ihsen
International Journal of Image and Graphics 2023 Journal Article, cited 0 times
Website
Lung cancer is the leading cause of cancer-related death worldwide. Therefore, early diagnosis remains essential to allow access to appropriate curative treatment strategies. This paper presents a novel approach to assess the ability of Positron Emission Tomography/Computed Tomography (PET/CT) images for the classification of lung cancer in association with artificial intelligence techniques. We have built, in this work, a multi output Convolutional Neural Network (CNN) as a tool to assist the staging of patients with lung cancer. The TNM staging system as well as histologic subtypes classification were adopted as a reference. The VGG 16 network is applied to the PET/CT images to extract the most relevant features from images. The obtained features are then transmitted to a three-branch classifier to specify Nodal (N), Tumor (T) and histologic subtypes classification. Experimental results demonstrated that our CNN model achieves good results in TN staging and histology classification. The proposed architecture classified the tumor size with a high accuracy of 0.94 and the area under the curve (AUC) of 0.97 when tested on the Lung-PET-CT-Dx dataset. It also has yielded high performance for N staging with an accuracy of 0.98. Besides, our approach has achieved better accuracy than state-of-the-art methods in histologic classification.
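
The multi-output design described here, one shared feature extractor with separate branches for T stage, N stage and histology, is straightforward to express in PyTorch. A minimal sketch using the torchvision VGG16 convolutional stack; the class counts, head sizes and the 3-channel PET/CT fusion are illustrative assumptions, not the paper's exact architecture.

    # Minimal sketch: shared VGG16 features feeding three classification heads.
    import torch
    import torch.nn as nn
    from torchvision.models import vgg16

    class MultiOutputStager(nn.Module):
        def __init__(self, n_t=4, n_n=3, n_hist=3):
            super().__init__()
            self.backbone = vgg16(weights=None).features   # VGG16 conv stack
            self.pool = nn.AdaptiveAvgPool2d(1)
            def head(n_out):
                return nn.Sequential(nn.Linear(512, 128), nn.ReLU(),
                                     nn.Linear(128, n_out))
            self.t_head, self.n_head, self.h_head = head(n_t), head(n_n), head(n_hist)

        def forward(self, x):
            f = self.pool(self.backbone(x)).flatten(1)     # (B, 512) features
            return self.t_head(f), self.n_head(f), self.h_head(f)

    model = MultiOutputStager()
    t, n, h = model(torch.randn(2, 3, 224, 224))           # fused PET/CT as 3 channels
    print(t.shape, n.shape, h.shape)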

MRI-based prostate and dominant lesion segmentation using cascaded scoring convolutional neural network

  • Eidex, Z. A.
  • Wang, T.
  • Lei, Y.
  • Axente, M.
  • Akin-Akintayo, O. O.
  • Ojo, O. A. A.
  • Akintayo, A. A.
  • Roper, J.
  • Bradley, J. D.
  • Liu, T.
  • Schuster, D. M.
  • Yang, X.
Med Phys 2022 Journal Article, cited 0 times
Website
PURPOSE: Dose escalation to dominant intraprostatic lesions (DILs) is a novel treatment strategy to improve the treatment outcome of prostate radiation therapy. Treatment planning requires accurate and fast delineation of the prostate and DILs. In this study, a 3D cascaded scoring convolutional neural network is proposed to automatically segment the prostate and DILs from MRI. METHODS AND MATERIALS: The proposed cascaded scoring convolutional neural network performs end-to-end segmentation by locating a region-of-interest (ROI), identifying the object within the ROI, and defining the target. A scoring strategy, which is learned to judge the segmentation quality of the DIL, is integrated into the cascaded convolutional neural network to solve the challenge of segmenting the irregular shapes of the DIL. To evaluate the proposed method, 77 patients who underwent MRI and PET/CT were retrospectively investigated. The prostate and DIL ground truth contours were delineated by experienced radiologists. The proposed method was evaluated with fivefold cross-validation and holdout testing. RESULTS: The average centroid distance, volume difference, and Dice similarity coefficient (DSC) value for prostate/DIL are 4.3 +/- 7.5/3.73 +/- 3.78 mm, 4.5 +/- 7.9/0.41 +/- 0.59 cc, and 89.6 +/- 8.9/84.3 +/- 11.9%, respectively. Comparable results were obtained in the holdout test. Similar or superior segmentation outcomes were seen when comparing the results of the proposed method to those of competing segmentation approaches. CONCLUSIONS: The proposed automatic segmentation method can accurately and simultaneously segment both the prostate and DILs. The intended future use for this algorithm is focal boost prostate radiation therapy.

Decision forests for learning prostate cancer probability maps from multiparametric MRI

  • Ehrenberg, Henry R
  • Cornfeld, Daniel
  • Nawaf, Cayce B
  • Sprenkle, Preston C
  • Duncan, James S
2016 Conference Proceedings, cited 2 times
Website

Performance Analysis of Prediction Methods for Lossless Image Compression

  • Egorov, Nickolay
  • Novikov, Dmitriy
  • Gilmutdinov, Marat
2015 Book Section, cited 4 times
Website
Performance analysis of several state-of-the-art prediction approaches is carried out for lossless image compression. To support this analysis, special models of edges are presented: bound-oriented and gradient-oriented approaches. Several heuristic assumptions are proposed for the considered intra- and inter-component predictors using the defined edge models. Numerical evaluation on image test sets with various statistical features confirms the proposed heuristic assumptions.

Automated 3-D Tissue Segmentation Via Clustering

  • Edwards, Samuel
  • Brown, Scott
  • Lee, Michael
Journal of Biomedical Engineering and Medical Imaging 2018 Journal Article, cited 0 times

Interpretable Machine Learning with Brain Image and Survival Data

  • Eder, Matthias
  • Moser, Emanuel
  • Holzinger, Andreas
  • Jean-Quartier, Claire
  • Jeanquartier, Fleur
2022 Journal Article, cited 1 times
Website
Recent developments in research on artificial intelligence (AI) in medicine deal with the analysis of image data such as Magnetic Resonance Imaging (MRI) scans to support the decision-making of medical personnel. For this purpose, machine learning (ML) algorithms are often used, which do not explain the internal decision-making process at all. Thus, it is often difficult to validate or interpret the results of the applied AI methods. This manuscript aims to overcome this problem by using methods of explainable AI (XAI) to interpret the decision-making of an ML algorithm in the use case of predicting the survival rate of patients with brain tumors based on MRI scans. Therefore, we explore the analysis of brain images together with survival data to predict survival in gliomas, with a focus on improving the interpretability of the results. Using the Brain Tumor Segmentation dataset BraTS 2020, we used a well-validated dataset for evaluation and relied on a convolutional neural network structure to improve the explainability of important features by adding Shapley overlays. The trained network models were used to evaluate SHapley Additive exPlanations (SHAP) directly and were not optimized for accuracy. The resulting overfitting of some network structures is therefore seen as a use case of the presented interpretation method. It is shown that the network structure can be validated by experts using visualizations, thus making the decision-making of the method interpretable. Our study highlights the feasibility of combining explainers with 3D voxels and also the fact that the interpretation of prediction results significantly supports the evaluation of results. The implementation in python is available on gitlab as “XAIforBrainImgSurv”.

Improving Brain Tumor Diagnosis Using MRI Segmentation Based on Collaboration of Beta Mixture Model and Learning Automata

  • Edalati-rad, Akram
  • Mosleh, Mohammad
Arabian Journal for Science and Engineering 2018 Journal Article, cited 0 times
Website

Automatic Multi-organ Segmentation on Abdominal CT with Dense V-networks

  • Gibson, E
  • Giganti, F
  • Hu, Y
  • Bonmati, E
  • Bandula, S
  • Gurusamy, K
  • Davidson, B
  • Pereira, S
  • Clarkson, M
  • Barratt, D
IEEE Transactions on Medical Imaging 2018 Journal Article, cited 14 times
Website

Saliency Based Deep Neural Network for Automatic Detection of Gadolinium-Enhancing Multiple Sclerosis Lesions in Brain MRI

  • Durso-Finley, Joshua
  • Arnold, Douglas L.
  • Arbel, Tal
2020 Book Section, cited 59 times
Website
The appearance of contrast-enhanced pathologies (e.g. lesion, cancer) is an important marker of disease activity, stage and treatment efficacy in clinical trials. The automatic detection and segmentation of these enhanced pathologies remains a difficult challenge, as they can be very small and visibly similar to other non-pathological enhancements (e.g. blood vessels). In this paper, we propose a deep neural network classifier for the detection and segmentation of Gadolinium enhancing lesions in brain MRI of patients with Multiple Sclerosis (MS). To avoid false positive and false negative assertions, the proposed end-to-end network uses an enhancement-based attention mechanism which assigns saliency based on the differences between the T1-weighted images before and after injection of Gadolinium, and works to first identify candidate lesions and then to remove the false positives. The effect of the saliency map is evaluated on 2293 patient multi-channel MRI scans acquired during two proprietary, multi-center clinical trials for MS treatments. Inclusion of the attention mechanism results in a decrease in false positive lesion voxels over a basic U-Net [2] and DeepMedic [6]. In terms of lesion-level detection, the framework achieves a sensitivity of 82% at a false discovery rate of 0.2, significantly outperforming the other two methods when detecting small lesions. Experiments aimed at predicting the presence of Gad lesion activity in patient scans (i.e. the presence of more than 1 lesion) result in high accuracy showing: (a) significantly improved accuracy over DeepMedic, and (b) a reduction in the errors in predicting the degree of lesion activity (in terms of per scan lesion counts) over a standard U-Net and DeepMedic.

ProstAttention-Net: A deep attention model for prostate cancer segmentation by aggressiveness in MRI scans

  • Duran, A.
  • Dussert, G.
  • Rouviere, O.
  • Jaouen, T.
  • Jodoin, P. M.
  • Lartizien, C.
Med Image Anal 2022 Journal Article, cited 7 times
Website
Multiparametric magnetic resonance imaging (mp-MRI) has shown excellent results in the detection of prostate cancer (PCa). However, characterizing prostate lesion aggressiveness in mp-MRI sequences is impossible in clinical practice, and biopsy remains the reference to determine the Gleason score (GS). In this work, we propose a novel end-to-end multi-class network that jointly segments the prostate gland and cancer lesions with GS group grading. After encoding the information on a latent space, the network is separated in two branches: 1) the first branch performs prostate segmentation; 2) the second branch uses this zonal prior as an attention gate for the detection and grading of prostate lesions. The model was trained and validated with a 5-fold cross-validation on a heterogeneous series of 219 MRI exams acquired on three different scanners prior to prostatectomy. In the free-response receiver operating characteristics (FROC) analysis for the detection of clinically significant lesions (defined as GS >6), our model achieves 69.0%+/-14.5% sensitivity at 2.9 false positives per patient on the whole prostate and 70.8%+/-14.4% sensitivity at 1.5 false positives when considering the peripheral zone (PZ) only. Regarding the automatic GS group grading, Cohen's quadratic weighted kappa coefficient (kappa) is 0.418+/-0.138, which is the best reported lesion-wise kappa for GS segmentation to our knowledge. The model has encouraging generalization capacities with kappa=0.120+/-0.092 on the PROSTATEx-2 public dataset and achieves state-of-the-art performance for the segmentation of the whole prostate gland with a Dice of 0.875+/-0.013. Finally, we show that ProstAttention-Net improves performance in comparison to reference segmentation models, including U-Net, DeepLabv3+ and E-Net. The proposed attention mechanism is also shown to outperform Attention U-Net.
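
The lesion-wise agreement metric reported here, Cohen's quadratic weighted kappa, is available directly in scikit-learn. A minimal sketch on made-up Gleason grade group labels:

    # Minimal sketch of quadratic weighted kappa between reference and predicted
    # Gleason grade groups (toy values, not data from the paper).
    from sklearn.metrics import cohen_kappa_score

    reference = [1, 2, 2, 3, 4, 5, 1, 3]   # GS group per lesion (reference)
    predicted = [1, 2, 3, 3, 4, 4, 2, 3]   # GS group per lesion (model)
    kappa = cohen_kappa_score(reference, predicted, weights="quadratic")
    print(f"quadratic weighted kappa: {kappa:.3f}")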

Assessing the Effects of Software Platforms on Volumetric Segmentation of Glioblastoma

  • Dunn, William D Jr
  • Aerts, Hugo J W L
  • Cooper, Lee A
  • Holder, Chad A
  • Hwang, Scott N
  • Jaffe, Carle C
  • Brat, Daniel J
  • Jain, Rajan
  • Flanders, Adam E
  • Zinn, Pascal O
  • Colen, Rivka R
  • Gutman, David A
J Neuroimaging Psychiatry Neurol 2016 Journal Article, cited 0 times
Website
Background: Radiological assessments of biologically relevant regions in glioblastoma have been associated with genotypic characteristics, implying a potential role in personalized medicine. Here, we assess the reproducibility and association with survival of two volumetric segmentation platforms and explore how methodology could impact subsequent interpretation and analysis. Methods: Post-contrast T1- and T2-weighted FLAIR MR images of 67 TCGA patients were segmented into five distinct compartments (necrosis, contrast-enhancement, FLAIR, post contrast abnormal, and total abnormal tumor volumes) by two quantitative image segmentation platforms - 3D Slicer and a method based on Velocity AI and FSL. We investigated the internal consistency of each platform by correlation statistics, association with survival, and concordance with consensus neuroradiologist ratings using ordinal logistic regression. Results: We found high correlations between the two platforms for FLAIR, post contrast abnormal, and total abnormal tumor volumes (Spearman's r(67) = 0.952, 0.959, and 0.969 respectively). Only modest agreement was observed for necrosis and contrast-enhancement volumes (r(67) = 0.693 and 0.773 respectively), likely arising from differences in manual and automated segmentation methods of these regions by 3D Slicer and Velocity AI/FSL, respectively. Survival analysis based on AUC revealed significant predictive power of both platforms for the following volumes: contrast-enhancement, post contrast abnormal, and total abnormal tumor volumes. Finally, ordinal logistic regression demonstrated correspondence to manual ratings for several features. Conclusion: Tumor volume measurements from both volumetric platforms produced highly concordant and reproducible estimates across platforms for general features. As automated or semi-automated volumetric measurements replace manual linear or area measurements, it will become increasingly important to keep in mind that measurement differences between segmentation platforms for more detailed features could influence downstream survival or radiogenomic analyses.

Some New Tricks for Deep Glioma Segmentation

  • Duncan, Chase
  • Roxas, Francis
  • Jani, Neel
  • Maksimovic, Jane
  • Bramlet, Matthew
  • Sutton, Brad
  • Koyejo, Sanmi
2021 Book Section, cited 2 times
Website
This manuscript outlines the design of methods, and initial progress on, automatic detection of glioma from MRI images using deep neural networks, all applied and evaluated for the 2020 Brain Tumor Segmentation (BraTS) Challenge. Our approach builds on existing work using U-net architectures, and evaluates a variety of deep learning techniques including model averaging and adaptive learning rates.

Disparities in the Demographic Composition of The Cancer Imaging Archive

  • Dulaney, A.
  • Virostko, J.
Radiol Imaging Cancer 2024 Journal Article, cited 1 times
Website
Purpose To characterize the demographic distribution of The Cancer Imaging Archive (TCIA) studies and compare them with those of the U.S. cancer population. Materials and Methods In this retrospective study, data from TCIA studies were examined for the inclusion of demographic information. Of 189 studies in TCIA up until April 2023, a total of 83 human cancer studies were found to contain supporting demographic data. The median patient age and the sex, race, and ethnicity proportions of each study were calculated and compared with those of the U.S. cancer population, provided by the Surveillance, Epidemiology, and End Results Program and the Centers for Disease Control and Prevention U.S. Cancer Statistics Data Visualizations Tool. Results The median age of TCIA patients was found to be 6.84 years lower than that of the U.S. cancer population (P = .047) and contained more female than male patients (53% vs 47%). American Indian and Alaska Native, Black or African American, and Hispanic patients were underrepresented in TCIA studies by 47.7%, 35.8%, and 14.7%, respectively, compared with the U.S. cancer population. Conclusion The results demonstrate that the patient demographics of TCIA data sets do not reflect those of the U.S. cancer population, which may decrease the generalizability of artificial intelligence radiology tools developed using these imaging data sets. Keywords: Ethics, Meta-Analysis, Health Disparities, Cancer Health Disparities, Machine Learning, Artificial Intelligence, Race, Ethnicity, Sex, Age, Bias Published under a CC BY 4.0 license.

An Ad Hoc Random Initialization Deep Neural Network Architecture for Discriminating Malignant Breast Cancer Lesions in Mammographic Images

  • Duggento, Andrea
  • Aiello, Marco
  • Cavaliere, Carlo
  • Cascella, Giuseppe L
  • Cascella, Davide
  • Conte, Giovanni
  • Guerrisi, Maria
  • Toschi, Nicola
Contrast Media Mol Imaging 2019 Journal Article, cited 1 times
Website
Breast cancer is one of the most common cancers in women, with more than 1,300,000 cases and 450,000 deaths each year worldwide. In this context, recent studies showed that early breast cancer detection, along with suitable treatment, could significantly reduce breast cancer death rates in the long term. X-ray mammography is still the instrument of choice in breast cancer screening. In this context, the false-positive and false-negative rates commonly achieved by radiologists are extremely arduous to estimate and control, although some authors have estimated figures of up to 20% of total diagnoses or more. The introduction of novel artificial intelligence (AI) technologies applied to the diagnosis and, possibly, prognosis of breast cancer could revolutionize the current status of the management of the breast cancer patient by assisting the radiologist in clinical image interpretation. Lately, a breakthrough in the AI field has been brought about by the introduction of deep learning techniques in general and of convolutional neural networks in particular. Such techniques require no a priori feature space definition from the operator and are able to achieve classification performances which can even surpass human experts. In this paper, we design and validate an ad hoc CNN architecture specialized in breast lesion classification from imaging data only. We explore a total of 260 model architectures in a train-validation-test split in order to propose a model selection criterion which can pose the emphasis on reducing false negatives while still retaining acceptable accuracy. We achieve an area under the receiver operating characteristic curve of 0.785 (accuracy 71.19%) on the test set, demonstrating how an ad hoc random initialization architecture can and should be fine-tuned to a specific problem, especially in biomedical applications.

Local Wavelet Pattern: A New Feature Descriptor for Image Retrieval in Medical CT Databases

  • Dubey, Shiv Ram
  • Singh, Satish Kumar
  • Singh, Rajat Kumar
IEEE Trans Image Process 2015 Journal Article, cited 52 times
Website
A new image feature description based on the local wavelet pattern (LWP) is proposed in this paper to characterize the medical computer tomography (CT) images for content-based CT image retrieval. In the proposed work, the LWP is derived for each pixel of the CT image by utilizing the relationship of center pixel with the local neighboring information. In contrast to the local binary pattern that only considers the relationship between a center pixel and its neighboring pixels, the presented approach first utilizes the relationship among the neighboring pixels using local wavelet decomposition, and finally considers its relationship with the center pixel. A center pixel transformation scheme is introduced to match the range of center value with the range of local wavelet decomposed values. Moreover, the introduced local wavelet decomposition scheme is centrally symmetric and suitable for CT images. The novelty of this paper lies in the following two ways: 1) encoding local neighboring information with local wavelet decomposition and 2) computing LWP using local wavelet decomposed values and transformed center pixel values. We tested the performance of our method over three CT image databases in terms of the precision and recall. We also compared the proposed LWP descriptor with the other state-of-the-art local image descriptors, and the experimental results suggest that the proposed method outperforms other methods for CT image retrieval.

Prediction of Pathological Complete Response to Neoadjuvant Chemotherapy in Breast Cancer Using Deep Learning with Integrative Imaging, Molecular and Demographic Data

  • Duanmu, Hongyi
  • Huang, Pauline Boning
  • Brahmavar, Srinidhi
  • Lin, Stephanie
  • Ren, Thomas
  • Kong, Jun
  • Wang, Fusheng
  • Duong, Tim Q.
2020 Book Section, cited 16 times
Website
Neoadjuvant chemotherapy is widely used to reduce tumor size to make surgical excision manageable and to minimize distant metastasis. Assessing and accurately predicting pathological complete response is important in treatment planing for breast cancer patients. In this study, we propose a novel approach integrating 3D MRI imaging data, molecular data and demographic data using convolutional neural network to predict the likelihood of pathological complete response to neoadjuvant chemotherapy in breast cancer. We take post-contrast T1-weighted 3D MRI images without the need of tumor segmentation, and incorporate molecular subtypes and demographic data. In our predictive model, MRI data and non-imaging data are convolved to inform each other through interactions, instead of a concatenation of multiple data type channels. This is achieved by channel-wise multiplication of the intermediate results of imaging and non-imaging data. We use a subset of curated data from the I-SPY-1 TRIAL of 112 patients with stage 2 or 3 breast cancer with breast tumors underwent standard neoadjuvant chemotherapy. Our method yielded an accuracy of 0.83, AUC of 0.80, sensitivity of 0.68 and specificity of 0.88. Our model significantly outperforms models using imaging data only or traditional concatenation models. Our approach has the potential to aid physicians to identify patients who are likely to respond to neoadjuvant chemotherapy at diagnosis or early treatment, thus facilitate treatment planning, treatment execution, or mid-treatment adjustment.
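
The fusion mechanism described, multiplying imaging feature maps channel-wise by a vector derived from molecular and demographic data rather than concatenating channels, is compact in PyTorch. A minimal sketch; the clinical vector size, channel count and volume shape are illustrative assumptions, not the paper's exact network.

    # Minimal sketch: channel-wise multiplication of 3D MRI features by a gate
    # computed from non-imaging (molecular + demographic) data.
    import torch
    import torch.nn as nn

    class ChannelwiseFusion(nn.Module):
        def __init__(self, n_clinical=5, channels=32):
            super().__init__()
            self.conv = nn.Conv3d(1, channels, kernel_size=3, padding=1)
            self.clinical = nn.Sequential(nn.Linear(n_clinical, channels), nn.Sigmoid())

        def forward(self, mri, clinical):
            f = self.conv(mri)                                 # (B, C, D, H, W)
            gate = self.clinical(clinical).view(*f.shape[:2], 1, 1, 1)
            return f * gate                                    # channel-wise product

    mri = torch.randn(2, 1, 16, 64, 64)   # post-contrast T1 volume (toy shape)
    clin = torch.randn(2, 5)              # subtype + demographics vector
    print(ChannelwiseFusion()(mri, clin).shape)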

3D-RADNet: Extracting labels from DICOM metadata for training general medical domain deep 3D convolution neural networks

  • Du, Richard
  • Vardhanabhuti, Varut
2020 Conference Proceedings, cited 4 times
Website
Training a deep convolutional neural network requires a large amount of data to obtain good performance and generalisable results. Transfer learning approaches from datasets such as ImageNet have become important in increasing accuracy and lowering the number of training samples required. However, as of now, there has not been a popular dataset for training 3D volumetric medical images. This is mainly due to the time and expert knowledge required to accurately annotate medical images. In this study, we present a method for extracting labels from DICOM metadata that carry information on the appearance of the scans, in order to train a medical domain 3D convolutional neural network. The labels include imaging modalities and sequences, patient orientation and view, presence of contrast agent, scan target and coverage, and slice spacing. We applied our method and extracted labels from a large cancer imaging dataset from TCIA to train a medical domain 3D deep convolutional neural network. We evaluated the effectiveness of using our proposed network in transfer learning for a liver segmentation task and found that our network achieved superior segmentation performance (DICE=90.0) compared to training from scratch (DICE=41.8). Our proposed network shows promising results for use as a backbone network for transfer learning to another task. Our approach, together with our network, can potentially be used to extract features from large-scale unlabelled DICOM datasets.
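
Pulling such labels out of DICOM headers is a short exercise with pydicom. A minimal sketch reading a few standard tags; the tag selection is illustrative and far simpler than the label set described in the paper, and the file path is a placeholder.

    # Minimal sketch of label extraction from DICOM metadata with pydicom.
    import pydicom

    def extract_labels(path):
        ds = pydicom.dcmread(path, stop_before_pixels=True)
        return {
            "modality": getattr(ds, "Modality", None),            # e.g. "MR", "CT"
            "series": getattr(ds, "SeriesDescription", ""),       # sequence hints
            "orientation": getattr(ds, "ImageOrientationPatient", None),
            "contrast": getattr(ds, "ContrastBolusAgent", None),  # present if used
            "spacing_mm": getattr(ds, "SpacingBetweenSlices",
                                  getattr(ds, "SliceThickness", None)),
        }

    # labels = extract_labels("example.dcm")  # placeholder file path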

BRATS2021: Exploring Each Sequence in Multi-modal Input for Baseline U-net Performance

  • Druzhinina, Polina
  • Kondrateva, Ekaterina
  • Bozhenko, Arseny
  • Yarkin, Vyacheslav
  • Sharaev, Maxim
  • Kurmukov, Anvar
2022 Book Section, cited 0 times
Since 2012 the BraTS competition has become a benchmark for brain MRI segmentation. The top-ranked solutions from the competition leaderboard of past years are primarily heavy and sophisticated ensembles of deep neural networks. The complexity of the proposed solutions can restrict their clinical use due to the long execution time and complicate the model transfer to the other datasets, especially with the lack of some MRI sequences in multimodal input. The current paper provides a baseline segmentation accuracy for each separate MRI modality and all four sequences (T1, T1c, T2, and FLAIR) on conventional 3D U-net architecture. We explore the predictive ability of each modality to segment enhancing core, tumor core, and whole tumor. We then compare the baseline performance with BraTS 2019–2020 state-of-the-art solutions. Finally, we share the code and trained weights to facilitate further research on model transfer to different domains and use in other applications.

Most-enhancing tumor volume by MRI radiomics predicts recurrence-free survival “early on” in neoadjuvant treatment of breast cancer

  • Drukker, Karen
  • Li, Hui
  • Antropova, Natalia
  • Edwards, Alexandra
  • Papaioannou, John
  • Giger, Maryellen L
Cancer Imaging 2018 Journal Article, cited 0 times
Website
BACKGROUND: The hypothesis of this study was that MRI-based radiomics has the ability to predict recurrence-free survival "early on" in breast cancer neoadjuvant chemotherapy. METHODS: A subset, based on availability, of the ACRIN 6657 dynamic contrast-enhanced MR images was used in which we analyzed images of all women imaged at pre-treatment baseline (141 women: 40 with a recurrence, 101 without) and all those imaged after completion of the first cycle of chemotherapy, i.e., at early treatment (143 women: 37 with a recurrence vs. 105 without). Our method was completely automated apart from manual localization of the approximate tumor center. The most enhancing tumor volume (METV) was automatically calculated for the pre-treatment and early treatment exams. Performance of METV in the task of predicting a recurrence was evaluated using ROC analysis. The association of recurrence-free survival with METV was assessed using a Cox regression model controlling for patient age, race, and hormone receptor status and evaluated by C-statistics. Kaplan-Meier analysis was used to estimate survival functions. RESULTS: The C-statistics for the association of METV with recurrence-free survival were 0.69 with 95% confidence interval of [0.58; 0.80] at pre-treatment and 0.72 [0.60; 0.84] at early treatment. The hazard ratios calculated from Kaplan-Meier curves were 2.28 [1.08; 4.61], 3.43 [1.83; 6.75], and 4.81 [2.16; 10.72] for the lowest quartile, median quartile, and upper quartile cut-points for METV at early treatment, respectively. CONCLUSION: The performance of the automatically-calculated METV rivaled that of a semi-manual model described for the ACRIN 6657 study (published C-statistic 0.72 [0.60; 0.84]), which involved the same dataset but required semi-manual delineation of the functional tumor volume (FTV) and knowledge of the pre-surgical residual cancer burden.
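
The survival analysis pattern used here, a Cox proportional hazards model whose C-statistic measures the association of METV with recurrence-free survival, can be reproduced with the lifelines package. A minimal sketch on a synthetic data frame; the covariate list is abbreviated relative to the study (which also controls for race and hormone receptor status), and all values are made up.

    # Minimal sketch: Cox model of recurrence-free survival on a METV-like covariate.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "metv_cm3": rng.lognormal(1.0, 0.5, 200),   # most-enhancing tumor volume
        "age": rng.normal(50, 10, 200),
        "rfs_months": rng.exponential(40, 200),     # follow-up time
        "recurred": rng.integers(0, 2, 200),        # event indicator
    })

    cph = CoxPHFitter().fit(df, duration_col="rfs_months", event_col="recurred")
    print("C-statistic:", round(cph.concordance_index_, 3))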

Long short-term memory networks predict breast cancer recurrence in analysis of consecutive MRIs acquired during the course of neoadjuvant chemotherapy

  • Drukker, Karen
  • Edwards, Alexandra
  • Papaioannou, John
  • Giger, Maryellen
  • Hahn, Horst K.
  • Mazurowski, Maciej A.
2020 Conference Paper, cited 0 times
Website
The purpose of this study was to assess long short-term memory networks in the prediction of recurrence-free survival in breast cancer patients using features extracted from MRIs acquired during the course of neoadjuvant chemotherapy. In the I-SPY1 dataset, up to 4 MRI exams were available per patient, acquired at pre-treatment, early-treatment, interregimen, and pre-surgery time points. Breast cancers were automatically segmented and 8 features describing kinetic curve characteristics were extracted. We assessed performance of long short-term memory networks in the prediction of recurrence-free survival status at 2 years and at 5 years post-surgery. For these predictions, we analyzed MRIs from women who had at least 2 (or 5) years of recurrence-free follow-up or experienced recurrence or death within that timeframe: 157 women and 73 women, respectively. One approach used features extracted from all available exams and the other approach used features extracted from only exams prior to the second cycle of neoadjuvant chemotherapy. The areas under the ROC curve in the prediction of recurrence-free survival status at 2 years post-surgery were 0.80, 95% confidence interval [0.68; 0.88] and 0.75 [0.62; 0.83] for networks trained with all 4 available exams and only the ‘early’ exams, respectively. Hazard ratios at the lowest, median, and highest quartile cut-points were 6.29 [2.91; 13.62], 3.27 [1.77; 6.03], 1.65 [0.83; 3.27] and 2.56 [1.20; 5.48], 3.01 [1.61; 5.66], 2.30 [1.14; 4.67]. Long short-term memory networks were able to predict recurrence-free survival in breast cancer patients, also when analyzing only MRIs acquired ‘early on’ during neoadjuvant treatment.
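
The sequence-modelling setup described, up to 4 exams per patient with 8 kinetic features each feeding a recurrence prediction, corresponds to a small LSTM in PyTorch. A minimal sketch; the hidden size and the toy batch are illustrative assumptions, not the study's trained model.

    # Minimal sketch: LSTM over per-exam kinetic feature vectors.
    import torch
    import torch.nn as nn

    class RecurrenceLSTM(nn.Module):
        def __init__(self, n_features=8, hidden=16):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):                 # x: (B, n_exams, n_features)
            _, (h, _) = self.lstm(x)          # final hidden state summarises exams
            return torch.sigmoid(self.head(h[-1]))

    model = RecurrenceLSTM()
    exams = torch.randn(3, 4, 8)              # 3 patients x 4 exams x 8 features
    print(model(exams).squeeze(1))            # recurrence probability per patient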

Magnetic resonance imaging of in vitro urine flow in single and tandem stented ureters subject to extrinsic ureteral obstruction

  • Dror, I.
  • Harris, T.
  • Kalchenko, V.
  • Shilo, Y.
  • Berkowitz, B.
2022 Journal Article, cited 1 times
Website
OBJECTIVE: To quantify the relative volumetric flows in stent and ureter lumina, as a function of stent size and configuration, in both unobstructed and externally obstructed stented ureters. METHODS: Magnetic resonance imaging was used to measure flow in stented ureters using a phantom kidney model. Volumetric flow in the stent and ureter lumina were determined along the stented ureters, for each of four single stent sizes (4.8F, 6F, 7F, and 8F), and for tandem (6F and 7F) configurations. Measurements were made in the presence of a fully encircling extrinsic ureteral obstruction as well as in benchmark cases with no extrinsic ureteral obstruction. RESULTS: Under no obstruction, the relative contribution of urine flow in single stents is 1-10%, while the relative contributions to flow are ~6 and ~28% for tandem 6F and 7F, respectively. In the presence of an extrinsic ureteral obstruction and single stents, all urine passes within the stent lumen near the extrinsic ureteral obstruction. For tandem 6F and 7F stents under extrinsic ureteral obstruction, relative volumetric flows in the two stent lumina are ~73% and ~81%, respectively, with the remainder passing through the ureter lumen. CONCLUSIONS: Magnetic resonance imaging demonstrates that with no extrinsic ureteral obstruction, minimal urine flow occurs within a stent. Stent lumen flow is significant in the presence of extrinsic ureteral obstruction, in the vicinity of the extrinsic ureteral obstruction. For tandem stents subjected to extrinsic ureteral obstruction, urine flow also occurs in the ureter lumen between the stents, which can reduce the likelihood of kidney failure even in the case of both stent lumina being occluded.

A segmentation-based method improving the performance of N4 bias field correction on T2-weighted MR imaging data of the prostate

  • Dovrou, A.
  • Nikiforaki, K.
  • Zaridis, D.
  • Manikis, G. C.
  • Mylona, E.
  • Tachos, N.
  • Tsiknakis, M.
  • Fotiadis, D. I.
  • Marias, K.
Magn Reson Imaging 2023 Journal Article, cited 2 times
Website
Magnetic Resonance (MR) images suffer from spatial inhomogeneity, known as bias field corruption. The N4ITK filter is a state-of-the-art method used for correcting the bias field to optimize MR-based quantification. In this study, a novel approach is presented to quantitatively evaluate the performance of N4 bias field correction for pelvic prostate imaging. An exploratory analysis, regarding the different values of convergence threshold, shrink factor, fitting level, number of iterations and use of mask, is performed to quantify the performance of the N4 filter in pelvic MR images. The performance of a total of 240 different N4 configurations is examined using the Full Width at Half Maximum (FWHM) of the segmented periprostatic fat distribution as evaluation metric. Phantom T2-weighted images were used to assess the performance of N4 for a uniform test tissue mimicking material, excluding factors such as patient related susceptibility and anatomy heterogeneity. Moreover, 89 and 204 T2-weighted patient images from two public datasets acquired by scanners with a combined surface and endorectal coil at 1.5 T and a surface coil at 3 T, respectively, were utilized and corrected with a variable set of N4 parameters. Furthermore, two external public datasets were used to validate the performance of the N4 filter in T2-weighted patient images acquired under various scanning conditions with different magnetic field strengths and coils. The results show that the set of N4 parameters converging to optimal representations of fat in the image was: convergence threshold 0.001, shrink factor 2, fitting level 6, number of iterations 100 and the use of the default mask, for prostate images acquired by a combined surface and endorectal coil at both 1.5 T and 3 T. The corresponding optimal N4 configuration for MR prostate images acquired by a surface coil at 1.5 T or 3 T was: convergence threshold 0.001, shrink factor 2, fitting level 5, number of iterations 25 and the use of the default mask. Hence, periprostatic fat segmentation can be used to define the optimal settings for achieving T2-weighted prostate images free from bias field corruption, providing robust input for further analysis.
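
For orientation, the reported optimal surface-plus-endorectal-coil settings map naturally onto the open-source N4 implementation in SimpleITK. The following is a minimal sketch assuming that "fitting level 6" corresponds to six multi-resolution fitting levels and that the "default mask" can be approximated by an Otsu threshold; file names are placeholders.

```python
import SimpleITK as sitk

img = sitk.ReadImage("t2w_prostate.nii.gz", sitk.sitkFloat32)  # placeholder path
mask = sitk.OtsuThreshold(img, 0, 1, 200)     # stand-in for the "default mask"

# Reported optimal configuration for combined surface + endorectal coil data:
# convergence threshold 0.001, shrink factor 2, 6 fitting levels, 100 iterations.
shrunk = sitk.Shrink(img, [2] * img.GetDimension())
shrunk_mask = sitk.Shrink(mask, [2] * mask.GetDimension())

n4 = sitk.N4BiasFieldCorrectionImageFilter()
n4.SetConvergenceThreshold(0.001)
n4.SetMaximumNumberOfIterations([100] * 6)    # 100 iterations at each of 6 levels
n4.Execute(shrunk, shrunk_mask)

# Recover the bias field at full resolution and correct the original image.
log_bias = n4.GetLogBiasFieldAsImage(img)
corrected = img / sitk.Exp(log_bias)
sitk.WriteImage(corrected, "t2w_prostate_n4.nii.gz")
```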

Assessment of Renal Cell Carcinoma by Texture Analysis in Clinical Practice: A Six-Site, Six-Platform Analysis of Reliability

  • Doshi, A. M.
  • Tong, A.
  • Davenport, M. S.
  • Khalaf, A.
  • Mresh, R.
  • Rusinek, H.
  • Schieda, N.
  • Shinagare, A.
  • Smith, A. D.
  • Thornhill, R.
  • Vikram, R.
  • Chandarana, H.
AJR Am J Roentgenol 2021 Journal Article, cited 0 times
Website
Background: Multiple commercial and open-source software applications are available for texture analysis. Nonstandard techniques can cause undesirable variability that impedes result reproducibility and limits clinical utility. Objective: The purpose of this study is to measure the agreement of texture metrics extracted by 6 software packages. Methods: This retrospective study included 40 renal cell carcinomas with contrast-enhanced CT from The Cancer Genome Atlas and The Cancer Imaging Archive. Images were analyzed by 7 readers at 6 sites. Each reader used 1 of 6 software packages to extract commonly studied texture features. Inter- and intra-reader agreement for segmentation was assessed with intra-class correlation coefficients. First-order (available in 6 packages) and second-order (available in 3 packages) texture features were compared between software pairs using Pearson correlation. Results: Inter- and intra-reader agreement was excellent (ICC 0.93-1). First-order feature correlations were strong (r>0.8, p<0.001) between 75% (21/28) of software pairs for mean and standard deviation, 48% (10/21) for entropy, 29% (8/28) for skewness, and 25% (7/28) for kurtosis. Of 15 second-order features, only co-occurrence matrix correlation, grey-level non-uniformity, and run-length non-uniformity showed strong correlation between software packages (0.90-1, p<0.001). Conclusion: Variability in first- and second-order texture features was common across software configurations and produced inconsistent results. Standardized algorithms and reporting methods are needed before texture data can be reliably used for clinical applications. Clinical Impact: It is important to be aware of variability related to texture software processing and configuration when reporting and comparing outputs.

Inter Extreme Points Geodesics for End-to-End Weakly Supervised Image Segmentation

  • Dorent, Reuben
  • Joutard, Samuel
  • Shapey, Jonathan
  • Kujawa, Aaron
  • Modat, Marc
  • Ourselin, Sébastien
  • Vercauteren, Tom
2021 Conference Proceedings, cited 0 times
Website
We introduce InExtremIS, a weakly supervised 3D approach to train a deep image segmentation network using particularly weak train-time annotations: only 6 extreme clicks at the boundary of the objects of interest. Our fully-automatic method is trained end-to-end and does not require any test-time annotations. From the extreme points, 3D bounding boxes are extracted around objects of interest. Then, deep geodesics connecting extreme points are generated to increase the amount of “annotated” voxels within the bounding boxes. Finally, a weakly supervised regularised loss derived from a Conditional Random Field formulation is used to encourage prediction consistency over homogeneous regions. Extensive experiments are performed on a large open dataset for Vestibular Schwannoma segmentation. InExtremIS obtained competitive performance, approaching full supervision and outperforming significantly other weakly supervised techniques based on bounding boxes. Moreover, given a fixed annotation time budget, InExtremIS outperformed full supervision. Our code and data are available online.

Collaborative learning of joint medical image segmentation tasks from heterogeneous and weakly-annotated data

  • Dorent, Reuben
2022 Thesis, cited 0 times
Website
Convolutional Neural Networks (CNNs) have become the state-of-the-art for most image segmentation tasks, and therefore one would expect them to be able to learn joint tasks, such as joint brain structure and pathology segmentation. However, annotated databases required to train CNNs are usually dedicated to a single task, leading to partial annotations (e.g. brain structure or pathology delineation but not both for joint tasks). Moreover, the information required for these tasks may come from distinct magnetic resonance (MR) sequences to emphasise different types of tissue contrast, leading to datasets with different sets of image modalities. Similarly, the scans may have been acquired at different centres, with different MR parameters, leading to differences in resolution and visual appearance among databases (domain shift). Given the large amount of resources, time and expertise required to carefully annotate medical images, it is unlikely that large and fully-annotated databases will become readily available for every joint problem. For this reason, there is a need to develop collaborative approaches that exploit existing heterogeneous and task-specific datasets, as well as weak annotations instead of time-consuming pixel-wise annotations. In this thesis, I present methods to learn joint medical segmentation tasks from task-specific, domain-shifted, hetero-modal and weakly-annotated datasets. The problem lies at the intersection of several branches of Machine Learning: Multi-Task Learning, Hetero-Modal Learning, Domain Adaptation and Weakly Supervised Learning. First, I introduce a mathematical formulation of a joint segmentation problem under the constraint of missing modalities and partial annotations, in which Domain Adaptation techniques can be directly integrated, and a procedure to optimise it. Secondly, I propose a principled approach to handle missing modalities based on Hetero-Modal Variational Auto-Encoders. Thirdly, in this thesis, I focus on Weakly Supervised Learning techniques and present a novel approach to train deep image segmentation networks using particularly weak train-time annotations: only 4 (2D) or 6 (3D) extreme clicks at the boundary of the objects of interest. The proposed framework connects the extreme points using a new formulation of geodesics that integrates the network outputs and uses the generated paths for supervision. Fourthly, I introduce a new weakly-supervised Domain Adaptation technique using scribbles on the target domain and formulate it as a cross-domain CRF optimisation problem. Finally, I led the organisation of the first medical segmentation challenge for unsupervised cross-modality domain adaptation (crossMoDA). The benchmark reported in this thesis provides a comprehensive characterisation of cross-modality domain adaptation techniques. Experiments are performed on brain MR images from patients with different types of brain diseases: gliomas, white matter lesions and vestibular schwannoma. The results demonstrate the broad applicability of the presented frameworks to learn joint segmentation tasks with the potential to improve brain disease diagnosis and patient management in clinical practice.

Deep Learning Based Ensemble Approach for 3D MRI Brain Tumor Segmentation

  • Do, Tien-Bach-Thanh
  • Trinh, Dang-Linh
  • Tran, Minh-Trieu
  • Lee, Guee-Sang
  • Kim, Soo-Hyung
  • Yang, Hyung-Jeong
2022 Book Section, cited 0 times
Website
Brain tumor segmentation has wide applications and important potential value for glioblastoma research. Because of the complexity of the structure of subtype tumors and the differing visual appearance of multiple modalities such as T1, T1ce, T2, and FLAIR, most methods fail to segment brain tumors with high accuracy. The sizes and shapes of tumors are very diverse in the wild. Another problem is that most recent algorithms ignore the multi-scale information of brain tumor features. To handle these problems, we propose an ensemble method that utilizes the strength of dilated convolutions in capturing larger receptive fields, providing more contextual information from the brain image, and that gains the ability to segment small tumors through multi-task learning. In addition, we apply the generalized Wasserstein Dice loss function when training the model to address class imbalance in multi-class segmentation. The experimental results demonstrate that the proposed ensemble method improves the accuracy of brain tumor segmentation, showing superiority to other recent segmentation methods.

Learning Multi-Class Segmentations From Single-Class Datasets

  • Dmitriev, Konstantin
  • Kaufman, Arie
2019 Conference Paper, cited 1 times
Website
Multi-class segmentation has recently achieved significant performance in natural images and videos. This achievement is due primarily to the public availability of large multi-class datasets. However, there are certain domains, such as biomedical images, where obtaining sufficient multi-class annotations is a laborious and often impossible task and only single-class datasets are available. While existing segmentation research in such domains uses private multi-class datasets or focuses on single-class segmentations, we propose a unified, highly efficient framework for robust simultaneous learning of multi-class segmentations by combining single-class datasets and utilizing a novel way of conditioning a convolutional network for the purpose of segmentation. We demonstrate various ways of incorporating the conditional information, perform an extensive evaluation, and show compelling multi-class segmentation performance on biomedical images, outperforming current state-of-the-art solutions by up to 2.7%. Unlike current solutions, which are meticulously tailored for particular single-class datasets, we utilize datasets from a variety of sources. Furthermore, we show the applicability of our method to natural images as well and evaluate it on the Cityscapes dataset. We further discuss other possible applications of our proposed framework.

Complete fully automatic detection, segmentation and 3D reconstruction of tumor volume for non-small cell lung cancer using YOLOv4 and region-based active contour model

  • Dlamini, S.
  • Chen, Y. H.
  • Kuo, C. F. J.
Expert Systems with Applications 2023 Journal Article, cited 0 times
Website
We aim to develop a fully automatic system that will detect, segment and accurately reconstruct non-small cell lung cancer tumors in 3D space using YOLOv4 and a region-based active contour model. The system consists of two main sections: detection and volumetric rendering. The detection section is composed of image enhancement, augmentation, labeling and localization, while the volumetric rendering section comprises image filtering, tumor extraction, region-based active contours and 3D reconstruction. In this method the images are enhanced to eliminate noise before augmentation, which is intended to multiply and diversify the image data. Labeling was then carried out in order to create a solid learning foundation for the localization model. Images with localized tumors were passed through smoothing filters and then clustered to extract tumor masks. Lastly, contour information was obtained to render the volumetric tumor. The designed system displays strong detection performance, with a precision of 96.57%, sensitivity and F1 score of 97.02% and 96.79% respectively, at a detection speed of 34 fps and a prediction time of 21.38 ms per image. The system's segmentation validation achieved a Dice score coefficient of 92.19% on tumor extraction. A 99.74% accuracy was obtained during the verification of the method's volumetric rendering using a 3D printed image of the rendered tumor. The rendering of the volumetric tumor was obtained in an average time of 11 s. This system shows strong performance and reliability due to its ability to detect, segment and reconstruct a volumetric tumor in space with high confidence.
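
The region-based active contour stage can be prototyped with the morphological Chan-Vese implementation in scikit-image. This is a minimal sketch, not the authors' exact formulation, and it assumes the detector's bounding box has already been cropped to a region of interest.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import morphological_chan_vese

roi = np.random.rand(96, 96)        # stand-in for a tumor ROI cropped by YOLOv4
smoothed = gaussian(roi, sigma=1.0) # smoothing filter before tumor extraction

# Region-based active contour (morphological Chan-Vese), 50 evolution steps.
tumor_mask = morphological_chan_vese(smoothed, 50, init_level_set="checkerboard")
print(tumor_mask.sum(), "tumor pixels")
```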

An introduction to Topological Object Data Analysis

  • Dixon, Adam
  • Patrangenaru, Victor
  • Shen, Chen
2020 Conference Proceedings, cited 1 times
Website
Summary and analysis are important foundations in Statistics, but typical methods may prove ineffective at providing thorough summaries of complex object data. Topological data analysis (TDA) (also called topological object data analysis (TODA) when applied to object data) provides additional topological summaries, such as the persistence diagram and persistence landscape, that can be useful in distinguishing distributions based on data sets. The main tool is persistent homology, which tracks the births and deaths of various homology classes as one steps through a filtered simplicial complex that covers the sample. The persistence diagrams and landscapes can also be used to provide confidence sets for “significant” features and two-sample tests between groups. An example of application is provided via analyzing mammogram images for patients with benign and malignant masses.

A Diagnostic Study of Content-Based Image Retrieval Technique for Studying the CT Images of Lung Nodules and Prediction of Lung Cancer as a Biometric Tool

  • Dixit, Rajeev
  • Kumar, Dr Pankaj
  • Ojha, Dr Shashank
2023 Journal Article, cited 0 times
Website
Content-Based Medical Image Retrieval (CBMIR) can be defined as a digital image search using the contents of the images. CBMIR plays a very important part in medical applications such as retrieving CT images and more accurately diagnosing aberrant lung tissue in CT images. The CBMIR method might aid radiotherapists in examining a patient's CT image in order to retrieve comparable pulmonary nodules more precisely by utilizing query nodules. Given a particular query nodule, the CBMIR system searches a large chest CT image database for comparable nodules. The prime aim of this research is to evaluate an end-to-end method for developing a CBIR system for lung cancer diagnosis.

An efficient reversible data hiding using SVD over a novel weighted iterative anisotropic total variation based denoised medical images

  • Diwakar, Manoj
  • Kumar, Pardeep
  • Singh, Prabhishek
  • Tripathi, Amrendra
  • Singh, Laxman
Biomedical Signal Processing and Control 2023 Journal Article, cited 0 times
Website
The advancement and extensive use of computed tomography (CT) have raised public concern regarding the radiation dose delivered to patients. Reducing the radiation dose may lead to more noise and artifacts, which can hamper radiologists' diagnoses. The instability of low-dose CT reconstruction necessitates better image reconstruction to increase diagnostic performance. Recent low-dose CT methods have demonstrated outstanding results. These denoised low-dose medical images, together with related medical information, often must also be transmitted over a network. Hence, in this article, a novel denoising method is first proposed to improve the quality of low-dose CT images, based on the total variation method combined with the whale optimization algorithm (WHA). The WHA is used to obtain the best possible weighting function. Noise reduction is driven by comparing a given output to the ground truth, while total variation tends to statistically migrate the noise distribution of the data from strong to weak. Following denoising, a reversible watermarking approach based on SVD and multi-local extrema (MLE) is presented. According to a comparative experimental investigation, the individual results of denoising and watermarking are excellent in terms of visual quality and performance metrics, and embedding the watermark in the denoised CT images also yields impressive watermarking results. The resulting image thus reduces noise while retaining vital and secure information.

Breast Cancer Mass Detection in Mammograms Using Gray Difference Weight and MSER Detector

  • Divyashree, B. V.
  • Kumar, G. Hemantha
SN Computer Science 2021 Journal Article, cited 0 times
Website
Breast cancer is one of the deadliest and most prevalent cancers in women across the globe. Mammography is a widely used imaging modality for the diagnosis and screening of breast cancer. Segmentation of the breast region and mass detection are crucial steps in automatic breast cancer detection. Due to the non-uniform distribution of various tissues, it is a challenging task to analyze mammographic images with high accuracy. In this paper, background suppression and pectoral muscle removal are performed using a gradient weight map followed by gray difference weight and the fast marching method. Enhancement of the breast region is performed using contrast limited adaptive histogram equalization (CLAHE) and de-correlation stretch. Detection of breast masses is accomplished by gray difference weight and a maximally stable extremal regions (MSER) detector. Experimentation on the Mammographic Image Analysis Society (MIAS) and curated breast imaging subset of the digital database for screening mammography (CBIS-DDSM) datasets shows that the proposed method performs breast boundary segmentation and mass detection with high accuracy. Mass detection achieved accuracies of about 97.64% and 94.66% for the MIAS and CBIS-DDSM datasets, respectively. The method is simple, robust, and less affected by noise, density, shape and size, and could provide reasonable support for mammographic analysis.
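
For context, both CLAHE and the MSER detector used here are available in OpenCV. The sketch below applies CLAHE enhancement followed by MSER candidate detection; parameter values are defaults and the gray-difference-weight step is omitted, so treat this as an outline rather than the authors' pipeline.

```python
import cv2

img = cv2.imread("mammogram.png", cv2.IMREAD_GRAYSCALE)   # placeholder file
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)

mser = cv2.MSER_create()                      # default parameters (assumed)
regions, bboxes = mser.detectRegions(enhanced)

vis = cv2.cvtColor(enhanced, cv2.COLOR_GRAY2BGR)
for x, y, w, h in bboxes:                     # draw candidate mass regions
    cv2.rectangle(vis, (x, y), (x + w, y + h), (0, 255, 0), 1)
cv2.imwrite("mser_candidates.png", vis)
```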

Spinal cord detection in planning CT for radiotherapy through adaptive template matching, IMSLIC and convolutional neural networks

  • Diniz, J. O. B.
  • Diniz, P. H. B.
  • Valente, T. L. A.
  • Silva, A. C.
  • Paiva, A. C.
Comput Methods Programs Biomed 2019 Journal Article, cited 23 times
Website
BACKGROUND AND OBJECTIVE: The spinal cord is a very important organ that must be protected in radiotherapy (RT) treatments, where it is considered an organ at risk (OAR). Excess radiation delivered to the spinal cord can cause irreversible disease in patients undergoing radiotherapy. For the planning of RT treatments, computed tomography (CT) scans are commonly used to delimit the OARs and to analyze the impact of radiation on these organs. Delimiting these OARs takes a lot of time from medical specialists and involves a large team of professionals. Moreover, this slice-by-slice task becomes exhaustive and consequently subject to errors, especially in organs such as the spinal cord, which extends through several slices of the CT and requires precise segmentation. Thus, we propose, in this work, a computational methodology capable of detecting the spinal cord in planning CT images. METHODS: The techniques highlighted in this methodology are adaptive template matching for initial segmentation, intrinsic manifold simple linear iterative clustering (IMSLIC) for candidate segmentation, and convolutional neural networks (CNNs) for candidate classification. The methodology consists of four steps: (1) image acquisition, (2) initial segmentation, (3) candidate segmentation, and (4) candidate classification. RESULTS: The methodology was applied to 36 planning CT images provided by The Cancer Imaging Archive, and achieved an accuracy of 92.55%, specificity of 92.87% and sensitivity of 89.23%, with 0.065 false positives per image, without any false-positive reduction technique, in the detection of the spinal cord. CONCLUSIONS: The results demonstrate the feasibility of analyzing planning CT images using IMSLIC and convolutional neural network techniques to successfully detect spinal cord regions.
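
As a rough stand-in for the candidate-segmentation step, standard SLIC superpixels can be generated with recent scikit-image as below; the intrinsic manifold variant (IMSLIC) used by the authors is not available in common libraries, and the parameter values are illustrative only.

```python
import numpy as np
from skimage.segmentation import slic

ct_slice = np.random.rand(512, 512)   # stand-in for a normalized planning CT slice
# Cluster the slice into ~400 superpixel candidates (grayscale: channel_axis=None).
labels = slic(ct_slice, n_segments=400, compactness=0.1, channel_axis=None)
print(labels.max() + 1, "superpixel candidates")
```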

Spatially aware graph neural networks and cross-level molecular profile prediction in colon cancer histopathology: a retrospective multi-cohort study

  • Ding, Kexin
  • Zhou, Mu
  • Wang, He
  • Zhang, Shaoting
  • Metaxas, Dimitri N.
2022 Journal Article, cited 1 times
Website
Background Digital whole-slide images are a unique way to assess the spatial context of the cancer microenvironment. Exploring these spatial characteristics will enable us to better identify cross-level molecular markers that could deepen our understanding of cancer biology and related patient outcomes. Methods We proposed a graph neural network approach that emphasises spatialisation of tumour tiles towards a comprehensive evaluation of predicting cross-level molecular profiles of genetic mutations, copy number alterations, and functional protein expressions from whole-slide images. We introduced a transformation strategy that converts whole-slide image scans into graph-structured data to address the spatial heterogeneity of colon cancer. We developed and assessed the performance of the model on The Cancer Genome Atlas colon adenocarcinoma (TCGA-COAD) and validated it on two external datasets (i.e., The Cancer Genome Atlas rectum adenocarcinoma [TCGA-READ] and Clinical Proteomic Tumor Analysis Consortium colon adenocarcinoma [CPTAC-COAD]). We also predicted microsatellite instability and provided result interpretability. Findings The model was developed on 459 colon tumour whole-slide images from TCGA-COAD, and externally validated on 165 rectum tumour whole-slide images from TCGA-READ and 161 colon tumour whole-slide images from CPTAC-COAD. For TCGA cohorts, our method accurately predicted the molecular classes of the gene mutations (area under the curve [AUCs] from 82.54 [95% CI 77.41–87.14] to 87.08 [83.28–90.82] on TCGA-COAD, and AUCs from 70.46 [61.37–79.61] to 81.80 [72.20–89.70] on TCGA-READ), along with genes with copy number alterations (AUCs from 81.98 [73.34–89.68] to 90.55 [86.02–94.89] on TCGA-COAD, and AUCs from 62.05 [48.94–73.46] to 76.48 [64.78–86.71] on TCGA-READ), microsatellite instability (MSI) status classification (AUC 83.92 [77.41–87.59] on TCGA-COAD, and AUC 61.28 [53.28–67.93] on TCGA-READ), and protein expressions (AUCs from 85.57 [81.16–89.44] to 89.64 [86.29–93.19] on TCGA-COAD, and AUCs from 51.77 [42.53–61.83] to 59.79 [50.79–68.57] on TCGA-READ). For the CPTAC-COAD cohort, our model predicted a panel of gene mutations with AUC values from 63.74 (95% CI 52.92–75.37) to 82.90 (73.69–90.71), genes with copy number alterations with AUC values from 62.39 (51.37–73.76) to 86.08 (79.67–91.74), and MSI status prediction with an AUC value of 73.15 (63.21–83.13). Interpretation We showed that spatially connected graph models enable molecular profile predictions in colon cancer and generalise to rectum cancer. After further validation, our method could be used to infer the prognostic value of multiscale molecular biomarkers and identify targeted therapies for patients with colon cancer. Funding This research has been partially funded by ARO MURI 805491, NSF IIS-1793883, NSF CNS-1747778, NSF IIS 1763523, DOD-ARO ACC-W911NF, and NSF OIA-2040638 to Dimitri N Metaxas.

Feature-Enhanced Graph Networks for Genetic Mutational Prediction Using Histopathological Images in Colon Cancer

  • Ding, Kexin
  • Liu, Qiao
  • Lee, Edward
  • Zhou, Mu
  • Lu, Aidong
  • Zhang, Shaoting
2020 Conference Proceedings, cited 0 times
Website
Mining histopathological and genetic data provides a unique avenue to deepen our understanding of cancer biology. However, extensive cancer heterogeneity across image- and molecular-scales poses technical challenges for feature extraction and outcome prediction. In this study, we propose a feature-enhanced graph network (FENet) for genetic mutation prediction using histopathological images in colon cancer. Unlike conventional approaches analyzing patch-based feature alone without considering their spatial connectivity, we seek to link and explore non-isomorphic topological structures in histopathological images. Our FENet incorporates feature enhancement in convolutional graph neural networks to aggregate discriminative features for capturing gene mutation status. Specifically, our approach could identify both local patch feature information and global topological structure in histopathological images simultaneously. Furthermore, we introduced an ensemble strategy by constructing multiple subgraphs to boost the prediction performance. Extensive experiments on the TCGA-COAD and TCGA-READ cohort including both histopathological images and three key genes’ mutation profiles (APC, KRAS, and TP53) demonstrated the superiority of FENet for key mutational outcome prediction in colon cancer.

Developing and validating a deep learning and radiomic model for glioma grading using multiplanar reconstructed magnetic resonance contrast-enhanced T1-weighted imaging: a robust, multi-institutional study

  • Ding, J.
  • Zhao, R.
  • Qiu, Q.
  • Chen, J.
  • Duan, J.
  • Cao, X.
  • Yin, Y.
Quant Imaging Med Surg 2022 Journal Article, cited 2 times
Website
Background: Although surgical pathology or biopsy is considered the gold standard for glioma grading, these procedures have limitations. This study set out to evaluate and validate the predictive performance of a deep learning radiomics model based on contrast-enhanced T1-weighted multiplanar reconstruction images for grading gliomas. Methods: Patients from three institutions who were diagnosed with gliomas based on surgical specimens and had multiplanar reconstructed (MPR) images were enrolled in this study. The training cohort included 101 patients from institution 1, comprising 43 high-grade glioma (HGG) patients and 58 low-grade glioma (LGG) patients, while the test cohort consisted of 50 patients from institutions 2 and 3 (25 HGG patients, 25 LGG patients). We then extracted radiomics features and deep learning features using six pretrained models from the MPR images. The Spearman correlation test and the recursive elimination feature selection method were used to reduce redundancy and select the most predictive features. Subsequently, three classifiers were used to construct classification models. The performance of the grading models was evaluated using the area under the receiver operating characteristic curve, sensitivity, specificity, accuracy, precision, and negative predictive value. Finally, the prediction performances on the test cohort were compared to determine the optimal classification model. Results: For the training cohort, 62% (13 out of 21) of the classification models constructed with MPR images from multiple planes outperformed those constructed with single-plane MPR images, and 61% (11 out of 18) of the classification models constructed with both radiomics features and deep learning features had higher area under the curve (AUC) values than those constructed with only radiomics or deep learning features. The optimal model was a random forest model that combined radiomics features and VGG16 deep learning features derived from MPR images, which achieved an AUC of 0.847 in the training cohort and 0.898 in the test cohort. In the test cohort, the sensitivity, specificity, and accuracy of the optimal model were 0.840, 0.760, and 0.800, respectively. Conclusions: Multiplanar CE-T1W MPR imaging features are more effective than features from single planes when differentiating HGG from LGG. The combination of deep learning features and radiomics features can effectively grade gliomas and assist clinical decision-making.

Automated segmentation refinement of small lung nodules in CT scans by local shape analysis

  • Diciotti, Stefano
  • Lombardo, Simone
  • Falchini, Massimo
  • Picozzi, Giulia
  • Mascalchi, Mario
IEEE Trans Biomed Eng 2011 Journal Article, cited 68 times
Website
One of the most important problems in the segmentation of lung nodules in CT imaging arises from possible attachments between nodules and other lung structures, such as vessels or pleura. In this report, we address the problem of vessel attachments by proposing an automated correction method applied to an initial rough segmentation of the lung nodule. The method is based on a local shape analysis of the initial segmentation making use of 3-D geodesic distance map representations. The correction method has the advantage that it locally refines the nodule segmentation along recognized vessel attachments only, without modifying the nodule boundary elsewhere. The method was tested using a simple initial rough segmentation obtained by fixed image thresholding. The validation of the complete segmentation algorithm was carried out on small lung nodules identified in the ITALUNG screening trial and on small nodules of the Lung Image Database Consortium (LIDC) dataset. In fully automated mode, 217/256 (84.8%) lung nodules of ITALUNG and 139/157 (88.5%) individual marks of lung nodules of LIDC were correctly outlined, and excellent reproducibility was also observed. By using an additional interactive mode, based on a controlled manual interaction, 233/256 (91.0%) lung nodules of ITALUNG and 144/157 (91.7%) individual marks of lung nodules of LIDC were correctly segmented overall. The proposed correction method could also be usefully applied to any existing nodule segmentation algorithm to improve the segmentation quality of juxta-vascular nodules.

The Next Frontier in Health Disparities-A Closer Look at Exploring Sex Differences in Glioma Data and Omics Analysis, from Bench to Bedside and Back

  • Diaz Rosario, M.
  • Kaur, H.
  • Tasci, E.
  • Shankavaram, U.
  • Sproull, M.
  • Zhuge, Y.
  • Camphausen, K.
  • Krauze, A.
2022 Journal Article, cited 0 times
Website
Sex differences are increasingly being explored and reported in oncology, and glioma is no exception. As potentially meaningful sex differences are uncovered, existing gender-derived disparities mirror data generated in retrospective and prospective trials, real-world large-scale data sets, and bench work involving animals and cell lines. The resulting disparities at the data level are wide-ranging, potentially resulting in both adverse outcomes and failure to identify and exploit therapeutic benefits. We set out to analyze the literature on women's data disparities in glioma by exploring the origins of data in this area to understand the representation of women in study samples and omics analyses. Given the current emphasis on inclusive study design and research, we wanted to explore if sex bias continues to exist in present-day data sets and how sex differences in data may impact conclusions derived from large-scale data sets, omics, biospecimen analysis, novel interventions, and standard of care management.

Theoretical tumor edge detection technique using multiple Bragg peak decomposition in carbon ion therapy

  • Dias, Marta Filipa Ferraz
  • Collins-Fekete, Charles-Antoine
  • Baroni, Guido
  • Riboldi, Marco
  • Seco, Joao
Biomedical Physics & Engineering Express 2019 Journal Article, cited 0 times
Website

Human-interpretable image features derived from densely mapped cancer pathology slides predict diverse molecular phenotypes

  • Diao, J. A.
  • Wang, J. K.
  • Chui, W. F.
  • Mountain, V.
  • Gullapally, S. C.
  • Srinivasan, R.
  • Mitchell, R. N.
  • Glass, B.
  • Hoffman, S.
  • Rao, S. K.
  • Maheshwari, C.
  • Lahiri, A.
  • Prakash, A.
  • McLoughlin, R.
  • Kerner, J. K.
  • Resnick, M. B.
  • Montalto, M. C.
  • Khosla, A.
  • Wapinski, I. N.
  • Beck, A. H.
  • Elliott, H. L.
  • Taylor-Weiner, A.
2021 Journal Article, cited 0 times
Website
Computational methods have made substantial progress in improving the accuracy and throughput of pathology workflows for diagnostic, prognostic, and genomic prediction. Still, lack of interpretability remains a significant barrier to clinical integration. We present an approach for predicting clinically-relevant molecular phenotypes from whole-slide histopathology images using human-interpretable image features (HIFs). Our method leverages >1.6 million annotations from board-certified pathologists across >5700 samples to train deep learning models for cell and tissue classification that can exhaustively map whole-slide images at two- and four-micron resolution. Cell- and tissue-type model outputs are combined into 607 HIFs that quantify specific and biologically-relevant characteristics across five cancer types. We demonstrate that these HIFs correlate with well-known markers of the tumor microenvironment and can predict diverse molecular signatures (AUROC 0.601-0.864), including expression of four immune checkpoint proteins and homologous recombination deficiency, with performance comparable to 'black-box' methods. Our HIF-based approach provides a comprehensive, quantitative, and interpretable window into the composition and spatial architecture of the tumor microenvironment.

Deep learning in head & neck cancer outcome prediction

  • Diamant, André
  • Chatterjee, Avishek
  • Vallières, Martin
  • Shenouda, George
  • Seuntjens, Jan
2019 Journal Article, cited 0 times
Website
Traditional radiomics involves the extraction of quantitative texture features from medical images in an attempt to determine correlations with clinical endpoints. We hypothesize that convolutional neural networks (CNNs) could enhance the performance of traditional radiomics by detecting image patterns that may not be covered by a traditional radiomic framework. We test this hypothesis by training a CNN to predict treatment outcomes of patients with head and neck squamous cell carcinoma, based solely on their pre-treatment computed tomography image. The training (194 patients) and validation sets (106 patients), which are mutually independent and include 4 institutions, come from The Cancer Imaging Archive. When compared to a traditional radiomic framework applied to the same patient cohort, our method results in an AUC of 0.88 in predicting distant metastasis. When combining our model with the previous model, the AUC improves to 0.92. Our framework yields models that are shown to explicitly recognize traditional radiomic features, can be directly visualized, and perform accurate outcome prediction.

3d texture analysis of solitary pulmonary nodules using co-occurrence matrix from volumetric lung CT images

  • Dhara, Ashis Kumar
  • Mukhopadhyay, Sudipta
  • Khandelwal, Niranjan
2013 Conference Proceedings, cited 7 times
Website

Measurement of spiculation index in 3D for solitary pulmonary nodules in volumetric lung CT images

  • Dhara, Ashis Kumar
  • Mukhopadhyay, Sudipta
  • Alam, Naved
  • Khandelwal, Niranjan
2013 Conference Paper, cited 4 times
Website

LeukoCapsNet: a resource-efficient modified CapsNet model to identify leukemia from blood smear images

  • Dhalla, Sabrina
  • Mittal, Ajay
  • Gupta, Savita
Neural Computing and Applications 2023 Journal Article, cited 0 times
Website
Leukemia is one of the deadliest cancers; it spreads at an exponential rate and has a detrimental impact on the leukocytes in human blood. To automate the process of leukemia detection, researchers have utilized deep learning networks to analyze blood smear images. In our research, we propose the use of capsule networks, which mimic the way the human brain actually works. These models are fed features from numerous convolution layers, each having its own set of additional skip connections. Features are then stored and processed as vectors, making the models rotation-invariant as well, a characteristic not found in other deep learning networks, specifically convolutional neural networks (CNNs). The network is then pruned by 20% to make it more deployable in resource-constrained environments. This research also compares the model's performance through four ablation experiments and concludes that the proposed model is optimal. It has also been tested on three different types of datasets to highlight its robustness. The average values over all three datasets are specificity: 96.97%, sensitivity: 96.81%, precision: 96.79% and accuracy: 97.44%. In a nutshell, the outcomes of the proposed model, PrunedResCapsNet, make it more dynamic and effective compared with other existing methods.

Social group optimization–assisted Kapur’s entropy and morphological segmentation for automated detection of COVID-19 infection from computed tomography images

  • Dey, Nilanjan
  • Rajinikanth, V
  • Fong, Simon James
  • Kaiser, M Shamim
  • Mahmud, Mufti
Cognitive Computation 2020 Journal Article, cited 0 times

Spatial habitats from multiparametric MR imaging are associated with signaling pathway activities and survival in glioblastoma

  • Dextraze, Katherine
  • Saha, Abhijoy
  • Kim, Donnie
  • Narang, Shivali
  • Lehrer, Michael
  • Rao, Anita
  • Narang, Saphal
  • Rao, Dinesh
  • Ahmed, Salmaan
  • Madhugiri, Venkatesh
  • Fuller, Clifton David
  • Kim, Michelle M
  • Krishnan, Sunil
  • Rao, Ganesh
  • Rao, Arvind
Oncotarget 2017 Journal Article, cited 0 times
Website
Glioblastomas (GBMs) show significant inter- and intra-tumoral heterogeneity, impacting response to treatment and resulting in an overall survival time of 12-15 months. To study glioblastoma phenotypic heterogeneity, multi-parametric magnetic resonance images (MRI) of 85 glioblastoma patients from The Cancer Genome Atlas were analyzed to characterize tumor-derived spatial habitats for their relationship with outcome (overall survival) and to identify their molecular correlates (i.e., determine associated tumor signaling pathways correlated with imaging-derived habitat measurements). Tumor sub-regions based on four sequences (fluid attenuated inversion recovery, T1-weighted, post-contrast T1-weighted, and T2-weighted) were defined by automated segmentation. From the relative intensity of pixels in the 3-dimensional tumor region, "imaging habitats" were identified and analyzed for their association with clinical and genetic data using survival modeling and Dirichlet regression, respectively. Sixteen distinct tumor sub-regions ("spatial imaging habitats") were derived, and those associated with overall survival (denoted "relevant" habitats) in glioblastoma patients were identified. Dirichlet regression implicated each relevant habitat with unique pathway alterations. Relevant habitats also had some pathways and cellular processes in common, including phosphorylation of STAT-1 and natural killer cell activity, consistent with cancer hallmarks. This work revealed the clinical relevance of MRI-derived spatial habitats and their relationship with oncogenic molecular mechanisms in patients with GBM. Characterizing the associations between imaging-derived phenotypic measurements and the genomic and molecular characteristics of tumors can enable insights into tumor biology, further enabling the practice of personalized cancer treatment. The analytical framework and workflow demonstrated in this study are inherently scalable to multiple MR sequences.

Development of a nomogram combining clinical staging with 18F-FDG PET/CT image features in non-small-cell lung cancer stage I–III

  • Desseroit, Marie-Charlotte
  • Visvikis, Dimitris
  • Tixier, Florent
  • Majdoub, Mohamed
  • Perdrisot, Rémy
  • Guillevin, Rémy
  • Le Rest, Catherine Cheze
  • Hatt, Mathieu
European journal of nuclear medicine and molecular imaging 2016 Journal Article, cited 34 times
Website

Computer-Aided Detection for Early Detection of Lung Cancer Using CT Images

  • Desai, Usha
  • Kamath, Sowmya
  • Shetty, Akshaya D.
  • Prabhu, M. Sandeep
2022 Conference Proceedings, cited 0 times
Website
Doctors face difficulty in the diagnosis of lung cancer due to the complex nature and clinical interrelations of CT scan images. Visual inspection and subjective evaluation methods are time consuming and tedious, which leads to inter- and intra-observer inconsistency or imprecise classification. Computer-Aided Detection (CAD) can help clinicians with objective decision-making, early diagnosis, and classification of cancerous abnormalities. In this work, CAD has been employed to enhance the accuracy, sensitivity, and specificity of automated detection, in which the stages of lung cancer are discriminated using image processing tools. Cancer is the second leading cause of death worldwide among non-communicable diseases. Lung cancer is, in fact, the most dangerous form of cancer affecting both genders. During the uncontrolled growth of abnormal cells, either or both sides of the lung begin to expand. The most widely used imaging technique for lung cancer diagnosis is Computerised Tomography (CT) scanning. In this work, the CAD method is used to discriminate between the stages of lung cancer in CT images. Abnormality detection consists of 4 steps: pre-processing, segmentation, extraction of features, and classification of input CT images. For the segmentation process, marker-controlled watershed segmentation and the K-means algorithm are used. Normal and abnormal information is extracted from the CT images and its characteristics are determined. Stages 1-4 of cancer were discriminated and graded with approximately 80% efficiency using a feedforward backpropagation neural network. Input data are collected from the Lung Image Database Consortium (LIDC), from which 100 of the 1018 available cases are used. A graphical user interface (GUI) is developed for the output display. This automated and robust CAD system is useful for accurate and quick screening of mass populations.

Investigation of inter-fraction target motion variations in the context of pencil beam scanned proton therapy in non-small cell lung cancer patients

  • den Otter, L. A.
  • Anakotta, R. M.
  • Weessies, M.
  • Roos, C. T. G.
  • Sijtsema, N. M.
  • Muijs, C. T.
  • Dieters, M.
  • Wijsman, R.
  • Troost, E. G. C.
  • Richter, C.
  • Meijers, A.
  • Langendijk, J. A.
  • Both, S.
  • Knopf, A. C.
Med Phys 2020 Journal Article, cited 0 times
Website
PURPOSE: For locally advanced-stage non-small cell lung cancer (NSCLC), inter-fraction target motion variations during the whole time span of a fractionated treatment course are assessed in a large and representative patient cohort. The primary objective is to develop a suitable motion monitoring strategy for pencil beam scanning proton therapy (PBS-PT) treatments of NSCLC patients during free breathing. METHODS: Weekly 4D computed tomography (4DCT; 41 patients) and daily 4D cone beam computed tomography (4DCBCT; 10 of 41 patients) scans were analyzed for a fully fractionated treatment course. Gross tumor volumes (GTVs) were contoured and the 3D displacement vectors of the centroid positions were compared for all scans. Furthermore, motion amplitude variations in different lung segments were statistically analyzed. The dosimetric impact of target motion variations and target motion assessment was investigated in exemplary patient cases. RESULTS: The median observed centroid motion was 3.4 mm (range: 0.2-12.4 mm) with an average variation of 2.2 mm (range: 0.1-8.8 mm). Ten of 32 patients (31.3%) with an initial motion <5 mm increased beyond a 5-mm motion amplitude during the treatment course. Motion observed in the 4DCBCT scans deviated on average 1.5 mm (range: 0.0-6.0 mm) from the motion observed in the 4DCTs. Larger motion variations for one example patient compromised treatment plan robustness while no dosimetric influence was seen due to motion assessment biases in another example case. CONCLUSIONS: Target motion variations were investigated during the course of radiotherapy for NSCLC patients. Patients with initial GTV motion amplitudes of < 2 mm can be assumed to be stable in motion during the treatment course. For treatments of NSCLC patients who exhibit motion amplitudes of > 2 mm, 4DCBCT should be considered for motion monitoring due to substantial motion variations observed.

Residual 3D U-Net with Localization for Brain Tumor Segmentation

  • Demoustier, Marc
  • Khemir, Ines
  • Nguyen, Quoc Duong
  • Martin-Gaffé, Lucien
  • Boutry, Nicolas
2022 Conference Paper, cited 0 times
Website
Gliomas are brain tumors, benign or malignant, originating from the neuronal support tissue called glia. They are considered rare tumors, and their highly variable prognosis is primarily related to several factors, including localization, size, degree of extension and certain immune factors. We propose an approach using a Residual 3D U-Net to segment these tumors with localization, a technique for centering and reducing the size of input images to make predictions more accurate and faster. We incorporated different training and post-processing techniques such as cross-validation and a minimum pixel threshold.
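
To make the core building block concrete, below is a minimal sketch of a 3D residual block of the kind such architectures stack inside the U-Net encoder and decoder (PyTorch). Channel counts, normalization, and activation are assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class ResidualBlock3D(nn.Module):
    """Basic 3D residual block: two conv-norm-activation stages plus a skip path."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.norm1 = nn.InstanceNorm3d(out_ch)
        self.conv2 = nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.norm2 = nn.InstanceNorm3d(out_ch)
        self.act = nn.LeakyReLU(inplace=True)
        # 1x1x1 projection so the skip path matches the output channel count.
        self.skip = (nn.Conv3d(in_ch, out_ch, kernel_size=1, bias=False)
                     if in_ch != out_ch else nn.Identity())

    def forward(self, x):
        out = self.act(self.norm1(self.conv1(x)))
        out = self.norm2(self.conv2(out))
        return self.act(out + self.skip(x))

x = torch.randn(1, 4, 64, 64, 64)  # 4-channel MRI patch (T1, T1ce, T2, FLAIR)
print(ResidualBlock3D(4, 16)(x).shape)  # torch.Size([1, 16, 64, 64, 64])
```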

CT-based radiomics stratification of tumor grade and TNM stage of clear cell renal cell carcinoma

  • Demirjian, Natalie L
  • Varghese, Bino A
  • Cen, Steven Y
  • Hwang, Darryl H
  • Aron, Manju
  • Siddiqui, Imran
  • Fields, Brandon K K
  • Lei, Xiaomeng
  • Yap, Felix Y
  • Rivas, Marielena
  • Reddy, Sharath S
  • Zahoor, Haris
  • Liu, Derek H
  • Desai, Mihir
  • Rhie, Suhn K
  • Gill, Inderbir S
  • Duddalwar, Vinay
Eur Radiol 2021 Journal Article, cited 0 times
Website
OBJECTIVES: To evaluate the utility of CT-based radiomics signatures in discriminating low-grade (grades 1-2) clear cell renal cell carcinomas (ccRCC) from high-grade (grades 3-4) and low TNM stage (stages I-II) ccRCC from high TNM stage (stages III-IV). METHODS: A total of 587 subjects (mean age 60.2 years +/- 12.2; range 22-88.7 years) with ccRCC were included. A total of 255 tumors were high grade and 153 were high stage. For each subject, one dominant tumor was delineated as the region of interest (ROI). Our institutional radiomics pipeline was then used to extract 2824 radiomics features across 12 texture families from the manually segmented volumes of interest. Separate iterations of the machine learning models using all extracted features (full model) as well as only a subset of previously identified robust metrics (robust model) were developed. Variable of importance (VOI) analysis was performed using the out-of-bag Gini index to identify the top 10 radiomics metrics driving each classifier. Model performance was reported using area under the receiver operating curve (AUC). RESULTS: The highest AUC to distinguish between low- and high-grade ccRCC was 0.70 (95% CI 0.62-0.78) and the highest AUC to distinguish between low- and high-stage ccRCC was 0.80 (95% CI 0.74-0.86). Comparable AUCs of 0.73 (95% CI 0.65-0.8) and 0.77 (95% CI 0.7-0.84) were reported using the robust model for grade and stage classification, respectively. VOI analysis revealed the importance of neighborhood operation-based methods, including GLCM, GLDM, and GLRLM, in driving the performance of the robust models for both grade and stage classification. CONCLUSION: Post-validation, CT-based radiomics signatures may prove to be useful tools to assess ccRCC grade and stage and could potentially add to current prognostic models. Multiphase CT-based radiomics signatures have potential to serve as a non-invasive stratification schema for distinguishing between low- and high-grade as well as low- and high-stage ccRCC. KEY POINTS: * Radiomics signatures derived from clinical multiphase CT images were able to stratify low- from high-grade ccRCC, with an AUC of 0.70 (95% CI 0.62-0.78). * Radiomics signatures derived from multiphase CT images yielded discriminative power to stratify low from high TNM stage in ccRCC, with an AUC of 0.80 (95% CI 0.74-0.86). * Models created using only robust radiomics features achieved comparable AUCs of 0.73 (95% CI 0.65-0.80) and 0.77 (95% CI 0.70-0.84) to the model with all radiomics features in classifying ccRCC grade and stage, respectively.
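
The texture families driving the robust models (GLCM, GLDM, GLRLM) are implemented in the open-source pyradiomics package. Below is a minimal sketch of extracting just those families; the authors used their own institutional pipeline, and the file names here are placeholders.

```python
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
for family in ("glcm", "gldm", "glrlm"):  # families flagged by the VOI analysis
    extractor.enableFeatureClassByName(family)

# Image and mask as SimpleITK-readable files (placeholder paths).
features = extractor.execute("ct_volume.nii.gz", "tumor_mask.nii.gz")
for name, value in features.items():
    if not name.startswith("diagnostics"):
        print(name, value)
```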

Uncertainty-Based Dynamic Graph Neighborhoods for Medical Segmentation

  • Demir, Ufuk
  • Ozer, Atahan
  • Sahin, Yusuf H.
  • Unal, Gozde
2021 Conference Paper, cited 0 times
Website
In recent years, deep learning based methods have shown success in essential medical image analysis tasks such as segmentation. Post-processing and refining the results of segmentation is a common practice to decrease the misclassifications originating from the segmentation network. In addition to widely used methods like Conditional Random Fields (CRFs), which focus on the structure of the segmented volume/area, a recent graph-based approach makes use of certain and uncertain points in a graph and refines the segmentation according to a small graph convolutional network (GCN). However, there are two drawbacks of that approach: most of the edges in the graph are assigned randomly, and the GCN is trained independently of the segmentation network. To address these issues, we define a new neighbor-selection mechanism according to feature distances and combine the two networks in the training procedure. According to the experimental results on pancreas segmentation from Computed Tomography (CT) images, we demonstrate improvement in the quantitative measures. Also, examining the dynamic neighbors created by our method, edges between semantically similar image parts are observed. The proposed method also shows qualitative enhancements in the segmentation maps, as demonstrated in the visual results.

Computer-aided detection of lung nodules using outer surface features

  • Demir, Önder
  • Yılmaz Çamurcu, Ali
Bio-Medical Materials and Engineering 2015 Journal Article, cited 28 times
Website
In this study, a computer-aided detection (CAD) system was developed for the detection of lung nodules in computed tomography images. The CAD system consists of four phases, including two-dimensional and three-dimensional preprocessing phases. In the feature extraction phase, four different groups of features are extracted from volumes of interest: morphological features, statistical and histogram features, statistical and histogram features of the outer surface, and texture features of the outer surface. The support vector machine algorithm is optimized using particle swarm optimization for classification. The CAD system provides 97.37% sensitivity, 86.38% selectivity, 88.97% accuracy and 2.7 false positives per scan using three groups of classification features. After the inclusion of outer surface texture features, the classification results of the CAD system reach 98.03% sensitivity, 87.71% selectivity, 90.12% accuracy and 2.45 false positives per scan. Experimental results demonstrate that outer surface texture features of nodule candidates are useful to increase sensitivity and decrease the number of false positives in the detection of lung nodules in computed tomography images.

Mesoscopic imaging of glioblastomas: Are diffusion, perfusion and spectroscopic measures influenced by the radiogenetic phenotype?

  • Demerath, Theo
  • Simon-Gabriel, Carl Philipp
  • Kellner, Elias
  • Schwarzwald, Ralf
  • Lange, Thomas
  • Heiland, Dieter Henrik
  • Reinacher, Peter
  • Staszewski, Ori
  • Mast, Hansjorg
  • Kiselev, Valerij G
  • Egger, Karl
  • Urbach, Horst
  • Weyerbrock, Astrid
  • Mader, Irina
Neuroradiology Journal 2017 Journal Article, cited 5 times
Website
The purpose of this study was to identify markers from perfusion, diffusion, and chemical shift imaging in glioblastomas (GBMs) and to correlate them with genetically determined and previously published patterns of structural magnetic resonance (MR) imaging. Twenty-six patients (mean age 60 years, 13 female) with GBM were investigated. Imaging consisted of native and contrast-enhanced 3D data, perfusion, diffusion, and spectroscopic imaging. In the presence of minor necrosis, cerebral blood volume (CBV) was higher (median +/- SD, 2.23% +/- 0.93) than in pronounced necrosis (1.02% +/- 0.71), pcorr = 0.0003. CBV adjacent to peritumoral fluid-attenuated inversion recovery (FLAIR) hyperintensity was lower in edema (1.72% +/- 0.31) than in infiltration (1.91% +/- 0.35), pcorr = 0.039. Axial diffusivity adjacent to peritumoral FLAIR hyperintensity was lower in severe mass effect (1.08*10(-3) mm(2)/s +/- 0.08) than in mild mass effect (1.14*10(-3) mm(2)/s +/- 0.06), pcorr = 0.048. Myo-inositol was positively correlated with a marker for mitosis (Ki-67) in contrast-enhancing tumor, r = 0.5, pcorr = 0.0002. Altered CBV and axial diffusivity in adjacent normal-appearing matter, even outside the FLAIR hyperintensity, may be related to angiogenesis pathways and to activated proliferation genes. The correlation between myo-inositol and Ki-67 might be attributed to its binding to cell surface receptors regulating tumorous proliferation of astrocytic cells.

Predicting response before initiation of neoadjuvant chemotherapy in breast cancer using new methods for the analysis of dynamic contrast enhanced MRI (DCE MRI) data

  • DeGrandchamp, Joseph B
  • Whisenant, Jennifer G
  • Arlinghaus, Lori R
  • Abramson, VG
  • Yankeelov, Thomas E
  • Cárdenas-Rodríguez, Julio
2016 Conference Proceedings, cited 5 times
Website
The pharmacokinetic parameters derived from dynamic contrast enhanced (DCE) MRI have shown promise as biomarkers for tumor response to therapy. However, standard methods of analyzing DCE MRI data (Tofts model) require high temporal resolution, high signal-to-noise ratio (SNR), and the Arterial Input Function (AIF). Such models produce reliable biomarkers of response only when a therapy has a large effect on the parameters. We recently reported a method that addresses these limitations, the Linear Reference Region Model (LRRM). Similar to other reference region models, the LRRM needs no AIF. Additionally, the LRRM is more accurate and precise than standard methods at low SNR and slow temporal resolution, suggesting LRRM-derived biomarkers could be better predictors. Here, the LRRM, Non-linear Reference Region Model (NRRM), Linear Tofts Model (LTM), and Non-linear Tofts Model (NLTM) were used to estimate the RKtrans between muscle and tumor (or the Ktrans for Tofts) and the tumor kep,TOI for 39 breast cancer patients who received neoadjuvant chemotherapy (NAC). These parameters and the receptor statuses of each patient were used to construct cross-validated predictive models to classify patients as complete pathological responders (pCR) or non-complete pathological responders (non-pCR) to NAC. Model performance was evaluated using the area under the ROC curve (AUC). The AUC for receptor status alone was 0.62, while the best performances using predictors from the LRRM, NRRM, LTM, and NLTM were AUCs of 0.79, 0.55, 0.60, and 0.59, respectively. This suggests that the LRRM can be used to predict response to NAC in breast cancer.
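As a worked illustration of the linear reference region idea, the sketch below fits the LRRM design as I understand it from the reference-region literature, C_t(t) = a*C_rr(t) + b*Int[C_rr] - c*Int[C_t] with a = RKtrans and c = kep,TOI, by ordinary least squares on synthetic curves; the curve shapes and constants are invented for the example.

```python
# Hedged sketch of linear reference-region fitting on synthetic data:
# no AIF needed, the fit is a plain linear least-squares problem.
import numpy as np
from scipy.integrate import cumulative_trapezoid

t = np.linspace(0, 5, 120)                       # minutes (synthetic grid)
C_rr = 0.5 * (1 - np.exp(-2 * t))                # reference-region curve
C_t = 0.8 * (1 - np.exp(-1.5 * t))               # tumour curve

int_rr = cumulative_trapezoid(C_rr, t, initial=0)
int_t = cumulative_trapezoid(C_t, t, initial=0)

A = np.column_stack([C_rr, int_rr, -int_t])      # design matrix
(a, b, c), *_ = np.linalg.lstsq(A, C_t, rcond=None)
print(f"RKtrans ~ {a:.3f}, kep_toi ~ {c:.3f}")
```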

Local mesh ternary patterns: a new descriptor for MRI and CT biomedical image indexing and retrieval

  • Deep, G.
  • Kaur, L.
  • Gupta, S.
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2016 Journal Article, cited 3 times
Website
This paper proposes a new pattern-based feature called the local mesh ternary pattern for biomedical image indexing and retrieval. The standard local binary patterns (LBP) and local ternary patterns (LTP) encode the greyscale relationship between the centre pixel and its surrounding neighbours in a two-dimensional (2D) local region of an image, whereas the proposed method encodes the greyscale relationship among the neighbours for a given centre pixel with three selected directions of mesh patterns generated from the 2D image. The novelty of the proposed method is that it uses ternary patterns from mesh patterns of an image to encode more spatial structure information, which leads to better retrieval. Experiments have been carried out to prove the worth of the proposed algorithm on three different types of benchmarked biomedical databases: (i) computed tomography (CT) scanned lung image databases, LIDC-IDRI-CT and VIA/I–ELCAP-CT, and (ii) a brain magnetic resonance imaging (MRI) database, OASIS-MRI. The results demonstrate that the proposed method yields better performance in terms of average retrieval precision and average retrieval rate over state-of-the-art feature extraction techniques like LBP, LTP, local mesh pattern, etc.
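For orientation, the ternary encoding that local mesh ternary patterns build on can be sketched as follows. This is the standard LTP step on a 3x3 neighbourhood (the threshold t is an illustrative choice), not the authors' mesh extension.

```python
# Sketch of the standard local ternary pattern (LTP) building block:
# each neighbour is coded +1 / 0 / -1 against the centre, then the +1 and -1
# halves are packed into two 8-bit maps whose histograms act as features.
import numpy as np

def ltp_codes(img, t=5):
    img = img.astype(np.int32)
    c = img[1:-1, 1:-1]
    # eight neighbours, clockwise from top-left
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    upper = np.zeros_like(c)
    lower = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        n = img[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        upper |= (n >= c + t).astype(np.int32) << bit  # ternary +1 half
        lower |= (n <= c - t).astype(np.int32) << bit  # ternary -1 half
    return upper, lower

up, lo = ltp_codes(np.random.randint(0, 256, (64, 64)))
```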

Directional local ternary quantized extrema pattern: A new descriptor for biomedical image indexing and retrieval

  • Deep, G
  • Kaur, L
  • Gupta, S
Engineering Science and Technology, an International Journal 2016 Journal Article, cited 9 times
Website

Automated MRI based pipeline for segmentation and prediction of grade, IDH mutation and 1p19q co-deletion in glioma

  • Decuyper, M.
  • Bonte, S.
  • Deblaere, K.
  • Van Holen, R.
Comput Med Imaging Graph 2021 Journal Article, cited 42 times
Website
In the WHO glioma classification guidelines, grade (glioblastoma versus lower-grade glioma), IDH mutation and 1p/19q co-deletion status play a central role as they are important markers for prognosis and optimal therapy planning. Currently, diagnosis requires invasive surgical procedures. Therefore, we propose an automatic segmentation and classification pipeline based on routinely acquired pre-operative MRI (T1, T1 post-contrast, T2 and/or FLAIR). A 3D U-Net was designed for segmentation and trained on the BraTS 2019 training dataset. After segmentation, the 3D tumor region of interest is extracted from the MRI and fed into a CNN to simultaneously predict grade, IDH mutation and 1p19q co-deletion. Multi-task learning made it possible to handle missing labels and train one network on a large dataset of 628 patients, collected from The Cancer Imaging Archive and BraTS databases. Additionally, the network was validated on an independent dataset of 110 patients retrospectively acquired at the Ghent University Hospital (GUH). Segmentation performance calculated on the BraTS validation set shows an average whole-tumor Dice score of 90% and increased robustness to missing image modalities, achieved by randomly excluding input MRI during training. Classification area under the curve scores are 93%, 94% and 82% on the TCIA test data and 94%, 86% and 87% on the GUH data for grade, IDH and 1p19q status, respectively. We developed a fast, automatic pipeline to segment glioma and accurately predict important (molecular) markers based on pre-therapy MRI.
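The multi-task trick mentioned here (training one network although not every patient carries all three labels) is commonly implemented by masking each head's loss. A hedged PyTorch sketch follows; the label convention (-1 for unknown) and the uniform averaging are assumptions of this example, not the authors' code.

```python
# Minimal sketch of multi-task training with missing labels: each head's
# loss term is evaluated only on patients for which that label exists.
import torch
import torch.nn.functional as F

def masked_multitask_loss(logits, labels):
    """logits/labels: dicts keyed by 'grade', 'idh', '1p19q';
    labels use -1 where the marker is unknown for a patient."""
    total, n_terms = 0.0, 0
    for task in ("grade", "idh", "1p19q"):
        y = labels[task]
        known = y >= 0                        # patients with this label
        if known.any():
            total = total + F.binary_cross_entropy_with_logits(
                logits[task][known], y[known].float())
            n_terms += 1
    return total / max(n_terms, 1)

logits = {k: torch.randn(8) for k in ("grade", "idh", "1p19q")}
labels = {k: torch.tensor([0, 1, -1, 1, -1, 0, 1, -1]) for k in logits}
print(masked_multitask_loss(logits, labels))
```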

Ensembling Voxel-Based and Box-Based Model Predictions for Robust Lesion Detection

  • Debs, Noëlie
  • Routier, Alexandre
  • Abi-Nader, Clément
  • Marcoux, Arnaud
  • Bône, Alexandre
  • Rohé, Marc-Michel
2024 Book Section, cited 0 times
Website
This paper presents a novel generic method to improve lesion detection by ensembling semantic segmentation and object detection models. The proposed approach benefits from both voxel-based and box-based predictions, thus improving the ability to accurately detect lesions. The method consists of 3 main steps: (i) semantic segmentation and object detection models are trained separately; (ii) voxel-based and box-based predictions are matched spatially; (iii) corresponding lesion presence probabilities are combined into summary detection maps. We illustrate and validate the robustness of the proposed approach on three different oncology applications: liver and pancreas neoplasm detection in single-phase CT, and significant prostate cancer detection in multi-modal MRI. Performance is evaluated on publicly-available databases, and compared to two state-of-the-art baseline methods. The proposed ensembling approach improves the average precision metric in all considered applications, with an 8% gain for prostate cancer.
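Steps (ii) and (iii) can be illustrated with a toy spatial-matching rule: connected components of the voxel probability map are matched to detector boxes by overlap, and the two probabilities are combined. The 50% containment rule and plain averaging below are stand-ins for whatever rule the paper actually uses.

```python
# Hedged sketch: match voxel blobs to detector boxes and combine
# lesion-presence probabilities (synthetic 3D volume, one fake box).
import numpy as np
from scipy import ndimage

prob_map = np.random.rand(32, 64, 64) * (np.random.rand(32, 64, 64) > 0.995)
boxes = [((5, 10, 10, 15, 20, 20), 0.8)]   # (z0,y0,x0,z1,y1,x1), score

blobs, n = ndimage.label(prob_map > 0.5)
for blob_id in range(1, n + 1):
    mask = blobs == blob_id
    p_voxel = prob_map[mask].max()          # blob-level probability
    zs, ys, xs = np.where(mask)
    combined = p_voxel
    for (z0, y0, x0, z1, y1, x1), p_box in boxes:
        inside = ((zs >= z0) & (zs < z1) & (ys >= y0) & (ys < y1)
                  & (xs >= x0) & (xs < x1))
        if inside.mean() > 0.5:             # blob mostly inside the box
            combined = 0.5 * (p_voxel + p_box)
    print(blob_id, combined)
```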

Probabilistic Tissue Mapping for Tumor Segmentation and Infiltration Detection of Glioma

  • De Sutter, Selene
  • Geens, Wietse
  • Bossa, Matías
  • Vanbinst, Anne-Marie
  • Duerinck, Johnny
  • Vandemeulebroucke, Jef
2023 Book Section, cited 0 times
Website
Segmentation of glioma structures is vital for therapy planning. Although state-of-the-art algorithms achieve impressive results when compared to ground-truth manual delineations, one could argue that the binary nature of these labels does not properly reflect the underlying biology, nor does it account for uncertainties in the predicted segmentations. Moreover, the tumor infiltration beyond the contrast-enhanced lesion – visually imperceptible on imaging – is often ignored despite its potential role in tumor recurrence. We propose an intensity-based probabilistic model for brain tissue mapping based on conventional MRI sequences. We evaluated its value in the binary segmentation of the tumor and its subregions, and in the visualisation of possible infiltration. The model achieves a median Dice of 0.82 in the detection of the whole tumor, but suffers from confusion between different subregions. Preliminary results for the tumor probability maps encourage further investigation of the model regarding infiltration detection.

Segmentação Automática de Candidatos a Nódulos Pulmonares em Imagens de Tomografia Computadorizada

  • De Moura, Maria
  • De Sousa, Alcilene
  • De Oliveira, Ivo
  • Mesquita, Laércio
  • Drumond, Patrícia
2015 Conference Paper, cited 0 times
Website
This paper presents an algorithm for automatic segmentation of pulmonary nodule candidates in chest computed tomography images. The methodology includes image acquisition, noise elimination, segmentation of the pulmonary parenchyma, and segmentation of pulmonary nodule candidates. The use of the Wiener filter and the application of an ideal threshold give the algorithm a significant improvement in results, allowing it to detect a greater number of nodules in the images. The tests were conducted using a set of images from the LIDC-IDRI database containing 708 nodules. The test results showed that the algorithm located 93.08% of the nodules considered.
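A rough Python sketch of the described slice-level pipeline follows, with Otsu's method standing in for the paper's "ideal threshold"; the input array is a placeholder rather than a real CT slice.

```python
# Hedged sketch of the pipeline: Wiener denoising, global thresholding,
# then connected-component analysis for nodule candidates.
import numpy as np
from scipy.signal import wiener
from scipy import ndimage
from skimage.filters import threshold_otsu

ct_slice = np.random.rand(128, 128)             # placeholder for a HU slice
denoised = wiener(ct_slice, mysize=5)           # noise elimination
binary = denoised > threshold_otsu(denoised)    # stand-in "ideal" threshold
labels, n_candidates = ndimage.label(binary)    # candidate nodule regions
print(n_candidates)
```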

Machine learning: a useful radiological adjunct in determination of a newly diagnosed glioma’s grade and IDH status

  • De Looze, Céline
  • Beausang, Alan
  • Cryan, Jane
  • Loftus, Teresa
  • Buckley, Patrick G
  • Farrell, Michael
  • Looby, Seamus
  • Reilly, Richard
  • Brett, Francesca
  • Kearney, Hugh
Journal of Neuro-Oncology 2018 Journal Article, cited 0 times

Impact of GAN-based lesion-focused medical image super-resolution on the robustness of radiomic features

  • de Farias, E. C.
  • di Noia, C.
  • Han, C.
  • Sala, E.
  • Castelli, M.
  • Rundo, L.
2021 Journal Article, cited 12 times
Website
Robust machine learning models based on radiomic features might allow for accurate diagnosis, prognosis, and medical decision-making. Unfortunately, the lack of standardized radiomic feature extraction has hampered their clinical use. Since the radiomic features tend to be affected by low voxel statistics in regions of interest, increasing the sample size would improve their robustness in clinical studies. Therefore, we propose a Generative Adversarial Network (GAN)-based lesion-focused framework for Computed Tomography (CT) image Super-Resolution (SR); for the lesion (i.e., cancer) patch-focused training, we incorporate Spatial Pyramid Pooling (SPP) into GAN-Constrained by the Identical, Residual, and Cycle Learning Ensemble (GAN-CIRCLE). At 2× SR, the proposed model achieved better perceptual quality with less blurring than the other considered state-of-the-art SR methods, while producing comparable results at 4× SR. We also evaluated the robustness of our model's radiomic features in terms of quantization on a different lung cancer CT dataset using Principal Component Analysis (PCA). Intriguingly, the most important radiomic features in our PCA-based analysis were the most robust features extracted on the GAN-super-resolved images. These achievements pave the way for the application of GAN-based image Super-Resolution techniques for studies of radiomics for robust biomarker discovery.

Impact of GAN-based Lesion-Focused Medical Image Super-Resolution on Radiomic Feature Robustness

  • de Farias, Erick Costa
2023 Thesis, cited 0 times
Website
Robust machine learning models based on radiomic features might allow for accurate diagnosis, prognosis, and medical decision-making. Unfortunately, the lack of standardized radiomic feature extraction has hampered their clinical use. Since the radiomic features tend to be affected by low voxel statistics in regions of interest, increasing the sample size would improve their robustness in clinical studies. Therefore, we propose a Generative Adversarial Network (GAN)-based lesion-focused framework for Computed Tomography (CT) image Super-Resolution (SR); for the lesion (i.e., cancer) patch-focused training, we incorporate Spatial Pyramid Pooling (SPP) into GAN-Constrained by the Identical, Residual, and Cycle Learning Ensemble (GAN-CIRCLE). At 2× SR, the proposed model achieved better perceptual quality with less blurring than the other considered state-of-the-art SR methods, while producing comparable results at 4× SR. We also evaluated the robustness of our model’s radiomic feature in terms of quantization on a different lung cancer CT dataset using Principal Component Analysis (PCA). Intriguingly, the most important radiomic features in our PCA-based analysis were the most robust features extracted on the GAN-super-resolved images. These achievements pave the way for the application of GAN-based image Super-Resolution techniques for studies of radiomics for robust biomarker discovery.

A blockchain-based protocol for tracking user access to shared medical imaging

  • de Aguiar, Erikson J.
  • dos Santos, Alyson J.
  • Meneguette, Rodolfo I.
  • De Grande, Robson E.
  • Ueyama, Jó
Future Generation Computer Systems 2022 Journal Article, cited 0 times
Website
Modern healthcare systems are complex and regularly share sensitive data among multiple stakeholders, such as doctors, patients, and pharmacists. Patients’ data has increased and requires safe methods for its management. Research works related to blockchain, such as MIT MedRec, have strived to draft trustworthy and immutable systems to share data. However, blockchain may be challenging in healthcare scenarios due to issues about privacy and control of data sharing destinations. This paper presents a protocol for tracking shared medical data, which includes images, and controlling the medical data access by multiple conflicting stakeholders. Several efforts rely on blockchain for healthcare, but just a few are concerned about malicious data leakage in blockchain-based healthcare systems. We implement a token mechanism stored in DICOM files and managed by Hyperledger Fabric Blockchain. Our findings and evaluations revealed low chances of a hash collision, such as employing a fitting-resistance birthday attack. Although our solution was devised for healthcare, it can inspire and be easily ported to other blockchain-based application scenarios, such as Ethereum or Hyperledger Besu for business networks.
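The protocol itself runs on Hyperledger Fabric; as a small illustration of the token mechanism described (a token derived from the image and stored in the DICOM file), here is a hedged Python sketch using pydicom. The private-tag group, creator string, and test file are arbitrary choices for this example, not the paper's layout.

```python
# Illustrative sketch only: derive a tracking token from a DICOM file's
# pixel data and store it in a private tag; in the paper's design the token
# would then be registered and managed on-chain.
import hashlib
import pydicom
from pydicom.data import get_testdata_file

ds = pydicom.dcmread(get_testdata_file("CT_small.dcm"))
token = hashlib.sha256(ds.PixelData).hexdigest()

# (0x000b, "ACCESS-TRACKING") is an arbitrary private slot for this sketch
block = ds.private_block(0x000b, "ACCESS-TRACKING", create=True)
block.add_new(0x01, "LO", token[:64])
ds.save_as("tracked.dcm")
print("token:", token)
```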

Cerberus: A Multi-headed Network for Brain Tumor Segmentation

  • Daza, Laura
  • Gómez, Catalina
  • Arbeláez, Pablo
2021 Book Section, cited 4 times
Website
The automated analysis of medical images requires robust and accurate algorithms that address the inherent challenges of identifying heterogeneous anatomical and pathological structures, such as brain tumors, in large volumetric images. In this paper, we present Cerberus, a single lightweight convolutional neural network model for the segmentation of fine-grained brain tumor regions in multichannel MRIs. Cerberus has an encoder-decoder architecture that takes advantage of a shared encoding phase to learn common representations for these regions and, then, uses specialized decoders to produce detailed segmentations. Cerberus learns to combine the weights learned for each category to produce a final multi-label segmentation. We evaluate our approach on the official test set of the Brain Tumor Segmentation Challenge 2020, and we obtain dice scores of 0.807 for enhancing tumor, 0.867 for whole tumor and 0.826 for tumor core.

Cross-linking breast tumor transcriptomic states and tissue histology

  • Dawood, M.
  • Eastwood, M.
  • Jahanifar, M.
  • Young, L.
  • Ben-Hur, A.
  • Branson, K.
  • Jones, L.
  • Rajpoot, N.
  • Minhas, Fuaa
2023 Journal Article, cited 1 times
Website
Identification of the gene expression state of a cancer patient from routine pathology imaging and characterization of its phenotypic effects have significant clinical and therapeutic implications. However, prediction of expression of individual genes from whole slide images (WSIs) is challenging due to co-dependent or correlated expression of multiple genes. Here, we use a purely data-driven approach to first identify groups of genes with co-dependent expression and then predict their status from WSIs using a bespoke graph neural network. These gene groups allow us to capture the gene expression state of a patient with a small number of binary variables that are biologically meaningful and carry histopathological insights for clinical and therapeutic use cases. Prediction of gene expression state based on these gene groups allows associating histological phenotypes (cellular composition, mitotic counts, grading, etc.) with underlying gene expression patterns and opens avenues for gaining biological insights from routine pathology imaging directly.

AI-based Prognostic Imaging Biomarkers for Precision Neurooncology: the ReSPOND Consortium

  • Davatzikos, C.
  • Barnholtz-Sloan, J. S.
  • Bakas, S.
  • Colen, R.
  • Mahajan, A.
  • Quintero, C. B.
  • Font, J. C.
  • Puig, J.
  • Jain, R.
  • Sloan, A. E.
  • Badve, C.
  • Marcus, D. S.
  • Choi, Y. S.
  • Lee, S. K.
  • Chang, J. H.
  • Poisson, L. M.
  • Griffith, B.
  • Dicker, A. P.
  • Flanders, A. E.
  • Booth, T. C.
  • Rathore, S.
  • Akbari, H.
  • Sako, C.
  • Bilello, M.
  • Shukla, G.
  • Kazerooni, A. F.
  • Brem, S.
  • Lustig, R.
  • Mohan, S.
  • Bagley, S.
  • Nasrallah, M.
  • O'Rourke, D. M.
2020 Journal Article, cited 0 times
Website

Brain tumor image pixel segmentation and detection using an aggregation of GAN models with vision transformer

  • Datta, Priyanka
  • Rohilla, Rajesh
International Journal of Imaging Systems and Technology 2023 Journal Article, cited 0 times
Website
A number of applications in the field of medical analysis require the difficult and crucial tasks of brain tumor detection and segmentation from magnetic resonance imaging (MRI). Since each type of brain imaging provides distinctive information about each tumor component, we first suggest a normalization preprocessing method along with pixel segmentation to create a flexible and successful brain tumor segmentation system. Generative adversarial networks (GANs) make the creation of synthetic images advantageous in many fields, but combining different GANs to capture distributed features can make the model very complex and confusing, while a standalone GAN may only retrieve localized features in the latent version of an image. To achieve global and local feature extraction in a single model, we have used a vision transformer (ViT) along with a standalone GAN, which further improves the similarity of the images and can increase the performance of the model for tumor detection. By effectively overcoming the constraints of data scarcity, high computational time, and lower discrimination capability, our suggested model achieves better accuracy and lower computational time, and captures the information variance in various representations of the original images. The proposed model was evaluated on the BraTS 2020 dataset and the Masoud2021 dataset, that is, a combination of the three datasets SARTAJ, Figshare, and BR35H. The obtained results demonstrate that the suggested model is capable of producing fine-quality images with accuracy and sensitivity scores of 0.9765 and 0.977 on the BraTS 2020 dataset as well as 0.9899 and 0.9683 on the Masoud2021 dataset.

An Efficient Detection and Classification of Acute Leukemia using Transfer Learning and Orthogonal Softmax Layer-based Model

  • Das, P. K.
  • Sahoo, B.
  • Meher, S.
2022 Journal Article, cited 0 times
Website
For the early diagnosis of hematological disorders like blood cancer, microscopic analysis of blood cells is very important. Traditional deep CNNs lead to overfitting when they receive small medical image datasets such as ALLIDB1, ALLIDB2, and ASH. This paper proposes a new and effective model for classifying and detecting Acute Lymphoblastic Leukemia (ALL) or Acute Myelogenous Leukemia (AML) that delivers excellent performance on small medical datasets. Here, we have proposed a novel Orthogonal SoftMax Layer (OSL)-based acute leukemia detection model that consists of ResNet18-based deep feature extraction followed by efficient OSL-based classification. OSL is integrated with the ResNet18 to improve the classification performance by making the weight vectors orthogonal to each other. Hence, it integrates ResNet benefits (residual learning and identity mapping) with the benefits of OSL-based classification (improved feature discrimination capability and computational efficiency). Furthermore, we have introduced extra dropout and ReLU layers in the architecture to achieve a faster network with enhanced performance. The performance verification is performed on the standard ALLIDB1, ALLIDB2, and C_NMC_2019 datasets for efficient ALL detection and on the ASH dataset for effective AML detection. The experimental performance demonstrates the superiority of the proposed model over other competing models.
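The core idea, an output layer whose class weight vectors are pushed towards orthogonality, can be sketched with a soft orthogonality penalty; this is one plausible reading of OSL for illustration, not the authors' exact formulation.

```python
# Hedged sketch: a final linear layer whose class weight vectors are
# encouraged to be orthogonal via a penalty added to the task loss.
import torch
import torch.nn as nn

class OrthogonalSoftmaxHead(nn.Module):
    def __init__(self, in_features, n_classes):
        super().__init__()
        self.fc = nn.Linear(in_features, n_classes, bias=False)

    def forward(self, x):
        return self.fc(x)                      # logits; softmax/CE outside

    def ortho_penalty(self):
        w = self.fc.weight                     # (n_classes, in_features)
        gram = w @ w.t()
        eye = torch.eye(gram.size(0), device=w.device)
        return ((gram - eye) ** 2).sum()       # 0 when rows are orthonormal

head = OrthogonalSoftmaxHead(512, 2)           # e.g. ALL vs AML features
x = torch.randn(4, 512)
loss = nn.functional.cross_entropy(head(x), torch.tensor([0, 1, 0, 1]))
loss = loss + 1e-3 * head.ortho_penalty()      # penalty weight is assumed
loss.backward()
```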

Feature Extraction In Medical Images by Using Deep Learning Approach

  • Dara, S
  • Tumma, P
  • Eluri, NR
  • Kancharla, GR
International Journal of Pure and Applied Mathematics 2018 Journal Article, cited 0 times
Website

Immunotherapy in Metastatic Colorectal Cancer: Could the Latest Developments Hold the Key to Improving Patient Survival?

  • Damilakis, E.
  • Mavroudis, D.
  • Sfakianaki, M.
  • Souglakos, J.
Cancers (Basel) 2020 Journal Article, cited 0 times
Website
Immunotherapy has considerably increased the number of anticancer agents in many tumor types including metastatic colorectal cancer (mCRC). Anti-PD-1 (programmed death 1) and cytotoxic T-lymphocyte-associated antigen 4 (CTLA-4) immune checkpoint inhibitors (ICI) have been shown to benefit the mCRC patients with mismatch repair deficiency (dMMR) or high microsatellite instability (MSI-H). However, ICI is not effective in mismatch repair proficient (pMMR) colorectal tumors, which constitute a large population of patients. Several clinical trials evaluating the efficacy of immunotherapy combined with chemotherapy, radiation therapy, or other agents are currently ongoing to extend the benefit of immunotherapy to pMMR mCRC cases. In dMMR patients, MSI testing through immunohistochemistry and/or polymerase chain reaction can be used to identify patients that will benefit from immunotherapy. Next-generation sequencing has the ability to detect MSI-H using a low amount of nucleic acids and its application in clinical practice is currently being explored. Preliminary data suggest that radiomics is capable of discriminating MSI from microsatellite stable mCRC and may play a role as an imaging biomarker in the future. Tumor mutational burden, neoantigen burden, tumor-infiltrating lymphocytes, immunoscore, and gastrointestinal microbiome are promising biomarkers that require further investigation and validation.

Development of a Convolutional Neural Network Based Skull Segmentation in MRI Using Standard Tessellation Language Models

  • Dalvit Carvalho da Silva, R.
  • Jenkyn, T. R.
  • Carranza, V. A.
J Pers Med 2021 Journal Article, cited 0 times
Website
Segmentation is crucial in medical imaging analysis to help extract regions of interest (ROI) from different imaging modalities. The aim of this study is to develop and train a 3D convolutional neural network (CNN) for skull segmentation in magnetic resonance imaging (MRI). 58 gold standard volumetric labels were created from computed tomography (CT) scans in standard tessellation language (STL) models. These STL models were converted into matrices and overlapped on the 58 corresponding MR images to create the MRI gold standards labels. The CNN was trained with these 58 MR images and a mean +/- standard deviation (SD) Dice similarity coefficient (DSC) of 0.7300 +/- 0.04 was achieved. A further investigation was carried out where the brain region was removed from the image with the help of a 3D CNN and manual corrections by using only MR images. This new dataset, without the brain, was presented to the previous CNN which reached a new mean +/- SD DSC of 0.7826 +/- 0.03. This paper aims to provide a framework for segmenting the skull using CNN and STL models, as the 3D CNN was able to segment the skull with a certain precision.

The Role of Transient Vibration of the Skull on Concussion

  • Dalvit Carvalho da Silva, Rodrigo
2022 Thesis, cited 0 times
Website
Concussion is a traumatic brain injury usually caused by a direct or indirect blow to the head that affects brain function. The maximum mechanical impedance of the brain tissue occurs at 450±50 Hz and may be affected by the skull resonant frequencies. After an impact to the head, vibration resonance of the skull damages the underlying cortex. The skull deforms and vibrates, like a bell, for 3 to 5 milliseconds, bruising the cortex. Furthermore, the deceleration forces the frontal and temporal cortex against the skull, eliminating a layer of cerebrospinal fluid. When the skull vibrates, the force spreads directly to the cortex, with no layer of cerebrospinal fluid to reflect the wave or cushion its force. To date, there has been little research investigating the effect of transient vibration of the skull. Therefore, the overall goal of the proposed research is to gain a better understanding of the role of transient vibration of the skull on concussion. This goal will be achieved by addressing three research objectives. First, an automatic MRI skull and brain segmentation technique is developed. Due to bone's weak magnetic resonance signal, MRI scans struggle with differentiating bone tissue from other structures. One of the most important components for a successful segmentation is high-quality ground truth labels. Therefore, we introduce a deep learning framework for skull segmentation where the ground truth labels are created from CT imaging using the standard tessellation language (STL). Furthermore, as the brain region will be important for future work, we explore a new initialization concept for the convolutional neural network (CNN) based on orthogonal moments to improve brain segmentation in MRI. Second, a novel 2D and 3D automatic method to align the facial skeleton is introduced. An important aspect of further impact analysis is the ability to precisely simulate the same point of impact on multiple bone models. To perform this task, the skull must be precisely aligned in all anatomical planes. Therefore, we introduce a 2D/3D technique to align the facial skeleton that was initially developed for automatically calculating the craniofacial symmetry midline. In the 2D version, the entire concept of using cephalometric landmarks and manual image grid alignment to construct the training dataset was introduced. Then, this concept was extended to a 3D version where the coronal and transverse planes are aligned using a CNN approach. As the alignment in the sagittal plane is still undefined, a new alignment based on these techniques will be created to align the sagittal plane using the Frankfort plane as a framework. Finally, the resonant frequencies of multiple skulls are assessed to determine how the skull resonant frequency vibrations propagate into the brain tissue. After applying material properties and a mesh to the skull, modal analysis is performed to assess the skull's natural frequencies. Finally, theories will be raised regarding how skull geometry, such as shape and thickness, and vibration relate to brain tissue injury, which may result in concussive injury.

Summary for Lay Audience: A concussion is a traumatic brain injury usually caused by a direct or indirect blow to the head that affects brain function. As the maximum mechanical impedance of the brain tissue occurs at 450±50 Hz, skull resonant frequencies may play an important role in the propagation of this vibration into the brain tissue. The overall goal of the proposed research is to gain a better understanding of the role of transient vibration of the skull on concussion. This goal will be achieved by addressing three research objectives: I) develop an automatic method to segment/extract the skull and brain from magnetic resonance imaging (MRI), II) create a novel 2D and 3D automatic method to align the facial skeleton, and III) identify the skull resonant frequencies and raise the theory of how these vibrations may propagate into brain tissue. For objective 1, 58 MRI and their respective computed tomography (CT) scans were used to create a convolutional neural network framework for skull and brain segmentation in MRI. Moreover, an invariant moment kernel was introduced to improve the brain segmentation accuracy in MRI. For objective 2, a 2D and 3D technique for automatically calculating the craniofacial symmetry midline from head CT scans using deep learning techniques was used to precisely align the facial skeleton for future impact analysis. In objective 3, several segmented skulls were tested to identify their natural resonant frequencies. Those with a resonant frequency of 450±50 Hz were selected to improve understanding of how their shapes and thickness may help the vibration propagate deeply into the brain tissue. The results from this study will improve our understanding of the role of transient vibration of the skull on concussion.

Brain Tumor Segmentation Using Non-local Mask R-CNN and Single Model Ensemble

  • Dai, Zhenzhen
  • Wen, Ning
  • Carver, Eric
2022 Conference Paper, cited 0 times
Website
Gliomas are the most common primary malignant brain tumors. Accurate segmentation and quantitative analysis of brain tumors are critical for diagnosis and treatment planning. Automatically segmenting tumors and their subregions is a challenging task, as demonstrated by the annual Multimodal Brain Tumor Segmentation Challenge (BraTS). In order to tackle this challenging task, we trained a 2D non-local Mask R-CNN with 814 patients from the BraTS 2021 training dataset. Our performance on another 417 patients from the BraTS 2021 training dataset was as follows: DSC of 0.784, 0.851 and 0.817; sensitivity of 0.775, 0.844 and 0.825 for the enhancing tumor, whole tumor and tumor core, respectively. By applying the focal loss function, our method achieved a DSC of 0.775, 0.885 and 0.829, as well as sensitivity of 0.757, 0.877 and 0.801. We also experimented with data distillation to ensemble a single model's predictions. Our refined results were DSC of 0.797, 0.884 and 0.833; sensitivity of 0.820, 0.855 and 0.820.

Self-training for Brain Tumour Segmentation with Uncertainty Estimation and Biophysics-Guided Survival Prediction

  • Dai, Chengliang
  • Wang, Shuo
  • Raynaud, Hadrien
  • Mo, Yuanhan
  • Angelini, Elsa
  • Guo, Yike
  • Bai, Wenjia
2021 Book Section, cited 0 times
Gliomas are among the most common types of malignant brain tumours in adults. Given the intrinsic heterogeneity of gliomas, the multi-parametric magnetic resonance imaging (mpMRI) is the most effective technique for characterising gliomas and their sub-regions. Accurate segmentation of the tumour sub-regions on mpMRI is of clinical significance, which provides valuable information for treatment planning and survival prediction. Thanks to the recent developments on deep learning, the accuracy of automated medical image segmentation has improved significantly. In this paper, we leverage the widely used attention and self-training techniques to conduct reliable brain tumour segmentation and uncertainty estimation. Based on the segmentation result, we present a biophysics-guided prognostic model for the prediction of overall survival. Our method of uncertainty estimation has won the second place of the MICCAI 2020 BraTS Challenge.

Superpixel-based deep convolutional neural networks and active contour model for automatic prostate segmentation on 3D MRI scans

  • da Silva, Giovanni L F
  • Diniz, Petterson S
  • Ferreira, Jonnison L
  • Franca, Joao V F
  • Silva, Aristofanes C
  • de Paiva, Anselmo C
  • de Cavalcanti, Elton A A
2020 Journal Article, cited 0 times
Website
Automatic and reliable prostate segmentation is an essential prerequisite for assisting diagnosis and treatment, such as guiding biopsy procedures and radiation therapy. Nonetheless, automatic segmentation is challenging due to the lack of clear prostate boundaries, owing to the similar appearance of the prostate and surrounding tissues, and due to the wide variation in size and shape among different patients, ascribed to pathological changes or different image resolutions. In this regard, the state of the art includes methods based on a probabilistic atlas, active contour models, and deep learning techniques. However, these techniques have limitations that need to be addressed, such as requiring MRI scans with the same spatial resolution, initialization of the prostate region with well-defined contours, and a set of hyperparameters of deep learning techniques determined manually, respectively. Therefore, this paper proposes an automatic and novel coarse-to-fine segmentation method for prostate 3D MRI scans. The coarse segmentation step combines local texture and spatial information using the Intrinsic Manifold Simple Linear Iterative Clustering algorithm and a probabilistic atlas in a deep convolutional neural network model jointly with the particle swarm optimization algorithm to classify prostate and non-prostate tissues. Then, the fine segmentation uses the 3D Chan-Vese active contour model to obtain the final prostate surface. The proposed method has been evaluated on the Prostate 3T and PROMISE12 databases, presenting a Dice similarity coefficient of 84.86%, relative volume difference of 14.53%, sensitivity of 90.73%, specificity of 99.46%, and accuracy of 99.11%. Experimental results demonstrate the high performance potential of the proposed method compared to previously published methods.

Faber: A Hardware/SoftWare Toolchain for Image Registration

  • D'Arnese, Eleonora
  • Conficconi, Davide
  • Sozzo, Emanuele Del
  • Fusco, Luigi
  • Sciuto, Donatella
  • Santambrogio, Marco Domenico
2023 Journal Article, cited 0 times
Website
Image registration is a well-defined computation paradigm widely applied to align one or more images to a target image. This paradigm, which builds upon three main components, is particularly compute-intensive and represents many image processing pipelines’ bottlenecks. State-of-the-art solutions leverage hardware acceleration to speed up image registration, but they are usually limited to implementing a single component. We present Faber, an open-source HW/SW CAD toolchain tailored to image registration. The Faber toolchain comprises HW/SW highly-tunable registration components, supports users with different expertise in building custom pipelines, and automates the design process. In this direction, Faber provides both default settings for entry-level users and latency and resource models to guide HW experts in customizing the different components. Finally, Faber achieves from 1.5× to 54× in speedup and from 2× to 177× in energy efficiency against state-of-the-art tools on a Xeon Gold.

Radiogenomics of glioblastoma: a pilot multi-institutional study to investigate a relationship between tumor shape features and tumor molecular subtype

  • Czarnek, Nicholas M
  • Clark, Kal
  • Peters, Katherine B
  • Collins, Leslie M
  • Mazurowski, Maciej A
2016 Conference Proceedings, cited 3 times
Website

Algorithmic three-dimensional analysis of tumor shape in MRI improves prognosis of survival in glioblastoma: a multi-institutional study

  • Czarnek, Nicholas
  • Clark, Kal
  • Peters, Katherine B
  • Mazurowski, Maciej A
Journal of Neuro-Oncology 2017 Journal Article, cited 15 times
Website
In this retrospective, IRB-exempt study, we analyzed data from 68 patients diagnosed with glioblastoma (GBM) in two institutions and investigated the relationship between tumor shape, quantified using algorithmic analysis of magnetic resonance images, and survival. Each patient's Fluid Attenuated Inversion Recovery (FLAIR) abnormality and enhancing tumor were manually delineated, and tumor shape was analyzed by automatic computer algorithms. Five features were automatically extracted from the images to quantify the extent of irregularity in tumor shape in two and three dimensions. Univariate Cox proportional hazard regression analysis was performed to determine how prognostic each feature was of survival. Kaplan Meier analysis was performed to illustrate the prognostic value of each feature. To determine whether the proposed quantitative shape features have additional prognostic value compared with standard clinical features, we controlled for tumor volume, patient age, and Karnofsky Performance Score (KPS). The FLAIR-based bounding ellipsoid volume ratio (BEVR), a 3D complexity measure, was strongly prognostic of survival, with a hazard ratio of 0.36 (95% CI 0.20-0.65), and remained significant in regression analysis after controlling for other clinical factors (P = 0.0061). Three enhancing-tumor based shape features were prognostic of survival independently of clinical factors: BEVR (P = 0.0008), margin fluctuation (P = 0.0013), and angular standard deviation (P = 0.0078). Algorithmically assessed tumor shape is statistically significantly prognostic of survival for patients with GBM independently of patient age, KPS, and tumor volume. This shows promise for extending the utility of MR imaging in treatment of GBM patients.
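Of the shape features named, the bounding ellipsoid volume ratio is the easiest to approximate: divide the tumour volume by the volume of an ellipsoid enclosing it, so low values flag irregular shapes. The sketch below uses a PCA-aligned enclosing ellipsoid as a simple surrogate; the paper's exact construction may differ.

```python
# Hedged sketch of a bounding ellipsoid volume ratio (BEVR) on a toy mask.
import numpy as np

def bounding_ellipsoid_volume_ratio(mask, spacing=(1.0, 1.0, 1.0)):
    pts = np.argwhere(mask) * np.asarray(spacing)
    pts = pts - pts.mean(axis=0)
    _, _, axes = np.linalg.svd(pts, full_matrices=False)
    proj = pts @ axes.T                          # PCA-aligned coordinates
    semi = np.abs(proj).max(axis=0) + 1e-9       # initial semi-axes
    scale = np.sqrt(((proj / semi) ** 2).sum(axis=1)).max()
    semi = semi * scale                          # ellipsoid now encloses mask
    v_ellipsoid = 4.0 / 3.0 * np.pi * np.prod(semi)
    v_tumor = mask.sum() * np.prod(spacing)
    return v_tumor / v_ellipsoid                 # low ratio = irregular shape

mask = np.zeros((40, 40, 40), bool)
mask[10:30, 12:28, 15:25] = True                 # toy "tumour"
print(bounding_ellipsoid_volume_ratio(mask))
```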

Tumor Transcriptome Reveals High Expression of IL-8 in Non-Small Cell Lung Cancer Patients with Low Pectoralis Muscle Area and Reduced Survival

  • Cury, Sarah Santiloni
  • de Moraes, Diogo
  • Freire, Paula Paccielli
  • de Oliveira, Grasieli
  • Marques, Douglas Venancio Pereira
  • Fernandez, Geysson Javier
  • Dal-Pai-Silva, Maeli
  • Hasimoto, Erica Nishida
  • Dos Reis, Patricia Pintor
  • Rogatto, Silvia Regina
  • Carvalho, Robson Francisco
Cancers (Basel) 2019 Journal Article, cited 1 times
Website
Cachexia is a syndrome characterized by an ongoing loss of skeletal muscle mass associated with poor patient prognosis in non-small cell lung cancer (NSCLC). However, prognostic cachexia biomarkers in NSCLC are unknown. Here, we analyzed computed tomography (CT) images and tumor transcriptome data to identify potentially secreted cachexia biomarkers (PSCB) in NSCLC patients with low-muscularity. We integrated radiomics features (pectoralis muscle, sternum, and tenth thoracic (T10) vertebra) from CT of 89 NSCLC patients, which allowed us to identify an index for screening muscularity. Next, a tumor transcriptomic-based secretome analysis from these patients (discovery set) was evaluated to identify potential cachexia biomarkers in patients with low-muscularity. The prognostic value of these biomarkers for predicting recurrence and survival outcome was confirmed using expression data from eight lung cancer datasets (validation set). Finally, C2C12 myoblasts differentiated into myotubes were used to evaluate the ability of the selected biomarker, interleukin (IL)-8, in inducing muscle cell atrophy. We identified 75 over-expressed transcripts in patients with low-muscularity, which included IL-6, CSF3, and IL-8. Also, we identified NCAM1, CNTN1, SCG2, CADM1, IL-8, NPTX1, and APOD as PSCB in the tumor secretome. These PSCB were capable of distinguishing worse and better prognosis (recurrence and survival) in NSCLC patients. IL-8 was confirmed as a predictor of worse prognosis in all validation sets. In vitro assays revealed that IL-8 promoted C2C12 myotube atrophy. Tumors from low-muscularity patients presented a set of upregulated genes encoding for secreted proteins, including pro-inflammatory cytokines that predict worse overall survival in NSCLC. Among these upregulated genes, IL-8 expression in NSCLC tissues was associated with worse prognosis, and the recombinant IL-8 was capable of triggering atrophy in C2C12 myotubes.

Quality control and whole-gland, zonal and lesion annotations for the PROSTATEx challenge public dataset

  • Cuocolo, R.
  • Stanzione, A.
  • Castaldo, A.
  • De Lucia, D. R.
  • Imbriaco, M.
Eur J Radiol 2021 Journal Article, cited 0 times
Website
PURPOSE: Radiomic features are promising quantitative parameters that can be extracted from medical images and employed to build machine learning predictive models. However, generalizability is a key concern, encouraging the use of public image datasets. We performed a quality assessment of the PROSTATEx training dataset and provide publicly available lesion, whole-gland, and zonal anatomy segmentation masks. METHOD: Two radiology residents and two experienced board-certified radiologists reviewed the 204 prostate MRI scans (330 lesions) included in the training dataset. The quality of provided lesion coordinate was scored using the following scale: 0 = perfectly centered, 1 = within lesion, 2 = within the prostate without lesion, 3 = outside the prostate. All clearly detectable lesions were segmented individually slice-by-slice on T2-weighted and apparent diffusion coefficient images. With the same methodology, volumes of interest including the whole gland, transition, and peripheral zones were annotated. RESULTS: Of the 330 available lesion identifiers, 3 were duplicates (1%). From the remaining, 218 received score = 0, 74 score = 1, 31 score = 2 and 4 score = 3. Overall, 299 lesions were verified and segmented. Independently of lesion coordinate score and other issues (e.g., lesion coordinates falling outside DICOM images, artifacts etc.), the whole prostate gland and zonal anatomy were also manually annotated for all cases. CONCLUSION: While several issues were encountered evaluating the original PROSTATEx dataset, the improved quality and availability of lesion, whole-gland and zonal segmentations will increase its potential utility as a common benchmark in prostate MRI radiomics.

Deep Learning Whole-Gland and Zonal Prostate Segmentation on a Public MRI Dataset

  • Cuocolo, Renato
  • Comelli, Albert
  • Stefano, Alessandro
  • Benfante, Viviana
  • Dahiya, Navdeep
  • Stanzione, Arnaldo
  • Castaldo, Anna
  • De Lucia, Davide Raffaele
  • Yezzi, Anthony
  • Imbriaco, Massimo
Journal of Magnetic Resonance Imaging 2021 Journal Article, cited 0 times
Website
Background: Prostate volume, as determined by magnetic resonance imaging (MRI), is a useful biomarker for distinguishing between benign and malignant pathology and can be used either alone or combined with other parameters such as prostate-specific antigen. Purpose: This study compared different deep learning methods for whole-gland and zonal prostate segmentation. Study Type: Retrospective. Population: A total of 204 patients (train/test = 99/105) from the PROSTATEx public dataset. Field Strength/Sequence: 3 T, TSE T2-weighted. Assessment: Four operators performed manual segmentation of the whole gland, central zone + anterior stroma + transition zone (TZ), and peripheral zone (PZ). U-Net, efficient neural network (ENet), and efficient residual factorized ConvNet (ERFNet) were trained and tuned on the training data through 5-fold cross-validation to segment the whole gland and TZ separately, while PZ automated masks were obtained by subtraction of the first two. Statistical Tests: Networks were evaluated on the test set using various accuracy metrics, including the Dice similarity coefficient (DSC). Model DSC was compared in both the training and test sets using the analysis of variance (ANOVA) test and post hoc tests. Parameter number, disk size, and training and inference times determined network computational complexity and were also used to assess model performance differences. P < 0.05 was selected to indicate statistical significance. Results: The best DSC (P < 0.05) in the test set was achieved by ENet: 91% ± 4% for the whole gland, 87% ± 5% for the TZ, and 71% ± 8% for the PZ. U-Net and ERFNet obtained, respectively, 88% ± 6% and 87% ± 6% for the whole gland, 86% ± 7% and 84% ± 7% for the TZ, and 70% ± 8% and 65% ± 8% for the PZ. Training and inference time were lowest for ENet. Data Conclusion: Deep learning networks can accurately segment the prostate using T2-weighted images. Evidence Level: 4. Technical Efficacy: Stage 2.
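The headline metric throughout is the Dice similarity coefficient; for reference, a minimal implementation over binary numpy masks:

```python
# Minimal Dice similarity coefficient over binary masks (toy example).
import numpy as np

def dice(a, b, eps=1e-8):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

pred = np.zeros((64, 64), bool); pred[10:40, 10:40] = True
ref = np.zeros((64, 64), bool); ref[15:45, 12:42] = True
print(f"DSC = {dice(pred, ref):.3f}")
```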

Prognostic Imaging Biomarkers in Glioblastoma: Development and Independent Validation on the Basis of Multiregion and Quantitative Analysis of MR Images

  • Cui, Yi
  • Tha, Khin Khin
  • Terasaka, Shunsuke
  • Yamaguchi, Shigeru
  • Wang, Jeff
  • Kudo, Kohsuke
  • Xing, Lei
  • Shirato, Hiroki
  • Li, Ruijiang
Radiology 2015 Journal Article, cited 45 times
Website
PURPOSE: To develop and independently validate prognostic imaging biomarkers for predicting survival in patients with glioblastoma on the basis of multiregion quantitative image analysis. MATERIALS AND METHODS: This retrospective study was approved by the local institutional review board, and informed consent was waived. A total of 79 patients from two independent cohorts were included. The discovery and validation cohorts consisted of 46 and 33 patients with glioblastoma from the Cancer Imaging Archive (TCIA) and the local institution, respectively. Preoperative T1-weighted contrast material-enhanced and T2-weighted fluid-attenuation inversion recovery magnetic resonance (MR) images were analyzed. For each patient, we semiautomatically delineated the tumor and performed automated intratumor segmentation, dividing the tumor into spatially distinct subregions that demonstrate coherent intensity patterns across multiparametric MR imaging. Within each subregion and for the entire tumor, we extracted quantitative imaging features, including those that fully capture the differential contrast of multimodality MR imaging. A multivariate sparse Cox regression model was trained by using TCIA data and tested on the validation cohort. RESULTS: The optimal prognostic model identified five imaging biomarkers that quantified tumor surface area and intensity distributions of the tumor and its subregions. In the validation cohort, our prognostic model achieved a concordance index of 0.67 and significant stratification of overall survival by using the log-rank test (P = .018), which outperformed conventional prognostic factors, such as age (concordance index, 0.57; P = .389) and tumor volume (concordance index, 0.59; P = .409). CONCLUSION: The multiregion analysis presented here establishes a general strategy to effectively characterize intratumor heterogeneity manifested at multimodality imaging and has the potential to reveal useful prognostic imaging biomarkers in glioblastoma.
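The modelling step, a multivariate sparse Cox regression over imaging features, can be reproduced in spirit with off-the-shelf survival tools. Below is a hedged lifelines sketch on synthetic data; the penalizer value, feature names, and column names are placeholders, not the study's values.

```python
# Hedged sketch: L1-penalised (sparse) Cox regression over imaging features.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(46, 5)),
                  columns=[f"imaging_feature_{i}" for i in range(5)])
df["survival_months"] = rng.exponential(20, size=46)
df["event_observed"] = rng.integers(0, 2, size=46)

cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)   # pure L1 -> sparse betas
cph.fit(df, duration_col="survival_months", event_col="event_observed")
print(cph.params_)                               # many coefficients near 0
```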

Volume of high-risk intratumoral subregions at multi-parametric MR imaging predicts overall survival and complements molecular analysis of glioblastoma

  • Cui, Yi
  • Ren, Shangjie
  • Tha, Khin Khin
  • Wu, Jia
  • Shirato, Hiroki
  • Li, Ruijiang
European Radiology 2017 Journal Article, cited 10 times
Website

Performance of a deep learning-based lung nodule detection system as an alternative reader in a Chinese lung cancer screening program

  • Cui, X.
  • Zheng, S.
  • Heuvelmans, M. A.
  • Du, Y.
  • Sidorenkov, G.
  • Fan, S.
  • Li, Y.
  • Xie, Y.
  • Zhu, Z.
  • Dorrius, M. D.
  • Zhao, Y.
  • Veldhuis, R. N. J.
  • de Bock, G. H.
  • Oudkerk, M.
  • van Ooijen, P. M. A.
  • Vliegenthart, R.
  • Ye, Z.
Eur J Radiol 2022 Journal Article, cited 0 times
Website
OBJECTIVE: To evaluate the performance of a deep learning-based computer-aided detection (DL-CAD) system in a Chinese low-dose CT (LDCT) lung cancer screening program. MATERIALS AND METHODS: One-hundred-and-eighty individuals with a lung nodule on their baseline LDCT lung cancer screening scan were randomly mixed with screenees without nodules in a 1:1 ratio (total: 360 individuals). All scans were assessed by double reading and subsequently processed by an academic DL-CAD system. The findings of double reading and the DL-CAD system were then evaluated by two senior radiologists to derive the reference standard. The detection performance was evaluated by the Free Response Operating Characteristic curve, sensitivity and false-positive (FP) rate. The senior radiologists categorized nodules according to nodule diameter, type (solid, part-solid, non-solid) and Lung-RADS. RESULTS: The reference standard consisted of 262 nodules ≥ 4 mm in 196 individuals; 359 findings were considered false positives. The DL-CAD system achieved a sensitivity of 90.1% with 1.0 FP/scan for detection of lung nodules regardless of size or type, whereas double reading had a sensitivity of 76.0% with 0.04 FP/scan (P = 0.001). The sensitivity for detection of nodules ≥ 4 to ≤ 6 mm was significantly higher with DL-CAD than with double reading (86.3% vs. 58.9%, respectively; P = 0.001). Sixty-three nodules were only identified by the DL-CAD system, and 27 nodules were only found by double reading. The DL-CAD system reached similar performance compared to double reading in Lung-RADS 3 (94.3% vs. 90.0%, P = 0.549) and Lung-RADS 4 nodules (100.0% vs. 97.0%, P = 1.000), but showed a higher sensitivity in Lung-RADS 2 (86.2% vs. 65.4%, P < 0.001). CONCLUSIONS: The DL-CAD system can accurately detect pulmonary nodules on LDCT, with an acceptable false-positive rate of 1 nodule per scan, and has higher detection performance than double reading. This DL-CAD system may assist radiologists in nodule detection in LDCT lung cancer screening.
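The two reported operating measures, sensitivity and false positives per scan, reduce to simple bookkeeping once a hit criterion is fixed. A sketch under an assumed distance-based matching rule (the 10 mm radius is chosen arbitrarily here):

```python
# Hedged sketch: per-scan detection metrics from predicted and true nodule
# centres; a prediction is a hit if within hit_mm of an unmatched truth.
import numpy as np

def detection_metrics(preds_per_scan, truths_per_scan, hit_mm=10.0):
    tp = fp = n_true = 0
    for preds, truths in zip(preds_per_scan, truths_per_scan):
        matched = set()
        n_true += len(truths)
        for p in preds:
            d = [np.linalg.norm(np.subtract(p, t)) for t in truths]
            j = int(np.argmin(d)) if d else -1
            if d and d[j] <= hit_mm and j not in matched:
                tp += 1; matched.add(j)
            else:
                fp += 1                      # unmatched or duplicate hit
    return tp / max(n_true, 1), fp / len(preds_per_scan)

sens, fp_rate = detection_metrics([[(10, 10, 10)], []],
                                  [[(12, 10, 9)], []])
print(f"sensitivity={sens:.2f}, FP/scan={fp_rate:.2f}")
```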

Primary lung tumor segmentation from PET–CT volumes with spatial–topological constraint

  • Cui, Hui
  • Wang, Xiuying
  • Lin, Weiran
  • Zhou, Jianlong
  • Eberl, Stefan
  • Feng, Dagan
  • Fulham, Michael
International Journal of Computer Assisted Radiology and Surgery 2016 Journal Article, cited 14 times
Website

Predicting the ISUP grade of clear cell renal cell carcinoma with multiparametric MR and multiphase CT radiomics

  • Cui, Enming
  • Li, Zhuoyong
  • Ma, Changyi
  • Li, Qing
  • Lei, Yi
  • Lan, Yong
  • Yu, Juan
  • Zhou, Zhipeng
  • Li, Ronggang
  • Long, Wansheng
  • Lin, Fan
Eur Radiol 2020 Journal Article, cited 0 times
Website
OBJECTIVE: To investigate externally validated magnetic resonance (MR)-based and computed tomography (CT)-based machine learning (ML) models for grading clear cell renal cell carcinoma (ccRCC). MATERIALS AND METHODS: Patients with pathologically proven ccRCC in 2009-2018 were retrospectively included for model development and internal validation; patients from another independent institution and The Cancer Imaging Archive dataset were included for external validation. Features were extracted from T1-weighted, T2-weighted, corticomedullary-phase (CMP), and nephrographic-phase (NP) MR as well as precontrast-phase (PCP), CMP, and NP CT. CatBoost was used for ML-model investigation. The reproducibility of texture features was assessed using intraclass correlation coefficient (ICC). Accuracy (ACC) was used for ML-model performance evaluation. RESULTS: Twenty external and 440 internal cases were included. Among 368 and 276 texture features from MR and CT, 322 and 250 features with good to excellent reproducibility (ICC ≥ 0.75) were included for ML-model development. The best MR- and CT-based ML models satisfactorily distinguished high- from low-grade ccRCCs in internal (MR-ACC = 73% and CT-ACC = 79%) and external (MR-ACC = 74% and CT-ACC = 69%) validation. Compared to single-sequence or single-phase images, the classifiers based on all-sequence MR (71% to 73% in internal and 64% to 74% in external validation) and all-phase CT (77% to 79% in internal and 61% to 69% in external validation) images had significant increases in ACC. CONCLUSIONS: MR- and CT-based ML models are valuable noninvasive techniques for discriminating high- from low-grade ccRCCs, and multiparameter MR- and multiphase CT-based classifiers are potentially superior to those based on single-sequence or single-phase imaging. KEY POINTS: * Both the MR- and CT-based machine learning models are reliable predictors for differentiating high- from low-grade ccRCCs. * ML models based on multiparameter MR sequences and multiphase CT images potentially outperform those based on single-sequence or single-phase images in ccRCC grading.
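The reproducibility screen (keeping features with ICC ≥ 0.75) can be written out directly from the one-way random-effects definition; the sketch below uses one common single-rater form, ICC(1,1), on toy repeated measurements, which may differ from the exact ICC variant the authors used.

```python
# Hedged sketch: one-way random-effects ICC(1,1) per radiomic feature,
# keeping features with ICC >= 0.75 (toy repeated extractions).
import numpy as np

def icc_1_1(x):
    """x: (n_subjects, k_measurements) for one radiomic feature."""
    n, k = x.shape
    grand = x.mean()
    msb = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(1)
subject_effect = rng.normal(size=(30, 1))
feature = subject_effect + 0.3 * rng.normal(size=(30, 2))
print("keep feature:", icc_1_1(feature) >= 0.75)
```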

StaticCodeCT: single coded aperture tensorial X-ray CT

  • Cuadros, A. P.
  • Ma, X.
  • Restrepo, C. M.
  • Arce, G. R.
Opt Express 2021 Journal Article, cited 0 times
Website
Coded aperture X-ray CT (CAXCT) is a new low-dose imaging technology that promises far-reaching benefits in industrial and clinical applications. It places varying coded apertures (CAs) in front of the X-ray source to partially block the radiation. The ill-posed inverse reconstruction problem is then solved using l1-norm-based iterative reconstruction methods. Unfortunately, to attain high-quality reconstructions, the CA patterns must change in concert with the view angles, making the implementation impractical. This paper proposes a simple yet radically different approach to CAXCT, coined StaticCodeCT, that uses a single static CA in the CT gantry, thus making the imaging system amenable to practical implementations. Rather than using conventional compressed sensing algorithms for recovery, we introduce a new reconstruction framework for StaticCodeCT. Namely, we synthesize the missing measurements using low-rank tensor completion principles that exploit the multi-dimensional data correlation and the low-rank nature of a 3-way tensor formed by stacking the 2D coded CT projections. Then, we use the FDK algorithm to recover the 3D object. Computational experiments using experimental projection measurements exhibit up to 10% gains in the normalized root mean square distance of the reconstruction using the proposed method compared with those attained by alternative low-dose systems.

A comprehensive lung CT landmark pair dataset for evaluating deformable image registration algorithms

  • Criscuolo, E. R.
  • Fu, Y.
  • Hao, Y.
  • Zhang, Z.
  • Yang, D.
Med Phys 2024 Journal Article, cited 0 times
Website
PURPOSE: Deformable image registration (DIR) is a key enabling technology in many diagnostic and therapeutic tasks, but often does not meet the required robustness and accuracy for supporting clinical tasks. This is in large part due to a lack of high-quality benchmark datasets by which new DIR algorithms can be evaluated. Our team was supported by the National Institute of Biomedical Imaging and Bioengineering to develop DIR benchmark dataset libraries for multiple anatomical sites, comprising large numbers of highly accurate landmark pairs on matching blood vessel bifurcations. Here we introduce our lung CT DIR benchmark dataset library, which was developed to improve upon the number and distribution of landmark pairs in current public lung CT benchmark datasets. ACQUISITION AND VALIDATION METHODS: Thirty CT image pairs were acquired from several publicly available repositories as well as the authors' institution with IRB approval. The data processing workflow included multiple steps: (1) The images were denoised. (2) Lungs, airways, and blood vessels were automatically segmented. (3) Bifurcations were directly detected on the skeleton of the segmented vessel tree. (4) Falsely identified bifurcations were filtered out using manually defined rules. (5) A DIR was used to project landmarks detected on the first image onto the second image of the image pair to form landmark pairs. (6) Landmark pairs were manually verified. This workflow resulted in an average of 1262 landmark pairs per image pair. Estimates of the landmark pair target registration error (TRE) using digital phantoms were 0.4 mm +/- 0.3 mm. DATA FORMAT AND USAGE NOTES: The data is published in Zenodo at https://doi.org/10.5281/zenodo.8200423. Instructions for use can be found at https://github.com/deshanyang/Lung-DIR-QA. POTENTIAL APPLICATIONS: The dataset library generated in this work is the largest of its kind to date and will provide researchers with a new and improved set of ground truth benchmarks for quantitatively validating DIR algorithms within the lung.
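The validation quantity, target registration error over landmark pairs, is simple to compute once a DIR's displacement field has been sampled at the fixed-image landmarks; a toy sketch with made-up coordinates in millimetres:

```python
# Minimal sketch of landmark-pair target registration error (TRE).
import numpy as np

def tre_mm(fixed_pts, moving_pts, displacement_at_fixed):
    """TRE per pair: |fixed + DIR displacement - matching moving landmark|."""
    warped = fixed_pts + displacement_at_fixed
    return np.linalg.norm(warped - moving_pts, axis=1)

fixed = np.array([[100.0, 120.0, 40.0], [80.0, 90.0, 55.0]])
moving = np.array([[102.0, 119.0, 41.0], [81.0, 92.0, 54.0]])
disp = np.array([[1.8, -0.9, 1.1], [0.9, 1.7, -0.8]])
errors = tre_mm(fixed, moving, disp)
print(f"mean TRE = {errors.mean():.2f} mm")
```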

Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries 6th International Workshop, BrainLes 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 4, 2020, Revised Selected Papers, Part II

  • Crimi, A.
  • Bakas, S.
2021 Book, cited 0 times

Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries 5th International Workshop, BrainLes 2019, Held in Conjunction with MICCAI 2019, Shenzhen, China, October 17, 2019, Revised Selected Papers, Part I

  • Crimi, A.
  • Bakas, S.
2020 Book, cited 0 times
The two-volume set LNCS 11992 and 11993 constitutes the thoroughly refereed proceedings of the 5th International MICCAI Brainlesion Workshop, BrainLes 2019, the International Multimodal Brain Tumor Segmentation (BraTS) challenge, the Computational Precision Medicine: Radiology-Pathology Challenge on Brain Tumor Classification (CPM-RadPath) challenge, as well as the tutorial session on Tools Allowing Clinical Translation of Image Computing Algorithms (TACTICAL). These were held jointly at the Medical Image Computing for Computer Assisted Intervention Conference, MICCAI, in Shenzhen, China, in October 2019. The revised selected papers presented in these volumes were organized in the following topical sections: brain lesion image analysis (12 selected papers from 32 submissions); brain tumor image segmentation (57 selected papers from 102 submissions); combined MRI and pathology brain tumor classification (4 selected papers from 5 submissions); tools allowing clinical translation of image computing algorithms (2 selected papers from 3 submissions).

Bayesian Kernel Models for Statistical Genetics and Cancer Genomics

  • Crawford, Lorin
2017 Thesis, cited 0 times

Combined Megavoltage and Contrast-Enhanced Radiotherapy as an Intrafraction Motion Management Strategy in Lung SBRT

  • Coronado-Delgado, Daniel A
  • Garnica-Garza, Hector M
Technol Cancer Res Treat 2019 Journal Article, cited 0 times
Website
Using Monte Carlo simulation and a realistic patient model, it is shown that the volume of healthy tissue irradiated at therapeutic doses can be drastically reduced using a combination of standard megavoltage and kilovoltage X-ray beams with a contrast agent previously loaded into the tumor, without the need to reduce standard treatment margins. Four-dimensional computed tomography images of 2 patients with a centrally located and a peripherally located tumor were obtained from a public database and subsequently used to plan robotic stereotactic body radiotherapy treatments. Two modalities are assumed: conventional high-energy stereotactic body radiotherapy and a treatment with contrast agent loaded in the tumor and a kilovoltage X-ray beam replacing the megavoltage beam (contrast-enhanced radiotherapy). For each patient model, 2 planning target volumes were designed: one following the recommendations from either Radiation Therapy Oncology Group (RTOG) 0813 or RTOG 0915 task group depending on the patient model and another with a 2-mm uniform margin determined solely on beam penumbra considerations. The optimized treatments with RTOG margins were imparted to the moving phantom to model the dose distribution that would be obtained as a result of intrafraction motion. Treatment plans are then compared to the plan with the 2-mm uniform margin considered to be the ideal plan. It is shown that even for treatments in which only one-fifth of the total dose is imparted via the contrast-enhanced radiotherapy modality and with the use of standard treatment margins, the resultant absorbed dose distributions are such that the volume of healthy tissue irradiated to high doses is close to what is obtained under ideal conditions.

Evaluation of Semiautomatic and Deep Learning-Based Fully Automatic Segmentation Methods on [18F]FDG PET/CT Images from Patients with Lymphoma: Influence on Tumor Characterization

  • Constantino, C. S.
  • Leocadio, S.
  • Oliveira, F. P. M.
  • Silva, M.
  • Oliveira, C.
  • Castanheira, J. C.
  • Silva, A.
  • Vaz, S.
  • Teixeira, R.
  • Neves, M.
  • Lucio, P.
  • Joao, C.
  • Costa, D. C.
J Digit Imaging 2023 Journal Article, cited 0 times
Website
The objective is to assess the performance of seven semiautomatic and two fully automatic segmentation methods on [18F]FDG PET/CT lymphoma images and evaluate their influence on tumor quantification. All lymphoma lesions identified in 65 whole-body [18F]FDG PET/CT staging images were segmented by two experienced observers using manual and semiautomatic methods. Semiautomatic segmentation using absolute and relative thresholds, k-means and Bayesian clustering, and a self-adaptive configuration (SAC) of k-means and Bayesian was applied. Three state-of-the-art deep learning-based segmentation methods using a 3D U-Net architecture were also applied. One was semiautomatic and two were fully automatic, of which one is publicly available. Dice coefficient (DC) measured segmentation overlap, considering manual segmentation the ground truth. Lymphoma lesions were characterized by 31 features. Intraclass correlation coefficient (ICC) assessed features agreement between different segmentation methods. Nine hundred twenty [18F]FDG-avid lesions were identified. The SAC Bayesian method achieved the highest median intra-observer DC (0.87). Inter-observers' DC was higher for SAC Bayesian than manual segmentation (0.94 vs 0.84, p < 0.001). Semiautomatic deep learning-based median DC was promising (0.83 (Obs1), 0.79 (Obs2)). Threshold-based methods and the publicly available 3D U-Net gave poorer results (0.56 ≤ DC ≤ 0.68). Maximum, mean, and peak standardized uptake values, metabolic tumor volume, and total lesion glycolysis showed excellent agreement (ICC ≥ 0.92) between manual and SAC Bayesian segmentation methods. The SAC Bayesian classifier is more reproducible and produces similar lesion features compared to manual segmentation, giving the best concordant results of all other methods. Deep learning-based segmentation can achieve overall good segmentation results but failed in a few patients, impacting patients' clinical evaluation.
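Since every comparison above hinges on the Dice coefficient, a minimal reference implementation of that metric (the standard formula, not the paper's code) is:

```python
# Dice coefficient DC = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks.
import numpy as np

def dice_coefficient(seg_a, seg_b):
    a, b = np.asarray(seg_a, bool), np.asarray(seg_b, bool)
    denom = a.sum() + b.sum()
    # Two empty masks agree perfectly by convention.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```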

The exceptional responders initiative: feasibility of a National Cancer Institute pilot study

  • Conley, Barbara A
  • Staudt, Lou
  • Takebe, Naoko
  • Wheeler, David A
  • Wang, Linghua
  • Cardenas, Maria F
  • Korchina, Viktoriya
  • Zenklusen, Jean Claude
  • McShane, Lisa M
  • Tricoli, James V
JNCI: Journal of the National Cancer Institute 2021 Journal Article, cited 5 times
Website

Colour adaptive generative networks for stain normalisation of histopathology images

  • Cong, C.
  • Liu, S.
  • Di Ieva, A.
  • Pagnucco, M.
  • Berkovsky, S.
  • Song, Y.
Med Image Anal 2022 Journal Article, cited 0 times
Website
Deep learning has shown its effectiveness in histopathology image analysis, such as pathology detection and classification. However, stain colour variation in Hematoxylin and Eosin (H&E) stained histopathology images poses challenges in effectively training deep learning-based algorithms. To alleviate this problem, stain normalisation methods have been proposed, with most of the recent methods utilising generative adversarial networks (GAN). However, these methods are either trained fully with paired images from the target domain (supervised) or with unpaired images (unsupervised), suffering from either large discrepancy between domains or risks of undertrained/overfitted models when only the target domain images are used for training. In this paper, we introduce a colour adaptive generative network (CAGAN) for stain normalisation which combines both supervised learning from the target domain and unsupervised learning from the source domain. Specifically, we propose a dual-decoder generator and force consistency between their outputs, thus introducing extra supervision which benefits from extra training with source domain images. Moreover, our model is invariant to stain colour variations due to the use of stain colour augmentation. We further implement a histogram loss to ensure the processed images are coloured with the target domain colours regardless of their content differences. Extensive experiments on four public histopathology image datasets including TCGA-IDH, CAMELYON16, CAMELYON17 and BreakHis demonstrate that our proposed method produces high quality stain normalised images which improve the performance of benchmark algorithms by 5% to 10% compared to baselines not using normalisation.
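The stain colour augmentation that makes the model invariant to staining differences can be sketched as a perturbation of the stain channels after colour deconvolution. The version below is one common recipe, assumed for illustration rather than taken from the paper; the perturbation ranges are arbitrary choices.

```python
# Sketch: perturb H&E stain channels so a model sees varied stain colours.
import numpy as np
from skimage.color import rgb2hed, hed2rgb

def stain_colour_augment(rgb, alpha_range=0.05, beta_range=0.01, seed=None):
    """Randomly scale and shift each stain channel of an H&E tile."""
    rng = np.random.default_rng(seed)
    hed = rgb2hed(rgb)                     # deconvolve into H, E, DAB channels
    for c in range(3):
        alpha = rng.uniform(1 - alpha_range, 1 + alpha_range)  # channel scale
        beta = rng.uniform(-beta_range, beta_range)            # channel shift
        hed[..., c] = hed[..., c] * alpha + beta
    return np.clip(hed2rgb(hed), 0.0, 1.0)

tile = np.random.rand(256, 256, 3)         # placeholder for an H&E tile
augmented = stain_colour_augment(tile, seed=0)
```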

A Framework for Customizable FPGA-based Image Registration Accelerators

  • Conficconi, Davide
  • D'Arnese, Eleonora
  • Del Sozzo, Emanuele
  • Sciuto, Donatella
  • Santambrogio, Marco D.
2021 Conference Proceedings, cited 0 times
Website
Image Registration is a highly compute-intensive optimization procedure that determines the geometric transformation to align a floating image to a reference one. Generally, the registration targets are images taken from different time instances, acquisition angles, and/or sensor types. Several methodologies are employed in the literature to address the limiting factors of this class of algorithms, among which hardware accelerators seem the most promising solution to boost performance. However, most hardware implementations are either closed-source or tailored to a specific context, limiting their application to different fields. For these reasons, we propose an open-source hardware-software framework to generate a configurable architecture for the most compute-intensive part of registration algorithms, namely the similarity metric computation. This metric is mutual information, a well-known measure from information theory that is used in several optimization procedures. Through different design parameter configurations, we explore several design choices of our highly-customizable architecture and validate it on multiple FPGAs. We evaluated various architectures against an optimized Matlab implementation on an Intel Xeon Gold, reaching speedups of up to 2.86x, and remarkable performance and power efficiency compared with other state-of-the-art approaches.
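As a plain-software reference for what the accelerator computes (not the FPGA design itself), mutual information between a reference and a floating image can be estimated from their joint histogram; the bin count below is an illustrative parameter.

```python
# Sketch: mutual information MI(R, F) = sum p(r,f) * log(p(r,f) / (p(r) p(f))).
import numpy as np

def mutual_information(reference, floating, bins=256):
    joint, _, _ = np.histogram2d(reference.ravel(), floating.ravel(), bins=bins)
    p_rf = joint / joint.sum()                      # joint probability
    p_r = p_rf.sum(axis=1, keepdims=True)           # marginal of reference
    p_f = p_rf.sum(axis=0, keepdims=True)           # marginal of floating
    nz = p_rf > 0                                   # avoid log(0)
    return float((p_rf[nz] * np.log(p_rf[nz] / (p_r @ p_f)[nz])).sum())
```

An optimizer maximizes this metric over candidate transformations; the joint-histogram accumulation is the kind of inner loop such accelerators target.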

Early prediction of neoadjuvant chemotherapy response by exploiting a transfer learning approach on breast DCE-MRIs

  • Comes, M. C.
  • Fanizzi, A.
  • Bove, S.
  • Didonna, V.
  • Diotaiuti, S.
  • La Forgia, D.
  • Latorre, A.
  • Martinelli, E.
  • Mencattini, A.
  • Nardone, A.
  • Paradiso, A. V.
  • Ressa, C. M.
  • Tamborra, P.
  • Lorusso, V.
  • Massafra, R.
2021 Journal Article, cited 2 times
Website
The dynamic contrast-enhanced MR imaging plays a crucial role in evaluating the effectiveness of neoadjuvant chemotherapy (NAC) even from its early stages through the prediction of the final pathological complete response (pCR). In this study, we proposed a transfer learning approach to predict if a patient achieved pCR (pCR) or did not (non-pCR) by exploiting, separately or in combination, pre-treatment and early-treatment exams from the I-SPY1 TRIAL public database. First, low-level features, i.e., features related to the local structure of the image, were automatically extracted by a pre-trained convolutional neural network (CNN), avoiding manual feature extraction. Next, an optimal set of the most stable features was detected and then used to design an SVM classifier. A first subset of patients, called the fine-tuning dataset (30 pCR; 78 non-pCR), was used to perform the optimal choice of features. A second subset not involved in the feature selection process was employed as an independent test (7 pCR; 19 non-pCR) to validate the model. By combining the optimal features extracted from both pre-treatment and early-treatment exams with some clinical features, i.e., ER, PgR, HER2 and molecular subtype, an accuracy of 91.4% and 92.3%, and an AUC value of 0.93 and 0.90, were returned on the fine-tuning dataset and the independent test, respectively. Overall, the low-level CNN features have an important role in the early evaluation of the NAC efficacy by predicting pCR. The proposed model represents a first effort towards the development of a clinical support tool for an early prediction of pCR to NAC.
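The pipeline described, features from a pre-trained CNN feeding an SVM, can be sketched as follows. The backbone choice, layer cut, and input shape are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: pre-trained CNN as a fixed feature extractor + SVM classifier.
import torch
import torchvision.models as models
from sklearn.svm import SVC

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1])
feature_extractor.eval()

@torch.no_grad()
def extract_features(batch):
    """batch: (N, 3, 224, 224) tensors of MRI slices -> (N, 512) features."""
    return feature_extractor(batch).flatten(1).numpy()

# Placeholder data: y = 1 for pCR, 0 for non-pCR.
X_train = torch.randn(8, 3, 224, 224)
y_train = [0, 1, 0, 0, 1, 0, 1, 0]
clf = SVC(kernel="linear").fit(extract_features(X_train), y_train)
```

In the study, a stability-based feature selection step sits between extraction and the SVM; that step is omitted here for brevity.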

DR-Unet104 for Multimodal MRI Brain Tumor Segmentation

  • Colman, Jordan
  • Zhang, Lei
  • Duan, Wenting
  • Ye, Xujiong
2021 Book Section, cited 11 times
Website
In this paper we propose a 2D deep residual Unet with 104 convolutional layers (DR-Unet104) for lesion segmentation in brain MRIs. We make multiple additions to the Unet architecture, including adding the ‘bottleneck’ residual block to the Unet encoder and adding dropout after each convolution block stack. We verified the effect of including dropout regularization with a small rate (e.g. 0.2) on the architecture, and found that a dropout of 0.2 improved the overall performance compared to no dropout or a dropout of 0.5. We evaluated the proposed architecture as part of the Multimodal Brain Tumor Segmentation (BraTS) 2020 Challenge and compared our method to DeepLabV3+ with a ResNet-V2–152 backbone. We found the DR-Unet104 achieved mean Dice similarity coefficients of 0.8862, 0.6756 and 0.6721 on the validation data for whole tumor, enhancing tumor and tumor core respectively, an overall improvement on the 0.8770, 0.65242 and 0.68134 achieved by DeepLabV3+. Our method produced a final mean DSC of 0.8673, 0.7514 and 0.7983 on whole tumor, enhancing tumor and tumor core on the challenge’s testing data. We produce a competitive lesion segmentation architecture, despite only using 2D convolutions, with the added benefit that it can be used on lower-power computers than a 3D architecture. The source code and trained model for this work is openly available at https://github.com/jordan-colman/DR-Unet104.
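The two architectural additions named above can be rendered in a few lines. The sketch below (2D convolutions, assumed channel sizes) shows a 'bottleneck' residual block with dropout of rate 0.2 after the block stack; it is a schematic reading of the description, not the released model (see the linked repository for that).

```python
# Sketch: bottleneck residual block with post-stack dropout (rate 0.2).
import torch
import torch.nn as nn

class BottleneckResBlock(nn.Module):
    def __init__(self, channels, bottleneck=4, p_drop=0.2):
        super().__init__()
        mid = channels // bottleneck          # squeezed 'bottleneck' width
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, 1), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, 3, padding=1), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, 1), nn.BatchNorm2d(channels),
        )
        self.drop = nn.Dropout2d(p_drop)      # dropout after the block stack

    def forward(self, x):
        return self.drop(torch.relu(x + self.body(x)))

out = BottleneckResBlock(64)(torch.randn(2, 64, 32, 32))
```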

Glioblastoma: Imaging Genomic Mapping Reveals Sex-specific Oncogenic Associations of Cell Death

  • Colen, Rivka R
  • Wang, Jixin
  • Singh, Sanjay K
  • Gutman, David A
  • Zinn, Pascal O
Radiology 2014 Journal Article, cited 36 times
Website
PURPOSE: To identify the molecular profiles of cell death as defined by necrosis volumes at magnetic resonance (MR) imaging and uncover sex-specific molecular signatures potentially driving oncogenesis and cell death in glioblastoma (GBM). MATERIALS AND METHODS: This retrospective study was HIPAA compliant and had institutional review board approval, with waiver of the need to obtain informed consent. The molecular profiles for 99 patients (30 female patients, 69 male patients) were identified from the Cancer Genome Atlas, and quantitative MR imaging data were obtained from the Cancer Imaging Archive. Volumes of necrosis at MR imaging were extracted. Differential gene expression profiles were obtained in those patients (including male and female patients separately) with high versus low MR imaging volumes of tumor necrosis. Ingenuity Pathway Analysis was used for messenger RNA-microRNA interaction analysis. A histopathologic data set (n = 368; 144 female patients, 224 male patients) was used to validate the MR imaging findings by assessing the amount of cell death. A connectivity map was used to identify therapeutic agents potentially targeting sex-specific cell death in GBM. RESULTS: Female patients showed significantly lower volumes of necrosis at MR imaging than male patients (6821 vs 11 050 mm³, P = .03). Female patients, unlike male patients, with high volumes of necrosis at imaging had significantly shorter survival (6.5 vs 14.5 months, P = .01). Transcription factor analysis suggested that cell death in female patients with GBM is associated with MYC, while that in male patients is associated with TP53 activity. Additionally, a group of therapeutic agents that can potentially be tested to target cell death in a sex-specific manner was identified. CONCLUSION: The results of this study suggest that cell death in GBM may be driven by sex-specific molecular pathways.

Imaging genomic mapping of an invasive MRI phenotype predicts patient outcome and metabolic dysfunction: a TCGA glioma phenotype research group project

  • Colen, Rivka R
  • Vangel, Mark
  • Wang, Jixin
  • Gutman, David A
  • Hwang, Scott N
  • Wintermark, Max
  • Jain, Rajan
  • Jilwan-Nicolas, Manal
  • Chen, James Y
  • Raghavan, Prashant
  • Holder, C. A.
  • Rubin, D.
  • Huang, E.
  • Kirby, J.
  • Freymann, J.
  • Jaffe, C. C.
  • Flanders, A.
  • TCGA Glioma Phenotype Research Group
  • Zinn, P. O.
BMC Medical Genomics 2014 Journal Article, cited 47 times
Website
BACKGROUND: Invasion of tumor cells into adjacent brain parenchyma is a major cause of treatment failure in glioblastoma. Furthermore, invasive tumors are shown to have a different genomic composition and metabolic abnormalities that allow for a more aggressive GBM phenotype and resistance to therapy. We thus seek to identify those genomic abnormalities associated with a highly aggressive and invasive GBM imaging-phenotype. METHODS: We retrospectively identified 104 treatment-naive glioblastoma patients from The Cancer Genome Atlas (TCGA) whom had gene expression profiles and corresponding MR imaging available in The Cancer Imaging Archive (TCIA). The standardized VASARI feature-set criteria were used for the qualitative visual assessments of invasion. Patients were assigned to classes based on the presence (Class A) or absence (Class B) of statistically significant invasion parameters to create an invasive imaging signature; imaging genomic analysis was subsequently performed using GenePattern Comparative Marker Selection module (Broad Institute). RESULTS: Our results show that patients with a combination of deep white matter tracts and ependymal invasion (Class A) on imaging had a significant decrease in overall survival as compared to patients with absence of such invasive imaging features (Class B) (8.7 versus 18.6 months, p < 0.001). Mitochondrial dysfunction was the top canonical pathway associated with Class A gene expression signature. The MYC oncogene was predicted to be the top activation regulator in Class A. CONCLUSION: We demonstrate that MRI biomarker signatures can identify distinct GBM phenotypes associated with highly significant survival differences and specific molecular pathways. This study identifies mitochondrial dysfunction as the top canonical pathway in a very aggressive GBM phenotype. Thus, imaging-genomic analyses may prove invaluable in detecting novel targetable genomic pathways.

NCI Workshop Report: Clinical and Computational Requirements for Correlating Imaging Phenotypes with Genomics Signatures

  • Colen, Rivka
  • Foster, Ian
  • Gatenby, Robert
  • Giger, Mary Ellen
  • Gillies, Robert
  • Gutman, David
  • Heller, Matthew
  • Jain, Rajan
  • Madabhushi, Anant
  • Madhavan, Subha
  • Napel, Sandy
  • Rao, Arvind
  • Saltz, Joel
  • Tatum, James
  • Verhaak, Roeland
  • Whitman, Gary
Transl Oncol 2014 Journal Article, cited 39 times
Website
The National Cancer Institute (NCI) Cancer Imaging Program organized two related workshops on June 26-27, 2013, entitled "Correlating Imaging Phenotypes with Genomics Signatures Research" and "Scalable Computational Resources as Required for Imaging-Genomics Decision Support Systems." The first workshop focused on clinical and scientific requirements, exploring our knowledge of phenotypic characteristics of cancer biological properties to determine whether the field is sufficiently advanced to correlate with imaging phenotypes that underpin genomics and clinical outcomes, and exploring new scientific methods to extract phenotypic features from medical images and relate them to genomics analyses. The second workshop focused on computational methods that explore informatics and computational requirements to extract phenotypic features from medical images and relate them to genomics analyses and improve the accessibility and speed of dissemination of existing NIH resources. These workshops linked clinical and scientific requirements of currently known phenotypic and genotypic cancer biology characteristics with imaging phenotypes that underpin genomics and clinical outcomes. The group generated a set of recommendations to NCI leadership and the research community that encourage and support development of the emerging radiogenomics research field to address short- and longer-term goals in cancer research.

Parallel Implementation of the DRLSE Algorithm

  • Coelho, Daniel Popp
  • Furuie, Sérgio Shiguemi
2020 Conference Proceedings, cited 0 times
Website
The Distance-Regularized Level Set Evolution (DRLSE) algorithm solves many problems that plague the class of Level Set algorithms, but has a significant computational cost and is sensitive to its many parameters. Configuring these parameters is a time-intensive trial-and-error task that limits the usability of the algorithm. This is especially true in the field of Medical Imaging, where it would be otherwise highly suitable. The aim of this work is to develop a parallel implementation of the algorithm using the Compute-Unified Device Architecture (CUDA) for Graphics Processing Units (GPU), which would reduce the computational cost of the algorithm, bringing it to the interactive regime. This would lessen the burden of configuring its parameters and broaden its application. Using consumer-grade hardware, we observed performance gains between roughly 800% and 1700% when comparing against a purely serial C++ implementation we developed, and gains between roughly 180% and 500% when comparing against the MATLAB reference implementation of DRLSE, both depending on input image resolution.

Semantic Model Vector for ImageCLEF2013

  • Codella, Noel
  • Merler, Michele
2014 Report, cited 0 times
Website

Automated Medical Image Modality Recognition by Fusion of Visual and Text Information

  • Codella, Noel
  • Connell, Jonathan
  • Pankanti, Sharath
  • Merler, Michele
  • Smith, John R
2014 Book Section, cited 10 times
Website

Acute Lymphoblastic Leukemia Detection Using Depthwise Separable Convolutional Neural Networks

  • Clinton Jr, Laurence P
  • Somes, Karen M
  • Chu, Yongjun
  • Javed, Faizan
SMU Data Science Review 2020 Journal Article, cited 0 times
Website

Using Machine Learning Applied to Radiomic Image Features for Segmenting Tumour Structures

  • Clifton, Henry
  • Vial, Alanna
  • Miller, Andrew
  • Ritz, Christian
  • Field, Matthew
  • Holloway, Lois
  • Ros, Montserrat
  • Carolan, Martin
  • Stirling, David
2019 Conference Paper, cited 0 times
Website
Lung cancer (LC) was the predicted leading cause of Australian cancer fatalities in 2018 (around 9,200 deaths). Non-Small Cell Lung Cancer (NSCLC) tumours with larger amounts of heterogeneity have been linked to a worse outcome. Medical imaging is widely used in oncology and non-invasively collects data about the whole tumour. The field of radiomics uses these medical images to extract quantitative image features and promises further understanding of the disease at the time of diagnosis, during treatment and in follow up. It is well known that manual and semi-automatic tumour segmentation methods are subject to inter-observer variability, which reduces confidence in the treatment region and extent of disease. This leads to tumour under- and over-estimation, which can impact on treatment outcome and treatment-induced morbidity. This research aims to use radiomic features centred at each pixel to segment the location of the lung tumour on Computed Tomography (CT) scans. To achieve this objective, a Decision Tree (DT) model was trained using sampled CT data from eight patients. The data consisted of 25 pixel-based texture features calculated from four Gray Level Matrices (GLMs) describing the region around each pixel. The model was assessed using an unseen patient through both a confusion matrix and interpretation of the segment. The findings showed that the model accurately (AUROC = 83.9%) predicts tumour location within the test data, concluding that pixel-based textural features likely contribute to segmenting the lung tumour. The prediction displayed a strong representation of the manually segmented Region of Interest (ROI), which is considered the ground truth for the purpose of this research.
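Pixel-centred texture features of this kind are conventionally computed from a grey-level co-occurrence matrix (GLCM) over a window around each pixel. Below is a minimal sketch of that idea using scikit-image; the window size, grey-level count, and the four summary properties are illustrative choices rather than the paper's exact 25-feature set.

```python
# Sketch: per-pixel GLCM texture features for a CT slice.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def pixel_texture_features(ct_slice, half_window=8, levels=32):
    # Quantize the slice into `levels` grey levels for the co-occurrence counts.
    img = np.digitize(ct_slice, np.linspace(ct_slice.min(), ct_slice.max(), levels)) - 1
    h, w = img.shape
    props = ("contrast", "homogeneity", "energy", "correlation")
    features = np.zeros((h, w, len(props)))
    for y in range(half_window, h - half_window):
        for x in range(half_window, w - half_window):
            patch = img[y - half_window:y + half_window,
                        x - half_window:x + half_window].astype(np.uint8)
            glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                                levels=levels, normed=True)
            features[y, x] = [graycoprops(glcm, p).mean() for p in props]
    return features

feats = pixel_texture_features(np.random.rand(64, 64))   # toy slice
```

Each pixel's feature vector would then be a training sample for the decision-tree classifier, with the manual ROI providing the labels.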

The Quantitative Imaging Network: NCI's Historical Perspective and Planned Goals

  • Clarke, Laurence P.
  • Nordstrom, Robert J.
  • Zhang, Huiming
  • Tandon, Pushpa
  • Zhang, Yantian
  • Redmond, George
  • Farahani, Keyvan
  • Kelloff, Gary
  • Henderson, Lori
  • Shankar, Lalitha
  • Deye, James
  • Capala, Jacek
  • Jacobs, Paula
Transl Oncol 2014 Journal Article, cited 0 times
Website

Reproducing 2D breast mammography images with 3D printed phantoms

  • Clark, Matthew
  • Ghammraoui, Bahaa
  • Badal, Andreu
2016 Conference Proceedings, cited 2 times
Website

Transfer learning for auto-segmentation of 17 organs-at-risk in the head and neck: Bridging the gap between institutional and public datasets

  • Clark, B.
  • Hardcastle, N.
  • Johnston, L. A.
  • Korte, J.
Med Phys 2024 Journal Article, cited 0 times
Website
BACKGROUND: Auto-segmentation of organs-at-risk (OARs) in the head and neck (HN) on computed tomography (CT) images is a time-consuming component of the radiation therapy pipeline that suffers from inter-observer variability. Deep learning (DL) has shown state-of-the-art results in CT auto-segmentation, with larger and more diverse datasets showing better segmentation performance. Institutional CT auto-segmentation datasets have been small historically (n < 50) due to the time required for manual curation of images and anatomical labels. Recently, large public CT auto-segmentation datasets (n > 1000 aggregated) have become available through online repositories such as The Cancer Imaging Archive. Transfer learning is a technique applied when training samples are scarce, but a large dataset from a closely related domain is available. PURPOSE: The purpose of this study was to investigate whether a large public dataset could be used in place of an institutional dataset (n > 500), or to augment performance via transfer learning, when building HN OAR auto-segmentation models for institutional use. METHODS: Auto-segmentation models were trained on a large public dataset (public models) and a smaller institutional dataset (institutional models). The public models were fine-tuned on the institutional dataset using transfer learning (transfer models). We assessed both public model generalizability and transfer model performance by comparison with institutional models. Additionally, the effect of institutional dataset size on both transfer and institutional models was investigated. All DL models used a high-resolution, two-stage architecture based on the popular 3D U-Net. Model performance was evaluated using five geometric measures: the dice similarity coefficient (DSC), surface DSC, 95th percentile Hausdorff distance, mean surface distance (MSD), and added path length. RESULTS: For a small subset of OARs (left/right optic nerve, spinal cord, left submandibular), the public models performed significantly better (p < 0.05) than, or showed no significant difference to, the institutional models under most of the metrics examined. For the remaining OARs, the public models were inferior to the institutional models, although performance differences were small (DSC ≤ 0.03, MSD < 0.5 mm) for seven OARs (brainstem, left/right lens, left/right parotid, mandible, right submandibular). The transfer models performed significantly better than the institutional models for seven OARs (brainstem, right lens, left/right optic nerve, left/right parotid, spinal cord) with a small margin of improvement (DSC ≤ 0.02, MSD < 0.4 mm). When numbers of institutional training samples were limited, public and transfer models outperformed the institutional models for most OARs (brainstem, left/right lens, left/right optic nerve, left/right parotid, spinal cord, and left/right submandibular). CONCLUSION: Training auto-segmentation models with public data alone was suitable for a small number of OARs. Using only public data incurred a small performance deficit for most other OARs, when compared with institutional data alone, but may be preferable over time-consuming curation of a large institutional dataset. When a large institutional dataset was available, transfer learning with models pretrained on a large public dataset provided a modest performance improvement for several OARs. When numbers of institutional samples were limited, using the public dataset alone, or as a pretrained model, was beneficial for most OARs.
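The transfer-learning recipe evaluated above, pretraining on the large public dataset and then fine-tuning on institutional cases, follows a standard pattern. The sketch below uses a toy 3D network as a stand-in for the paper's two-stage 3D U-Net; the checkpoint path, learning rate, and label count (17 OARs plus background) are assumptions for illustration.

```python
# Sketch: fine-tune a publicly pretrained segmentation model on institutional data.
import torch
import torch.nn as nn

# Placeholder network standing in for the pretrained two-stage 3D U-Net.
model = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv3d(8, 18, 1))                # 17 OARs + background

# Step 1: initialise from weights trained on the large public dataset
# (path assumed; uncomment with a real checkpoint).
# model.load_state_dict(torch.load("public_pretrained.pt"))

# Step 2: fine-tune on institutional cases at a reduced learning rate.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
ct = torch.randn(1, 1, 32, 64, 64)                        # toy CT volume
labels = torch.randint(0, 18, (1, 32, 64, 64))            # toy OAR labels
for _ in range(3):
    optimizer.zero_grad()
    loss_fn(model(ct), labels).backward()
    optimizer.step()
```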

Vox2Vox: 3D-GAN for Brain Tumour Segmentation

  • Cirillo, Marco Domenico
  • Abramian, David
  • Eklund, Anders
2021 Book Section, cited 0 times
Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histological sub-regions, i.e., peritumoral edema, necrotic core, enhancing and non-enhancing tumour core. Although brain tumours can easily be detected using multi-modal MRI, accurate tumor segmentation is a challenging task. Hence, using the data provided by the BraTS Challenge 2020, we propose a 3D volume-to-volume Generative Adversarial Network for segmentation of brain tumours. The model, called Vox2Vox, generates realistic segmentation outputs from multi-channel 3D MR images, segmenting the whole, core and enhancing tumor with mean values of 87.20%, 81.14%, and 78.67% as Dice scores and 6.44 mm, 24.36 mm, and 18.95 mm for Hausdorff distance 95 percentile for the BraTS testing set after ensembling 10 Vox2Vox models obtained with a 10-fold cross-validation. The code is available at https://github.com/mdciri/Vox2Vox.

Automatic detection of spiculation of pulmonary nodules in computed tomography images

  • Ciompi, F
  • Jacobs, C
  • Scholten, ET
  • van Riel, SJ
  • Wille, MMW
  • Prokop, M
  • van Ginneken, B
2015 Conference Proceedings, cited 5 times
Website

Self supervised contrastive learning for digital histopathology

  • Ciga, Ozan
  • Xu, Tony
  • Martel, Anne Louise
Machine Learning with Applications 2022 Journal Article, cited 28 times
Website
Unsupervised learning has been a long-standing goal of machine learning and is especially important for medical image analysis, where the learning can compensate for the scarcity of labeled datasets. A promising subclass of unsupervised learning is self-supervised learning, which aims to learn salient features using the raw input as the learning signal. In this work, we tackle the issue of learning domain-specific features without any supervision to improve multiple task performances that are of interest to the digital histopathology community. We apply a contrastive self-supervised learning method to digital histopathology by collecting and pretraining on 57 histopathology datasets without any labels. We find that combining multiple multi-organ datasets with different types of staining and resolution properties improves the quality of the learned features. Furthermore, we find using more images for pretraining leads to a better performance in multiple downstream tasks, albeit there are diminishing returns as more unlabeled images are incorporated into the pretraining. Linear classifiers trained on top of the learned features show that networks pretrained on digital histopathology datasets perform better than ImageNet pretrained networks, boosting task performances by more than 28% in scores on average. Interestingly, we did not observe a consistent correlation between the pretraining dataset site or the organ versus the downstream task (e.g., pretraining with only breast images does not necessarily lead to a superior downstream task performance for breast-related tasks). These findings may also be useful when applying newer contrastive techniques to histopathology data. Pretrained PyTorch models are made publicly available at https://github.com/ozanciga/self-supervised-histopathology.
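The contrastive objective underlying this kind of self-supervised pretraining can be written down compactly. The sketch below is an NT-Xent (SimCLR-style) loss, a representative choice rather than necessarily the paper's exact variant; `z1` and `z2` are projections of two augmented views of the same histopathology tiles.

```python
# Sketch: NT-Xent contrastive loss over two augmented views of a batch.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    z = F.normalize(torch.cat([z1, z2]), dim=1)       # (2N, d) unit vectors
    sim = z @ z.t() / temperature                     # scaled cosine similarities
    n = z1.shape[0]
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    # The positive for row i is the other view of the same tile.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

loss = nt_xent_loss(torch.randn(16, 128), torch.randn(16, 128))
```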

Results of initial low-dose computed tomographic screening for lung cancer

  • Church, T. R.
  • Black, W. C.
  • Aberle, D. R.
  • Berg, C. D.
  • Clingan, K. L.
  • Duan, F.
  • Fagerstrom, R. M.
  • Gareen, I. F.
  • Gierada, D. S.
  • Jones, G. C.
  • Mahon, I.
  • Marcus, P. M.
  • Sicks, J. D.
  • Jain, A.
  • Baum, S.
N Engl J Med 2013 Journal Article, cited 529 times
Website
BACKGROUND: Lung cancer is the largest contributor to mortality from cancer. The National Lung Screening Trial (NLST) showed that screening with low-dose helical computed tomography (CT) rather than with chest radiography reduced mortality from lung cancer. We describe the screening, diagnosis, and limited treatment results from the initial round of screening in the NLST to inform and improve lung-cancer-screening programs. METHODS: At 33 U.S. centers, from August 2002 through April 2004, we enrolled asymptomatic participants, 55 to 74 years of age, with a history of at least 30 pack-years of smoking. The participants were randomly assigned to undergo annual screening, with the use of either low-dose CT or chest radiography, for 3 years. Nodules or other suspicious findings were classified as positive results. This article reports findings from the initial screening examination. RESULTS: A total of 53,439 eligible participants were randomly assigned to a study group (26,715 to low-dose CT and 26,724 to chest radiography); 26,309 participants (98.5%) and 26,035 (97.4%), respectively, underwent screening. A total of 7191 participants (27.3%) in the low-dose CT group and 2387 (9.2%) in the radiography group had a positive screening result; in the respective groups, 6369 participants (90.4%) and 2176 (92.7%) had at least one follow-up diagnostic procedure, including imaging in 5717 (81.1%) and 2010 (85.6%) and surgery in 297 (4.2%) and 121 (5.2%). Lung cancer was diagnosed in 292 participants (1.1%) in the low-dose CT group versus 190 (0.7%) in the radiography group (stage 1 in 158 vs. 70 participants and stage IIB to IV in 120 vs. 112). Sensitivity and specificity were 93.8% and 73.4% for low-dose CT and 73.5% and 91.3% for chest radiography, respectively. CONCLUSIONS: The NLST initial screening results are consistent with the existing literature on screening by means of low-dose CT and chest radiography, suggesting that a reduction in mortality from lung cancer is achievable at U.S. screening centers that have staff experienced in chest CT. (Funded by the National Cancer Institute; NLST ClinicalTrials.gov number, NCT00047385.).

Facilitating innovation and knowledge transfer between homogeneous and heterogeneous datasets: Generic incremental transfer learning approach and multidisciplinary studies

  • Chui, Kwok Tai
  • Arya, Varsha
  • Band, Shahab S.
  • Alhalabi, Mobeen
  • Liu, Ryan Wen
  • Chi, Hao Ran
2023 Journal Article, cited 0 times
Website
Open datasets serve as facilitators for researchers to conduct research with ground truth data. Generally, datasets contain innovation and knowledge in their domains that could be transferred between homogeneous datasets, which has become feasible using machine learning models with the advent of transfer learning algorithms. Research initiatives are drawn to heterogeneous datasets if useful innovation and knowledge can be extracted across datasets of different domains. A breakthrough can be achieved without the restriction requiring similarities between datasets. A multiple incremental transfer learning approach is proposed to yield optimal results in the target model. A multiple rounds multiple incremental transfer learning with a negative transfer avoidance algorithm is proposed as a generic approach to transfer innovation and knowledge from the source domain to the target domain. Incremental learning has played an important role in lowering the risk of transferring unrelated information which reduces the performance of machine learning models. To evaluate the effectiveness of the proposed algorithm, multidisciplinary studies are carried out in 5 disciplines with 15 benchmark datasets. Each discipline comprises 3 datasets as studies with homogeneous datasets, whereas heterogeneous datasets are formed between disciplines. The results reveal that the proposed algorithm enhances the average accuracy by 4.35% compared with existing works. Ablation studies are also conducted to analyse the contributions of the individual techniques of the proposed algorithm, namely, the multiple rounds strategy, incremental learning, and negative transfer avoidance algorithms. These techniques enhance the average accuracy of the machine learning model by 3.44%, 0.849%, and 4.26%, respectively.

Application of Artificial Neural Networks for Prognostic Modeling in Lung Cancer after Combining Radiomic and Clinical Features

  • Chufal, Kundan S.
  • Ahmad, Irfan
  • Pahuja, Anjali K.
  • Miller, Alexis A.
  • Singh, Rajpal
  • Chowdhary, Rahul L.
Asian Journal of Oncology 2019 Journal Article, cited 0 times
Website
Objective This study aimed to investigate machine learning (ML) and artificial neural networks (ANNs) in the prognostic modeling of lung cancer, utilizing high-dimensional data. Materials and Methods A computed tomography (CT) dataset of inoperable nonsmall cell lung carcinoma (NSCLC) patients with embedded tumor segmentation and survival status, comprising 422 patients, was selected. Radiomic data extraction was performed on Computation Environment for Radiation Research (CERR). The survival probability was first determined based on clinical features only and then unsupervised ML methods. Supervised ANN modeling was performed by direct and hybrid modeling which were subsequently compared. Statistical significance was set at <0.05. Results Survival analyses based on clinical features alone were not significant, except for gender. ML clustering performed on unselected radiomic and clinical data demonstrated a significant difference in survival (two-step cluster, median overall survival [mOS]: 30.3 vs. 17.2 m; p = 0.03; K-means cluster, mOS: 21.1 vs. 7.3 m; p < 0.001). Direct ANN modeling yielded a better overall model accuracy utilizing multilayer perceptron (MLP) than radial basis function (RBF; 79.2 vs. 61.4%, respectively). Hybrid modeling with MLP (after feature selection with ML) resulted in an overall model accuracy of 80%. There was no difference in model accuracy after direct and hybrid modeling (p = 0.164). Conclusion Our preliminary study supports the application of ANN in predicting outcomes based on radiomic and clinical data.

Predicting recurrence risks in lung cancer patients using multimodal radiomics and random survival forests

  • Christie, J. R.
  • Daher, O.
  • Abdelrazek, M.
  • Romine, P. E.
  • Malthaner, R. A.
  • Qiabi, M.
  • Nayak, R.
  • Napel, S.
  • Nair, V. S.
  • Mattonen, S. A.
J Med Imaging (Bellingham) 2022 Journal Article, cited 0 times
Website
PURPOSE: We developed a model integrating multimodal quantitative imaging features from tumor and nontumor regions, qualitative features, and clinical data to improve the risk stratification of patients with resectable non-small cell lung cancer (NSCLC). APPROACH: We retrospectively analyzed 135 patients [mean age, 69 years (43 to 87, range); 100 male patients and 35 female patients] with NSCLC who underwent upfront surgical resection between 2008 and 2012. The tumor and peritumoral regions on both preoperative CT and FDG PET-CT and the vertebral bodies L3 to L5 on FDG PET were segmented to assess the tumor and bone marrow uptake, respectively. Radiomic features were extracted and combined with clinical and CT qualitative features. A random survival forest model was developed using the top-performing features to predict the time to recurrence/progression in the training cohort (n = 101), validated in the testing cohort (n = 34) using the concordance, and compared with a stage-only model. Patients were stratified into high- and low-risks of recurrence/progression using Kaplan-Meier analysis. RESULTS: The model, consisting of stage, three wavelet texture features, and three wavelet first-order features, achieved a concordance of 0.78 and 0.76 in the training and testing cohorts, respectively, significantly outperforming the baseline stage-only model results of 0.67 (p < 0.005) and 0.60 (p = 0.008), respectively. Patients at high- and low-risks of recurrence/progression were significantly stratified in both the training (p < 0.005) and the testing (p = 0.03) cohorts. CONCLUSIONS: Our radiomic model, consisting of stage and tumor, peritumoral, and bone marrow features from CT and FDG PET-CT significantly stratified patients into low- and high-risk of recurrence/progression.
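The modelling step, a random survival forest over the selected features scored by concordance, can be sketched with the scikit-survival package (an assumed tooling choice; the paper does not state its implementation). Feature counts, hyperparameters, and the synthetic data below are illustrative.

```python
# Sketch: random survival forest for time-to-recurrence, scored by c-index.
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.metrics import concordance_index_censored

rng = np.random.default_rng(0)
X = rng.standard_normal((101, 7))        # e.g. stage + 6 wavelet features
event = rng.random(101) > 0.4            # True = recurrence/progression observed
time = rng.exponential(24.0, 101)        # months to event or censoring
y = np.array(list(zip(event, time)), dtype=[("event", bool), ("time", float)])

rsf = RandomSurvivalForest(n_estimators=200, random_state=0).fit(X, y)
risk = rsf.predict(X)                    # higher score = higher predicted risk
cindex = concordance_index_censored(y["event"], y["time"], risk)[0]
```

Thresholding the predicted risk (for example at its median in the training cohort) then yields the high- and low-risk groups compared by Kaplan-Meier analysis.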

ST3GAL1-associated transcriptomic program in glioblastoma tumor growth, invasion, and prognosis

  • Chong, Yuk Kien
  • Sandanaraj, Edwin
  • Koh, Lynnette WH
  • Thangaveloo, Moogaambikai
  • Tan, Melanie SY
  • Koh, Geraldene RH
  • Toh, Tan Boon
  • Lim, Grace GY
  • Holbrook, Joanna D
  • Kon, Oi Lian
  • Nadarajah, M.
  • Ng, I.
  • Ng, W. H.
  • Tan, N. S.
  • Lim, K. L.
  • Tang, C.
  • Ang, B. T.
2016 Journal Article, cited 16 times
Website
BACKGROUND: Cell surface sialylation is associated with tumor cell invasiveness in many cancers. Glioblastoma is the most malignant primary brain tumor and is highly infiltrative. ST3GAL1 sialyltransferase gene is amplified in a subclass of glioblastomas, and its role in tumor cell self-renewal remains unexplored. METHODS: Self-renewal of patient glioma cells was evaluated using clonogenic, viability, and invasiveness assays. ST3GAL1 was identified from differentially expressed genes in Peanut Agglutinin-stained cells and validated in REMBRANDT (n = 390) and Gravendeel (n = 276) clinical databases. Gene set enrichment analysis revealed upstream processes. TGFβ signaling on ST3GAL1 transcription was assessed using chromatin immunoprecipitation. Transcriptome analysis of ST3GAL1 knockdown cells was done to identify downstream pathways. A constitutively active FoxM1 mutant lacking critical anaphase-promoting complex/cyclosome ([APC/C]-Cdh1) binding sites was used to evaluate ST3Gal1-mediated regulation of FoxM1 protein. Finally, the prognostic role of ST3Gal1 was determined using an orthotopic xenograft model (3 mice groups comprising nontargeting and 2 clones of ST3GAL1 knockdown in NNI-11 [8 per group] and NNI-21 [6 per group]), and the correlation with patient clinical information. All statistical tests on patients' data were two-sided; other P values below are one-sided. RESULTS: High ST3GAL1 expression defines an invasive subfraction with self-renewal capacity; its loss of function prolongs survival in a mouse model established from mesenchymal NNI-11 (P < .001; groups of 8 in 3 arms: nontargeting, C1, and C2 clones of ST3GAL1 knockdown). ST3GAL1 transcriptomic program stratifies patient survival (hazard ratio [HR] = 2.47, 95% confidence interval [CI] = 1.72 to 3.55, REMBRANDT P = 1.92 × 10^-8; HR = 2.89, 95% CI = 1.94 to 4.30, Gravendeel P = 1.05 × 10^-11), independent of age and histology, and associates with higher tumor grade and T2 volume (P = 1.46 × 10^-4). TGFβ signaling, elevated in mesenchymal patients, correlates with high ST3GAL1 (REMBRANDT glioma cor = 0.31, P = 2.29 × 10^-10; Gravendeel glioma cor = 0.50, P = 3.63 × 10^-20). The transcriptomic program upon ST3GAL1 knockdown enriches for mitotic cell cycle processes. FoxM1 was identified as a statistically significantly modulated gene (P = 2.25 × 10^-5) and mediates ST3Gal1 signaling via the (APC/C)-Cdh1 complex. CONCLUSIONS: The ST3GAL1-associated transcriptomic program portends poor prognosis in glioma patients and enriches for higher tumor grades of the mesenchymal molecular classification. We show that ST3Gal1-regulated self-renewal traits are crucial to the sustenance of glioblastoma multiforme growth.

Incremental Prognostic Value of ADC Histogram Analysis over MGMT Promoter Methylation Status in Patients with Glioblastoma

  • Choi, Yoon Seong
  • Ahn, Sung Soo
  • Kim, Dong Wook
  • Chang, Jong Hee
  • Kang, Seok-Gu
  • Kim, Eui Hyun
  • Kim, Se Hoon
  • Rim, Tyler Hyungtaek
  • Lee, Seung-Koo
Radiology 2016 Journal Article, cited 18 times
Website
Purpose To investigate the incremental prognostic value of apparent diffusion coefficient (ADC) histogram analysis over O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status in patients with glioblastoma and the correlation between ADC parameters and MGMT status. Materials and Methods This retrospective study was approved by the institutional review board, and informed consent was waived. A total of 112 patients with glioblastoma were divided into training (74 patients) and test (38 patients) sets. Overall survival (OS) and progression-free survival (PFS) were analyzed with ADC parameters, MGMT status, and other clinical factors. Multivariate Cox regression models with and without ADC parameters were constructed. Model performance was assessed with c index and receiver operating characteristic curve analyses for 12- and 16-month OS and 12-month PFS in the training set and validated in the test set. ADC parameters were compared according to MGMT status for the entire cohort. Results By using ADC parameters, the c indices and diagnostic accuracies for 12- and 16-month OS and 12-month PFS in the models showed significant improvement, with the exception of c indices in the models for PFS (P < .05 for all) in the training set. In the test set, the diagnostic accuracy was improved by using ADC parameters and was significant, with the 25th and 50th percentiles of ADC for 16-month OS (P = .040 and P = .047) and the 25th percentile of ADC for 12-month PFS (P = .026). No significant correlation was found between ADC parameters and MGMT status. Conclusion ADC histogram analysis had incremental prognostic value over MGMT promoter methylation status in patients with glioblastoma.

Machine learning and radiomic phenotyping of lower grade gliomas: improving survival prediction

  • Choi, Yoon Seong
  • Ahn, Sung Soo
  • Chang, Jong Hee
  • Kang, Seok-Gu
  • Kim, Eui Hyun
  • Kim, Se Hoon
  • Jain, Rajan
  • Lee, Seung-Koo
Eur Radiol 2020 Journal Article, cited 0 times
Website
BACKGROUND AND PURPOSE: Recent studies have highlighted the importance of isocitrate dehydrogenase (IDH) mutational status in stratifying biologically distinct subgroups of gliomas. This study aimed to evaluate whether MRI-based radiomic features could improve the accuracy of survival predictions for lower grade gliomas over clinical and IDH status. MATERIALS AND METHODS: Radiomic features (n = 250) were extracted from preoperative MRI data of 296 lower grade glioma patients from databases at our institutional (n = 205) and The Cancer Genome Atlas (TCGA)/The Cancer Imaging Archive (TCIA) (n = 91) datasets. For predicting overall survival, random survival forest models were trained with radiomic features; non-imaging prognostic factors including age, resection extent, WHO grade, and IDH status on the institutional dataset, and validated on the TCGA/TCIA dataset. The performance of the random survival forest (RSF) model and incremental value of radiomic features were assessed by time-dependent receiver operating characteristics. RESULTS: The radiomics RSF model identified 71 radiomic features to predict overall survival, which were successfully validated on TCGA/TCIA dataset (iAUC, 0.620; 95% CI, 0.501-0.756). Relative to the RSF model from the non-imaging prognostic parameters, the addition of radiomic features significantly improved the overall survival prediction accuracy of the random survival forest model (iAUC, 0.627 vs. 0.709; difference, 0.097; 95% CI, 0.003-0.209). CONCLUSION: Radiomic phenotyping with machine learning can improve survival prediction over clinical profile and genomic data for lower grade gliomas. KEY POINTS: * Radiomics analysis with machine learning can improve survival prediction over the non-imaging factors (clinical and molecular profiles) for lower grade gliomas, across different institutions.

Prediction of Human Papillomavirus Status and Overall Survival in Patients with Untreated Oropharyngeal Squamous Cell Carcinoma: Development and Validation of CT-Based Radiomics

  • Choi, Y.
  • Nam, Y.
  • Jang, J.
  • Shin, N. Y.
  • Ahn, K. J.
  • Kim, B. S.
  • Lee, Y. S.
  • Kim, M. S.
AJNR Am J Neuroradiol 2020 Journal Article, cited 0 times
Website
BACKGROUND AND PURPOSE: Human papillomavirus is a prognostic marker for oropharyngeal squamous cell carcinoma. We aimed to determine the value of CT-based radiomics for predicting the human papillomavirus status and overall survival in patients with oropharyngeal squamous cell carcinoma. MATERIALS AND METHODS: Eighty-six patients with oropharyngeal squamous cell carcinoma were retrospectively collected and grouped into training (n = 61) and test (n = 25) sets. For human papillomavirus status and overall survival prediction, radiomics features were selected via a random forest-based algorithm and Cox regression analysis, respectively. Relevant features were used to build multivariate Cox regression models and calculate the radiomics score. Human papillomavirus status and overall survival prediction were assessed via the area under the curve and concordance index, respectively. The models were validated in the test and The Cancer Imaging Archive cohorts (n = 78). RESULTS: For prediction of human papillomavirus status, radiomics features yielded areas under the curve of 0.865, 0.747, and 0.834 in the training, test, and validation sets, respectively. In the univariate Cox regression, the human papillomavirus status (positive: hazard ratio, 0.257; 95% CI, 0.09-0.7; P = .008), T-stage (≥III: hazard ratio, 3.66; 95% CI, 1.34-9.99; P = .011), and radiomics score (high-risk: hazard ratio, 3.72; 95% CI, 1.21-11.46; P = .022) were associated with overall survival. The addition of the radiomics score to the clinical Cox model increased the concordance index from 0.702 to 0.733 (P = .01). Validation yielded concordance indices of 0.866 and 0.720. CONCLUSIONS: CT-based radiomics may be useful in predicting human papillomavirus status and overall survival in patients with oropharyngeal squamous cell carcinoma.

3D CMM-Net with Deeper Encoder for Semantic Segmentation of Brain Tumors in BraTS2021 Challenge

  • Choi, Yoonseok
  • Al-masni, Mohammed A.
  • Kim, Dong-Hyun
2022 Book Section, cited 0 times
We propose a 3D version of the Contextual Multi-scale Multi-level Network (3D CMM-Net) with deeper encoder depth for automated semantic segmentation of different brain tumors in the BraTS2021 challenge. The proposed network has the capability to extract and learn deeper features for the task of multi-class segmentation directly from 3D MRI data. The overall performance of the proposed network gave Dice scores of 0.7557, 0.8060, and 0.8351 for enhancing tumor, tumor core, and whole tumor, respectively on the local-test dataset.

Reproducible and Interpretable Spiculation Quantification for Lung Cancer Screening

  • Choi, Wookjin
  • Nadeem, Saad
  • Alam, Sadegh R.
  • Deasy, Joseph O.
  • Tannenbaum, Allen
  • Lu, Wei
Computer Methods and Programs in Biomedicine 2020 Journal Article, cited 0 times
Website
Spiculations, which are spikes on the surface of pulmonary nodules, are important predictors of lung cancer malignancy. In this study, we proposed an interpretable and parameter-free technique to quantify spiculation using an area distortion metric obtained by conformal (angle-preserving) spherical parameterization. We exploit the insight that for an angle-preserved spherical mapping of a given nodule, the corresponding negative area distortion precisely characterizes the spiculations on that nodule. We introduced novel spiculation scores based on the area distortion metric and spiculation measures. We also semi-automatically segment the lung nodule (for reproducibility) as well as vessel and wall attachment to differentiate real spiculations from lobulation and attachment. A simple pathological malignancy prediction model is also introduced. We used the publicly available LIDC-IDRI dataset's pathologist (strong-label) and radiologist (weak-label) ratings to train and test radiomics models containing this feature, and then externally validated the models. We achieved AUC = 0.80 and 0.76, respectively, with the models trained on the 811 weakly-labeled LIDC datasets and tested on the 72 strongly-labeled LIDC and 73 LUNGx datasets; the previous best model for LUNGx had AUC = 0.68. The number-of-spiculations feature was found to be highly correlated (Spearman's rank correlation coefficient) with the radiologists' spiculation score. We developed a reproducible and interpretable, parameter-free technique for quantifying spiculations on nodules. The spiculation quantification measures were then applied to the radiomics framework for pathological malignancy prediction with reproducible semi-automatic segmentation of nodules. Using our interpretable features (size, attachment, spiculation, lobulation), we were able to achieve higher performance than previous models. In the future, we will exhaustively test our model for lung cancer screening in the clinic.

A Cascaded Neural Network for Staging in Non-Small Cell Lung Cancer Using Pre-Treatment CT

  • Choi, J.
  • Cho, H. H.
  • Kwon, J.
  • Lee, H. Y.
  • Park, H.
Diagnostics (Basel) 2021 Journal Article, cited 0 times
Website
BACKGROUND AND AIM: Tumor staging in non-small cell lung cancer (NSCLC) is important for treatment and prognosis. Staging involves expert interpretation of imaging, which we aim to automate with deep learning (DL). We proposed a cascaded DL method comprising two steps to classify between early- and advanced-stage NSCLC using pretreatment computed tomography. METHODS: We developed and tested a DL model to classify between early and advanced stages using training (n = 90), validation (n = 8), and two test (n = 37, n = 26) cohorts obtained from the public domain. The first step adopted an autoencoder network to compress the imaging data into latent variables and the second step used the latent variables to classify the stages using a convolutional neural network (CNN). Other DL and machine learning-based approaches were compared. RESULTS: Our model was tested in two test cohorts of CPTAC and TCGA. In CPTAC, our model achieved accuracy of 0.8649, sensitivity of 0.8000, specificity of 0.9412, and area under the curve (AUC) of 0.8206 compared to other approaches (AUC 0.6824-0.7206) for classifying between early and advanced stages. In TCGA, our model achieved accuracy of 0.8077, sensitivity of 0.7692, specificity of 0.8462, and AUC of 0.8343. CONCLUSION: Our cascaded DL model for classifying NSCLC patients into early stage and advanced stage showed promising results and could help future NSCLC research.
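The two-step cascade, an autoencoder compressing the imaging data into latent variables followed by a CNN classifying those latents, can be sketched as below. This is a 2D toy rendering of the idea (the paper works on CT volumes); every layer size is illustrative.

```python
# Sketch: autoencoder (step 1) + CNN classifier on latents (step 2).
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
decoder = nn.Sequential(nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                        nn.ConvTranspose2d(16, 1, 2, stride=2))
classifier = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                           nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                           nn.Linear(64, 2))              # early vs advanced

ct = torch.randn(4, 1, 128, 128)                          # toy CT slices
# Step 1: train the autoencoder to reconstruct the imaging data.
reconstruction_loss = nn.functional.mse_loss(decoder(encoder(ct)), ct)
# Step 2: train the classifier on the (detached) latent variables.
logits = classifier(encoder(ct).detach())
```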

Integrative analysis of imaging and transcriptomic data of the immune landscape associated with tumor metabolism in lung adenocarcinoma: Clinical and prognostic implications

  • Choi, Hongyoon
  • Na, Kwon Joong
THERANOSTICS 2018 Journal Article, cited 0 times
Website

Quantification of T2-FLAIR Mismatch in Nonenhancing Diffuse Gliomas Using Digital Subtraction

  • Cho, N. S.
  • Sanvito, F.
  • Le, V. L.
  • Oshima, S.
  • Teraishi, A.
  • Yao, J.
  • Telesca, D.
  • Raymond, C.
  • Pope, W. B.
  • Nghiemphu, P. L.
  • Lai, A.
  • Cloughesy, T. F.
  • Salamon, N.
  • Ellingson, B. M.
AJNR Am J Neuroradiol 2024 Journal Article, cited 0 times
Website
BACKGROUND AND PURPOSE: The T2-FLAIR mismatch sign on MR imaging is a highly specific imaging biomarker of isocitrate dehydrogenase (IDH)-mutant astrocytomas, which lack 1p/19q codeletion. However, most studies using the T2-FLAIR mismatch sign have used visual assessment. This study quantified the degree of T2-FLAIR mismatch using digital subtraction of fluid-nulled T2-weighted FLAIR images from non-fluid-nulled T2-weighted images in human nonenhancing diffuse gliomas and then used this information to assess improvements in diagnostic performance and investigate subregion characteristics within these lesions. MATERIALS AND METHODS: Two cohorts of treatment-naive, nonenhancing gliomas with known IDH and 1p/19q status were studied (n = 71 from The Cancer Imaging Archive (TCIA) and n = 34 in the institutional cohort). 3D volumes of interest corresponding to the tumor were segmented, and digital subtraction maps of T2-weighted MR imaging minus T2-weighted FLAIR MR imaging were used to partition each volume of interest into a T2-FLAIR mismatched subregion (T2-FLAIR mismatch, corresponding to voxels with positive values on the subtraction maps) and nonmismatched subregion (T2-FLAIR nonmismatch corresponding to voxels with negative values on the subtraction maps). Tumor subregion volumes, percentage of T2-FLAIR mismatch volume, and T2-FLAIR nonmismatch subregion thickness were calculated, and 2 radiologists assessed the T2-FLAIR mismatch sign with and without the aid of T2-FLAIR subtraction maps. RESULTS: Thresholds of ≥42% T2-FLAIR mismatch volume classified IDH-mutant astrocytoma with a specificity/sensitivity of 100%/19.6% (TCIA) and 100%/31.6% (institutional); ≥25% T2-FLAIR mismatch volume showed 92.0%/32.6% and 100%/63.2% specificity/sensitivity, and ≥15% T2-FLAIR mismatch volume showed 88.0%/39.1% and 93.3%/79.0% specificity/sensitivity. In IDH-mutant astrocytomas with ≥15% T2-FLAIR mismatch volume, T2-FLAIR nonmismatch subregion thickness was negatively correlated with the percentage T2-FLAIR mismatch volume (P < .0001) across both cohorts. The percentage T2-FLAIR mismatch volume was higher in grades 3-4 compared with grade 2 IDH-mutant astrocytomas (P < .05), and ≥15% T2-FLAIR mismatch volume IDH-mutant astrocytomas were significantly larger than <15% T2-FLAIR mismatch volume IDH-mutant astrocytoma (P < .05) across both cohorts. When evaluated by 2 radiologists, the additional use of T2-FLAIR subtraction maps did not show a significant difference in interreader agreement, sensitivity, or specificity compared with a separate evaluation of T2-FLAIR and T2-weighted MR imaging alone. CONCLUSIONS: T2-FLAIR digital subtraction maps may be a useful, automated tool to obtain objective segmentations of tumor subregions based on quantitative thresholds for classifying IDH-mutant astrocytomas using the percentage T2 FLAIR mismatch volume with 100% specificity and exploring T2-FLAIR mismatch/T2-FLAIR nonmismatch subregion characteristics. Conversely, the addition of T2-FLAIR subtraction maps did not enhance the sensitivity or specificity of the visual T2-FLAIR mismatch sign assessment by experienced radiologists.
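Once T2 and T2-FLAIR are coregistered and intensity-normalised (assumed preprocessing, not detailed here), the quantification reduces to subtracting the images inside the tumour VOI, splitting the VOI by the sign of the difference, and comparing the positive fraction against the reported thresholds. A minimal sketch:

```python
# Sketch: percentage T2-FLAIR mismatch volume from a digital subtraction map.
import numpy as np

def percent_t2_flair_mismatch(t2, flair, tumor_mask, voxel_volume_mm3=1.0):
    """Percent of the tumour VOI where T2 minus T2-FLAIR is positive."""
    subtraction = t2 - flair              # assumes normalised, coregistered inputs
    mismatch = (subtraction > 0) & tumor_mask
    pct = 100.0 * mismatch.sum() / tumor_mask.sum()
    return pct, mismatch.sum() * voxel_volume_mm3

pct, mismatch_volume = percent_t2_flair_mismatch(
    np.random.rand(64, 64, 32), np.random.rand(64, 64, 32),
    np.random.rand(64, 64, 32) > 0.7)
predicted_idh_mutant_astrocytoma = pct >= 42   # the 100%-specificity cutoff above
```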

Multi-modal Transformer for Brain Tumor Segmentation

  • Cho, Jihoon
  • Park, Jinah
2023 Book Section, cited 0 times
Website
Segmentation of brain tumors from multiple MRI modalities is necessary for successful disease diagnosis and clinical treatment. In recent years, Transformer-based networks with the self-attention mechanism have been proposed, but they have not shown performance beyond U-shaped fully convolutional networks. In this paper, we apply the HFTrans network to the brain tumor segmentation task of the BraTS 2022 challenge, focusing on the multiple MRI modalities with the benefits of the Transformer. By applying BraTS-specific modifications of preprocessing, aggressive data augmentation, and postprocessing, our method shows superior results in comparison with previous best performers. We show that the final result on the BraTS 2022 validation dataset achieves Dice scores of 82.94%, 85.48%, and 92.44% and Hausdorff distances of 14.55 mm, 12.96 mm, and 3.77 mm for enhancing tumor, tumor core, and whole tumor, respectively.

Classification of the glioma grading using radiomics analysis

  • Cho, Hwan-ho
  • Lee, Seung-hak
  • Kim, Jonghoon
  • Park, Hyunjin
PeerJ 2018 Journal Article, cited 0 times
Website

Radiomics-guided deep neural networks stratify lung adenocarcinoma prognosis from CT scans

  • Cho, Hwan-ho
  • Lee, Ho Yun
  • Kim, Eunjin
  • Lee, Geewon
  • Kim, Jonghoon
  • Kwon, Junmo
  • Park, Hyunjin
Communications Biology 2021 Journal Article, cited 7 times
Website

Efficient Radiomics-Based Classification of Multi-Parametric MR Images to Identify Volumetric Habitats and Signatures in Glioblastoma: A Machine Learning Approach

  • Chiu, F. Y.
  • Yen, Y.
Cancers (Basel) 2022 Journal Article, cited 0 times
Website
Glioblastoma (GBM) is a fast-growing and aggressive brain tumor of the central nervous system. It encroaches on brain tissue with heterogeneous regions of a necrotic core, solid part, peritumoral tissue, and edema. This study provided qualitative image interpretation of GBM subregions, quantitative radiomic features from image analysis, and ratios of these tumor components. The aim of this study was to assess the potential of multi-parametric MR fingerprinting with volumetric tumor phenotype and radiomic features to underlie the biological processes and prognostic status of patients with cerebral gliomas. Based on efficiently classified and retrieved cerebral multi-parametric MRI, all data were analyzed to derive volume-based data of the entire tumor from local cohorts and The Cancer Imaging Archive (TCIA) cohorts with GBM. Edema was mainly enriched for homeostasis, whereas necrosis was associated with texture features. The volume of the edema was about 1.5 times that of the solid tumor part, and the volume of the solid part was approximately 0.7 times that of the necrotic area. Therefore, the multi-parametric MRI-based radiomics model efficiently classifies tumor subregions of GBM and suggests that prognostic radiomic features from routine MRI examination may also be significantly associated with key biological processes as a practical imaging biomarker.

Volume-based inter difference XOR pattern: a new pattern for pulmonary nodule detection in CT images

  • Chitradevi, A.
  • Singh, N. Nirmal
  • Jayapriya, K.
International Journal of Biomedical Engineering and Technology 2021 Journal Article, cited 0 times
Website
Pulmonary nodule identification, which paves the way to cancer diagnosis, remains a challenging task. The proposed work, the volume-based inter difference XOR pattern (VIDXP), provides an efficient lung nodule detection system using a 3D texture-based pattern that is formed, for every segmented nodule, by XOR pattern calculation of inter-frame grey value differences between the centre frame and its neighbourhood frames in a rotationally clockwise direction. Different classifiers such as random forest (RF), decision tree (DT) and AdaBoost are used with ten trials of five-fold cross-validation for classification. The experimental analysis on the public lung image database consortium-image database resource initiative (LIDC-IDRI) database shows that the proposed scheme gives better accuracy than existing approaches. Further, the proposed scheme is enhanced by combining shape information using the histogram of oriented gradients (HOG), which further improves the classification accuracy.
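The exact VIDXP definition is given in the paper; as a loose, simplified illustration of the underlying idea — thresholding inter-frame grey-value differences against the centre frame and XOR-ing consecutive binary maps — a sketch might look like this (all names and the final histogram summary are assumptions, not the authors' implementation):

```python
import numpy as np

def inter_difference_xor_descriptor(nodule_volume):
    """Simplified inter-difference XOR descriptor for a segmented nodule.

    nodule_volume: (slices, H, W) grey-value array. Each neighbouring slice
    is compared against the centre slice; the signs of the differences give
    binary maps, consecutive maps are XOR-ed, and each XOR map is summarized
    by its fraction of set voxels.
    """
    centre_idx = nodule_volume.shape[0] // 2
    centre = nodule_volume[centre_idx].astype(np.int32)
    binary_maps = [
        (nodule_volume[i].astype(np.int32) - centre) > 0
        for i in range(nodule_volume.shape[0]) if i != centre_idx
    ]
    xor_maps = [np.logical_xor(a, b) for a, b in zip(binary_maps, binary_maps[1:])]
    return np.array([m.mean() for m in xor_maps])
```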

Imaging phenotypes of breast cancer heterogeneity in pre-operative breast Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) scans predict 10-year recurrence

  • Chitalia, Rhea
  • Rowland, Jennifer
  • McDonald, Elizabeth S
  • Pantalone, Lauren
  • Cohen, Eric A
  • Gastounioti, Aimilia
  • Feldman, Michael
  • Schnall, Mitchell
  • Conant, Emily
  • Kontos, Despina
Clinical Cancer Research 2019 Journal Article, cited 0 times
Website

Expert tumor annotations and radiomics for locally advanced breast cancer in DCE-MRI for ACRIN 6657/I-SPY1

  • Chitalia, R.
  • Pati, S.
  • Bhalerao, M.
  • Thakur, S. P.
  • Jahani, N.
  • Belenky, V.
  • McDonald, E. S.
  • Gibbs, J.
  • Newitt, D. C.
  • Hylton, N. M.
  • Kontos, D.
  • Bakas, S.
Sci Data 2022 Journal Article, cited 0 times
Website
Breast cancer is one of the most pervasive forms of cancer and its inherent intra- and inter-tumor heterogeneity contributes towards its poor prognosis. Multiple studies have reported results from either private institutional data or publicly available datasets. However, current public datasets are limited in terms of having consistency in: a) data quality, b) quality of expert annotation of pathology, and c) availability of baseline results from computational algorithms. To address these limitations, here we propose the enhancement of the I-SPY1 data collection, with uniformly curated data, tumor annotations, and quantitative imaging features. Specifically, the proposed dataset includes a) uniformly processed scans that are harmonized to match intensity and spatial characteristics, facilitating immediate use in computational studies, b) computationally-generated and manually-revised expert annotations of tumor regions, as well as c) a comprehensive set of quantitative imaging (also known as radiomic) features corresponding to the tumor regions. This collection describes our contribution towards repeatable, reproducible, and comparative quantitative studies leading to new predictive, prognostic, and diagnostic assessments.

Radiomic tumor phenotypes augment molecular profiling in predicting recurrence free survival after breast neoadjuvant chemotherapy

  • Chitalia, R.
  • Miliotis, M.
  • Jahani, N.
  • Tastsoglou, S.
  • McDonald, E. S.
  • Belenky, V.
  • Cohen, E. A.
  • Newitt, D.
  • Van't Veer, L. J.
  • Esserman, L.
  • Hylton, N.
  • DeMichele, A.
  • Hatzigeorgiou, A.
  • Kontos, D.
2023 Journal Article, cited 2 times
Website
BACKGROUND: Early changes in breast intratumor heterogeneity during neoadjuvant chemotherapy may reflect the tumor's ability to adapt and evade treatment. We investigated the combination of precision medicine predictors of genomic and MRI data towards improved prediction of recurrence free survival (RFS). METHODS: A total of 100 women from the ACRIN 6657/I-SPY 1 trial were retrospectively analyzed. We estimated MammaPrint, PAM50 ROR-S, and p53 mutation scores from publicly available gene expression data and generated four, voxel-wise 3-D radiomic kinetic maps from DCE-MR images at both pre- and early-treatment time points. Within the primary lesion from each kinetic map, features of change in radiomic heterogeneity were summarized into 6 principal components. RESULTS: We identify two imaging phenotypes of change in intratumor heterogeneity (p < 0.01) demonstrating significant Kaplan-Meier curve separation (p < 0.001). Adding phenotypes to established prognostic factors, functional tumor volume (FTV), MammaPrint, PAM50, and p53 scores in a Cox regression model improves the concordance statistic for predicting RFS from 0.73 to 0.79 (p = 0.002). CONCLUSIONS: These results demonstrate an important step in combining personalized molecular signatures and longitudinal imaging data towards improved prognosis. Early changes in tumor properties during treatment may tell us whether or not a patient's tumor is responding to treatment. Such changes may be seen on imaging. Here, changes in breast cancer properties are identified on imaging and are used in combination with gene markers to investigate whether response to treatment can be predicted using mathematical models. We demonstrate that tumor properties seen on imaging early on in treatment can help to predict patient outcomes. Our approach may allow clinicians to better inform patients about their prognosis and choose appropriate and effective therapies.
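As a minimal sketch of the kind of fused survival model described above — adding the six imaging principal components to FTV, MammaPrint, PAM50, and p53 scores in a Cox regression — the following uses the lifelines library (the file name and column names are hypothetical):

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical table: one row per patient, with RFS follow-up time and event
# indicator, plus established prognostic factors and six radiomic PCs.
df = pd.read_csv("ispy1_features.csv")

cph = CoxPHFitter()
cph.fit(
    df[["rfs_months", "rfs_event", "ftv", "mammaprint", "pam50_ror_s",
        "p53_score", "pc1", "pc2", "pc3", "pc4", "pc5", "pc6"]],
    duration_col="rfs_months",
    event_col="rfs_event",
)
print(cph.concordance_index_)  # c-statistic, analogous to the 0.79 reported
```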

A Model to Improve the Quality of Low-dose CT Scan Images

  • Chircop, Francesca
  • Debono, Carl James
  • Bezzina, Paul
  • Zarb, Francis
2022 Conference Paper, cited 0 times
Website
Computed Tomography (CT) scans are used during medical imaging diagnosis as they provide detailed cross-sectional images of the human body by making use of X-rays. X-ray radiation as part of medical diagnosis poses health risks to patients, leading experts to opt for low doses of radiation when possible. In accordance with European Directives, ionising radiation doses for medical purposes are to be kept as low as reasonably achievable (ALARA). While reduced radiation is beneficial from a health perspective, it impacts the quality of the images, as the noise in the images increases, reducing the radiologist's confidence in diagnosis. Various low-dose CT (LDCT) image denoising strategies available in the literature attempt to resolve this conflict. However, current models face problems such as over-smoothed results and loss of detailed information. Consequently, the quality of LDCT images after denoising is still an important problem. The models presented in this work use deep learning techniques that are modified and trained for this problem. The results show that the best model in terms of image quality achieved a peak signal to noise ratio (PSNR) of 19.5 dB, a structural similarity index measure (SSIM) of 0.7153, and a root mean square error (RMSE) of 43.34. It performed the required operations in an average time of 4843.80 s. Furthermore, tests at different dose levels were done to assess the robustness of the best-performing models.

SVM-PUK Kernel Based MRI-brain Tumor Identification Using Texture and Gabor Wavelets

  • Chinnam, Siva
  • Sistla, Venkatramaphanikumar
  • Kolli, Venkata
Traitement du Signal 2019 Journal Article, cited 0 times
Website

Computed Tomography (CT) Image Quality Enhancement via a Uniform Framework Integrating Noise Estimation and Super-Resolution Networks

  • Chi, Jianning
  • Zhang, Yifei
  • Yu, Xiaosheng
  • Wang, Ying
  • Wu, Chengdong
Sensors (Basel) 2019 Journal Article, cited 2 times
Website
Computed tomography (CT) imaging technology has been widely used to assist medical diagnosis in recent years. However, noise introduced during imaging and data compression during storage and transmission degrade image quality, resulting in unreliable performance of the post-processing steps in computer-assisted diagnosis systems (CADs), such as medical image segmentation, feature extraction, and medical image classification. Since the degradation of medical images typically appears as noise and low-resolution blurring, in this paper we propose a uniform deep convolutional neural network (DCNN) framework to handle the de-noising and super-resolution of the CT image at the same time. The framework consists of two steps: Firstly, a dense-inception network integrating an inception structure and dense skip connections is proposed to estimate the noise level. The inception structure is used to extract the noise and blurring features with respect to multiple receptive fields, while the dense skip connections can reuse those extracted features and transfer them across the network. Secondly, a modified residual-dense network combined with a joint loss is proposed to reconstruct the high-resolution image with low noise. The inception block is applied on each skip connection of the dense-residual network so that the structure features of the image, rather than the noise and blurring features, are transferred through the network. Moreover, both the perceptual loss and the mean square error (MSE) loss are used to constrain the network, leading to better performance in the reconstruction of image edges and details. Our proposed network integrates degradation estimation, noise removal, and image super-resolution in one uniform framework to enhance medical image quality. We apply our method to The Cancer Imaging Archive (TCIA) public dataset to evaluate its ability in medical image quality enhancement. The experimental results demonstrate that the proposed method outperforms the state-of-the-art methods on de-noising and super-resolution by providing higher peak signal to noise ratio (PSNR) and structure similarity index (SSIM) values.
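The PSNR/SSIM evaluation used in studies like this one can be reproduced with scikit-image; a minimal sketch, with placeholder arrays standing in for the full-dose reference slice and the network-enhanced slice:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Placeholder arrays standing in for a reference CT slice and an enhanced
# low-dose slice (both floats in the same intensity range).
rng = np.random.default_rng(0)
reference = rng.random((512, 512))
enhanced = reference + 0.05 * rng.standard_normal((512, 512))

data_range = reference.max() - reference.min()
print("PSNR:", peak_signal_noise_ratio(reference, enhanced, data_range=data_range))
print("SSIM:", structural_similarity(reference, enhanced, data_range=data_range))
```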

Low-Dose CT Image Super-resolution Network with Noise Inhibition Based on Feedback Feature Distillation Mechanism

  • Chi, J.
  • Wei, X.
  • Sun, Z.
  • Yang, Y.
  • Yang, B.
2024 Journal Article, cited 0 times
Website
Low-dose computed tomography (LDCT) has been widely used in medical diagnosis. In practice, doctors often zoom in on LDCT slices for clearer lesions and issues, but a simple zooming operation fails to suppress low-dose artifacts, leading to distorted details. Therefore, numerous LDCT super-resolution (SR) methods have been proposed to improve zooming quality without increasing the dose in CT scanning. However, there are still some drawbacks that need to be addressed in existing methods. First, the region of interest (ROI) is not emphasized due to the lack of guidance in the reconstruction process. Second, the convolutional blocks extracting fixed-resolution features fail to concentrate on the essential multi-scale features. Third, a single SR head cannot suppress the residual artifacts. To address these issues, we propose an LDCT joint SR and denoising reconstruction network. Our proposed network consists of global dual-guidance attention fusion modules (GDAFMs) and multi-scale anastomosis blocks (MABs). The GDAFM directs the network to focus on the ROI by fusing extra mask guidance and average CT image guidance, while the MAB introduces hierarchical features through anastomosis connections to leverage multi-scale features and promote the feature representation ability. To suppress radial residual artifacts, we optimize our network using the feedback feature distillation mechanism (FFDM), which shares the backbone to learn features corresponding to the denoising task. We apply the proposed method to the 3D-IRCADB and PANCREAS datasets to evaluate its ability on LDCT image SR reconstruction. The experimental results compared with state-of-the-art methods illustrate the superiority of our approach with respect to peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and qualitative observations. Our proposed LDCT joint SR and denoising reconstruction network has been extensively evaluated through ablation, quantitative, and qualitative experiments. The results demonstrate that our method can recover noise-free and detail-sharp images, resulting in better reconstruction results. Code is available at https://github.com/neu-szy/ldct_sr_dn_w_ffdm.

Pancreatic Carcinoma Detection with Publicly available Radiological Images: A Systematic Analysis

  • Chhikara, Jasmine
  • Goel, Nidhi
  • Rathee, Neeru
2022 Conference Paper, cited 0 times
Pancreatic carcinoma is the fifth deadliest malignancy worldwide and accounts for a large share of the total mortality caused by cancer every year. The main cause of the high mortality and minimal survival rate is the delayed detection of abnormal cell growth in the pancreatic regions of patients diagnosed with this ailment. In recent years, researchers have been putting effort into the early detection of pancreatic carcinoma in radiological imaging scans of the whole abdomen. In this paper, the authors have systematically reviewed the data reported and various works done on publicly available imaging datasets of pancreatic cancer. The analyzed datasets are Pancreas-Computed Tomography and Clinical Proteomic Tumor Analysis Consortium Pancreatic Ductal Adenocarcinoma from The Cancer Imaging Archive, and Pancreas Tumor from the Medical Segmentation Decathlon online repository. The review is supported by reporting incidences depending on age group, clinical history, physical conditions, pathological findings, tumor nature, region, stage, and tumor size of examined patients. The outcomes of the categorized subjects will aid academicians, research scholars, and industrialists in understanding the propagation of pancreatic cancer for early detection in computer-aided systems.

Revealing Tumor Habitats from Texture Heterogeneity Analysis for Classification of Lung Cancer Malignancy and Aggressiveness

  • Cherezov, Dmitry
  • Goldgof, Dmitry
  • Hall, Lawrence
  • Gillies, Robert
  • Schabath, Matthew
  • Müller, Henning
  • Depeursinge, Adrien
Scientific Reports 2019 Journal Article, cited 0 times
Website
We propose an approach for characterizing structural heterogeneity of lung cancer nodules using Computed Tomography Texture Analysis (CTTA). Measures of heterogeneity were used to test the hypothesis that heterogeneity can serve as a predictor of nodule malignancy and patient survival. To do this, we use the National Lung Screening Trial (NLST) dataset to determine if heterogeneity can represent differences between nodules in lung cancer and nodules in non-lung cancer patients; 253 participants are in the training set and 207 participants in the test set. To discriminate cancerous from non-cancerous nodules at the time of diagnosis, a combination of heterogeneity and radiomic features was evaluated, producing a best area under the receiver operating characteristic curve (AUROC) of 0.85 and an accuracy of 81.64%. Second, we tested the hypothesis that heterogeneity can predict patient survival. We analyzed 40 patients diagnosed with lung adenocarcinoma (20 short-term and 20 long-term survival patients) using a leave-one-out cross-validation approach for performance evaluation. A combination of heterogeneity features and radiomic features produced an AUROC of 0.9 and an accuracy of 85% in discriminating long- and short-term survivors.

A Joint Detection and Recognition Approach to Lung Cancer Diagnosis From CT Images With Label Uncertainty

  • Chenyang, L.
  • Chan, S. C.
IEEE Access 2020 Journal Article, cited 0 times
Website
Automatic lung cancer diagnosis from computed tomography (CT) images requires the detection of nodule location as well as nodule malignancy prediction. This article proposes a joint lung nodule detection and classification network for simultaneous lung nodule detection, segmentation, and classification subject to possible label uncertainty in the training set. It operates in an end-to-end manner and provides detection and classification of nodules simultaneously, together with a segmentation of the detected nodules. Both the nodule detection and classification subnetworks of the proposed joint network adopt a 3-D encoder-decoder architecture for better exploration of the 3-D data. Moreover, the classification subnetwork utilizes the features extracted from the detection subnetwork together with multiscale nodule-specific features to boost the classification performance. The former serves as valuable prior information for optimizing the more complicated 3D classification network directly, better distinguishing suspicious nodules from other tissues compared with direct backpropagation from the decoder. Experimental results show that this co-training yields better performance on both tasks. The framework is validated on the LUNA16 and LIDC-IDRI datasets, and a pseudo-label approach is proposed for addressing the label uncertainty problem due to inconsistent annotations/labels. Experimental results show that the proposed nodule detector outperforms the state-of-the-art algorithms and yields performance comparable to state-of-the-art nodule classification algorithms when classification alone is considered. Since our joint detection/recognition approach can directly detect nodules and classify their malignancy instead of performing the tasks separately, our approach is more practical for automatic cancer and nodule detection.

Memory-Efficient Cascade 3D U-Net for Brain Tumor Segmentation

  • Cheng, Xinchao
  • Jiang, Zongkang
  • Sun, Qiule
  • Zhang, Jianxin
2020 Book Section, cited 17 times
Website
Segmentation is a routine and crucial procedure for the treatment of brain tumors. Deep learning based brain tumor segmentation methods have achieved promising performance in recent years. However, in pursuit of high segmentation accuracy, most of them require too much memory and computation resources. Motivated by a recently proposed partially reversible U-Net architecture that pays more attention to memory footprint, we present a novel Memory-Efficient Cascade 3D U-Net (MECU-Net) for brain tumor segmentation in this work, which can achieve comparable segmentation accuracy with less memory and computation consumption. More specifically, MECU-Net utilizes fewer down-sampling channels to reduce the use of memory and computation resources. To make up for the accuracy loss, MECU-Net employs a multi-scale feature fusion module to enhance the feature representation capability. Additionally, a light-weight cascade model, which partly resolves the loss of small-target segmentation accuracy caused by model compression, is further introduced into the segmentation network. Finally, edge loss and weighted Dice loss are combined to refine the brain tumor segmentation results. Experimental results on the BraTS 2019 validation set show that MECU-Net can achieve average Dice coefficients of 0.902, 0.824 and 0.777 on the whole tumor, tumor core and enhancing tumor, respectively.

Glioma Sub-region Segmentation on Multi-parameter MRI with Label Dropout

  • Cheng, Kun
  • Hu, Caihao
  • Yin, Pengyu
  • Su, Qianlan
  • Zhou, Guancheng
  • Wu, Xian
  • Wang, Xiaohui
  • Yang, Wei
2021 Book Section, cited 2 times
Website
Gliomas are the most common primary brain tumors; the accurate segmentation of their clinical sub-regions, including enhancing tumor (ET), tumor core (TC) and whole tumor (WT), has great clinical importance throughout diagnosis, treatment planning, delivery, and prognosis. Machine learning algorithms, particularly neural network based methods, have been successful in many medical image segmentation applications. In this paper, we trained a patch-based 3D U-Net model with a hybrid loss combining soft Dice loss, generalized Dice loss, and multi-class cross-entropy loss. We also proposed a label dropout process that randomly discards inner segment labels and their corresponding network output during training to overcome the heavy class imbalance issue. On the BraTS 2020 final test data, we achieved Dice scores of 0.823, 0.886 and 0.843 for ET, WT and TC, respectively.
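One way to realize such a label dropout process — illustrative only, not the authors' implementation — is to mask the voxel-wise loss for randomly selected inner labels (PyTorch; the label indices and drop probability are assumptions):

```python
import torch
import torch.nn.functional as F

def label_dropout_loss(logits, target, drop_prob=0.3, inner_labels=(1, 3)):
    """Cross-entropy that randomly ignores inner sub-region labels.

    logits: (N, C, D, H, W) network output; target: (N, D, H, W) long labels.
    With probability drop_prob, all voxels of an inner label (e.g. an inner
    tumor sub-region) are excluded from the loss for this batch, which
    counteracts the heavy class imbalance between sub-regions.
    """
    mask = torch.ones_like(target, dtype=torch.bool)
    for lab in inner_labels:
        if torch.rand(1).item() < drop_prob:
            mask &= target != lab          # drop this label's voxels
    loss = F.cross_entropy(logits, target, reduction="none")
    return loss[mask].mean()
```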

Prediction of Glioma Grade Using Intratumoral and Peritumoral Radiomic Features From Multiparametric MRI Images

  • Cheng, J.
  • Liu, J.
  • Yue, H.
  • Bai, H.
  • Pan, Y.
  • Wang, J.
2022 Journal Article, cited 26 times
Website
The accurate prediction of glioma grade before surgery is essential for treatment planning and prognosis. Since the gold standard for grading gliomas (i.e., biopsy) is both highly invasive and expensive, there is a need for a noninvasive and accurate method. In this study, we proposed a novel radiomics-based pipeline that incorporates intratumoral and peritumoral features extracted from preoperative mpMRI scans to accurately and noninvasively predict glioma grade. To address the unclear peritumoral boundary, we designed an algorithm to capture the peritumoral region with a specified radius. The mpMRI scans of 285 patients derived from a multi-institutional study were adopted. A total of 2153 radiomic features were calculated separately from intratumoral volumes (ITVs) and peritumoral volumes (PTVs) on mpMRI scans, and then refined using LASSO and mRMR feature ranking methods. The top-ranking radiomic features were entered into the classifiers to build radiomic signatures for predicting glioma grade. The prediction performance was evaluated with five-fold cross-validation on a patient-level split. The radiomic signatures utilizing the features of the ITV and PTV both show high accuracy in predicting glioma grade, with AUCs reaching 0.968. By incorporating the features of the ITV and PTV, the AUC of the IPTV radiomic signature can be increased to 0.975, which outperforms the state-of-the-art methods. Additionally, our proposed method was further demonstrated to have strong generalization performance on an external validation dataset with 65 patients. The source code of our implementation is made publicly available at https://github.com/chengjianhong/glioma_grading.git.
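Capturing a peritumoral region of a specified radius, as described above, can be sketched with morphological dilation (an assumed simplification, not the authors' algorithm; a box structuring element approximates a sphere, and all names are hypothetical):

```python
import numpy as np
from scipy import ndimage

def peritumoral_volume(tumor_mask, radius_mm, spacing_mm):
    """Capture a peritumoral shell of a given physical radius around an ITV.

    tumor_mask: boolean 3D array (the intratumoral volume).
    spacing_mm: voxel size (z, y, x) of the MRI volume in millimeters.
    Returns the dilated shell (PTV) excluding the tumor itself.
    """
    radius_vox = np.maximum(np.round(radius_mm / np.asarray(spacing_mm)), 1).astype(int)
    struct = np.ones(2 * radius_vox + 1, dtype=bool)  # box approximating a sphere
    dilated = ndimage.binary_dilation(tumor_mask, structure=struct)
    return dilated & ~tumor_mask
```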

Multi-view Local Co-occurrence and Global Consistency Learning Improve Mammogram Classification Generalisation

  • Chen, Yuanhong
  • Wang, Hu
  • Wang, Chong
  • Tian, Yu
  • Liu, Fengbei
  • Liu, Yuyuan
  • Elliott, Michael
  • McCarthy, Davis J.
  • Frazer, Helen
  • Carneiro, Gustavo
2022 Book Section, cited 0 times
When analysing screening mammograms, radiologists can naturally process information across two ipsilateral views of each breast, namely the cranio-caudal (CC) and mediolateral-oblique (MLO) views. These multiple related images provide complementary diagnostic information and can improve the radiologist’s classification accuracy. Unfortunately, most existing deep learning systems, trained with globally-labelled images, lack the ability to jointly analyse and integrate global and local information from these multiple views. By ignoring the potentially valuable information present in multiple images of a screening episode, one limits the potential accuracy of these systems. Here, we propose a new multi-view global-local analysis method that mimics the radiologist’s reading procedure, based on a global consistency learning and local co-occurrence learning of ipsilateral views in mammograms. Extensive experiments show that our model outperforms competing methods, in terms of classification accuracy and generalisation, on a large-scale private dataset and two publicly available datasets, where models are exclusively trained and tested with global labels.

Integrating Radiomics with Genomics for Non-Small Cell Lung Cancer Survival Analysis

  • Chen, W.
  • Qiao, X.
  • Yin, S.
  • Zhang, X.
  • Xu, X.
2022 Journal Article, cited 0 times
Website
PURPOSE: The objectives of our study were to assess the association of radiological imaging and gene expression with patient outcomes in non-small cell lung cancer (NSCLC) and construct a nomogram by combining selected radiomic, genomic, and clinical risk factors to improve the performance of the risk model. METHODS: A total of 116 cases of NSCLC with CT images, gene expression, and clinical factors were studied, wherein 87 patients were used as the training cohort, and 29 patients were used as an independent testing cohort. Handcrafted radiomic features and deep-learning genomic features were extracted and selected from CT images and gene expression analysis, respectively. Two risk scores were calculated through Cox regression models for each patient based on radiomic features and genomic features to predict overall survival (OS). Finally, a fusion survival model was constructed by incorporating these two risk scores and clinical factors. RESULTS: The fusion model that combined CT images, gene expression data, and clinical factors effectively stratified patients into low- and high-risk groups. The C-indexes for OS prediction were 0.85 and 0.736 in the training and testing cohorts, respectively, which was better than that based on unimodal data. CONCLUSIONS: Combining radiomics and genomics can effectively improve OS prediction for NSCLC patients.

Deep learning-based pathology signature could reveal lymph node status and act as a novel prognostic marker across multiple cancer types

  • Chen, S.
  • Xiang, J.
  • Wang, X.
  • Zhang, J.
  • Yang, S.
  • Yang, W.
  • Zheng, J.
  • Han, X.
Br J Cancer 2023 Journal Article, cited 0 times
Website
BACKGROUND: Identifying lymph node metastasis (LNM) relies mainly on indirect radiology. Current studies have omitted quantified associations with traits beyond cancer types and thus fail to provide generalisation performance across various tumour types. METHODS: 4400 whole slide images across 11 cancer types were collected for training, cross-verification, and external validation of the pan-cancer lymph node metastasis (PC-LNM) model. We proposed an attention-based weakly supervised neural network built on self-supervised cancer-invariant features for the prediction task. RESULTS: PC-LNM achieved a test area under the curve (AUC) of 0.732 (95% confidence interval: 0.717-0.746, P < 0.0001) in fivefold cross-validation across multiple cancer types, and also demonstrated good generalisation in the external validation cohort with an AUC of 0.699 (95% confidence interval: 0.658-0.737, P < 0.0001). The interpretability results derived from PC-LNM revealed that the regions with the highest attention scores identified by the model generally correspond to tumours with poorly differentiated morphologies. PC-LNM achieved superior performance over previously reported methods and could also act as an independent prognostic factor for patients across multiple tumour types. DISCUSSION: We presented an automated pan-cancer model for predicting LNM status from primary tumour histology, which could act as a novel prognostic marker across multiple cancer types.

Machine learning-based pathomics signature could act as a novel prognostic marker for patients with clear cell renal cell carcinoma

  • Chen, S.
  • Jiang, L.
  • Gao, F.
  • Zhang, E.
  • Wang, T.
  • Zhang, N.
  • Wang, X.
  • Zheng, J.
Br J Cancer 2022 Journal Article, cited 0 times
Website
BACKGROUND: Traditional histopathology performed by pathologists with the naked eye is insufficient for accurate survival prediction of clear cell renal cell carcinoma (ccRCC). METHODS: A total of 483 whole slide images (WSIs) from three patient cohorts were retrospectively analyzed. We applied machine learning algorithms to identify optimal digital pathological features and constructed a machine learning-based pathomics signature (MLPS) for ccRCC patients. The prognostic performance of the model was also verified in two independent validation cohorts. RESULTS: MLPS could significantly distinguish ccRCC patients with high survival risk, with hazard ratios of 15.05, 4.49, and 1.65 in the three independent cohorts, respectively. Cox regression analysis revealed that the MLPS could act as an independent prognostic factor for ccRCC patients. An integrated nomogram based on the MLPS, tumour stage system, and tumour grade system improved the current survival prediction accuracy for ccRCC patients, with area under the curve values of 89.5%, 90.0%, 88.5%, and 85.9% for 1-, 3-, 5- and 10-year disease-free survival prediction. DISCUSSION: The machine learning-based pathomics signature could act as a novel prognostic marker for patients with ccRCC. Nevertheless, prospective studies with multicentric patient cohorts are still needed for further verification.

Deep learning-based multimodel prediction for disease-free survival status of patients with clear cell renal cell carcinoma after surgery: a multicenter cohort study

  • Chen, S.
  • Gao, F.
  • Guo, T.
  • Jiang, L.
  • Zhang, N.
  • Wang, X.
  • Zheng, J.
2024 Journal Article, cited 0 times
Website
BACKGROUND: Although separate analysis of individual factors can somewhat improve the prognostic performance, integration of multimodal information into a single signature is necessary to stratify patients with clear cell renal cell carcinoma (ccRCC) for adjuvant therapy after surgery. METHODS: A total of 414 patients with whole slide images, computed tomography images, and clinical data from three patient cohorts were retrospectively analyzed. The authors applied deep learning and machine learning algorithms to construct three single-modality prediction models for disease-free survival of ccRCC based on whole slide images, cell segmentation, and computed tomography images, respectively. A multimodel prediction signature (MMPS) for disease-free survival was further developed by combining the three single-modality prediction models and the tumor stage/grade system. Prognostic performance of the prognostic model was also verified in two independent validation cohorts. RESULTS: Single-modality prediction models performed well in predicting the disease-free survival status of ccRCC. The MMPS achieved higher area under the curve values of 0.742, 0.917, and 0.900 in three independent patient cohorts, respectively. MMPS could distinguish patients with worse disease-free survival, with hazard ratios of 12.90 (95% CI: 2.443-68.120, P<0.0001), 11.10 (95% CI: 5.467-22.520, P<0.0001), and 8.27 (95% CI: 1.482-46.130, P<0.0001) in the three different patient cohorts. In addition, MMPS outperformed the single-modality prediction models and current clinical prognostic factors, and could also provide complements to current risk stratification for adjuvant therapy of ccRCC. CONCLUSION: Our novel multimodel prediction analysis for disease-free survival exhibited significant improvements in prognostic prediction for patients with ccRCC. After further validation in multiple centers and regions, the multimodal system could be a potential practical tool for clinicians in the treatment of ccRCC patients.

Towards a general-purpose foundation model for computational pathology

  • Chen, R. J.
  • Ding, T.
  • Lu, M. Y.
  • Williamson, D. F. K.
  • Jaume, G.
  • Song, A. H.
  • Chen, B.
  • Zhang, A.
  • Shao, D.
  • Shaban, M.
  • Williams, M.
  • Oldenburg, L.
  • Weishaupt, L. L.
  • Wang, J. J.
  • Vaidya, A.
  • Le, L. P.
  • Gerber, G.
  • Sahai, S.
  • Williams, W.
  • Mahmood, F.
Nat Med 2024 Journal Article, cited 0 times
Website
Quantitative evaluation of tissue images is crucial for computational pathology (CPath) tasks, requiring the objective characterization of histopathological entities from whole-slide images (WSIs). The high resolution of WSIs and the variability of morphological features present significant challenges, complicating the large-scale annotation of data for high-performance applications. To address this challenge, current efforts have proposed the use of pretrained image encoders through transfer learning from natural image datasets or self-supervised learning on publicly available histopathology datasets, but have not been extensively developed and evaluated across diverse tissue types at scale. We introduce UNI, a general-purpose self-supervised model for pathology, pretrained using more than 100 million images from over 100,000 diagnostic H&E-stained WSIs (>77 TB of data) across 20 major tissue types. The model was evaluated on 34 representative CPath tasks of varying diagnostic difficulty. In addition to outperforming previous state-of-the-art models, we demonstrate new modeling capabilities in CPath such as resolution-agnostic tissue classification, slide classification using few-shot class prototypes, and disease subtyping generalization in classifying up to 108 cancer types in the OncoTree classification system. UNI advances unsupervised representation learning at scale in CPath in terms of both pretraining data and downstream evaluation, enabling data-efficient artificial intelligence models that can generalize and transfer to a wide range of diagnostically challenging tasks and clinical workflows in anatomic pathology.

Pancreatic Cancer Detection on CT Scans with Deep Learning: A Nationwide Population-based Study

  • Chen, P. T.
  • Wu, T.
  • Wang, P.
  • Chang, D.
  • Liu, K. L.
  • Wu, M. S.
  • Roth, H. R.
  • Lee, P. C.
  • Liao, W. C.
  • Wang, W.
RadiologyRadiology 2022 Journal Article, cited 5 times
Website
Background Approximately 40% of pancreatic tumors smaller than 2 cm are missed at abdominal CT. Purpose To develop and to validate a deep learning (DL)-based tool able to detect pancreatic cancer at CT. Materials and Methods Retrospectively collected contrast-enhanced CT studies in patients diagnosed with pancreatic cancer between January 2006 and July 2018 were compared with CT studies of individuals with a normal pancreas (control group) obtained between January 2004 and December 2019. An end-to-end tool comprising a segmentation convolutional neural network (CNN) and a classifier ensembling five CNNs was developed and validated in the internal test set and a nationwide real-world validation set. The sensitivities of the computer-aided detection (CAD) tool and radiologist interpretation were compared using the McNemar test. Results A total of 546 patients with pancreatic cancer (mean age, 65 years ± 12 [SD], 297 men) and 733 control subjects were randomly divided into training, validation, and test sets. In the internal test set, the DL tool achieved 89.9% (98 of 109; 95% CI: 82.7, 94.9) sensitivity and 95.9% (141 of 147; 95% CI: 91.3, 98.5) specificity (area under the receiver operating characteristic curve [AUC], 0.96; 95% CI: 0.94, 0.99), without a significant difference (P = .11) in sensitivity compared with the original radiologist report (96.1% [98 of 102]; 95% CI: 90.3, 98.9). In a test set of 1473 real-world CT studies (669 malignant, 804 control) from institutions throughout Taiwan, the DL tool distinguished between CT malignant and control studies with 89.7% (600 of 669; 95% CI: 87.1, 91.9) sensitivity and 92.8% specificity (746 of 804; 95% CI: 90.8, 94.5) (AUC, 0.95; 95% CI: 0.94, 0.96), with 74.7% (68 of 91; 95% CI: 64.5, 83.3) sensitivity for malignancies smaller than 2 cm. Conclusion The deep learning-based tool enabled accurate detection of pancreatic cancer on CT scans, with reasonable sensitivity for tumors smaller than 2 cm. (c) RSNA, 2022 Online supplemental material is available for this article. See also the editorial by Aisen and Rodrigues in this issue.

Radiomic Features at CT Can Distinguish Pancreatic Cancer from Noncancerous Pancreas

  • Chen, Po-Ting
  • Chang, Dawei
  • Yen, Huihsuan
  • Liu, Kao-Lang
  • Huang, Su-Yun
  • Roth, Holger
  • Wu, Ming-Shiang
  • Liao, Wei-Chih
  • Wang, Weichung
Radiol Imaging Cancer 2021 Journal Article, cited 0 times
Website
Purpose To identify distinguishing CT radiomic features of pancreatic ductal adenocarcinoma (PDAC) and to investigate whether radiomic analysis with machine learning can distinguish between patients who have PDAC and those who do not. Materials and Methods This retrospective study included contrast material-enhanced CT images in 436 patients with PDAC and 479 healthy controls from 2012 to 2018 from Taiwan that were randomly divided for training and testing. Another 100 patients with PDAC (enriched for small PDACs) and 100 controls from Taiwan were identified for testing (from 2004 to 2011). An additional 182 patients with PDAC and 82 healthy controls from the United States were randomly divided for training and testing. Images were processed into patches. An XGBoost (https://xgboost.ai/) model was trained to classify patches as cancerous or noncancerous. Patients were classified as either having or not having PDAC on the basis of the proportion of patches classified as cancerous. For both patch-based and patient-based classification, the models were characterized as either a local model (trained on Taiwanese data only) or a generalized model (trained on both Taiwanese and U.S. data). Sensitivity, specificity, and accuracy were calculated for patch- and patient-based analysis for the models. Results The median tumor size was 2.8 cm (interquartile range, 2.0-4.0 cm) in the 536 Taiwanese patients with PDAC (mean age, 65 years ± 12 [standard deviation]; 289 men). Compared with normal pancreas, PDACs had lower values for radiomic features reflecting intensity and higher values for radiomic features reflecting heterogeneity. The performance metrics for the developed generalized model when tested on the Taiwanese and U.S. test data sets, respectively, were as follows: sensitivity, 94.7% (177 of 187) and 80.6% (29 of 36); specificity, 95.4% (187 of 196) and 100% (16 of 16); accuracy, 95.0% (364 of 383) and 86.5% (45 of 52); and area under the curve, 0.98 and 0.91. Conclusion Radiomic analysis with machine learning enabled accurate detection of PDAC at CT and could identify patients with PDAC. Keywords: CT, Computer Aided Diagnosis (CAD), Pancreas, Computer Applications-Detection/Diagnosis Supplemental material is available for this article. (c) RSNA, 2021.
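The patch-to-patient scheme described above — an XGBoost patch classifier whose positive-patch proportion drives the patient-level call — can be sketched as follows (synthetic stand-in data; the 0.5 aggregation threshold is an assumption, not the study's tuned value):

```python
import numpy as np
from xgboost import XGBClassifier

# 1) Train a patch-level classifier on radiomic features per patch.
rng = np.random.default_rng(0)
X_train = rng.random((1000, 50))            # features for 1000 patches
y_train = rng.integers(0, 2, 1000)          # patch labels (cancerous or not)

clf = XGBClassifier(n_estimators=200, eval_metric="logloss")
clf.fit(X_train, y_train)

# 2) Aggregate: call the patient positive if the fraction of patches
#    predicted cancerous exceeds a validation-tuned threshold.
patient_patches = rng.random((30, 50))      # all patches from one patient
cancerous_fraction = clf.predict(patient_patches).mean()
patient_positive = cancerous_fraction > 0.5  # assumed threshold
```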

Aggregating Multi-scale Prediction Based on 3D U-Net in Brain Tumor Segmentation

  • Chen, Minglin
  • Wu, Yaozu
  • Wu, Jianhuang
2020 Book Section, cited 0 times
Website
Magnetic resonance imaging (MRI) is the dominant modality used in the initial evaluation of patients with primary brain tumors due to its superior image resolution and high safety profile. Automated segmentation of brain tumors from MRI is critical in the determination of response to therapy. In this paper, we propose a novel method that aggregates multi-scale predictions from a 3D U-Net to segment enhancing tumor (ET), whole tumor (WT) and tumor core (TC) from multimodal MRI. The multi-scale predictions are derived from the decoder part of the 3D U-Net at different resolutions. The final prediction takes, at each pixel, the minimum value across the upsampled multi-scale predictions. Aggregating multi-scale predictions adds constraints to the network, which is beneficial when data are limited. Additionally, we employ a model ensembling strategy to further improve the performance of the proposed network. Finally, we achieve Dice scores of 0.7745, 0.8640 and 0.7914, and Hausdorff distances (95th percentile) of 4.2365, 6.9381 and 6.6026 for ET, WT and TC respectively on the test set in BraTS 2019.
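The minimum-aggregation step described above fits in a few lines of PyTorch; a minimal sketch, assuming sigmoid probability maps taken from decoder stages at different resolutions:

```python
import torch
import torch.nn.functional as F

def aggregate_multiscale(preds, out_size):
    """Voxel-wise minimum over upsampled multi-scale predictions.

    preds: list of (N, C, D, H, W) probability maps from different decoder
    stages; out_size: (D, H, W) of the full-resolution output.
    """
    upsampled = [
        F.interpolate(p, size=out_size, mode="trilinear", align_corners=False)
        for p in preds
    ]
    return torch.min(torch.stack(upsampled, dim=0), dim=0).values
```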

Machine vision-assisted identification of the lung adenocarcinoma category and high-risk tumor area based on CT images

  • Chen, L.
  • Qi, H.
  • Lu, D.
  • Zhai, J.
  • Cai, K.
  • Wang, L.
  • Liang, G.
  • Zhang, Z.
Patterns (N Y) 2022 Journal Article, cited 1 times
Website
Computed tomography (CT) is a widely used medical imaging technique. It is important to determine the relationship between CT images and pathological examination results of lung adenocarcinoma to better support its diagnosis. In this study, a bilateral-branch network with a knowledge distillation procedure (KDBBN) was developed for the auxiliary diagnosis of lung adenocarcinoma. KDBBN can automatically identify adenocarcinoma categories and detect the lesion area that most likely contributes to the identification of specific types of adenocarcinoma based on lung CT images. In addition, a knowledge distillation process was established for the proposed framework to ensure that the developed models can be applied to different datasets. The results of our comprehensive computational study confirmed that our method provides a reliable basis for adenocarcinoma diagnosis supplementary to the pathological examination. Meanwhile, the high-risk area labeled by KDBBN highly coincides with the related lesion area labeled by doctors in clinical diagnosis.

Are all shortcuts in encoder–decoder networks beneficial for CT denoising?

  • Chen, Junhua
  • Zhang, Chong
  • Wee, Leonard
  • Dekker, Andre
  • Bermejo, Inigo
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2022 Journal Article, cited 0 times
Website
Denoising of CT scans has attracted the attention of many researchers in the medical image analysis domain. Encoder–decoder networks are deep learning neural networks that have become common for image denoising in recent years. Shortcuts between the encoder and decoder layers are crucial for some image-to-image translation tasks. However, are all shortcuts necessary for CT denoising? To answer this question, we set up two encoder–decoder networks representing two popular architectures and then progressively removed shortcuts from the networks from shallow to deep (forward removal) and from deep to shallow (backward removal). We used two unrelated datasets with different noise levels to test the denoising performance of these networks using two metrics, namely root mean square error and content loss. The results show that while more than half of the shortcuts are still indispensable for CT scan denoising, removing certain shortcuts leads to improved denoising performance. Both shallow and deep shortcuts may be removed, thus retaining sparse connections, especially when the noise level is high. Backward removal appears to perform better than forward removal, which suggests that deep shortcuts should be removed first. Finally, we propose a hypothesis to explain this phenomenon and validate it in the experiments.
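To make the removal experiments concrete, here is a toy two-level encoder-decoder whose shortcuts can be switched off individually (a minimal PyTorch sketch; it is not either of the architectures studied in the paper):

```python
import torch
import torch.nn as nn

class TinyEDN(nn.Module):
    """Toy encoder-decoder with individually removable shortcuts,
    mimicking the progressive-removal experiments described above."""

    def __init__(self, use_shortcuts=(True, True)):
        super().__init__()
        self.use_shortcuts = use_shortcuts
        self.enc1 = nn.Conv2d(1, 16, 3, padding=1)
        self.enc2 = nn.Conv2d(16, 32, 3, stride=2, padding=1)
        self.dec2 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = nn.Conv2d(16, 1, 3, padding=1)

    def forward(self, x):
        e1 = torch.relu(self.enc1(x))
        e2 = torch.relu(self.enc2(e1))
        d2 = torch.relu(self.dec2(e2))
        if self.use_shortcuts[0]:   # shallow shortcut: enc1 -> decoder
            d2 = d2 + e1
        out = self.dec1(d2)
        if self.use_shortcuts[1]:   # residual shortcut: input -> output
            out = out + x
        return out

# E.g. "forward removal" of the shallow shortcut:
# model = TinyEDN(use_shortcuts=(False, True))
```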

Using 3D deep features from CT scans for cancer prognosis based on a video classification model: A multi-dataset feasibility study

  • Chen, J.
  • Wee, L.
  • Dekker, A.
  • Bermejo, I.
Med Phys 2023 Journal Article, cited 0 times
Website
BACKGROUND: Cancer prognosis before and after treatment is key for patient management and decision making. Handcrafted imaging biomarkers - radiomics - have shown potential in predicting prognosis. PURPOSE: Given the recent progress in deep learning, it is timely and relevant to pose the question: could deep learning based 3D imaging features be used as imaging biomarkers and outperform radiomics? METHODS: Effectiveness, reproducibility in test/retest and across modalities, and correlation of deep features with clinical features such as tumor volume and TNM staging were tested in this study. Radiomics was introduced as the reference image biomarker. For deep feature extraction, we transformed the CT scans into videos, and we adopted the pre-trained Inflated 3D ConvNet (I3D) video classification network as the architecture. We used four datasets - LUNG 1 (n = 422), LUNG 4 (n = 106), OPC (n = 605), and H&N 1 (n = 89) - with 1270 samples from different centers and cancer types - lung and head and neck cancer - to test the predictiveness of deep features, and two additional datasets to assess their reproducibility. RESULTS: The top 100 deep features selected by Support Vector Machine-Recursive Feature Elimination (SVM-RFE) achieved a concordance index (CI) of 0.67 in survival prediction in LUNG 1, 0.87 in LUNG 4, 0.76 in OPC, and 0.87 in H&N 1, while the top 100 radiomic features selected by SVM-RFE achieved CIs of 0.64, 0.77, 0.73, and 0.74, respectively, all statistically significant differences (p < 0.01, Wilcoxon's test). Most selected deep features are not correlated with tumor volume and TNM staging. However, full radiomic features show higher reproducibility than full deep features in a test/retest setting (0.89 vs. 0.62, concordance correlation coefficient). CONCLUSION: The results show that deep features can outperform radiomics while providing different views of tumor prognosis compared to tumor volume and TNM staging. However, deep features suffer from lower reproducibility than radiomic features and lack the interpretability of the latter.
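The SVM-RFE selection step reported above can be sketched with scikit-learn (synthetic stand-in data; in the study the inputs would be deep or radiomic feature matrices, and the elimination step size is an assumption):

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Synthetic stand-in for a (patients x features) matrix and a binary target.
rng = np.random.default_rng(0)
X = rng.random((200, 400))           # 400 candidate features
y = rng.integers(0, 2, 200)          # e.g. dichotomized survival

# Recursively eliminate features using a linear SVM's weights,
# keeping the top 100 as in the study.
selector = RFE(SVC(kernel="linear"), n_features_to_select=100, step=10)
selector.fit(X, y)
top100 = X[:, selector.support_]     # features fed to the survival model
```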

Empathy structure in multi-agent system with the mechanism of self-other separation: Design and analysis from a random walk view

  • Chen, Jize
  • Liu, Bo
  • Qu, Zhenshen
  • Wang, Changhong
2023 Journal Article, cited 0 times
Website
In a socialized multi-agent system, the preferences of individuals will inevitably be influenced by others. This paper introduces an extended empathy structure to characterize the coupling process of preferences under specific relations and to cover scenarios including human society, human–machine systems, and even abiotic engineering applications. In this model, empathy is abstracted as a stochastic experience process in the form of a Markov chain, and the coupled empathy utility is defined as the expectation of obtaining preferences under the corresponding probability distribution. Self-other separation is the core concept with which our structure exhibits social attributes, including attraction of implicit states, inhibition of excessive empathy, attention to empathetic targets, and anisotropy of the utility distribution. Compared with previous empirical models, our model performs better on the data set and can provide a new perspective for designing and analyzing the cognitive layer of human–machine networks, as well as information fusion and semi-supervised clustering methods in engineering.

Generating anthropomorphic phantoms using fully unsupervised deformable image registration with convolutional neural networks

  • Chen, Junyu
  • Li, Ye
  • Du, Yong
  • Frey, Eric C
Med Phys 2020 Journal Article, cited 0 times
Website
PURPOSE: Computerized phantoms have been widely used in nuclear medicine imaging for imaging system optimization and validation. Although existing computerized phantoms can model anatomical variations through organ and phantom scaling, they do not provide a way to fully reproduce the anatomical variations and details seen in humans. In this work, we present a novel registration-based method for creating highly anatomically detailed computerized phantoms. We experimentally show substantially improved image similarity of the generated phantom to a patient image. METHODS: We propose a deep-learning-based unsupervised registration method to generate a highly anatomically detailed computerized phantom by warping an XCAT phantom to a patient computed tomography (CT) scan. We implemented and evaluated the proposed method using the NURBS-based XCAT phantom and a publicly available low-dose CT dataset from TCIA. A rigorous trade-off analysis between image similarity and deformation regularization was conducted to select the loss function and regularization term for the proposed method. A novel SSIM-based unsupervised objective function was proposed. Finally, ablation studies were conducted to evaluate the performance of the proposed method (using the optimal regularization and loss function) and the current state-of-the-art unsupervised registration methods. RESULTS: The proposed method outperformed state-of-the-art registration methods, such as SyN and VoxelMorph, by more than 8% as measured by SSIM and by less than 30% as measured by MSE. The phantom generated by the proposed method was highly detailed and almost identical in appearance to a patient image. CONCLUSIONS: A deep-learning-based unsupervised registration method was developed to create anthropomorphic phantoms with anatomical labels that can be used as the basis for modeling organ properties. Experimental results demonstrate the effectiveness of the proposed method. The resulting anthropomorphic phantom is highly realistic. Combined with realistic simulations of the image formation process, the generated phantoms could serve in many applications of medical imaging research.

Generative models improve radiomics: reproducibility and performance in low dose CTs

  • Chen, Junhua
2023 Thesis, cited 0 times
Website
English summary: With the increasing demand for low-dose CT in clinical practice, low-dose CT radiomics has shown its potential to provide clinical decision support in oncology. As a trade-off for the low radiation exposure in low-dose CT imaging, higher noise is present in these images. Noise in low-dose CT decreases the texture information of the image, as well as the reproducibility and performance of CT radiomics. One potential solution worth exploring for improving the reproducibility and performance of radiomics based on low-dose CT is denoising the images before extracting radiomic features. As the state-of-the-art method for low-dose CT denoising, generative models have been widely used in denoising practice. This thesis investigated the possibility of using generative models to enhance the image quality of low-dose CTs and improve radiomics reproducibility and performance. In the first research chapter (Chapter 2) of this thesis, we investigate the benefits of shortcuts in encoder-decoder networks for CT denoising. An encoder-decoder network (EDN) is an important architecture for the generator in generative models, and this chapter provides guidelines for designing them. Results showed that over half of the shortcuts are necessary for CT denoising; however, the network should keep sparse connections between the encoder and decoder, and deeper shortcuts have a higher priority for removal in favor of keeping connections sparse. Paired training datasets are needed for training most generative models, but collecting such datasets is difficult and time-consuming. To investigate the effect of generative models in improving low-dose CT radiomics reproducibility (Chapter 3), two generative models - a Conditional Generative Adversarial Network (CGAN) and an EDN - were trained on paired simulated low-high dose CT images. The trained models were applied to simulated noisy CT images and real low-dose CT images. Results showed that denoising using EDNs and CGANs can improve the reproducibility of radiomic features from noisy CTs (including simulated data and real low-dose CTs). To test the improvement of enhanced low-dose CT radiomics in real applications more comprehensively, low-dose CT radiomics was applied to a new application (Chapter 4): developing a lung cancer classification model at the subject (patient) level from multiple examined nodules, without the need for specific expert findings reported at the level of each individual nodule. Lung cancer classification was treated as a multiple instance learning (MIL) problem; CT radiomics were used as biomarkers to extract information from each nodule, and deep attention-based MIL was used as the classification algorithm at the patient level. Results showed that the proposed method achieves the best performance in lung cancer classification compared with other MIL methods and that the introduced attention mechanism increases the interpretability of the results. To comprehensively investigate the improvements of generative models for CT radiomics performance in real applications, pre-trained generative models were applied to multiple real low-dose CT datasets without fine-tuning (Chapter 5). The improved radiomic features were applied to multiple radiomics-related applications - pre-treatment tumor survival prediction and deep attention-based MIL lung cancer diagnosis. The results showed that generative models can improve low-dose CT radiomics performance.
To investigate the possibility of using unpaired real low-high dose CT images to establish a denoiser, and of using the trained denoiser to enhance radiomics reproducibility and performance, a Cycle GAN was adopted as the testing model (Chapter 6). The Cycle GAN was trained on paired simulated datasets (for a comparison study with EDN and CGAN) and on unpaired real datasets. The trained models were applied to simulated noisy CT images and real low-dose CT images to test the improvement of radiomics reproducibility and performance. The results showed that Cycle GANs trained on both simulated and real data can improve radiomics reproducibility and performance in low-dose CT and achieve results similar to those of CGANs and EDNs. Finally, the discussion section of this thesis (Chapter 7) summarized the barriers that prevent generative models from being applied to real low-dose CT radiomics and proposed possible solutions to these barriers. Moreover, the discussion mentioned other possible methods to improve low-dose CT radiomics performance.

Low-dose CT via convolutional neural network

  • Chen, Hu
  • Zhang, Yi
  • Zhang, Weihua
  • Liao, Peixi
  • Li, Ke
  • Zhou, Jiliu
  • Wang, Ge
Biomedical Optics Express 2017 Journal Article, cited 342 times
Website
In order to reduce the potential radiation risk, low-dose CT has attracted increasing attention. However, simply lowering the radiation dose will significantly degrade the image quality. In this paper, we propose a new noise reduction method for low-dose CT via deep learning without accessing original projection data. A deep convolutional neural network is used to map low-dose CT images to their corresponding normal-dose counterparts in a patch-by-patch fashion. Qualitative results demonstrate a great potential of the proposed method for artifact reduction and structure preservation. In terms of the quantitative metrics, the proposed method has shown substantial improvements in PSNR, RMSE and SSIM over the competing state-of-the-art methods. Furthermore, the speed of our method is one order of magnitude faster than the iterative reconstruction and patch-based image denoising methods.
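A patch-by-patch low-dose-to-normal-dose mapping of this kind can be sketched as a small convolutional network (a minimal PyTorch illustration in the spirit of the abstract; the layer sizes are assumptions, not the paper's exact configuration):

```python
import torch
import torch.nn as nn

class PatchDenoiser(nn.Module):
    """Minimal three-layer CNN mapping a low-dose CT patch to an estimate
    of its normal-dose counterpart."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),
        )

    def forward(self, low_dose_patch):
        return self.net(low_dose_patch)

# Training would minimize MSE between predicted and normal-dose patches:
# loss = nn.functional.mse_loss(model(low_patch), normal_patch)
```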

Simple and Fast Convolutional Neural Network Applied to Median Cross Sections for Predicting the Presence of MGMT Promoter Methylation in FLAIR MRI Scans

  • Chen, Daniel Tianming
  • Chen, Allen Tianle
  • Wang, Haiyan
2022 Book Section, cited 0 times
In this paper we present a small and fast convolutional neural network (CNN) used to predict the presence of MGMT promoter methylation in magnetic resonance imaging (MRI) scans. Our data set is "The RSNA-ASNR-MICCAI BraTS 2021 Benchmark on Brain Tumor Segmentation and Radiogenomic Classification" by U. Baid, et al. We use the median ("middle-most") cross section of a FLAIR scan as the input to the neural net for training; this cross section presents the largest, or nearly the largest, surface area of any cross section. We are thus able to reduce the computational complexity and time of the training step while preserving the model's strong performance on unseen data.
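A minimal sketch of the slice-selection step described above: take the middle axial slice of a 3D FLAIR volume as the 2D network input. The axis convention and the area-based variant are assumptions for illustration.

```python
import numpy as np

def median_cross_section(volume: np.ndarray) -> np.ndarray:
    """Return the 'middle-most' axial slice of a (slices, H, W) volume."""
    return volume[volume.shape[0] // 2]

def largest_area_slice(volume: np.ndarray, thr: float = 0.0) -> np.ndarray:
    """Alternative: the slice with the largest foreground area, which the
    median slice is argued to approximate."""
    areas = (volume > thr).sum(axis=(1, 2))
    return volume[int(np.argmax(areas))]
```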

An Integrated Machine Learning Framework Identifies Prognostic Gene Pair Biomarkers Associated with Programmed Cell Death Modalities in Clear Cell Renal Cell Carcinoma

  • Chen, B.
  • Zhou, M.
  • Guo, L.
  • Huang, H.
  • Sun, X.
  • Peng, Z.
  • Wu, D.
  • Chen, W.
2024 Journal Article, cited 0 times
Website
BACKGROUND: Clear cell renal cell carcinoma (ccRCC) is a common and lethal urological malignancy for which there are no effective personalized therapeutic strategies. Programmed cell death (PCD) patterns have emerged as critical determinants of clinical prognosis and immunotherapy responses. However, the actual clinical relevance of PCD processes in ccRCC is still poorly understood. METHODS: We screened for PCD-related gene pairs through single-sample gene set enrichment analysis (ssGSEA), consensus cluster analysis, and univariate Cox regression analysis. A novel machine learning framework incorporating 12 algorithms in 113 unique combinations was used to develop the cell death-related gene pair score (CDRGPS). Additionally, a radiomic score (Rad_Score) derived from computed tomography (CT) image features was used to classify CDRGPS status as high or low. Finally, we verified the function of PRSS23 in ccRCC. RESULTS: The CDRGPS was developed through an integrated machine learning approach that leveraged 113 algorithm combinations. CDRGPS represents an independent prognostic biomarker for overall survival and demonstrated consistent performance between the training and external validation cohorts. Moreover, CDRGPS showed better prognostic accuracy than seven previously published cell death-related signatures. In addition, patients classified as high-risk by CDRGPS exhibited increased responsiveness to tyrosine kinase inhibitors (TKIs), mammalian target of rapamycin (mTOR) inhibitors, and immunotherapy. The Rad_Score demonstrated excellent discrimination for predicting high versus low CDRGPS status, with an area under the curve (AUC) value of 0.813 in The Cancer Imaging Archive (TCIA) database. PRSS23 was identified as a significant factor in the metastasis and immune response of ccRCC, which was validated by in vitro experiments. CONCLUSIONS: CDRGPS is a robust and non-invasive tool that has the potential to improve clinical outcomes and enable personalized medicine in ccRCC patients.

A Fast Semi-Automatic Segmentation Tool for Processing Brain Tumor Images

  • Chen, Andrew X
  • Rabadán, Raúl
2017 Book Section, cited 0 times
Website

MRI prostate cancer radiomics: Assessment of effectiveness and perspectives

  • Chatzoudis, Pavlos
2018 Thesis, cited 0 times
Website

Investigating the impact of the CT Hounsfield unit range on radiomic feature stability using dual energy CT data

  • Chatterjee, A.
  • Vallieres, M.
  • Forghani, R.
  • Seuntjens, J.
Phys Med 2021 Journal Article, cited 0 times
Website
PURPOSE: Radiomic texture calculation requires discretizing image intensities within the region-of-interest. FBN (fixed-bin-number), FBS (fixed-bin-size), and FBN and FBS with intensity equalization (FBNequal, FBSequal) are four discretization approaches. A crucial choice is the voxel intensity (Hounsfield units, or HU) binning range. We assessed the effect of this choice on radiomic features. METHODS: The dataset comprised 95 patients with head-and-neck squamous-cell carcinoma. Dual-energy CT data were reconstructed at 21 energies (40, 45, ... 140 keV). Each of 94 texture features was calculated with 64 extraction parameters. All features were calculated five times: original choice, left shift (-10/-20 HU), right shift (+10/+20 HU). For each feature, the Spearman correlation between the nominal value and the four variants was calculated to determine feature stability. This was done separately for six texture feature types (GLCM, GLRLM, GLSZM, GLDZM, NGTDM, and NGLDM). The analysis was repeated for the four binning algorithms. The effect of feature instability on predictive ability was studied with lymphadenopathy as the endpoint. RESULTS: The FBN and FBNequal algorithms showed good stability (correlation values consistently >0.9). For the FBS and FBSequal algorithms, while median values exceeded 0.9, the 95% lower bound decreased as a function of energy, with poor performance over the entire spectrum. FBNequal was the most stable algorithm, and FBS the least. CONCLUSIONS: We believe this is the first multi-energy systematic study of the impact of the CT HU range used during intensity discretization for radiomic feature extraction. Future analyses should account for this source of uncertainty when evaluating the robustness of their radiomic signatures.
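For readers who want to reproduce the discretization choices compared here, below is a minimal NumPy sketch of IBSI-style FBN and FBS binning. The bin number, bin size, and lower bound are illustrative assumptions; shifting the lower bound by +/-10 or +/-20 HU mimics the perturbation studied in the paper.

```python
import numpy as np

def discretize_fbn(roi: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Fixed bin number: n_bins equal-width bins spanning the ROI range."""
    lo, hi = roi.min(), roi.max()
    b = np.floor(n_bins * (roi - lo) / (hi - lo)) + 1
    return np.clip(b, 1, n_bins).astype(int)

def discretize_fbs(roi: np.ndarray, bin_size: float = 25.0,
                   min_hu: float = -150.0) -> np.ndarray:
    """Fixed bin size: bins of bin_size HU anchored at min_hu (assumes
    roi >= min_hu). Texture matrices are then built on these bin labels."""
    return (np.floor((roi - min_hu) / bin_size) + 1).astype(int)
```

Note that FBN bins move with the ROI range and are therefore unaffected by a global HU shift, which is consistent with the stability the study reports for the FBN-type algorithms.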

An Automatic Overall Survival Time Prediction System for Glioma Brain Tumor Patients Based on Volumetric and Shape Features

  • Chato, Lina
  • Kachroo, Pushkin
  • Latifi, Shahram
2021 Book Section, cited 5 times
Website
An automatic overall survival time prediction system for glioma brain tumor patients is proposed and developed based on volumetric, location, and shape features. The proposed automatic prediction system consists of three stages: segmentation of brain tumor sub-regions; feature extraction; and overall survival time prediction. A deep learning structure based on a modified 3-dimensional (3D) U-Net is proposed to develop an accurate segmentation model to identify and localize the three glioma brain tumor sub-regions: gadolinium (GD)-enhancing tumor, peritumoral edema, and necrotic and non-enhancing tumor core (NCR/NET). The best segmentation performance is achieved by the modified 3D U-Net based on an Accumulated Encoder (U-Net AE) with a Generalized Dice Loss (GDL) function trained with the ADAM optimization algorithm. This model achieves Average Dice-Similarity (ADS) scores of 0.8898, 0.8819, and 0.8524 for Whole Tumor (WT), Tumor Core (TC), and Enhancing Tumor (ET), respectively, on the training dataset of the Multimodal Brain Tumor Segmentation challenge (BraTS) 2020. Various combinations of volumetric (based on brain functionality regions), shape, and location features are extracted to train an overall survival time classification model using a Neural Network (NN). The model classifies the data into three classes: short-survivors, mid-survivors, and long-survivors. An information fusion strategy based on feature-level fusion and decision-level fusion is used to produce the best prediction model. The best performance is achieved by the ensemble model and the shape-features model, with an accuracy of 55.2% on the BraTS 2020 validation dataset. The ensemble model achieves a competitive accuracy (55.1%) on the BraTS 2020 test dataset.
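As a sketch of the loss named above, here is one common formulation of the Generalized Dice Loss (class weights inversely proportional to squared reference volumes); the exact variant used by the authors may differ.

```python
import torch

def generalized_dice_loss(pred, target, eps=1e-6):
    """pred: (B, C, ...) softmax probabilities; target: one-hot, same shape."""
    dims = tuple(range(2, pred.ndim))               # spatial dimensions
    w = 1.0 / (target.sum(dim=dims) ** 2 + eps)     # (B, C) class weights
    inter = (w * (pred * target).sum(dim=dims)).sum(dim=1)
    union = (w * (pred + target).sum(dim=dims)).sum(dim=1)
    return (1.0 - 2.0 * inter / (union + eps)).mean()
```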

A New General Maximum Intensity Projection Technology via the Hybrid of U-Net and Radial Basis Function Neural Network

  • Chao, Zhen
  • Xu, Wenting
2021 Journal Article, cited 0 times
Website

Joint denoising and interpolating network for low-dose cone-beam CT reconstruction under hybrid dose-reduction strategy

  • Chao, L.
  • Wang, Y.
  • Zhang, T.
  • Shan, W.
  • Zhang, H.
  • Wang, Z.
  • Li, Q.
Comput Biol Med 2023 Journal Article, cited 0 times
Website
Cone-beam computed tomography (CBCT) is generally reconstructed from hundreds of two-dimensional X-ray projections through the FDK algorithm, and the excessive ionizing radiation from the X-rays may impair patients' health. Two common dose-reduction strategies are to either lower the intensity of the X-rays, i.e., low-intensity CBCT, or reduce the number of projections, i.e., sparse-view CBCT. Existing efforts improve low-dose CBCT images only under a single dose-reduction strategy. In this paper, we argue that applying the two strategies simultaneously can reduce dose in a gentle manner and avoid the extreme degradation of the projection data seen under a single dose-reduction strategy, especially in ultra-low-dose situations. Therefore, we develop a Joint Denoising and Interpolating Network (JDINet) in the projection domain to improve CBCT quality with hybrid low-intensity and sparse-view projections. Specifically, JDINet mainly includes two important components, i.e., a denoising module and an interpolating module, to respectively suppress the noise caused by the low-intensity strategy and interpolate the missing projections caused by the sparse-view strategy. Because FDK actually utilizes the projection information after ramp filtering, we develop a filtered structural similarity constraint to help JDINet focus on the reconstruction-required information. Afterward, we employ a Postprocessing Network (PostNet) in the reconstruction domain to refine the CBCT images that are reconstructed with the denoised and interpolated projections. In general, a complete CBCT reconstruction framework is built with JDINet, FDK, and PostNet. Experiments demonstrate that our framework decreases RMSE by approximately 8%, 15%, and 17%, respectively, on the 1/8, 1/16, and 1/32 dose data, compared to the latest methods. In conclusion, our learning-based framework can be deeply embedded into CBCT systems to promote the development of CBCT. Source code is available at https://github.com/LianyingChao/FusionLowDoseCBCT.
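The "filtered" part of the filtered structural similarity constraint refers to the ramp filtering FDK applies to projections before backprojection. A minimal NumPy sketch of 1-D ramp filtering along the detector axis (ignoring the apodization windows practical implementations add):

```python
import numpy as np

def ramp_filter(projection: np.ndarray) -> np.ndarray:
    """Apply a |f| ramp filter along the last (detector) axis."""
    n = projection.shape[-1]
    ramp = np.abs(np.fft.fftfreq(n))                 # |frequency| weights
    spectrum = np.fft.fft(projection, axis=-1) * ramp
    return np.fft.ifft(spectrum, axis=-1).real
```

A similarity metric such as SSIM computed between ramp-filtered predicted and reference projections would then emphasize exactly the frequency content that survives into the reconstruction.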

Deep-Learning Convolutional Neural Networks Accurately Classify Genetic Mutations in Gliomas

  • Chang, P
  • Grinband, J
  • Weinberg, BD
  • Bardis, M
  • Khy, M
  • Cadena, G
  • Su, M-Y
  • Cha, S
  • Filippi, CG
  • Bota, D
American Journal of Neuroradiology 2018 Journal Article, cited 5 times
Website

Automatic assessment of glioma burden: A deep learning algorithm for fully automated volumetric and bi-dimensional measurement

  • Chang, Ken
  • Beers, Andrew L
  • Bai, Harrison X
  • Brown, James M
  • Ly, K Ina
  • Li, Xuejun
  • Senders, Joeky T
  • Kavouridis, Vasileios K
  • Boaro, Alessandro
  • Su, Chang
  • Bi, Wenya Linda
  • Rapalino, Otto
  • Liao, Weihua
  • Shen, Qin
  • Zhou, Hao
  • Xiao, Bo
  • Wang, Yinyan
  • Zhang, Paul J
  • Pinho, Marco C
  • Wen, Patrick Y
  • Batchelor, Tracy T
  • Boxerman, Jerrold L
  • Arnaout, Omar
  • Rosen, Bruce R
  • Gerstner, Elizabeth R
  • Yang, Li
  • Huang, Raymond Y
  • Kalpathy-Cramer, Jayashree
2019 Journal Article, cited 0 times
Website
BACKGROUND: Longitudinal measurement of glioma burden with MRI is the basis for treatment response assessment. In this study, we developed a deep learning algorithm that automatically segments abnormal FLAIR hyperintensity and contrast-enhancing tumor, quantitating tumor volumes as well as the product of maximum bi-dimensional diameters according to the Response Assessment in Neuro-Oncology (RANO) criteria (AutoRANO). METHODS: Two cohorts of patients were used for this study. One consisted of 843 pre-operative MRIs from 843 patients with low- or high-grade gliomas from four institutions, and the second consisted of 713 longitudinal, post-operative MRI visits from 54 patients with newly diagnosed glioblastomas (each with two pre-treatment "baseline" MRIs) from one institution. RESULTS: The automatically generated FLAIR hyperintensity volume, contrast-enhancing tumor volume, and AutoRANO were highly repeatable for the double-baseline visits, with intraclass correlation coefficients (ICC) of 0.986, 0.991, and 0.977, respectively, on the cohort of post-operative GBM patients. Furthermore, there was high agreement between manually and automatically measured tumor volumes, with ICC values of 0.915, 0.924, and 0.965 for pre-operative FLAIR hyperintensity, post-operative FLAIR hyperintensity, and post-operative contrast-enhancing tumor volumes, respectively. Lastly, the ICC for comparing manually and automatically derived longitudinal changes in tumor burden was 0.917, 0.966, and 0.850 for FLAIR hyperintensity volume, contrast-enhancing tumor volume, and RANO measures, respectively. CONCLUSIONS: Our automated algorithm demonstrates potential utility for evaluating tumor burden in complex post-treatment settings, although further validation in multi-center clinical trials will be needed prior to widespread implementation.
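For reference, a minimal NumPy sketch of a one-way random-effects ICC(1,1), one common form of the repeatability statistic reported above (the paper may use a different ICC variant):

```python
import numpy as np

def icc_1_1(x: np.ndarray) -> float:
    """x: (n_subjects, k_repeats), e.g. tumor volumes from the two
    'double-baseline' MRI visits of each patient."""
    n, k = x.shape
    subj_means = x.mean(axis=1)
    msb = k * ((subj_means - x.mean()) ** 2).sum() / (n - 1)      # between-subject MS
    msw = ((x - subj_means[:, None]) ** 2).sum() / (n * (k - 1))  # within-subject MS
    return (msb - msw) / (msb + (k - 1) * msw)
```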

Residual Convolutional Neural Network for the Determination of IDH Status in Low-and High-Grade Gliomas from MR Imaging

  • Chang, Ken
  • Bai, Harrison X
  • Zhou, Hao
  • Su, Chang
  • Bi, Wenya Linda
  • Agbodza, Ena
  • Kavouridis, Vasileios K
  • Senders, Joeky T
  • Boaro, Alessandro
  • Beers, Andrew
Clinical Cancer Research 2018 Journal Article, cited 26 times
Website

A deep learning pipeline to simulate fluorodeoxyglucose (FDG) uptake in head and neck cancers using non-contrast CT images without the administration of radioactive tracer

  • Chandrashekar, A.
  • Handa, A.
  • Ward, J.
  • Grau, V.
  • Lee, R.
Insights Imaging 2022 Journal Article, cited 0 times
Website
OBJECTIVES: Positron emission tomography (PET) imaging is a costly tracer-based imaging modality used to visualise abnormal metabolic activity for the management of malignancies. The objective of this study is to demonstrate that non-contrast CTs alone can be used to differentiate regions with different fluorodeoxyglucose (FDG) uptake and simulate PET images to guide clinical management. METHODS: Paired FDG-PET and CT images (n = 298 patients) with diagnosed head and neck squamous cell carcinoma (HNSCC) were obtained from The Cancer Imaging Archive. Random forest (RF) classification of CT-derived radiomic features was used to differentiate metabolically active (tumour) and inactive tissues (e.g., thyroid tissue). Subsequently, a deep learning generative adversarial network (GAN) was trained for this CT-to-PET transformation task without tracer injection. The simulated PET images were evaluated for technical accuracy (PERCIST v.1 criteria) and their ability to predict clinical outcome [(1) locoregional recurrence, (2) distant metastasis and (3) patient survival]. RESULTS: From 298 patients, 683 hot spots of elevated FDG uptake (elevated SUV, 6.03 +/- 1.71) were identified. RF models of intensity-based CT-derived radiomic features were able to differentiate regions of negligible, low and elevated FDG uptake within and surrounding the tumour. Using the GAN-simulated PET image alone, we were able to predict clinical outcome to the same accuracy as that achieved using FDG-PET images. CONCLUSION: This pipeline demonstrates a deep learning methodology to simulate PET images from CT images in HNSCC without the use of radioactive tracer. The same pipeline can be applied to other pathologies that require PET imaging.
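The abstract does not specify the GAN variant; a common choice for paired image-to-image tasks like this CT-to-PET mapping is a pix2pix-style conditional GAN, sketched below in PyTorch under that assumption. G and D stand for a generator and a conditional discriminator defined elsewhere.

```python
import torch
import torch.nn as nn

bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

def generator_loss(D, ct, fake_pet, real_pet, lam=100.0):
    """Adversarial term pushes simulated PET toward the real distribution;
    the L1 term keeps it voxel-wise close to the paired ground truth."""
    pred_fake = D(torch.cat([ct, fake_pet], dim=1))
    adv = bce(pred_fake, torch.ones_like(pred_fake))
    return adv + lam * l1(fake_pet, real_pet)

def discriminator_loss(D, ct, fake_pet, real_pet):
    pred_real = D(torch.cat([ct, real_pet], dim=1))
    pred_fake = D(torch.cat([ct, fake_pet.detach()], dim=1))
    return 0.5 * (bce(pred_real, torch.ones_like(pred_real)) +
                  bce(pred_fake, torch.zeros_like(pred_fake)))
```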

Automatic Classification of Brain Tumor Types with the MRI Scans and Histopathology Images

  • Chan, Hsiang-Wei
  • Weng, Yan-Ting
  • Huang, Teng-Yi
2020 Conference Paper, cited 0 times
Website
In this study, we used two neural networks, VGG16 and ResNet50, to extract features from whole-slide images. To classify the three types of brain tumors (i.e., glioblastoma, oligodendroglioma, and astrocytoma), we tried several methods, including k-means clustering and random forest classification. In the prediction stage, we compared the prediction results with and without MRI features. The results support that classification performed with image features extracted by VGG16 has the highest prediction accuracy. Moreover, we found that combining radiomics generated from MR images slightly improved the classification accuracy.
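A minimal sketch of the feature-extraction step described above, using a pretrained VGG16 from torchvision (the weights API of torchvision >= 0.13 is assumed; the tile size and preprocessing are illustrative):

```python
import numpy as np
import torch
from PIL import Image
from torchvision import models, transforms

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
vgg.classifier = vgg.classifier[:-1]    # drop the 1000-way head -> 4096-d features
vgg.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

tile = Image.fromarray(np.zeros((512, 512, 3), dtype=np.uint8))  # placeholder WSI tile
with torch.no_grad():
    feats = vgg(preprocess(tile).unsqueeze(0))   # (1, 4096) feature vector
# feats can then be clustered (k-means) or fed to a random forest classifier.
```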

Using Docker to support reproducible research

  • Chamberlain, Ryan
  • Invenshure, L
  • Schommer, Jennifer
2014 Report, cited 30 times
Website
Reproducible research is a growing movement among scientists, but the tools for creating sustainable software to support the computational side of research are still in their infancy and are typically only being used by scientists with expertise in computer programming and system administration. Docker is a new platform developed for the DevOps community that enables the easy creation and management of consistent computational environments. This article describes how we have applied it to computational science and suggests that it could be a powerful tool for reproducible research.

Automated lung field segmentation in CT images using mean shift clustering and geometrical features

  • Chama, Chanukya Krishna
  • Mukhopadhyay, Sudipta
  • Biswas, Prabir Kumar
  • Dhara, Ashis Kumar
  • Madaiah, Mahendra Kasuvinahally
  • Khandelwal, Niranjan
2013 Conference Proceedings, cited 8 times
Website

Quantitative evaluation of robust skull stripping and tumor detection applied to axial MR images

  • Chaddad, Ahmad
  • Tanougast, Camel
Brain Informatics 2016 Journal Article, cited 28 times
Website
Isolating the brain from non-brain tissues with a fully automatic method may be affected by the presence of radio-frequency non-homogeneity in MR images (MRI), regional anatomy, MR sequences, and the subjects of the study. In order to automate brain tumor (glioblastoma) detection, we proposed a novel approach to skull stripping for axial slices derived from MRI. The brain tumor was then detected using multi-level threshold segmentation based on histogram analysis. Skull stripping was performed with an adaptive morphological operations approach, in which an empirical threshold is computed iteratively from the area of brain tissue. It was employed on the registration of non-contrast T1-weighted (T1-WI) images and the corresponding fluid-attenuated inversion recovery sequence. Then, we used the multi-threshold segmentation (MTS) method proposed by Otsu. We calculated performance metrics based on similarity coefficients for patients (n = 120) with tumors. The adaptive skull-stripping algorithm and the MTS tumor segmentation achieved efficient preliminary results, with Dice similarity coefficients of 92 and 80% and false negative rates of 0.3 and 25.8%, respectively. The adaptive skull-stripping algorithm provides robust results, and the tumor area for medical diagnosis was determined by MTS.
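A minimal sketch of the multi-threshold (Otsu) segmentation step, using scikit-image; the number of classes is an illustrative assumption:

```python
import numpy as np
from skimage.filters import threshold_multiotsu

def multi_otsu_segment(brain_slice: np.ndarray, classes: int = 4):
    """Histogram-based multi-level Otsu thresholding of a skull-stripped slice."""
    thresholds = threshold_multiotsu(brain_slice, classes=classes)
    labels = np.digitize(brain_slice, bins=thresholds)   # labels 0..classes-1
    return thresholds, labels
```

The brightest label(s) can then be taken as tumor candidates, mirroring the histogram-analysis step described in the abstract.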

Extracted magnetic resonance texture features discriminate between phenotypes and are associated with overall survival in glioblastoma multiforme patients

  • Chaddad, Ahmad
  • Tanougast, Camel
2016 Journal Article, cited 16 times
Website
GBM is a markedly heterogeneous brain tumor consisting of three main volumetric phenotypes identifiable on magnetic resonance imaging: necrosis (vN), active tumor (vAT), and edema/invasion (vE). The goal of this study is to identify the three glioblastoma multiforme (GBM) phenotypes using a texture-based gray-level co-occurrence matrix (GLCM) approach and determine whether the texture features of the phenotypes are related to patient survival. MR imaging data of 40 GBM patients were analyzed. The phenotypes vN, vAT, and vE were segmented in a preprocessing step using 3D Slicer tools, with rigid registration of T1-weighted images and the corresponding fluid-attenuated inversion recovery images. Texture features were extracted from the GLCMs of the GBM phenotypes. Thereafter, the Kruskal-Wallis test was employed to select the significant features. Robust predictive GBM features were identified and subjected to numerous classifier analyses to distinguish phenotypes. Kaplan-Meier analysis was also performed to determine the relationship, if any, between phenotype texture features and survival rate. The results showed that 22 texture features were significant with p value < 0.05. GBM phenotype discrimination based on texture features showed best accuracy, sensitivity, and specificity of 79.31%, 91.67%, and 98.75%, respectively. Three texture features derived from active tumor parts (difference entropy, information measure of correlation, and inverse difference) were statistically significant in the prediction of survival, with log-rank p values of 0.001, 0.001, and 0.008, respectively. Among the 22 features examined, three texture features have the ability to predict overall survival for GBM patients, demonstrating the utility of GLCM analyses in both the diagnosis and prognosis of this patient population.
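A minimal sketch of GLCM texture extraction of the kind used here, with scikit-image (the >= 0.19 function names are assumed); the quantization level, distance, and property set are illustrative:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi: np.ndarray, levels: int = 32) -> dict:
    """Haralick-style features from a quantized 2-D ROI."""
    edges = np.linspace(roi.min(), roi.max(), levels)
    q = np.clip(np.digitize(roi, edges) - 1, 0, levels - 1).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=levels, symmetric=True, normed=True)
    return {p: float(graycoprops(glcm, p).mean())   # average over directions
            for p in ("contrast", "homogeneity", "energy", "correlation")}
```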

High-Throughput Quantification of Phenotype Heterogeneity Using Statistical Features

  • Chaddad, Ahmad
  • Tanougast, Camel
Advances in Bioinformatics 2015 Journal Article, cited 5 times
Website
Statistical features are widely used in radiology for tumor heterogeneity assessment using the magnetic resonance (MR) imaging technique. In this paper, feature selection based on a decision tree is examined to determine the relevant subset of glioblastoma (GBM) phenotypes in the statistical domain. To discriminate between the active tumor (vAT) and edema/invasion (vE) phenotypes, we selected the significant features using analysis of variance (ANOVA) with p value < 0.01. Then, we implemented the decision tree to define the optimal subset of features for the phenotype classifier. Naive Bayes (NB), support vector machine (SVM), and decision tree (DT) classifiers were considered to evaluate the performance of the feature-based scheme in terms of its capability to discriminate vAT from vE. All nine features were statistically significant for classifying vAT from vE with p value < 0.01. Feature selection based on the decision tree showed the best performance in a comparative study against the full feature set: the two selected features, kurtosis and skewness, achieved classifier accuracies of 58.33-75.00% and AUC values of 73.88-92.50%. This study demonstrated the ability of statistical features to provide a quantitative, individualized measurement for glioblastoma patients and to assess phenotype progression.

Prediction of survival with multi-scale radiomic analysis in glioblastoma patients

  • Chaddad, Ahmad
  • Sabri, Siham
  • Niazi, Tamim
  • Abdulkarim, Bassam
2018 Journal Article, cited 1 times
Website
We propose multiscale texture features based on the Laplacian-of-Gaussian (LoG) filter to predict progression-free survival (PFS) and overall survival (OS) in patients newly diagnosed with glioblastoma (GBM). Experiments use features extracted from 40 GBM patients with T1-weighted imaging (T1-WI) and fluid-attenuated inversion recovery (FLAIR) images that were segmented manually into areas of active tumor, necrosis, and edema. Multiscale texture features were extracted locally from each of these areas of interest using a LoG filter, and the relation of the features to OS and PFS was investigated using univariate (i.e., Spearman's rank correlation coefficient, log-rank test, and Kaplan-Meier estimator) and multivariate (i.e., random forest classifier) analyses. Three and seven features were statistically correlated with PFS and OS, respectively, with absolute correlation values between 0.32 and 0.36 and p < 0.05. Three features derived from active tumor regions only were associated with OS (p < 0.05), with hazard ratios (HR) of 2.9, 3, and 3.24, respectively. Combined features showed AUC values of 85.37% and 85.54% for predicting the PFS and OS of GBM patients, respectively, using the random forest (RF) classifier. We presented multiscale texture features to characterize the GBM regions and predict PFS and OS. The achievable performance suggests that this technique can be developed into a GBM MR analysis system suitable for clinical use after thorough validation involving more patients.
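A minimal sketch of the multiscale LoG filtering step, via SciPy; the scale set and spacing handling are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def multiscale_log(volume: np.ndarray, sigmas_mm=(1.0, 2.0, 3.0, 4.0),
                   spacing_mm: float = 1.0) -> dict:
    """LoG response maps of a tumor sub-volume at several scales; texture
    statistics are then computed on each response map."""
    return {s: gaussian_laplace(volume.astype(float), sigma=s / spacing_mm)
            for s in sigmas_mm}
```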

Predicting Gleason Score of Prostate Cancer Patients using Radiomic Analysis

  • Chaddad, Ahmad
  • Niazi, Tamim
  • Probst, Stephan
  • Bladou, Franck
  • Anidjar, Moris
  • Bahoric, Boris
Frontiers in Oncology 2018 Journal Article, cited 0 times
Website

Multimodal Radiomic Features for Predicting Gleason Score of Prostate Cancer

  • Chaddad, Ahmad
  • Kucharczyk, Michael
  • Niazi, Tamim
Cancers 2018 Journal Article, cited 1 times
Website

Predicting survival time of lung cancer patients using radiomic analysis

  • Chaddad, Ahmad
  • Desrosiers, Christian
  • Toews, Matthew
  • Abdulkarim, Bassam
Oncotarget 2017 Journal Article, cited 4 times
Website
Objectives: This study investigates the prediction of Non-small cell lung cancer (NSCLC) patient survival outcomes based on radiomic texture and shape features automatically extracted from tumor image data. Materials and Methods: Retrospective analysis involves CT scans of 315 NSCLC patients from The Cancer Imaging Archive (TCIA). A total of 24 image features are computed from labeled tumor volumes of patients within groups defined using NSCLC subtype and TNM staging information. Spearman's rank correlation, Kaplan-Meier estimation and log-rank tests were used to identify features related to long/short NSCLC patient survival groups. Automatic random forest classification was used to predict patient survival group from multivariate feature data. Significance is assessed at P < 0.05 following Holm-Bonferroni correction for multiple comparisons. Results: Significant correlations between radiomic features and survival were observed for four clinical groups: (group, [absolute correlation range]): (large cell carcinoma (LCC) [0.35, 0.43]), (tumor size T2, [0.31, 0.39]), (non lymph node metastasis N0, [0.3, 0.33]), (TNM stage I, [0.39, 0.48]). Significant log-rank relationships between features and survival time were observed for three clinical groups: (group, hazard ratio): (LCC, 3.0), (LCC, 3.9), (T2, 2.5) and (stage I, 2.9). Automatic survival prediction performance (i.e. below/above median) is superior for combined radiomic features with age-TNM in comparison to standard TNM clinical staging information (clinical group, mean area-under-the-ROC-curve (AUC)): (LCC, 75.73%), (N0, 70.33%), (T2, 70.28%) and (TNM-I, 76.17%). Conclusion: Quantitative lung CT imaging features can be used as indicators of survival, in particular for patients with large-cell-carcinoma (LCC), primary-tumor-sizes (T2) and no lymph-node-metastasis (N0).
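A minimal sketch of the univariate survival analysis described above (median split of one feature, Kaplan-Meier estimation, log-rank test) using the lifelines package; the median split is an illustrative grouping choice:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def km_logrank(feature, time, event):
    """feature, time, event: 1-D arrays over patients; event is 1 if death observed."""
    feature, time, event = map(np.asarray, (feature, time, event))
    high = feature > np.median(feature)
    kmf = KaplanMeierFitter()
    kmf.fit(time[high], event[high], label="feature high")  # survival curve, high group
    res = logrank_test(time[high], time[~high],
                       event_observed_A=event[high],
                       event_observed_B=event[~high])
    return kmf, res.p_value
```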

GBM heterogeneity characterization by radiomic analysis of phenotype anatomical planes

  • Chaddad, Ahmad
  • Desrosiers, Christian
  • Toews, Matthew
2016 Conference Proceedings, cited 4 times
Website
Glioblastoma multiforme (GBM) is the most common malignant primary tumor of the central nervous system, characterized among other traits by rapid metastasis. Three tissue phenotypes closely associated with GBMs, namely, necrosis (N), contrast enhancement (CE), and edema/invasion (E), exhibit characteristic patterns of texture heterogeneity in magnetic resonance images (MRI). In this study, we propose a novel model to characterize GBM tissue phenotypes using gray-level co-occurrence matrices (GLCM) in three anatomical planes. The GLCM encodes local image patches in terms of informative, orientation-invariant texture descriptors, which are used here to sub-classify GBM tissue phenotypes. Experiments demonstrate the model on MRI data of 41 GBM patients, obtained from The Cancer Genome Atlas (TCGA). Intensity-based automatic image registration is applied to align corresponding pairs of fixed T1-weighted (T1-WI) post-contrast and fluid-attenuated inversion recovery (FLAIR) images. GBM tissue regions are then segmented using the 3D Slicer tool. Texture features are computed from 12 quantifier functions operating on GLCM descriptors generated from MRI intensities within the segmented GBM tissue regions. Various classifier models are used to evaluate the effectiveness of the texture features for discriminating between GBM phenotypes. Results based on T1-WI scans showed a phenotype classification accuracy of over 88.14%, a sensitivity of 85.37% and a specificity of 96.1%, using the linear discriminant analysis (LDA) classifier. This model has the potential to provide important characteristics of tumors, which can be used for the sub-classification of GBM phenotypes.

Phenotypic characterization of glioblastoma identified through shape descriptors

  • Chaddad, Ahmad
  • Desrosiers, Christian
  • Toews, Matthew
2016 Conference Proceedings, cited 4 times
Website
This paper proposes quantitatively describing the shape of glioblastoma (GBM) tissue phenotypes as a set of shape features derived from segmentations, for the purposes of discriminating between GBM phenotypes and monitoring tumor progression. GBM patients were identified from The Cancer Genome Atlas, and quantitative MR imaging data were obtained from The Cancer Imaging Archive. Three GBM tissue phenotypes are considered: necrosis, active tumor, and edema/invasion. Volumetric tissue segmentations are obtained from registered T1-weighted (T1-WI) post-contrast and fluid-attenuated inversion recovery (FLAIR) MRI modalities. Shape features are computed from the respective tissue phenotype segmentations, and a Kruskal-Wallis test was employed to select features capable of classification at a significance level of p < 0.05. Several classifier models are employed to distinguish phenotypes, with leave-one-out cross-validation. Eight features were found statistically significant for classifying GBM phenotypes with p < 0.05; orientation was uninformative. Quantitative evaluations show the SVM achieves the highest classification accuracy of 87.50%, sensitivity of 94.59% and specificity of 92.77%. In summary, the shape descriptors proposed in this work show high performance in predicting GBM tissue phenotypes. They are thus closely linked to morphological characteristics of GBM phenotypes and could potentially be used in a computer-assisted labeling system.
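A minimal sketch of slice-wise shape descriptors of the kind studied here, using scikit-image regionprops; the specific feature set is an illustrative assumption, and the mask is assumed to contain a single connected phenotype region:

```python
import numpy as np
from skimage.measure import label, regionprops

def shape_descriptors(mask: np.ndarray) -> dict:
    """Basic 2-D shape features from a binary phenotype segmentation slice."""
    region = regionprops(label(mask.astype(int)))[0]   # assumes one region
    return {
        "area": region.area,
        "eccentricity": region.eccentricity,
        "solidity": region.solidity,
        "extent": region.extent,
        "major_axis_length": region.major_axis_length,
    }
```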

Radiomic analysis of multi-contrast brain MRI for the prediction of survival in patients with glioblastoma multiforme

  • Chaddad, Ahmad
  • Desrosiers, Christian
  • Toews, Matthew
2016 Conference Proceedings, cited 11 times
Website
Image texture features are effective at characterizing the microstructure of cancerous tissues. This paper proposes predicting the survival times of glioblastoma multiforme (GBM) patients using texture features extracted from multi-contrast brain MRI images. Texture features are derived locally from contrast enhancement, necrosis and edema regions in T1-weighted post-contrast and fluid-attenuated inversion-recovery (FLAIR) MRIs, based on the gray-level co-occurrence matrix representation. A statistical analysis based on the Kaplan-Meier method and log-rank test is used to identify the texture features related to the overall survival of GBM patients. Results are presented on a dataset of 39 GBM patients. For FLAIR images, four features (Energy, Correlation, Variance and Inverse of Variance) from contrast enhancement regions and one feature (Homogeneity) from edema regions were shown to be associated with survival times (p-value < 0.01). Likewise, in T1-weighted images, three features (Energy, Correlation, and Variance) from contrast enhancement regions were found to be useful for predicting the overall survival of GBM patients. These preliminary results show the advantages of texture analysis in predicting the prognosis of GBM patients from multi-contrast brain MRI.

Automated feature extraction in brain tumor by magnetic resonance imaging using gaussian mixture models

  • Chaddad, Ahmad
Journal of Biomedical Imaging 2015 Journal Article, cited 29 times
Website

Computational assessment of stomach tumor volume from multi-slice computerized tomography images in presence of type 2 cancer

  • Chacón, Gerardo
  • Rodríguez, Johel E
  • Bermúdez, Valmore
  • Vera, Miguel
  • Hernández, Juan Diego
  • Vargas, Sandra
  • Pardo, Aldo
  • Lameda, Carlos
  • Madriz, Delia
  • Bravo, Antonio J
F1000Research 2018 Journal Article, cited 0 times
Website
Background: Multi-slice computerized tomography (MSCT) is a medical imaging modality that has been used to determine the size and location of stomach cancer. Additionally, MSCT is considered the best modality for the staging of gastric cancer. One way to assess type 2 stomach cancer is by detecting the pathological structure with an image segmentation approach. Tumor segmentation of MSCT gastric cancer images enables the diagnosis of the disease condition, for a given patient, without using an invasive method such as surgical intervention. Methods: This approach consists of three stages. The initial stage, image enhancement, consists of a method for correcting non-homogeneities present in the background of MSCT images. Then, a segmentation stage using a clustering method allows obtaining the adenocarcinoma morphology. In the third stage, the pathological region is reconstructed and then visualized with a three-dimensional (3-D) computer graphics procedure based on the marching cubes algorithm. In order to validate the segmentations, the Dice score is used as a metric for comparing the segmentations obtained using the proposed method with ground-truth volumes traced by a clinician. Results: A total of 8 datasets available for diagnosed patients, from the cancer data collection of the Cancer Genome Atlas Stomach Adenocarcinoma (TCGA-STAD) project, are considered in this research. The volume of the type 2 stomach tumor is estimated from the 3-D shape computationally segmented from each dataset. These 3-D shapes are computationally reconstructed and then used to assess the macroscopic morphopathological features of this cancer. Conclusions: The segmentations obtained are useful for assessing type 2 stomach cancer qualitatively and quantitatively. In addition, this type of segmentation allows the development of computational models for planning virtual surgical processes related to type 2 cancer.
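Two of the quantities used in the validation, sketched in NumPy (binary voxel masks and a known voxel volume are assumed):

```python
import numpy as np

def dice_score(seg: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between a computed segmentation and the
    clinician-traced ground-truth mask."""
    inter = np.logical_and(seg, truth).sum()
    return 2.0 * inter / (seg.sum() + truth.sum())

def tumor_volume_ml(mask: np.ndarray, voxel_mm3: float) -> float:
    """Tumor volume from the voxel count of the segmented 3-D shape."""
    return mask.sum() * voxel_mm3 / 1000.0     # mm^3 -> mL
```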

Qualitative stomach cancer assessment by multi-slice computed tomography

  • Chacón, Gerardo
  • Rodríguez, Johel E.
  • Bermúdez, Valmore
  • Vera, Miguel
  • Hernandez, Juan Diego
  • Pardo, Aldo
  • Lameda, Carlos
  • Madriz, Delia
  • Bravo, Antonio José
Ingeniare. Revista chilena de ingeniería 2020 Journal Article, cited 0 times
Website
ABSTRACT: A theoretical framework based on the Borrmann classification and the Japanese gastric cancer classification is proposed in order to qualitatively assess stomach cancer from three-dimensional (3-D) images obtained using multi-slice computerized tomography (MSCT). The main goal of this paper is to demonstrate, through visual inspection, the capacity of MSCT to effectively reflect the morphopathological characteristics of the stomach adenocarcinoma types. The idea is to contrast the theoretical pathological characteristics with those that can be discerned in MSCT images available in clinical datasets. This research corresponds to a study with a mixed approach (qualitative and quantitative), applied to a total of 46 images available for diagnosed patients from the data collection of the Cancer Genome Atlas Stomach Adenocarcinoma (TCGA-STAD). The conclusions are established from a comparative analysis based on document review and direct observation, the product being a matrix of compliance with the specific qualities of the theoretical standards, in the visualization of images performed by the clinical specialist from the datasets. A total of 6210 slices from 46 MSCT explorations were visually inspected, and the visual characteristics were contrasted with the theoretical characteristics obtained from the cancer classifications. These characteristics matched in about 96% of the images inspected. The effectiveness of the approach, measured using the positive predictive value, is about 96.50%. The results also show a sensitivity of 97.83% and a specificity of 98.27%. MSCT is a precise imaging modality for the qualitative assessment of the staging of stomach cancer. Keywords: Stomach cancer; adenocarcinoma; macroscopic assessment; Borrmann classification; Japanese classification; medical imaging; multi-slice computerized tomography

Evaluation of data augmentation via synthetic images for improved breast mass detection on mammograms using deep learning

  • Cha, K. H.
  • Petrick, N.
  • Pezeshk, A.
  • Graff, C. G.
  • Sharma, D.
  • Badal, A.
  • Sahiner, B.
J Med Imaging (Bellingham) 2020 Journal Article, cited 1 times
Website
We evaluated whether using synthetic mammograms for training data augmentation may reduce the effects of overfitting and increase the performance of a deep learning algorithm for breast mass detection. Synthetic mammograms were generated using in silico procedural analytic breast and breast mass modeling algorithms, followed by simulated x-ray projections of the breast models into mammographic images. In silico breast phantoms containing masses were modeled across the four BI-RADS breast density categories, and the masses were modeled with different sizes, shapes, and margins. A Monte Carlo-based x-ray transport simulation code, MC-GPU, was used to project the three-dimensional phantoms into realistic synthetic mammograms. A total of 2000 mammograms with 2522 masses were generated to augment a real data set during training. From the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) data set, we used 1111 mammograms (1198 masses) for training, 120 mammograms (120 masses) for validation, and 361 mammograms (378 masses) for testing. We used Faster R-CNN for our deep learning network, with ImageNet pretraining and the ResNet-101 architecture. We compared the detection performance when the network was trained using different percentages of the real CBIS-DDSM training set (100%, 50%, and 25%), and when these subsets of the training set were augmented with 250, 500, 1000, and 2000 synthetic mammograms. Free-response receiver operating characteristic (FROC) analysis was performed to compare performance with and without the synthetic mammograms. We generally observed an improved test FROC curve when training with the synthetic images compared to training without them, and the amount of improvement depended on the number of real and synthetic images used in training. Our study shows that enlarging the training data with synthetic samples can increase the performance of deep learning systems.

Segmentation and tracking of lung nodules via graph‐cuts incorporating shape prior and motion from 4D CT

  • Cha, Jungwon
  • Farhangi, Mohammad Mehdi
  • Dunlap, Neal
  • Amini, Amir A
Medical Physics 2018 Journal Article, cited 5 times
Website

Segmentation, tracking, and kinematics of lung parenchyma and lung tumors from 4D CT with application to radiation treatment planning

  • Cha, Jungwon
2018 Thesis, cited 0 times
Website
This thesis is concerned with the development of techniques for efficient computerized analysis of 4-D CT data. The goal is a highly automated approach to segmentation of the lung boundary and of lung nodules inside the lung. The determination of exact lung tumor location over space and time by image segmentation is an essential step in tracking thoracic malignancies. Accurate image segmentation helps clinical experts examine the anatomy and structure and determine disease progress. Since 4-D CT provides structural and anatomical information during tidal breathing, we use the same data to also measure mechanical properties related to deformation of the lung tissue, including the Jacobian and strain, at high resolution and as a function of time. Radiation treatment of patients with lung cancer can benefit from knowledge of these measures of regional ventilation. Graph-cuts techniques have been popular for image segmentation since they are able to treat highly textured data via robust global optimization, avoiding local minima. Graph-cuts methods have been used to extract globally optimal boundaries from images via the s/t cut, with an energy function based on model-specific visual cues and useful topological constraints. The method makes N-dimensional globally optimal segmentation possible with good computational efficiency. Even though the graph-cuts method can extract objects where there is a clear intensity difference, segmentation of organs or tumors poses a challenge. For organ segmentation, many segmentation methods using a shape prior have been proposed. However, in the case of lung tumors, the shape varies from patient to patient and with location. In this thesis, we build a shape prior for tumors through a training step and PCA analysis based on the Active Shape Model (ASM). The method has been tested on real patient data from the Brown Cancer Center at the University of Louisville. We performed temporal B-spline deformable registration of the 4-D CT data; this yielded 3-D deformation fields between successive respiratory phases, from which measures of regional lung function were determined. During the respiratory cycle, the lung volume changes, and the five lobes of the lung (two in the left and three in the right lung) deform differently, yielding different strain and Jacobian maps. In this thesis, we determine the regional lung mechanics in the Lagrangian frame of reference across respiratory phases, for example, Phase 10 to 20, Phase 10 to 30, Phase 10 to 40, and Phase 10 to 50. Single photon emission computed tomography (SPECT) lung imaging using radioactive tracers, with SPECT ventilation and SPECT perfusion imaging, also provides functional information. Therefore, as part of an IRB-approved study, we registered the max-inhale CT volume to both VSPECT and QSPECT data sets using the Demons non-rigid registration algorithm. Subsequently, statistical correlation between the CT ventilation images (Jacobian and strain values) and both VSPECT and QSPECT was undertaken. Using the Spearman's rank correlation coefficient, we found that Jacobian values have the highest correlation with both VSPECT and QSPECT.
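A minimal NumPy sketch of the Jacobian map computation mentioned in the abstract, for a dense displacement field u with phi(x) = x + u(x); the field layout and spacing handling are illustrative assumptions:

```python
import numpy as np

def jacobian_determinant(disp: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> np.ndarray:
    """disp: (3, Z, Y, X) displacement field. Returns det(I + du/dx) per voxel;
    values > 1 indicate local expansion, < 1 local contraction."""
    g = np.array([np.gradient(disp[i], *spacing) for i in range(3)])  # (3, 3, Z, Y, X)
    J = g + np.eye(3)[:, :, None, None, None]                         # I + du/dx
    return (J[0, 0] * (J[1, 1] * J[2, 2] - J[1, 2] * J[2, 1])
          - J[0, 1] * (J[1, 0] * J[2, 2] - J[1, 2] * J[2, 0])
          + J[0, 2] * (J[1, 0] * J[2, 1] - J[1, 1] * J[2, 0]))        # 3x3 determinant
```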

Non-navigated 2D intraoperative ultrasound: An unsophisticated surgical tool to achieve high standards of care in glioma surgery

  • Cepeda, S.
  • Garcia-Garcia, S.
  • Arrese, I.
  • Sarabia, R.
J Neurooncol 2024 Journal Article, cited 0 times
Website
PURPOSE: In an era characterized by rapid progression in neurosurgical technologies, traditional tools such as the non-navigated two-dimensional intraoperative ultrasound (nn-2D-IOUS) risk being overshadowed. Against this backdrop, this study endeavors to provide a comprehensive assessment of the clinical efficacy and surgical relevance of nn-2D-IOUS, specifically in the context of glioma resections. METHODS: This retrospective study undertaken at a single center evaluated 99 consecutive, non-selected patients diagnosed with both high-grade and low-grade gliomas. The primary objective was to assess the proficiency of nn-2D-IOUS in generating satisfactory image quality, identifying residual tumor tissue, and its influence on the extent of resection. To validate these results, early postoperative MRI data served as the reference standard. RESULTS: The nn-2D-IOUS exhibited a high level of effectiveness, successfully generating good quality images in 79% of the patients evaluated. With a sensitivity rate of 68% and a perfect specificity of 100%, nn-2D-IOUS unequivocally demonstrated its utility in intraoperative residual tumor detection. Notably, when total tumor removal was the surgical objective, a resection exceeding 95% of the initial tumor volume was achieved in 86% of patients. Additionally, patients in whom residual tumor was not detected by nn-2D-IOUS, the mean volume of undetected tumor tissue was remarkably minimal, averaging at 0.29 cm(3). CONCLUSION: Our study supports nn-2D-IOUS's invaluable role in glioma surgery. The results highlight the utility of traditional technologies for enhanced surgical outcomes, even when compared to advanced alternatives. This is particularly relevant for resource-constrained settings and emphasizes optimizing existing tools for efficient patient care. NCT05873946 - 24/05/2023 - Retrospectively registered.

Renal cell carcinoma: predicting RUNX3 methylation level and its consequences on survival with CT features

  • Cen, Dongzhi
  • Xu, Li
  • Zhang, Siwei
  • Chen, Zhiguang
  • Huang, Yan
  • Li, Ziqi
  • Liang, Bo
European Radiology 2019 Journal Article, cited 0 times
Website
PURPOSE: To investigate associations between CT imaging features, RUNX3 methylation level, and survival in clear cell renal cell carcinoma (ccRCC). MATERIALS AND METHODS: Patients were divided into high RUNX3 methylation and low RUNX3 methylation groups according to RUNX3 methylation level (the threshold was identified using X-tile). The CT scanning data of 106 ccRCC patients were retrospectively analyzed. The relationship between RUNX3 methylation level and overall survival was evaluated using Kaplan-Meier analysis and Cox regression analysis (univariate and multivariate). The relationship between RUNX3 methylation level and CT features was evaluated using the chi-square test and logistic regression analysis (univariate and multivariate). RESULTS: A beta value cutoff of 0.53 was used to distinguish high-methylation (N = 44) from low-methylation tumors (N = 62). Patients with lower levels of methylation had longer median overall survival (49.3 vs. 28.4 months; low vs. high, adjusted hazard ratio [HR] 4.933, 95% CI 2.054-11.852, p < 0.001). On univariate logistic regression analysis, four risk factors (margin, side, long diameter, and intratumoral vascularity) were associated with RUNX3 methylation level (all p < 0.05). Multivariate logistic regression analysis found that three risk factors (side: left vs. right, odds ratio [OR] 2.696, p = 0.024, 95% CI 1.138-6.386; margin: ill-defined vs. well-defined, OR 2.685, p = 0.038, 95% CI 1.057-6.820; and intratumoral vascularity: yes vs. no, OR 3.286, p = 0.008, 95% CI 1.367-7.898) were significant independent predictors of high-methylation tumors. This model had an area under the receiver operating characteristic curve (AUC) of 0.725 (95% CI 0.623-0.827). CONCLUSIONS: Higher levels of RUNX3 methylation are associated with shorter survival in ccRCC patients, and the presence of intratumoral vascularity, an ill-defined margin, and left-side location were significant independent predictors of a high methylation level of the RUNX3 gene. KEY POINTS: * RUNX3 methylation level is negatively associated with overall survival in ccRCC patients. * Presence of intratumoral vascularity, ill-defined margin, and left-side location were significant independent predictors of a high methylation level of the RUNX3 gene.

Improving the classification of veterinary thoracic radiographs through inter-species and inter-pathology self-supervised pre-training of deep learning models

  • Celniak, W.
  • Wodzinski, M.
  • Jurgas, A.
  • Burti, S.
  • Zotti, A.
  • Atzori, M.
  • Muller, H.
  • Banzato, T.
2023 Journal Article, cited 1 times
Website
The analysis of veterinary radiographic imaging data is an essential step in the diagnosis of many thoracic lesions. Given the limited time that physicians can devote to a single patient, it would be valuable to implement an automated system to help clinicians make faster but still accurate diagnoses. Currently, most of such systems are based on supervised deep learning approaches. However, the problem with these solutions is that they need a large database of labeled data. Access to such data is often limited, as it requires a great investment of both time and money. Therefore, in this work we present a solution that allows higher classification scores to be obtained using knowledge transfer from inter-species and inter-pathology self-supervised learning methods. Before training the network for classification, pretraining of the model was performed using self-supervised learning approaches on publicly available unlabeled radiographic data of human and dog images, which allowed substantially increasing the number of images for this phase. The self-supervised learning approaches included the Beta Variational Autoencoder, the Soft-Introspective Variational Autoencoder, and a Simple Framework for Contrastive Learning of Visual Representations. After the initial pretraining, fine-tuning was performed for the collected veterinary dataset using 20% of the available data. Next, a latent space exploration was performed for each model after which the encoding part of the model was fine-tuned again, this time in a supervised manner for classification. Simple Framework for Contrastive Learning of Visual Representations proved to be the most beneficial pretraining method. Therefore, it was for this method that experiments with various fine-tuning methods were carried out. We achieved a mean ROC AUC score of 0.77 and 0.66, respectively, for the laterolateral and dorsoventral projection datasets. The results show significant improvement compared to using the model without any pretraining approach.
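Of the three pretraining approaches, SimCLR proved most beneficial; its core is the NT-Xent contrastive loss, sketched below in PyTorch under the usual formulation (the temperature and batch handling are illustrative):

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (N, d) embeddings of two augmented views of the same N images."""
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d), unit norm
    sim = z @ z.t() / temperature                        # cosine similarity matrix
    sim.fill_diagonal_(float("-inf"))                    # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)                 # positive = the other view
```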

Detection of Tumor Slice in Brain Magnetic Resonance Images by Feature Optimized Transfer Learning

  • Celik, Salih
  • Kasim, Ömer
Aksaray University Journal of Science and Engineering 2020 Journal Article, cited 0 times
Website

Highly accurate model for prediction of lung nodule malignancy with CT scans

  • Causey, Jason L
  • Zhang, Junyu
  • Ma, Shiqian
  • Jiang, Bo
  • Qualls, Jake A
  • Politte, David G
  • Prior, Fred
  • Zhang, Shuzhong
  • Huang, Xiuzhen
Scientific Reports 2018 Journal Article, cited 5 times
Website
Computed tomography (CT) examinations are commonly used to predict lung nodule malignancy in patients, which are shown to improve noninvasive early diagnosis of lung cancer. It remains challenging for computational approaches to achieve performance comparable to experienced radiologists. Here we present NoduleX, a systematic approach to predict lung nodule malignancy from CT data, based on deep learning convolutional neural networks (CNN). For training and validation, we analyze >1000 lung nodules in images from the LIDC/IDRI cohort. All nodules were identified and classified by four experienced thoracic radiologists who participated in the LIDC project. NoduleX achieves high accuracy for nodule malignancy classification, with an AUC of ~0.99. This is commensurate with the analysis of the dataset by experienced radiologists. Our approach, NoduleX, provides an effective framework for highly accurate nodule malignancy prediction with the model trained on a large patient population. Our results are replicable with software available at http://bioinformatics.astate.edu/NoduleX .

MRI volume changes of axillary lymph nodes as predictor of pathological complete responses to neoadjuvant chemotherapy in breast cancer

  • Cattell, Renee F.
  • Kang, James J.
  • Ren, Thomas
  • Huang, Pauline B.
  • Muttreja, Ashima
  • Dacosta, Sarah
  • Li, Haifang
  • Baer, Lea
  • Clouston, Sean
  • Palermo, Roxanne
  • Fisher, Paul
  • Bernstein, Cliff
  • Cohen, Jules A.
  • Duong, Tim Q.
Clinical Breast Cancer 2019 Journal Article, cited 0 times
Website
Introduction: Longitudinal monitoring of breast tumor volume over the course of chemotherapy is informative of pathological response. This study aims to determine whether axillary lymph node (aLN) volume on MRI could augment the prediction accuracy of treatment response to neoadjuvant chemotherapy (NAC). Materials and Methods: Level-2a curated data from the I-SPY-1 TRIAL (2002-2006) were used. Patients had stage 2 or 3 breast cancer. MRI was acquired pre-, during, and post-NAC. A subset with visible aLNs on MRI was identified (N = 132). Prediction of pathological complete response (PCR) was made using breast tumor volume changes, nodal volume changes, and combined breast tumor and nodal volume changes, with sub-stratification with and without large lymph nodes (3 mL, or ~1.79 cm diameter, cutoff). Receiver operating characteristic curve analysis was used to quantify prediction performance. Results: The rate of change of aLN and breast tumor volume were informative of pathological response, with prediction being most informative early in treatment (AUC: 0.63-0.82) compared to later in treatment (AUC: 0.50-0.73). Larger aLN volume was associated with hormone receptor negativity, with the largest nodal volume for triple negative subtypes. Sub-stratification by node size improved predictive performance, with the best predictive model for large nodes having an AUC of 0.82. Conclusion: Axillary lymph node MRI offers clinically relevant information and has the potential to predict treatment response to neoadjuvant chemotherapy in breast cancer patients.

Selección de un algoritmo para la clasificación de Nódulos Pulmonares Solitarios [Selection of an algorithm for the classification of solitary pulmonary nodules]

  • Castro, Arelys Rivero
  • Correa, Luis Manuel Cruz
  • Lezcano, Jeffrey Artiles
Revista Cubana de Informática Médica 2016 Journal Article, cited 0 times
Website

Classification of Clinically Significant Prostate Cancer on Multi-Parametric MRI: A Validation Study Comparing Deep Learning and Radiomics

  • Castillo T., Jose M.
  • Arif, Muhammad
  • Starmans, Martijn P. A.
  • Niessen, Wiro J.
  • Bangma, Chris H.
  • Schoots, Ivo G.
  • Veenland, Jifke F.
Cancers 2022 Journal Article, cited 0 times
Website

The Impact of Normalization Approaches to Automatically Detect Radiogenomic Phenotypes Characterizing Breast Cancer Receptors Status

  • Castaldo, Rossana
  • Pane, Katia
  • Nicolai, Emanuele
  • Salvatore, Marco
  • Franzese, Monica
Cancers (Basel) 2020 Journal Article, cited 0 times
Website
In breast cancer studies, combining quantitative radiomic with genomic signatures can help identify and characterize radiogenomic phenotypes as a function of molecular receptor status. Biomedical image processing lacks standards for radiomic feature normalization, and neglecting feature normalization can highly bias the overall analysis. This study evaluates the effect of several normalization techniques on the prediction of four clinical phenotypes: estrogen receptor (ER), progesterone receptor (PR), human epidermal growth factor receptor 2 (HER2), and triple negative (TN) status, from quantitative features. Radiomic features from The Cancer Imaging Archive (TCIA) for 91 T1-weighted dynamic contrast-enhanced MRIs of invasive breast cancers were investigated in association with breast invasive carcinoma miRNA expression profiling from The Cancer Genome Atlas (TCGA). Three advanced machine learning techniques (support vector machine, random forest, and naive Bayes) were investigated to distinguish between molecular prognostic indicators, achieving area under the ROC curve (AUC) values of 86%, 93%, 91%, and 91% for the prediction of ER+ versus ER-, PR+ versus PR-, HER2+ versus HER2-, and triple-negative status, respectively. In conclusion, radiomic features enable discrimination of the major breast cancer molecular subtypes and may yield a potential imaging biomarker for advancing precision medicine.
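A minimal scikit-learn sketch of the comparison this paper performs, with the scaler kept inside the cross-validation pipeline so test-fold statistics do not leak; the scaler set, classifier, and synthetic data are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler

def compare_scalers(X, y):
    scalers = {"zscore": StandardScaler(), "minmax": MinMaxScaler()}
    return {name: cross_val_score(
                make_pipeline(scaler, RandomForestClassifier(random_state=0)),
                X, y, cv=5, scoring="roc_auc").mean()
            for name, scaler in scalers.items()}

rng = np.random.default_rng(0)
X = rng.normal(size=(91, 200))      # placeholder radiomic feature matrix
y = rng.integers(0, 2, size=91)     # placeholder receptor-status labels
print(compare_scalers(X, y))
```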

Predicting risk of metastases and recurrence in soft-tissue sarcomas via Radiomics and Formal Methods

  • Casale, R.
  • Varriano, G.
  • Santone, A.
  • Messina, C.
  • Casale, C.
  • Gitto, S.
  • Sconfienza, L. M.
  • Bali, M. A.
  • Brunese, L.
2023 Journal Article, cited 0 times
Website
OBJECTIVE: Soft-tissue sarcomas (STSs) of the extremities are a group of malignancies arising from mesenchymal cells that may develop distant metastases or local recurrence. In this article, we propose a novel methodology aimed at predicting metastasis and recurrence risk in patients with these malignancies by evaluating magnetic resonance radiomic features, which are formally verified through formal logic models. MATERIALS AND METHODS: This is a retrospective study based on a public dataset of T2-weighted fat-saturated or short tau inversion recovery MRI scans, with patients having "metastases/local recurrence" (group B) or "no metastases/no local recurrence" (group A) as clinical outcomes. Once radiomic features are extracted, they are included in formal models, on which the logic property written by a radiologist and his computer scientist coworkers is automatically verified. RESULTS: Evaluating the efficacy of Formal Methods in predicting distant metastases/local recurrence in STSs (group A vs group B), our methodology showed a sensitivity and specificity of 0.81 and 0.67, respectively; this suggests that radiomics and formal verification may be useful in predicting future metastasis or local recurrence development in soft-tissue sarcoma. DISCUSSION: The authors discuss the literature to position Formal Methods as a valid alternative to other Artificial Intelligence techniques. CONCLUSIONS: An innovative, noninvasive, and rigorous methodology can be significant in predicting local recurrence and metastasis development in STSs. Future work could assess the methodology in multicentric studies to extract objective disease information, enriching the connection between quantitative radiomic analysis and radiological clinical evidence.

Development and external validation of a non-invasive molecular status predictor of chromosome 1p/19q co-deletion based on MRI radiomics analysis of Low Grade Glioma patients

  • Casale, R.
  • Lavrova, E.
  • Sanduleanu, S.
  • Woodruff, H. C.
  • Lambin, P.
Eur J Radiol 2021 Journal Article, cited 0 times
Website
PURPOSE: The 1p/19q co-deletion status has been demonstrated to be a prognostic biomarker in lower grade glioma (LGG). The objective of this study was to build a magnetic resonance (MRI)-derived radiomics model to predict the 1p/19q co-deletion status. METHOD: 209 pathology-confirmed LGG patients from 2 different datasets from The Cancer Imaging Archive were retrospectively reviewed; one dataset with 159 patients as the training and discovery dataset and the other one with 50 patients as validation dataset. Radiomics features were extracted from T2- and T1-weighted post-contrast MRI resampled data using linear and cubic interpolation methods. For each of the voxel resampling methods a three-step approach was used for feature selection and a random forest (RF) classifier was trained on the training dataset. Model performance was evaluated on training and validation datasets and clinical utility indexes (CUIs) were computed. The distributions and intercorrelation for selected features were analyzed. RESULTS: Seven radiomics features were selected from the cubic interpolated features and five from the linear interpolated features on the training dataset. The RF classifier showed similar performance for cubic and linear interpolation methods in the training dataset with accuracies of 0.81 (0.75-0.86) and 0.76 (0.71-0.82) respectively; in the validation dataset the accuracy dropped to 0.72 (0.6-0.82) using cubic interpolation and 0.72 (0.6-0.84) using linear resampling. CUIs showed the model achieved satisfactory negative values (0.605 using cubic interpolation and 0.569 for linear interpolation). CONCLUSIONS: MRI has the potential for predicting the 1p/19q status in LGGs. Both cubic and linear interpolation methods showed similar performance in external validation.

Deep learning-based tumor segmentation and classification in breast MRI with 3TP method

  • Carvalho, Edson Damasceno
  • da Silva Neto, Otilio Paulo
  • de Carvalho Filho, Antônio Oseas
Biomedical Signal Processing and Control 2024 Journal Article, cited 0 times
Background and Objective: Timely diagnosis of early breast cancer plays a critical role in improving patient outcomes and increasing treatment effectiveness. Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a minimally invasive test widely used in the analysis of breast cancer. Manual analysis of DCE-MRI images by the specialist is extremely complex and exhaustive, and can lead to misunderstandings. Thus, the development of automated methods for analyzing DCE-MRI images of the breast is increasing. In this research, we propose an automatic methodology capable of detecting tumors and classifying their malignancy in DCE-MRI breast images. Methodology: The proposed method uses two deep learning architectures, SegNet and UNet, for breast tumor segmentation, and the three-time-point (3TP) method for classifying the malignancy of segmented tumors. Results: The proposed methodology was tested on the public Quantitative Imaging Network (QIN) Breast DCE-MRI image set, and the best segmentation result was a Dice of 0.9332 and an IoU of 0.9799. For the classification of tumor malignancy, the methodology presented an accuracy of 100%. Conclusions: In our research, we demonstrate that the problem of mammary tumor segmentation in DCE-MRI images can be efficiently solved using deep learning architectures, and that tumor malignancy classification can be done through the three-time-point (3TP) method. The method can be integrated as a support system for the specialist in treating patients with breast cancer.
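
A minimal NumPy sketch of the Dice and IoU overlap metrics used above to evaluate segmentation quality; the masks are toy examples.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    # Dice = 2|A ∩ B| / (|A| + |B|) for binary masks.
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def iou(a: np.ndarray, b: np.ndarray) -> float:
    # IoU (Jaccard) = |A ∩ B| / |A ∪ B|.
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

pred = np.zeros((64, 64), dtype=bool); pred[10:40, 10:40] = True
gt = np.zeros((64, 64), dtype=bool); gt[15:45, 12:42] = True
print(f"Dice={dice(pred, gt):.4f}, IoU={iou(pred, gt):.4f}")
```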

A Multimodal Ensemble Driven by Multiobjective Optimisation to Predict Overall Survival in Non-Small-Cell Lung Cancer

  • Caruso, C. M.
  • Guarrasi, V.
  • Cordelli, E.
  • Sicilia, R.
  • Gentile, S.
  • Messina, L.
  • Fiore, M.
  • Piccolo, C.
  • Beomonte Zobel, B.
  • Iannello, G.
  • Ramella, S.
  • Soda, P.
2022 Journal Article, cited 0 times
Website
Lung cancer accounts for more deaths worldwide than any other cancer. In order to provide patients with the most effective treatment for these aggressive tumours, multimodal learning is emerging as a new and promising field of research that aims to extract complementary information from the data of different modalities for prognostic and predictive purposes. This knowledge could be used to optimise current treatments and maximise their effectiveness. To predict overall survival, in this work, we investigate the use of multimodal learning on the CLARO dataset, which includes CT images and clinical data collected from a cohort of non-small-cell lung cancer patients. Our method allows the identification of the optimal set of classifiers to be included in the ensemble in a late fusion approach. Specifically, after training unimodal models on each modality, it selects the best ensemble by solving a multiobjective optimisation problem that maximises both the recognition performance and the diversity of the predictions. In the ensemble, the labels of each sample are assigned using the majority voting rule. As further validation, we show that the proposed ensemble outperforms models trained on a single modality, obtaining state-of-the-art results on the task at hand.
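
The late-fusion step assigns each sample the majority-vote label over the selected unimodal classifiers; a minimal NumPy sketch with placeholder predictions from three models.

```python
import numpy as np

# Predictions of three unimodal models (e.g. CT radiomics, clinical data, a CNN)
# for the same 10 patients; the values are placeholders.
preds = np.array([
    [1, 0, 1, 1, 0, 0, 1, 0, 1, 1],   # model A
    [1, 0, 0, 1, 0, 1, 1, 0, 1, 0],   # model B
    [0, 0, 1, 1, 0, 0, 1, 1, 1, 1],   # model C
])

# Majority vote per patient: a sample is positive if at least 2 of 3 agree.
ensemble = (preds.sum(axis=0) >= 2).astype(int)
print(ensemble)
```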

Multimodal mixed reality visualisation for intraoperative surgical guidance

  • Cartucho, João
  • Shapira, David
  • Ashrafian, Hutan
  • Giannarou, Stamatia
International Journal of Computer Assisted Radiology and Surgery 2020 Journal Article, cited 0 times
Website

PARaDIM - A PHITS-based Monte Carlo tool for internal dosimetry with tetrahedral mesh computational phantoms

  • Carter, L. M.
  • Crawford, T. M.
  • Sato, T.
  • Furuta, T.
  • Choi, C.
  • Kim, C. H.
  • Brown, J. L.
  • Bolch, W. E.
  • Zanzonico, P. B.
  • Lewis, J. S.
J Nucl Med 2019 Journal Article, cited 0 times
Website
Mesh-type and voxel-based computational phantoms comprise the current state-of-the-art for internal dose assessment via Monte Carlo simulations, but excel in different aspects, with mesh-type phantoms offering advantages over their voxel counterparts in terms of their flexibility and realistic representation of detailed patient- or subject-specific anatomy. We have developed PARaDIM, a freeware application for implementing tetrahedral mesh-type phantoms in absorbed dose calculations via the Particle and Heavy Ion Transport code System (PHITS). It considers all medically relevant radionuclides including alpha, beta, gamma, positron, and Auger/conversion electron emitters, and handles calculation of mean dose to individual regions, as well as 3D dose distributions for visualization and analysis in a variety of medical imaging software packages. This work describes the development of PARaDIM, documents the measures taken to test and validate its performance, and presents examples to illustrate its uses. Methods: Human, small animal, and cell-level dose calculations were performed with PARaDIM and the results compared with those of widely accepted dosimetry programs and literature data. Several tetrahedral phantoms were developed or adapted using computer-aided modeling techniques for these comparisons. Results: For human dose calculations, agreement of PARaDIM with OLINDA 2.0 was good - within 10-20% for most organs - despite geometric differences among the phantoms tested. Agreement with MIRDcell for cell-level S-value calculations was within 5% in most cases. Conclusion: PARaDIM extends the use of Monte Carlo dose calculations to the broader community in nuclear medicine by providing a user-friendly graphical user interface for calculation setup and execution. PARaDIM leverages the enhanced anatomical realism provided by advanced computational reference phantoms or bespoke image-derived phantoms to enable improved assessments of radiation doses in a variety of radiopharmaceutical use cases, research, and preclinical development.

Standardization of brain MR images across machines and protocols: bridging the gap for MRI-based radiomics

  • Carré, Alexandre
  • Klausner, Guillaume
  • Edjlali, Myriam
  • Lerousseau, Marvin
  • Briend-Diop, Jade
  • Sun, Roger
  • Ammari, Samy
  • Reuzé, Sylvain
  • Andres, Emilie Alvarez
  • Estienne, Théo
Scientific Reports 2020 Journal Article, cited 0 times
Website

Automatic Brain Tumor Segmentation with a Bridge-Unet Deeply Supervised Enhanced with Downsampling Pooling Combination, Atrous Spatial Pyramid Pooling, Squeeze-and-Excitation and EvoNorm

  • Carré, Alexandre
  • Deutsch, Eric
  • Robert, Charlotte
2022 Book Section, cited 0 times
Website
Segmentation of brain tumors is a critical task for patient disease management. Since this task is time-consuming and subject to inter-expert delineation variation, automatic methods are of significant interest. The Multimodal Brain Tumor Segmentation Challenge (BraTS) has been in place for about a decade and provides a common platform to compare different automatic segmentation algorithms based on multiparametric magnetic resonance imaging (mpMRI) of gliomas. This year the challenge has taken a big step forward by multiplying the total data by approximately 3. We address the image segmentation challenge by developing a network based on a Bridge-Unet and improved with a concatenation of max and average pooling for downsampling, Squeeze-and-Excitation (SE) blocks, Atrous Spatial Pyramid Pooling (ASPP), and EvoNorm-S0. Our model was trained using the 1251 training cases from the BraTS 2021 challenge and achieved an average Dice similarity coefficient (DSC) of 0.92457, 0.87811 and 0.84094, as well as a 95% Hausdorff distance (HD) of 4.19442, 7.55256 and 14.13390 mm for the whole tumor, tumor core, and enhancing tumor, respectively, on the online validation platform composed of 219 cases. Similarly, our solution achieved a DSC of 0.92548, 0.87628 and 0.87122, as well as HD95 of 4.30711, 17.84987 and 12.23361 mm on the test dataset composed of 530 cases. Overall, our approach yielded well-balanced performance for each tumor subregion.
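
The DSC and 95% Hausdorff distance (HD95) reported above can be computed for binary masks with distance transforms. A minimal sketch of HD95 using SciPy; this voxel-mask variant approximates the boundary-based metric used by the BraTS evaluation, and the toy masks are placeholders.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def hd95(a: np.ndarray, b: np.ndarray) -> float:
    # Distance from each foreground voxel of one mask to the other mask,
    # symmetrised and summarised at the 95th percentile.
    d_a_to_b = distance_transform_edt(~b)[a]
    d_b_to_a = distance_transform_edt(~a)[b]
    return float(max(np.percentile(d_a_to_b, 95), np.percentile(d_b_to_a, 95)))

pred = np.zeros((64, 64), dtype=bool); pred[10:40, 10:40] = True
gt = np.zeros((64, 64), dtype=bool); gt[14:44, 12:42] = True
print(f"HD95 = {hd95(pred, gt):.2f} voxels")
```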

AutoComBat: a generic method for harmonizing MRI-based radiomic features

  • Carré, A.
  • Battistella, E.
  • Niyoteka, S.
  • Sun, R.
  • Deutsch, E.
  • Robert, C.
2022 Journal Article, cited 0 times
Website
The use of multicentric data is becoming essential for developing generalizable radiomic signatures. In particular, Magnetic Resonance Imaging (MRI) data used in brain oncology are often heterogeneous in terms of scanners and acquisitions, which significantly impacts quantitative radiomic features. Various methods have been proposed to decrease this dependency, including methods acting directly on MR images, i.e., based on the application of several preprocessing steps before feature extraction, or the ComBat method, which harmonizes the radiomic features themselves. The ComBat method used for radiomics can be misleading and presents some limitations, such as the need to know the labels associated with the "batch effect". In addition, a statistically representative sample is required, and a signature cannot be applied when its batch label is not present in the training set. This work aimed to compare a priori and a posteriori radiomic harmonization methods and to propose a code adaptation that is machine learning compatible. Furthermore, we have developed AutoComBat, which aims to automatically determine the batch labels, using either MRI metadata or quality metrics as inputs of the proposed constrained clustering. A heterogeneous dataset consisting of high- and low-grade gliomas coming from eight different centers was considered. The different methods were compared based on their ability to decrease the relative standard deviation of radiomic features extracted from white matter and on their performance on a classification task using different machine learning models. ComBat and AutoComBat using image-derived quality metrics as inputs for batch assignment, as well as preprocessing methods, presented promising results on white matter harmonization, but with no clear consensus for all MR images. Preprocessing showed the best results on the T1w-gd images for the grading task. For T2w-flair, AutoComBat, using either metadata plus quality metrics or metadata alone as inputs, performs better than conventional ComBat, highlighting its potential for data harmonization. Our results depend on MRI weighting, feature class and task, and require further investigation on other datasets.
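
AutoComBat's key idea, per the abstract, is to infer batch labels automatically before harmonisation. A loose sketch that clusters scans by image-derived quality metrics with plain k-means to obtain pseudo-batch labels; the paper uses a constrained clustering, and the downstream ComBat step itself is not shown here.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
# Toy image-derived quality metrics (e.g. SNR, CNR) for scans that in truth
# come from two unknown scanners; the values are invented.
metrics = np.vstack([rng.normal([10, 5], 1.0, (40, 2)),
                     rng.normal([20, 9], 1.0, (40, 2))])

# Pseudo-batch labels to feed into a ComBat implementation downstream.
batch_labels = KMeans(n_clusters=2, n_init=10).fit_predict(metrics)
print(batch_labels)
```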

MultiATTUNet: Brain Tumor Segmentation and Survival Multitasking

  • Carmo, Diedre
  • Rittner, Leticia
  • Lotufo, Roberto
2021 Book Section, cited 0 times
Segmentation of glioma from three-dimensional magnetic resonance imaging (MRI) is useful for diagnosis and surgical treatment of patients with brain tumor. Manual segmentation is expensive, requiring medical specialists. In recent years, the Brain Tumor Segmentation Challenge (BraTS) has been calling researchers to submit automated glioma segmentation and survival prediction methods for evaluation and discussion over their public, multimodality MRI dataset with manual annotations. This work presents an exploration of different solutions to the problem, using 3D UNets with self-attention to multitask both predictions, and also training (2D) EfficientDet-derived segmentations, with the best results submitted to the official challenge leaderboard. We show that end-to-end multitasking of survival and segmentation, in this case, led to better results.

Head and neck cancer patient images for determining auto-segmentation accuracy in T2-weighted magnetic resonance imaging through expert manual segmentations

  • Cardenas, Carlos E
  • Mohamed, Abdallah S R
  • Yang, Jinzhong
  • Gooding, Mark
  • Veeraraghavan, Harini
  • Kalpathy-Cramer, Jayashree
  • Ng, Sweet Ping
  • Ding, Yao
  • Wang, Jihong
  • Lai, Stephen Y
  • Fuller, Clifton D
  • Sharp, Greg
Med Phys 2020 Dataset, cited 0 times
Website
PURPOSE: The use of magnetic resonance imaging (MRI) in radiotherapy treatment planning has rapidly increased due to its ability to evaluate patient's anatomy without the use of ionizing radiation and due to its high soft tissue contrast. For these reasons, MRI has become the modality of choice for longitudinal and adaptive treatment studies. Automatic segmentation could offer many benefits for these studies. In this work, we describe a T2-weighted MRI dataset of head and neck cancer patients that can be used to evaluate the accuracy of head and neck normal tissue auto-segmentation systems through comparisons to available expert manual segmentations. ACQUISITION AND VALIDATION METHODS: T2-weighted MRI images were acquired for 55 head and neck cancer patients. These scans were collected after radiotherapy computed tomography (CT) simulation scans using a thermoplastic mask to replicate patient treatment position. All scans were acquired on a single 1.5 T Siemens MAGNETOM Aera MRI with two large four-channel flex phased-array coils. The scans covered the region encompassing the nasopharynx region cranially and supraclavicular lymph node region caudally, when possible, in the superior-inferior direction. Manual contours were created for the left/right submandibular gland, left/right parotids, left/right lymph node level II, and left/right lymph node level III. These contours underwent quality assurance to ensure adherence to predefined guidelines, and were corrected if edits were necessary. DATA FORMAT AND USAGE NOTES: The T2-weighted images and RTSTRUCT files are available in DICOM format. The regions of interest are named based on AAPM's Task Group 263 nomenclature recommendations (Glnd_Submand_L, Glnd_Submand_R, LN_Neck_II_L, Parotid_L, Parotid_R, LN_Neck_II_R, LN_Neck_III_L, LN_Neck_III_R). This dataset is available on The Cancer Imaging Archive (TCIA) by the National Cancer Institute under the collection "AAPM RT-MAC Grand Challenge 2019" (https://doi.org/10.7937/tcia.2019.bcfjqfqb). POTENTIAL APPLICATIONS: This dataset provides head and neck patient MRI scans to evaluate auto-segmentation systems on T2-weighted images. Additional anatomies could be provided at a later time to enhance the existing library of contours.

FPB: Improving Multi-Scale Feature Representation Inside Convolutional Layer Via Feature Pyramid Block

  • Cao, Zheng
  • Zhang, Kailai
  • Wu, Ji
2020 Conference Paper, cited 0 times
Website
Multi-scale features exist widely in biomedical images. For example, the scale of lesions may vary greatly according to different diseases. Effective representation of multi-scale features is essential for fully perceiving and understanding objects, which guarantees the performance of models. However, in biomedical image tasks, the insufficiency of data may prevent models from effectively capturing multi-scale features. In this paper, we propose Feature Pyramid Block (FPB), a novel structure to improve multi-scale feature representation within a single convolutional layer, which can be easily plugged into existing convolutional networks. Experiments on public biomedical image datasets prove consistent performance improvement with FPB. Furthermore, the convergence speed is faster and the computational costs are lower when using FPB, which proves high efficiency of our method.
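
A generic multi-scale convolutional block in PyTorch, shown only to illustrate the kind of plug-in structure the abstract describes (parallel branches with different receptive fields fused by a 1x1 convolution). This is a hypothetical sketch, not the paper's FPB, whose exact design is not given here.

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Parallel 3x3 convolutions with increasing dilation = larger scales.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in (1, 2, 4)
        )
        self.fuse = nn.Conv2d(3 * channels, channels, 1)  # 1x1 fusion conv

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.fuse(feats) + x  # residual connection keeps shape

x = torch.randn(1, 32, 64, 64)
print(MultiScaleBlock(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```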

A 4D-CBCT correction network based on contrastive learning for dose calculation in lung cancer

  • Cao, N.
  • Wang, Z.
  • Ding, J.
  • Zhang, H.
  • Zhang, S.
  • Gao, L.
  • Sun, J.
  • Xie, K.
  • Ni, X.
Radiat Oncol 2024 Journal Article, cited 0 times
Website
OBJECTIVE: This study aimed to present a deep-learning network called contrastive learning-based cycle generative adversarial networks (CLCGAN) to mitigate streak artifacts and correct the CT values in four-dimensional cone beam computed tomography (4D-CBCT) for dose calculation in lung cancer patients. METHODS: 4D-CBCT and 4D computed tomography (CT) scans of 20 patients with locally advanced non-small cell lung cancer were used to train the deep-learning model in a paired fashion. The lung tumors were located in the right upper lobe, right lower lobe, left upper lobe, and left lower lobe, or in the mediastinum. An additional five patients were used to create 4D synthetic computed tomography (sCT) images for testing. Using the 4D-CT as the ground truth, the quality of the 4D-sCT images was evaluated by quantitative and qualitative assessment methods. The correction of CT values was evaluated holistically and locally. To further validate the accuracy of the dose calculations, we compared the dose distributions and calculations of 4D-CBCT and 4D-sCT with those of 4D-CT. RESULTS: The structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) of the 4D-sCT increased from 87% and 22.31 dB to 98% and 29.15 dB, respectively. Compared with cycle-consistent generative adversarial networks, CLCGAN enhanced SSIM and PSNR by 1.1% (p < 0.01) and 0.42% (p < 0.01). Furthermore, CLCGAN significantly decreased the absolute mean differences of CT values in lungs, bones, and soft tissues. The dose calculation results revealed a significant improvement in 4D-sCT compared to 4D-CBCT. CLCGAN was the most accurate in dose calculations for the left lung (V5Gy), right lung (V5Gy), right lung (V20Gy), PTV (D98%), and spinal cord (D2%), with relative dose differences reduced by 6.84%, 3.84%, 1.46%, 0.86%, and 3.32% compared to 4D-CBCT. CONCLUSIONS: Based on the satisfactory results obtained in terms of image quality and CT value measurement, it can be concluded that CLCGAN-corrected 4D-CBCT can be utilized for dose calculation in lung cancer.
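
The two image-quality metrics reported, PSNR and SSIM, as computed with scikit-image; the "CT" and "sCT" arrays below are synthetic stand-ins.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(2)
ct = rng.uniform(0, 1, size=(128, 128))                     # stand-in for a 4D-CT slice
sct = np.clip(ct + rng.normal(0, 0.05, ct.shape), 0, 1)     # stand-in corrected CBCT

print("PSNR:", peak_signal_noise_ratio(ct, sct, data_range=1.0))
print("SSIM:", structural_similarity(ct, sct, data_range=1.0))
```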

A CNN-transformer fusion network for COVID-19 CXR image classification

  • Cao, K.
  • Deng, T.
  • Zhang, C.
  • Lu, L.
  • Li, L.
PLoS One 2022 Journal Article, cited 0 times
Website
The global health crisis caused by the fast spread of coronavirus disease (Covid-19) has endangered healthcare, the economy, and many other aspects of daily life. The highly infectious and insidious nature of the new coronavirus greatly increases the difficulty of outbreak prevention and control. Early and rapid detection of Covid-19 is an effective way to reduce its spread. However, detecting Covid-19 accurately and quickly in large populations remains a major challenge worldwide. In this study, a CNN-transformer fusion framework is proposed for the automatic classification of pneumonia on chest X-rays. This framework includes two parts: data processing and image classification. The data processing stage eliminates the differences between data from different medical institutions so that they have the same storage format; in the image classification stage, we use a multi-branch network with a custom convolution module and a transformer module, including feature extraction, feature focus, and feature classification sub-networks. The feature extraction subnetwork extracts the shallow features of the image and exchanges information through the convolution and transformer modules. Both local and global features are extracted by the convolution and transformer modules of the feature-focus subnetwork, and are classified by the feature classification subnetwork. The proposed network can decide whether or not a patient has pneumonia, and differentiate between Covid-19 and bacterial pneumonia. The network was implemented on the collected benchmark datasets, and the results show that accuracy, precision, recall, and F1 score are 97.09%, 97.16%, 96.93%, and 97.04%, respectively. Our network was compared with other researchers' proposed methods and achieved better results in terms of accuracy, precision, and F1 score, proving that it is superior for Covid-19 detection. With further improvements to this network, we hope that it will provide doctors with an effective tool for diagnosing Covid-19.

A quantitative model based on clinically relevant MRI features differentiates lower grade gliomas and glioblastoma

  • Cao, H.
  • Erson-Omay, E. Z.
  • Li, X.
  • Gunel, M.
  • Moliterno, J.
  • Fulbright, R. K.
Eur Radiol 2020 Journal Article, cited 0 times
Website
OBJECTIVES: To establish a quantitative MR model that uses clinically relevant features of tumor location and tumor volume to differentiate lower grade glioma (LRGG, grades II and III) and glioblastoma (GBM, grade IV). METHODS: We extracted tumor location and tumor volume (enhancing tumor, non-enhancing tumor, peritumoral edema) features from 229 The Cancer Genome Atlas (TCGA)-LGG and TCGA-GBM cases. Through two sampling strategies, i.e., institution-based sampling and repeated random sampling (10 times, 70% training set vs 30% validation set), LASSO (least absolute shrinkage and selection operator) regression and nine machine-learning-based models were established and evaluated. RESULTS: Principal component analysis of the 229 TCGA-LGG and TCGA-GBM cases suggested that the LRGG and GBM cases could be differentiated by the extracted features. Of the nine machine learning methods, stack modeling and the support vector machine achieved the highest performance (institution-based sampling validation set, AUC > 0.900, classifier accuracy > 0.790; repeated random sampling, average validation set AUC > 0.930, classifier accuracy > 0.850). For the LASSO method, the regression model based on tumor frontal lobe percentage and enhancing and non-enhancing tumor volume achieved the highest performance (institution-based sampling validation set, AUC 0.909, classifier accuracy 0.830). The formula for the best-performing LASSO model was established. CONCLUSIONS: Computer-generated, clinically meaningful MRI features of tumor location and component volumes resulted in models with high performance (validation set AUC > 0.900, classifier accuracy > 0.790) to differentiate lower grade glioma and glioblastoma. KEY POINTS: * Lower grade glioma and glioblastoma have significantly different location and component volume distributions. * We built machine learning prediction models that could help accurately differentiate lower grade glioma and GBM cases. * We introduced a fast evaluation model for possible clinical differentiation and further analysis.
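
An L1-penalised (LASSO) logistic regression mirroring the abstract's modelling step, fitted on synthetic stand-ins for the three selected features (frontal-lobe percentage, enhancing and non-enhancing tumor volumes); the labels follow a toy rule, not TCGA data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 229
X = np.column_stack([
    rng.uniform(0, 1, n),       # frontal-lobe percentage
    rng.lognormal(2, 1, n),     # enhancing tumor volume (mL)
    rng.lognormal(2, 1, n),     # non-enhancing tumor volume (mL)
])
y = (X[:, 1] + rng.normal(0, 5, n) > 8).astype(int)  # 1 = GBM (toy rule)

Xtr, Xva, ytr, yva = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(penalty="l1", solver="liblinear").fit(Xtr, ytr)
print("validation AUC:", roc_auc_score(yva, model.predict_proba(Xva)[:, 1]))
```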

Lung Cancer Identification via Deep Learning: A Multi-Stage Workflow

  • Canavesi, Irene
  • D'Arnese, Eleonora
  • Caramaschi, Sara
  • Santambrogio, Marco D.
2022 Conference Paper, cited 0 times
Website
Lung cancer diagnosis involves different screening exams concluding with a biopsy. Although it is among the most diagnosed, lung cancer is characterized by a very high mortality rate caused by its aggressive nature. Though a swift identification is essential, the current procedure requires multiple physicians to visually inspect many images, leading to a lengthy analysis time. In this context, to support the radiologists and automate such repetitive processes, Deep Learning (DL) techniques have found their way as helpful diagnosis support tools. With this work, we propose an end-to-end multi-step framework for lung cancer localization within routinely acquired Computed Tomography images. The framework is composed of a first step of lung segmentation, followed by a patch classification model, and ends with a mass segmentation module. Lung segmentation reaches an accuracy of 99.6% even when considerable damages are present, while the patch classifier achieves a sensitivity of 85.48% in identifying patches containing masses. Finally, we evaluate the end-to-end framework for mass segmentation, which proves to be the most challenging task reaching a mean Dice coefficient of 68.56%.

A real use case of semi-supervised learning for mammogram classification in a local clinic of Costa Rica

  • Calderon-Ramirez, S.
  • Murillo-Hernandez, D.
  • Rojas-Salazar, K.
  • Elizondo, D.
  • Yang, S.
  • Moemeni, A.
  • Molina-Cabello, M.
2022 Journal Article, cited 13 times
Website
The implementation of deep learning-based computer-aided diagnosis systems for the classification of mammogram images can help improve the accuracy, reliability, and cost of diagnosing patients. However, training a deep learning model requires a considerable amount of labelled images, which can be expensive to obtain as time and effort from clinical practitioners are required. To address this, a number of publicly available datasets have been built with data from different hospitals and clinics, which can be used to pre-train the model. However, using models trained on these datasets for later transfer learning and model fine-tuning with images sampled from a different hospital or clinic might result in lower performance. This is due to the distribution mismatch of the datasets, which include different patient populations and image acquisition protocols. In this work, a real-world scenario is evaluated where a novel target dataset sampled from a private Costa Rican clinic is used, with few labels and heavily imbalanced data. The use of two popular and publicly available datasets (INbreast and CBIS-DDSM) as source data, to train and test the models on the novel target dataset, is evaluated. A common approach to further improve the model's performance under such a small labelled target dataset setting is data augmentation. However, cheaper unlabelled data is often available from the target clinic. Therefore, semi-supervised deep learning, which leverages both labelled and unlabelled data, can be used in such conditions. In this work, we evaluate the semi-supervised deep learning approach known as MixMatch, to take advantage of unlabelled data from the target dataset, for whole mammogram image classification. We compare the usage of semi-supervised learning on its own, and combined with transfer learning (from a source mammogram dataset) with data augmentation, as well as against regular supervised learning with transfer learning and data augmentation from source datasets. It is shown that the use of semi-supervised deep learning combined with transfer learning and data augmentation can provide a meaningful advantage when using scarce labelled observations. We also found a strong influence of the source dataset, which suggests a more data-centric approach is needed to tackle the challenge of scarcely labelled data. We used several different metrics to assess the performance gain of using semi-supervised learning when dealing with very imbalanced test datasets (such as the G-mean and the F2-score), as mammogram datasets are often very imbalanced. Graphical abstract: description of the test-bed implemented in this work. Two different source data distributions were used to fine-tune the different models tested in this work. The target dataset is the in-house CR-Chavarria-2020 dataset.
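
The imbalance-aware metrics the authors emphasise, the G-mean and the F2-score, can be computed with scikit-learn; a sketch on invented labels for a heavily imbalanced test set.

```python
import numpy as np
from sklearn.metrics import fbeta_score, recall_score

y_true = np.array([0] * 90 + [1] * 10)                  # 10% positive class
y_pred = np.array([0] * 85 + [1] * 5 + [0] * 4 + [1] * 6)

sens = recall_score(y_true, y_pred)                     # sensitivity (recall on 1)
spec = recall_score(y_true, y_pred, pos_label=0)        # specificity (recall on 0)
g_mean = np.sqrt(sens * spec)                           # geometric mean of the two
f2 = fbeta_score(y_true, y_pred, beta=2)                # recall-weighted F-score
print(f"G-mean={g_mean:.3f}, F2={f2:.3f}")
```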

Medical Image Retrieval Based on Convolutional Neural Network and Supervised Hashing

  • Cai, Yiheng
  • Li, Yuanyuan
  • Qiu, Changyan
  • Ma, Jie
  • Gao, Xurong
IEEE Access 2019 Journal Article, cited 0 times
Website
In recent years, with extensive application in image retrieval and other tasks, convolutional neural networks (CNNs) have achieved outstanding performance. In this paper, a new content-based medical image retrieval (CBMIR) framework using CNN and hash coding is proposed. The new framework adopts a Siamese network in which pairs of images are used as inputs, and a model is learned to make images belonging to the same class have similar features by using weight sharing and a contrastive loss function. In each branch of the network, a CNN is adapted to extract features, followed by hash mapping, which is used to reduce the dimensionality of the feature vectors. In the training process, a new loss function is designed to make the feature vectors more distinguishable, and a regularization term is added to encourage the real-valued outputs to approximate the desired binary values. In the retrieval phase, the compact binary hash code of the query image is obtained from the trained network and is subsequently compared with the hash codes of the database images. We experimented on two medical image datasets: The Cancer Imaging Archive computed tomography (TCIA-CT) dataset and the Vision and Image Analysis group / International Early Lung Cancer Action Program (VIA/I-ELCAP) dataset. The results indicate that our method is superior to existing hash methods and CNN methods. Compared with traditional hashing methods, feature extraction based on CNNs has advantages. The proposed algorithm combining a Siamese network with the hash method is superior to classical CNN-based methods. The application of the new loss function can effectively improve retrieval accuracy.
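
The retrieval phase described, comparing a binarised query code against database codes, amounts to Hamming distances; a minimal sketch with random stand-in codes.

```python
import numpy as np

rng = np.random.default_rng(4)
db_codes = rng.integers(0, 2, size=(1000, 48), dtype=np.uint8)  # database hash codes
query = rng.integers(0, 2, size=48, dtype=np.uint8)             # query hash code

# Hamming distance = number of differing bits between query and each database code.
hamming = np.count_nonzero(db_codes != query, axis=1)
top10 = np.argsort(hamming)[:10]                                 # nearest neighbours
print("closest database indices:", top10)
```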

An Online Mammography Database with Biopsy Confirmed Types

  • Cai, Hongmin
  • Wang, Jinhua
  • Dan, Tingting
  • Li, Jiao
  • Fan, Zhihao
  • Yi, Weiting
  • Cui, Chunyan
  • Jiang, Xinhua
  • Li, Li
Scientific data 2023 Journal Article, cited 1 times
Website
Breast carcinoma is the second most common cancer among women worldwide. Early detection of breast cancer has been shown to increase the survival rate, thereby significantly increasing patients' lifespan. Mammography, a noninvasive imaging tool with low cost, is widely used to diagnose breast disease at an early stage due to its high sensitivity. Although some public mammography datasets are useful, there is still a lack of open-access datasets that expand beyond the white population, and many lack biopsy confirmation or have unknown molecular subtypes. To fill this gap, we built a database comprising two online breast mammography datasets. The dataset, named the Chinese Mammography Database (CMMD), contains 3712 mammograms from 1775 patients and is divided into two branches. The first dataset, CMMD1, contains 1026 cases (2214 mammograms) with biopsy-confirmed benign or malignant tumors. The second dataset, CMMD2, includes 1498 mammograms from 749 patients with known molecular subtypes. Our database is constructed to enrich the diversity of mammography data and promote the development of relevant fields.

Prognostic generalization of multi-level CT-dose fusion dosiomics from primary tumor and lymph node in nasopharyngeal carcinoma

  • Cai, C.
  • Lv, W.
  • Chi, F.
  • Zhang, B.
  • Zhu, L.
  • Yang, G.
  • Zhao, S.
  • Zhu, Y.
  • Han, X.
  • Dai, Z.
  • Wang, X.
  • Lu, L.
Med Phys 2022 Journal Article, cited 0 times
Website
OBJECTIVES: To investigate the prognostic performance of multi-level CT-dose fusion dosiomics at the image, matrix and feature levels from the gross tumor volume at the nasopharynx and the involved lymph node for nasopharyngeal carcinoma (NPC) patients. MATERIALS AND METHODS: Two hundred and nineteen NPC patients (175 vs. 44 for training vs. internal validation) were used to train the prediction model, and thirty-two NPC patients were used for external validation. We first extracted CT and dose information from intratumoral nasopharynx (GTV_nx) and lymph node (GTV_nd) regions. The corresponding peritumoral regions (RING_3mm and RING_5mm) were also considered. Thus, the individual and combined intra- and peritumoral regions were as follows: GTV_nx, GTV_nd, RING_3mm_nx, RING_3mm_nd, RING_5mm_nx, RING_5mm_nd, GTV_nxnd, RING_3mm_nxnd, RING_5mm_nxnd, GTV+RING_3mm_nxnd and GTV+RING_5mm_nxnd. For each region, eleven models were built by combining 5 clinical parameters and 127 features from (1) dose images alone; (2-7) fused dose and CT images via wavelet-based fusion (WF) using CT weights of 0.2, 0.4, 0.6 and 0.8, gradient transfer fusion (GTF), and guided filtering-based fusion (GFF); (8) fused matrices (sumMat); (9-10) fused features derived via feature averaging (avgFea) and feature concatenation (conFea); and finally, (11) CT images alone. The C-index and Kaplan-Meier curves with log-rank test were used to assess model performance. RESULTS: The fusion models' performance was better than that of single CT/dose models on both internal and external validation. Models combining information from both the GTV_nx and GTV_nd regions outperformed single-region models. For internal validation, the GTV+RING_3mm_nxnd GFF model achieved the highest C-index in both recurrence-free survival (RFS) and metastasis-free survival (MFS) predictions (RFS: 0.822; MFS: 0.786). The highest C-index in the external validation set was achieved by the RING_3mm_nxnd model (RFS: 0.762; MFS: 0.719). The GTV+RING_3mm_nxnd GFF model was able to significantly separate patients into high-risk and low-risk groups, unlike the dose-only or CT-only models. CONCLUSION: The fusion dosiomics model combining the primary tumor, the involved lymph node, and 3 mm peritumoral information outperformed single-modality models for different outcome predictions, which is helpful for clinical decision-making and the development of personalized treatment.
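
Of the fusion strategies listed, wavelet-based fusion (WF) with a fixed CT weight is the easiest to sketch; a toy 2-D version with PyWavelets and a CT weight of 0.6, one of the weights explored above. The arrays stand in for a CT slice and a dose map.

```python
import numpy as np
import pywt

rng = np.random.default_rng(8)
ct = rng.uniform(0, 1, (64, 64))     # stand-in CT slice
dose = rng.uniform(0, 1, (64, 64))   # stand-in dose map

w = 0.6  # CT weight; (1 - w) goes to the dose image
cA_ct, (cH_ct, cV_ct, cD_ct) = pywt.dwt2(ct, "haar")
cA_d, (cH_d, cV_d, cD_d) = pywt.dwt2(dose, "haar")

# Blend each wavelet sub-band, then reconstruct the fused image.
fused = pywt.idwt2(
    (w * cA_ct + (1 - w) * cA_d,
     (w * cH_ct + (1 - w) * cH_d,
      w * cV_ct + (1 - w) * cV_d,
      w * cD_ct + (1 - w) * cD_d)),
    "haar",
)
print(fused.shape)  # (64, 64)
```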

Four‐Dimensional Machine Learning Radiomics for the Pretreatment Assessment of Breast Cancer Pathologic Complete Response to Neoadjuvant Chemotherapy in Dynamic Contrast‐Enhanced MRI

  • Caballo, Marco
  • Sanderink, Wendelien BG
  • Han, Luyi
  • Gao, Yuan
  • Athanasiou, Alexandra
  • Mann, Ritse M
Journal of Magnetic Resonance Imaging 2022 Journal Article, cited 1 times
Website

Selective segmentation of a feature that has two distinct intensities

  • Burrows, Liam
  • Chen, Ke
  • Torella, Francesco
Journal of Algorithms & Computational Technology 2021 Journal Article, cited 0 times
Website
It is common for a segmentation model to compute and locate edges, or regions separated by edges, according to a certain distribution of intensity. However, such edge information is not always useful for extracting an object or feature that has two distinct intensities, e.g. segmentation of a building with signage in front, or of an organ that has diseased regions, unless some kind of manual editing is applied or a learning idea is used. This paper proposes an automatic and selective segmentation model that can segment a feature that has two distinct intensities from a single click. A patch-like idea is employed to design our two-stage model, given only one geometric marker to indicate the location of the inside region. The difficult case where the inside region leans towards the boundary of the feature of interest is investigated, with recommendations given and reliability tested. The model is mainly presented in 2D, but it can be easily generalised to 3D. We have implemented the model for segmenting both 2D and 3D images.

Using computer‐extracted image phenotypes from tumors on breast magnetic resonance imaging to predict breast cancer pathologic stage

  • Burnside, Elizabeth S
  • Drukker, Karen
  • Li, Hui
  • Bonaccio, Ermelinda
  • Zuley, Margarita
  • Ganott, Marie
  • Net, Jose M
  • Sutton, Elizabeth J
  • Brandt, Kathleen R
  • Whitman, Gary J
Cancer 2016 Journal Article, cited 28 times
Website

E1D3 U-Net for Brain Tumor Segmentation: Submission to the RSNA-ASNR-MICCAI BraTS 2021 challenge

  • Bukhari, Syed Talha
  • Mohy-ud-Din, Hassan
2022 Book Section, cited 5 times
Website
Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance in medical image segmentation tasks. A common feature in most top-performing CNNs is an encoder-decoder architecture inspired by the U-Net. For multi-region brain tumor segmentation, 3D U-Net architecture and its variants provide the most competitive segmentation performances. In this work, we propose an interesting extension of the standard 3D U-Net architecture, specialized for brain tumor segmentation. The proposed network, called E1D3 U-Net, is a one-encoder, three-decoder fully-convolutional neural network architecture where each decoder segments one of the hierarchical regions of interest: whole tumor, tumor core, and enhancing core. On the BraTS 2018 validation (unseen) dataset, E1D3 U-Net demonstrates single-prediction performance comparable with most state-of-the-art networks in brain tumor segmentation, with reasonable computational requirements and without ensembling. As a submission to the RSNA-ASNR-MICCAI BraTS 2021 challenge, we also evaluate our proposal on the BraTS 2021 dataset. E1D3 U-Net showcases the flexibility in the standard 3D U-Net architecture which we exploit for the task of brain tumor segmentation.

Comparing nonrigid registration techniques for motion corrected MR prostate diffusion imaging

  • Buerger, C
  • Sénégas, J
  • Kabus, S
  • Carolus, H
  • Schulz, H
  • Agarwal, H
  • Turkbey, B
  • Choyke, PL
  • Renisch, S
Medical Physics 2015 Journal Article, cited 4 times
Website
PURPOSE: T2-weighted magnetic resonance imaging (MRI) is commonly used for anatomical visualization in the pelvis area, such as the prostate, with high soft-tissue contrast. MRI can also provide functional information such as diffusion-weighted imaging (DWI), which depicts the molecular diffusion processes in biological tissues. The combination of anatomical and functional imaging techniques is widely used in oncology, e.g., for prostate cancer diagnosis and staging. However, acquisition-specific distortions as well as physiological motion lead to misalignments between T2 and DWI and consequently to a reduced diagnostic value. Image registration algorithms are commonly employed to correct for such misalignment. METHODS: The authors compare the performance of five state-of-the-art nonrigid image registration techniques for accurate image fusion of DWI with T2. RESULTS: Image data of 20 prostate patients with cancerous lesions or cysts were acquired. All registration algorithms were validated using intensity-based as well as landmark-based techniques. CONCLUSIONS: The authors' results show that the "fast elastic image registration" provides the most accurate results, with a target registration error of 1.07 +/- 0.41 mm at minimum execution times of 11 +/- 1 s.

Association of genomic subtypes of lower-grade gliomas with shape features automatically extracted by a deep learning algorithm

  • Buda, Mateusz
  • Saha, Ashirbani
  • Mazurowski, Maciej A
2019 Journal Article, cited 1 times
Website
Recent analysis identified distinct genomic subtypes of lower-grade glioma tumors which are associated with shape features. In this study, we propose a fully automatic way to quantify tumor imaging characteristics using deep learning-based segmentation and test whether these characteristics are predictive of tumor genomic subtypes. We used preoperative imaging and genomic data of 110 patients from 5 institutions with lower-grade gliomas from The Cancer Genome Atlas. Based on automatic deep learning segmentations, we extracted three features which quantify two-dimensional and three-dimensional characteristics of the tumors. Genomic data for the analyzed cohort of patients consisted of previously identified genomic clusters based on IDH mutation and 1p/19q co-deletion, DNA methylation, gene expression, DNA copy number, and microRNA expression. To analyze the relationship between the imaging features and genomic clusters, we conducted the Fisher exact test for 10 hypotheses for each pair of imaging feature and genomic subtype. To account for multiple hypothesis testing, we applied a Bonferroni correction. P-values lower than 0.005 were considered statistically significant. We found the strongest association between RNASeq clusters and the bounding ellipsoid volume ratio (p < 0.0002) and between RNASeq clusters and margin fluctuation (p < 0.005). In addition, we identified associations between bounding ellipsoid volume ratio and all tested molecular subtypes (p < 0.02) as well as between angular standard deviation and RNASeq cluster (p < 0.02). In terms of automatic tumor segmentation that was used to generate the quantitative image characteristics, our deep learning algorithm achieved a mean Dice coefficient of 82% which is comparable to human performance.
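
The statistical step described, Fisher's exact test per feature-subtype pair with a Bonferroni threshold of 0.05/10 = 0.005, is straightforward to reproduce; the 2x2 counts below are invented for illustration.

```python
from scipy.stats import fisher_exact

# 2x2 contingency table: dichotomised imaging feature vs. genomic cluster.
table = [[18, 7],
         [5, 20]]

odds_ratio, p = fisher_exact(table)
alpha = 0.05 / 10  # Bonferroni correction for the 10 hypotheses tested
print(f"OR={odds_ratio:.2f}, p={p:.4f}, significant={p < alpha}")
```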

Quantitative Imaging Biomarker Ontology (QIBO) for Knowledge Representation of Biomedical Imaging Biomarkers

  • Buckler, Andrew J.
  • Ouellette, M.
  • Danagoulian, J.
  • Wernsing, G.
  • Liu, Tiffany Ting
  • Savig, Erica
  • Suzek, Baris E.
  • Rubin, Daniel L.
  • Paik, David
2013 Journal Article, cited 17 times
Website

Quantitative variations in texture analysis features dependent on MRI scanning parameters: A phantom model

  • Buch, Karen
  • Kuno, Hirofumi
  • Qureshi, Muhammad M
  • Li, Baojun
  • Sakai, Osamu
Journal of applied clinical medical physics 2018 Journal Article, cited 0 times
Website

Two Stages CNN-Based Segmentation of Gliomas, Uncertainty Quantification and Prediction of Overall Patient Survival

  • Buatois, Thibault
  • Puybareau, Élodie
  • Tochon, Guillaume
  • Chazalon, Joseph
2020 Book Section, cited 1 times
Website
This paper proposes, in the context of brain tumor study, a fast automatic method that segments tumors and predicts patient overall survival. The segmentation stage is implemented using two fully convolutional networks based on VGG-16, pre-trained on ImageNet for natural image classification and fine-tuned with the training dataset of the MICCAI 2019 BraTS Challenge. The first network yields a binary segmentation (background vs lesion) and the second one focuses on the enhancing and non-enhancing tumor classes. The final multiclass segmentation is a fusion of the results of these two networks. The prediction stage is implemented using kernel principal component analysis and random forest classifiers. It only requires a predicted segmentation of the tumor and a homemade atlas. Its simplicity allows it to be trained with very few examples, and it can be used after any segmentation process.

A Novel Hybridized Feature Extraction Approach for Lung Nodule Classification Based on Transfer Learning Technique

  • Bruntha, P. M.
  • Pandian, S. I. A.
  • Anitha, J.
  • Abraham, S. S.
  • Kumar, S. N.
J Med Phys 2022 Journal Article, cited 0 times
Website
Purpose: In the field of medical diagnosis, deep learning-based computer-aided detection can reduce the diagnostic burden on physicians, especially in the case of lung cancer nodule classification. Materials and Methods: A hybridized model which integrates deep features from a Residual Neural Network using transfer learning and handcrafted features from the histogram of oriented gradients feature descriptor is proposed to classify lung nodules as benign or malignant. The intrinsic convolutional neural network (CNN) features are incorporated because they can resolve the drawbacks of handcrafted features, which do not completely reflect the specific characteristics of a nodule. At the same time, they reduce the need for a large-scale annotated dataset for CNNs. For classifying malignant and benign nodules, a radial basis function support vector machine is used. The proposed hybridized model is evaluated on the LIDC-IDRI dataset. Results: It achieved an accuracy of 97.53%, sensitivity of 98.62%, specificity of 96.88%, precision of 95.04%, F1 score of 0.9679, false-positive rate of 3.117%, and false-negative rate of 1.38%, and has been compared with other state-of-the-art techniques. Conclusions: The performance of the proposed hybridized feature-based classification technique is better than that of deep-features-based classification in lung nodule classification.
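
A loose sketch of the hybrid-descriptor idea: handcrafted HOG features concatenated with deep features, classified by an RBF-kernel SVM. The deep features are faked with random vectors to keep the sketch self-contained; the real model extracts them from a ResNet.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

rng = np.random.default_rng(9)
patches = rng.uniform(0, 1, size=(50, 64, 64))   # toy nodule patches
labels = rng.integers(0, 2, size=50)             # benign / malignant (invented)

# Handcrafted HOG descriptors per patch.
hog_feats = np.array([hog(p, pixels_per_cell=(16, 16)) for p in patches])
# Placeholder for CNN features; in the paper these come from a ResNet.
deep_feats = rng.normal(size=(50, 128))

X = np.hstack([hog_feats, deep_feats])           # hybridized feature vector
clf = SVC(kernel="rbf").fit(X, labels)
print("train accuracy:", clf.score(X, labels))
```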

Formal methods for prostate cancer gleason score and treatment prediction using radiomic biomarkers

  • Brunese, Luca
  • Mercaldo, Francesco
  • Reginelli, Alfonso
  • Santone, Antonella
Magnetic resonance imaging 2020 Journal Article, cited 11 times
Website

An ensemble learning approach for brain cancer detection exploiting radiomic features

  • Brunese, Luca
  • Mercaldo, Francesco
  • Reginelli, Alfonso
  • Santone, Antonella
Comput Methods Programs Biomed 2019 Journal Article, cited 1 times
Website
BACKGROUND AND OBJECTIVE: Brain cancer is one of the most aggressive tumours: 70% of patients diagnosed with this malignant cancer will not survive. Early detection of brain tumours can be fundamental to increasing survival rates. Brain cancers are classified into four grades (i.e., I, II, III and IV) according to how normal or abnormal the brain cells look. The following work aims to recognize the different brain cancer grades by analysing brain magnetic resonance images. METHODS: A method to identify the components of an ensemble learner is proposed. The ensemble learner is focused on discriminating between different brain cancer grades using non-invasive radiomic features. The considered radiomic features belong to five different groups: First Order, Shape, Gray Level Co-occurrence Matrix, Gray Level Run Length Matrix and Gray Level Size Zone Matrix. We evaluate the effectiveness of the features through hypothesis testing and through decision boundaries, performance analysis and calibration plots; thus we select the best candidate classifiers for the ensemble learner. RESULTS: We evaluate the proposed method with 111,205 brain magnetic resonance images belonging to two datasets freely available for research purposes. The results are encouraging: we obtain an accuracy of 99% for detection of benign grade I and malignant grade II, III and IV brain cancers. CONCLUSION: The experimental results confirm that the ensemble learner designed with the proposed method outperforms current state-of-the-art approaches in brain cancer grade detection starting from magnetic resonance images.

Cancer as a Model System for Testing Metabolic Scaling Theory

  • Brummer, Alexander B.
  • Savage, Van M.
Frontiers in Ecology and Evolution 2021 Journal Article, cited 0 times
Website
Biological allometries, such as the scaling of metabolism to mass, are hypothesized to result from natural selection to maximize how vascular networks fill space yet minimize internal transport distances and resistance to blood flow. Metabolic scaling theory argues two guiding principles—conservation of fluid flow and space-filling fractal distributions—describe a diversity of biological networks and predict how the geometry of these networks influences organismal metabolism. Yet, mostly absent from past efforts are studies that directly, and independently, measure metabolic rate from respiration and vascular architecture for the same organ, organism, or tissue. Lack of these measures may lead to inconsistent results and conclusions about metabolism, growth, and allometric scaling. We present simultaneous and consistent measurements of metabolic scaling exponents from clinical images of lung cancer, serving as a first-of-its-kind test of metabolic scaling theory, and identifying potential quantitative imaging biomarkers indicative of tumor growth. We analyze data for 535 clinical PET-CT scans of patients with non-small cell lung carcinoma to establish the presence of metabolic scaling between tumor metabolism and tumor volume. Furthermore, we use computer vision and mathematical modeling to examine predictions of metabolic scaling based on the branching geometry of the tumor-supplying blood vessel networks in a subset of 56 patients diagnosed with stage II-IV lung cancer. Examination of the scaling of maximum standard uptake value with metabolic tumor volume, and metabolic tumor volume with gross tumor volume, yield metabolic scaling exponents of 0.64 (0.20) and 0.70 (0.17), respectively. We compare these to the value of 0.85 (0.06) derived from the geometric scaling of the tumor-supplying vasculature. These results: (1) inform energetic models of growth and development for tumor forecasting; (2) identify imaging biomarkers in vascular geometry related to blood volume and flow; and (3) highlight unique opportunities to develop and test the metabolic scaling theory of ecology in tumors transitioning from avascular to vascular geometries.
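
Scaling exponents such as those reported (e.g. 0.70 for metabolic tumor volume versus gross tumor volume) are estimated as slopes of log-log regressions. A minimal sketch on a synthetic power law:

```python
import numpy as np

rng = np.random.default_rng(5)
gtv = rng.lognormal(mean=3, sigma=1, size=535)           # gross tumor volumes
mtv = gtv ** 0.7 * rng.lognormal(0, 0.2, size=535)        # toy power law with noise

# Slope of the log-log fit is the scaling exponent.
slope, intercept = np.polyfit(np.log(gtv), np.log(mtv), deg=1)
print(f"estimated scaling exponent: {slope:.2f}")         # ~0.7 by construction
```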

Fitting Segmentation Networks on Varying Image Resolutions Using Splatting

  • Brudfors, M.
  • Balbastre, Y.
  • Ashburner, J.
  • Rees, G.
  • Nachev, P.
  • Ourselin, S.
  • Cardoso, M. J.
2022 Conference Paper, cited 0 times
Website
Data used in image segmentation are not always defined on the same grid. This is particularly true for medical images, where the resolution, field-of-view and orientation can differ across channels and subjects. Images and labels are therefore commonly resampled onto the same grid, as a pre-processing step. However, the resampling operation introduces partial volume effects and blurring, thereby changing the effective resolution and reducing the contrast between structures. In this paper we propose a splat layer, which automatically handles resolution mismatches in the input data. This layer pushes each image onto a mean space where the forward pass is performed. As the splat operator is the adjoint to the resampling operator, the mean-space prediction can be pulled back to the native label space, where the loss function is computed. Thus, the need for explicit resolution adjustment using interpolation is removed. We show on two publicly available datasets, with simulated and real multi-modal magnetic resonance images, that this model improves segmentation results compared to resampling as a pre-processing step.
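
A loose 1-D sketch of the splat idea: intensities are pushed (scatter-added) onto a mean-space grid, the adjoint of pulling values by resampling. The paper's layer is more general (3-D, differentiable, arbitrary voxel-to-world maps); this nearest-neighbour NumPy version only illustrates the push step.

```python
import numpy as np

native = np.array([1.0, 2.0, 4.0, 8.0])   # native-space intensities
coords = np.array([0.1, 0.9, 1.2, 2.8])   # their mapped mean-space positions

mean_space = np.zeros(4)
counts = np.zeros(4)
idx = np.round(coords).astype(int)         # nearest mean-space voxel per sample

np.add.at(mean_space, idx, native)         # push (splat) intensities
np.add.at(counts, idx, 1.0)                # track how many samples landed where
mean_space /= np.maximum(counts, 1.0)      # normalise occupied voxels
print(mean_space)                          # [1., 3., 0., 8.]
```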

Repeatability of radiotherapy dose-painting prescriptions derived from a multiparametric magnetic resonance imaging model of glioblastoma infiltration

  • Brighi, C.
  • Verburg, N.
  • Koh, E. S.
  • Walker, A.
  • Chen, C.
  • Pillay, S.
  • de Witt Hamer, P. C.
  • Aly, F.
  • Holloway, L. C.
  • Keall, P. J.
  • Waddington, D. E. J.
Phys Imaging Radiat Oncol 2022 Journal Article, cited 0 times
Website
Background and purpose: Glioblastoma (GBM) patients have a dismal prognosis. Tumours typically recur within months of surgical resection and post-operative chemoradiation. Multiparametric magnetic resonance imaging (mpMRI) biomarkers promise to improve GBM outcomes by identifying likely regions of infiltrative tumour in tumour probability (TP) maps. These regions could be treated with escalated dose via dose-painting radiotherapy to achieve higher rates of tumour control. Crucial to the technical validation of dose-painting using imaging biomarkers is the repeatability of the derived dose prescriptions. Here, we quantify repeatability of dose-painting prescriptions derived from mpMRI. Materials and methods: TP maps were calculated with a clinically validated model that linearly combined apparent diffusion coefficient (ADC) and relative cerebral blood volume (rBV) or ADC and relative cerebral blood flow (rBF) data. Maps were developed for 11 GBM patients who received two mpMRI scans separated by a short interval prior to chemoradiation treatment. A linear dose mapping function was applied to obtain dose-painting prescription (DP) maps for each session. Voxel-wise and group-wise repeatability metrics were calculated for parametric, TP and DP maps within radiotherapy margins. Results: DP maps derived from mpMRI were repeatable between imaging sessions (ICC > 0.85). ADC maps showed higher repeatability than rBV and rBF maps (Wilcoxon test, p = 0.001). TP maps obtained from the combination of ADC and rBF were the most stable (median ICC: 0.89). Conclusions: Dose-painting prescriptions derived from a mpMRI model of tumour infiltration have a good level of repeatability and can be used to generate reliable dose-painting plans for GBM patients.
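
Repeatability between the two imaging sessions is summarised with the intraclass correlation coefficient; a one-way random-effects ICC(1,1) can be computed directly from the ANOVA mean squares. A NumPy sketch on synthetic paired measurements (11 patients, as in the study):

```python
import numpy as np

rng = np.random.default_rng(6)
subject_effect = rng.normal(0, 1.0, size=11)  # 11 patients, true per-subject values
# Two sessions per subject with measurement noise; shape (n subjects, k sessions).
sessions = np.stack([subject_effect + rng.normal(0, 0.3, 11) for _ in range(2)], axis=1)

n, k = sessions.shape
grand = sessions.mean()
ms_between = k * ((sessions.mean(axis=1) - grand) ** 2).sum() / (n - 1)
ms_within = ((sessions - sessions.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))

# One-way random-effects ICC(1,1) = (MSB - MSW) / (MSB + (k - 1) * MSW).
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print(f"ICC(1,1) = {icc:.2f}")
```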

Comparative study of preclinical mouse models of high-grade glioma for nanomedicine research: the importance of reproducing blood-brain barrier heterogeneity

  • Brighi, C.
  • Reid, L.
  • Genovesi, L. A.
  • Kojic, M.
  • Millar, A.
  • Bruce, Z.
  • White, A. L.
  • Day, B. W.
  • Rose, S.
  • Whittaker, A. K.
  • Puttick, S.
Theranostics 2020 Journal Article, cited 32 times
Website
The clinical translation of new nanoparticle-based therapies for high-grade glioma (HGG) remains extremely poor. This has partly been due to the lack of suitable preclinical mouse models capable of replicating the complex characteristics of recurrent HGG (rHGG), namely the heterogeneous structural and functional characteristics of the blood-brain barrier (BBB). The goal of this study is to compare the characteristics of the tumor BBB of rHGG with two different mouse models of HGG, the ubiquitously used U87 cell line xenograft model and a patient-derived cell line WK1 xenograft model, in order to assess their suitability for nanomedicine research. Method: Structural MRI was used to assess the extent of BBB opening in mouse models with a fully developed tumor, and dynamic contrast enhanced MRI was used to obtain values of BBB permeability in contrast enhancing tumor. H&E and immunofluorescence staining were used to validate results obtained from the in vivo imaging studies. Results: The extent of BBB disruption and permeability in the contrast enhancing tumor was significantly higher in the U87 model than in rHGG. These values in the WK1 model are similar to those of rHGG. The U87 model is not infiltrative, has an entirely abnormal and leaky vasculature and it is not of glial origin. The WK1 model infiltrates into the non-neoplastic brain parenchyma, it has both regions with intact BBB and regions with leaky BBB and remains of glial origin. Conclusion: The WK1 mouse model more accurately reproduces the extent of BBB disruption, the level of BBB permeability and the histopathological characteristics found in rHGG patients than the U87 mouse model, and is therefore a more clinically relevant model for preclinical evaluations of emerging nanoparticle-based therapies for HGG.

Constructing 3D-Printable CAD Models of Prostates from MR Images

  • Breseman, Kelsey
  • Lee, Christopher
  • Bloch, B Nicholas
  • Jaffe, Carl
2013 Conference Proceedings, cited 1 times
Website
This paper describes the development of a procedure to generate patient-specific, three-dimensional (3D) solid models of prostates (and related anatomy) from magnetic resonance (MR) images. The 3D models are rendered in STL file format which can be physically printed or visualized on a holographic display system. An example is presented in which a 3D model is printed following this procedure.

A volumetric technique for fossil body mass estimation applied to Australopithecus afarensis

  • Brassey, Charlotte A
  • O'Mahoney, Thomas G
  • Chamberlain, Andrew T
  • Sellers, William I
Journal of human evolution 2018 Journal Article, cited 3 times
Website

Association of Peritumoral Radiomics With Tumor Biology and Pathologic Response to Preoperative Targeted Therapy for HER2 (ERBB2)-Positive Breast Cancer

  • Braman, Nathaniel
  • Prasanna, Prateek
  • Whitney, Jon
  • Singh, Salendra
  • Beig, Niha
  • Etesami, Maryam
  • Bates, David D. B.
  • Gallagher, Katherine
  • Bloch, B. Nicolas
  • Vulchi, Manasa
  • Turk, Paulette
  • Bera, Kaustav
  • Abraham, Jame
  • Sikov, William M.
  • Somlo, George
  • Harris, Lyndsay N.
  • Gilmore, Hannah
  • Plecha, Donna
  • Varadan, Vinay
  • Madabhushi, Anant
JAMA Netw Open 2019 Journal Article, cited 0 times
Website
Importance: There has been significant recent interest in understanding the utility of quantitative imaging to delineate breast cancer intrinsic biological factors and therapeutic response. No clinically accepted biomarkers are as yet available for estimation of response to human epidermal growth factor receptor 2 (currently known as ERBB2, but referred to as HER2 in this study)–targeted therapy in breast cancer. Objective: To determine whether imaging signatures on clinical breast magnetic resonance imaging (MRI) could noninvasively characterize HER2-positive tumor biological factors and estimate response to HER2-targeted neoadjuvant therapy. Design, Setting, and Participants: In a retrospective diagnostic study encompassing 209 patients with breast cancer, textural imaging features extracted within the tumor and annular peritumoral tissue regions on MRI were examined as a means to identify increasingly granular breast cancer subgroups relevant to therapeutic approach and response. First, among a cohort of 117 patients who received an MRI prior to neoadjuvant chemotherapy (NAC) at a single institution from April 27, 2012, through September 4, 2015, imaging features that distinguished HER2+ tumors from other receptor subtypes were identified. Next, among a cohort of 42 patients with HER2+ breast cancers with available MRI and RNA-seq data accumulated from a multicenter, preoperative clinical trial (BrUOG 211B), a signature of the response-associated HER2-enriched (HER2-E) molecular subtype within HER2+ tumors (n = 42) was identified. The association of this signature with pathologic complete response was explored in 2 patient cohorts from different institutions, where all patients received HER2-targeted NAC (n = 28, n = 50). Finally, the association between significant peritumoral features and lymphocyte distribution was explored in patients within the BrUOG 211B trial who had corresponding biopsy hematoxylin-eosin–stained slide images. Data analysis was conducted from January 15, 2017, to February 14, 2019. Main Outcomes and Measures: Evaluation of imaging signatures by the area under the receiver operating characteristic curve (AUC) in identifying HER2+ molecular subtypes and distinguishing pathologic complete response (ypT0/is) to NAC with HER2-targeting. Results: In the 209 patients included (mean [SD] age, 51.1 [11.7] years), features from the peritumoral regions better discriminated HER2-E tumors (maximum AUC, 0.85; 95% CI, 0.79-0.90; 9-12 mm from the tumor) compared with intratumoral features (AUC, 0.76; 95% CI, 0.69-0.84). A classifier combining peritumoral and intratumoral features identified the HER2-E subtype (AUC, 0.89; 95% CI, 0.84-0.93) and was significantly associated with response to HER2-targeted therapy in both validation cohorts (AUC, 0.80; 95% CI, 0.61-0.98 and AUC, 0.69; 95% CI, 0.53-0.84). Features from the 0- to 3-mm peritumoral region were significantly associated with the density of tumor-infiltrating lymphocytes (R2 = 0.57; 95% CI, 0.39-0.75; P = .002). Conclusions and Relevance: A combination of peritumoral and intratumoral characteristics appears to identify intrinsic molecular subtypes of HER2+ breast cancers from imaging, offering insights into immune response within the peritumoral environment and suggesting potential benefit for treatment guidance.

Radiomics and deep learning methods for the prediction of 2-year overall survival in LUNG1 dataset

  • Braghetto, A.
  • Marturano, F.
  • Paiusco, M.
  • Baiesi, M.
  • Bettinelli, A.
2022 Journal Article, cited 0 times
Website
In this study, we tested and compared radiomics and deep learning-based approaches on the public LUNG1 dataset, for the prediction of 2-year overall survival (OS) in non-small cell lung cancer patients. Radiomic features were extracted from the gross tumor volume using Pyradiomics, while deep features were extracted from bi-dimensional tumor slices by a convolutional autoencoder. Both radiomic and deep features were fed to 24 different pipelines formed by the combination of four feature selection/reduction methods and six classifiers. Direct classification through convolutional neural networks (CNNs) was also performed. Each approach was investigated with and without the inclusion of clinical parameters. The maximum area under the receiver operating characteristic curve on the test set improved from 0.59, obtained for the baseline clinical model, to 0.67 +/- 0.03, 0.63 +/- 0.03 and 0.67 +/- 0.02 for models based on radiomic features, deep features, and their combination, and to 0.64 +/- 0.04 for direct CNN classification. Despite the high number of pipelines and approaches tested, results were comparable and in line with previous works, hence confirming that it is challenging to extract further imaging-based information from the LUNG1 dataset for the prediction of 2-year OS.
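
The pipeline design described above (feature selection/reduction methods crossed with classifiers) maps naturally onto scikit-learn. A sketch with synthetic stand-in features follows; the two selectors and two classifiers shown are a reduced example of the paper's 4 x 6 grid, and the random data are placeholders for the extracted radiomic or deep features.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 100))   # stand-in for radiomic/deep features
y = rng.integers(0, 2, 200)       # stand-in for 2-year OS labels

selectors = {"kbest": SelectKBest(f_classif, k=20),
             "pca": PCA(n_components=20)}
classifiers = {"logreg": LogisticRegression(max_iter=1000),
               "rf": RandomForestClassifier(n_estimators=200, random_state=0)}

# Every selector/classifier combination becomes one pipeline,
# scored here by cross-validated AUC.
for s_name, sel in selectors.items():
    for c_name, clf in classifiers.items():
        pipe = Pipeline([("scale", StandardScaler()),
                         ("select", sel), ("clf", clf)])
        auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
        print(f"{s_name} + {c_name}: AUC = {auc:.2f}")
```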

Classifying the Acquisition Sequence for Brain MRIs Using Neural Networks on Single Slices

  • Braeker, N.
  • Schmitz, C.
  • Wagner, N.
  • Stanicki, B. J.
  • Schroder, C.
  • Ehret, F.
  • Furweger, C.
  • Zwahlen, D. R.
  • Forster, R.
  • Muacevic, A.
  • Windisch, P.
2022 Journal Article, cited 2 times
Website
Background Neural networks for analyzing MRIs are oftentimes trained on particular combinations of perspectives and acquisition sequences. Since real-world data are less structured and do not follow a standard denomination of acquisition sequences, this impedes the transition from deep learning research to clinical application. The purpose of this study is therefore to assess the feasibility of classifying the acquisition sequence from a single MRI slice using convolutional neural networks. Methods A total of 113 MRI slices from 52 patients were used in a transfer learning approach to train three convolutional neural networks of different complexities to predict the acquisition sequence, while 27 slices were used for internal validation. The model then underwent external validation on 600 slices from 273 patients belonging to one of four classes (T1-weighted without contrast enhancement, T1-weighted with contrast enhancement, T2-weighted, and diffusion-weighted). Categorical accuracy was noted, and the results of the predictions for the validation set are provided with confusion matrices. Results The neural networks achieved a categorical accuracy of 0.79, 0.81, and 0.84 on the external validation data. The implementation of Grad-CAM showed no clear pattern of focus except for T2-weighted slices, where the network focused on areas containing cerebrospinal fluid. Conclusion Automatically classifying the acquisition sequence using neural networks seems feasible and could be used to facilitate the automatic labelling of MRI data.

Singular value decomposition using block least mean square method for image denoising and compression

  • Boyat, Ajay Kumar
  • Khare, Parth
2015 Conference Proceedings, cited 1 times
Website
Image denoising is a well-documented part of image processing. It has always posed a problem for researchers, and there is no dearth of solutions extended. Obtaining a denoised and perfectly similar image after application of processes represents a mirage that has been chased a lot. In this paper, we attempt to combine the effects of the block least mean square (BLMS) algorithm, which maximizes the Peak Signal to Noise Ratio (PSNR), with singular value decomposition (SVD), so as to achieve results that bring us closer to our aim of perfect reconstruction. The results showed that the combination of these methods provides easy computation, coupled with efficiency, and as such is an effective way of approaching the problem.
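
The SVD half of the method rests on truncating small singular values, which predominantly carry noise. A minimal sketch of rank-truncation denoising with a PSNR check follows; the BLMS adaptive-filter stage is omitted, and the toy image, rank and assumed signal peak are illustrative choices.

```python
import numpy as np

def svd_denoise(image, rank):
    """Keep only the `rank` largest singular values: small singular
    values mostly carry noise, so truncation acts as a denoiser."""
    u, s, vt = np.linalg.svd(image, full_matrices=False)
    s[rank:] = 0.0
    return u @ np.diag(s) @ vt

def psnr(clean, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB (peak=1.0 assumed signal peak)."""
    mse = np.mean((clean - estimate) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
clean = np.outer(np.sin(np.linspace(0, 3, 128)),
                 np.cos(np.linspace(0, 3, 128)))   # smooth low-rank image
noisy = clean + rng.normal(0.0, 0.1, clean.shape)
print(psnr(clean, noisy), psnr(clean, svd_denoise(noisy, rank=8)))
```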

Radiogenomics of Clear Cell Renal Cell Carcinoma: Associations Between mRNA-Based Subtyping and CT Imaging Features

  • Bowen, Lan
  • Xiaojing, Li
Academic Radiology 2018 Journal Article, cited 0 times
Website

A CT-based transfer learning approach to predict NSCLC recurrence: The added-value of peritumoral region

  • Bove, S.
  • Fanizzi, A.
  • Fadda, F.
  • Comes, M. C.
  • Catino, A.
  • Cirillo, A.
  • Cristofaro, C.
  • Montrone, M.
  • Nardone, A.
  • Pizzutilo, P.
  • Tufaro, A.
  • Galetta, D.
  • Massafra, R.
PLoS One 2023 Journal Article, cited 0 times
Website
Non-small cell lung cancer (NSCLC) represents 85% of all new lung cancer diagnoses and presents a high recurrence rate after surgery. Thus, an accurate prediction of recurrence risk in NSCLC patients at diagnosis could be essential to designate risk patients to more aggressive medical treatments. In this manuscript, we apply a transfer learning approach to predict recurrence in NSCLC patients, exploiting only data acquired during the screening phase. Particularly, we used a public radiogenomic dataset of NSCLC patients having a primary tumor CT image and clinical information. Starting from the CT slice containing the tumor with maximum area, we considered three different dilation sizes to identify three Regions of Interest (ROIs): CROP (without dilation), CROP 10 and CROP 20. Then, from each ROI, we extracted radiomic features by means of different pre-trained CNNs. The latter were combined with clinical information, and a Support Vector Machine classifier was trained to predict NSCLC recurrence. The classification performances of the devised models were finally evaluated on both the hold-out training and hold-out test sets, into which the original sample had previously been divided. The experimental results showed that the model obtained analyzing CROP 20 images, which are the ROIs containing the most peritumoral area, achieved the best performances on both the hold-out training set, with an AUC of 0.73, an Accuracy of 0.61, a Sensitivity of 0.63, and a Specificity of 0.60, and on the hold-out test set, with an AUC value of 0.83, an Accuracy value of 0.79, a Sensitivity value of 0.80, and a Specificity value of 0.78. The proposed model represents a promising procedure for the early prediction of recurrence risk in NSCLC patients.
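
A sketch of the general recipe described above: dilate a tumour bounding box to include peritumoral area, extract deep features with a pre-trained CNN, and train an SVM on them. The ResNet50 backbone, box coordinates and toy data are assumptions (the paper evaluated several pre-trained CNNs and also combined the features with clinical variables); running it downloads ImageNet weights.

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

def extract_roi(ct_slice, box, dilation):
    """Crop a tumour bounding box enlarged by `dilation` pixels
    (CROP / CROP 10 / CROP 20 style) and resize for the backbone."""
    r0, r1, c0, c1 = box
    r0, c0 = max(r0 - dilation, 0), max(c0 - dilation, 0)
    r1 = min(r1 + dilation, ct_slice.shape[0])
    c1 = min(c1 + dilation, ct_slice.shape[1])
    roi = tf.image.resize(ct_slice[r0:r1, c0:c1, None], (224, 224)).numpy()
    return np.repeat(roi, 3, axis=-1)            # grayscale -> 3 channels

backbone = tf.keras.applications.ResNet50(include_top=False, pooling="avg")

def deep_features(rois):
    batch = tf.keras.applications.resnet50.preprocess_input(np.stack(rois))
    return backbone.predict(batch, verbose=0)

# Toy data standing in for CT slices, tumour boxes and recurrence labels
rng = np.random.default_rng(0)
slices = rng.uniform(0, 255, (8, 512, 512))
rois = [extract_roi(s, (200, 260, 200, 260), dilation=20) for s in slices]
X = deep_features(rois)
y = rng.integers(0, 2, 8)
clf = SVC(kernel="rbf").fit(X, y)    # recurrence classifier on deep features
```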

Using Separated Inputs for Multimodal Brain Tumor Segmentation with 3D U-Net-like Architectures

  • Boutry, N.
  • Chazalon, J.
  • Puybareau, E.
  • Tochon, G.
  • Talbot, H.
  • Géraud, T.
2020 Book Section, cited 5 times
Website
The work presented in this paper addresses the MICCAI BraTS 2019 challenge devoted to brain tumor segmentation using magnetic resonance images. For each task of the challenge, we proposed and submitted for evaluation an original method. For the tumor segmentation task (Task 1), our convolutional neural network is based on a variant of the U-Net architecture of Ronneberger et al. with two modifications: first, we separate the four convolution parts to decorrelate the weights corresponding to each modality, and second, we provide volumes of size 240×240×3 as inputs in these convolution parts. This way, we profit from the 3D aspect of the input signal and do not use the same weights for separate inputs. For the overall survival task (Task 2), we compute explainable features and use a kernel PCA embedding followed by a Random Forest classifier to build a predictor with very few training samples. For the uncertainty estimation task (Task 3), we introduce and compare lightweight methods based on simple principles which can be applied to any segmentation approach. The overall performance of each of our contributions is honorable given their low computational requirements both for training and testing.
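
A minimal sketch of the separated-inputs idea: one convolutional branch per MR modality, so weights are not shared across modalities, with each branch receiving a 240×240×3 stack of adjacent slices. The branch depth and channel counts are placeholders, not the authors' actual U-Net variant.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def branch(name):
    """One encoder branch per modality so its weights stay decorrelated
    from the other modalities; input is a 240x240x3 slice stack."""
    inp = layers.Input(shape=(240, 240, 3), name=name)
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(x)
    return inp, x

inputs, feats = zip(*[branch(m) for m in ("t1", "t1ce", "t2", "flair")])
x = layers.Concatenate()(list(feats))   # merge per-modality feature maps
x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
out = layers.Conv2D(4, 1, activation="softmax")(x)   # 4 segmentation classes
model = Model(list(inputs), out)
model.summary()
```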

Health Vigilance for Medical Imaging Diagnostic Optimization: Automated segmentation of COVID-19 lung infection from CT images

  • Bourekkadi, S.
  • Mohamed, Chala
  • Nsiri, Benayad
  • Abdelmajid, Soulaymani
  • Abdelghani, Mokhtari
  • Brahim, Benaji
  • Hami, H.
  • Mokhtari, A.
  • Slimani, K.
  • Soulaymani, A.
E3S Web of Conferences 2021 Journal Article, cited 0 times
Website
COVID-19 has confronted the world with an unprecedented health crisis; faced with its quick spread, health systems are called upon to increase their vigilance. It is therefore essential to set up quick, automated diagnosis that can alleviate pressure on health systems. Many techniques are used to diagnose COVID-19, including imaging techniques such as computed tomography (CT). In this paper, we present an automatic method for COVID-19 lung infection segmentation from CT images that can be integrated into a decision support system for the diagnosis of the disease. To achieve this goal, we focused on techniques based on artificial intelligence, in particular deep convolutional neural networks, and specifically on the encoder-decoder models most popular in the medical imaging community. We used an open-access data collection for artificial-intelligence-based COVID-19 CT segmentation or classification as the dataset, and implemented the proposed model with the Keras framework in Python. A short description of the model, training, validation and predictions is given, and the result is compared against existing labeled data. Testing the trained model on new images, we obtained an area under the ROC curve of 0.884 against manual expert segmentation. Finally, an overview of future work is given, including the use of the proposed model in a homogeneous framework in a medical imaging context for clinical purposes.

Glioblastoma Surgery Imaging–Reporting and Data System: Validation and Performance of the Automated Segmentation Task

  • Bouget, David
  • Eijgelaar, Roelant S.
  • Pedersen, André
  • Kommers, Ivar
  • Ardon, Hilko
  • Barkhof, Frederik
  • Bello, Lorenzo
  • Berger, Mitchel S.
  • Nibali, Marco Conti
  • Furtner, Julia
  • Fyllingen, Even Hovig
  • Hervey-Jumper, Shawn
  • Idema, Albert J. S.
  • Kiesel, Barbara
  • Kloet, Alfred
  • Mandonnet, Emmanuel
  • Müller, Domenique M. J.
  • Robe, Pierre A.
  • Rossi, Marco
  • Sagberg, Lisa M.
  • Sciortino, Tommaso
  • Van den Brink, Wimar A.
  • Wagemakers, Michiel
  • Widhalm, Georg
  • Witte, Marnix G.
  • Zwinderman, Aeilko H.
  • Reinertsen, Ingerid
  • De Witt Hamer, Philip C.
  • Solheim, Ole
Cancers 2021 Journal Article, cited 2 times
Website
Simple Summary: Neurosurgical decisions for patients with glioblastoma depend on visual inspection of a preoperative MR scan to determine the tumor characteristics. To avoid subjective estimates and manual tumor delineation, automatic methods and standard reporting are necessary. We compared and extensively assessed the performances of two deep learning architectures on the task of automatic tumor segmentation. A total of 1887 patients from 14 institutions, manually delineated by a human rater, were compared to automated segmentations generated by neural networks. The automated segmentations were in excellent agreement with the manual segmentations, and external validity, as well as generalizability were demonstrated. Together with automatic tumor feature computation and standardized reporting, our Glioblastoma Surgery Imaging Reporting And Data System (GSI-RADS) exhibited the potential for more accurate data-driven clinical decisions. The trained models and software are open-source and open-access, enabling comparisons among surgical cohorts, multicenter trials, and patient registries. Abstract: For patients with presumed glioblastoma, essential tumor characteristics are determined from preoperative MR images to optimize the treatment strategy. This procedure is time-consuming and subjective, if performed by crude eyeballing or manually. The standardized GSI-RADS aims to provide neurosurgeons with automatic tumor segmentations to extract tumor features rapidly and objectively. In this study, we improved automatic tumor segmentation and compared the agreement with manual raters, describe the technical details of the different components of GSI-RADS, and determined their speed. Two recent neural network architectures were considered for the segmentation task: nnU-Net and AGU-Net. Two preprocessing schemes were introduced to investigate the tradeoff between performance and processing speed. A summarized description of the tumor feature extraction and standardized reporting process is included. The trained architectures for automatic segmentation and the code for computing the standardized report are distributed as open-source and as open-access software. Validation studies were performed on a dataset of 1594 gadolinium-enhanced T1-weighted MRI volumes from 13 hospitals and 293 T1-weighted MRI volumes from the BraTS challenge. The glioblastoma tumor core segmentation reached a Dice score slightly below 90%, a patientwise F1-score close to 99%, and a 95th percentile Hausdorff distance slightly below 4.0 mm on average with either architecture and the heavy preprocessing scheme. A patient MRI volume can be segmented in less than one minute, and a standardized report can be generated in up to five minutes. The proposed GSI-RADS software showed robust performance on a large collection of MRI volumes from various hospitals and generated results within a reasonable runtime.

Enhanced breast mass mammography classification approach based on pre-processing and hybridization of transfer learning models

  • Boudouh, S. S.
  • Bouakkaz, M.
J Cancer Res Clin Oncol 2023 Journal Article, cited 0 times
Website
BACKGROUND AND OBJECTIVE: Breast cancer, surpassing heart disease, is now the second most prevalent cause of death among women. Mammography images must accurately identify breast masses to diagnose breast cancer early, which can significantly increase the patient's survival rate. However, due to the diversity of breast masses and the complexity of their microenvironment, this remains a significant challenge, and establishing a reliable and efficient breast mass detection approach that increases patient survival is a question researchers need to keep investigating. Even though several machine and deep learning-based approaches have been proposed to address these issues, pre-processing strategies and network architectures have been insufficient for breast mass detection in mammogram scans, which directly influences the accuracy of the proposed models. METHODS: Aiming to resolve these issues, we propose a two-stage classification method for breast mass mammography scans. First, we introduce a pre-processing stage divided into three sub-strategies, which include several filters for Region Of Interest (ROI) extraction, noise removal, and image enhancement. Second, we propose a classification stage based on transfer learning techniques for feature extraction, with global pooling for classification instead of standard machine learning algorithms or fully connected layers. Instead of the traditional single fine-tuned feature extractor, we propose a hybrid model in which two recent pre-trained CNNs are concatenated to assist the feature extraction phase. RESULTS: Using the CBIS-DDSM dataset, we managed to increase accuracy, sensitivity, and specificity, reaching the highest accuracy of 98.1% with the median filter for noise removal, followed by the Gaussian filter trial with 96% accuracy, while the Wiener filter attained the lowest accuracy of 94.13%. Moreover, global average pooling was better suited as a classifier in our case than global max pooling. CONCLUSION: The experimental findings demonstrate that the suggested strategy for breast mass detection in mammography can outperform the top-ranked methods currently in use in terms of classification performance.
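
The hybrid feature extractor described above can be sketched in Keras by running two frozen pre-trained backbones in parallel and concatenating their globally pooled outputs. The specific backbones used here (DenseNet121 and ResNet50V2) are assumptions for illustration; the abstract does not name the two networks.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Two frozen pre-trained backbones act as parallel feature extractors;
# their globally averaged feature maps are concatenated for the head.
b1 = tf.keras.applications.DenseNet121(include_top=False,
                                       input_shape=(224, 224, 3))
b2 = tf.keras.applications.ResNet50V2(include_top=False,
                                      input_shape=(224, 224, 3))
b1.trainable = False
b2.trainable = False

inp = layers.Input(shape=(224, 224, 3))
f1 = layers.GlobalAveragePooling2D()(b1(inp, training=False))
f2 = layers.GlobalAveragePooling2D()(b2(inp, training=False))
x = layers.Concatenate()([f1, f2])
out = layers.Dense(1, activation="sigmoid")(x)   # benign vs. malignant mass

model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```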

3D automatic levels propagation approach to breast MRI tumor segmentation

  • Bouchebbah, Fatah
  • Slimani, Hachem
Expert Systems with Applications 2020 Journal Article, cited 0 times
Website
Magnetic Resonance Imaging (MRI) is a relevant tool for breast cancer screening. Moreover, an accurate 3D segmentation of breast tumors from MRI scans plays a key role in the analysis of the disease. In this manuscript, we propose a novel automatic method for segmenting MRI breast tumors in 3D, called the 3D Automatic Levels Propagation Approach (3D-ALPA). The proposed method performs the segmentation automatically in two steps: in the first step, the entire MRI volume is segmented slice by slice using a new automatic approach called the 2D Automatic Levels Propagation Approach (2D-ALPA), an improved version of a previous semi-automatic approach named the 2D Levels Propagation Approach (2D-LPA). In the second step, the partial segmentations obtained from 2D-ALPA are recombined to rebuild the complete volume(s) of the tumor(s). 3D-ALPA has several notable characteristics: it is an automatic method that can handle multi-tumor segmentation, and it can be applied equally along the axial, coronal, and sagittal planes, thus offering a multi-view representation of the segmented tumor(s). To validate the new 3D-ALPA method, we first performed tests on a 2D private dataset of eighteen patients to estimate the accuracy of the new 2D-ALPA against the previous 2D-LPA. The results favored the proposed 2D-ALPA, showing an improvement in accuracy after integrating the automation into the approach. We then evaluated the complete 3D-ALPA method on a 3D private dataset consisting of MRI exams of twenty-two patients with real breast tumors of different types, and on the public RIDER dataset. 3D-ALPA was evaluated on two main criteria, segmentation accuracy and running time, for two kinds of breast tumors: non-enhanced and enhanced. The experiments showed that 3D-ALPA produced better results for both kinds of tumors than a recent competing method in the literature that addresses the same problem.

Development of a multi-task learning V-Net for pulmonary lobar segmentation on CT and application to diseased lungs

  • Boubnovski, M. M.
  • Chen, M.
  • Linton-Reid, K.
  • Posma, J. M.
  • Copley, S. J.
  • Aboagye, E. O.
Clin Radiol 2022 Journal Article, cited 0 times
Website
AIM: To develop a multi-task learning (MTL) V-Net for pulmonary lobar segmentation on computed tomography (CT) and application to diseased lungs. MATERIALS AND METHODS: The described methodology utilises tracheobronchial tree information, whose spatial context helps the algorithm define lobar extent more accurately. The method segments lobes and auxiliary tissues in parallel by employing MTL in conjunction with V-Net-attention, a popular convolutional neural network in the imaging realm. Its performance was validated on an external dataset of patients with four distinct lung conditions: severe lung cancer, COVID-19 pneumonitis, collapsed lungs, and chronic obstructive pulmonary disease (COPD), even though the training data included none of these cases. RESULTS: The following Dice scores were achieved on a per-segment basis: normal lungs 0.97, COPD 0.94, lung cancer 0.94, COVID-19 pneumonitis 0.94, and collapsed lung 0.92, all at p<0.05. CONCLUSION: Despite severe abnormalities, the model performed well at segmenting lobes, demonstrating the benefit of tissue learning. The proposed model is poised for adoption in the clinical setting as a robust tool for radiologists and researchers to define the lobar distribution of lung diseases and aid in disease treatment planning.

Integration of operator-validated contours in deformable image registration for dose accumulation in radiotherapy

  • Bosma, L. S.
  • Ries, M.
  • Denis de Senneville, B.
  • Raaymakers, B. W.
  • Zachiu, C.
Phys Imaging Radiat Oncol 2023 Journal Article, cited 0 times
Website
BACKGROUND AND PURPOSE: Deformable image registration (DIR) is a core element of adaptive radiotherapy workflows, integrating daily contour propagation and/or dose accumulation in their design. Propagated contours are usually manually validated and may be edited, thereby locally invalidating the registration result. This means the registration cannot be used for dose accumulation. In this study we proposed and evaluated a novel multi-modal DIR algorithm that incorporated contour information to guide the registration. This integrates operator-validated contours with the estimated deformation vector field and warped dose. MATERIALS AND METHODS: The proposed algorithm consisted of both a normalized gradient field-based data-fidelity term on the images and an optical flow data-fidelity term on the contours. The Helmholtz-Hodge decomposition was incorporated to ensure anatomically plausible deformations. The algorithm was validated for same- and cross-contrast Magnetic Resonance (MR) image registrations, Computed Tomography (CT) registrations, and CT-to-MR registrations for different anatomies, all based on challenging clinical situations. The contour-correspondence, anatomical fidelity, registration error, and dose warping error were evaluated. RESULTS: The proposed contour-guided algorithm considerably and significantly increased contour overlap, decreasing the mean distance to agreement by a factor of 1.3 to 13.7, compared to the best algorithm without contour-guidance. Importantly, the registration error and dose warping error decreased significantly, by a factor of 1.2 to 2.0. CONCLUSIONS: Our contour-guided algorithm ensured that the deformation vector field and warped quantitative information were consistent with the operator-validated contours. This provides a feasible semi-automatic strategy for spatially correct warping of quantitative information even in difficult and artefacted cases.

A SIMPLI (Single-cell Identification from MultiPLexed Images) approach for spatially-resolved tissue phenotyping at single-cell resolution

  • Bortolomeazzi, M.
  • Montorsi, L.
  • Temelkovski, D.
  • Keddar, M. R.
  • Acha-Sagredo, A.
  • Pitcher, M. J.
  • Basso, G.
  • Laghi, L.
  • Rodriguez-Justo, M.
  • Spencer, J.
  • Ciccarelli, F. D.
2022 Journal Article, cited 1 times
Website
Multiplexed imaging technologies enable the study of biological tissues at single-cell resolution while preserving spatial information. Currently, high-dimensional imaging data analysis is technology-specific and requires multiple tools, restricting analytical scalability and result reproducibility. Here we present SIMPLI (Single-cell Identification from MultiPLexed Images), a flexible and technology-agnostic software that unifies all steps of multiplexed imaging data analysis. After raw image processing, SIMPLI performs a spatially resolved, single-cell analysis of the tissue slide as well as cell-independent quantifications of marker expression to investigate features undetectable at the cell level. SIMPLI is highly customisable and can run on desktop computers as well as high-performance computing environments, enabling workflow parallelisation for large datasets. SIMPLI produces multiple tabular and graphical outputs at each step of the analysis. Its containerised implementation and minimum configuration requirements make SIMPLI a portable and reproducible solution for multiplexed imaging data analysis. Software is available at "SIMPLI [ https://github.com/ciccalab/SIMPLI ]".

A full pipeline to analyze lung histopathology images

  • Borras Ferris, Lluis
  • Püttmann, Simon
  • Marini, Niccolò
  • Vatrano, Simona
  • Fragetta, Filippo
  • Caputo, Alessandro
  • Ciompi, Francesco
  • Atzori, Manfredo
  • Müller, Henning
  • Tomaszewski, John E.
  • Ward, Aaron D.
2024 Conference Paper, cited 0 times
Histopathology involves the analysis of tissue samples to diagnose several diseases, such as cancer. The analysis of tissue samples is a time-consuming procedure, manually performed by medical experts, namely pathologists. Computational pathology aims to develop automatic methods to analyze Whole Slide Images (WSI), which are digitized histopathology images, and has shown accurate performance in image analysis. Although the amount of available WSIs is increasing, the capacity of medical experts to manually analyze samples is not expanding proportionally. This paper presents a fully automatic pipeline to classify lung cancer WSIs, considering four classes: Small Cell Lung Cancer (SCLC), non-small cell lung cancer divided into LUng ADenocarcinoma (LUAD) and LUng Squamous cell Carcinoma (LUSC), and normal tissue. The pipeline includes a self-supervised algorithm for pre-training the model and Multiple Instance Learning (MIL) for WSI classification. The model is trained with 2,226 WSIs and it obtains an AUC of 0.8558 ± 0.0051 and a weighted f1-score of 0.6537 ± 0.0237 for the 4-class classification on the test set. The capability of the model to generalize was evaluated by testing it on the public The Cancer Genome Atlas (TCGA) dataset on LUAD and LUSC classification. In this task, the model obtained an AUC of 0.9433 ± 0.0198 and a weighted f1-score of 0.7726 ± 0.0438.

Solid Indeterminate Nodules with a Radiological Stability Suggesting Benignity: A Texture Analysis of Computed Tomography Images Based on the Kurtosis and Skewness of the Nodule Volume Density Histogram

  • Borguezan, Bruno Max
  • Lopes, Agnaldo José
  • Saito, Eduardo Haruo
  • Higa, Claudio
  • Silva, Aristófanes Corrêa
  • Nunes, Rodolfo Acatauassú
Pulmonary Medicine 2019 Journal Article, cited 0 times
Website
BACKGROUND: The number of incidental findings of pulmonary nodules using imaging methods to diagnose other thoracic or extrathoracic conditions has increased, suggesting the need for in-depth radiological image analyses to identify nodule type and avoid unnecessary invasive procedures. OBJECTIVES: The present study evaluated solid indeterminate nodules with a radiological stability suggesting benignity (SINRSBs) through a texture analysis of computed tomography (CT) images. METHODS: A total of 100 chest CT scans were evaluated, including 50 cases of SINRSBs and 50 cases of malignant nodules. SINRSB CT scans were performed using the same noncontrast enhanced CT protocol and equipment; the malignant nodule data were acquired from several databases. The kurtosis (KUR) and skewness (SKW) values of these scans were determined for the whole volume of each nodule, and the histograms were classified into two basic patterns: peaks or plateaus. RESULTS: The mean (MEN) KUR values of the SINRSBs and malignant nodules were 3.37 ± 3.88 and 5.88 ± 5.11, respectively. The receiver operating characteristic (ROC) curve showed that the sensitivity and specificity for distinguishing SINRSBs from malignant nodules were 65% and 66% for KUR values >6, respectively, with an area under the curve (AUC) of 0.709 (p < 0.0001). The MEN SKW values of the SINRSBs and malignant nodules were 1.73 ± 0.94 and 2.07 ± 1.01, respectively. The ROC curve showed that the sensitivity and specificity for distinguishing malignant nodules from SINRSBs were 65% and 66% for SKW values >3.1, respectively, with an AUC of 0.709 (p < 0.0001). An analysis of the peak and plateau histograms revealed sensitivity, specificity, and accuracy values of 84%, 74%, and 79%, respectively. CONCLUSION: KUR, SKW, and histogram shape can help to noninvasively diagnose SINRSBs but should not be used alone or without considering clinical data.
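
Computing the two texture measures used above is straightforward with SciPy: take the voxel values inside the nodule mask and evaluate the kurtosis and skewness of their density histogram. The sketch below uses Pearson kurtosis (normal = 3) and toy data; whether the study used the Pearson or Fisher convention is not stated in the abstract.

```python
import numpy as np
from scipy import stats

def nodule_histogram_stats(volume, mask):
    """Kurtosis (KUR) and skewness (SKW) of the voxel-density
    distribution inside a segmented nodule."""
    values = volume[mask > 0].ravel()
    return stats.kurtosis(values, fisher=False), stats.skew(values)

# Toy CT volume (Hounsfield-like values) with a cubic "nodule" mask
rng = np.random.default_rng(0)
ct = rng.normal(-300, 150, (64, 64, 64))
mask = np.zeros_like(ct)
mask[24:40, 24:40, 24:40] = 1
kur, skw = nodule_histogram_stats(ct, mask)
# Thresholds reported by the study: KUR > 6 and SKW > 3.1 favour malignancy
print(f"KUR={kur:.2f} SKW={skw:.2f}")
```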

Dataset on renal tumor diameter assessment by multiple observers in normal-dose and low-dose CT

  • Borgbjerg, J.
  • Larsen, N. E.
  • Salte, I. M.
  • Gronli, N. R.
  • Klaestrup, E.
  • Negard, A.
2023 Journal Article, cited 0 times
Website
Computed tomography-based active surveillance is increasingly used to manage small renal tumors, regardless of patient age. However, there is an unmet need for decreasing radiation exposure while maintaining the necessary accuracy and reproducibility in radiographic measurements, allowing for detecting even minor changes in renal mass size. In this article, we present supplementary data from a multiobserver investigation. We explored the accuracy and reproducibility of low-dose CT (75% dose reduction) compared to normal-dose CT in assessing maximum axial renal tumor diameter. Open-access CT datasets from the 2019 Kidney and Kidney Tumor Segmentation Challenge were used. A web-based platform for assessing observer performance was used by six radiologist observers to obtain and provide data on tumor diameters and accompanying viewing settings, in addition to key images of each measurement and an interactive module for exploring diameter measurements. These data can serve as a baseline and inform future studies investigating and validating lower-dose CT protocols for active surveillance of small renal masses.

CT Colonography: External Clinical Validation of an Algorithm for Computer-assisted Prone and Supine Registration

  • Boone, Darren J
  • Halligan, Steve
  • Roth, Holger R
  • Hampshire, Tom E
  • Helbren, Emma
  • Slabaugh, Greg G
  • McQuillan, Justine
  • McClelland, Jamie R
  • Hu, Mingxing
  • Punwani, Shonit
Radiology 2013 Journal Article, cited 5 times
Website
PURPOSE: To perform external validation of a computer-assisted registration algorithm for prone and supine computed tomographic (CT) colonography and to compare the results with those of an existing centerline method. MATERIALS AND METHODS: All contributing centers had institutional review board approval; participants provided informed consent. A validation sample of CT colonographic examinations of 51 patients with 68 polyps (6-55 mm) was selected from a publicly available, HIPAA compliant, anonymized archive. No patients were excluded because of poor preparation or inadequate distension. Corresponding prone and supine polyp coordinates were recorded, and endoluminal surfaces were registered automatically by using a computer algorithm. Two observers independently scored three-dimensional endoluminal polyp registration success. Results were compared with those obtained by using the normalized distance along the colonic centerline (NDACC) method. Pairwise Wilcoxon signed rank tests were used to compare gross registration error and McNemar tests were used to compare polyp conspicuity. RESULTS: Registration was possible in all 51 patients, and 136 paired polyp coordinates were generated (68 polyps) to test the algorithm. Overall mean three-dimensional polyp registration error (mean +/- standard deviation, 19.9 mm +/- 20.4) was significantly less than that for the NDACC method (mean, 27.4 mm +/- 15.1; P = .001). Accuracy was unaffected by colonic segment (P = .76) or luminal collapse (P = .066). During endoluminal review by two observers (272 matching tasks, 68 polyps, prone to supine and supine to prone coordinates), 223 (82%) polyp matches were visible (120 degrees field of view) compared with just 129 (47%) when the NDACC method was used (P < .001). By using multiplanar visualization, 48 (70%) polyps were visible after scrolling +/- 15 mm in any multiplanar axis compared with 16 (24%) for NDACC (P < .001). CONCLUSION: Computer-assisted registration is more accurate than the NDACC method for mapping the endoluminal surface and matching the location of polyps in corresponding prone and supine CT colonographic acquisitions.

Integration of convolutional neural networks for pulmonary nodule malignancy assessment in a lung cancer classification pipeline

  • Bonavita, I.
  • Rafael-Palou, X.
  • Ceresa, M.
  • Piella, G.
  • Ribas, V.
  • Gonzalez Ballester, M. A.
Comput Methods Programs Biomed 2020 Journal Article, cited 3 times
Website
BACKGROUND AND OBJECTIVE: The early identification of malignant pulmonary nodules is critical for a better lung cancer prognosis and less invasive chemo or radio therapies. Nodule malignancy assessment done by radiologists is extremely useful for planning a preventive intervention but is, unfortunately, a complex, time-consuming and error-prone task. This explains the lack of large datasets containing radiologists' malignancy characterization of nodules; METHODS: In this article, we propose to assess nodule malignancy through 3D convolutional neural networks and to integrate it in an automated end-to-end existing pipeline of lung cancer detection. For training and testing purposes we used independent subsets of the LIDC dataset; RESULTS: Adding the probabilities of nodule malignancy to a baseline lung cancer pipeline improved its F1-weighted score by 14.7%, whereas integrating the malignancy model itself using transfer learning outperformed the baseline prediction by 11.8% of F1-weighted score; CONCLUSIONS: Despite the limited size of the lung cancer datasets, integrating predictive models of nodule malignancy improves prediction of lung cancer.

PieceNet: A Redundant UNet Ensemble

  • Bommineni, Vikas L.
2021 Book Section, cited 4 times
Website
Segmentation of gliomas is essential to aid clinical diagnosis and treatment; however, imaging artifacts and heterogeneous shape complicate this task. In the last few years, researchers have shown the effectiveness of 3D UNets on this problem. They have found success using 3D patches to predict the class label for the center voxel; however, even a single patch-based UNet may miss representations that another UNet could learn. To circumvent this issue, I developed PieceNet, a deep learning model using a novel ensemble of patch-based 3D UNets. In particular, I used uncorrected modalities to train a standard 3D UNet for all label classes as well as one 3D UNet for each individual label class. Initial results indicate this 4-network ensemble is potentially a superior technique to a traditional patch-based 3D UNet on uncorrected images; however, further work needs to be done to allow for more competitive enhancing tumor segmentation. Moreover, I developed a linear probability model using radiomic and non-imaging features that predicts post-surgery survival.

Dynamic conformal arcs for lung stereotactic body radiation therapy: A comparison with volumetric-modulated arc therapy

  • Bokrantz, R.
  • Wedenberg, M.
  • Sandwall, P.
J Appl Clin Med Phys 2020 Journal Article, cited 1 times
Website
This study constitutes a feasibility assessment of dynamic conformal arc (DCA) therapy as an alternative to volumetric-modulated arc therapy (VMAT) for stereotactic body radiation therapy (SBRT) of lung cancer. The rationale for DCA is lower geometric complexity and hence reduced risk for interplay errors induced by respiratory motion. Forward planned DCA and inverse planned DCA based on segment-weight optimization were compared to VMAT for single arc treatments of five lung patients. Analysis of dose-volume histograms and clinical goal fulfillment revealed that DCA can generate satisfactory and near equivalent dosimetric quality to VMAT, except for complex tumor geometries. Segment-weight optimized DCA provided spatial dose distributions qualitatively similar to those for VMAT. Our results show that DCA, and particularly segment-weight optimized DCA, may be an attractive alternative to VMAT for lung SBRT treatments if the patient anatomy is favorable.

Harnessing multimodal data integration to advance precision oncology

  • Boehm, Kevin M
  • Khosravi, Pegah
  • Vanguri, Rami
  • Gao, Jianjiong
  • Shah, Sohrab P
Nature Reviews Cancer 2022 Journal Article, cited 0 times
Website

Unsupervised Data Drift Detection Using Convolutional Autoencoders: A Breast Cancer Imaging Scenario

  • Bóbeda, Javier
  • García-González, María Jesús
  • Pérez-Herrera, Laura Valeria
  • López-Linares, Karen
2023 Conference Proceedings, cited 0 times
Website
Imaging AI models are starting to reach real clinical settings, where model drift can happen due to diverse factors. That is why model monitoring must be set up to prevent model degradation over time. In this context, we propose and test a data drift detection solution based on unsupervised deep learning for a breast cancer imaging setting. A convolutional autoencoder is trained on a baseline set of expected images, and controlled drifts are introduced into the data to test whether a set of metrics extracted from the reconstructions and the latent space is able to distinguish them. We show that this is a valid tool that manages to detect subtle differences even within this complex kind of image.
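
A minimal sketch of the monitoring scheme described above: train a small convolutional autoencoder on baseline images, then flag drift when the reconstruction error of incoming data departs from the baseline error distribution. The architecture, 64×64 image size, three-sigma rule and simulated intensity shift are all assumptions, not the paper's exact setup.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

# Small convolutional autoencoder trained to reconstruct baseline images
inp = layers.Input(shape=(64, 64, 1))
x = layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(inp)
x = layers.Conv2D(8, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(8, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(x)
out = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)
ae = Model(inp, out)
ae.compile(optimizer="adam", loss="mse")

rng = np.random.default_rng(0)
baseline = rng.uniform(0, 1, (256, 64, 64, 1)).astype("float32")
ae.fit(baseline, baseline, epochs=1, verbose=0)

def recon_error(batch):
    """Per-image mean squared reconstruction error."""
    return np.mean((batch - ae.predict(batch, verbose=0)) ** 2, axis=(1, 2, 3))

ref = recon_error(baseline)
incoming = np.clip(baseline + 0.3, 0, 1)   # simulated acquisition shift
drifted = recon_error(incoming).mean() > ref.mean() + 3 * ref.std()
print("drift detected:", drifted)
```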

Automated nuclear segmentation in head and neck squamous cell carcinoma (HNSCC) pathology reveals relationships between cytometric features and ESTIMATE stromal and immune scores

  • Blocker, Stephanie J.
  • Cook, James
  • Everitt, Jeffrey I.
  • Austin, Wyatt M.
  • Watts, Tammara L.
  • Mowery, Yvonne M.
The American Journal of Pathology 2022 Journal Article, cited 0 times
Website
The tumor microenvironment (TME) plays an important role in the progression of head and neck squamous cell carcinoma (HNSCC). Currently, pathological assessment of TME is non-standardized and subject to observer bias. Genome-wide transcriptomic approaches to understanding the TME, while less subject to bias, are expensive and not currently part of standard of care for HNSCC. To identify pathology-based biomarkers that correlate with genomic and transcriptomic signatures of TME in HNSCC, cytometric feature maps were generated in a publicly available cohort of patients with HNSCC with available whole-slide tissue images and genomic and transcriptomic phenotyping (N=49). Cytometric feature maps were generated based on whole-slide nuclear detection, using a deep learning algorithm trained for StarDist nuclear segmentation. Cytometric features were measured for each patient and compared to transcriptomic measurements, including Estimation of STromal and Immune cells in MAlignant Tumor tissues using Expression data (ESTIMATE) scores, as well as stemness scores. When corrected for multiple comparisons, one feature (nuclear circularity) demonstrated a significant linear correlation with ESTIMATE stromal score. Two features (nuclear maximum and minimum diameter) correlated significantly with ESTIMATE immune score. Three features (nuclear solidity, nuclear minimum diameter, and nuclear circularity) correlated significantly with transcriptomic stemness score. This study provides preliminary evidence that observer-independent, automated tissue slide analysis can provide insights into the HNSCC TME which correlate with genomic and transcriptomic assessments.

Multiparametric MRI and auto-fixed volume of interest-based radiomics signature for clinically significant peripheral zone prostate cancer

  • Bleker, J.
  • Kwee, T. C.
  • Dierckx, Rajo
  • de Jong, I. J.
  • Huisman, H.
  • Yakar, D.
Eur Radiol 2020 Journal Article, cited 2 times
Website
OBJECTIVES: To create a radiomics approach based on multiparametric magnetic resonance imaging (mpMRI) features extracted from an auto-fixed volume of interest (VOI) that quantifies the phenotype of clinically significant (CS) peripheral zone (PZ) prostate cancer (PCa). METHODS: This study included 206 patients with 262 prospectively called mpMRI prostate imaging reporting and data system 3-5 PZ lesions. Gleason scores > 6 were defined as CS PCa. Features were extracted with an auto-fixed 12-mm spherical VOI placed around a pin point in each lesion. The value of dynamic contrast-enhanced imaging (DCE), multivariate feature selection and extreme gradient boosting (XGB) vs. univariate feature selection and random forest (RF), expert-based feature pre-selection, and the addition of image filters was investigated using the training (171 lesions) and test (91 lesions) datasets. RESULTS: The best model with features from T2-weighted (T2-w) + diffusion-weighted imaging (DWI) + DCE had an area under the curve (AUC) of 0.870 (95% CI 0.754-0.980). Removal of DCE features decreased AUC to 0.816 (95% CI 0.710-0.920), although not significantly (p = 0.119). Multivariate and XGB outperformed univariate and RF (p = 0.028). Expert-based feature pre-selection and image filters had no significant contribution. CONCLUSIONS: The phenotype of CS PZ PCa lesions can be quantified using a radiomics approach based on features extracted from T2-w + DWI using an auto-fixed VOI. Although DCE features improve diagnostic performance, this is not statistically significant. Multivariate feature selection and XGB should be preferred over univariate feature selection and RF. The developed model may be a valuable addition to traditional visual assessment in diagnosing CS PZ PCa. KEY POINTS: * T2-weighted and diffusion-weighted imaging features are essential components of a radiomics model for clinically significant prostate cancer; addition of dynamic contrast-enhanced imaging does not significantly improve diagnostic performance. * Multivariate feature selection and extreme gradient boosting outperform univariate feature selection and random forest. * The developed radiomics model that extracts multiparametric MRI features with an auto-fixed volume of interest may be a valuable addition to visual assessment in diagnosing clinically significant prostate cancer.
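
The auto-fixed VOI above is simply a sphere of fixed size centred on a pin-point, independent of the lesion contour. A sketch of building such a mask on an anisotropic imaging grid follows; the 12 mm is interpreted here as the sphere diameter (6-mm radius), and the volume shape, voxel spacing and centre are illustrative.

```python
import numpy as np

def spherical_voi(shape, center_mm, spacing_mm, radius_mm=6.0):
    """Binary mask of a fixed-radius sphere around a pin-point, on a
    volume with anisotropic voxel spacing (all distances in mm)."""
    zz, yy, xx = np.indices(shape).astype(float)
    dz = zz * spacing_mm[0] - center_mm[0]
    dy = yy * spacing_mm[1] - center_mm[1]
    dx = xx * spacing_mm[2] - center_mm[2]
    return dz ** 2 + dy ** 2 + dx ** 2 <= radius_mm ** 2

mask = spherical_voi((32, 128, 128),
                     center_mm=(48.0, 96.0, 96.0),
                     spacing_mm=(3.0, 1.5, 1.5))
print(mask.sum(), "voxels inside the 12-mm VOI")
```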

Tensor-RT-Based Transfer Learning Model for Lung Cancer Classification

  • Bishnoi, V.
  • Goel, N.
J Digit Imaging 2023 Journal Article, cited 0 times
Website
Cancer is a leading cause of death across the globe, with lung cancer accounting for the highest mortality rate. Early diagnosis through computed tomography scan imaging helps to identify the stages of lung cancer. Several deep learning-based classification methods have been employed for developing automatic systems for the diagnosis and detection of computed tomography scan lung slices. However, diagnosis based on nodule detection is a challenging task as it requires manual annotation of nodule regions. Also, these computer-aided systems have not yet achieved the desired performance in real-time lung cancer classification. In the present paper, a high-speed real-time transfer learning-based framework is proposed for the classification of computed tomography lung cancer slices into benign and malignant. The proposed framework comprises three modules: (i) pre-processing and segmentation of lung images using K-means clustering based on cosine distance and morphological operations; (ii) tuning and regularization of the proposed model, named weighted VGG deep network (WVDN); (iii) model inference in NVIDIA TensorRT during post-processing for deployment in real-time applications. In this study, two pre-trained CNN models were evaluated and compared with the proposed model. All the models were trained on 19,419 computed tomography scan lung slices, which were obtained from the publicly available Lung Image Database Consortium and Image Database Resource Initiative dataset. The proposed model achieved the best classification metrics: an accuracy of 0.932; precision, recall, and F1 score of 0.93; and a Cohen's kappa score of 0.85. A statistical evaluation was also performed on the classification parameters and achieved a p-value <0.0001 for the proposed model. The quantitative and statistical results validate the improved performance of the proposed model as compared to state-of-the-art methods. The proposed framework is based on complete computed tomography slices rather than marked annotations and may help in improving clinical diagnosis.
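
The first module above clusters pixels with K-means under a cosine distance. scikit-learn's KMeans is Euclidean, but L2-normalising the feature vectors first makes Euclidean clustering equivalent to clustering by cosine similarity (spherical k-means). The sketch below applies this to 3×3 patch features; the patch representation and toy image are assumptions, since the abstract does not specify the feature vectors.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

def cosine_kmeans_segment(ct_slice, n_clusters=2, patch=3):
    """Cluster pixels by their 3x3 neighbourhood patches; unit-length
    patch vectors make Euclidean k-means act on cosine distance."""
    h, w = ct_slice.shape
    pad = patch // 2
    padded = np.pad(ct_slice, pad, mode="edge")
    # One shifted view per patch offset -> (h*w, patch**2) feature matrix
    feats = np.stack([padded[i:i + h, j:j + w].ravel()
                      for i in range(patch) for j in range(patch)], axis=1)
    feats = normalize(feats)                     # L2-normalise each row
    labels = KMeans(n_clusters, n_init=10, random_state=0).fit_predict(feats)
    return labels.reshape(h, w)

rng = np.random.default_rng(0)
toy = rng.normal(0, 1, (64, 64))
toy[16:48, 16:48] += 4.0                         # bright "lung field" region
print(np.unique(cosine_kmeans_segment(toy), return_counts=True))
```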

Enhanced Region Growing for Brain Tumor MR Image Segmentation

  • Biratu, E. S.
  • Schwenker, F.
  • Debelee, T. G.
  • Kebede, S. R.
  • Negera, W. G.
  • Molla, H. T.
2021 Journal Article, cited 30 times
Website
A brain tumor is one of the foremost reasons for the rise in mortality among children and adults. It is a mass of tissue that grows out of control of the normal forces that regulate growth inside the brain, appearing when one type of cell changes from its normal characteristics and grows and multiplies abnormally. This unusual growth of cells within the brain or skull, cancerous or non-cancerous, has been a cause of death of adults in developed countries and children in developing countries like Ethiopia. Previous studies have shown that region growing algorithms initialize the seed point either manually or semi-manually, which affects the segmentation result. In this paper, we therefore propose an enhanced region-growing algorithm with automatic seed point initialization. The proposed approach's performance was compared with state-of-the-art deep learning algorithms using a common dataset, BRATS2015. In the proposed approach, we applied a thresholding technique to strip the skull from each input brain image. After the skull is stripped, the brain image is divided into 8 blocks. Then, for each block, we computed the mean intensity, and the five blocks with maximum mean intensities were selected out of the eight. Next, each of the five maximum mean intensities was used as a seed point for the region growing algorithm, yielding five different regions of interest (ROIs) for each skull-stripped input brain image. The five ROIs generated with the proposed approach were evaluated using the dice similarity score (DSS), intersection over union (IoU), and accuracy (Acc) against the ground truth (GT), and the best region of interest was selected as the final ROI. Finally, the final ROI was compared with different state-of-the-art deep learning algorithms and region-based segmentation algorithms in terms of DSS. Our proposed approach was validated in three different experimental setups. In the first experimental setup, 15 randomly selected brain images were used for testing and a DSS value of 0.89 was achieved. In the second and third experimental setups, the proposed approach scored DSS values of 0.90 and 0.80 for 12 randomly selected and 800 brain images, respectively. The average DSS value across the three experimental setups was 0.86.
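
The automatic seed initialization described above (split the skull-stripped volume into eight blocks, rank the blocks by mean intensity, keep the top five) can be sketched compactly. Using the block centre as the seed voxel is an assumption here, since the abstract does not say which voxel within each selected block serves as the seed.

```python
import numpy as np

def candidate_seeds(brain, n_blocks=(2, 2, 2), top_k=5):
    """Split the skull-stripped volume into 2x2x2 = 8 blocks, rank the
    blocks by mean intensity, and return the centre voxel of the
    top-k blocks as candidate seed points."""
    seeds = []
    zs = np.array_split(np.arange(brain.shape[0]), n_blocks[0])
    ys = np.array_split(np.arange(brain.shape[1]), n_blocks[1])
    xs = np.array_split(np.arange(brain.shape[2]), n_blocks[2])
    for z in zs:
        for y in ys:
            for x in xs:
                block = brain[np.ix_(z, y, x)]
                centre = (int(z.mean()), int(y.mean()), int(x.mean()))
                seeds.append((block.mean(), centre))
    seeds.sort(reverse=True)            # highest mean intensity first
    return [centre for _, centre in seeds[:top_k]]

rng = np.random.default_rng(0)
vol = rng.uniform(0, 1, (32, 32, 32))
vol[20:28, 20:28, 20:28] += 2.0         # bright "tumour" region
print(candidate_seeds(vol))
```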

The Liver Tumor Segmentation Benchmark (LiTS)

  • Bilic, Patrick
  • Christ, Patrick
  • Li, Hongwei Bran
  • Vorontsov, Eugene
  • Ben-Cohen, Avi
  • Kaissis, Georgios
  • Szeskin, Adi
  • Jacobs, Colin
  • Mamani, Gabriel Efrain Humpire
  • Chartrand, Gabriel
  • Lohöfer, Fabian
  • Holch, Julian Walter
  • Sommer, Wieland
  • Hofmann, Felix
  • Hostettler, Alexandre
  • Lev-Cohain, Naama
  • Drozdzal, Michal
  • Amitai, Michal Marianne
  • Vivanti, Refael
  • Sosna, Jacob
  • Ezhov, Ivan
  • Sekuboyina, Anjany
  • Navarro, Fernando
  • Kofler, Florian
  • Paetzold, Johannes C.
  • Shit, Suprosanna
  • Hu, Xiaobin
  • Lipková, Jana
  • Rempfler, Markus
  • Piraud, Marie
  • Kirschke, Jan
  • Wiestler, Benedikt
  • Zhang, Zhiheng
  • Hülsemeyer, Christian
  • Beetz, Marcel
  • Ettlinger, Florian
  • Antonelli, Michela
  • Bae, Woong
  • Bellver, Míriam
  • Bi, Lei
  • Chen, Hao
  • Chlebus, Grzegorz
  • Dam, Erik B.
  • Dou, Qi
  • Fu, Chi-Wing
  • Georgescu, Bogdan
  • Giró-i-Nieto, Xavier
  • Gruen, Felix
  • Han, Xu
  • Heng, Pheng-Ann
  • Hesser, Jürgen
  • Moltz, Jan Hendrik
  • Igel, Christian
  • Isensee, Fabian
  • Jäger, Paul
  • Jia, Fucang
  • Kaluva, Krishna Chaitanya
  • Khened, Mahendra
  • Kim, Ildoo
  • Kim, Jae-Hun
  • Kim, Sungwoong
  • Kohl, Simon
  • Konopczynski, Tomasz
  • Kori, Avinash
  • Krishnamurthi, Ganapathy
  • Li, Fan
  • Li, Hongchao
  • Li, Junbo
  • Li, Xiaomeng
  • Lowengrub, John
  • Ma, Jun
  • Maier-Hein, Klaus
  • Maninis, Kevis-Kokitsi
  • Meine, Hans
  • Merhof, Dorit
  • Pai, Akshay
  • Perslev, Mathias
  • Petersen, Jens
  • Pont-Tuset, Jordi
  • Qi, Jin
  • Qi, Xiaojuan
  • Rippel, Oliver
  • Roth, Karsten
  • Sarasua, Ignacio
  • Schenk, Andrea
  • Shen, Zengming
  • Torres, Jordi
  • Wachinger, Christian
  • Wang, Chunliang
  • Weninger, Leon
  • Wu, Jianrong
  • Xu, Daguang
  • Yang, Xiaoping
  • Yu, Simon Chun-Ho
  • Yuan, Yading
  • Yue, Miao
  • Zhang, Liping
  • Cardoso, Jorge
  • Bakas, Spyridon
  • Braren, Rickmer
  • Heinemann, Volker
  • Pal, Christopher
  • Tang, An
  • Kadoury, Samuel
  • Soler, Luc
  • van Ginneken, Bram
  • Greenspan, Hayit
  • Joskowicz, Leo
  • Menze, Bjoern
Medical Image Analysis 2023 Journal Article, cited 612 times
Website
In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018. The image dataset is diverse and contains primary and secondary tumors with varied sizes and appearances with various lesion-to-background levels (hyper-/hypo-dense), created in collaboration with seven hospitals and research institutions. Seventy-five submitted liver and liver tumor segmentation algorithms were trained on a set of 131 computed tomography (CT) volumes and were tested on 70 unseen test images acquired from different patients. We found that not a single algorithm performed best for both liver and liver tumors in the three events. The best liver segmentation algorithm achieved a Dice score of 0.963, whereas, for tumor segmentation, the best algorithms achieved Dice scores of 0.674 (ISBI 2017), 0.702 (MICCAI 2017), and 0.739 (MICCAI 2018). Retrospectively, we performed additional analysis on liver tumor detection and revealed that not all top-performing segmentation algorithms worked well for tumor detection. The best liver tumor detection method achieved a lesion-wise recall of 0.458 (ISBI 2017), 0.515 (MICCAI 2017), and 0.554 (MICCAI 2018), indicating the need for further research. LiTS remains an active benchmark and resource for research, e.g., contributing the liver-related segmentation tasks in http://medicaldecathlon.com/. In addition, both data and online evaluation are accessible via https://competitions.codalab.org/competitions/17094.
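
The two headline metrics of the benchmark, the Dice score and lesion-wise recall, can be written down in a few lines. In the sketch below a ground-truth lesion counts as detected if the prediction overlaps it by at least one voxel; that is one simple choice of hit criterion and not necessarily the benchmark's exact rule.

```python
import numpy as np
from scipy import ndimage

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def lesionwise_recall(pred, gt):
    """Fraction of connected ground-truth lesions overlapped by the
    prediction (at least one shared voxel counts as a hit)."""
    labels, n = ndimage.label(gt)
    hits = sum(1 for i in range(1, n + 1)
               if np.logical_and(labels == i, pred).any())
    return hits / n if n else 1.0

gt = np.zeros((64, 64, 64), bool)
gt[10:20, 10:20, 10:20] = True          # lesion 1
gt[40:44, 40:44, 40:44] = True          # lesion 2 (missed below)
pred = np.zeros_like(gt)
pred[12:22, 12:22, 12:22] = True
print(dice(pred, gt), lesionwise_recall(pred, gt))
```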

Learning to detect anatomical landmarks of the pelvis in X-rays from arbitrary views

  • Bier, B.
  • Goldmann, F.
  • Zaech, J. N.
  • Fotouhi, J.
  • Hegeman, R.
  • Grupp, R.
  • Armand, M.
  • Osgood, G.
  • Navab, N.
  • Maier, A.
  • Unberath, M.
Int J Comput Assist Radiol Surg 2019 Journal Article, cited 0 times
Website
Purpose: Minimally invasive alternatives are now available for many complex surgeries. These approaches are enabled by the increasing availability of intra-operative image guidance. Yet, fluoroscopic X-rays suffer from projective transformation and thus cannot provide direct views onto anatomy. Surgeons could highly benefit from additional information, such as the anatomical landmark locations in the projections, to support intra-operative decision making. However, detecting landmarks is challenging since the viewing direction changes substantially between views leading to varying appearance of the same landmark. Therefore, and to the best of our knowledge, view-independent anatomical landmark detection has not been investigated yet. Methods: In this work, we propose a novel approach to detect multiple anatomical landmarks in X-ray images from arbitrary viewing directions. To this end, a sequential prediction framework based on convolutional neural networks is employed to simultaneously regress all landmark locations. For training, synthetic X-rays are generated with a physically accurate forward model that allows direct application of the trained model to real X-ray images of the pelvis. View invariance is achieved via data augmentation by sampling viewing angles on a spherical segment of 120°×90°. Results: On synthetic data, a mean prediction error of 5.6 ± 4.5 mm is achieved. Further, we demonstrate that the trained model can be directly applied to real X-rays and show that these detections define correspondences to a respective CT volume, which allows for analytic estimation of the 11-degree-of-freedom projective mapping. Conclusion: We present the first tool to detect anatomical landmarks in X-ray images independent of their viewing direction. Access to this information during surgery may benefit decision making and constitutes a first step toward global initialization of 2D/3D registration without the need of calibration. As such, the proposed concept has a strong prospect to facilitate and enhance applications and methods in the realm of image-guided surgery.

Convolutional neural networks for head and neck tumor segmentation on 7-channel multiparametric MRI: a leave-one-out analysis

  • Bielak, Lars
  • Wiedenmann, Nicole
  • Berlin, Arnie
  • Nicolay, Nils Henrik
  • Gunashekar, Deepa Darshini
  • Hagele, Leonard
  • Lottner, Thomas
  • Grosu, Anca-Ligia
  • Bock, Michael
Radiat Oncol 2020 Journal Article, cited 1 times
Website
BACKGROUND: Automatic tumor segmentation based on Convolutional Neural Networks (CNNs) has shown to be a valuable tool in treatment planning and clinical decision making. We investigate the influence of 7 MRI input channels of a CNN with respect to the segmentation performance of head and neck cancer. METHODS: Head and neck cancer patients underwent multi-parametric MRI including T2w, pre- and post-contrast T1w, T2*, perfusion (ktrans, ve) and diffusion (ADC) measurements at 3 time points before and during radiochemotherapy. The 7 different MRI contrasts (input channels) and manually defined gross tumor volumes (primary tumor and lymph node metastases) were used to train CNNs for lesion segmentation. A reference CNN with all input channels was compared to individually trained CNNs where one of the input channels was left out to identify which MRI contrast contributes the most to the tumor segmentation task. A statistical analysis was employed to account for random fluctuations in the segmentation performance. RESULTS: The CNN segmentation performance scored up to a Dice similarity coefficient (DSC) of 0.65. The network trained without T2* data generally yielded the worst results, with ΔDSC_GTV-T = 5.7% for the primary tumor and ΔDSC_GTV-Ln = 5.8% for lymph node metastases compared to the network containing all input channels. Overall, the ADC input channel showed the least impact on segmentation performance, with ΔDSC_GTV-T = 2.4% for the primary tumor and ΔDSC_GTV-Ln = 2.2%, respectively. CONCLUSIONS: We developed a method to reduce overall scan times in MRI protocols by prioritizing those sequences that add the most unique information for the task of automatic tumor segmentation. The optimized CNNs could be used to aid in the definition of the GTVs in radiotherapy planning, and the faster imaging protocols will reduce patient scan times, which can increase patient compliance. TRIAL REGISTRATION: The trial was registered retrospectively at the German Register for Clinical Studies (DRKS) under register number DRKS00003830 on August 20th, 2015.
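
The leave-one-channel-out protocol described above amounts to a simple ablation loop. A hedged sketch, where `train_and_score` is a hypothetical callback that trains a CNN on the listed MRI channels and returns its mean DSC:

```python
# Channel names follow the abstract; the training callback is an assumption.
CHANNELS = ["T2w", "T1w_pre", "T1w_post", "T2star", "ktrans", "ve", "ADC"]

def channel_ablation(train_and_score):
    reference = train_and_score(CHANNELS)  # CNN trained on all 7 channels
    # A positive delta-DSC means the left-out channel carried unique information
    return {ch: reference - train_and_score([c for c in CHANNELS if c != ch])
            for ch in CHANNELS}
```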

Comparative evaluation of conventional and deep learning methods for semi-automated segmentation of pulmonary nodules on CT

  • Bianconi, Francesco
  • Fravolini, Mario Luca
  • Pizzoli, Sofia
  • Palumbo, Isabella
  • Minestrini, Matteo
  • Rondini, Maria
  • Nuvoli, Susanna
  • Spanu, Angela
  • Palumbo, Barbara
Quant Imaging Med Surg 2021 Journal Article, cited 2 times
Website
Background: Accurate segmentation of pulmonary nodules on computed tomography (CT) scans plays a crucial role in the evaluation and management of patients with suspicion of lung cancer (LC). When performed manually, the process not only requires highly skilled operators but is also tiresome and time-consuming. To assist the physician in this task, several automated and semi-automated methods have been proposed in the literature. In recent years, in particular, the appearance of deep learning has brought about major advances in the field. Methods: Twenty-four (12 conventional and 12 based on deep learning) semi-automated 'one-click' methods for segmenting pulmonary nodules on CT were evaluated in this study. The experiments were carried out on two datasets: a proprietary one (383 images from a cohort of 111 patients) and a public one (259 images from a cohort of 100 patients). All the patients had a positive transcript for suspect pulmonary nodules. Results: The methods based on deep learning clearly outperformed the conventional ones. The best performance [Sorensen-Dice coefficient (DSC)] in the two datasets was, respectively, 0.853 and 0.763 for the deep learning methods, and 0.761 and 0.704 for the traditional ones. Conclusions: Deep learning is a viable approach for semi-automated segmentation of pulmonary nodules on CT scans.

A comparison of ground truth estimation methods

  • Biancardi, Alberto M
  • Jirapatnakul, Artit C
  • Reeves, Anthony P
International Journal of Computer Assisted Radiology and Surgery 2010 Journal Article, cited 17 times
Website
PURPOSE: Knowledge of the exact shape of a lesion, or ground truth (GT), is necessary for the development of diagnostic tools by means of algorithm validation, measurement metric analysis, and accurate size estimation. Four methods that estimate GTs from multiple readers' documentations by considering the spatial location of voxels were compared: thresholded Probability-Map at 0.50 (TPM(0.50)) and at 0.75 (TPM(0.75)), simultaneous truth and performance level estimation (STAPLE) and truth estimate from self distances (TESD). METHODS: A subset of the publicly available Lung Image Database Consortium archive was used, selecting pulmonary nodules documented by all four radiologists. The pair-wise similarities between the estimated GTs were analyzed by computing the respective Jaccard coefficients. Then, with respect to the readers' marking volumes, the estimated volumes were ranked and the sign test of the differences between them was performed. RESULTS: (a) the rank variations among the four methods and the volume differences between STAPLE and TESD are not statistically significant, (b) TPM(0.50) estimates are statistically larger, (c) TPM(0.75) estimates are statistically smaller, (d) there is some spatial disagreement in the estimates as the one-sided 90% confidence intervals between TPM(0.75) and TPM(0.50), TPM(0.75) and STAPLE, TPM(0.75) and TESD, TPM(0.50) and STAPLE, TPM(0.50) and TESD, STAPLE and TESD, respectively, show: [0.67, 1.00], [0.67, 1.00], [0.77, 1.00], [0.93, 1.00], [0.85, 1.00], [0.85, 1.00]. CONCLUSIONS: The method used to estimate the GT is important: the differences highlighted that STAPLE and TESD, notwithstanding a few weaknesses, appear to be equally viable as GT estimators, while the increased availability of computing power is decreasing the appeal afforded to TPMs. Ultimately, the choice between the two GT estimation methods depends on the specific characteristics of the marked data with respect to the two elements that differentiate the methods' approaches: the relative reliabilities of the readers and the reliability of the region boundaries.
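
Of the four estimators compared, the thresholded Probability-Maps are the simplest to state: average the readers' binary masks and keep the voxels marked by at least the threshold fraction of readers. A minimal sketch (`masks` is a hypothetical list of boolean arrays, one per reader):

```python
import numpy as np

def tpm_ground_truth(masks, threshold=0.50):
    # Voxel-wise fraction of readers who marked each voxel
    prob_map = np.mean([m.astype(float) for m in masks], axis=0)
    return prob_map >= threshold  # TPM(0.50) or TPM(0.75), depending on threshold
```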

Artificial intelligence in cancer imaging: Clinical challenges and applications

  • Bi, Wenya Linda
  • Hosny, Ahmed
  • Schabath, Matthew B
  • Giger, Maryellen L
  • Birkbak, Nicolai J
  • Mehrtash, Alireza
  • Allison, Tavis
  • Arnaout, Omar
  • Abbosh, Christopher
  • Dunn, Ian F
CA: A Cancer Journal for Clinicians 2019 Journal Article, cited 0 times
Website

G-DOC Plus–an integrative bioinformatics platform for precision medicine

  • Bhuvaneshwar, Krithika
  • Belouali, Anas
  • Singh, Varun
  • Johnson, Robert M
  • Song, Lei
  • Alaoui, Adil
  • Harris, Michael A
  • Clarke, Robert
  • Weiner, Louis M
  • Gusev, Yuriy
BMC Bioinformatics 2016 Journal Article, cited 14 times
Website

Isolation of Prostate Gland in T1-Weighted Magnetic Resonance Images using Computer Vision

  • Bhattacharya, Sayantan
  • Sharma, Apoorv
  • Gupta, Rinki
  • Bhan, Anupama
2020 Conference Proceedings, cited 0 times
Website

Radial Cumulative Frequency Distribution: A New Imaging Signature to Detect Chromosomal Arms 1p/19q Co-deletion Status in Glioma

  • Debanjali Bhattacharya
  • Neelam Sinha
  • Jitender Saini
2020 Conference Proceedings, cited 0 times
Website
Gliomas are the most common primary brain tumor and are associated with high mortality. Gene mutations are one of the hallmarks of glioma formation, determining its aggressiveness as well as patients' response to treatment. The paper presents a novel approach to detect chromosomal arms 1p/19q co-deletion status non-invasively in low-grade glioma based on its textural characteristics in the frequency domain. For this, we derived a Radial Cumulative Frequency Distribution (RCFD) function from the Fourier power spectrum of consecutive glioma slices. Multi-parametric MRIs of 159 grade-2 and grade-3 glioma patients with biopsy-proven 1p/19q mutational status (non-deletion: n = 57; co-deletion: n = 102) were used in this study. Different RCFD textural features were extracted to quantify the MRI signature patterns of mutant and wildtype glioma. Owing to the skewed dataset, we performed RUSBoost classification, yielding an average accuracy of 73.5% for grade-2 and 83% for grade-3 glioma subjects. The efficacy of the proposed technique is further discussed in comparison with state-of-the-art methods.
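
The quantity the RCFD features are built from can be illustrated in a few lines: the cumulative distribution of Fourier power as a function of radial frequency. This is an interpretive sketch of that idea, not the authors' code (`img` is a hypothetical 2D slice):

```python
import numpy as np

def radial_cumulative_frequency(img, n_bins=64):
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)  # radial frequency of each pixel
    power, _ = np.histogram(r, bins=np.linspace(0, r.max(), n_bins + 1),
                            weights=spectrum)  # total power per radial bin
    cdf = np.cumsum(power)
    return cdf / cdf[-1]  # normalized radial cumulative distribution
```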

COMPARISON OF A PATIENT-SPECIFIC COMPUTED TOMOGRAPHY ORGAN DOSE SOFTWARE WITH COMMERCIAL PHANTOM-BASED TOOLS

  • Bhat, Bindu
2023 Thesis, cited 0 times
Website
Computed Tomography imaging is an important diagnostic tool but carries some risk due to the radiation dose used to form the image. Currently, CT scanners report a measure of radiation dose for each scan that reflects the radiation emitted by the scanner, not the radiation dose absorbed by the patient. The radiation dose absorbed by organs, known as organ dose, is a more relevant metric that is important for risk assessment and CT protocol optimization. Tools for rapid organ-dose estimation are available but are limited to using general patient models. These publicly available tools are unable to model patient-specific anatomy and positioning within the scanner. To address these limitations, the Personalized Rapid Estimator of Dose in Computed Tomography (PREDICT) dosimetry tool was recently developed. This study validated the organ doses estimated by PREDICT against ground truth values. The patient-specific PREDICT performance was also compared to two publicly available phantom-based methods: VirtualDose and NCICT. The PREDICT tool demonstrated lower organ dose errors than the phantom-based methods, demonstrating the benefit of patient-specific modeling. This study also developed a method to extract the walls of cavity organs, such as the bladder and the intestines, and quantified the effect of organ wall extraction on organ dose. The study found that the exogenous material within a cavity organ can affect the organ dose estimate, thereby demonstrating the importance of boundary wall extraction in dosimetry tools such as PREDICT.

Deep-learning framework to detect lung abnormality – A study with chest X-Ray and lung CT scan images

  • Bhandary, Abhir
  • Prabhu, G. Ananth
  • Rajinikanth, V.
  • Thanaraj, K. Palani
  • Satapathy, Suresh Chandra
  • Robbins, David E.
  • Shasky, Charles
  • Zhang, Yu-Dong
  • Tavares, João Manuel R. S.
  • Raja, N. Sri Madhava
Pattern Recognition Letters 2020 Journal Article, cited 0 times
Website
Lung abnormalities are highly risky conditions in humans. The early diagnosis of lung abnormalities is essential to reduce the risk by enabling quick and efficient treatment. This research work aims to propose a Deep-Learning (DL) framework to examine lung pneumonia and cancer. This work proposes two different DL techniques to assess the considered problem: (i) The initial DL method, named a modified AlexNet (MAN), is proposed to classify chest X-Ray images into normal and pneumonia classes. In the MAN, the classification is implemented using a Support Vector Machine (SVM), and its performance is compared against Softmax. Further, its performance is validated against other pre-trained DL techniques, such as AlexNet, VGG16, VGG19 and ResNet50. (ii) The second DL work implements a fusion of handcrafted and learned features in the MAN to improve classification accuracy during lung cancer assessment. This work employs serial fusion and Principal Component Analysis (PCA) based feature selection to enhance the feature vector. The performance of this DL framework is tested using benchmark lung cancer CT images of LIDC-IDRI, and a classification accuracy of 97.27% is attained.

A Reversible Medical Image Watermarking for ROI Tamper Detection and Recovery

  • Bhalerao, Siddharth
  • Ansari, Irshad Ahmad
  • Kumar, Anil
2023 Journal Article, cited 0 times
Website
Medical data security is an active area of research. With the increasing rate of digitalization, the telemedicine industry is experiencing rapid growth, and medical data security has become more important than ever. In this work, a region-based reversible medical image watermarking scheme is proposed with ROI (region of interest) tamper detection and recovery capabilities. The medical image is divided into ROI and RONI (region of non-interest) regions. In the ROI, authentication data are embedded using the prediction-error expansion technique, and a compressed copy of the ROI is embedded in the RONI using the difference histogram expansion technique. Reversible techniques are used for data embedding in both ROI and RONI. The proposed scheme authenticates both ROI and RONI against tampering. The scheme is 100% reversible when there is no tampering; it checks for ROI tampering and recovers the ROI in its original state when tampering is detected. The scheme performs equally well on different classes of medical images, providing an average PSNR and SSIM of 55 dB and 0.99, respectively.
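
The difference-expansion step that underlies prediction-error and difference-histogram reversible embedding can be written in a few lines. This is a generic Tian-style sketch (overflow handling omitted), not the authors' exact scheme:

```python
def de_embed(x: int, y: int, bit: int):
    """Embed one bit into a pixel pair by expanding their difference."""
    l, h = (x + y) // 2, x - y
    h2 = 2 * h + bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2: int, y2: int):
    """Recover the bit and the original pair exactly (reversibility)."""
    l, h2 = (x2 + y2) // 2, x2 - y2
    bit, h = h2 & 1, h2 >> 1
    return (l + (h + 1) // 2, l - h // 2), bit

assert de_extract(*de_embed(130, 127, 1)) == ((130, 127), 1)
```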

Cuckoo search based multi-objective algorithm with decomposition for detection of masses in mammogram images

  • Bhalerao, Pramod B.
  • Bonde, Sanjiv V.
International Journal of Information Technology 2021 Journal Article, cited 0 times
Website
Breast cancer is the most recurrent cancer in the United States after skin cancer. Early detection of masses in mammograms will help reduce the death rate. This paper provides a hybrid approach based on a multiobjective evolutionary algorithm (MOEA) and cuckoo search, where cuckoo search decomposes the problem into a single objective (single nest) for each Pareto-optimal solution. The proposed method, CS-MOEA/DE, is evaluated using the MIAS and DDSM datasets. The novel hybrid approach combines nature-inspired cuckoo search with multiobjective optimization based on differential evolution, and covers the detection of masses in a mammogram. The proposed work is evaluated on 110 (50 + 60) images; the overall accuracy of the proposed hybrid method is 96.74%. The experimental outcome shows that the proposed method provides better results than other state-of-the-art methods such as the Otsu method, Kapur's entropy, and Cuckoo Search-based modified BHE.

Brain Tumor Segmentation Based on 3D Residual U-Net

  • Bhalerao, Megh
  • Thakur, Siddhesh
2020 Book Section, cited 16 times
Website
We propose a deep learning based approach for automatic brain tumor segmentation utilizing a three-dimensional U-Net extended by residual connections. In this work, we did not incorporate architectural modifications to the existing 3D U-Net, but rather evaluated different training strategies for potential improvement of performance. Our model was trained on the dataset of the International Brain Tumor Segmentation (BraTS) challenge 2019, which comprises multi-parametric magnetic resonance imaging (mpMRI) scans from 335 patients diagnosed with a glial tumor. Furthermore, our model was evaluated on the BraTS 2019 independent validation data that consisted of another 125 brain tumor mpMRI scans. The results that our 3D Residual U-Net obtained on the BraTS 2019 test data are mean Dice scores of 0.697, 0.828, 0.772 and Hausdorff95 distances of 25.56, 14.64, 26.69 for enhancing tumor, whole tumor, and tumor core, respectively.
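
For orientation, a residual extension of a 3D U-Net building block can look like the following sketch (assuming PyTorch; channel count and normalization choice are illustrative, not the authors' configuration):

```python
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(channels),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + x)  # identity shortcut: learn a residual

x = torch.randn(1, 16, 32, 32, 32)  # (batch, channels, D, H, W)
assert ResBlock3D(16)(x).shape == x.shape
```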

Fuzzy volumetric delineation of brain tumor and survival prediction

  • Bhadani, Saumya
  • Mitra, Sushmita
  • Banerjee, Subhashis
Soft Computing 2020 Journal Article, cited 0 times
Website
A novel three-dimensional detailed delineation algorithm is introduced for Glioblastoma multiforme tumors in MRI. It efficiently delineates the whole tumor, enhancing core, edema and necrosis volumes using fuzzy connectivity and multi-thresholding, based on a single seed voxel. While the whole tumor volume delineation uses the FLAIR and T2 MRI channels, the outlining of the enhancing core, necrosis and edema volumes employs the T1C channel. Discrete curve evolution is initially applied for multi-thresholding, to determine intervals around significant (visually critical) points, and a threshold is determined in each interval using bi-level Otsu's method or Li and Lee's entropy. This is followed by an interactive whole tumor volume delineation using the FLAIR and T2 MRI sequences, requiring a single user-defined seed. An efficient and robust whole tumor extraction is executed using fuzzy connectedness and dynamic thresholding. Finally, the segmented whole tumor volume in the T1C MRI channel is again subjected to multi-level segmentation, to delineate its sub-parts encompassing enhancing core, necrosis and edema. This is followed by survival prediction of patients using the concept of habitats. Qualitative and quantitative evaluation on the FLAIR, T2 and T1C MR sequences of 29 GBM patients establishes its superiority over related methods, visually as well as in terms of Dice scores, sensitivity and Hausdorff distance.
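
Multi-level intensity thresholding of the kind used for the sub-part delineation can be sketched with scikit-image's multi-Otsu implementation, here as a generic stand-in for the paper's interval-based scheme (`t1c` is a hypothetical slice with toy values):

```python
import numpy as np
from skimage.filters import threshold_multiotsu

t1c = np.random.default_rng(0).integers(0, 255, size=(128, 128)).astype(float)
thresholds = threshold_multiotsu(t1c, classes=4)  # 3 cut points -> 4 intensity classes
regions = np.digitize(t1c, bins=thresholds)       # label map with values 0..3
```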

Optimizing Convolutional Neural Network by Hybridized Elephant Herding Optimization Algorithm for Magnetic Resonance Image Classification of Glioma Brain Tumor Grade

  • Timea Bezdan
  • Stefan Milosevic
  • K Venkatachalam
  • Miodrag Zivkovic
  • Nebojsa Bacanin
  • Ivana Strumberger
2021 Conference Paper, cited 0 times
Website
Gliomas belong to the group of the most frequent types of brain tumors. In its early stages, this type of brain tumor is extremely difficult to diagnose exactly; even the most experienced doctors rely on magnetic resonance imaging to make the diagnosis. Convolutional neural networks can be used to classify such images according to the glioma grade, but achieving high classification accuracy requires careful calibration of the network hyperparameters, a task that consumes a lot of computational time and energy. In this paper, a metaheuristic method is proposed that automatically searches for near-optimal values of the convolutional neural network hyperparameters, based on a hybridized version of the elephant herding optimization swarm intelligence metaheuristic. The hybridized elephant herding optimization is incorporated for convolutional neural network hyperparameter tuning to develop a system for automatic classification of glioma brain tumor grades from magnetic resonance imaging. Comparative analysis with other methods tested on the same problem instance shows the superiority of the approach proposed in this paper.

Accurate segmentation of lung nodule with low contrast boundaries by least weight navigation

  • Beula, R. Janefer
  • Wesley, A. Boyed
Multimedia Tools and Applications 2023 Journal Article, cited 0 times
Website
Accurate segmentation of lung nodules with low contrast boundaries in CT images is a challenging task, since the intensities of nodules and non-nodules overlap with each other. This work proposes a lung nodule segmentation scheme based on least weight navigation (LWN) that segments the lung nodule accurately despite such low contrast boundaries. The complete lung nodule segmentation is categorized into three stages, namely (i) lung segmentation, (ii) coarse segmentation of the nodule, and (iii) fine segmentation of the nodule. Lung segmentation aims to eliminate the background other than the lung, whereas coarse segmentation eliminates the lung, leaving the nodules. Both can be achieved using traditional algorithms, namely dilation, erosion, and Otsu's thresholding. The proposed work focuses on fine segmentation, where the boundaries are accurately detected by the LWN algorithm. The LWN algorithm estimates the edge points and then navigates based on the least weight; this navigation continues until final termination is reached, which results in accurate segmentation. The experimental validation was done on the LIDC and Cancer Imaging datasets with three different nodule types: juxta-vascular, juxta-pleural, and solitary. The evaluation used metrics such as the dice similarity coefficient (DSC), sensitivity (SEN), positive prediction value (PPV), Hausdorff distance (HD) and probability rand index (PRI). The proposed approach provides a DSC, SEN, and PPV of 84.27%, 89.92%, and 80.12%, respectively. The results reveal that the proposed work outperforms traditional lung nodule segmentation algorithms.

NERONE: The Fast Way to Efficiently Execute Your Deep Learning Algorithm At the Edge

  • Berzoini, R.
  • D'Arnese, E.
  • Conficconi, D.
  • Santambrogio, M. D.
IEEE J Biomed Health Inform 2023 Journal Article, cited 0 times
Website
Semantic segmentation and classification are pivotal in many clinical applications, such as radiation dose quantification and surgery planning. While manually labeling images is highly time-consuming, the advent of Deep Learning (DL) has introduced a valuable alternative. Nowadays, DL model inference is run on Graphics Processing Units (GPUs), which are power-hungry devices and therefore not the most suitable solution in constrained environments, where Field Programmable Gate Arrays (FPGAs) become an appealing alternative given their remarkable performance-per-watt ratio. Unfortunately, FPGAs are hard for non-experts to use, and tools that open their use to the computer vision community are still limited. For these reasons, we propose NERONE, which allows end users to seamlessly benefit from FPGA acceleration and energy efficiency without modifying their DL development flows. To prove the capability of NERONE to cover different network architectures, we have developed four models, one for each of the chosen datasets (three for segmentation and one for classification), and we deployed them, thanks to NERONE, on three different embedded FPGA-powered boards, achieving top average energy efficiency improvements of 3.4x and 1.9x against a mobile and a datacenter GPU device, respectively.

On How to Push Efficient Medical Semantic Segmentation to the Edge: the SENECA approach

  • Berzoini, Raffaele
  • D'Arnese, Eleonora
  • Conficconi, Davide
2022 Journal Article, cited 0 times
Semantic segmentation is the process of assigning each input image pixel a value representing a class, and it enables the clustering of pixels into object instances. It is a highly employed computer vision task in various fields such as autonomous driving and medical image analysis. In particular, in medical practice, semantic segmentation identifies different regions of interest within an image, like different organs or anomalies such as tumors. Fully Convolutional Networks (FCNs) have been employed to solve semantic segmentation in different fields and found their way in the medical one. In this context, the low contrast among semantically different areas, the constraint related to energy consumption, and computation resource availability increase the complexity and limit their adoption in daily practice. Based on these considerations, we propose SENECA to bring medical semantic segmentation to the edge with high energy efficiency and low segmentation time while preserving the accuracy. We reached a throughput of 335.4 ± 0.34 frames per second on the FPGA, 4.65× better than its GPU counterpart, with a global dice score of 93.04% ± 0.07 and an improvement in terms of energy efficiency with respect to the GPU of 12.7×.

Fast Marching Energy CNN

  • Bertrand, Théo
  • Makaroff, Nicolas
  • Cohen, Laurent D.
2023 Book Section, cited 0 times
Website
Leveraging geodesic distances and the geometrical information they convey is key for many data-oriented applications in imaging. Geodesic distance computation has been used for long for image segmentation using Image based metrics. We introduce a new method by generating isotropic Riemannian metrics adapted to a problem using CNN and give as illustrations an example of application. We then apply this idea to the segmentation of brain tumours as unit balls for the geodesic distance computed with the metric potential output by a CNN, thus imposing geometrical and topological constraints on the output mask. We show that geodesic distance modules work well in machine learning frameworks and can be used to achieve state-of-the-art performances while ensuring geometrical and/or topological properties.
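
The geodesic building block can be illustrated with a fast-marching solver (assuming the scikit-fmm package as a stand-in; the paper's own implementation is not reproduced here). A CNN-predicted metric potential plays the role of the speed map, and a segmentation mask is a "unit ball" of the resulting travel time:

```python
import numpy as np
import skfmm  # scikit-fmm fast-marching solver (assumed dependency)

phi = np.ones((64, 64))
phi[32, 32] = -1                   # seed: the front starts around this pixel
speed = np.ones((64, 64))          # a CNN-output metric potential would go here
t = skfmm.travel_time(phi, speed)  # geodesic distance from the seed
mask = t < 10.0                    # ball of radius 10 under the learned metric
```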

Optimization with Soft Dice Can Lead to a Volumetric Bias

  • Bertels, Jeroen
  • Robben, David
  • Vandermeulen, Dirk
  • Suetens, Paul
2020 Book Section, cited 21 times
Website
Segmentation is a fundamental task in medical image analysis. The clinical interest is often to measure the volume of a structure. To evaluate and compare segmentation methods, the similarity between a segmentation and a predefined ground truth is measured using metrics such as the Dice score. Recent segmentation methods based on convolutional neural networks use a differentiable surrogate of the Dice score, such as soft Dice, explicitly as the loss function during the learning phase. Even though this approach leads to improved Dice scores, we find that, both theoretically and empirically on four medical tasks, it can introduce a volumetric bias for tasks with high inherent uncertainty. As such, this may limit the method’s clinical applicability.
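
The surrogate in question fits in a few lines; a hedged sketch (assuming PyTorch, with `probs` the network's sigmoid outputs and `target` a binary mask):

```python
import torch

def soft_dice_loss(probs: torch.Tensor, target: torch.Tensor, eps: float = 1e-6):
    inter = (probs * target).sum()
    denom = probs.sum() + target.sum()
    return 1.0 - (2.0 * inter + eps) / (denom + eps)  # 0 at perfect overlap
```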

Detection of Motion Artifacts in Thoracic CT Scans

  • Beri, Puneet S.
2020 Thesis, cited 0 times
Website
The analysis of a lung CT scan can be a complicated task due to the presence of certain image artifacts, such as cardiac motion, respiratory motion, beam hardening artifacts, and so on. In this project, we built a deep learning based model for the detection of these motion artifacts in the image. Using biomedical image segmentation models, we trained the model on lung CT scans from the LIDC dataset. The developed model is able to identify the regions in the scan which are affected by motion by segmenting the image. It is also able to separate normal (or easy to analyze) CT scans from CT scans that may yield incorrect quantitative analysis, even when examples of image artifacts or low quality scans are scarce. In addition, the model computes a quality score for the scan based on the amount of artifacts detected, which could limit the scan's reliability for further diagnosis of disease or disease progression. We used two main approaches during experimentation, 2D slice-based and 2D patch-based, of which the patch-based approaches yielded the final model. The final model gave an AUC of 0.814 in the ROC analysis of the evaluation study conducted. Discussions of the approaches and findings of the final model are provided, and future directions are proposed.

Pulmonary nodule detection using a cascaded SVM classifier

  • Bergtholdt, Martin
  • Wiemker, Rafael
  • Klinder, Tobias
2016 Conference Proceedings, cited 9 times
Website
Automatic detection of lung nodules from chest CT has been researched intensively over the last decades, resulting also in several commercial products. However, solutions are adopted only slowly into daily clinical routine, as many current CAD systems still potentially miss true nodules while at the same time generating too many false positives (FP). While many earlier approaches had to rely on rather few cases for development, larger databases are now becoming available and can be used for algorithmic development. In this paper, we address the problem of lung nodule detection via a cascaded SVM classifier. The idea is to sequentially perform two classification tasks in order to select, from an extremely large pool of potential candidates, the few most likely ones. As the initial pool is allowed to contain thousands of candidates, very loose criteria can be applied during this pre-selection. In this way, the chances that a true nodule is falsely rejected as a candidate are reduced significantly. The final algorithm is trained and tested on the full LIDC/IDRI database. Comparison is done against two previously published CAD systems. Overall, the algorithm achieved a sensitivity of 0.859 at 2.5 FP/volume, where the other two achieved sensitivity values of 0.321 and 0.625, respectively. On low dose data sets, only a slight increase in the number of FP/volume was observed, while the sensitivity was not affected.
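
The cascade idea can be sketched generically (not the authors' implementation): a first SVM is thresholded for very high recall so that true nodules survive pre-selection, and a second SVM on richer features prunes the surviving false positives:

```python
import numpy as np
from sklearn.svm import SVC  # svm1, svm2 below are assumed to be fitted SVCs

def cascade_predict(svm1: SVC, svm2: SVC, feats_cheap, feats_rich, recall_bias=-0.5):
    # Stage 1: shift the decision threshold so candidates are rarely rejected
    keep = svm1.decision_function(feats_cheap) > recall_bias
    out = np.zeros(len(feats_cheap), dtype=bool)
    # Stage 2: classify only the survivors with the stronger model
    out[keep] = svm2.predict(feats_rich[keep]).astype(bool)
    return out
```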

Adverse prognosis of glioblastoma contacting the subventricular zone: Biological correlates

  • Berendsen, S.
  • van Bodegraven, E.
  • Seute, T.
  • Spliet, W. G. M.
  • Geurts, M.
  • Hendrikse, J.
  • Schoysman, L.
  • Huiszoon, W. B.
  • Varkila, M.
  • Rouss, S.
  • Bell, E. H.
  • Kroonen, J.
  • Chakravarti, A.
  • Bours, V.
  • Snijders, T. J.
  • Robe, P. A.
PLoS One 2019 Journal Article, cited 2 times
Website
INTRODUCTION: The subventricular zone (SVZ) in the brain is associated with gliomagenesis and resistance to treatment in glioblastoma. In this study, we investigate the prognostic role and biological characteristics of subventricular zone (SVZ) involvement in glioblastoma. METHODS: We analyzed T1-weighted, gadolinium-enhanced MR images of a retrospective cohort of 647 primary glioblastoma patients diagnosed between 2005-2013, and performed a multivariable Cox regression analysis to adjust the prognostic effect of SVZ involvement for clinical patient- and tumor-related factors. Protein expression patterns of markers including neural stem cellness (CD133 and GFAP-delta) and (epithelial-)mesenchymal transition (NF-kappaB, C/EBP-beta and STAT3) were determined with immunohistochemistry on tissue microarrays containing 220 of the tumors. Molecular classification and mRNA expression-based gene set enrichment analyses, miRNA expression and SNP copy number analyses were performed on fresh frozen tissue obtained from 76 tumors. Confirmatory analyses were performed on glioblastoma TCGA/TCIA data. RESULTS: Involvement of the SVZ was a significant adverse prognostic factor in glioblastoma, independent of age, KPS, surgery type and postoperative treatment. Tumor volume and postoperative complications did not explain this prognostic effect. SVZ contact was associated with increased nuclear expression of the (epithelial-)mesenchymal transition markers C/EBP-beta and phospho-STAT3. SVZ contact was not associated with molecular subtype, distinct gene expression patterns, or markers of stem cellness. Our main findings were confirmed in a cohort of 229 TCGA/TCIA glioblastomas. CONCLUSION: In conclusion, involvement of the SVZ is an independent prognostic factor in glioblastoma, and associates with increased expression of key markers of (epithelial-)mesenchymal transformation, but does not correlate with stem cellness, molecular subtype, or specific (mi)RNA expression patterns.

Segmentation of three-dimensional images with parametric active surfaces and topology changes

  • Benninghoff, Heike
  • Garcke, Harald
Journal of Scientific Computing 2017 Journal Article, cited 1 times
Website
In this paper, we introduce a novel parametric finite element method for segmentation of three-dimensional images. We consider a piecewise constant version of the Mumford-Shah and the Chan-Vese functionals and perform a region-based segmentation of 3D image data. An evolution law is derived from energy minimization problems which push the surfaces to the boundaries of 3D objects in the image. We propose a parametric scheme which describes the evolution of parametric surfaces. An efficient finite element scheme is proposed for a numerical approximation of the evolution equations. Since standard parametric methods cannot handle topology changes automatically, an efficient method is presented to detect, identify and perform changes in the topology of the surfaces. One main focus of this paper is the algorithmic details needed to handle topology changes like splitting and merging of surfaces and changes of the genus of a surface. Different artificial images are studied to demonstrate the ability to detect the different types of topology changes. Finally, the parametric method is applied to segmentation of medical 3D images.

Automatic Design of Window Operators for the Segmentation of the Prostate Gland in Magnetic Resonance Images

  • Benalcázar, Marco E
  • Brun, Marcel
  • Ballarin, Virginia
2015 Conference Proceedings, cited 0 times
Website

Deep Convolutional Neural Networks for Brain Tumor Segmentation: Boosting Performance Using Deep Transfer Learning: Preliminary Results

  • Ben Naceur, Mostefa
  • Akil, Mohamed
  • Saouli, Rachida
  • Kachouri, Rostom
2020 Book Section, cited 8 times
Website
Brain tumor segmentation through MRI image analysis is one of the most challenging issues in the medical field. Glioblastomas (GBM) invade the surrounding tissue rather than displacing it, causing unclear boundaries; furthermore, GBM in MRI scans can have the same appearance as gliosis, stroke, inflammation and blood spots. Fully automatic brain tumor segmentation methods also face other issues, such as false positive and false negative regions. In this paper, we present new pipelines to boost the prediction of GBM tumoral regions. These pipelines are based on three stages: in the first stage, we develop Deep Convolutional Neural Networks (DCNNs); in the second stage, we extract multi-dimensional features from the higher-resolution representation of the DCNNs; in the third stage, we feed the extracted features into machine learning algorithms such as random forest (RF), logistic regression (LR), and principal component analysis with support vector machine (PCA-SVM). Our experimental results are reported on the BRATS-2019 dataset, where our proposed pipelines achieved state-of-the-art performance. The average Dice scores of our best proposed brain tumor segmentation pipeline are 0.85, 0.76 and 0.74 for whole tumor, tumor core, and enhancing tumor, respectively. Finally, our proposed pipeline provides accurate segmentation, and its computational efficiency in terms of inference time makes it practical for day-to-day use in clinical centers and for research.

Prostate Cancer Delineation in MRI Images Based on Deep Learning: Quantitative Comparison and Promising Perspective

  • Ben Loussaief, Eddardaa
  • Abdel-Nasser, Mohamed
  • Puig, Domènec
2021 Conference Paper, cited 0 times
Prostate cancer is the most common malignant male tumor. Magnetic Resonance Imaging (MRI) plays a crucial role in the detection, diagnosis, and treatment of prostate cancer. Computer-aided diagnosis (CAD) systems can help doctors analyze MRI images and detect prostate cancer earlier. One of the key stages of prostate cancer CAD systems is the automatic delineation of the prostate. Deep learning has recently demonstrated promising segmentation results with medical images. The purpose of this paper is to compare state-of-the-art deep learning-based approaches for prostate delineation in MRI images and to discuss their limitations and strengths. Besides, we introduce a promising perspective for prostate tumor classification in MRI images. This perspective includes using the best segmentation model to detect prostate tumors in MRI images, and then employing the segmented images to extract the radiomics features that will be used to discriminate benign from malignant prostate tumors.

Ensembles of Convolutional Neural Networks for Survival Time Estimation of High-Grade Glioma Patients from Multimodal MRI

  • Ben Ahmed, Kaoutar
  • Hall, Lawrence O.
  • Goldgof, Dmitry B.
  • Gatenby, Robert
Diagnostics 2022 Journal Article, cited 2 times
Website
Glioma is the most common type of primary malignant brain tumor. Accurate survival time prediction for glioma patients may positively impact treatment planning. In this paper, we develop an automatic survival time prediction tool for glioblastoma patients along with an effective solution to the limited availability of annotated medical imaging datasets. Ensembles of snapshots of three-dimensional (3D) deep convolutional neural networks (CNNs) are applied to Magnetic Resonance Imaging (MRI) data to predict the survival time of high-grade glioma patients. Additionally, multi-sequence MRI images were used to enhance survival prediction performance. A novel way to leverage the potential of ensembles to overcome the limitation of labeled medical image availability is shown. This new classification method separates glioblastoma patients into long- and short-term survivors. The BraTS (Brain Tumor Image Segmentation) 2019 training dataset was used in this work. Each patient case consisted of three MRI sequences (T1CE, T2, and FLAIR). Our training set contained 163 cases while the test set included 46 cases. The best known prediction accuracy of 74% for this type of problem was achieved on the unseen test set.
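
Snapshot ensembling itself is compact: train with cyclic learning-rate annealing and checkpoint the model at the end of each cycle. A hedged sketch assuming PyTorch (the loader, model and loss are hypothetical; cycle lengths are illustrative):

```python
import copy
import torch

def train_snapshots(model, loader, loss_fn, cycles=3, epochs_per_cycle=10, lr=0.1):
    snapshots = []
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    sched = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(opt, T_0=epochs_per_cycle)
    for epoch in range(cycles * epochs_per_cycle):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        sched.step()
        if (epoch + 1) % epochs_per_cycle == 0:
            snapshots.append(copy.deepcopy(model).eval())  # one snapshot per cycle
    return snapshots  # average these models' predictions to form the ensemble
```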

Towards High Performing and Reliable Deep Convolutional Neural Network Models for Typically Limited Medical Imaging Datasets

  • Ben Ahmed, Kaoutar
2022 Thesis, cited 0 times
Website
Artificial Intelligence (AI) is "The science and engineering of making intelligent machines, especially intelligent computer programs" [93]. Artificial Intelligence has been applied in a wide range of fields including automobiles, space, robotics, and healthcare. According to recent reports, AI will have a huge impact on increasing the world economy by 2030 and it's expected that the greatest impact will be in the field of healthcare. The global market size of AI in healthcare was estimated at USD 10.4 billion in 2021 and is expected to grow at a high rate from 2022 to 2030 (CAGR of 38.4%) [124]. Applications of AI in healthcare include robot-assisted surgery, disease detection, health monitoring, and automatic medical image analysis. Healthcare organizations are becoming increasingly interested in how artificial intelligence can support better patient care while reducing costs and improving efficiencies. Deep learning is a subset of AI that is becoming transformative for healthcare. Deep learning offers fast and accurate data analysis. Deep learning is based on the concept of artificial neural networks to solve complex problems. In this dissertation, we propose deep learning-based solutions to the problems of limited medical imaging in two clinical contexts: brain tumor prognosis and COVID-19 diagnosis. For brain tumor prognosis, we suggest novel systems for overall survival prediction of Glioblastoma patients from small magnetic resonance imaging (MRI) datasets based on ensembles of convolutional neural networks (CNNs). For COVID-19 diagnosis, we reveal one critical problem with CNN-based approaches for predicting COVID-19 from chest X-ray (CXR) imaging: shortcut learning. Then, we experimentally suggest methods to mitigate this problem to build fair, reliable, robust, and transparent deep learning based clinical decision support systems. We discovered this problem with CNNs using chest X-ray imaging. However, the issue and solutions generally apply to other imaging modalities and recognition problems.

Development of a 3D CNN-based AI Model for Automated Segmentation of the Prostatic Urethra

  • Belue, M. J.
  • Harmon, S. A.
  • Patel, K.
  • Daryanani, A.
  • Yilmaz, E. C.
  • Pinto, P. A.
  • Wood, B. J.
  • Citrin, D. E.
  • Choyke, P. L.
  • Turkbey, B.
Acad Radiol 2022 Journal Article, cited 0 times
Website
RATIONALE AND OBJECTIVE: The combined use of prostate cancer radiotherapy and MRI planning is increasingly being used in the treatment of clinically significant prostate cancers. The radiotherapy dosage quantity is limited by toxicity in organs with de-novo genitourinary toxicity occurrence remaining unperturbed. Estimation of the urethral radiation dose via anatomical contouring may improve our understanding of genitourinary toxicity and its related symptoms. Yet, urethral delineation remains an expert-dependent and time-consuming procedure. In this study, we aim to develop a fully automated segmentation tool for the prostatic urethra. MATERIALS AND METHODS: This study incorporated 939 patients' T2-weighted MRI scans (train/validation/test/excluded: 657/141/140/1 patients), including in-house and public PROSTATE-x datasets, and their corresponding ground truth urethral contours from an expert genitourinary radiologist. The AI model was developed using MONAI framework and was based on a 3D-UNet. AI model performance was determined by Dice score (volume-based) and the Centerline Distance (CLD) between the prediction and ground truth centers (slice-based). All predictions were compared to ground truth in a systematic failure analysis to elucidate the model's strengths and weaknesses. The Wilcoxon-rank sum test was used for pair-wise comparison of group differences. RESULTS: The overall organ-adjusted Dice score for this model was 0.61 and overall CLD was 2.56 mm. When comparing prostates with symmetrical (n = 117) and asymmetrical (n = 23) benign prostate hyperplasia (BPH), the AI model performed better on symmetrical prostates compared to asymmetrical in both Dice score (0.64 vs. 0.51 respectively, p < 0.05) and mean CLD (2.3 mm vs. 3.8 mm respectively, p < 0.05). When calculating location-specific performance, the performance was highest at the apex and lowest at the base location of the prostate for Dice and CLD. Dice location dependence: symmetrical (Apex, Mid, Base: 0.69 vs. 0.67 vs. 0.54 respectively, p < 0.05) and asymmetrical (Apex, Mid, Base: 0.68 vs. 0.52 vs. 0.39 respectively, p < 0.05). CLD location dependence: symmetrical (Apex, Mid, Base: 1.43 mm vs. 2.15 mm vs. 3.28 mm, p < 0.05) and asymmetrical (Apex, Mid, Base: 1.83 mm vs. 3.1 mm vs. 6.24 mm, p < 0.05). CONCLUSION: We developed a fully automated prostatic urethra segmentation AI tool yielding its best performance in prostate glands with symmetric BPH features. This system can potentially be used to assist treatment planning in patients who can undergo whole gland radiation therapy or ablative focal therapy.

Analysis of postprocessing steps for residue function dependent dynamic susceptibility contrast (DSC)‐MRI biomarkers and their clinical impact on glioma grading for both 1.5 and 3T

  • Bell, Laura C
  • Stokes, Ashley M
  • Quarles, C. Chad
Journal of Magnetic Resonance Imaging 2019 Journal Article, cited 0 times
Website
BACKGROUND: Dynamic susceptibility contrast (DSC)-MRI analysis pipelines differ across studies and sites, potentially confounding the clinical value and use of the derived biomarkers. PURPOSE/HYPOTHESIS: To investigate how postprocessing steps for computation of cerebral blood volume (CBV) and residue function dependent parameters (cerebral blood flow [CBF], mean transit time [MTT], capillary transit heterogeneity [CTH]) impact glioma grading. STUDY TYPE: Retrospective study from The Cancer Imaging Archive (TCIA). POPULATION: Forty-nine subjects with low- and high-grade gliomas. FIELD STRENGTH/SEQUENCE: 1.5 and 3.0T clinical systems using a single-echo echo planar imaging (EPI) acquisition. ASSESSMENT: Manual regions of interest (ROIs) were provided by TCIA and automatically segmented ROIs were generated by k-means clustering. CBV was calculated based on conventional equations. Residue function dependent biomarkers (CBF, MTT, CTH) were found by two deconvolution methods: circular discretization followed by a signal-to-noise ratio (SNR)-adapted eigenvalue thresholding (Method 1) and Volterra discretization with L-curve-based Tikhonov regularization (Method 2). STATISTICAL TESTS: Analysis of variance, receiver operating characteristics (ROC), and logistic regression tests. RESULTS: MTT alone was unable to statistically differentiate glioma grade (P > 0.139). When normalized, tumor CBF, CTH, and CBV did not differ across field strengths (P > 0.141). Biomarkers normalized to automatically segmented regions performed equally (rCTH AUROC is 0.73 compared with 0.74) or better (rCBF AUROC increases from 0.74 to 0.84; rCBV AUROC increases from 0.78 to 0.86) than manually drawn ROIs. By updating the current deconvolution steps (Method 2), rCTH can act as a classifier for glioma grade (P < 0.007), but not if processed by current conventional DSC methods (Method 1) (P > 0.577). Lastly, higher-order biomarkers (e.g., rCBF and rCTH) along with rCBV increase AUROC to 0.92 for differentiating tumor grade as compared with 0.78 and 0.86 (manual and automatic reference regions, respectively) for rCBV alone. DATA CONCLUSION: With optimized analysis pipelines, higher-order perfusion biomarkers (rCBF and rCTH) improve glioma grading as compared with CBV alone. Additionally, postprocessing steps impact thresholds needed for glioma grading. LEVEL OF EVIDENCE: 3. TECHNICAL EFFICACY: Stage 2.
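
The deconvolution at the heart of the residue-function estimates can be written as a regularised linear solve. A simplified Tikhonov sketch with a fixed regularisation weight (the paper's Volterra discretization and L-curve selection are not reproduced; `aif` and `tissue` are hypothetical concentration curves):

```python
import numpy as np
from scipy.linalg import toeplitz

def tikhonov_residue(aif, tissue, lam, dt=1.0):
    """Solve tissue = dt * (A @ r) for the residue function r with an L2 penalty."""
    A = dt * toeplitz(aif, np.zeros_like(aif))  # lower-triangular convolution matrix
    n = len(aif)
    return np.linalg.solve(A.T @ A + (lam ** 2) * np.eye(n), A.T @ tissue)
```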

Evaluating the Use of rCBV as a Tumor Grade and Treatment Response Classifier Across NCI Quantitative Imaging Network Sites: Part II of the DSC-MRI Digital Reference Object (DRO) Challenge

  • Bell, Laura C
  • Semmineh, Natenael
  • An, Hongyu
  • Eldeniz, Cihat
  • Wahl, Richard
  • Schmainda, Kathleen M
  • Prah, Melissa A
  • Erickson, Bradley J
  • Korfiatis, Panagiotis
  • Wu, Chengyue
  • Sorace, Anna G
  • Yankeelov, Thomas E
  • Rutledge, Neal
  • Chenevert, Thomas L
  • Malyarenko, Dariya
  • Liu, Yichu
  • Brenner, Andrew
  • Hu, Leland S
  • Zhou, Yuxiang
  • Boxerman, Jerrold L
  • Yen, Yi-Fen
  • Kalpathy-Cramer, Jayashree
  • Beers, Andrew L
  • Muzi, Mark
  • Madhuranthakam, Ananth J
  • Pinho, Marco
  • Johnson, Brian
  • Quarles, C Chad
Tomography 2020 Journal Article, cited 1 times
Website
We have previously characterized the reproducibility of brain tumor relative cerebral blood volume (rCBV) using a dynamic susceptibility contrast magnetic resonance imaging digital reference object across 12 sites using a range of imaging protocols and software platforms. As expected, reproducibility was highest when imaging protocols and software were consistent, but decreased when they were variable. Our goal in this study was to determine the impact of rCBV reproducibility for tumor grade and treatment response classification. We found that varying imaging protocols and software platforms produced a range of optimal thresholds for both tumor grading and treatment response, but the performance of these thresholds was similar. These findings further underscore the importance of standardizing acquisition and analysis protocols across sites and software benchmarking.

Longitudinal fan-beam computed tomography dataset for head-and-neck squamous cell carcinoma patients

  • Bejarano, Tatiana
  • De Ornelas-Couto, Mariluz
  • Mihaylov, Ivaylo B.
Medical Physics 2019 Journal Article, cited 0 times
Website
Purpose: To describe in detail a dataset consisting of longitudinal fan-beam computed tomography (CT) imaging to visualize anatomical changes in head-and-neck squamous cell carcinoma (HNSCC) patients throughout the radiotherapy (RT) treatment course. Acquisition and validation methods: This dataset consists of CT images from 31 HNSCC patients who underwent volumetric modulated arc therapy (VMAT). Patients had three CT scans acquired throughout the duration of the radiation treatment course: pretreatment planning CT scans a median of 13 days before treatment (range: 2–27), mid-treatment CT at 22 days after start of treatment (range: 13–38), and post-treatment CT 65 days after start of treatment (range: 35–192). Patients received RT treatment to a total dose of 58–70 Gy, using daily 2.0–2.20 Gy fractions for 30–35 fractions. The fan-beam CT images were acquired using a Siemens 16-slice CT scanner head protocol with 120 kV and a current of 400 mAs. A helical scan with 1 rotation per second was used with a slice thickness of 2 mm and a table increment of 1.2 mm. In addition to the imaging data, contours of anatomical structures for RT, demographic, and outcome measurements are provided. Data format and usage notes: The dataset with DICOM files including images, RTSTRUCT files, and RTDOSE files can be found and publicly accessed in the Cancer Imaging Archive (TCIA, http://www.cancerimagingarchive.net/) as the collection Head-and-neck squamous cell carcinoma patients with CT taken during pretreatment, mid-treatment, and post-treatment (HNSCC-3DCT-RT). Discussion: This is the first dataset to date in TCIA which provides a collection of multiple CT imaging studies (pretreatment, mid-treatment, and post-treatment) throughout the treatment course. The dataset can serve a wide array of research projects including (but not limited to): quantitative imaging assessment, investigation of anatomical changes with treatment progress, dosimetry of target volumes and/or normal structures due to anatomical changes occurring during treatment, investigation of RT toxicity, and concurrent chemotherapy and RT effects on head-and-neck patients.

Radiogenomic analysis of hypoxia pathway reveals computerized MRI descriptors predictive of overall survival in Glioblastoma

  • Beig, Niha
  • Patel, Jay
  • Prasanna, Prateek
  • Partovi, Sasan
  • Varadhan, Vinay
  • Madabhushi, Anant
  • Tiwari, Pallavi
2017 Conference Proceedings, cited 3 times
Website

Radiogenomic analysis of hypoxia pathway is predictive of overall survival in Glioblastoma

  • Beig, N.
  • Patel, J.
  • Prasanna, P.
  • Hill, V.
  • Gupta, A.
  • Correa, R.
  • Bera, K.
  • Singh, S.
  • Partovi, S.
  • Varadan, V.
  • Ahluwalia, M.
  • Madabhushi, A.
  • Tiwari, P.
Scientific Reports 2018 Journal Article, cited 5 times
Website
Hypoxia, a characteristic trait of Glioblastoma (GBM), is known to cause resistance to chemo-radiation treatment and is linked with poor survival. There is hence an urgent need to non-invasively characterize tumor hypoxia to improve GBM management. We hypothesized that (a) radiomic texture descriptors can capture tumor heterogeneity manifested as a result of molecular variations in tumor hypoxia, on routine treatment naive MRI, and (b) these imaging based texture surrogate markers of hypoxia can discriminate GBM patients as short-term (STS), mid-term (MTS), and long-term survivors (LTS). 115 studies (33 STS, 41 MTS, 41 LTS) with gadolinium-enhanced T1-weighted MRI (Gd-T1w) and T2-weighted (T2w) and FLAIR MRI protocols and the corresponding RNA sequences were obtained. After expert segmentation of necrotic, enhancing, and edematous/nonenhancing tumor regions for every study, 30 radiomic texture descriptors were extracted from every region across every MRI protocol. Using the expression profile of 21 hypoxia-associated genes, a hypoxia enrichment score (HES) was obtained for the training cohort of 85 cases. Mutual information score was used to identify a subset of radiomic features that were most informative of HES within 3-fold cross-validation to categorize studies as STS, MTS, and LTS. When validated on an additional cohort of 30 studies (11 STS, 9 MTS, 10 LTS), our results revealed that the most discriminative features of HES were also able to distinguish STS from LTS (p = 0.003).
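
The feature-ranking step can be sketched with scikit-learn's mutual information estimator (an illustrative stand-in for the authors' pipeline; the arrays are toy data):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
X = rng.normal(size=(85, 30))                    # 85 cases x 30 texture features (toy)
hes = 0.8 * X[:, 3] + 0.2 * rng.normal(size=85)  # toy hypoxia enrichment score

mi = mutual_info_regression(X, hes, random_state=0)
top = np.argsort(mi)[::-1][:5]  # indices of the features most informative of HES
```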

Radiogenomic-Based Survival Risk Stratification of Tumor Habitat on Gd-T1w MRI Is Associated with Biological Processes in Glioblastoma

  • Beig, Niha
  • Bera, Kaustav
  • Prasanna, Prateek
  • Antunes, Jacob
  • Correa, Ramon
  • Singh, Salendra
  • Saeed Bamashmos, Anas
  • Ismail, Marwa
  • Braman, Nathaniel
  • Verma, Ruchika
  • Hill, Virginia B
  • Statsevych, Volodymyr
  • Ahluwalia, Manmeet S
  • Varadan, Vinay
  • Madabhushi, Anant
  • Tiwari, Pallavi
Clin Cancer Res 2020 Journal Article, cited 0 times
Website
PURPOSE: To (i) create a survival risk score using radiomic features from the tumor habitat on routine MRI to predict progression-free survival (PFS) in glioblastoma and (ii) obtain a biological basis for these prognostic radiomic features, by studying their radiogenomic associations with molecular signaling pathways. EXPERIMENTAL DESIGN: Two hundred three patients with pretreatment Gd-T1w, T2w, T2w-FLAIR MRI were obtained from 3 cohorts: The Cancer Imaging Archive (TCIA; n = 130), Ivy GAP (n = 32), and Cleveland Clinic (n = 41). Gene-expression profiles of corresponding patients were obtained for TCIA cohort. For every study, following expert segmentation of tumor subcompartments (necrotic core, enhancing tumor, peritumoral edema), 936 3D radiomic features were extracted from each subcompartment across all MRI protocols. Using Cox regression model, radiomic risk score (RRS) was developed for every protocol to predict PFS on the training cohort (n = 130) and evaluated on the holdout cohort (n = 73). Further, Gene Ontology and single-sample gene set enrichment analysis were used to identify specific molecular signaling pathway networks associated with RRS features. RESULTS: Twenty-five radiomic features from the tumor habitat yielded the RRS. A combination of RRS with clinical (age and gender) and molecular features (MGMT and IDH status) resulted in a concordance index of 0.81 (P < 0.0001) on training and 0.84 (P = 0.03) on the test set. Radiogenomic analysis revealed associations of RRS features with signaling pathways for cell differentiation, cell adhesion, and angiogenesis, which contribute to chemoresistance in GBM. CONCLUSIONS: Our findings suggest that prognostic radiomic features from routine Gd-T1w MRI may also be significantly associated with key biological processes that affect response to chemotherapy in GBM.
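
Deriving a risk score from a Cox model can be sketched in a few lines (assuming the lifelines package; the columns and values are illustrative toys, not the study's 25-feature model):

```python
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "feat_1": [0.2, 1.1, 0.7, 1.9, 0.3, 1.4],
    "feat_2": [3.1, 2.2, 2.8, 1.0, 3.5, 1.2],
    "pfs_months": [18, 7, 12, 4, 22, 6],   # progression-free survival times
    "progressed": [1, 1, 0, 1, 0, 1],      # event indicator
})
cph = CoxPHFitter()
cph.fit(df, duration_col="pfs_months", event_col="progressed")
# exp(linear predictor); monotone in the linear predictor, so usable as a risk score
risk_score = cph.predict_partial_hazard(df)
```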

Multi‐site quality and variability analysis of 3D FDG PET segmentations based on phantom and clinical image data

  • Beichel, Reinhard R
  • Smith, Brian J
  • Bauer, Christian
  • Ulrich, Ethan J
  • Ahmadvand, Payam
  • Budzevich, Mikalai M
  • Gillies, Robert J
  • Goldgof, Dmitry
  • Grkovski, Milan
  • Hamarneh, Ghassan
Medical Physics 2017 Journal Article, cited 7 times
Website
PURPOSE: Radiomics utilizes a large number of image-derived features for quantifying tumor characteristics that can in turn be correlated with response and prognosis. Unfortunately, extraction and analysis of such image-based features is subject to measurement variability and bias. The challenge for radiomics is particularly acute in Positron Emission Tomography (PET) where limited resolution, a high noise component related to the limited stochastic nature of the raw data, and the wide variety of reconstruction options confound quantitative feature metrics. Extracted feature quality is also affected by tumor segmentation methods used to define regions over which to calculate features, making it challenging to produce consistent radiomics analysis results across multiple institutions that use different segmentation algorithms in their PET image analysis. Understanding each element contributing to these inconsistencies in quantitative image feature and metric generation is paramount for ultimate utilization of these methods in multi-institutional trials and clinical oncology decision making. METHODS: To assess segmentation quality and consistency at the multi-institutional level, we conducted a study of seven institutional members of the National Cancer Institute Quantitative Imaging Network. For the study, members were asked to segment a common set of phantom PET scans acquired over a range of imaging conditions as well as a second set of head and neck cancer (HNC) PET scans. Segmentations were generated at each institution using their preferred approach. In addition, participants were asked to repeat segmentations with a time interval between initial and repeat segmentation. This procedure resulted in a total of 806 phantom insert and 641 lesion segmentations. Subsequently, the volume was computed from the segmentations and compared to the corresponding reference volume by means of statistical analysis. RESULTS: On the two test sets (phantom and HNC PET scans), the performance of the seven segmentation approaches was as follows. On the phantom test set, the mean relative volume errors ranged from 29.9% to 87.8% of the ground truth reference volumes, and the repeat difference for each institution ranged from -36.4% to 39.9%. On the HNC test set, the mean relative volume error ranged from -50.5% to 701.5%, and the repeat difference for each institution ranged from -37.7% to 31.5%. In addition, performance measures per phantom insert/lesion size categories are given in the paper. On phantom data, regression analysis resulted in coefficient of variation (CV) components of 42.5% for scanners, 26.8% for institutional approaches, 21.1% for repeated segmentations, 14.3% for relative contrasts, 5.3% for count statistics (acquisition times), and 0.0% for repeated scans. Analysis showed that the CV components for approaches and repeated segmentations were significantly larger on the HNC test set, with increases of 112.7% and 102.4%, respectively. CONCLUSION: Analysis results underline the importance of PET scanner reconstruction harmonization and imaging protocol standardization for quantification of lesion volumes. In addition, to enable a distributed multi-site analysis of FDG PET images, harmonization of analysis approaches and operator training in combination with highly automated segmentation methods seems to be advisable. Future work will focus on quantifying the impact of segmentation variation on radiomics system performance.

Anatomical DCE-MRI phantoms generated from glioma patient data

  • Beers, Andrew
  • Chang, Ken
  • Brown, James
  • Zhu, Xia
  • Sengupta, Dipanjan
  • Willke, Theodore L
  • Gerstner, Elizabeth
  • Rosen, Bruce
  • Kalpathy-Cramer, Jayashree
2018 Conference Proceedings, cited 0 times
Website

Integration of proteomics with CT-based qualitative and radiomic features in high-grade serous ovarian cancer patients: an exploratory analysis

  • Beer, Lucian
  • Sahin, Hilal
  • Bateman, Nicholas W
  • Blazic, Ivana
  • Vargas, Hebert Alberto
  • Veeraraghavan, Harini
  • Kirby, Justin
  • Fevrier-Sullivan, Brenda
  • Freymann, John B
  • Jaffe, C Carl
European Radiology 2020 Journal Article, cited 1 times
Website

Semantic Composition of Data Analytical Processes

  • Bednár, Peter
  • Ivančáková, Juliana
  • Sarnovský, Martin
2024 Journal Article, cited 0 times
Website
This paper presents a semantic framework for the description and automatic composition of data analytical processes. The framework specifies how to describe goals, input data, outputs, and the various data operators for data pre-processing and modelling that can be applied to achieve the goals. The main contribution of this paper is a formal language for the specification of the preconditions, postconditions, inputs, and outputs of the data operators. The formal description of the operators with logical expressions allows automatic composition of operators into complex workflows achieving the specified goals of the data analysis. The semantic framework was evaluated on two real-world use cases from the medical domain, where the automatically generated workflow was compared with an implementation manually programmed by a data scientist.
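The operator-composition idea can be illustrated with a minimal sketch, assuming invented operator names and conditions (the paper's actual formal language is richer): each operator declares preconditions and postconditions over a set of state facts, and a forward search chains applicable operators until the goal holds.

```python
# Minimal precondition/postcondition-driven composition sketch.
# Operator names and conditions are illustrative, not from the paper.
from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    name: str
    pre: frozenset   # facts that must hold before the operator applies
    post: frozenset  # facts the operator adds to the state

def compose(state, goal, ops, plan=()):
    """Depth-first forward search for an operator chain reaching the goal."""
    if goal <= state:
        return list(plan)
    for op in ops:
        if op.pre <= state and not op.post <= state:
            found = compose(state | op.post, goal, ops, plan + (op.name,))
            if found is not None:
                return found
    return None

ops = [
    Operator("impute", frozenset({"raw"}), frozenset({"complete"})),
    Operator("normalize", frozenset({"complete"}), frozenset({"scaled"})),
    Operator("train_model", frozenset({"scaled"}), frozenset({"model"})),
]
print(compose(frozenset({"raw"}), frozenset({"model"}), ops))
# -> ['impute', 'normalize', 'train_model']
```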

Variability of manual segmentation of the prostate in axial T2-weighted MRI: A multi-reader study

  • Becker, A. S.
  • Chaitanya, K.
  • Schawkat, K.
  • Mühlematter, U. J.
  • Hotker, A. M.
  • Konukoglu, E.
  • Donati, O. F.
Eur J Radiol 2019 Journal Article, cited 3 times
Website
PURPOSE: To evaluate the interreader variability in prostate and seminal vesicle (SV) segmentation on T2w MRI. METHODS: Six readers segmented the peripheral zone (PZ), transitional zone (TZ) and SV slice-wise on axial T2w prostate MRI examinations of n=80 patients. Twenty different similarity scores, including dice score (DS), Hausdorff distance (HD) and volumetric similarity coefficient (VS), were computed with the VISCERAL EvaluateSegmentation software for all structures combined and separately for the whole gland (WG=PZ+TZ), TZ and SV. Differences between base, midgland and apex were evaluated with DS slice-wise. Descriptive statistics for similarity scores were computed. Wilcoxon testing to evaluate differences of DS, HD and VS was performed. RESULTS: Overall segmentation variability was good with a mean DS of 0.859 (+/-SD=0.0542), HD of 36.6 (+/-34.9 voxels) and VS of 0.926 (+/-0.065). The WG showed a DS, HD and VS of 0.738 (+/-0.144), 36.2 (+/-35.6 vx) and 0.853 (+/-0.143), respectively. The TZ showed generally lower variability with a DS of 0.738 (+/-0.144), HD of 24.8 (+/-16 vx) and VS of 0.908 (+/-0.126). The lowest variability was found for the SV with DS of 0.884 (+/-0.0407), HD of 17 (+/-10.9 vx) and VS of 0.936 (+/-0.0509). We found a markedly lower DS of the segmentations in the apex (0.85+/-0.12) compared to the base (0.87+/-0.10, p<0.01) and the midgland (0.89+/-0.10, p<0.001). CONCLUSIONS: We report baseline values for interreader variability of prostate and SV segmentation on T2w MRI. Variability was highest in the apex, lower in the base, and lowest in the midgland.
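Three of the similarity scores named above can be reproduced for a pair of binary masks with a short numpy/scipy sketch; the study itself used the VISCERAL EvaluateSegmentation software, so this is only a stand-in illustration.

```python
# Dice score (DS), volumetric similarity (VS) and Hausdorff distance (HD)
# for two boolean segmentation masks of equal shape.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def volumetric_similarity(a, b):
    va, vb = a.astype(bool).sum(), b.astype(bool).sum()
    return 1.0 - abs(int(va) - int(vb)) / (va + vb)

def hausdorff(a, b):
    # symmetric HD over all mask voxels (surface extraction omitted), in voxels
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
```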

Brain Tumor Automatic Detection from MRI Images Using Transfer Learning Model with Deep Convolutional Neural Network

  • Bayoumi, Esraa
  • Abd-Ellah, mahmoud
  • Khalaf, Ashraf A. M.
  • Gharieb, Reda
Journal of Advanced Engineering Trends 2021 Journal Article, cited 1 times
Website
Successful early-stage detection of brain tumors plays an important role in improving patient treatment and survival. Evaluating magnetic resonance imaging (MRI) images manually is a very difficult task due to the large number of images produced routinely in the clinic, so there is a need for a computer-aided diagnosis (CAD) system for early detection and classification of brain tumors as normal or abnormal. This paper aims to design and evaluate convolutional neural network (CNN) transfer-learning architectures that have achieved state-of-the-art image-classification performance in recent years. Five different modifications were applied to five well-known CNNs to determine the most effective modification. Five-layer modifications with parameter tuning were applied to each architecture, providing a new CNN architecture for brain tumor detection. Most brain tumor datasets have too few images to train a deep learning structure; therefore, two datasets were used in the evaluation to ensure the effectiveness of the proposed structures: first, a standard dataset from the RIDER Neuro MRI database including 349 brain MRI images (109 normal and 240 abnormal); second, a collection of 120 brain MRI images (60 abnormal and 60 normal). The results show that the proposed CNN transfer learning with MRIs can learn significant biomarkers of brain tumors; the best accuracy, specificity, and sensitivity achieved were all 100%.

A novel decentralized model for storing and sharing neuroimaging data using ethereum blockchain and the interplanetary file system

  • Batchu, Sai
  • Henry, Owen S.
  • Hakim, Abraham A.
International Journal of Information Technology 2021 Journal Article, cited 0 times
Website
Current methods to store and transfer medical neuroimaging data raise issues with security and transparency, and novel protocols are needed. Ethereum smart contracts present an encouraging new option. Ethereum is an open-source platform that allows users to construct smart contracts: self-executable packages of code that exist in the Ethereum state and allow transactions under programmed conditions. The present study developed a proof-of-concept smart contract that stores patient brain tumor data such as patient identifier, disease, grade, chemotherapy drugs, and Karnofsky score. The InterPlanetary File System (IPFS) was used to efficiently store the image files, and the corresponding content identifier hashes were stored within the smart contracts. Testing on a private proof-of-authority network required only 889 MB of memory to insert 350 patient records, while retrieval required 910 MB; inserting the 350 patient records took 907 ms. The concept presented in this study exemplifies the use of smart contracts and off-chain data storage for efficient retrieval and insertion of medical neuroimaging data.
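The storage flow described above can be sketched in Python, heavily hedged: the image goes to IPFS off-chain, and only the returned content identifier (CID) plus the tabular record is written to an Ethereum contract. CONTRACT_ADDRESS, CONTRACT_ABI, the file name, and the addRecord() function are hypothetical placeholders, not the authors' code.

```python
# Sketch of off-chain image storage (IPFS) + on-chain record (Ethereum).
import ipfshttpclient
from web3 import Web3

CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder
CONTRACT_ABI = []  # would hold the ABI of a contract exposing addRecord(...)

ipfs = ipfshttpclient.connect()               # local IPFS daemon
cid = ipfs.add("scan_0001.nii.gz")["Hash"]    # off-chain image storage

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # private PoA network
contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=CONTRACT_ABI)
contract.functions.addRecord(
    "patient-042", "glioblastoma", "IV", "temozolomide", 80, cid
).transact({"from": w3.eth.accounts[0]})
```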

Machine Intelligence for Advanced Medical Data Analysis: Manifold Learning Approach

  • Bashiri, Fereshteh Sadat
2019 Thesis, cited 0 times
Website
In the current work, linear and non-linear manifold learning techniques, specifically Principal Component Analysis (PCA) and Laplacian Eigenmaps, are studied in detail. Their applications in medical image and shape analysis are investigated. In the first contribution, a manifold learning-based multi-modal image registration technique is developed, which results in a unified intensity system through intensity transformation between the reference and sensed images. The transformation eliminates intensity variations in multi-modal medical scans and hence facilitates employing well-studied monomodal registration techniques. The method can be used for registering multi-modal images with full and partial data. Next, a manifold learning-based scale-invariant global shape descriptor is introduced. The proposed descriptor benefits from the capability of Laplacian Eigenmaps in dealing with high-dimensional data by introducing an exponential weighting scheme. It eliminates the limitations tied to the well-known cotangent weighting scheme, namely dependency on triangular mesh representation and high intra-class quality of 3D models. Finally, a novel descriptive model for diagnostic classification of pulmonary nodules is presented. The descriptive model benefits from structural differences between benign and malignant nodules for automatic and accurate prediction of a candidate nodule. It extracts concise and discriminative features automatically from the 3D surface structure of a nodule using the spectral features studied in the previous work combined with a point cloud-based deep learning network. Extensive experiments have been conducted and have shown that the proposed algorithms based on manifold learning outperform several state-of-the-art methods. Advanced computational techniques combining manifold learning and deep networks can play a vital role in effective healthcare delivery by providing a framework for several fundamental tasks in image and shape processing, namely registration, classification, and detection of features of interest.

Removing Mixture Noise from Medical Images Using Block Matching Filtering and Low-Rank Matrix Completion

  • Barzigar, Nafise
  • Roozgard, Aminmohammad
  • Verma, Pramode K
  • Cheng, Samuel
2012 Conference Proceedings, cited 2 times
Website

A Heterogeneous and Multi-Range Soft-Tissue Deformation Model for Applications in Adaptive Radiotherapy

  • Bartelheimer, Kathrin
2020 Thesis, cited 0 times
Website
During fractionated radiotherapy, anatomical changes result in uncertainties in the applied dose distribution. With increasing steepness of the applied dose gradients, the relevance of patient deformations increases. Especially in proton therapy, small anatomical changes on the order of millimeters can result in large range uncertainties and therefore in substantial deviations from the planned dose. To quantify the anatomical changes, deformation models are required. With upcoming MR-guidance, soft-tissue deformations gain visibility, but so far only a few soft-tissue models meet the requirements of high-precision radiotherapy. Most state-of-the-art models either lack anatomical detail or exhibit long computation times. In this work, a fast soft-tissue deformation model is developed which is capable of considering the tissue properties of heterogeneous tissue. The model is based on the chainmail (CM) concept, which is improved by three basic features. For the first time, rotational degrees of freedom are introduced into the CM concept to improve the characteristic deformation behavior. A novel concept for handling multiple deformation initiators is developed to cope with global deformation input. Finally, a concept for handling various shapes of deformation input is proposed, providing high flexibility in the design of deformation input. To demonstrate this flexibility, the model was coupled to a kinematic skeleton model of the head and neck region, which provides anatomically correct deformation input for the bones. For exemplary patient CTs, the combined model was shown to be capable of generating artificially deformed CT images with realistic appearance. This was achieved for small-range deformations on the order of interfractional deformations, as well as for large-range deformations such as an arms-up to arms-down deformation, as can occur between images of different modalities. The deformation results showed a strong improvement in biofidelity compared to the original chainmail concept, as well as to clinically used image-based deformation methods. Computation times for the model are on the order of 30 min for single-threaded calculations; by simple code parallelization, times on the order of 1 min can be achieved. Applications that require realistic forward deformations of CT images will benefit from the improved biofidelity of the developed model. Envisioned applications are the generation of plan libraries and virtual phantoms, as well as data augmentation for deep-learning approaches. Due to the low computation times, the model is also well suited for image-registration applications. In this context, it will contribute to an improved calculation of accumulated dose, as required in high-precision adaptive radiotherapy.

Equating quantitative emphysema measurements on different CT image reconstructions

  • Bartel, Seth T
  • Bierhals, Andrew J
  • Pilgram, Thomas K
  • Hong, Cheng
  • Schechtman, Kenneth B
  • Conradi, Susan H
  • Gierada, David S
Medical Physics 2011 Journal Article, cited 15 times
Website
PURPOSE: To mathematically model the relationship between CT measurements of emphysema obtained from images reconstructed using different section thicknesses and kernels and to evaluate the accuracy of the models for converting measurements to those of a reference reconstruction. METHODS: CT raw data from the lung cancer screening examinations of 138 heavy smokers were reconstructed at 15 different combinations of section thickness and kernel. An emphysema index was quantified as the percentage of the lung with attenuation below -950 HU (EI950). Linear, quadratic, and power functions were used to model the relationship between EI950 values obtained with a reference 1 mm, medium smooth kernel reconstruction and values from each of the other 14 reconstructions. Preferred models were selected using the corrected Akaike information criterion (AICc), coefficients of determination (R2), and residuals (conversion errors), and cross-validated by a jackknife approach using the leave-one-out method. RESULTS: The preferred models were power functions, with model R2 values ranging from 0.949 to 0.998. The errors in converting EI950 measurements from other reconstructions to the 1 mm, medium smooth kernel reconstruction in leave-one-out testing were less than 3.0 index percentage points for all reconstructions, and less than 1.0 index percentage point for five reconstructions. Conversion errors were related in part to image noise, emphysema distribution, and attenuation histogram parameters. Conversion inaccuracy related to increased kernel sharpness tended to be reduced by increased section thickness. CONCLUSIONS: Image reconstruction-related differences in quantitative emphysema measurements were successfully modeled using power functions.
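The preferred power-function models can be fitted in a few lines; the sketch below assumes hypothetical paired EI950 values (index percentage points, strictly positive) from a non-reference reconstruction and the 1 mm, medium smooth kernel reference.

```python
# Fit EI_ref ~ a * EI_other**b by linear regression in log-log space.
import numpy as np

def fit_power_model(ei_other, ei_ref):
    # requires strictly positive index values for the log transform
    slope, intercept = np.polyfit(np.log(ei_other), np.log(ei_ref), 1)
    return np.exp(intercept), slope   # a and b in EI_ref = a * EI_other**b

def convert_to_reference(ei_other, a, b):
    return a * np.asarray(ei_other) ** b
```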

Pathologically-Validated Tumor Prediction Maps in MRI

  • Barrington, Alex
2019 Thesis, cited 0 times
Website
Glioblastoma (GBM) is an aggressive cancer with an average 5-year survival rate of about 5%. Following treatment with surgery, radiation, and chemotherapy, diagnosing tumor recurrence requires serial magnetic resonance imaging (MRI) scans. Infiltrative tumor cells beyond gadolinium enhancement on T1-weighted MRI are difficult to detect. This study therefore aims to improve tumor detection beyond traditional tumor margins. To accomplish this, a neural network model was trained to classify tissue samples as ‘tumor’ or ‘not tumor’. This model was then used to classify thousands of tiles from histology samples acquired at autopsy with known MRI locations on the patient’s final clinical MRI scan. This combined radiological-pathological (rad-path) dataset was then treated as a ground truth to train a second model for predicting tumor presence from MRI alone. Predictive maps were created for seven patients left out of the training steps, and tissue samples were tested to determine the model’s accuracy. The final model produced a receiver operator characteristic (ROC) area under the curve (AUC) of 0.70. This study demonstrates a new method for detecting infiltrative tumor beyond conventional radiologist defined margins based on neural networks applied to rad-path datasets in glioblastoma.

Variational Quantum Denoising Technique for Medical Images

  • Barbu, Tudor
2020 Conference Paper, cited 2 times
Website
A novel variational restoration framework for medical images corrupted by quantum (Poisson) noise is proposed in this research paper. The approach uses a variational scheme that leads to a nonlinear fourth-order PDE-based model. This partial differential equation model is then solved numerically by developing a consistent finite-difference-based numerical approximation scheme that converges to its variational solution. The resulting numerical algorithm successfully removes quantum noise from medical images, preserves their details, and outperforms other shot-noise filtering solutions.

A transformer-based deep neural network for detection and classification of lung cancer via PET/CT images

  • Barbouchi, Khalil
  • Hamdi, Dhekra El
  • Elouedi, Ines
  • Aïcha, Takwa Ben
  • Echi, Afef Kacem
  • Slim, Ihsen
International Journal of Imaging Systems and Technology 2023 Journal Article, cited 0 times
Website
Lung cancer is the leading cause of death for men and women worldwide and the second most frequent cancer. Early detection of the disease therefore increases the cure rate. This paper presents a new approach to evaluate the ability of positron emission tomography/computed tomography (PET/CT) images to classify and detect lung cancer using deep learning techniques. Our approach aims to fully automate the anatomical localization of lung cancer from PET/CT images. It also seeks to classify the tumor, which is essential as it makes it possible to determine the disease's speed of progression and the best treatments to adopt. In this work, we built a transformer-based approach by implementing the DETR model as a tool to detect the tumor and assist physicians in staging patients with lung cancer. The TNM staging system and histologic subtype classification were both taken as a standard for classification. Experimental results demonstrated that our approach achieves sound results on tumor localization, T staging, and histology classification. Our proposed approach detects tumors with an intersection over union (IOU) of 0.8 when tested on the Lung-PET-CT-Dx dataset. It also yielded better accuracy than state-of-the-art T-staging and histologic classification methods, classifying T-stage and histologic subtypes with accuracies of 0.97 and 0.94, respectively.

Interreader Variability of Dynamic Contrast-enhanced MRI of Recurrent Glioblastoma: The Multicenter ACRIN 6677/RTOG 0625 Study

  • Barboriak, Daniel P
  • Zhang, Zheng
  • Desai, Pratikkumar
  • Snyder, Bradley S
  • Safriel, Yair
  • McKinstry, Robert C
  • Bokstein, Felix
  • Sorensen, Gregory
  • Gilbert, Mark R
  • Boxerman, Jerrold L
Radiology 2019 Journal Article, cited 2 times
Website
Purpose To evaluate factors contributing to interreader variation (IRV) in parameters measured at dynamic contrast material-enhanced (DCE) MRI in patients with glioblastoma who were participating in a multicenter trial. Materials and Methods A total of 18 patients (mean age, 57 years +/- 13 [standard deviation]; 10 men) who volunteered for the advanced imaging arm of ACRIN 6677, a substudy of the RTOG 0625 clinical trial for recurrent glioblastoma treatment, underwent analyzable DCE MRI at one of four centers. The 78 imaging studies were analyzed centrally to derive the volume transfer constant (K(trans)) for gadolinium between blood plasma and tissue extravascular extracellular space, fractional volume of the extracellular extravascular space (ve), and initial area under the gadolinium concentration curve (IAUGC). Two independently trained teams consisting of a neuroradiologist and a technologist segmented the enhancing tumor on three-dimensional spoiled gradient-recalled acquisition in the steady-state images. Mean and median parameter values in the enhancing tumor were extracted after registering segmentations to parameter maps. The effect of imaging time relative to treatment, map quality, imager magnet and sequence, average tumor volume, and reader variability in tumor volume on IRV was studied by using intraclass correlation coefficients (ICCs) and linear mixed models. Results Mean interreader variations (+/- standard deviation) (difference as a percentage of the mean) for mean and median IAUGC, mean and median K(trans), and median ve were 18% +/- 24, 17% +/- 23, 27% +/- 34, 16% +/- 27, and 27% +/- 34, respectively. ICCs for these metrics ranged from 0.90 to 1.0 for baseline and from 0.48 to 0.76 for posttreatment examinations. Variability in reader-derived tumor volume was significantly related to IRV for all parameters. Conclusion Differences in reader tumor segmentations are a significant source of interreader variation for all dynamic contrast-enhanced MRI parameters.

A New Adaptive-Weighted Fusion Rule for Wavelet based PET/CT Fusion

  • Barani, R
  • Sumathi, M
International Journal of Signal Processing, Image Processing and Pattern Recognition 2016 Journal Article, cited 1 times
Website
In recent years the Wavelet Transform (WT) has played an important role in various applications of signal and image processing. In image processing, WT is useful in many domains such as image denoising, feature segmentation, compression, restoration, and image fusion. In WT-based image fusion, the source images are first decomposed into approximation and detail coefficients, and the coefficients are then combined using suitable fusion rules. The resultant fused image is reconstructed by applying the inverse WT to the combined coefficients. This paper proposes a new adaptive fusion rule for combining the approximation coefficients of CT and PET images. The effectiveness of the proposed fusion rule is demonstrated by measuring the image information metrics EOG, SD, and ENT on the decomposed approximation coefficients. The detail coefficients, on the other hand, are combined using several existing fusion rules. The resultant fused images are quantitatively analyzed using non-reference image quality, image fusion, and error metrics. The analysis shows that the newly proposed fusion rule is more suitable for extracting the complementary information from CT and PET images and produces a fused image that is rich in content with good contrast and sharpness.
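In the same spirit, a single-level wavelet fusion can be sketched as below; the local-energy weight is only a stand-in for the paper's EOG/SD/ENT-based adaptive rule, and the max-absolute rule for details is one of the existing rules mentioned above.

```python
# Single-level wavelet fusion of co-registered, same-size CT and PET slices.
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def fuse(ct, pet, wavelet="db2"):
    aA, dA = pywt.dwt2(ct, wavelet)
    aB, dB = pywt.dwt2(pet, wavelet)
    eA = uniform_filter(aA ** 2, size=5)   # local energy of CT approximation
    eB = uniform_filter(aB ** 2, size=5)   # local energy of PET approximation
    w = eA / (eA + eB + 1e-12)             # adaptive per-coefficient weight
    fused_a = w * aA + (1 - w) * aB
    fused_d = tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                    for x, y in zip(dA, dB))
    return pywt.idwt2((fused_a, fused_d), wavelet)
```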

Isodoses-a set theory-based patient-specific QA measure to compare planned and delivered isodose distributions in photon radiotherapy

  • Baran, M.
  • Tabor, Z.
  • Kabat, D.
  • Tulik, M.
  • Jelen, K.
  • Rzecki, K.
  • Forostianyi, B.
  • Balabuszek, K.
  • Koziarski, R.
  • Waligorski, M. P. R.
Strahlenther Onkol 2022 Journal Article, cited 0 times
Website
BACKGROUND: The gamma index and dose-volume histogram (DVH)-based patient-specific quality assurance (QA) measures commonly applied in radiotherapy planning are unable to simultaneously deliver detailed locations and magnitudes of discrepancy between isodoses of planned and delivered dose distributions. By exploiting statistical classification performance measures such as sensitivity or specificity, compliance between a planned and delivered isodose may be evaluated locally, both for organs-at-risk (OAR) and the planning target volume (PTV), at any specified isodose level. Thus, a patient-specific QA tool may be developed to supplement those presently available in clinical radiotherapy. MATERIALS AND METHODS: A method was developed to locally establish and report dose delivery errors in three-dimensional (3D) isodoses of planned (reference) and delivered (evaluated) dose distributions simultaneously as a function the dose level and of spatial location. At any given isodose level, the total volume of delivered dose containing the reference and the evaluated isodoses is locally decomposed into four subregions: true positive-subregions within both reference and evaluated isodoses, true negative-outside of both of these isodoses, false positive-inside the evaluated isodose but not the reference isodose, and false negatives-inside the reference isodose but not the evaluated isodose. Such subregions may be established over the whole volume of delivered dose. This decomposition allows the construction of a confusion matrix and calculation of various indices to quantify the discrepancies between the selected planned and delivered isodose distributions, over the complete range of values of dose delivered. The 3D projection and visualization of the spatial distribution of these discrepancies facilitates the application of the developed method in clinical practice. RESULTS: Several clinical photon radiotherapy plans were analyzed using the developed method. In some plans at certain isodose levels, dose delivery errors were found at anatomically significant locations. These errors were not otherwise highlighted-neither by gamma analysis nor by DVH-based QA measures. A specially developed 3D projection tool to visualize the spatial distribution of such errors against anatomical features of the patient aids in the proposed analysis of therapy plans. CONCLUSIONS: The proposed method is able to spatially locate delivery errors at selected isodose levels and may supplement the presently applied gamma analysis and DVH-based QA measures in patient-specific radiotherapy planning.
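The voxel-wise decomposition described above reduces to boolean-mask arithmetic; a minimal sketch for one isodose level, assuming planned and delivered dose arrays on the same grid:

```python
# Confusion-matrix decomposition of planned vs. delivered isodose volumes.
import numpy as np

def isodose_confusion(planned, delivered, level):
    ref = planned >= level    # reference (planned) isodose volume
    ev = delivered >= level   # evaluated (delivered) isodose volume
    tp = np.logical_and(ref, ev).sum()      # inside both isodoses
    fp = np.logical_and(~ref, ev).sum()     # delivered-only
    fn = np.logical_and(ref, ~ev).sum()     # planned-only
    tn = np.logical_and(~ref, ~ev).sum()    # outside both
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "tp": tp, "fp": fp, "fn": fn, "tn": tn}
```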

The involvement of brain regions associated with lower KPS and shorter survival time predicts a poor prognosis in glioma

  • Bao, Hongbo
  • Wang, Huan
  • Sun, Qian
  • Wang, Yujie
  • Liu, Hui
  • Liang, Peng
  • Lv, Zhonghua
2023 Journal Article, cited 0 times
Website
Background: Isocitrate dehydrogenase-wildtype glioblastoma (IDH-wildtype GBM) and IDH-mutant astrocytoma have distinct biological behaviors and clinical outcomes. The location of brain tumors is closely associated not only with clinical symptoms and prognosis but also with key molecular alterations such as IDH. Therefore, we hypothesize that the key brain regions influencing the prognosis of glioblastoma and astrocytoma are likely to differ. This study aims to (1) identify specific regions that are associated with the Karnofsky Performance Scale (KPS) or overall survival (OS) in IDH-wildtype GBM and IDH-mutant astrocytoma and (2) test whether the involvement of these regions could act as a prognostic indicator. Methods: A total of 111 patients with IDH-wildtype GBM and 78 patients with IDH-mutant astrocytoma from the Cancer Imaging Archive database were included in the study. Voxel-based lesion-symptom mapping (VLSM) was used to identify key brain areas for lower KPS and shorter OS. Next, we analyzed the structural and cognitive dysfunction associated with these regions. The survival analysis was carried out using Kaplan–Meier survival curves. Another 72 GBM patients and 48 astrocytoma patients from Harbin Medical University Cancer Hospital were used as a validation cohort. Results: Tumors located in the insular cortex, parahippocampal gyrus, and middle and superior temporal gyrus of the left hemisphere tended to lead to lower KPS and shorter OS in IDH-wildtype GBM. The regions that were significantly correlated with lower KPS in IDH-mutant astrocytoma included the subcallosal cortex and cingulate gyrus. These regions were associated with diverse structural and cognitive impairments. The involvement of these regions was an independent predictor for shorter survival in both GBM and astrocytoma. Conclusion: This study identified the specific regions that were significantly associated with OS or KPS in glioma. The results may help neurosurgeons evaluate patient survival before surgery and understand the pathogenic mechanisms of glioma in depth.

MRI-Based Deep Learning Method for Classification of IDH Mutation Status

  • Bangalore Yogananda, C. G.
  • Wagner, B. C.
  • Truong, N. C. D.
  • Holcomb, J. M.
  • Reddy, D. D.
  • Saadat, N.
  • Hatanpaa, K. J.
  • Patel, T. R.
  • Fei, B.
  • Lee, M. D.
  • Jain, R.
  • Bruce, R. J.
  • Pinho, M. C.
  • Madhuranthakam, A. J.
  • Maldjian, J. A.
2023 Journal Article, cited 0 times
Website
Isocitrate dehydrogenase (IDH) mutation status has emerged as an important prognostic marker in gliomas. This study sought to develop deep learning networks for non-invasive IDH classification using T2w MR images while comparing their performance to a multi-contrast network. Methods: Multi-contrast brain tumor MRI and genomic data were obtained from The Cancer Imaging Archive (TCIA) and The Erasmus Glioma Database (EGD). Two separate 2D networks were developed using nnU-Net, a T2w-image-only network (T2-net) and a multi-contrast network (MC-net). Each network was separately trained using TCIA (227 subjects) or TCIA + EGD data (683 subjects combined). The networks were trained to classify IDH mutation status and implement single-label tumor segmentation simultaneously. The trained networks were tested on over 1100 held-out datasets including 360 cases from UT Southwestern Medical Center, 136 cases from New York University, 175 cases from the University of Wisconsin-Madison, 456 cases from EGD (for the TCIA-trained network), and 495 cases from the University of California, San Francisco public database. A receiver operating characteristic curve (ROC) was drawn to calculate the AUC value to determine classifier performance. Results: T2-net trained on TCIA and TCIA + EGD datasets achieved an overall accuracy of 85.4% and 87.6% with AUCs of 0.86 and 0.89, respectively. MC-net trained on TCIA and TCIA + EGD datasets achieved an overall accuracy of 91.0% and 92.8% with AUCs of 0.94 and 0.96, respectively. We developed reliable, high-performing deep learning algorithms for IDH classification using both a T2-image-only and a multi-contrast approach. The networks were tested on more than 1100 subjects from diverse databases, making this the largest study on image-based IDH classification to date.

Fully Automated Brain Tumor Segmentation and Survival Prediction of Gliomas Using Deep Learning and MRI

  • Bangalore Yogananda, Chandan Ganesh
  • Wagner, Ben
  • Nalawade, Sahil S.
  • Murugesan, Gowtham K.
  • Pinho, Marco C.
  • Fei, Baowei
  • Madhuranthakam, Ananth J.
  • Maldjian, Joseph A.
2020 Book Section, cited 10 times
Website
Tumor segmentation of magnetic resonance images is a critical step in providing objective measures of predicting aggressiveness and response to therapy in gliomas. It has valuable applications in diagnosis, monitoring, and treatment planning of brain tumors. The purpose of this work was to develop a fully-automated deep learning method for tumor segmentation and survival prediction. Well curated brain tumor cases with multi-parametric MR Images from the BraTS2019 dataset were used. A three-group framework was implemented, with each group consisting of three 3D-Dense-UNets to segment whole-tumor (WT), tumor-core (TC) and enhancing-tumor (ET). Each group was trained using different approaches and loss-functions. The output segmentations of a particular label from their respective networks from the three groups were ensembled and post-processed. For survival analysis, a linear regression model based on imaging texture features and wavelet texture features extracted from each of the segmented components was implemented. The networks were tested on both the BraTS2019 validation and testing datasets. The segmentation networks achieved average dice-scores of 0.901, 0.844 and 0.801 for WT, TC and ET respectively on the validation dataset and achieved dice-scores of 0.877, 0.835 and 0.803 for WT, TC and ET respectively on the testing dataset. The survival prediction network achieved an accuracy score of 0.55 and mean squared error (MSE) of 119244 on the validation dataset and achieved an accuracy score of 0.51 and MSE of 455500 on the testing dataset. This method could be implemented as a robust tool to assist clinicians in primary brain tumor management and follow-up.

A Fully Automated Deep Learning Network for Brain Tumor Segmentation

  • Bangalore Yogananda, C. G.
  • Shah, B. R.
  • Vejdani-Jahromi, M.
  • Nalawade, S. S.
  • Murugesan, G. K.
  • Yu, F. F.
  • Pinho, M. C.
  • Wagner, B. C.
  • Emblem, K. E.
  • Bjornerud, A.
  • Fei, B.
  • Madhuranthakam, A. J.
  • Maldjian, J. A.
Tomography 2020 Journal Article, cited 40 times
Website
We developed a fully automated method for brain tumor segmentation using deep learning; 285 brain tumor cases with multiparametric magnetic resonance images from the BraTS2018 data set were used. We designed 3 separate 3D-Dense-UNets to simplify the complex multiclass segmentation problem into individual binary-segmentation problems for each subcomponent. We implemented a 3-fold cross-validation to generalize the network's performance. The mean cross-validation Dice-scores for whole tumor (WT), tumor core (TC), and enhancing tumor (ET) segmentations were 0.92, 0.84, and 0.80, respectively. We then retrained the individual binary-segmentation networks using 265 of the 285 cases, with 20 cases held-out for testing. We also tested the network on 46 cases from the BraTS2017 validation data set, 66 cases from the BraTS2018 validation data set, and 52 cases from an independent clinical data set. The average Dice-scores for WT, TC, and ET were 0.90, 0.84, and 0.80, respectively, on the 20 held-out testing cases. The average Dice-scores for WT, TC, and ET on the BraTS2017 validation data set, the BraTS2018 validation data set, and the clinical data set were as follows: 0.90, 0.80, and 0.78; 0.90, 0.82, and 0.80; and 0.85, 0.80, and 0.77, respectively. A fully automated deep learning method was developed to segment brain tumors into their subcomponents, which achieved high prediction accuracy on the BraTS data set and on the independent clinical data set. This method is promising for implementation into a clinical workflow.

Deep Active Learning for Glioblastoma Quantification

  • Banerjee, Subhashis
  • Strand, Robin
2023 Book Section, cited 0 times
Website
Generating pixel- or voxel-wise annotations of radiological images to train deep learning-based segmentation models is a time-consuming and expensive job involving the precious time and effort of radiologists. Other challenges include obtaining diverse annotated training data that covers the entire spectrum of potential situations. In this paper, we propose an Active Learning (AL) based segmentation strategy involving a human annotator or "Oracle" to annotate interactively. The deep learning-based segmentation model learns in parallel by training in iterations with the annotated samples. A publicly available MRI dataset of brain tumors (Glioma) is used for the experimental studies. The efficiency of the proposed AL-based segmentation model is demonstrated in terms of annotation time requirements compared with conventional Passive Learning (PL) based strategies. It is also demonstrated experimentally, through quantitative and qualitative evaluations of the segmentation results, that the proposed AL-based segmentation strategy achieves comparable or enhanced segmentation performance with far fewer annotations.

Glioma Classification Using Deep Radiomics

  • Banerjee, Subhashis
  • Mitra, Sushmita
  • Masulli, Francesco
  • Rovetta, Stefano
SN Computer Science 2020 Journal Article, cited 1 times
Website
Glioma constitutes 80% of malignant primary brain tumors in adults, and is usually classified as high-grade glioma (HGG) and low-grade glioma (LGG). The LGG tumors are less aggressive, with slower growth rate as compared to HGG, and are responsive to therapy. Tumor biopsy being challenging for brain tumor patients, noninvasive imaging techniques like magnetic resonance imaging (MRI) have been extensively employed in diagnosing brain tumors. Therefore, development of automated systems for the detection and prediction of the grade of tumors based on MRI data becomes necessary for assisting doctors in the framework of augmented intelligence. In this paper, we thoroughly investigate the power of deep convolutional neural networks (ConvNets) for classification of brain tumors using multi-sequence MR images. We propose novel ConvNet models, which are trained from scratch, on MRI patches, slices, and multi-planar volumetric slices. The suitability of transfer learning for the task is next studied by applying two existing ConvNet models (VGGNet and ResNet) trained on the ImageNet dataset, through fine-tuning of the last few layers. Leave-one-patient-out testing, and testing on the holdout dataset, are used to evaluate the performance of the ConvNets. The results demonstrate that the proposed ConvNets achieve better accuracy in all cases where the model is trained on the multi-planar volumetric dataset. Unlike conventional models, it obtains a testing accuracy of 95% for the low/high grade glioma classification problem. A score of 97% is generated for classification of LGG with/without 1p/19q codeletion, without any additional effort toward extraction and selection of features. We study the properties of self-learned kernels/filters in different layers, through visualization of the intermediate layer outputs. We also compare the results with that of state-of-the-art methods, demonstrating a maximum improvement of 7% on the grading performance of ConvNets and 9% on the prediction of 1p/19q codeletion status.

Ensemble of CNNs for Segmentation of Glioma Sub-regions with Survival Prediction

  • Banerjee, Subhashis
  • Arora, Harkirat Singh
  • Mitra, Sushmita
2020 Book Section, cited 5 times
Website
Gliomas are the most common malignant brain tumors, having varying level of aggressiveness, with Magnetic Resonance Imaging (MRI) being used for their diagnosis. As these tumors are highly heterogeneous in shape and appearance, their segmentation becomes a challenging task. In this paper we propose an ensemble of three Convolutional Neural Network (CNN) architectures viz. (i) P-Net, (ii) U-Net with spatial pooling, and (iii) ResInc-Net for glioma sub-regions segmentation. The segmented tumor Volume of Interest (VOI) is further used for extracting spatial habitat features for the prediction of Overall Survival (OS) of patients. A new aggregated loss function is used to help in effectively handling the data imbalance problem. The concept of modeling predictive distributions, test time augmentation and ensembling methods are used to reduce uncertainty and increase the confidence of the model prediction. The proposed integrated system (for Segmentation and OS prediction) is trained and validated on the Brain Tumor Segmentation (BraTS) Challenge 2019 dataset. We ranked among the top performing methods on Segmentation and Overall Survival prediction on the validation dataset, as observed from the leaderboard. We also ranked among the top four in the Uncertainty Quantification task on the testing dataset.

Bone-Cancer Assessment and Destruction Pattern Analysis in Long-Bone X-ray Image

  • Bandyopadhyay, Oishila
  • Biswas, Arindam
  • Bhattacharya, Bhargab B
J Digit Imaging 2018 Journal Article, cited 0 times
Website
Bone cancer originates in the bone and rapidly spreads to the rest of the body, severely affecting the patient. A quick and preliminary diagnosis of bone cancer begins with the analysis of a bone X-ray or MRI image. Compared to MRI, an X-ray image provides a low-cost tool for the diagnosis and visualization of bone cancer. In this paper, a novel technique for the assessment of cancer stage and grade in long bones based on X-ray image analysis has been proposed. Cancer-affected bone images usually appear with a variation in bone texture in the affected region. A fusion of different methodologies is used for the purpose of our analysis. In the proposed approach, we extract certain features from bone X-ray images and use a support vector machine (SVM) to discriminate healthy and cancerous bones. A technique based on digital geometry is deployed for localizing cancer-affected regions. Characterization of the present stage and grade of the disease and identification of the underlying bone-destruction pattern are performed using a decision tree classifier. Furthermore, the method leads to the development of a computer-aided diagnostic tool that can readily be used by paramedics and doctors. Experimental results on a number of test cases reveal satisfactory diagnostic inferences when compared with ground truth known from clinical findings.

MRI Brain Tumor Segmentation and Uncertainty Estimation Using 3D-UNet Architectures

  • Ballestar, Laura Mora
  • Vilaplana, Veronica
2021 Book Section, cited 0 times
Automation of brain tumor segmentation in 3D magnetic resonance images (MRIs) is key to assess the diagnostic and treatment of the disease. In recent years, convolutional neural networks (CNNs) have shown improved results in the task. However, high memory consumption is still a problem in 3D-CNNs. Moreover, most methods do not include uncertainty information, which is especially critical in medical diagnosis. This work studies 3D encoder-decoder architectures trained with patch-based techniques to reduce memory consumption and decrease the effect of unbalanced data. The different trained models are then used to create an ensemble that leverages the properties of each model, thus increasing the performance. We also introduce voxel-wise uncertainty information, both epistemic and aleatoric using test-time dropout (TTD) and data-augmentation (TTA) respectively. In addition, a hybrid approach is proposed that helps increase the accuracy of the segmentation. The model and uncertainty estimation measurements proposed in this work have been used in the BraTS’20 Challenge for task 1 and 3 regarding tumor segmentation and uncertainty estimation.
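The test-time dropout step can be sketched briefly; the helper below assumes a PyTorch segmentation network containing dropout layers and a single-channel sigmoid output, which is not necessarily the authors' exact setup.

```python
# Test-time dropout (TTD): stochastic forward passes with dropout enabled;
# per-voxel mean is the prediction, standard deviation the epistemic uncertainty.
import torch

def ttd_uncertainty(model, volume, n_passes=20):
    model.eval()
    for m in model.modules():   # re-enable only the dropout layers
        if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d, torch.nn.Dropout3d)):
            m.train()
    with torch.no_grad():
        preds = torch.stack([torch.sigmoid(model(volume))
                             for _ in range(n_passes)])
    return preds.mean(dim=0), preds.std(dim=0)
```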

Quantitative Imaging features Improve Discrimination of Malignancy in Pulmonary nodules

  • Balagurunathan, Yoganand
  • Schabath, Matthew B.
  • Wang, Hua
  • Liu, Ying
  • Gillies, Robert J.
2019 Journal Article, cited 0 times
Website
Pulmonary nodules are frequently detected radiological abnormalities in lung cancer screening. Nodules at the highest and lowest risk for cancer are often easily diagnosed by a trained radiologist, but there is still a high rate of indeterminate pulmonary nodules (IPN) of unknown risk. Here, we test the hypothesis that computer-extracted quantitative features ("radiomics") can provide improved risk assessment in the diagnostic setting. Nodules were segmented in 3D and 219 quantitative features were extracted from these volumes. Using these features, novel malignancy risk predictors were formed with various stratifications based on size, shape, and texture feature categories. We used images and data from the National Lung Screening Trial (NLST) and curated a subset of 479 participants (244 for training and 235 for testing) that included incident lung cancers and nodule-positive controls. After removing redundant and non-reproducible features, optimal linear classifiers with areas under the receiver operating characteristic (AUROC) curves were used with an exhaustive search approach to find a discriminant set of image features, which was validated in an independent test dataset. We identified several strong predictive models: using size and shape features, the highest AUROC was 0.80; using non-size-based features, the highest AUROC was 0.85; combining features from all categories, the highest AUROC was 0.83.
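The exhaustive-search step can be sketched with scikit-learn; the feature matrix X and labels y are hypothetical inputs, and linear discriminant analysis stands in for whatever optimal linear classifier the study used.

```python
# Exhaustively score small feature subsets by cross-validated AUROC.
from itertools import combinations
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

def best_subset(X, y, k=2):
    best_idx, best_auc = None, 0.0
    for idx in combinations(range(X.shape[1]), k):
        proba = cross_val_predict(LinearDiscriminantAnalysis(),
                                  X[:, idx], y, cv=5,
                                  method="predict_proba")[:, 1]
        auc = roc_auc_score(y, proba)
        if auc > best_auc:
            best_idx, best_auc = idx, auc
    return best_idx, best_auc
```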

Test–Retest Reproducibility Analysis of Lung CT Image Features

  • Balagurunathan, Yoganand
  • Kumar, Virendra
  • Gu, Yuhua
  • Kim, Jongphil
  • Wang, Hua
  • Liu, Ying
  • Goldgof, Dmitry B
  • Hall, Lawrence O
  • Korn, Rene
  • Zhao, Binsheng
2014 Journal Article, cited 85 times
Website
Quantitative size, shape, and texture features derived from computed tomographic (CT) images may be useful as predictive, prognostic, or response biomarkers in non-small cell lung cancer (NSCLC). However, to be useful, such features must be reproducible, non-redundant, and have a large dynamic range. We developed a set of quantitative three-dimensional (3D) features to describe segmented tumors and evaluated their reproducibility to select features with high potential for prognostic utility. Thirty-two patients with NSCLC were subjected to unenhanced thoracic CT scans acquired within 15 min of each other under an approved protocol. Primary lung cancer lesions were segmented using semi-automatic 3D region-growing algorithms. Following segmentation, 219 quantitative 3D features were extracted from each lesion, corresponding to size, shape, and texture, including features in transformed spaces (laws, wavelets). The most informative features were selected using the concordance correlation coefficient across test–retest, the biological range, and a feature independence measure. There were 66 (30.14%) features with a concordance correlation coefficient ≥ 0.90 across test–retest and an acceptable dynamic range. Of these, 42 features were non-redundant after grouping features with between-feature R² ≥ 0.95. These reproducible features were found to be predictive of radiological prognosis. The area under the curve (AUC) was 91% for a size-based feature and 92% for the texture features (runlength, laws). We tested the ability of image features to predict a radiological prognostic score on an independent NSCLC sample (39 adenocarcinomas); the AUC for texture features (runlength emphasis, energy) was 0.84, while that for the conventional size-based features (volume, longest diameter) was 0.80. Test–retest and correlation analyses have identified non-redundant CT image features with both high intra-patient reproducibility and large inter-patient biological range, thus making the case that quantitative image features are informative and prognostic biomarkers for NSCLC.
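The test-retest screening step maps directly onto Lin's concordance correlation coefficient; a minimal sketch, assuming hypothetical (patients x features) arrays from the two scans:

```python
# Keep features whose test-retest concordance correlation coefficient >= 0.90.
import numpy as np

def ccc(x, y):
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

def reproducible_features(test, retest, threshold=0.90):
    scores = np.array([ccc(test[:, j], retest[:, j])
                       for j in range(test.shape[1])])
    return np.flatnonzero(scores >= threshold)
```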

An efficient brain tumor image classifier by combining multi-pathway cascaded deep neural network and handcrafted features in MR images

  • Bal, A.
  • Banerjee, M.
  • Chaki, R.
  • Sharma, P.
2021 Journal Article, cited 0 times
Website
Accurate segmentation and delineation of the sub-tumor regions are very challenging tasks due to the nature of the tumor. Traditionally, convolutional neural networks (CNNs) have succeeded in achieving the most promising performance for the segmentation of brain tumors; however, handcrafted features remain very important for accurately identifying the tumor's boundary regions. The present work proposes a robust deep learning-based model with three different CNN architectures along with pre-defined handcrafted features for brain tumor segmentation, mainly to find more prominent boundaries of the core and enhanced tumor regions. Generally, automatic CNN architectures do not use pre-defined handcrafted features because they extract features automatically. In the present work, several pre-defined handcrafted features are computed from four MRI modalities (T2, FLAIR, T1c, and T1) with the help of additional handcrafted masks according to user interest and fed to the convolutional (automatic) features to improve the overall performance of the proposed CNN model for tumor segmentation. Multi-pathway CNNs are explored in this work along with single-pathway CNNs, extracting both local and global features simultaneously to identify the accurate sub-regions of the tumor with the help of handcrafted features. The present work uses a cascaded CNN architecture, where the outcome of one CNN is considered as additional input information to the next subsequent CNN. To extract the handcrafted features, a convolutional operation was applied to the four MRI modalities with the help of several pre-defined masks to produce a pre-defined set of handcrafted features. The present work also investigates the usefulness of intensity normalization and data augmentation in the pre-processing stage in order to handle the difficulties related to the imbalance of tumor labels. The proposed method was evaluated on the BraTS 2018 dataset and achieved more promising results than the existing (currently published) methods with respect to different metrics such as specificity, sensitivity, and Dice similarity coefficient (DSC) for complete, core, and enhanced tumor regions. Quantitatively, a notable gain is achieved around the boundaries of the sub-tumor regions using the proposed two-pathway CNN along with the handcrafted features.

Secure telemedicine using RONI halftoned visual cryptography without pixel expansion

  • Bakshi, Arvind
  • Patel, Anoop Kumar
Journal of Information Security and Applications 2019 Journal Article, cited 0 times
Website
Telemedicine is a well-known technique for delivering quality healthcare services worldwide. For the diagnosis of disease and prescription by the doctor, a large amount of information needs to be shared over public and private channels. Medical information such as MRI, X-ray, and CT-scan images contains very personal information and needs to be secured. Security properties such as confidentiality, privacy, and integrity of medical data are still a challenge. It is observed that existing security techniques like digital watermarking and encryption are not efficient for real-time use. This paper investigates the problem and provides a security solution based on Visual Cryptography (VC). The proposed algorithm creates shares for the parts of the image that do not contain relevant information. All information related to the disease is considered relevant and marked as the region of interest (ROI). The integrity of the image is maintained by inserting some information into the region of non-interest (RONI). All generated shares are transmitted over different channels, and the embedded information is decrypted by overlapping (in XOR fashion) the shares in Θ(1) time. Visual perception of all the results discussed in this article is very clear. The proposed algorithm achieves PSNR (peak signal-to-noise ratio), SSIM (structural similarity index), and accuracy values of 22.9452, 0.9701, and 99.8740, respectively.
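The XOR overlap at the heart of such schemes is easy to sketch for a (2, 2) setting without pixel expansion; the ROI/RONI handling of the actual algorithm is omitted here.

```python
# XOR-based (2, 2) sharing of an 8-bit image: one share is uniform noise,
# the other is the secret XORed with it; XOR-stacking recovers the image.
import numpy as np

def make_shares(secret):
    rng = np.random.default_rng()
    share1 = rng.integers(0, 256, size=secret.shape, dtype=np.uint8)
    share2 = np.bitwise_xor(secret.astype(np.uint8), share1)
    return share1, share2

def reconstruct(share1, share2):
    return np.bitwise_xor(share1, share2)   # constant work per pixel
```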

A radiogenomic dataset of non-small cell lung cancer

  • Bakr, Shaimaa
  • Gevaert, Olivier
  • Echegaray, Sebastian
  • Ayers, Kelsey
  • Zhou, Mu
  • Shafiq, Majid
  • Zheng, Hong
  • Benson, Jalen Anthony
  • Zhang, Weiruo
  • Leung, Ann NC
Scientific data 2018 Journal Article, cited 1 times
Website

Predicting Lung Cancer Survival Time Using Deep Learning Techniques

  • Baker, Qanita Bani
  • Gharaibeh, Maram
  • Al-Harahsheh, Yara
2021 Conference Paper, cited 0 times
Website
Lung cancer is one of the most commonly diagnosed cancers. Most studies have found that lung cancer patients have a survival time of up to 5 years after the cancer is found. An accurate prognosis is the most critical aspect of the clinical decision-making process for patients, and predicting patients' survival time helps healthcare professionals make treatment recommendations based on the prediction. In this paper, we used various deep learning methods to predict the survival time in days of Non-Small Cell Lung Cancer (NSCLC) patients, evaluated on a clinical and radiomics dataset. The dataset was extracted from computed tomography (CT) images and contains data for 300 patients. The concordance index (C-index) was used to evaluate the models. We applied several deep learning approaches; the best accuracy gained was 70.05% on the OWKIN task using a Multilayer Perceptron (MLP), which outperforms the baseline model provided by the OWKIN task organizers.

GLISTRboost: Combining Multimodal MRI Segmentation, Registration, and Biophysical Tumor Growth Modeling with Gradient Boosting Machines for Glioma Segmentation.

  • Bakas, S.
  • Zeng, K.
  • Sotiras, A.
  • Rathore, S.
  • Akbari, H.
  • Gaonkar, B.
  • Rozycki, M.
  • Pati, S.
  • Davatzikos, C.
Brainlesion 2016 Journal Article, cited 49 times
Website
We present an approach for segmenting low- and high-grade gliomas in multimodal magnetic resonance imaging volumes. The proposed approach is based on a hybrid generative-discriminative model. Firstly, a generative approach based on an Expectation-Maximization framework that incorporates a glioma growth model is used to segment the brain scans into tumor, as well as healthy tissue labels. Secondly, a gradient boosting multi-class classification scheme is used to refine tumor labels based on information from multiple patients. Lastly, a probabilistic Bayesian strategy is employed to further refine and finalize the tumor segmentation based on patient-specific intensity statistics from the multiple modalities. We evaluated our approach in 186 cases during the training phase of the BRAin Tumor Segmentation (BRATS) 2015 challenge and report promising results. During the testing phase, the algorithm was additionally evaluated in 53 unseen cases, achieving the best performance among the competing methods.

The University of Pennsylvania glioblastoma (UPenn-GBM) cohort: advanced MRI, clinical, genomics, & radiomics

  • Bakas, S.
  • Sako, C.
  • Akbari, H.
  • Bilello, M.
  • Sotiras, A.
  • Shukla, G.
  • Rudie, J. D.
  • Santamaria, N. F.
  • Kazerooni, A. F.
  • Pati, S.
  • Rathore, S.
  • Mamourian, E.
  • Ha, S. M.
  • Parker, W.
  • Doshi, J.
  • Baid, U.
  • Bergman, M.
  • Binder, Z. A.
  • Verma, R.
  • Lustig, R. A.
  • Desai, A. S.
  • Bagley, S. J.
  • Mourelatos, Z.
  • Morrissette, J.
  • Watt, C. D.
  • Brem, S.
  • Wolf, R. L.
  • Melhem, E. R.
  • Nasrallah, M. P.
  • Mohan, S.
  • O'Rourke, D. M.
  • Davatzikos, C.
Sci Data 2022 Journal Article, cited 0 times
Website
Glioblastoma is the most common aggressive adult brain tumor. Numerous studies have reported results from either private institutional data or publicly available datasets. However, current public datasets are limited in terms of: a) number of subjects, b) lack of a consistent acquisition protocol, c) data quality, or d) accompanying clinical, demographic, and molecular information. Toward alleviating these limitations, we contribute the "University of Pennsylvania Glioblastoma Imaging, Genomics, and Radiomics" (UPenn-GBM) dataset, which describes the currently largest publicly available comprehensive collection of 630 patients diagnosed with de novo glioblastoma. The UPenn-GBM dataset includes (a) advanced multi-parametric magnetic resonance imaging scans acquired during routine clinical practice at the University of Pennsylvania Health System, (b) accompanying clinical, demographic, and molecular information, (c) perfusion and diffusion derivative volumes, (d) computationally-derived and manually-revised expert annotations of tumor sub-regions, and (e) quantitative imaging (also known as radiomic) features corresponding to each of these regions. This collection describes our contribution towards repeatable, reproducible, and comparative quantitative studies leading to new predictive, prognostic, and diagnostic assessments.

Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features

  • Bakas, Spyridon
  • Akbari, Hamed
  • Sotiras, Aristeidis
  • Bilello, Michel
  • Rozycki, Martin
  • Kirby, Justin S.
  • Freymann, John B.
  • Farahani, Keyvan
  • Davatzikos, Christos
Scientific data 2017 Journal Article, cited 1036 times
Website
Gliomas belong to a group of central nervous system tumors, and consist of various sub-regions. Gold standard labeling of these sub-regions in radiographic imaging is essential for both clinical and computational studies, including radiomic and radiogenomic analyses. Towards this end, we release segmentation labels and radiomic features for all pre-operative multimodal magnetic resonance imaging (MRI) (n=243) of the multi-institutional glioma collections of The Cancer Genome Atlas (TCGA), publicly available in The Cancer Imaging Archive (TCIA). Pre-operative scans were identified in both glioblastoma (TCGA-GBM, n=135) and low-grade-glioma (TCGA-LGG, n=108) collections via radiological assessment. The glioma sub-region labels were produced by an automated state-of-the-art method and manually revised by an expert board-certified neuroradiologist. An extensive panel of radiomic features was extracted based on the manually-revised labels. This set of labels and features should enable i) direct utilization of the TCGA/TCIA glioma collections towards repeatable, reproducible and comparative quantitative studies leading to new predictive, prognostic, and diagnostic assessments, as well as ii) performance evaluation of computer-aided segmentation methods, and comparison to our state-of-the-art method.

BraTS Multimodal Brain Tumor Segmentation Challenge

  • Bakas, Spyridon
2017 Conference Proceedings, cited 2030 times
Website

Brain Tumor Segmentation with Cascaded Deep Convolutional Neural Network

  • Baid, Ujjwal
  • Shah, Nisarg A.
  • Talbar, Sanjay
2020 Book Section, cited 9 times
Website
Cancer is the second leading cause of death globally and was responsible for an estimated 9.6 million deaths in 2018. Approximately 70% of deaths from cancer occur in low- and middle-income countries. One defining feature of cancer is the rapid creation of abnormal cells that grow uncontrollably, causing tumors. Gliomas are brain tumors that arise from the glial cells in the brain and comprise 80% of all malignant brain tumors. Accurate delineation of tumor cells from healthy tissue is important for precise treatment planning. Because of the different forms, shapes, and sizes of the tumor tissues, and their similarity to the rest of the brain, segmentation of glial tumors is challenging. In this study we propose a fully automatic two-step approach for Glioblastoma (GBM) brain tumor segmentation with a cascaded U-Net. Training patches are extracted from 335 cases from the Brain Tumor Segmentation (BraTS) Challenge, and results are validated on 125 patients. The proposed approach is evaluated quantitatively in terms of Dice Similarity Coefficient (DSC) and Hausdorff95 distance.
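The Dice Similarity Coefficient used for evaluation above reduces to a short computation on binary masks; the sketch below is a generic illustration assuming NumPy arrays, not the authors' pipeline.

    import numpy as np

    def dice_coefficient(pred, truth, eps=1e-7):
        # DSC = 2|A intersect B| / (|A| + |B|) on binary segmentation masks.
        pred, truth = pred.astype(bool), truth.astype(bool)
        intersection = np.logical_and(pred, truth).sum()
        return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

    # Toy check: a mask compared with itself scores 1.0.
    a = np.zeros((8, 8), dtype=int)
    a[2:5, 2:5] = 1
    print(round(float(dice_coefficient(a, a)), 3))  # 1.0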

Feature fusion Siamese network for breast cancer detection comparing current and prior mammograms

  • Bai, J.
  • Jin, A.
  • Wang, T.
  • Yang, C.
  • Nabavi, S.
Med Phys 2022 Journal Article, cited 0 times
Website
PURPOSE: Automatic detection of very small and nonmass abnormalities from mammogram images has remained challenging. In clinical practice for each patient, radiologists commonly not only screen the mammogram images obtained during the examination, but also compare them with previous mammogram images to make a clinical decision. To design an artificial intelligence (AI) system to mimic radiologists for better cancer detection, in this work we proposed an end-to-end enhanced Siamese convolutional neural network to detect breast cancer using previous year and current year mammogram images. METHODS: The proposed Siamese-based network uses high-resolution mammogram images and fuses features of pairs of previous year and current year mammogram images to predict cancer probabilities. The proposed approach is developed based on the concept of one-shot learning that learns the abnormal differences between current and prior images instead of abnormal objects, and as a result can perform better with small sample size data sets. We developed two variants of the proposed network. In the first model, to fuse the features of current and previous images, we designed an enhanced distance learning network that considers not only the overall distance, but also the pixel-wise distances between the features. In the other model, we concatenated the features of current and previous images to fuse them. RESULTS: We compared the performance of the proposed models with those of some baseline models that use current images only (ResNet and VGG) and also use current and prior images (long short-term memory [LSTM] and vanilla Siamese) in terms of accuracy, sensitivity, precision, F1 score, and area under the curve (AUC). Results show that the proposed models outperform the baseline models and the proposed model with the distance learning network performs the best (accuracy: 0.92, sensitivity: 0.93, precision: 0.91, specificity: 0.91, F1: 0.92 and AUC: 0.95). CONCLUSIONS: Integrating prior mammogram images improves automatic cancer classification, especially for very small and nonmass abnormalities. For classification models that integrate current and prior mammogram images, using an enhanced and effective distance learning network can advance the performance of the models.
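A minimal PyTorch sketch of the Siamese idea summarized above (a shared encoder applied to both time points, with pixel-wise feature distances fed to a classifier) follows; all layer sizes and names are assumptions for illustration, not the authors' architecture.

    import torch
    import torch.nn as nn

    class SiameseFusion(nn.Module):
        def __init__(self):
            super().__init__()
            # Shared backbone: the same weights embed prior and current images.
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(8),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(), nn.Linear(32 * 8 * 8, 1), nn.Sigmoid())

        def forward(self, current, prior):
            f_cur, f_pri = self.encoder(current), self.encoder(prior)
            # Pixel-wise absolute distance between the two feature maps.
            return self.classifier(torch.abs(f_cur - f_pri))

    model = SiameseFusion()
    prob = model(torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128))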

Imaging genomics in cancer research: limitations and promises

  • Bai, Harrison X
  • Lee, Ashley M
  • Yang, Li
  • Zhang, Paul
  • Davatzikos, Christos
  • Maris, John M
  • Diskin, Sharon J
The British journal of radiology 2016 Journal Article, cited 28 times
Website

Brain Tumor Segmentation Based on Zernike Moments, Enhanced Ant Lion Optimization, and Convolutional Neural Network in MRI Images

  • Bagherian Kasgari, Abbas
  • Ranjbarzadeh, Ramin
  • Caputo, Annalina
  • Baseri Saadi, Soroush
  • Bendechache, Malika
2023 Book Section, cited 1 times
Website
Gliomas that form in glial cells in the spinal cord and brain are the most aggressive and common kinds of brain tumors (intra-axial brain tumors) due to their rapid progression and infiltrative nature. The procedure of recognizing tumor margins from healthy tissues is still an arduous and time-consuming task in the clinical routine. In this study, a robust and efficient machine learning-based pipeline is suggested for brain tumor segmentation. Moreover, we employ four MRI modalities to increase the final accuracy of the segmentation results, namely, Flair, T1, T2, and T1ce. Firstly, eight feature maps are extracted from each modality using the Zernike moments approach. The Zernike moments create a feature map from two parameters, namely, n and m, so by changing these values we are able to generate different sets of edge feature maps. Then, eight edge feature maps for each modality are selected to produce a final feature map. Next, the four original images are encoded into four new images that represent more unique and key information using the Local Directional Number Pattern (LDNP). As different encoded images lead to different final results and accuracies, Enhanced Ant Lion Optimization (EALO) was employed to find the best possible set of feature maps for creating the best possible encoded image. Finally, a CNN model that accepts four input patches is utilized to explore significant details from the brain tissue more efficiently. Overall, the suggested framework outperforms the baseline methods regarding Dice score and Recall.

Technical and Clinical Factors Affecting Success Rate of a Deep Learning Method for Pancreas Segmentation on CT

  • Bagheri, Mohammad Hadi
  • Roth, Holger
  • Kovacs, William
  • Yao, Jianhua
  • Farhadi, Faraz
  • Li, Xiaobai
  • Summers, Ronald M
Acad Radiol 2019 Journal Article, cited 0 times
Website
PURPOSE: Accurate pancreas segmentation has application in surgical planning, assessment of diabetes, and detection and analysis of pancreatic tumors. Factors that affect pancreas segmentation accuracy have not been previously reported. The purpose of this study is to identify technical and clinical factors that adversely affect the accuracy of pancreas segmentation on CT. METHOD AND MATERIALS: In this IRB and HIPAA compliant study, a deep convolutional neural network was used for pancreas segmentation in a publicly available archive of 82 portal-venous phase abdominal CT scans of 53 men and 29 women. The accuracies of the segmentations were evaluated by the Dice similarity coefficient (DSC). The DSC was then correlated with demographic and clinical data (age, gender, height, weight, body mass index), CT technical factors (image pixel size, slice thickness, presence or absence of oral contrast), and CT imaging findings (volume and attenuation of pancreas, visceral abdominal fat, and CT attenuation of the structures within a 5 mm neighborhood of the pancreas). RESULTS: The average DSC was 78% +/- 8%. Factors that were statistically significantly correlated with DSC included body mass index (r=0.34, p < 0.01), visceral abdominal fat (r=0.51, p < 0.0001), volume of the pancreas (r=0.41, p=0.001), standard deviation of CT attenuation within the pancreas (r=0.30, p=0.01), and median and average CT attenuation in the immediate neighborhood of the pancreas (r = -0.53, p < 0.0001 and r=-0.52, p < 0.0001). There were no significant correlations between the DSC and the height, gender, or mean CT attenuation of the pancreas. CONCLUSION: Increased visceral abdominal fat and accumulation of fat within or around the pancreas are major factors associated with more accurate segmentation of the pancreas. Potential applications of our findings include assessment of pancreas segmentation difficulty of a particular scan or dataset and identification of methods that work better for more challenging pancreas segmentations.

Survival time prediction by integrating cox proportional hazards network and distribution function network

  • Baek, Eu-Tteum
  • Yang, Hyung Jeong
  • Kim, Soo Hyung
  • Lee, Guee Sang
  • Oh, In-Jae
  • Kang, Sae-Ryung
  • Min, Jung-Joon
BMC Bioinformatics 2021 Journal Article, cited 0 times
Website
BACKGROUND: The Cox proportional hazards model is commonly used to predict the hazard ratio, which is the risk or probability of occurrence of an event of interest. However, the Cox proportional hazards model cannot directly generate an individual survival time. To do this, survival analysis in the Cox model converts the hazard ratio to survival times through distributions such as the exponential, Weibull, Gompertz or log-normal distributions. In other words, to generate the survival time, the Cox model has to select a specific distribution over time. RESULTS: This study presents a method to predict the survival time by integrating a hazard network and a distribution function network. The Cox proportional hazards network of DeepSurv is adapted for the prediction of the hazard ratio, and a distribution function network is applied to generate the survival time. To evaluate the performance of the proposed method, a new evaluation metric that calculates the intersection over union between the predicted curve and the ground truth was proposed. To further understand significant prognostic factors, we use the 1D gradient-weighted class activation mapping method to highlight the network activations as a heat-map visualization over the input data. The performance of the proposed method was experimentally verified and the results compared to other existing methods. CONCLUSIONS: Our results confirmed that the combination of the two networks, the Cox proportional hazards network and the distribution function network, can effectively generate accurate survival times.
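DeepSurv-style hazard networks such as the one adapted here are usually trained by minimizing the negative Cox partial log-likelihood; the sketch below is a common formulation with illustrative tensor names, not necessarily the authors' exact loss.

    import torch

    def cox_ph_loss(log_hazards, times, events):
        # Sort by descending survival time so the risk set of patient i
        # is exactly the prefix 0..i of the sorted arrays.
        order = torch.argsort(times, descending=True)
        h, e = log_hazards[order], events[order]
        # log sum of exp(h) over each patient's risk set (times >= t_i).
        log_risk_set = torch.logcumsumexp(h, dim=0)
        # Only patients with an observed event contribute terms.
        return -((h - log_risk_set) * e).sum() / e.sum()

    loss = cox_ph_loss(torch.randn(8), torch.rand(8), torch.ones(8))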

Mammography and breast tomosynthesis simulator for virtual clinical trials

  • Badal, Andreu
  • Sharma, Diksha
  • Graff, Christian G.
  • Zeng, Rongping
  • Badano, Aldo
Computer Physics Communications 2021 Journal Article, cited 0 times
Website
Computer modeling and simulations are increasingly being used to predict the clinical performance of x-ray imaging devices in silico, and to generate synthetic patient images for training and testing of machine learning algorithms. We present a detailed description of the computational models implemented in the open source GPU-accelerated Monte Carlo x-ray imaging simulation code MC-GPU. This code, originally developed to simulate radiography and computed tomography, has been extended to replicate a commercial full-field digital mammography and digital breast tomosynthesis (DBT) device. The code was recently used to image 3000 virtual breast models with the aim of reproducing in silico a clinical trial used in support of the regulatory approval of DBT as a replacement of mammography for breast cancer screening. The updated code implements a more realistic x-ray source model (extended 3D focal spot, tomosynthesis acquisition trajectory, tube motion blurring) and an improved detector model (direct-conversion Selenium detector with depth-of-interaction effects, fluorescence tracking, electronic noise and anti-scatter grid). The software uses a high resolution voxelized geometry model to represent the breast anatomy. To reduce the GPU memory requirements, the code stores the voxels in memory within a binary tree structure. The binary tree is an efficient compression mechanism because many voxels with the same composition are combined in common tree branches while preserving random access to the phantom composition at any location. A delta scattering ray-tracing algorithm which does not require computing ray-voxel interfaces is used to minimize memory access. Multiple software verification and validation steps intended to establish the credibility of the implemented computational models are reported. The software verification was done using a digital quality control phantom and an ideal pinhole camera. The validation was performed reproducing standard bench testing experiments used in clinical practice and comparing with experimental measurements. A sensitivity study intended to assess the robustness of the simulated results to variations in some of the input parameters was performed using an in silico clinical trial pipeline with simulated lesions and mathematical observers. We show that MC-GPU is able to simulate x-ray projections that incorporate many of the sources of variability found in clinical images, and that the simulated results are robust to some uncertainty in the input parameters. Limitations of the implemented computational models are discussed.
Program summary
Program title: MCGPU_VICTRE
CPC Library link to program files: http://dx.doi.org/10.17632/k5x2bsf27m.1
Licensing provisions: CC0 1.0
Programming language: C (with NVIDIA CUDA extensions)
Nature of problem: The health risks associated with ionizing radiation impose a limit to the amount of clinical testing that can be done with x-ray imaging devices. In addition, radiation dose cannot be directly measured inside the body. For these reasons, a computational replica of an x-ray imaging device that simulates radiographic images of synthetic anatomical phantoms is of great value for device evaluation. The simulated radiographs and dosimetric estimates can be used for system design and optimization, task-based evaluation of image quality, machine learning software training, and in silico imaging trials.
Solution method: Computational models of a mammography x-ray source and detector have been implemented.
X-ray transport through matter is simulated using Monte Carlo methods customized for parallel execution in multiple Graphics Processing Units. The input patient anatomy is represented by voxels, which are efficiently stored in the video memory using a new binary tree structure compression mechanism.
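The delta scattering ray tracing mentioned above (often called Woodcock tracking) samples photon free paths against a constant majorant so that no ray-voxel interfaces are ever computed; the toy sketch below illustrates the principle only, with an assumed attenuation function, and is not MC-GPU's implementation.

    import math
    import random

    def woodcock_free_path(position, direction, mu_at, mu_max):
        # Sample a collision site without intersecting voxel boundaries.
        while True:
            # Free path against the constant majorant cross-section mu_max.
            step = -math.log(1.0 - random.random()) / mu_max
            position = [p + step * d for p, d in zip(position, direction)]
            # Accept a real collision with probability mu(x)/mu_max; otherwise
            # the collision is "virtual" and tracking simply continues.
            if random.random() < mu_at(position) / mu_max:
                return position

    # Toy homogeneous medium, so the first sampled collision is always real.
    site = woodcock_free_path([0.0, 0.0, 0.0], [0.0, 0.0, 1.0],
                              mu_at=lambda x: 0.2, mu_max=0.2)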

Virtual clinical trial for task-based evaluation of a deep learning synthetic mammography algorithm

  • Badal, Andreu
  • Cha, Kenny H.
  • Divel, Sarah E.
  • Graff, Christian G.
  • Zeng, Rongping
  • Badano, Aldo
2019 Conference Proceedings, cited 0 times
Website
Image processing algorithms based on deep learning techniques are being developed for a wide range of medical applications. Processed medical images are typically evaluated with the same kind of image similarity metrics used for natural scenes, disregarding the medical task for which the images are intended. We propose a computational framework to estimate the clinical performance of image processing algorithms using virtual clinical trials. The proposed framework may provide an alternative method for regulatory evaluation of non-linear image processing algorithms. To illustrate this application of virtual clinical trials, we evaluated three algorithms to compute synthetic mammograms from digital breast tomosynthesis (DBT) scans based on convolutional neural networks previously used for denoising low dose computed tomography scans. The inputs to the networks were one or more noisy DBT projections, and the networks were trained to minimize the difference between the output and the corresponding high dose mammogram. DBT and mammography images simulated with the Monte Carlo code MC-GPU using realistic breast phantoms were used for network training and validation. The denoising algorithms were tested in a virtual clinical trial by generating 3000 synthetic mammograms from the public VICTRE dataset of simulated DBT scans. The detectability of a calcification cluster and a spiculated mass present in the images was calculated using an ensemble of 30 computational channelized Hotelling observers. The signal detectability results, which took into account anatomic and image reader variability, showed that the visibility of the mass was not affected by the post-processing algorithm, but that the resulting slight blurring of the images severely impacted the visibility of the calcification cluster. The evaluation of the algorithms using the pixel-based metrics peak signal to noise ratio and structural similarity in image patches was not able to predict the reduction in performance in the detectability of calcifications. These two metrics are computed over the whole image and do not consider any particular task, and might not be adequate to estimate the diagnostic performance of the post-processed images.

Optimized convolutional neural network by firefly algorithm for magnetic resonance image classification of glioma brain tumor grade

  • Bacanin, Nebojsa
  • Bezdan, Timea
  • Venkatachalam, K.
  • Al-Turjman, Fadi
Journal of Real-Time Image Processing 2021 Journal Article, cited 0 times
Website
The most frequent brain tumor types are gliomas. Magnetic resonance imaging helps in the diagnosis of brain tumors, but making the diagnosis in the early stages of a glioma is hard even for a highly experienced specialist. A reliable and efficient system that helps the doctor make the diagnosis in the early stages is therefore required for magnetic resonance imaging interpretation. Convolutional neural networks, which have proven able to obtain excellent performance in image classification tasks, can be used to classify the images according to the class to which the glioma belongs. Tuning the hyperparameters of a convolutional network is a very important issue in this domain for achieving high classification accuracy; however, this task takes a lot of computational time. Approaching this issue, in this manuscript we propose a metaheuristic method, based on a modified firefly algorithm, to automatically find near-optimal values of the convolutional neural network hyperparameters, and develop a system for automatic image classification of glioma brain tumor grades from magnetic resonance imaging. First, we tested the proposed modified algorithm on a set of standard unconstrained benchmark functions and compared its performance to the original algorithm and other modified variants. Upon verifying the efficiency of the proposed approach in general, it was applied to hyperparameter optimization of the convolutional neural network. The IXI dataset and The Cancer Imaging Archive with more collections of data are used for evaluation purposes, and additionally the method is evaluated on axial brain tumor images. The obtained experimental results and comparative analysis with other state-of-the-art algorithms tested under the same conditions show the robustness and efficiency of the proposed method.
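For orientation, a compact sketch of the standard (unmodified) firefly algorithm on a toy objective follows; all constants, bounds, and names are illustrative assumptions, and the paper's modified variant differs from this baseline.

    import numpy as np

    def firefly_optimize(objective, dim=2, n=15, iters=100,
                         alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5.0, 5.0, (n, dim))          # firefly positions
        light = np.array([objective(v) for v in x])   # lower = brighter here
        for _ in range(iters):
            for i in range(n):
                for j in range(n):
                    if light[j] < light[i]:           # move i toward brighter j
                        r2 = np.sum((x[i] - x[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)   # attractiveness
                        x[i] += beta * (x[j] - x[i]) + alpha * rng.normal(size=dim)
                        light[i] = objective(x[i])
        return x[np.argmin(light)]

    best = firefly_optimize(lambda v: np.sum(v ** 2))  # converges near the origin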

BIOMEDICAL IMAGE RETRIEVAL USING LBWP

  • Babu, Joyce Sarah
  • Mathew, Soumya
  • Simon, Rini
International Research Journal of Engineering and Technology 2017 Journal Article, cited 0 times
Website

Detection of Brain Tumour in MRI Scan Images using Tetrolet Transform and SVM Classifier

  • Babu, B Shoban
  • Varadarajan, S
Indian Journal of Science and Technology 2017 Journal Article, cited 1 times
Website

Analysis of Classification Methods for Diagnosis of Pulmonary Nodules in CT Images

  • Baboo, Capt Dr S Santhosh
  • Iyyapparaj, E
IOSR Journal of Electrical and Electronics Engineering 2017 Journal Article, cited 0 times
Website
The main aim of this work is to propose a novel computer-aided detection (CAD) system based on contextual clustering combined with region growing for assisting radiologists in the early identification of lung cancer from computed tomography (CT) scans. Instead of using a conventional thresholding approach, the proposed work uses contextual clustering, which yields a more accurate segmentation of the lungs from the chest volume. Following segmentation, GLCM features are extracted and then classified using three different classifiers, namely random forest, SVM and k-NN.
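The GLCM feature step described above can be reproduced generically with scikit-image; the patch below is a toy stand-in for a segmented lung region, and the function names assume a recent scikit-image release (older versions spell them greycomatrix/greycoprops).

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    region = np.random.randint(0, 64, (32, 32), dtype=np.uint8)  # toy patch
    # Gray-level co-occurrence matrices at distance 1 for two directions.
    glcm = graycomatrix(region, distances=[1], angles=[0, np.pi / 2],
                        levels=64, symmetric=True, normed=True)
    features = {prop: graycoprops(glcm, prop).mean()
                for prop in ("contrast", "homogeneity", "energy", "correlation")}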

OpenKBP: The open‐access knowledge‐based planning grand challenge and dataset

  • Babier, A.
  • Zhang, B.
  • Mahmood, R.
  • Moore, K. L.
  • Purdie, T. G.
  • McNiven, A. L.
  • Chan, T. C. Y.
Medical Physics 2021 Journal Article, cited 0 times
Website
PURPOSE: To advance fair and consistent comparisons of dose prediction methods for knowledge-based planning (KBP) in radiation therapy research. METHODS: We hosted OpenKBP, a 2020 AAPM Grand Challenge, and challenged participants to develop the best method for predicting the dose of contoured computed tomography (CT) images. The models were evaluated according to two separate scores: (a) dose score, which evaluates the full three-dimensional (3D) dose distributions, and (b) dose-volume histogram (DVH) score, which evaluates a set of DVH metrics. We used these scores to quantify the quality of the models based on their out-of-sample predictions. To develop and test their models, participants were given the data of 340 patients who were treated for head-and-neck cancer with radiation therapy. The data were partitioned into training (n = 200), validation (n = 40), and testing (n = 100) datasets. All participants performed training and validation with the corresponding datasets during the first (validation) phase of the Challenge. In the second (testing) phase, the participants used their model on the testing data to quantify the out-of-sample performance, which was hidden from participants and used to determine the final competition ranking. Participants also responded to a survey to summarize their models. RESULTS: The Challenge attracted 195 participants from 28 countries, and 73 of those participants formed 44 teams in the validation phase, which received a total of 1750 submissions. The testing phase garnered submissions from 28 of those teams, which represents 28 unique prediction methods. On average, over the course of the validation phase, participants improved the dose and DVH scores of their models by a factor of 2.7 and 5.7, respectively. In the testing phase, one model achieved the best dose score (2.429) and DVH score (1.478), which were both significantly better than the dose score (2.564) and the DVH score (1.529) that were achieved by the runner-up models. Lastly, many of the top performing teams reported that they used generalizable techniques (e.g., ensembles) to achieve higher performance than their competition. CONCLUSION: OpenKBP is the first competition for knowledge-based planning research. The Challenge helped launch the first platform that enables researchers to compare KBP prediction methods fairly and consistently using a large open-source dataset and standardized metrics. OpenKBP has also democratized KBP research by making it accessible to everyone, which should help accelerate the progress of KBP research. The OpenKBP datasets are available publicly to help benchmark future KBP research.
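As background for the DVH score, a minimal sketch of one common family of DVH point metrics follows; the dose array is a toy assumption, and the actual OpenKBP scoring aggregates its own set of metrics.

    import numpy as np

    def d_metric(structure_dose, p):
        # D_p: the minimum dose received by the hottest p% of the structure,
        # i.e., the dose that at least p% of its voxels receive.
        return np.percentile(structure_dose, 100 - p)

    roi_dose = np.random.uniform(0.0, 70.0, 5000)   # Gy, voxels of one ROI
    d95, d50 = d_metric(roi_dose, 95), d_metric(roi_dose, 50)
    mean_dose = roi_dose.mean()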

A Pre-study on the Layer Number Effect of Convolutional Neural Networks in Brain Tumor Classification

  • Azat, Hedi Syamand
  • Sekeroglu, Boran
  • Dimililer, Kamil
2021 Conference Paper, cited 0 times
Website
Convolutional Neural Networks significantly influenced the revolution of Artificial Intelligence and Deep Learning, and they have become a basic model for image classification. However, Convolutional Neural Networks can be applied in different architectures and have many parameters that require several experiments to reach optimal results in applications. The number of images used, the input size of the images, the number of layers, and their parameters are the main factors that directly affect the success of the models. In this study, seven CNN architectures with different numbers of convolutional and dense layers were applied to the Brain Tumor Progression dataset. The CNN architectures were designed by gradually decreasing and increasing the number of layers, and the performance results on the considered dataset were analyzed using five-fold cross-validation. The results showed that deeper architectures in binary classification tasks can reduce performance rates by up to 7%. It was observed that models with the lowest number of layers are more successful in terms of sensitivity. Overall, networks with two convolutional and fully connected layers produced superior results, depending on the filter and neuron number adjustments within their layers. These results may help researchers determine the initial architecture in binary classification studies.

A novel adaptive momentum method for medical image classification using convolutional neural network

  • Aytac, U. C.
  • Gunes, A.
  • Ajlouni, N.
BMC Med Imaging 2022 Journal Article, cited 0 times
Website
BACKGROUND: AI for medical diagnosis has made a tremendous impact by applying convolutional neural networks (CNNs) to medical image classification, and momentum plays an essential role in stochastic gradient optimization algorithms for accelerating or improving the training of convolutional neural networks. In traditional optimizers for CNNs, the momentum is usually weighted by a constant; however, tuning the momentum hyperparameter can be computationally complex. In this paper, we propose a novel adaptive momentum for fast and stable convergence. METHOD: The proposed adaptive momentum rate increases or decreases based on each epoch's change in error, which eliminates the need for momentum hyperparameter optimization. We tested the proposed method with three different datasets: REMBRANDT Brain Cancer, NIH Chest X-ray, and COVID-19 CT scan. We compared the performance of the novel adaptive momentum optimizer with stochastic gradient descent (SGD) and other adaptive optimizers such as Adam and RMSprop. RESULTS: The proposed method improves SGD performance by reducing the classification error from 6.12 to 5.44%, and it achieved the lowest error and highest accuracy compared with the other optimizers. To strengthen the outcomes of this study, we investigated the performance of state-of-the-art CNN architectures with adaptive momentum. The results show that the proposed method achieved the highest accuracy, 95%, compared with state-of-the-art CNN architectures on the same dataset. The proposed method improves convergence performance by reducing classification error and achieves high accuracy compared with other optimizers.
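A hedged reconstruction of the idea of adapting momentum from the per-epoch error change is sketched below; the update rule, constants, and names are assumptions for illustration, not the authors' exact formula.

    def adapt_momentum(momentum, prev_error, curr_error,
                       step=0.05, low=0.5, high=0.99):
        # Error fell: trust the current direction and raise momentum;
        # error rose: damp momentum to stabilize training.
        if curr_error < prev_error:
            return min(high, momentum + step)
        return max(low, momentum - step)

    m, prev = 0.9, None
    for err in [0.61, 0.54, 0.58, 0.50]:   # per-epoch validation error (toy)
        if prev is not None:
            m = adapt_momentum(m, prev, err)
            # e.g., optimizer.param_groups[0]["momentum"] = m  (PyTorch SGD)
        prev = err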

Analysis of dual tree M‐band wavelet transform based features for brain image classification

  • Ayalapogu, Ratna Raju
  • Pabboju, Suresh
  • Ramisetty, Rajeswara Rao
Magnetic Resonance in Medicine 2018 Journal Article, cited 1 times
Website

Multi-threshold Attention U-Net (MTAU) Based Model for Multimodal Brain Tumor Segmentation in MRI Scans

  • Awasthi, Navchetan
  • Pardasani, Rohit
  • Gupta, Swati
2021 Book Section, cited 9 times
Website
Gliomas are one of the most frequent brain tumors and are classified into high grade and low grade gliomas. The segmentation of various regions, such as the tumor core and enhancing tumor, plays an important role in determining severity and prognosis. Here, we have developed a multi-threshold model based on attention U-Net for identification of various regions of the tumor in magnetic resonance imaging (MRI). We propose multi-path segmentation and built three separate models for the different regions of interest. The proposed model achieved mean Dice Coefficients of 0.59, 0.72, and 0.61 for enhancing tumor, whole tumor and tumor core, respectively, on the training dataset. The same model gave mean Dice Coefficients of 0.57, 0.73, and 0.61 on the validation dataset and 0.59, 0.72, and 0.57 on the test dataset.

Oropharyngeal cancer prognosis based on clinicopathologic and quantitative imaging biomarkers with multiparametric model and machine learning methods

  • Avasthi, Anupama
  • Avasthi, Anushree A.
  • Rathi, Bhawna
2022 Book Section, cited 0 times
Website
Aim: There is an unmet need for integrating quantitative imaging biomarkers into current risk stratification tools and for investigating relationships between various clinical characteristics, radiomics features, and other clinical prognosticators for oropharyngeal cancer (OPC). Multivariate analysis and ML algorithms can be used to predict recurrence-free survival in patients with OPC. Method: Open-access clinical metadata and matched baseline contrast-enhanced computed tomography (CECT) scans were accessed for a cohort of 495 OPC patients treated between 2005 and 2012, available at the Head and Neck Cancer CT Atlas. DOI: 10.7937/K9/TCIA.2017.umz8dv6s. The Cox proportional hazards (CPH) model was used to evaluate a large number of prognostic variables for the survival of cancer patients. The Kaplan-Meier method was deployed to estimate mean and median survival with 95% CI, compared using the log-rank test. ML algorithms using random forest (RF) classifiers were used for prediction. Variables used in the models were age, gender, smoking status, smoking pack-years, TNM characteristics, AJCC staging, subsite of origin, therapeutic combination, radiation dose, radiation duration, relapse-free survival and vital status. Results: The performance of the CPH and random survival forest (RSF) models was compared in terms of Harrell's c-index (95% confidence interval); the RSF model had an error rate of 38.94%, i.e., a c-index of 0.61, compared with a CPH c-index of 0.62, which indicates a medium-level prediction. Conclusion: ML is a promising toolset for improving the prediction of oral cancer outcomes. However, it currently yields a medium-level prediction, and additional work is needed to improve its accuracy and consistency. Additional refinements in the model may provide useful inputs for improved personalized care and better outcomes in HNSCC patients.

Computer Aided Detection Scheme To Improve The Prognosis Assessment Of Early Stage Lung Cancer Patients

  • Athira, KV
  • Nithin, SS
Computer 2018 Journal Article, cited 0 times
Website
We developed a computer-aided detection scheme to predict the recurrence risk of stage 1 non-small cell lung cancer in patients after surgery. Using chest computed tomography images taken before surgery, the system automatically segments the tumor seen on the CT images and extracts tumor-related morphological and texture-based image features. We trained a Naïve Bayesian network classifier using six image features and an ANN classifier using two genomic biomarkers, namely protein expression of the excision repair cross-complementing 1 gene (ERCC1) and of a regulatory subunit of ribonucleotide reductase (RRM1), to predict the cancer recurrence risk. The new approach has high potential to assist doctors in more effectively managing stage 1 NSCLC patients to reduce the cancer recurrence risk.

Neural image compression for non-small cell lung cancer subtype classification in H&E stained whole-slide images

  • Aswolinskiy, Witali
  • Tellez, David
  • Raya, Gabriel
  • van der Woude, Lieke
  • Looijen-Salamon, Monika
  • van der Laak, Jeroen
  • Grunberg, Katrien
  • Ciompi, Francesco
2021 Conference Proceedings, cited 0 times

Multimodal Brain Tumor Segmentation with Normal Appearance Autoencoder

  • Astaraki, Mehdi
  • Wang, Chunliang
  • Carrizo, Gabriel
  • Toma-Dasu, Iuliana
  • Smedby, Örjan
2020 Conference Paper, cited 0 times
Website
We propose a hybrid segmentation pipeline based on the autoencoders’ capability of anomaly detection. To this end, we, first, introduce a new augmentation technique to generate synthetic paired images. Gaining advantage from the paired images, we propose a Normal Appearance Autoencoder (NAA) that is able to remove tumors and thus reconstruct realistic-looking, tumor-free images. After estimating the regions where the abnormalities potentially exist, a segmentation network is guided toward the candidate region. We tested the proposed pipeline on the BraTS 2019 database. The preliminary results indicate that the proposed model improved the segmentation accuracy of brain tumor subregions compared to the U-Net model.

Early survival prediction in non-small cell lung cancer from PET/CT images using an intra-tumor partitioning method.

  • Astaraki, Mehdi
  • Wang, Chunliang
  • Buizza, Giulia
  • Toma-Dasu, Iuliana
  • Lazzeroni, Marta
  • Smedby, Orjan
Physica Medica 2019 Journal Article, cited 0 times
Website
PURPOSE: To explore prognostic and predictive values of a novel quantitative feature set describing intra-tumor heterogeneity in patients with lung cancer treated with concurrent and sequential chemoradiotherapy. METHODS: Longitudinal PET-CT images of 30 patients with non-small cell lung cancer were analysed. To describe tumor cell heterogeneity, the tumors were partitioned into one to ten concentric regions depending on their sizes, and, for each region, the change in average intensity between the two scans was calculated for PET and CT images separately to form the proposed feature set. To validate the prognostic value of the proposed method, radiomics analysis was performed and a combination of the proposed novel feature set and the classic radiomic features was evaluated. A feature selection algorithm was utilized to identify the optimal features, and a linear support vector machine was trained for the task of overall survival prediction in terms of area under the receiver operating characteristic curve (AUROC). RESULTS: The proposed novel feature set was found to be prognostic and even outperformed the radiomics approach with a significant difference (AUROC_SALoP = 0.90 vs. AUROC_radiomic = 0.71) when feature selection was not employed, whereas with feature selection, a combination of the novel feature set and radiomics led to the highest prognostic values. CONCLUSION: A novel feature set designed for capturing intra-tumor heterogeneity was introduced. Judging by their prognostic power, the proposed features have a promising potential for early survival prediction.
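One plausible way to realize the concentric intra-tumor partitioning described above is to bin a Euclidean distance transform of the tumor mask; the sketch below is an illustrative assumption, not the authors' code.

    import numpy as np
    from scipy import ndimage

    def concentric_regions(mask, n_regions):
        # Depth of each tumor voxel from the tumor boundary.
        dist = ndimage.distance_transform_edt(mask)
        # Bin depth into n shells: label 1 = outer rim, n = core, 0 = background.
        edges = np.linspace(0.0, dist.max(), n_regions + 1)
        labels = np.digitize(dist, edges[1:-1]) + 1
        return labels * mask

    mask = np.zeros((64, 64), dtype=int)
    mask[16:48, 16:48] = 1                  # toy square "tumor"
    shells = concentric_regions(mask, 4)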

Prior-aware autoencoders for lung pathology segmentation

  • Astaraki, M.
  • Smedby, O.
  • Wang, C.
Med Image Anal 2022 Journal Article, cited 0 times
Website
Segmentation of lung pathology in Computed Tomography (CT) images is of great importance for lung disease screening. However, the presence of different types of lung pathologies with a wide range of heterogeneities in size, shape, location, and texture, on one side, and their visual similarity with respect to surrounding tissues, on the other side, make it challenging to perform reliable automatic lesion segmentation. To leverage segmentation performance, we propose a deep learning framework comprising a Normal Appearance Autoencoder (NAA) model to learn the distribution of healthy lung regions and reconstruct pathology-free images from the corresponding pathological inputs by replacing the pathological regions with the characteristics of healthy tissues. Detected regions that represent prior information regarding the shape and location of pathologies are then integrated into a segmentation network to guide the attention of the model into more meaningful delineations. The proposed pipeline was tested on three types of lung pathologies, including pulmonary nodules, Non-Small Cell Lung Cancer (NSCLC), and Covid-19 lesion on five comprehensive datasets. The results show the superiority of the proposed prior model, which outperformed the baseline segmentation models in all the cases with significant margins. On average, adding the prior model improved the Dice coefficient for the segmentation of lung nodules by 0.038, NSCLCs by 0.101, and Covid-19 lesions by 0.041. We conclude that the proposed NAA model produces reliable prior knowledge regarding the lung pathologies, and integrating such knowledge into a prior segmentation network leads to more accurate delineations.

Fusion of CT and MR Liver Images by SURF-Based Registration

  • Aslan, Muhammet Fatih
  • Durdu, Akif
International Journal of Intelligent Systems and Applications in Engineering 2019 Journal Article, cited 3 times
Website

Low-Rank Convolutional Networks for Brain Tumor Segmentation

  • Ashtari, Pooya
  • Maes, Frederik
  • Van Huffel, Sabine
2021 Book Section, cited 0 times
The automated segmentation of brain tumors is crucial for various clinical purposes from diagnosis to treatment planning to follow-up evaluations. The vast majority of effective models for tumor segmentation are based on convolutional neural networks with millions of parameters being trained. Such complex models can be highly prone to overfitting especially in cases where the amount of training data is insufficient. In this work, we devise a 3D U-Net-style architecture with residual blocks, in which low-rank constraints are imposed on weights of the convolutional layers in order to reduce overfitting. Within the same architecture, this helps to design networks with several times fewer parameters. We investigate the effectiveness of the proposed technique on the BraTS 2020 challenge.
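A common way to impose low-rank structure on convolution weights is a separable rank-r factorization, sketched below (in 2D for brevity) as an assumption for illustration; the chapter's exact constraint may differ.

    import torch.nn as nn

    def low_rank_conv(in_ch, out_ch, rank, kernel=3):
        # A k x k, in->out convolution replaced by in->rank (k x 1) followed by
        # rank->out (1 x k): far fewer parameters when rank << min(in, out).
        return nn.Sequential(
            nn.Conv2d(in_ch, rank, (kernel, 1), padding=(kernel // 2, 0)),
            nn.Conv2d(rank, out_ch, (1, kernel), padding=(0, kernel // 2)),
        )

    layer = low_rank_conv(64, 64, rank=8)   # ~3k weights vs. ~37k for Conv2d(64, 64, 3)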

Improved deep semantic medical image segmentation

  • Asgari Taghanaki, Saeid
2019 Thesis, cited 0 times
Website
The image semantic segmentation challenge consists of classifying each pixel of an image (or just several ones) into an instance, where each instance (or category) corresponds to an object. This task is a part of the concept of scene understanding, or better explaining the global context of an image. In the medical image analysis domain, image segmentation can be used for image-guided interventions, radiotherapy, or improved radiological diagnostics. Following a comprehensive review of state-of-the-art deep learning-based medical and non-medical image segmentation solutions, we make the following contributions. A typical deep learning-based (medical) image segmentation pipeline includes designing layers (A), designing an architecture (B), and defining a loss function (C). A clean, modified (D), or adversarially perturbed (E) image is fed into a model (consisting of layers and a loss function) to predict a segmentation mask for scene understanding, etc. In some cases where the number of segmentation annotations is limited, weakly supervised approaches (F) are leveraged. For some applications where further analysis is needed, e.g., predicting volumes and object burden, the segmentation mask is fed into another post-processing step (G). In this thesis, we tackle each of the steps (A-G). I) As for steps (A and E), we studied the effect of adversarial perturbation on image segmentation models and proposed a method that improves the segmentation performance via a non-linear radial basis convolutional feature mapping by learning a Mahalanobis-like distance function on both adversarially perturbed and unperturbed images. Our method then maps the convolutional features onto a linearly well-separated manifold, which prevents small adversarial perturbations from forcing a sample to cross the decision boundary. II) As for step (B), we propose light, learnable skip connections which learn first to select the most discriminative channels and then aggregate the selected ones as a single channel attending to the most discriminative regions of the input. Compared to the heavy classical skip connections, our method reduces the computation cost and memory usage while improving segmentation performance. III) As for step (C), we examined the critical choice of a loss function in order to handle the notorious imbalance problem that plagues both the input and output of a learning model. In order to tackle both types of imbalance during training and inference, we introduce a new curriculum learning-based loss function. Specifically, we leverage the Dice similarity coefficient to deter model parameters from being held at bad local minima and, at the same time, gradually learn better model parameters by penalizing false positives/negatives using a cross-entropy term. IV) As for step (D), we propose a new segmentation performance-boosting paradigm that relies on optimally modifying the network's input instead of the network itself. In particular, we leverage the gradients of a trained segmentation network with respect to the input to transfer it into a space where the segmentation accuracy improves. V) As for step (F), we propose a weakly supervised image segmentation model with a learned spatial masking mechanism to filter out irrelevant background signals from attention maps. The proposed method minimizes mutual information between a masked variational representation and the input while maximizing the information between the masked representation and class labels.
VI) Although many semi-automatic segmentation-based methods have been developed, as for step (G), we introduce a method that completely eliminates the segmentation step and directly estimates the volume and activity of the lesions from positron emission tomography scans.

Morphological and multi-level geometrical descriptor analysis in CT and MRI volumes for automatic pancreas segmentation

  • Asaturyan, Hykoush
  • Gligorievski, Antonio
  • Villarini, Barbara
Computerized Medical Imaging and Graphics 2019 Journal Article, cited 3 times
Website
Automatic pancreas segmentation in 3D radiological scans is a critical, yet challenging task. As a prerequisite for computer-aided diagnosis (CADx) systems, accurate pancreas segmentation could generate both quantitative and qualitative information towards establishing the severity of a condition, and thus provide additional guidance for therapy planning. Since the pancreas is an organ of high inter-patient anatomical variability, previous segmentation approaches report lower quantitative accuracy scores in comparison to abdominal organs such as the liver or kidneys. This paper presents a novel approach for automatic pancreas segmentation in magnetic resonance imaging (MRI) and computer tomography (CT) scans. This method exploits 3D segmentation that, when coupled with geometrical and morphological characteristics of abdominal tissue, classifies distinct contours in tight pixel-range proximity as “pancreas” or “non-pancreas”. There are three main stages to this approach: (1) identify a major pancreas region and apply contrast enhancement to differentiate between pancreatic and surrounding tissue; (2) perform 3D segmentation via continuous max-flow and min-cuts approach, structured forest edge detection, and a training dataset of annotated pancreata; (3) eliminate non-pancreatic contours from resultant segmentation via morphological operations on area, structure and connectivity between distinct contours. The proposed method is evaluated on a dataset containing 82 CT image volumes, achieving mean Dice Similarity coefficient (DSC) of 79.3 ± 4.4%. Two MRI datasets containing 216 and 132 image volumes are evaluated, achieving mean DSC 79.6 ± 5.7% and 81.6 ± 5.1% respectively. This approach is statistically stable, reflected by lower metrics in standard deviation in comparison to state-of-the-art approaches.

Effect of Applying Leakage Correction on rCBV Measurement Derived From DSC-MRI in Enhancing and Nonenhancing Glioma

  • Arzanforoosh, Fatemeh
  • Croal, Paula L.
  • van Garderen, Karin A.
  • Smits, Marion
  • Chappell, Michael A.
  • Warnert, Esther A. H.
Frontiers in Oncology 2021 Journal Article, cited 0 times
Website
Purpose: Relative cerebral blood volume (rCBV) is the most widely used parameter derived from DSC perfusion MR imaging for predicting brain tumor aggressiveness. However, accurate rCBV estimation is challenging in enhancing glioma, because of contrast agent extravasation through a disrupted blood-brain barrier (BBB), and even for nonenhancing glioma with an intact BBB, due to an elevated steady-state contrast agent concentration in the vasculature after first passage. In this study a thorough investigation of the effects of two different leakage correction algorithms on rCBV estimation for enhancing and nonenhancing tumors was conducted. Methods: Two datasets were used retrospectively in this study: 1. A publicly available TCIA dataset (49 patients with 35 enhancing and 14 nonenhancing glioma); 2. A dataset acquired clinically at Erasmus MC (EMC, Rotterdam, NL) (47 patients with 20 enhancing and 27 nonenhancing glial brain lesions). The leakage correction algorithms investigated in this study were: a unidirectional model-based algorithm with flux of contrast agent from the intra- to the extravascular extracellular space (EES); and a bidirectional model-based algorithm additionally including flow from EES to the intravascular space. Results: In enhancing glioma, the estimated average contrast-enhanced tumor rCBV significantly (Bonferroni corrected Wilcoxon Signed Rank Test, p < 0.05) decreased across the patients when applying unidirectional and bidirectional correction: 4.00 ± 2.11 (uncorrected), 3.19 ± 1.65 (unidirectional), and 2.91 ± 1.55 (bidirectional) in TCIA dataset and 2.51 ± 1.3 (uncorrected), 1.72 ± 0.84 (unidirectional), and 1.59 ± 0.9 (bidirectional) in EMC dataset. In nonenhancing glioma, a significant but smaller difference in observed rCBV was found after application of both correction methods used in this study: 1.42 ± 0.60 (uncorrected), 1.28 ± 0.46 (unidirectional), and 1.24 ± 0.37 (bidirectional) in TCIA dataset and 0.91 ± 0.49 (uncorrected), 0.77 ± 0.37 (unidirectional), and 0.67 ± 0.34 (bidirectional) in EMC dataset. Conclusion: Both leakage correction algorithms were found to change rCBV estimation with BBB disruption in enhancing glioma, and to a lesser degree in nonenhancing glioma. Stronger effects were found for bidirectional leakage correction than for unidirectional leakage correction.

Discovery of pre-therapy 2-deoxy-2-18F-fluoro-D-glucose positron emission tomography-based radiomics classifiers of survival outcome in non-small-cell lung cancer patients

  • Arshad, Mubarik A
  • Thornton, Andrew
  • Lu, Haonan
  • Tam, Henry
  • Wallitt, Kathryn
  • Rodgers, Nicola
  • Scarsbrook, Andrew
  • McDermott, Garry
  • Cook, Gary J
  • Landau, David
European journal of nuclear medicine and molecular imaging 2018 Journal Article, cited 0 times
Website

Brain Tumor Segmentation of MRI Images Using Processed Image Driven U-Net Architecture

  • Arora, Anuja
  • Jayal, Ambikesh
  • Gupta, Mayank
  • Mittal, Prakhar
  • Satapathy, Suresh Chandra
2021 Journal Article, cited 6 times
Website
Brain tumor segmentation seeks to separate healthy tissue from tumorous regions. This is an essential step in diagnosis and treatment planning to maximize the likelihood of successful treatment. Magnetic resonance imaging (MRI) provides detailed information about brain tumor anatomy, making it an important tool for effective diagnosis and a prerequisite for replacing the existing manual detection process, in which patients rely on the skills and expertise of a human reader. To solve this problem, a brain tumor segmentation and detection system is proposed and evaluated on the collected BraTS 2018 dataset. This dataset contains four different MRI modalities for each patient (T1, T2, T1Gd, and FLAIR) and, as an outcome, provides a segmented image and the ground truth of the tumor segmentation, i.e., the class label. A fully automatic methodology to handle the task of segmenting gliomas in pre-operative MRI scans is developed using a U-Net-based deep learning model. The input image data are first transformed and processed through several techniques: subset division, narrow object region extraction, category brain slicing, the watershed algorithm, and feature scaling. All these steps are applied before entering the data into the U-Net deep learning model, which then performs pixel-label segmentation of the tumor region. The algorithm reached high accuracy on the BraTS 2018 training, validation, and testing datasets. The proposed model achieved a Dice coefficient of 0.9815, 0.9844, 0.9804, and 0.9954 on the testing dataset for sets HGG-1, HGG-2, HGG-3, and LGG-1, respectively.

Special Section Guest Editorial: LUNGx Challenge for computerized lung nodule classification: reflections and lessons learned

  • Armato, Samuel G
  • Hadjiiski, Lubomir
  • Tourassi, Georgia D
  • Drukker, Karen
  • Giger, Maryellen L
  • Li, Feng
  • Redmond, George
  • Farahani, Keyvan
  • Kirby, Justin S
  • Clarke, Laurence P
Journal of Medical Imaging 2015 Journal Article, cited 20 times
Website
The purpose of this work is to describe the LUNGx Challenge for the computerized classification of lung nodules on diagnostic computed tomography (CT) scans as benign or malignant and report the performance of participants' computerized methods along with that of six radiologists who participated in an observer study performing the same Challenge task on the same dataset. The Challenge provided sets of calibration and testing scans, established a performance assessment process, and created an infrastructure for case dissemination and result submission. Ten groups applied their own methods to 73 lung nodules (37 benign and 36 malignant) that were selected to achieve approximate size matching between the two cohorts. Area under the receiver operating characteristic curve (AUC) values for these methods ranged from 0.50 to 0.68; only three methods performed statistically better than random guessing. The radiologists' AUC values ranged from 0.70 to 0.85; three radiologists performed statistically better than the best-performing computer method. The LUNGx Challenge compared the performance of computerized methods in the task of differentiating benign from malignant lung nodules on CT scans, placed in the context of the performance of radiologists on the same task. The continued public availability of the Challenge cases will provide a valuable resource for the medical imaging research community.

Collaborative projects

  • Armato, S
  • McNitt-Gray, M
  • Meyer, C
  • Reeves, A
  • Clarke, L
Int J CARS 2012 Journal Article, cited 307 times
Website

The lung image database consortium (LIDC) and image database resource initiative (IDRI): a completed reference database of lung nodules on CT scans

  • Armato III, Samuel G
  • McLennan, Geoffrey
  • Bidaut, Luc
  • McNitt-Gray, Michael F
  • Meyer, Charles R
  • Reeves, Anthony P
  • Zhao, Binsheng
  • Aberle, Denise R
  • Henschke, Claudia I
  • Hoffman, Eric A
  • Kazerooni, E. A.
  • MacMahon, H.
  • Van Beeke, E. J.
  • Yankelevitz, D.
  • Biancardi, A. M.
  • Bland, P. H.
  • Brown, M. S.
  • Engelmann, R. M.
  • Laderach, G. E.
  • Max, D.
  • Pais, R. C.
  • Qing, D. P.
  • Roberts, R. Y.
  • Smith, A. R.
  • Starkey, A.
  • Batrah, P.
  • Caligiuri, P.
  • Farooqi, A.
  • Gladish, G. W.
  • Jude, C. M.
  • Munden, R. F.
  • Petkovska, I.
  • Quint, L. E.
  • Schwartz, L. H.
  • Sundaram, B.
  • Dodd, L. E.
  • Fenimore, C.
  • Gur, D.
  • Petrick, N.
  • Freymann, J.
  • Kirby, J.
  • Hughes, B.
  • Casteele, A. V.
  • Gupte, S.
  • Sallamm, M.
  • Heath, M. D.
  • Kuhn, M. H.
  • Dharaiya, E.
  • Burns, R.
  • Fryd, D. S.
  • Salganicoff, M.
  • Anand, V.
  • Shreter, U.
  • Vastagh, S.
  • Croft, B. Y.
Medical Physics 2011 Journal Article, cited 546 times
Website
PURPOSE: The development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process. METHODS: Seven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC/IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories ("nodule > or =3 mm," "nodule <3 mm," and "non-nodule > or =3 mm"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus. RESULTS: The Database contains 7371 lesions marked "nodule" by at least one radiologist. 2669 of these lesions were marked "nodule > or =3 mm" by at least one radiologist, of which 928 (34.7%) received such marks from all four radiologists. These 2669 lesions include nodule outlines and subjective nodule characteristic ratings. CONCLUSIONS: The LIDC/IDRI Database is expected to provide an essential medical imaging research resource to spur CAD development, validation, and dissemination in clinical practice.

Potentials of radiomics for cancer diagnosis and treatment in comparison with computer-aided diagnosis

  • Arimura, Hidetaka
  • Soufi, Mazen
  • Ninomiya, Kenta
  • Kamezawa, Hidemi
  • Yamada, Masahiro
Radiological Physics and Technology 2018 Journal Article, cited 0 times
Website
Computer-aided diagnosis (CAD) is a field that is essentially based on pattern recognition that improves the accuracy of a diagnosis made by a physician who takes into account the computer’s “opinion” derived from the quantitative analysis of radiological images. Radiomics is a field based on data science that massively and comprehensively analyzes a large number of medical images to extract a large number of phenotypic features reflecting disease traits, and explores the associations between the features and patients’ prognoses for precision medicine. According to the definitions for both, you may think that radiomics is not a paraphrase of CAD, but you may also think that these definitions are “image manipulation”. However, there are common and different features between the two fields. This review paper elaborates on these common and different features and introduces the potential of radiomics for cancer diagnosis and treatment by comparing it with CAD.

End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography

  • Ardila, D.
  • Kiraly, A. P.
  • Bharadwaj, S.
  • Choi, B.
  • Reicher, J. J.
  • Peng, L.
  • Tse, D.
  • Etemadi, M.
  • Ye, W.
  • Corrado, G.
  • Naidich, D. P.
  • Shetty, S.
Nat Med 2019 Journal Article, cited 1 times
Website
With an estimated 160,000 deaths in 2018, lung cancer is the most common cause of cancer death in the United States(1). Lung cancer screening using low-dose computed tomography has been shown to reduce mortality by 20-43% and is now included in US screening guidelines(1-6). Existing challenges include inter-grader variability and high false-positive and false-negative rates(7-10). We propose a deep learning algorithm that uses a patient's current and prior computed tomography volumes to predict the risk of lung cancer. Our model achieves a state-of-the-art performance (94.4% area under the curve) on 6,716 National Lung Cancer Screening Trial cases, and performs similarly on an independent clinical validation set of 1,139 cases. We conducted two reader studies. When prior computed tomography imaging was not available, our model outperformed all six radiologists with absolute reductions of 11% in false positives and 5% in false negatives. Where prior computed tomography imaging was available, the model performance was on-par with the same radiologists. This creates an opportunity to optimize the screening process via computer assistance and automation. While the vast majority of patients remain unscreened, we show the potential for deep learning models to increase the accuracy, consistency and adoption of lung cancer screening worldwide.

Automatic classification of solitary pulmonary nodules in PET/CT imaging employing transfer learning techniques

  • Apostolopoulos, Ioannis D
  • Pintelas, Emmanuel G
  • Livieris, Ioannis E
  • Apostolopoulos, Dimitris J
  • Papathanasiou, Nikolaos D
  • Pintelas, Panagiotis E
  • Panayiotakis, George S
2021 Journal Article, cited 0 times
Website

Classification of lung nodule malignancy in computed tomography imaging utilising generative adversarial networks and semi-supervised transfer learning

  • Apostolopoulos, Ioannis D.
  • Papathanasiou, Nikolaos D.
  • Panayiotakis, George S.
Biocybernetics and Biomedical Engineering 2021 Journal Article, cited 2 times
Website
The malignancy rating of pulmonary nodules is commonly confined to patient follow-up; the nodule's activity is estimated with a Positron Emission Tomography (PET) system or biopsy. However, these strategies usually follow the initial detection of malignant nodules in a Computed Tomography (CT) scan. In this study, a Deep Learning methodology to address the challenge of automatically characterising Solitary Pulmonary Nodules (SPN) detected in CT scans is proposed. The methodology is based on Convolutional Neural Networks (CNNs), which have proven to be excellent automatic feature extractors for medical images. The publicly available CT dataset called Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) and a small CT scan dataset derived from a PET/CT system are considered as the classification targets. New, realistic nodule representations are generated employing Deep Convolutional Generative Adversarial Networks (DC-GANs) to circumvent the shortage of large-scale data for training robust CNNs. In addition, a hierarchical CNN called Feature Fusion VGG19 (FF-VGG19) was developed to enhance feature extraction over the CNN proposed by the Visual Geometry Group (VGG). Moreover, the generated nodule images are separated into two classes by utilising a semi-supervised approach, called self-training, to tackle weak labelling due to DC-GAN inefficiencies. The DC-GAN can generate realistic SPNs: the experts could distinguish only 23% of the synthetic nodule images. As a result, the classification accuracy of FF-VGG19 on the LIDC-IDRI dataset increases by 7%, reaching 92.07%, while the classification accuracy on the CT dataset increases by 5%, reaching 84.3%.
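
For orientation, here is a minimal sketch of a DC-GAN generator of the kind the abstract uses to synthesise nodules; the Keras implementation, 64x64 patch size, and layer widths are illustrative assumptions, not the authors' architecture:

    import tensorflow as tf
    from tensorflow.keras import layers

    def build_generator(latent_dim=100):
        """Minimal DC-GAN generator producing 64x64 single-channel nodule patches."""
        return tf.keras.Sequential([
            tf.keras.Input(shape=(latent_dim,)),
            layers.Dense(8 * 8 * 256, use_bias=False),
            layers.BatchNormalization(),
            layers.LeakyReLU(0.2),
            layers.Reshape((8, 8, 256)),
            layers.Conv2DTranspose(128, 4, strides=2, padding="same", use_bias=False),
            layers.BatchNormalization(),
            layers.LeakyReLU(0.2),
            layers.Conv2DTranspose(64, 4, strides=2, padding="same", use_bias=False),
            layers.BatchNormalization(),
            layers.LeakyReLU(0.2),
            layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="tanh"),
        ])

    fake_nodules = build_generator()(tf.random.normal((16, 100)))  # (16, 64, 64, 1)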

Investigation of radiomics and deep convolutional neural networks approaches for glioma grading

  • Aouadi, S.
  • Torfeh, T.
  • Arunachalam, Y.
  • Paloor, S.
  • Riyas, M.
  • Hammoud, R.
  • Al-Hammadi, N.
Biomed Phys Eng Express 2023 Journal Article, cited 0 times
Website
Purpose: To determine glioma grading by applying radiomic analysis or deep convolutional neural networks (DCNN) and to benchmark both approaches on broader validation sets. Methods: Seven public datasets were considered: (1) low-grade glioma or high-grade glioma (369 patients, BraTS'20); (2) well-differentiated liposarcoma or lipoma (115, LIPO); (3) desmoid-type fibromatosis or extremity soft-tissue sarcomas (203, Desmoid); (4) primary solid liver tumors, either malignant or benign (186, LIVER); (5) gastrointestinal stromal tumors (GISTs) or intra-abdominal gastrointestinal tumors radiologically resembling GISTs (246, GIST); (6) colorectal liver metastases (77, CRLM); and (7) lung metastases of metastatic melanoma (103, Melanoma). Radiomic analysis was performed on 464 radiomic features for the BraTS'20 dataset and 2016 features for the other datasets. Random forests (RF), Extreme Gradient Boosting (XGBOOST), and a voting algorithm comprising both classifiers were tested. The parameters of the classifiers were optimized using a repeated nested stratified cross-validation process. The feature importance of each classifier was computed using the Gini index or permutation feature importance. DCNN was performed on 2D axial and sagittal slices encompassing the tumor. A balanced database was created, when necessary, using smart slice selection. ResNet50, Xception, EfficientNetB0, and EfficientNetB3 were transferred from the ImageNet application to the tumor classification task and fine-tuned. Five-fold stratified cross-validation was performed to evaluate the models. The classification performance of the models was measured using multiple indices, including the area under the receiver operating characteristic curve (AUC). Results: The best radiomic approach was based on XGBOOST for all datasets; AUC was 0.934 (BraTS'20), 0.86 (LIPO), 0.73 (LIVER), 0.844 (Desmoid), 0.76 (GIST), 0.664 (CRLM), and 0.577 (Melanoma). The best DCNN was based on EfficientNetB0; AUC was 0.99 (BraTS'20), 0.982 (LIPO), 0.977 (LIVER), 0.961 (Desmoid), 0.926 (GIST), 0.901 (CRLM), and 0.89 (Melanoma). Conclusion: Tumor classification can be accurately determined by adapting state-of-the-art machine learning algorithms to the medical context.
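
The repeated nested stratified cross-validation described above separates hyper-parameter tuning (inner loop) from performance estimation (outer loop). A minimal sketch with scikit-learn and XGBoost; the synthetic feature matrix, parameter grid, and fold counts are assumptions for illustration:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
    from xgboost import XGBClassifier

    # Synthetic stand-in for a radiomic feature matrix (samples x features).
    X, y = make_classification(n_samples=200, n_features=100, random_state=0)

    inner = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
    outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

    # The inner loop tunes hyper-parameters; the outer loop estimates AUC.
    search = GridSearchCV(
        XGBClassifier(),
        param_grid={"max_depth": [2, 4], "n_estimators": [100, 300]},
        scoring="roc_auc", cv=inner,
    )
    auc = cross_val_score(search, X, y, scoring="roc_auc", cv=outer)
    print(f"nested-CV AUC: {auc.mean():.3f} +/- {auc.std():.3f}")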

Genomics-Based Models for Recurrence Prediction of Non-small Cells Lung Cancers

  • Aonpong, Panyanat
  • Iwamoto, Yutaro
  • Wang, Weibin
  • Lin, Lanfen
  • Chen, Yen-Wei
2021 Conference Paper, cited 0 times
Website
This research examines the prediction of non-small cell lung cancer (NSCLC) recurrence using genomic information, aiming for maximum accuracy. Raw gene data show very good performance but require more careful handling; this work studies how to reduce the complexity of the gene data with minimal information loss. The processed gene data tend to achieve reasonable prediction results with a faster process. This work presents a comparison of two processing steps, gene selection and gene quantization (linear quantization and K-means quantization), using associated genes selected from 88 patient samples from the open-access non-small cell lung cancer dataset in The Cancer Imaging Archive Public Access. We vary the number of quantization groups and compare the recurrence prediction performance of both operations. The results show that the F-test method provides the gene set most related to NSCLC recurrence. With the F-test and no quantization, prediction accuracy improved from 81.41% (using 5587 genes) to 91.83% (using 294 selected genes). With quantization, a suitable separation into gene groups raised the accuracy to 93.42% using K-means quantization.
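
The two steps the abstract compares, F-test gene selection followed by K-means quantization, map directly onto standard scikit-learn primitives. A minimal sketch using random placeholder data (the counts 88, 5587, and 294 come from the abstract; k = 4 groups is an assumption):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.feature_selection import SelectKBest, f_classif

    rng = np.random.default_rng(0)
    X = rng.normal(size=(88, 5587))     # gene expression, 88 patient samples
    y = rng.integers(0, 2, size=88)     # recurrence labels (placeholder)

    # Step 1: the F-test keeps the genes most associated with recurrence.
    X_sel = SelectKBest(f_classif, k=294).fit_transform(X, y)

    # Step 2: quantize each selected gene into k groups via K-means on its values.
    def kmeans_quantize(col, k=4):
        return KMeans(n_clusters=k, n_init=10, random_state=0).fit(
            col.reshape(-1, 1)).labels_

    X_quant = np.column_stack(
        [kmeans_quantize(X_sel[:, j]) for j in range(X_sel.shape[1])])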

Hand-Crafted and Deep Learning-Based Radiomics Models for Recurrence Prediction of Non-Small Cells Lung Cancers

  • Aonpong, Panyanat
  • Iwamoto, Yutaro
  • Wang, Weibin
  • Lin, Lanfen
  • Chen, Yen-Wei
Innovation in Medicine and Healthcare 2020 Journal Article, cited 0 times
Website
This research examines the recurrence of non-small cell lung cancer (NSCLC) using computed tomography images (CT images) to avoid patient biopsy, since cancer cells may be unevenly distributed, which can lead to sampling errors. This work presents a comparison of two different methods, a hand-crafted radiomics model and a deep learning-based radiomics model, using 88 patient samples from the open-access non-small cell lung cancer dataset in The Cancer Imaging Archive (TCIA) Public Access. In the hand-crafted radiomics models, the patterns of NSCLC CT images were analyzed through various statistics serving as radiomic features. The radiomic features associated with recurrence were selected through three statistical methods: LASSO, Chi-2, and ANOVA. The selected radiomic features were then processed using different models. In the deep learning-based radiomics model, the proposed artificial neural network was used to enhance recurrence prediction. The hand-crafted radiomics models with non-selected, LASSO, Chi-2, and ANOVA features give the following results: 76.56% (AUC 0.6361), 76.83% (AUC 0.6375), 78.64% (AUC 0.6778), and 78.17% (AUC 0.6556), respectively, and the deep learning-based radiomics models, ResNet50 and DenseNet121, give 79.00% (AUC 0.6714) and 79.31% (AUC 0.6712), respectively.

Improved Genotype-Guided Deep Radiomics Signatures for Recurrence Prediction of Non-Small Cell Lung Cancer

  • Aonpong, P.
  • Iwamoto, Y.
  • Han, X. H.
  • Lin, L.
  • Chen, Y. W.
Annu Int Conf IEEE Eng Med Biol Soc 2021 Journal Article, cited 0 times
Website
Non-small cell lung cancer (NSCLC) is a type of lung cancer that has a high recurrence rate after surgery. Precise preoperative prediction of NSCLC recurrence contributes to suitable treatment preparation. Many studies have been conducted to predict the recurrence of NSCLC based on computed tomography images (CT images) or genetic data. CT images are inexpensive but less accurate; gene data are more expensive but highly accurate. In this study, we proposed a genotype-guided radiomics method, with variants called GGR and GGR_Fusion, to build a more accurate prediction model that requires only CT images. GGR is a two-step method consisting of two models: a gene estimation model using deep learning and a recurrence prediction model using the estimated genes. We further propose an improved model based on GGR, called GGR_Fusion, which uses the features extracted by the gene estimation model to enhance the recurrence prediction model. The experiments showed that the prediction performance can be improved significantly from 78.61% accuracy, AUC=0.66 (existing radiomics method) and 79.09% accuracy, AUC=0.68 (deep learning method) to 83.28% accuracy, AUC=0.77 with the proposed GGR and 84.39% accuracy, AUC=0.79 with the proposed GGR_Fusion. Clinical relevance: This study improved the accuracy of preoperative NSCLC recurrence prediction from 78.61% with the conventional method to 84.39% with our proposed method using only CT images.

Breast Density Transformations Using CycleGANs for Revealing Undetected Findings in Mammograms

  • Anyfantis, Dionysios
  • Koutras, Athanasios
  • Apostolopoulos, George
  • Christoyianni, Ioanna
2023 Journal Article, cited 1 times
Website
Breast cancer is the most common cancer in women, a leading cause of morbidity and mortality, and a significant health issue worldwide. According to the World Health Organization’s cancer awareness recommendations, mammographic screening should be regularly performed on middle-aged or older women to increase the chances of early cancer detection. Breast density is widely known to be related to the risk of cancer development. The American College of Radiology Breast Imaging Reporting and Data System categorizes mammography into four levels based on breast density, ranging from ACR-A (least dense) to ACR-D (most dense). Computer-aided diagnostic (CAD) systems can now detect suspicious regions in mammograms and identify abnormalities more quickly and accurately than human readers. However, their performance is still influenced by the tissue density level, which must be considered when designing such systems. In this paper, we propose a novel method that uses CycleGANs to transform suspicious regions of mammograms from ACR-B, -C, and -D levels to ACR-A level. This transformation aims to reduce the masking effect caused by thick tissue and separate cancerous regions from surrounding tissue. Our proposed system enhances the performance of conventional CNN-based classifiers significantly by focusing on regions of interest that would otherwise be misidentified due to fatty masking. Extensive testing on different types of mammograms (digital and scanned X-ray film) demonstrates the effectiveness of our system in identifying normal, benign, and malignant regions of interest.
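
The density-domain translation described above (e.g. ACR-D to ACR-A) hinges on CycleGAN's cycle-consistency term, which penalises a round trip through both generators for failing to return the original mammogram. A minimal sketch of that loss in TensorFlow, with identity placeholders standing in for the real generator networks:

    import tensorflow as tf

    def cycle_consistency_loss(real_a, real_b, g_ab, g_ba, lam=10.0):
        """L1 cycle loss: G_BA(G_AB(a)) should match a, and vice versa."""
        cycled_a = g_ba(g_ab(real_a))
        cycled_b = g_ab(g_ba(real_b))
        return lam * (tf.reduce_mean(tf.abs(real_a - cycled_a))
                      + tf.reduce_mean(tf.abs(real_b - cycled_b)))

    # Identity placeholder "generators" just to exercise the function:
    ident = tf.keras.layers.Lambda(lambda x: x)
    imgs = tf.random.uniform((2, 128, 128, 1))
    print(cycle_consistency_loss(imgs, imgs, ident, ident))  # -> 0.0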

Fast wavelet based image characterization for content based medical image retrieval

  • Anwar, Syed Muhammad
  • Arshad, Fozia
  • Majid, Muhammad
2017 Conference Proceedings, cited 4 times
Website
A large collection of medical images surrounds health care centers and hospitals. Medical images produced by different modalities, like magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and X-rays, have increased incredibly with the advent of the latest image acquisition technologies. Retrieving clinical images of interest from these large data sets is a challenging and demanding task. In this paper, a fast wavelet-based medical image retrieval system is proposed that can aid physicians in the identification and analysis of medical images. The image signature is calculated using kurtosis and standard deviation as features. A possible use case is when a radiologist has some suspicion about a diagnosis and wants further case histories: the acquired clinical images (e.g., MRI images of the brain) are sent as a query to the content-based medical image retrieval system. The system is tuned to retrieve the images most relevant to the query. The proposed system is computationally efficient and more accurate in terms of the quality of retrieved images.
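
The signature the abstract describes, kurtosis and standard deviation per wavelet sub-band, is straightforward to compute with PyWavelets; the wavelet family, decomposition level, and random stand-in images below are assumptions:

    import numpy as np
    import pywt
    from scipy.stats import kurtosis

    def wavelet_signature(image, wavelet="db1", level=2):
        """Kurtosis and standard deviation of every wavelet sub-band."""
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        bands = [coeffs[0]] + [b for triple in coeffs[1:] for b in triple]
        return np.array([(kurtosis(b, axis=None), b.std()) for b in bands]).ravel()

    query = np.random.rand(256, 256)                     # stand-in MRI slice
    database = [np.random.rand(256, 256) for _ in range(50)]
    sig = wavelet_signature(query)
    # Rank database images by Euclidean distance between signatures.
    ranked = sorted(database,
                    key=lambda im: np.linalg.norm(wavelet_signature(im) - sig))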

Classification of lung adenocarcinoma transcriptome subtypes from pathological images using deep convolutional networks

  • Antonio, Victor Andrew A
  • Ono, Naoaki
  • Saito, Akira
  • Sato, Tetsuo
  • Altaf-Ul-Amin, Md
  • Kanaya, Shigehiko
International Journal of Computer Assisted Radiology and Surgery 2018 Journal Article, cited 0 times
Website
PURPOSE: Convolutional neural networks have rapidly become popular for image recognition and image analysis because of their powerful potential. In this paper, we developed a method for classifying subtypes of lung adenocarcinoma from pathological images using a neural network that can evaluate phenotypic features over a wider area to account for cellular distributions. METHODS: In order to recognize the types of tumors, we need not only detailed features of cells, but also the statistical distribution of the different types of cells. Variants of autoencoders are implemented as building blocks of the pre-trained convolutional layers of neural networks. A sparse deep autoencoder which minimizes local information entropy on the encoding layer is then proposed and applied to images of size [Formula: see text]. We applied this model for feature extraction from pathological images of lung adenocarcinoma, which comprises three transcriptome subtypes previously defined by the Cancer Genome Atlas network. Since the tumor tissue is composed of heterogeneous cell populations, recognition of tumor transcriptome subtypes requires more information than the local pattern of cells. The parameters extracted using this approach are then used in multiple reduction stages to perform classification on larger images. RESULTS: We were able to demonstrate that these networks successfully recognize morphological features of lung adenocarcinoma. We also performed classification and reconstruction experiments to compare the outputs of the variants. The results showed that a larger input image covering a certain area of the tissue is required to recognize transcriptome subtypes. The sparse autoencoder network with [Formula: see text] input provides 98.9% classification accuracy. CONCLUSION: This study shows the potential of autoencoders as a feature extraction paradigm and paves the way for a whole-slide image analysis tool to predict molecular subtypes of tumors from pathological features.
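
As a rough illustration of the building block involved, here is a minimal sparse autoencoder in Keras; an L1 activity penalty on the code layer stands in for the paper's entropy-minimizing sparsity criterion, and the 64x64 patch size is an assumption:

    import tensorflow as tf
    from tensorflow.keras import layers, regularizers

    inp = layers.Input(shape=(64 * 64,))
    code = layers.Dense(128, activation="relu",
                        activity_regularizer=regularizers.l1(1e-5))(inp)
    out = layers.Dense(64 * 64, activation="sigmoid")(code)

    autoencoder = tf.keras.Model(inp, out)
    autoencoder.compile(optimizer="adam", loss="mse")
    # autoencoder.fit(patches, patches, epochs=10)  # patches: flattened tiles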

The Medical Segmentation Decathlon

  • Antonelli, M.
  • Reinke, A.
  • Bakas, S.
  • Farahani, K.
  • Kopp-Schneider, A.
  • Landman, B. A.
  • Litjens, G.
  • Menze, B.
  • Ronneberger, O.
  • Summers, R. M.
  • van Ginneken, B.
  • Bilello, M.
  • Bilic, P.
  • Christ, P. F.
  • Do, R. K. G.
  • Gollub, M. J.
  • Heckers, S. H.
  • Huisman, H.
  • Jarnagin, W. R.
  • McHugo, M. K.
  • Napel, S.
  • Pernicka, J. S. G.
  • Rhode, K.
  • Tobon-Gomez, C.
  • Vorontsov, E.
  • Meakin, J. A.
  • Ourselin, S.
  • Wiesenfarth, M.
  • Arbelaez, P.
  • Bae, B.
  • Chen, S.
  • Daza, L.
  • Feng, J.
  • He, B.
  • Isensee, F.
  • Ji, Y.
  • Jia, F.
  • Kim, I.
  • Maier-Hein, K.
  • Merhof, D.
  • Pai, A.
  • Park, B.
  • Perslev, M.
  • Rezaiifar, R.
  • Rippel, O.
  • Sarasua, I.
  • Shen, W.
  • Son, J.
  • Wachinger, C.
  • Wang, L.
  • Wang, Y.
  • Xia, Y.
  • Xu, D.
  • Xu, Z.
  • Zheng, Y.
  • Simpson, A. L.
  • Maier-Hein, L.
  • Cardoso, M. J.
2022 Journal Article, cited 79 times
Website
International challenges have become the de facto standard for comparative assessment of image analysis algorithms. Although segmentation is the most widely investigated medical image processing task, the various challenges have been organized to focus only on specific clinical tasks. We organized the Medical Segmentation Decathlon (MSD), a biomedical image analysis challenge in which algorithms compete in a multitude of both tasks and modalities to investigate the hypothesis that a method capable of performing well on multiple tasks will generalize well to a previously unseen task and potentially outperform a custom-designed solution. MSD results confirmed this hypothesis; moreover, the MSD winner continued to generalize well to a wide range of other clinical problems for the following two years. Three main conclusions can be drawn from this study: (1) state-of-the-art image segmentation algorithms generalize well when retrained on unseen tasks; (2) consistent algorithmic performance across multiple tasks is a strong surrogate of algorithmic generalizability; (3) the training of accurate AI segmentation models is now commoditized to scientists who are not versed in AI model training.

A Bi-FPN-Based Encoder–Decoder Model for Lung Nodule Image Segmentation

  • Annavarapu, Chandra Sekhara Rao
  • Parisapogu, Samson Anosh Babu
  • Keetha, Nikhil Varma
  • Donta, Praveen Kumar
  • Rajita, Gurindapalli
Diagnostics 2023 Journal Article, cited 0 times
Website
Early detection and analysis of lung cancer involve precise and efficient lung nodule segmentation in computed tomography (CT) images. However, the variable shapes, visual features, and surroundings of the nodules observed in CT images pose a challenging and critical problem for robust segmentation of lung nodules. This article proposes a resource-efficient model architecture: an end-to-end deep learning approach for lung nodule segmentation. It incorporates a Bi-FPN (bidirectional feature pyramid network) between an encoder and a decoder architecture. Furthermore, it uses the Mish activation function and class weights of masks with the aim of enhancing the efficiency of the segmentation. The proposed model was extensively trained and evaluated on the publicly available LUNA-16 dataset consisting of 1186 lung nodules. To increase the probability of assigning the suitable class to each voxel in the mask, a weighted binary cross-entropy loss was used for each training sample. Moreover, to further evaluate robustness, the proposed model was evaluated on the QIN Lung CT dataset. The results of the evaluation show that the proposed architecture outperforms existing deep learning models such as U-Net, with a Dice similarity coefficient of 82.82% and 81.66% on the two datasets.
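
The weighted binary cross-entropy mentioned above counteracts the extreme class imbalance between nodule and background voxels. A minimal sketch of such a loss in TensorFlow; the weight values are placeholders, not the paper's:

    import tensorflow as tf

    def weighted_bce(class_weight):
        """Binary cross-entropy with a per-class weight for rare nodule voxels."""
        def loss(y_true, y_pred):
            y_pred = tf.clip_by_value(y_pred, 1e-7, 1 - 1e-7)
            w = y_true * class_weight[1] + (1.0 - y_true) * class_weight[0]
            bce = -(y_true * tf.math.log(y_pred)
                    + (1 - y_true) * tf.math.log(1 - y_pred))
            return tf.reduce_mean(w * bce)
        return loss

    # e.g. model.compile(optimizer="adam", loss=weighted_bce({0: 1.0, 1: 20.0}))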

Brain tumour classification using two-tier classifier with adaptive segmentation technique

  • Anitha, V
  • Murugavalli, S
IET Computer Vision 2016 Journal Article, cited 46 times
Website
A brain tumour is a mass of tissue formed by a gradual accumulation of anomalous cells, and it is important to classify brain tumours from magnetic resonance imaging (MRI) for treatment. Human investigation is the routine technique for brain MRI tumour detection and classification. Interpretation of images is based on an organised and explicit classification of brain MRI, and various techniques have been proposed. Brain tumour segmentation on MRI provides information on anatomical structures and potentially abnormal tissues that is noteworthy for treatment. The proposed system uses the adaptive pillar K-means algorithm for segmentation, and the classification methodology follows a two-tier approach. First, a self-organising map neural network is trained on features extracted from discrete wavelet transform blend wavelets; the resulting filter factors are subsequently trained by a K-nearest neighbour classifier, and the testing process is likewise accomplished in two stages. The proposed two-tier classification system classifies brain tumours in a double training process, which gives preferable performance over the traditional classification method. The proposed system has been validated with real data sets, and the experimental results showed enhanced performance.

Lung nodule detection using 3D convolutional neural networks trained on weakly labeled data

  • Anirudh, Rushil
  • Thiagarajan, Jayaraman J
  • Bremer, Timo
  • Kim, Hyojin
2016 Conference Proceedings, cited 33 times
Website

Imaging Genomics in Glioblastoma Multiforme: A Predictive Tool for Patients Prognosis, Survival, and Outcome

  • Anil, Rahul
  • Colen, Rivka R
Magnetic Resonance Imaging Clinics of North America 2016 Journal Article, cited 3 times
Website
The integration of imaging characteristics and genomic data has started a new trend in the approach to management of glioblastoma (GBM). Many ongoing studies are investigating imaging phenotypical signatures that could explain more about the behavior of GBM and its outcome. The discovery of biomarkers has played an adjuvant role in treating and predicting the outcome of patients with GBM. Discovering these imaging phenotypical signatures and dysregulated pathways/genes is needed to engineer treatment based on specific GBM manifestations. Characterizing these parameters will establish well-defined criteria so researchers can build on the treatment of GBM through personalized medicine.

A Multi Brain Tumor Classification Using a Deep Reinforcement Learning Model

  • Anil Kumar, B.
  • Lakshmidevi, N.
2022 Conference Paper, cited 0 times
Website
A brain tumor is a disease in which abnormal cells grow in the human brain. There are different types of tumors in the brain, and such tumors can also occur in the spinal cord. Doctors use several techniques to treat these tumors, so the first task is to classify the different tumor types and give the respective treatment. In general, Magnetic Resonance Imaging (MRI) is used to determine whether a tumor is present in an image and to identify its position. Images may contain benign or malignant tumors. Benign tumors are non-cancerous and can be treated with medicines; malignant tumors are dangerous, cannot be cured with medicines, and can lead to death. MRI reading takes considerable time, and the evaluation of a tumor differs between doctors, so deep learning offers another way to classify brain tumor images. Deep learning comprises supervised, unsupervised, and reinforcement learning mechanisms. The model uses a convolutional neural network for feature extraction and classifies brain tumor images into glioma, meningioma, and pituitary classes from a dataset of 3064 images containing these three tumor types. Here, a reinforcement learning mechanism classifies the images based on agent, reward, policy, and state, and a Deep Q-Network, part of reinforcement learning, is used for better accuracy. Reinforcement learning achieved higher classification accuracy than supervised and unsupervised mechanisms: the accuracy of brain tumor classification increased to 95.4% using reinforcement learning compared with supervised learning. The results indicate that reinforcement learning improves the classification of brain tumors.

Data Augmentation and Transfer Learning for Brain Tumor Detection in Magnetic Resonance Imaging

  • Anaya-Isaza, Andres
  • Mera-Jimenez, Leonel
IEEE Access 2022 Journal Article, cited 1 times
Website
The exponential growth of deep learning networks has allowed us to tackle complex tasks, even in fields as complicated as medicine. However, using these models requires a large corpus of data for the networks to be highly generalizable and high-performing. In this sense, data augmentation methods are widely used strategies for training networks on small data sets, and they are vital in medicine due to the limited access to data. A clear example of this is magnetic resonance imaging in pathology scans associated with cancer. In this vein, we compare the effect of several conventional data augmentation schemes on the ResNet50 network for brain tumor detection. In addition, we included our own strategy based on principal component analysis. Training was performed both with the network trained from scratch and with transfer learning from the ImageNet dataset. The investigation allowed us to achieve an F1 detection score of 92.34%. The score was achieved with the ResNet50 network through the proposed method and transfer learning. It was also concluded that the proposed method differs from the other conventional methods at a significance level of 0.05 according to the Kruskal-Wallis test.

Brain Tumor Segmentation and Survival Prediction Using Automatic Hard Mining in 3D CNN Architecture

  • Anand, Vikas Kumar
  • Grampurohit, Sanjeev
  • Aurangabadkar, Pranav
  • Kori, Avinash
  • Khened, Mahendra
  • Bhat, Raghavendra S.
  • Krishnamurthi, Ganapathy
2021 Book Section, cited 16 times
Website
We utilize 3-D fully convolutional neural networks (CNN) to segment gliomas and their constituents from multimodal Magnetic Resonance Images (MRI). The architecture uses dense connectivity patterns to reduce the number of weights, as well as residual connections, and is initialized with weights obtained from training this model on the BraTS 2018 dataset. Hard mining is done during training to focus on the difficult cases of the segmentation task, by raising the Dice similarity coefficient (DSC) threshold used to choose the hard cases as the epochs increase. On the BraTS 2020 validation data (n = 125), this architecture achieved tumor core, whole tumor, and active tumor Dice scores of 0.744, 0.876, and 0.714, respectively. On the test dataset, we obtain an increase in the DSC of the tumor core and active tumor of approximately 7%. In terms of DSC, our network performances on the BraTS 2020 test data are 0.775, 0.815, and 0.85 for enhancing tumor, tumor core, and whole tumor, respectively. Overall survival of a subject is determined using conventional machine learning on radiomics features obtained from the generated segmentation mask. Our approach achieved 0.448 and 0.452 accuracy on the validation and test datasets, respectively.
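
The hard-mining rule described above only needs a per-case Dice similarity coefficient and a threshold that rises with the epoch. A minimal NumPy sketch (the threshold schedule itself is the authors' design and is not reproduced here):

    import numpy as np

    def dice(pred, truth, eps=1e-7):
        """Dice similarity coefficient between two binary masks."""
        inter = np.logical_and(pred, truth).sum()
        return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

    def hard_cases(preds, truths, threshold):
        """Indices of cases whose DSC falls below the epoch-dependent threshold."""
        return [i for i, (p, t) in enumerate(zip(preds, truths))
                if dice(p, t) < threshold]

    # Raising the threshold over epochs drops ever-easier cases, so the
    # network keeps training on the remaining hard ones.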

Application of Fuzzy c-means and Neural networks to categorize tumor affected breast MR Images

  • Anand, Shruthi
  • Vinod, Viji
  • Rampure, Anand
International Journal of Applied Engineering Research 2015 Journal Article, cited 4 times
Website

Detection of Leukemia Using Convolutional Neural Network

  • Anagha, V.
  • Disha, A.
  • Aishwarya, B. Y.
  • Nikkita, R.
  • Biradar, Vidyadevi G.
2022 Book Section, cited 0 times
Leukemia, commonly known as blood cancer, is a fatal type of cancer that affects white blood cells. It usually originates in the bone marrow and causes the development of abnormal blood cells called blasts. The diagnosis is made by blood tests and bone marrow biopsy, which involve manual work and are time-consuming. There is a need for an automatic tool for the detection of white blood cell cancer. Therefore, in this work, a classification model based on a Convolutional Neural Network with deep learning techniques is proposed. This work was implemented using the Keras library with TensorFlow as the backend. The model was trained and evaluated on the cancer cell dataset C_NMC_2019, which includes white blood cell regions segmented from microscopic blood smear images. The model offers an accuracy of 91% for training and 87% for testing, which is satisfactory.

Medical Image Classification Algorithm Based on Weight Initialization-Sliding Window Fusion Convolutional Neural Network

  • An, Feng-Ping
Complexity 2019 Journal Article, cited 0 times
Website
Due to the complexity of medical images, traditional medical image classification methods have been unable to meet actual application needs. In recent years, the rapid development of deep learning theory has provided a technical approach for solving medical image classification tasks. However, deep learning has the following problems in medical image classification. First, it is impossible to construct a deep learning model hierarchy for medical image properties; second, the network initialization weights of deep learning models are not well optimized. Therefore, this paper starts from the perspective of network optimization and improves the nonlinear modeling ability of the network through optimization methods. A new network weight initialization method is proposed, which alleviates the problem that existing deep learning model initialization is limited by the type of the nonlinear unit adopted and increases the potential of the neural network to handle different visual tasks. Moreover, through an in-depth study of the multicolumn convolutional neural network framework, this paper finds that the number of features and the convolution kernel size at different levels of the convolutional neural network are different. In contrast, the proposed method can construct different convolutional neural network models that adapt better to the characteristics of the medical images of interest and thus can better train the resulting heterogeneous multicolumn convolutional neural networks. Finally, using the adaptive sliding window fusion mechanism proposed in this paper, both methods jointly complete the classification task of medical images. Based on the above ideas, this paper proposes a medical classification algorithm based on a weight initialization/sliding window fusion for multilevel convolutional neural networks. The methods proposed in this study were applied to breast mass, brain tumor tissue, and medical image database classification experiments. The results show that the proposed method not only achieves a higher average accuracy than that of traditional machine learning and other deep learning methods but also is more stable and more robust.

A Predictive Clinical-Radiomics Nomogram for Survival Prediction of Glioblastoma Using MRI

  • Ammari, Samy
  • Sallé de Chou, Raoul
  • Balleyguier, Corinne
  • Chouzenoux, Emilie
  • Touat, Mehdi
  • Quillent, Arnaud
  • Dumont, Sarah
  • Bockel, Sophie
  • Garcia, Gabriel C. T. E.
  • Elhaik, Mickael
  • Francois, Bidault
  • Borget, Valentin
  • Lassau, Nathalie
  • Khettab, Mohamed
  • Assi, Tarek
Diagnostics 2021 Journal Article, cited 8 times
Website
Glioblastoma (GBM) is the most common and aggressive primary brain tumor in adult patients, with a median survival of around one year. Prediction of survival outcomes in GBM patients could represent a huge step in treatment personalization. The objective of this study was to develop machine learning (ML) algorithms for survival prediction of GBM patients. We identified a radiomic signature on a training set composed of data from the 2019 BraTS challenge (210 patients), using MRI retrieved at diagnosis. Then, using this signature along with the age of the patients to train classification models, we obtained test-set AUCs of 0.85, 0.74, and 0.58 (0.92, 0.88, and 0.75 on the training sets) for survival at 9, 12, and 15 months, respectively. This signature was then validated on an independent cohort of 116 GBM patients with confirmed disease relapse for the prediction of patients surviving less or more than the median OS of 22 months. Our model achieved an AUC of 0.71 (0.65 on the training set). The Kaplan-Meier method showed a significant OS difference between groups (log-rank p = 0.05). These results suggest that radiomic signatures may improve survival outcome predictions in GBM, thus creating a solid clinical tool for tailoring therapy in this population.
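
The group comparison reported above (log-rank p = 0.05) can be reproduced with the lifelines package once patients are split by predicted risk group; the survival times and group sizes below are random placeholders:

    import numpy as np
    from lifelines.statistics import logrank_test

    rng = np.random.default_rng(0)
    short_os = rng.exponential(15, 60)      # months, predicted short survivors
    long_os = rng.exponential(30, 56)       # months, predicted long survivors
    events_short = np.ones_like(short_os)   # 1 = death observed, 0 = censored
    events_long = np.ones_like(long_os)

    res = logrank_test(short_os, long_os,
                       event_observed_A=events_short,
                       event_observed_B=events_long)
    print(f"log-rank p = {res.p_value:.3f}")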

Breast Cancer Response Prediction in Neoadjuvant Chemotherapy Treatment Based on Texture Analysis

  • Ammar, Mohammed
  • Mahmoudi, Saïd
  • Stylianos, Drisis
Procedia Computer Science 2016 Journal Article, cited 2 times
Website
MRI is one of the most common modalities used for diagnosis and treatment planning of breast cancer. The aim of this study is to show that texture-based features, such as co-occurrence matrix features extracted from MRI images, can be used to quantify the response to tumor treatment. To this end, we use a dataset composed of two breast MRI examinations for each of 9 patients, three of them responders and six non-responders. The first exam was performed before the initiation of treatment (baseline); the later one was done after the first cycle of chemotherapy (control). A set of texture parameters was selected and calculated for each exam: cluster shade, dissimilarity, entropy, and homogeneity. The p-values estimated for the pathologic complete responder (pCR) and non-pathologic complete responder (pNCR) patients show that homogeneity (p-value = 0.027) and cluster shade (p-value = 0.0013) are the parameters most relevant to pathologic complete response.
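
The four GLCM features named in the abstract can be computed with scikit-image plus two short formulas (graycoprops covers dissimilarity and homogeneity; entropy and cluster shade are derived directly from the normalized matrix). A sketch on a random stand-in patch:

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops  # skimage >= 0.19

    img = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in MRI patch
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]

    dissimilarity = graycoprops(glcm, "dissimilarity")[0, 0]
    homogeneity = graycoprops(glcm, "homogeneity")[0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))

    # Cluster shade: third moment of the GLCM about its marginal means.
    i, j = np.indices(p.shape)
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)
    cluster_shade = np.sum(((i + j - mu_i - mu_j) ** 3) * p)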

Hybrid Mass Detection in Breast MRI Combining Unsupervised Saliency Analysis and Deep Learning

  • Amit, Guy
  • Hadad, Omer
  • Alpert, Sharon
  • Tlusty, Tal
  • Gur, Yaniv
  • Ben-Ari, Rami
  • Hashoul, Sharbell
2017 Conference Paper, cited 15 times
Website
To interpret a breast MRI study, a radiologist has to examine over 1000 images, and integrate spatial and temporal information from multiple sequences. The automated detection and classification of suspicious lesions can help reduce the workload and improve accuracy. We describe a hybrid mass-detection algorithm that combines unsupervised candidate detection with deep learning-based classification. The detection algorithm first identifies image-salient regions, as well as regions that are cross-salient with respect to the contralateral breast image. We then use a convolutional neural network (CNN) to classify the detected candidates into true-positive and false-positive masses. The network uses a novel multi-channel image representation; this representation encompasses information from the anatomical and kinetic image features, as well as saliency maps. We evaluated our algorithm on a dataset of MRI studies from 171 patients, with 1957 annotated slices of malignant (59%) and benign (41%) masses. Unsupervised saliency-based detection provided a sensitivity of 0.96 with 9.7 false-positive detections per slice. Combined with CNN classification, the number of false positive detections dropped to 0.7 per slice, with 0.85 sensitivity. The multi-channel representation achieved higher classification performance compared to single-channel images. The combination of domain-specific unsupervised methods and general-purpose supervised learning offers advantages for medical imaging applications, and may improve the ability of automated algorithms to assist radiologists.

Multi-resolution 3D CNN for MRI Brain Tumor Segmentation and Survival Prediction

  • Amian, Mehdi
  • Soltaninejad, Mohammadreza
2020 Book Section, cited 31 times
Website
In this study, an automated three dimensional (3D) deep segmentation approach for detecting gliomas in 3D pre-operative MRI scans is proposed. Then, a classification algorithm based on random forests, for survival prediction is presented. The objective is to segment the glioma area and produce segmentation labels for its different sub-regions, i.e. necrotic and the non-enhancing tumor core, the peritumoral edema, and enhancing tumor. The proposed deep architecture for the segmentation task encompasses two parallel streamlines with two different resolutions. One deep convolutional neural network is to learn local features of the input data while the other one is set to have a global observation on whole image. Deemed to be complementary, the outputs of each stream are then merged to provide an ensemble complete learning of the input image. The proposed network takes the whole image as input instead of patch-based approaches in order to consider the semantic features throughout the whole volume. The algorithm is trained on BraTS 2019 which included 335 training cases, and validated on 127 unseen cases from the validation dataset using a blind testing approach. The proposed method was also evaluated on the BraTS 2019 challenge test dataset of 166 cases. The results show that the proposed methods provide promising segmentations as well as survival prediction. The mean Dice overlap measures of automatic brain tumor segmentation for validation set were 0.86, 0.77 and 0.71 for the whole tumor, core and enhancing tumor, respectively. The corresponding results for the challenge test dataset were 0.82, 0.72, and 0.70, respectively. The overall accuracy of the proposed model for the survival prediction task is 55% for the validation and 49% for the test dataset.

Imaging Biomarker Ontology (IBO): A Biomedical Ontology to Annotate and Share Imaging Biomarker Data

  • Amdouni, Emna
  • Gibaud, Bernard
Journal on Data Semantics 2018 Journal Article, cited 0 times
Website

Kidney Tumor Detection and Classification Based on Deep Learning Approaches: A New Dataset in CT Scans

  • Alzu’bi, Dalia
  • Abdullah, Malak
  • Hmeidi, Ismail
  • AlAzab, Rami
  • Gharaibeh, Maha
  • El-Heis, Mwaffaq
  • Almotairi, Khaled H.
  • Forestiero, Agostino
  • Hussein, Ahmad MohdAziz
  • Abualigah, Laith
  • Kumar, Senthil
Journal of Healthcare Engineering 2022 Journal Article, cited 0 times
Website
Kidney tumor (KT) is one of the diseases that have affected our society and is the seventh most common tumor in both men and women worldwide. The early detection of KT has significant benefits in reducing death rates, producing preventive measures that reduce effects, and overcoming the tumor. Compared to the tedious and time-consuming traditional diagnosis, automatic detection algorithms based on deep learning (DL) can save diagnosis time, improve test accuracy, reduce costs, and reduce the radiologist's workload. In this paper, we present detection models for diagnosing the presence of KTs in computed tomography (CT) scans. Toward detecting and classifying KT, we proposed 2D-CNN models: three models concern KT detection, namely a 2D convolutional neural network with six layers (CNN-6), a ResNet50 with 50 layers, and a VGG16 with 16 layers; the last model, a 2D convolutional neural network with four layers (CNN-4), is for KT classification. In addition, a novel dataset from the King Abdullah University Hospital (KAUH) has been collected, consisting of 8,400 images of 120 adult patients who underwent CT scans for suspected kidney masses. The dataset was divided into 80% for the training set and 20% for the testing set. The accuracy results for the detection models (2D CNN-6, ResNet50, and VGG16) reached 97%, 96%, and 60%, respectively, while the accuracy of the 2D CNN-4 classification model reached 92%. Our novel models achieved promising results; they enhance the diagnosis of patient conditions with high accuracy, reducing the radiologist's workload and providing a tool that can automatically assess the condition of the kidneys, reducing the risk of misdiagnosis. Furthermore, increasing the quality of healthcare service and early detection can change the disease's track and preserve the patient's life.

Transferable HMM probability matrices in multi‐orientation geometric medical volumes segmentation

  • AlZu'bi, Shadi
  • AlQatawneh, Sokyna
  • ElBes, Mohammad
  • Alsmirat, Mohammad
Concurrency and Computation: Practice and Experience 2019 Journal Article, cited 0 times
Website
Acceptable error rates, low-quality assessment, and time complexity are the major problems in image segmentation that need to be addressed. A variety of acceleration techniques have been applied and achieve real-time results, but remain limited in 3D. The hidden Markov model (HMM) is one of the best statistical techniques and has recently played a significant role. The problem associated with HMMs is time complexity, which has been resolved using different accelerators. In this research, we propose a methodology for transferring HMM matrices from one image to another, skipping the training time for the rest of the 3D volume: one HMM is trained and then generalized to the whole volume. The concept of multi-orientation geometric segmentation is employed here to improve the quality of HMM segmentation. Axial, sagittal, and coronal orientations have been considered individually and together to achieve accurate segmentation results in less processing time, with superior detection accuracy.
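
The transfer idea (train the HMM once, then reuse its probability matrices on every other slice) can be illustrated with hmmlearn; the random voxel data and three-state model below are assumptions, not the paper's configuration:

    import numpy as np
    from hmmlearn import hmm

    # Train once on the voxel intensities of a single reference slice...
    ref_slice = np.random.rand(128 * 128, 1)       # stand-in axial slice
    model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
    model.fit(ref_slice)

    # ...then reuse the learned matrices (transmat_, startprob_, means_,
    # covars_) unchanged to label the remaining slices: no re-training.
    other_slices = [np.random.rand(128 * 128, 1) for _ in range(10)]
    labels = [model.predict(s) for s in other_slices]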

Comparative Analysis of Lossless Image Compression Algorithms based on Different Types of Medical Images

  • Alzahrani, Mona
  • Albinali, Mona
2021 Conference Paper, cited 0 times
Website
In the medical field, there is a demand for high-speed transmission and efficient storage of medical images between healthcare organizations; image compression techniques are therefore essential in that field. In this study, we conducted an experimental comparison between two well-known lossless algorithms: the lossless Discrete Cosine Transform (DCT) and the lossless Haar Wavelet Transform (HWT). We covered three different datasets containing different types of medical images (MRI, CT, and gastrointestinal endoscopic images) with different image formats (PNG, JPG, and TIF). According to the conducted experiments, in terms of compressed image size and compression ratio, DCT outperforms HWT on the PNG and TIF formats, which represent the CT grey-scale and MRI color images. On the JPG format, which represents the gastrointestinal endoscopic color images, DCT performs well when grey-scale images are used, whereas HWT outperforms DCT when color images are used. However, HWT outperforms DCT in compression time across all image types and formats.
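
Both transforms under comparison are exactly invertible up to floating-point precision, which is what makes a lossless pipeline possible (the compression itself comes from entropy-coding the transform coefficients, omitted here). A round-trip sketch with SciPy and PyWavelets on a random stand-in slice:

    import numpy as np
    import pywt
    from scipy.fft import dctn, idctn

    img = np.random.rand(256, 256)          # stand-in CT slice

    # Round trip through the 2-D DCT...
    rec_dct = idctn(dctn(img, norm="ortho"), norm="ortho")

    # ...and through one level of the 2-D Haar wavelet transform.
    LL, (LH, HL, HH) = pywt.dwt2(img, "haar")
    rec_haar = pywt.idwt2((LL, (LH, HL, HH)), "haar")

    print(np.allclose(img, rec_dct), np.allclose(img, rec_haar))  # True True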

Fully Automatic Deep Learning Framework for Pancreatic Ductal Adenocarcinoma Detection on Computed Tomography

  • Alves, N.
  • Schuurmans, M.
  • Litjens, G.
  • Bosma, J. S.
  • Hermans, J.
  • Huisman, H.
Cancers (Basel) 2022 Journal Article, cited 0 times
Website
Early detection improves prognosis in pancreatic ductal adenocarcinoma (PDAC), but is challenging as lesions are often small and poorly defined on contrast-enhanced computed tomography scans (CE-CT). Deep learning can facilitate PDAC diagnosis; however, current models still fail to identify small (<2 cm) lesions. In this study, state-of-the-art deep learning models were used to develop an automatic framework for PDAC detection, focusing on small lesions. Additionally, the impact of integrating the surrounding anatomy was investigated. CE-CT scans from a cohort of 119 pathology-proven PDAC patients and a cohort of 123 patients without PDAC were used to train a nnUnet for automatic lesion detection and segmentation (nnUnet_T). Two additional nnUnets were trained to investigate the impact of anatomy integration: (1) segmenting the pancreas and tumor (nnUnet_TP), and (2) segmenting the pancreas, tumor, and multiple surrounding anatomical structures (nnUnet_MS). An external, publicly available test set was used to compare the performance of the three networks. The nnUnet_MS achieved the best performance, with an area under the receiver operating characteristic curve of 0.91 for the whole test set and 0.88 for tumors <2 cm, showing that state-of-the-art deep learning can detect small PDAC and benefits from anatomy information.

Robust Detection of Circles in the Vessel Contours and Application to Local Probability Density Estimation

  • Alvarez, Luis
  • González, Esther
  • Esclarín, Julio
  • Gomez, Luis
  • Alemán-Flores, Miguel
  • Trujillo, Agustín
  • Cuenca, Carmelo
  • Mazorra, Luis
  • Tahoces, Pablo G
  • Carreira, José M
2017 Book Section, cited 3 times
Website

Leukemia Classification Using EfficientNetB5: A Deep Learning Approach

  • Alshoraihy, Aseel
  • Ibrahim, Anagheem
  • Issa, Housam Hasan Bou
2024 Conference Paper, cited 0 times
Leukemia is a critical disease that requires early and accurate diagnosis. Leukemia is a type of blood cancer that mainly occurs when the bone marrow produces extra white blood cells in the human body. This disease affects adults and is a common cancer type among children. This paper presents a deep learning approach using EfficientNetB5 to classify leukemia on a dataset from The Cancer Imaging Archive (TCIA) with more than 10,000 images from 118 patients. The resulting confusion matrix will contribute to improving research on cancer diagnosis.
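
A minimal transfer-learning setup of the kind the abstract implies, using Keras' EfficientNetB5 with a frozen ImageNet backbone; the binary head, input size, and training schedule are assumptions, not the authors' exact configuration:

    import tensorflow as tf
    from tensorflow.keras import layers

    base = tf.keras.applications.EfficientNetB5(
        include_top=False, weights="imagenet", input_shape=(456, 456, 3))
    base.trainable = False                    # freeze ImageNet features first

    model = tf.keras.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid"),  # leukemic vs. normal cell
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=5)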

GLCM and CNN Deep Learning Model for Improved MRI Breast Tumors Detection

  • Alsalihi, Aya A
  • Aljobouri, Hadeel K
  • ALTameemi, Enam Azez Khalel
2022 Journal Article, cited 0 times
Website

Nakagami-Fuzzy imaging framework for precise lesion segmentation in MRI

  • Alpar, Orcan
  • Dolezal, Rafael
  • Ryska, Pavel
  • Krejcar, Ondrej
Pattern Recognition 2022 Journal Article, cited 0 times
Website

Predicting methylation class from diffusely infiltrating adult gliomas using multi-modality MRI data

  • Alom, Zahangir
  • Tran, Quynh T.
  • Bag, Asim K.
  • Lucas, John T.
  • Orr, Brent A.
Neuro-oncology advances 2023 Journal Article, cited 0 times
Website
Background: Radiogenomic studies of adult-type diffuse gliomas have used magnetic resonance imaging (MRI) data to infer tumor attributes, including abnormalities such as IDH-mutation status and 1p19q deletion. This approach is effective but doesn't generalize to tumor types that lack highly recurrent alterations. Tumors have intrinsic DNA methylation patterns and can be grouped into stable methylation classes even when lacking recurrent mutations or copy number changes. The purpose of this study was to prove the principle that a tumor's DNA-methylation class could be used as a predictive feature for radiogenomic modeling. Methods: Using a custom DNA methylation-based classification model, molecular classes were assigned to diffuse gliomas in The Cancer Genome Atlas (TCGA) dataset. We then constructed and validated machine learning models to predict a tumor's methylation family or subclass from matched multisequence MRI data using either extracted radiomic features or directly from MRI images. Results: For models using extracted radiomic features, we demonstrated top accuracies above 90% for predicting IDH-glioma and GBM-IDHwt methylation families, IDH-mutant tumor methylation subclasses, or GBM-IDHwt molecular subclasses. Classification models utilizing MRI images directly demonstrated average accuracies of 80.6% for predicting methylation families, compared to 87.2% and 89.0% for differentiating IDH-mutated astrocytomas from oligodendrogliomas and glioblastoma molecular subclasses, respectively. Conclusions: These findings demonstrate that MRI-based machine learning models can effectively predict the methylation class of brain tumors. Given appropriate datasets, this approach could generalize to most brain tumor types, expanding the number and types of tumors that could be used to develop radiomic or radiogenomic models.

Versatile Convolutional Networks Applied to Computed Tomography and Magnetic Resonance Image Segmentation

  • Almeida, Gonçalo
  • Tavares, João Manuel R. S.
Journal of Medical Systems 2021 Journal Article, cited 0 times
Website
Medical image segmentation has seen positive developments in recent years but remains challenging with many practical obstacles to overcome. The applications of this task are wide-ranging in many fields of medicine, and used in several imaging modalities which usually require tailored solutions. Deep learning models have gained much attention and have been lately recognized as the most successful for automated segmentation. In this work we show the versatility of this technique by means of a single deep learning architecture capable of successfully performing segmentation on two very different types of imaging: computed tomography and magnetic resonance. The developed model is fully convolutional with an encoder-decoder structure and high-resolution pathways which can process whole three-dimensional volumes at once, and learn directly from the data to find which voxels belong to the regions of interest and localize those against the background. The model was applied to two publicly available datasets achieving equivalent results for both imaging modalities, as well as performing segmentation of different organs in different anatomic regions with comparable success.

King Abdullah International Medical Research Center (KAIMRC)’s breast cancer big images data set

  • Almazroa, Ahmed A.
  • Bin Saleem, Ghaida
  • Alotaibi, Aljoharah
  • Almasloukh, Mudhi
  • Al Otaibi, Um Klthoum
  • Al Balawi, Wejdan
  • Alabdulmajeed, Ghufran
  • Alamri, Suhailah
  • Alsomaie, Barrak
  • Fahim, Mohammed
  • Alluhaydan, Najd
  • Almatar, Hessa
  • Park, Brian J.
  • Deserno, Thomas M.
2022 Conference Paper, cited 0 times
The purpose of this project is to prepare an image data set for developing AI systems that serve breast cancer screening and diagnosis research. Early detection can have a positive impact on decreasing mortality, as it offers more options for successful intervention and therapies that reduce the chance of malignant and metastatic progression. Six students, one research technologist, and one consultant in radiology collected the images and the patients' information. The images were extracted from three imaging modalities: Hologic 3D mammography machines, Philips and SuperSonic ultrasound machines, and GE and Philips MRI machines. The cases were graded by a trained radiologist. A total of 3085 DICOM-format images were collected for the period between 2008 and 2020 from 890 female patients aged 18 to 85. The largest portion of the data is dedicated to mammograms (51.3%), followed by ultrasound (31.7%) and MRI exams (17%). There were 593 malignant cases, while the benign cases numbered 2492. The diagnosis was confirmed by biopsy after mammogram and ultrasound exams. The data will be continually collected in the future to serve the artificial intelligence research field and the public health community. Updated information about the data will be available at: https://kaimrc.med.sa/?page_id=11767072

Challenges in predicting glioma survival time in multi-modal deep networks

  • Aljouie, Abdulrhman
  • Xue, Yunzhe
  • Xie, Meiyan
  • Roshan, Uman
2020 Conference Paper, cited 0 times
Website
Prediction of cancer survival time is of considerable interest in medicine as it leads to better patient care and reduces health care costs. In this study, we propose a multi-path multimodal neural network that predicts Glioblastoma Multiforme (GBM) survival time at the 14-month threshold. We obtained images, gene expression, and SNP variants from whole-exome sequences, all from The Cancer Genome Atlas portal, for a total of 126 patients. We perform a 10-fold cross-validation experiment on each of the data sources separately as well as on the model with all data combined. From post-contrast T1 MRI data, we used 3D scans and 2D slices that we selected manually to show the tumor region. We find that the model with 2D MRI slices and genomic data combined gives the highest accuracies over individual sources, but by a modest margin. We see considerable variation in accuracies across the 10 folds, and our model achieves 100% accuracy on the training data but lags behind in test accuracy. With dropout, our training accuracy falls considerably. This shows that predicting glioma survival time is a challenging task, but it is unclear whether this is also a symptom of insufficient data. A clear direction here is to augment our data, which we plan to explore with generative models. Overall, we present a novel multimodal network that incorporates SNP, gene expression, and MRI image data for glioma survival time prediction.

Automated apparent diffusion coefficient analysis for genotype prediction in lower grade glioma: association with the T2-FLAIR mismatch sign

  • Aliotta, E.
  • Dutta, S. W.
  • Feng, X.
  • Tustison, N. J.
  • Batchala, P. P.
  • Schiff, D.
  • Lopes, M. B.
  • Jain, R.
  • Druzgal, T. J.
  • Mukherjee, S.
  • Patel, S. H.
J Neurooncol 2020 Journal Article, cited 0 times
Website
PURPOSE: The prognosis of lower grade glioma (LGG) patients depends (in large part) on both isocitrate dehydrogenase (IDH) gene mutation and chromosome 1p/19q codeletion status. IDH-mutant LGG without 1p/19q codeletion (IDHmut-Noncodel) often exhibit a unique imaging appearance that includes high apparent diffusion coefficient (ADC) values not observed in other subtypes. The purpose of this study was to develop an ADC analysis-based approach that can automatically identify IDHmut-Noncodel LGG. METHODS: Whole-tumor ADC metrics, including fractional tumor volume with ADC > 1.5 × 10^-3 mm^2/s (VADC>1.5), were used to identify IDHmut-Noncodel LGG in a cohort of N = 134 patients. Optimal threshold values determined in this dataset were then validated using an external dataset containing N = 93 cases collected from The Cancer Imaging Archive. Classifications were also compared with radiologist-identified T2-FLAIR mismatch sign and evaluated concurrently to identify added value from a combined approach. RESULTS: VADC>1.5 classified IDHmut-Noncodel LGG in the internal cohort with an area under the curve (AUC) of 0.80. An optimal threshold value of 0.35 led to sensitivity/specificity = 0.57/0.93. Classification performance was similar in the validation cohort, with VADC>1.5 ≥ 0.35 achieving sensitivity/specificity = 0.57/0.91 (AUC = 0.81). Across both groups, 37 cases exhibited a positive T2-FLAIR mismatch sign, all of which were IDHmut-Noncodel. Of these, 32/37 (86%) also exhibited VADC>1.5 ≥ 0.35, as did 23 additional IDHmut-Noncodel cases which were negative for the T2-FLAIR mismatch sign. CONCLUSION: Tumor subregions with high ADC were a robust indicator of IDHmut-Noncodel LGG, with VADC>1.5 achieving > 90% classification specificity in both internal and validation cohorts. VADC>1.5 exhibited strong concordance with the T2-FLAIR mismatch sign, and the combination of both parameters improved sensitivity in detecting IDHmut-Noncodel LGG.
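
The classifier reported above is a one-line rule once the ADC map and tumor mask are in hand: compute the fraction of tumor voxels with ADC above 1.5 × 10^-3 mm^2/s and compare it with the 0.35 cut-off. A NumPy sketch with placeholder arrays:

    import numpy as np

    def fractional_volume_high_adc(adc_map, tumor_mask, threshold=1.5e-3):
        """Fraction of tumor voxels with ADC above the threshold (mm^2/s)."""
        tumor_adc = adc_map[tumor_mask > 0]
        return float(np.mean(tumor_adc > threshold))

    adc = np.random.rand(64, 64, 32) * 3e-3        # stand-in ADC volume
    mask = np.zeros_like(adc)
    mask[20:40, 20:40, 10:20] = 1                  # stand-in tumor mask
    # Study's rule: predict IDHmut-Noncodel when V_ADC>1.5 >= 0.35.
    is_idhmut_noncodel = fractional_volume_high_adc(adc, mask) >= 0.35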

Glioma Segmentation Using Ensemble of 2D/3D U-Nets and Survival Prediction Using Multiple Features Fusion

  • Ali, Muhammad Junaid
  • Akram, Muhammad Tahir
  • Saleem, Hira
  • Raza, Basit
  • Shahid, Ahmad Raza
2021 Book Section, cited 10 times
Website
Automatic segmentation of gliomas from brain Magnetic Resonance Imaging (MRI) volumes is an essential step for tumor detection. Various 2D Convolutional Neural Network (2D-CNN) and its 3D variant, known as 3D-CNN based architectures, have been proposed in previous studies, which are used to capture contextual information. The 3D models capture depth information, making them an automatic choice for glioma segmentation from 3D MRI images. However, the 2D models can be trained in a relatively shorter time, making their parameter tuning relatively easier. Considering these facts, we tried to propose an ensemble of 2D and 3D models to utilize their respective benefits better. After segmentation, prediction of Overall Survival (OS) time was performed on segmented tumor sub-regions. For this task, multiple radiomic and image-based features were extracted from MRI volumes and segmented sub-regions. In this study, radiomic and image-based features were fused to predict the OS time of patients. Experimental results on BraTS 2020 testing dataset achieved a dice score of 0.79 on Enhancing Tumor (ET), 0.87 on Whole Tumor (WT), and 0.83 on Tumor Core (TC). For OS prediction task, results on BraTS 2020 testing leaderboard achieved an accuracy of 0.57, Mean Square Error (MSE) of 392,963.189, Median SE of 162,006.3, and Spearman R correlation score of −0.084.

Prediction of glioma-subtypes: comparison of performance on a DL classifier using bounding box areas versus annotated tumors

  • Ali, M. B.
  • Gu, I. Y.
  • Lidemar, A.
  • Berger, M. S.
  • Widhalm, G.
  • Jakola, A. S.
2022 Journal Article, cited 0 times
Website
BACKGROUND: For brain tumors, identifying the molecular subtypes from magnetic resonance imaging (MRI) is desirable, but remains a challenging task. Recent machine learning and deep learning (DL) approaches may help the classification/prediction of tumor subtypes through MRIs. However, most of these methods require annotated data with ground truth (GT) tumor areas manually drawn by medical experts. Manual annotation is a time-consuming process with high demand on medical personnel. As an alternative, automatic segmentation is often used. However, it does not guarantee quality and can lead to improper or failed segmentation boundaries due to differences in MRI acquisition parameters across imaging centers, as segmentation is an ill-defined problem. Analogous to visual object tracking and classification, this paper shifts the paradigm by training a classifier using tumor bounding box areas in MR images. The aim of our study is to see whether it is possible to replace GT tumor areas with tumor bounding box areas (e.g. ellipse-shaped boxes) for classification without a significant drop in performance. METHOD: In patients with diffuse gliomas, a deep learning classifier for subtype prediction was trained using tumor regions of interest (ROIs) defined by ellipse bounding boxes versus manually annotated data. Experiments were conducted on two datasets (US and TCGA) consisting of multi-modality MRI scans, where the US dataset contained patients with diffuse low-grade gliomas (dLGG) exclusively. RESULTS: Prediction rates were obtained on 2 test datasets: 69.86% for 1p/19q codeletion status on the US dataset and 79.50% for IDH mutation/wild-type on the TCGA dataset. Comparison with using annotated GT tumor data for training showed an average degradation of 3.0% (2.92% for 1p/19q codeletion status and 3.23% for IDH genotype). CONCLUSION: Using tumor ROIs, i.e., ellipse bounding box tumor areas, to replace annotated GT tumor areas for training a deep learning scheme causes only a modest decline in subtype prediction performance. As more data become available, this may be a reasonable trade-off in which the decline in performance is counteracted by additional training data.
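The ellipse ROIs used in place of expert masks can be generated directly from a rectangular tumor bounding box; a rough sketch with scikit-image (the padding factor and function names are assumptions):

    import numpy as np
    from skimage.draw import ellipse

    def ellipse_roi_mask(shape, bbox, pad=1.1):
        """Binary ellipse inscribed in a (slightly padded) tumor bounding
        box given as (row0, col0, row1, col1)."""
        r0, c0, r1, c1 = bbox
        rr, cc = ellipse((r0 + r1) / 2.0, (c0 + c1) / 2.0,
                         pad * (r1 - r0) / 2.0, pad * (c1 - c0) / 2.0,
                         shape=shape)
        mask = np.zeros(shape, dtype=np.uint8)
        mask[rr, cc] = 1
        return mask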

A Feasibility Study on Deep Learning Based Brain Tumor Segmentation Using 2D Ellipse Box Areas

  • Ali, Muhaddisa Barat
  • Bai, Xiaohan
  • Gu, Irene Yu-Hua
  • Berger, Mitchel S.
  • Jakola, Asgeir Store
Sensors 2022 Journal Article, cited 2 times
Website
In most deep learning-based brain tumor segmentation methods, training the deep network requires annotated tumor areas. However, accurate tumor annotation puts high demands on medical personnel. The aim of this study is to train a deep network for segmentation by using ellipse box areas surrounding the tumors. In the proposed method, the deep network is trained by using a large number of unannotated tumor images with foreground (FG) and background (BG) ellipse box areas surrounding the tumor and background, and a small number of patients (<20) with annotated tumors. The training is conducted by initial training on two ellipse boxes on unannotated MRIs, followed by refined training on a small number of annotated MRIs. We use a multi-stream U-Net for conducting our experiments, which is an extension of the conventional U-Net. This enables the use of complementary information from multi-modality (e.g., T1, T1ce, T2, and FLAIR) MRIs. To test the feasibility of the proposed approach, experiments and evaluation were conducted on two datasets for glioma segmentation. Segmentation performance on the test sets is then compared with that of the same network trained entirely on annotated MRIs. Our experiments show that the proposed method obtains good tumor segmentation results on the test sets, with a dice score on tumor areas of (0.8407, 0.9104) and a segmentation accuracy on tumor areas of (83.88%, 88.47%) for the MICCAI BraTS’17 and US datasets, respectively. Compared with the results of the same network trained on all annotated tumors, the drop in segmentation performance from the proposed approach is relatively small: (0.0594, 0.0159) in dice score and (8.78%, 2.61%) in segmented tumor accuracy for the MICCAI and US test sets. Our case studies demonstrate that training the network for segmentation using ellipse box areas in place of fully annotated tumors is feasible and can be considered an alternative, trading a small drop in segmentation performance for savings in medical experts' annotation time.
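The dice score reported throughout these evaluations has a compact NumPy form, shown here as a reference sketch:

    import numpy as np

    def dice_score(pred, target, eps=1e-7):
        """Dice similarity coefficient between two binary masks."""
        pred, target = pred.astype(bool), target.astype(bool)
        inter = np.logical_and(pred, target).sum()
        return 2.0 * inter / (pred.sum() + target.sum() + eps)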

Applying Deep Transfer Learning to Assess the Impact of Imaging Modalities on Colon Cancer Detection

  • Alhazmi, Wael
  • Turki, Turki
Diagnostics 2023 Journal Article, cited 1 times
Website

Radiogenomics in renal cell carcinoma

  • Alessandrino, Francesco
  • Shinagare, Atul B
  • Bossé, Dominick
  • Choueiri, Toni K
  • Krajewski, Katherine M
Abdominal Radiology 2018 Journal Article, cited 0 times
Website

Semi-automatic classification of prostate cancer on multi-parametric MR imaging using a multi-channel 3D convolutional neural network

  • Aldoj, Nader
  • Lukas, Steffen
  • Dewey, Marc
  • Penzkofer, Tobias
Eur Radiol 2020 Journal Article, cited 1 times
Website
OBJECTIVE: To present a deep learning-based approach for semi-automatic prostate cancer classification based on multi-parametric magnetic resonance (MR) imaging using a 3D convolutional neural network (CNN). METHODS: Two hundred patients with a total of 318 lesions for which histological correlation was available were analyzed. A novel CNN was designed, trained, and validated using different combinations of distinct MRI sequences as input (e.g., T2-weighted, apparent diffusion coefficient (ADC), diffusion-weighted images, and K-trans) and the effect of different sequences on the network's performance was tested and discussed. The particular choice of modeling approach was justified by testing all relevant data combinations. The model was trained and validated using eightfold cross-validation. RESULTS: In terms of detection of significant prostate cancer defined by biopsy results as the reference standard, the 3D CNN achieved an area under the curve (AUC) of the receiver operating characteristics ranging from 0.89 (88.6% and 90.0% for sensitivity and specificity respectively) to 0.91 (81.2% and 90.5% for sensitivity and specificity respectively) with an average AUC of 0.897 for the ADC, DWI, and K-trans input combination. The other combinations scored less in terms of overall performance and average AUC, where the difference in performance was significant with a p value of 0.02 when using T2w and K-trans; and 0.00025 when using T2w, ADC, and DWI. Prostate cancer classification performance is thus comparable to that reported for experienced radiologists using the prostate imaging reporting and data system (PI-RADS). Lesion size and largest diameter had no effect on the network's performance. CONCLUSION: The diagnostic performance of the 3D CNN in detecting clinically significant prostate cancer is characterized by a good AUC and sensitivity and high specificity. KEY POINTS: * Prostate cancer classification using a deep learning model is feasible and it allows direct processing of MR sequences without prior lesion segmentation. * Prostate cancer classification performance as measured by AUC is comparable to that of an experienced radiologist. * Perfusion MR images (K-trans), followed by DWI and ADC, have the highest effect on the overall performance; whereas T2w images show hardly any improvement.
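The central design point, feeding several co-registered MR sequences to one 3D CNN as input channels, can be sketched in PyTorch as follows (layer sizes and the toy head are illustrative, not the authors' architecture):

    import torch
    import torch.nn as nn

    class MultiChannel3DCNN(nn.Module):
        """Toy 3D CNN taking stacked sequences (e.g., ADC, DWI, K-trans)
        as input channels for lesion classification."""
        def __init__(self, in_channels=3, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(in_channels, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1))
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, x):                   # x: (B, C, D, H, W)
            return self.classifier(self.features(x).flatten(1))

    logits = MultiChannel3DCNN()(torch.randn(1, 3, 16, 32, 32))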

Automatic intensity windowing of mammographic images based on a perceptual metric

  • Albiol, Alberto
  • Corbi, Alberto
  • Albiol, Francisco
Medical Physics 2017 Journal Article, cited 0 times
Website
PURPOSE: Initial auto-adjustment of the window level (WL) and width (WW) applied to mammographic images. The proposed intensity windowing (IW) method is based on the maximization of the mutual information (MI) between a perceptual decomposition of the original 12-bit sources and their screen-displayed 8-bit version. Besides zoom, color inversion and panning operations, IW is the most commonly performed task in daily screening and has a direct impact on diagnosis and the time involved in the process. METHODS: The authors present a human visual system and perception-based algorithm named GRAIL (Gabor-relying adjustment of image levels). GRAIL initially measures a mammogram's quality based on the MI between the original instance and its Gabor-filtered derivations. From this point on, the algorithm performs an automatic intensity windowing process that outputs the WL/WW that best displays each mammogram for screening. GRAIL starts with the default, high contrast, wide dynamic range 12-bit data, and then maximizes the graphical information presented in ordinary 8-bit displays. Tests have been carried out with several mammogram databases. They comprise correlations and an ANOVA analysis with the manual IW levels established by a group of radiologists. A complete MATLAB implementation of GRAIL is available at https://github.com/TheAnswerIsFortyTwo/GRAIL. RESULTS: Auto-leveled images show superior quality both perceptually and objectively compared to their full intensity range and compared to the application of other common methods like global contrast stretching (GCS). The correlations between the human-determined intensity values and the ones estimated by our method surpass those of GCS. The ANOVA analysis with the upper intensity thresholds also reveals a similar outcome. GRAIL has also proven to perform especially well with images that contain micro-calcifications and/or foreign X-ray-opaque elements and with healthy BI-RADS A-type mammograms. It can also speed up the initial screening time by a mean of 4.5 s per image. CONCLUSIONS: A novel methodology is introduced that enables a quality-driven balancing of the WL/WW of mammographic images. This correction seeks the representation that maximizes the amount of graphical information contained in each image. The presented technique can contribute to the diagnosis and the overall efficiency of the breast screening session by suggesting, at the beginning, an optimal and customized windowing setting for each mammogram.
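The core operation, choosing the WL/WW pair that maximizes mutual information between the 12-bit source and its 8-bit display rendering, can be sketched as a brute-force search (histogram-based MI; the candidate grid stands in for the paper's optimizer):

    import numpy as np

    def mutual_information(a, b, bins=64):
        hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        p = hist / hist.sum()
        px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
        nz = p > 0
        return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

    def apply_window(img12, wl, ww):
        lo = wl - ww / 2.0
        return np.clip((img12 - lo) / ww, 0.0, 1.0) * 255.0

    def best_window(img12, wl_grid, ww_grid):
        """Exhaustive WL/WW search maximizing MI with the displayed image."""
        scored = [(mutual_information(img12, apply_window(img12, wl, ww)),
                   wl, ww) for wl in wl_grid for ww in ww_grid]
        return max(scored)[1:]                  # (best WL, best WW)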

Multi-modal Multi-temporal Brain Tumor Segmentation, Growth Analysis and Texture-based Classification

  • Alberts, Esther
2019 Thesis, cited 0 times
Website
Brain tumor analysis is an active field of research, which has received a lot of attention from both the medical and the technical communities in the past decades. The purpose of this thesis is to investigate brain tumor segmentation, growth analysis and tumor classification based on multi-modal magnetic resonance (MR) image datasets of low- and high-grade glioma making use of computer vision and machine learning methodologies. Brain tumor segmentation involves the delineation of tumorous structures, such as edema, active tumor and necrotic tumor core, and healthy brain tissues, often categorized in gray matter, white matter and cerebro-spinal fluid. Deep learning frameworks have proven to be among the most accurate brain tumor segmentation techniques, performing particularly well when large accurately annotated image datasets are available. A first project is designed to build a more flexible model, which allows for intuitive semi-automated user-interaction, is less dependent on training data, and can handle missing MR modalities. The framework is based on a Bayesian network with hidden variables optimized by the expectation-maximization algorithm, and is tailored to handle non-Gaussian multivariate distributions using the concept of Gaussian copulas. To generate reliable priors for the generative probabilistic model and to spatially regularize the segmentation results, it is extended with an initialization and a post-processing module, both based on supervoxels classified by random forests. Brain tumor segmentation allows to assess tumor volumetry over time, which is important to identify disease progression (tumor regrowth) after therapy. In a second project, a dataset of temporal MR sequences is analyzed. To that end, brain tumor segmentation and brain tumor growth assessment are unified within a single framework using a conditional random field (CRF). The CRF extends over the temporal patient datasets and includes directed links with infinite weight in order to incorporate growth or shrinkage constraints. The model is shown to obtain temporally coherent tumor segmentation and aids in estimating the likelihood of disease progression after therapy. Recent studies classify brain tumors based on their genotypic parameters, which are reported to have an important impact on the prognosis and the therapy of patients. A third project is aimed to investigate whether the genetic profile of glioma can be predicted based on the MR images only, which would eliminate the need to take biopsies. A multi-modal medical image classification framework is built, classifying glioma in three genetic classes based on DNA methylation status. The framework makes use of short local image descriptors as well as deep-learned features acquired by denoising auto-encoders to generate meaningful image features. The framework is successfully validated and shown to obtain high accuracies even though the same image-based classification task is hardly possible for medical experts.

Self-organizing Approach to Learn a Level-set Function for Object Segmentation in Complex Background Environments

  • Albalooshi, Fatema A
2015 Thesis, cited 0 times
Website
Boundary extraction for object region segmentation is one of the most challenging tasks in image processing and computer vision areas. The complexity of large variations in the appearance of the object and the background in a typical image causes the performance degradation of existing segmentation algorithms. One of the goals of computer vision studies is to produce algorithms to segment object regions to produce accurate object boundaries that can be utilized in feature extraction and classification. This dissertation research considers the incorporation of prior knowledge of intensity/color of objects of interest within a segmentation framework to enhance the performance of object region and boundary extraction of targets in unconstrained environments. The information about intensity/color of the object of interest is taken from small patches as seeds that are fed to learn a neural network. The main challenge is accounting for the projection transformation between the limited amount of prior information and the appearance of the real object of interest in the testing data. We address this problem by the use of a Self-organizing Map (SOM) which is an unsupervised learning neural network. The segmentation process is achieved by the construction of a local fitted image level-set cost function, in which the dynamic variable is a Best Matching Unit (BMU) coming from the SOM map. The proposed method is demonstrated on the challenging PASCAL 2011 dataset, in which images contain objects with variations in illumination, shadows, occlusions, and clutter. In addition, our method is tested on different types of imagery including thermal, hyperspectral, and medical imagery. Metrics illustrate the effectiveness and accuracy of the proposed algorithm in improving the efficiency of boundary extraction and object region detection. In order to reduce computational time, a lattice Boltzmann Method (LBM) convergence criterion is used along with the proposed self-organized active contour model for producing faster and effective segmentation. The lattice Boltzmann method is utilized to evolve the level-set function rapidly and terminate the evolution of the curve at the most optimum region. Experiments performed on our test datasets show promising results in terms of time and quality of the segmentation when compared to other state-of-the-art learning-based active contour model approaches. Our method is more than 53% faster than other state-of-the-art methods. Research is in progress to employ the Time Adaptive Self-Organizing Map (TASOM) for improved segmentation and utilize the parallelization property of the LBM to achieve real-time segmentation.

Deep learning for segmentation of brain tumors: Impact of cross-institutional training and testing

  • AlBadawy, E. A.
  • Saha, A.
  • Mazurowski, M. A.
Med Phys 2018 Journal Article, cited 5 times
Website
BACKGROUND AND PURPOSE: Convolutional neural networks (CNNs) are commonly used for segmentation of brain tumors. In this work, we assess the effect of cross-institutional training on the performance of CNNs. METHODS: We selected 44 glioblastoma (GBM) patients from two institutions in The Cancer Imaging Archive dataset. The images were manually annotated by outlining each tumor component to form ground truth. To automatically segment the tumors in each patient, we trained three CNNs: (a) one using data for patients from the same institution as the test data, (b) one using data for the patients from the other institution and (c) one using data for the patients from both of the institutions. The performance of the trained models was evaluated using Dice similarity coefficients as well as Average Hausdorff Distance between the ground truth and automatic segmentations. The 10-fold cross-validation scheme was used to compare the performance of different approaches. RESULTS: Performance of the model significantly decreased (P < 0.0001) when it was trained on data from a different institution (dice coefficients: 0.68 ± 0.19 and 0.59 ± 0.19) as compared to training with data from the same institution (dice coefficients: 0.72 ± 0.17 and 0.76 ± 0.12). This trend persisted for segmentation of the entire tumor as well as its individual components. CONCLUSIONS: There is a very strong effect of selecting data for training on performance of CNNs in a multi-institutional setting. Determination of the reasons behind this effect requires additional comprehensive investigation.

Quantitative assessment of colorectal morphology: Implications for robotic colonoscopy

  • Alazmani, A
  • Hood, A
  • Jayne, D
  • Neville, A
  • Culmer, P
Medical engineering & physics 2016 Journal Article, cited 11 times
Website
This paper presents a method of characterizing the distribution of colorectal morphometrics. It uses three-dimensional region growing and topological thinning algorithms to determine and visualize the luminal volume and centreline of the colon, respectively. Total and segmental lengths, diameters, volumes, and tortuosity angles were then quantified. The effects of body orientations on these parameters were also examined. Variations in total length were predominately due to differences in the transverse colon and sigmoid segments, and did not significantly differ between body orientations. The diameter of the proximal colon was significantly larger than the distal colon, with the largest value at the ascending and cecum segments. The volume of the transverse colon was significantly the largest, while those of the descending colon and rectum were the smallest. The prone position showed a higher frequency of high angles and consequently found to be more torturous than the supine position. This study yielded a method for complete segmental measurements of healthy colorectal anatomy and its tortuosity. The transverse and sigmoid colons were the major determinant in tortuosity and morphometrics between body orientations. Quantitative understanding of these parameters may potentially help to facilitate colonoscopy techniques, accuracy of polyp spatial distribution detection, and design of novel endoscopic devices.
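The centreline extraction described here, topological thinning of a region-grown luminal volume, is available off the shelf; a minimal sketch assuming a binary colon mask is already in hand (on older scikit-image versions, use skeletonize_3d for volumes):

    import numpy as np
    from skimage.morphology import skeletonize

    def colon_centreline(lumen_mask, voxel_mm=1.0):
        """Thin a binary luminal volume to its centreline and return a
        crude length estimate from the voxel count."""
        skeleton = skeletonize(lumen_mask.astype(bool))
        return skeleton, float(skeleton.sum()) * voxel_mm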

FEATURE EXTRACTION OF LUNG CANCER USING IMAGE ANALYSIS TECHNIQUES

  • Alayue, L. T.
  • Goshu, B. S.
  • Taju, Endris
2022 Journal Article, cited 0 times
Website
Lung cancer is one of the most life-threatening diseases. It is a medical problem that needs accurate diagnosis and timely treatment by healthcare professionals. Although CT is preferred over other imaging modalities, visual interpretation of CT scan images may be subject to error and can cause a delay in lung cancer detection. Therefore, image processing techniques are widely used for early-stage detection of lung tumors. This study was conducted to perform pre-processing, segmentation, and feature extraction of lung CT images using image processing techniques. We used the MATLAB programming language to devise a stepwise approach that included image acquisition, pre-processing, segmentation, and features extraction. A total of 14 lung CT scan images in the age group of 55–75 years were downloaded from an open access repository. The analyzed images were grayscale, 8 bits, with a resolution ranging from 151 213 to 721 900, and Digital Imaging and Communications in Medicine (DICOM) format. In the pre-processing stage median filter was used to remove noise from the original image since it preserved the edges of the image, whereas segmentation was done through edge detection and threshold analysis. The results show that solid tumors were detected in three CT images corresponding to patients aged between 71 and 75 years old. Our study indicates that image processing plays a significant role in lung cancer recognition and early-stage treatment. Health professionals need to work closely with medical physicists to improve the accuracy of diagnosis.
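The authors implement their pipeline in MATLAB; a rough Python equivalent of the pre-processing and threshold-segmentation steps looks as follows (the kernel size and the use of Otsu's threshold are assumptions):

    from scipy.ndimage import median_filter
    from skimage.filters import threshold_otsu

    def preprocess_and_segment(ct_slice, kernel=3):
        """Edge-preserving median-filter denoising followed by a global
        threshold to isolate candidate lesion regions."""
        denoised = median_filter(ct_slice, size=kernel)
        mask = denoised > threshold_otsu(denoised)
        return denoised, mask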

SwarmDeepSurv: swarm intelligence advances deep survival network for prognostic radiomics signatures in four solid cancers

  • Al-Tashi, Qasem
  • Saad, Maliazurina B.
  • Sheshadri, Ajay
  • Wu, Carol C.
  • Chang, Joe Y.
  • Al-Lazikani, Bissan
  • Gibbons, Christopher
  • Vokes, Natalie I.
  • Zhang, Jianjun
  • Lee, J. Jack
  • Heymach, John V.
  • Jaffray, David
  • Mirjalili, Seyedali
  • Wu, Jia
Patterns 2023 Journal Article, cited 0 times
Website
Survival models exist to study relationships between biomarkers and treatment effects. Deep learning-powered survival models supersede the classical Cox proportional hazards (CoxPH) model, but substantial performance drops were observed on high-dimensional features because of irrelevant/redundant information. To fill this gap, we proposed SwarmDeepSurv by integrating swarm intelligence algorithms with the deep survival model. Furthermore, four objective functions were designed to optimize prognostic prediction while regularizing selected feature numbers. When testing on multicenter sets (n = 1,058) of four different cancer types, SwarmDeepSurv was less prone to overfitting and achieved optimal patient risk stratification compared with popular survival modeling algorithms. Strikingly, SwarmDeepSurv selected different features compared with classical feature selection algorithms, including the least absolute shrinkage and selection operator (LASSO), with nearly no feature overlapping across these models. Taken together, SwarmDeepSurv offers an alternative approach to model relationships between radiomics features and survival endpoints, which can further extend to study other input data types including genomics.

A hybrid approach based on multiple Eigenvalues selection (MES) for the automated grading of a brain tumor using MRI

  • Al-Saffar, Z. A.
  • Yildirim, T.
Comput Methods Programs Biomed 2021 Journal Article, cited 5 times
Website
BACKGROUND AND OBJECTIVE: The manual segmentation, identification, and classification of brain tumors using magnetic resonance (MR) images are essential for making a correct diagnosis. It is, however, an exhausting and time-consuming task performed by clinical experts, and the accuracy of the results is subject to their point of view. Computer-aided technology has therefore been developed to computerize these procedures. METHODS: In order to improve the outcomes and decrease the complications involved in the process of analysing medical images, this study has investigated several methods. These include: a Local Difference in Intensity - Means (LDI-Means) based brain tumor segmentation, Mutual Information (MI) based feature selection, Singular Value Decomposition (SVD) based dimensionality reduction, and both Support Vector Machine (SVM) and Multi-Layer Perceptron (MLP) based brain tumor classification. Also, this study has presented a new method named Multiple Eigenvalues Selection (MES) to choose the most meaningful features as inputs to the classifiers. This combination of unsupervised and supervised techniques formed an effective system for the grading of brain glioma. RESULTS: The experimental results of the proposed method showed an excellent performance in terms of accuracy, recall, specificity, precision, and error rate: 91.02%, 86.52%, 94.26%, 87.07%, and 0.0897, respectively. CONCLUSION: The obtained results prove the significance and effectiveness of the proposed method in comparison to other state-of-the-art techniques, and it can contribute to an early diagnosis of brain glioma.
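The MI-plus-SVD front end feeding an SVM can be sketched compactly with scikit-learn (the feature counts are placeholders, and this generic pipeline stands in for the paper's MES selection step):

    from sklearn.decomposition import TruncatedSVD
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    # MI-based ranking -> SVD projection -> SVM grading, mirroring the
    # unsupervised/supervised combination described above.
    model = make_pipeline(
        SelectKBest(mutual_info_classif, k=100),
        TruncatedSVD(n_components=20),
        SVC(kernel="rbf"))
    # model.fit(X_train, y_train); model.predict(X_test)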

A Novel Approach to Improving Brain Image Classification Using Mutual Information-Accelerated Singular Value Decomposition

  • Al-Saffar, Zahraa A
  • Yildirim, Tülay
IEEE Access 2020 Journal Article, cited 0 times
Website

Breast Cancer Diagnostic System Based on MR images Using KPCA-Wavelet Transform and Support Vector Machine

  • AL-Dabagh, Mustafa Zuhaer
  • AL-Mukhtar, Firas H
IJAERS 2017 Journal Article, cited 0 times
Website

Radiologist performance in the detection of lung cancer using CT

  • Al Mohammad, B
  • Hillis, SL
  • Reed, W
  • Alakhras, M
  • Brennan, PC
Clinical Radiology 2019 Journal Article, cited 2 times
Website

A review of lung cancer screening and the role of computer-aided detection

  • Al Mohammad, B
  • Brennan, PC
  • Mello-Thoms, C
Clinical Radiology 2017 Journal Article, cited 23 times
Website

Automatic Detection and Segmentation of Colorectal Cancer with Deep Residual Convolutional Neural Network

  • Akilandeswari, A.
  • Sungeetha, D.
  • Joseph, C.
  • Thaiyalnayaki, K.
  • Baskaran, K.
  • Jothi Ramalingam, R.
  • Al-Lohedan, H.
  • Al-Dhayan, D. M.
  • Karnan, M.
  • Meansbo Hadish, K.
Evid Based Complement Alternat Med 2022 Journal Article, cited 0 times
Website
Early and automatic detection of colorectal tumors is essential for cancer analysis, and the same is implemented using computer-aided diagnosis (CAD). A computerized tomography (CT) image of the colon is used to identify colorectal carcinoma. Digital imaging and communication in medicine (DICOM) is a standard medical imaging format used to process and analyze images digitally. Accurate detection of tumor cells in the complex digestive tract is necessary for optimal treatment. The proposed work is divided into two phases. The first phase involves the segmentation, and the second phase is the extraction of the colon lesions with the observed segmentation parameters. A deep convolutional neural network (DCNN) based residual network approach for colon and polyp segmentation is applied over the 2D CT images. A residual stack block is added to the hidden layers with short skip connections, which helps to retain spatial information. ResNet-enabled CNN is employed in the current work to achieve complete boundary segmentation of the colon cancer region. The results obtained through segmentation serve as features for further extraction and classification of benign as well as malignant colon cancer. Performance evaluation metrics indicate that the proposed network model has effectively segmented and classified colorectal tumors with an average dice score of 91.57%, sensitivity = 98.28, specificity = 98.68, and accuracy = 98.82.

Map-Reduce based tipping point scheduler for parallel image processing

  • Akhtar, Mohammad Nishat
  • Saleh, Junita Mohamad
  • Awais, Habib
  • Bakar, Elmi Abu
Expert Systems with Applications 2020 Journal Article, cited 0 times
Website
Nowadays, Big Data image processing is very much in need due to its proven success in the fields of business information systems, medical science, and social media. However, as the days pass, the computation of Big Data images is becoming more complex, which ultimately results in complex resource management and higher task execution time. Researchers have been using a combination of CPU and GPU based computing to cut down the execution time; however, when it comes to scaling of compute nodes, the combination of CPU and GPU based computing still remains a challenge due to the high communication cost factor. In order to tackle this issue, the Map-Reduce framework has come out to be a viable option, as its workflow optimization could be enhanced by changing its underlying job scheduling mechanism. This paper presents a comparative study of job scheduling algorithms which could be deployed over various Big Data based image processing applications and also proposes a tipping point scheduling algorithm to optimize the workflow for job execution on multiple nodes. The evaluation of the proposed scheduling algorithm is done by implementing a parallel image segmentation algorithm to detect lung tumors for image datasets of up to 3 GB. In terms of performance, comprising task execution time and throughput, the proposed tipping point scheduler has come out to be the best scheduler, followed by the Map-Reduce based Fair scheduler. The proposed tipping point scheduler is 1.14 times better than the Map-Reduce based Fair scheduler and 1.33 times better than the Map-Reduce based FIFO scheduler in terms of task execution time and throughput. In terms of speedup comparison between single node and multiple nodes, the proposed tipping point scheduler attained a speedup of 4.5× for the multi-node architecture. Keywords: Job scheduler; Workflow optimization; Map-Reduce; Tipping point scheduler; Parallel image segmentation; Lung tumor

Automated and Manual Quantification of Tumour Cellularity in Digital Slides for Tumour Burden Assessment

  • Akbar, S.
  • Peikari, M.
  • Salama, S.
  • Panah, A. Y.
  • Nofech-Mozes, S.
  • Martel, A. L.
2019 Journal Article, cited 3 times
Website
The residual cancer burden index is an important quantitative measure used for assessing treatment response following neoadjuvant therapy for breast cancer. It has shown to be predictive of overall survival and is composed of two key metrics: qualitative assessment of lymph nodes and the percentage of invasive or in situ tumour cellularity (TC) in the tumour bed (TB). Currently, TC is assessed through eye-balling of routine histopathology slides estimating the proportion of tumour cells within the TB. With the advances in production of digitized slides and increasing availability of slide scanners in pathology laboratories, there is potential to measure TC using automated algorithms with greater precision and accuracy. We describe two methods for automated TC scoring: 1) a traditional approach to image analysis development whereby we mimic the pathologists' workflow, and 2) a recent development in artificial intelligence in which features are learned automatically in deep neural networks using image data alone. We show strong agreements between automated and manual analysis of digital slides. Agreements between our trained deep neural networks and experts in this study (0.82) approach the inter-rater agreements between pathologists (0.89). We also reveal properties that are captured when we apply deep neural network to whole slide images, and discuss the potential of using such visualisations to improve upon TC assessment in the future.

Unet3D with Multiple Atrous Convolutions Attention Block for Brain Tumor Segmentation

  • Akbar, Agus Subhan
  • Fatichah, Chastine
  • Suciati, Nanik
2022 Conference Paper, cited 0 times
Website
Automated brain tumor segmentation remains an exciting challenge. The UNet architecture has been widely used for medical image segmentation with several modifications; attention blocks have been used to modify its skip connections, with improved performance as a result. In this study, we propose a development of UNet for brain tumor image segmentation that modifies its contraction and expansion blocks by adding attention, multiple atrous convolutions, and a residual pathway, which we call the Multiple Atrous convolutions Attention Block (MAAB). The expansion part also builds pyramid features taken from each level to produce the final segmentation output. The architecture is trained using patches and a batch size of 2 to save GPU memory. Online validation of the segmentation results on the BraTS 2021 validation dataset yielded dice scores of 78.02, 80.73, and 89.07 for ET, TC, and WT, respectively. These results indicate that the proposed architecture is promising for further development.
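A block that merges parallel atrous (dilated) convolutions with channel attention and a residual shortcut, in the spirit of the MAAB described above, might look as follows in PyTorch (a generic sketch, not the authors' exact block):

    import torch
    import torch.nn as nn

    class MultiAtrousAttentionBlock(nn.Module):
        """Parallel dilated 3D convolutions, fused by a 1x1x1 convolution,
        gated by channel attention, with a residual pathway."""
        def __init__(self, ch, rates=(1, 2, 4)):
            super().__init__()
            self.branches = nn.ModuleList(
                nn.Conv3d(ch, ch, 3, padding=r, dilation=r) for r in rates)
            self.fuse = nn.Conv3d(ch * len(rates), ch, 1)
            self.att = nn.Sequential(
                nn.AdaptiveAvgPool3d(1), nn.Conv3d(ch, ch, 1), nn.Sigmoid())

        def forward(self, x):
            y = self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
            return x + y * self.att(y)          # residual + channel gating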

Modified MobileNet for Patient Survival Prediction

  • Akbar, Agus Subhan
  • Fatichah, Chastine
  • Suciati, Nanik
2021 Book Section, cited 5 times
Website
Glioblastoma is a type of malignant tumor that varies significantly in size, shape, and location. The study of this type of tumor, including the prediction of a patient's survival, is beneficial for the treatment of patients. However, the supporting data for survival prediction models are very limited, so the best methods are needed to handle this. In this study, we propose an architecture for predicting patient survival using MobileNet combined with a linear survival prediction model (SPM). Several variations of MobileNet are tested to obtain the best results, including MobileNet V1 with frozen or unfrozen layers and MobileNet V2 with frozen or unfrozen layers connected to the SPM. The dataset used for the trial came from BraTS 2020. A modification based on the MobileNet V2 architecture with frozen layers was selected from the test results. Testing this proposed architecture with 95 training cases and 23 validation cases resulted in an MSE loss of 78374.17. The online test results on the validation dataset of 29 cases resulted in an MSE loss of 149764.866 with an accuracy of 0.345. Testing with the testing dataset resulted in an increased accuracy of 0.402. These results are promising for further architectural development.

ResMLP_GGR: Residual Multilayer Perceptrons- Based Genotype-Guided Recurrence Prediction of Non-small Cell Lung Cancer

  • Ai, Yang
  • Li, Yinhao
  • Chen, Yen-Wei
  • Aonpong, Panyanat
  • Han, Xianhua
Journal of Image and Graphics 2023 Journal Article, cited 1 times
Website
Non-small Cell Lung Cancer (NSCLC) is one of the malignant tumors with the highest morbidity and mortality. The postoperative recurrence rate in patients with NSCLC is high, which directly endangers the lives of patients. In recent years, many studies have used Computed Tomography (CT) images to predict NSCLC recurrence. Although this approach is inexpensive, it has low prediction accuracy. Gene expression data can achieve high accuracy. However, gene acquisition is expensive and invasive, and cannot meet the recurrence prediction requirements of all patients. In this study, a low-cost, high-accuracy residual multilayer perceptrons-based genotype-guided recurrence (ResMLP_GGR) prediction method is proposed that uses a gene estimation model to guide recurrence prediction. First, a gene estimation model is proposed to construct a mapping function of mixed features (handcrafted and deep features) and gene data to estimate the genetic information of tumor heterogeneity. Then, from gene estimation data obtained using a regression model, representations related to recurrence are learned to realize NSCLC recurrence prediction. In the testing phase, NSCLC recurrence prediction can be achieved with only CT images. The experimental results show that the proposed method has few parameters, strong generalization ability, and is suitable for small datasets. Compared with state-of-the-art methods, the proposed method significantly improves recurrence prediction accuracy by 3.39% with only 1% of parameters.

Pharmacokinetic modeling of dynamic contrast‐enhanced MRI using a reference region and input function tail

  • Ahmed, Zaki
  • Levesque, Ives R
Magnetic Resonance in Medicine 2020 Journal Article, cited 0 times
Website

Pharmacokinetic modeling of dynamic contrast-enhanced MRI using a reference region and input function tail

  • Ahmed, Z.
  • Levesque, I. R.
Magn Reson Med 2020 Journal Article, cited 0 times
Website
PURPOSE: Quantitative analysis of dynamic contrast-enhanced MRI (DCE-MRI) requires an arterial input function (AIF) which is difficult to measure. We propose the reference region and input function tail (RRIFT) approach which uses a reference tissue and the washout portion of the AIF. METHODS: RRIFT was evaluated in simulations with 100 parameter combinations at various temporal resolutions (5-30 s) and noise levels (σ = 0.01-0.05 mM). RRIFT was compared against the extended Tofts model (ETM) in 8 studies from patients with glioblastoma multiforme. Two versions of RRIFT were evaluated: one using measured patient-specific AIF tails, and another assuming a literature-based AIF tail. RESULTS: RRIFT estimated the transfer constant K^trans and interstitial volume v_e with median errors within 20% across all simulations. RRIFT was more accurate and precise than the ETM at temporal resolutions slower than 10 s. The percentage error of K^trans had a median and interquartile range of −9 ± 45% with the ETM and −2 ± 17% with RRIFT at a temporal resolution of 30 s under noiseless conditions. RRIFT was in excellent agreement with the ETM in vivo, with concordance correlation coefficients (CCC) of 0.95 for K^trans, 0.96 for v_e, and 0.73 for the plasma volume v_p using a measured AIF tail. With the literature-based AIF tail, the CCC was 0.89 for K^trans, 0.93 for v_e and 0.78 for v_p. CONCLUSIONS: Quantitative DCE-MRI analysis using the input function tail and a reference tissue yields absolute kinetic parameters with the RRIFT method. This approach was viable in simulation and in vivo for temporal resolutions as low as 30 s.
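RRIFT builds on the extended Tofts model, C_t(t) = v_p C_p(t) + K^trans ∫₀ᵗ C_p(τ) exp(−K^trans (t−τ)/v_e) dτ; a minimal numerical sketch of this forward model (uniform time grid and a toy AIF are assumed):

    import numpy as np

    def extended_tofts(t, cp, ktrans, ve, vp):
        """Tissue concentration from the extended Tofts model via a
        discrete convolution of the AIF with the tissue IRF."""
        dt = t[1] - t[0]
        irf = np.exp(-ktrans * t / ve)
        return vp * cp + ktrans * np.convolve(cp, irf)[: len(t)] * dt

    t = np.arange(0.0, 300.0, 5.0)              # 5 s sampling (seconds)
    cp = np.exp(-t / 120.0) * (t > 10)          # toy AIF washout tail
    ct = extended_tofts(t, cp, ktrans=0.1 / 60, ve=0.3, vp=0.02)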

An extended reference region model for DCE‐MRI that accounts for plasma volume

  • Ahmed, Zaki
  • Levesque, Ives R
NMR in Biomedicine 2018 Journal Article, cited 0 times
Website

Increased robustness in reference region model analysis of DCE MRI using two‐step constrained approaches

  • Ahmed, Zaki
  • Levesque, Ives R
Magnetic Resonance in Medicine 2016 Journal Article, cited 1 times
Website

Tumor Lesion Segmentation from 3D PET Using a Machine Learning Driven Active Surface

  • Ahmadvand, Payam
  • Duggan, Nóirín
  • Bénard, François
  • Hamarneh, Ghassan
2016 Conference Proceedings, cited 4 times
Website

Context Aware 3D UNet for Brain Tumor Segmentation

  • Ahmad, Parvez
  • Qamar, Saqib
  • Shen, Linlin
  • Saeed, Adnan
2021 Book Section, cited 0 times
Deep convolutional neural network (CNN) achieves remarkable performance for medical image analysis. UNet is the primary source in the performance of 3D CNN architectures for medical imaging tasks, including brain tumor segmentation. The skip connection in the UNet architecture concatenates features from both encoder and decoder paths to extract multi-contextual information from image data. The multi-scaled features play an essential role in brain tumor segmentation. However, the limited use of features can degrade the performance of the UNet approach for segmentation. In this paper, we propose a modified UNet architecture for brain tumor segmentation. In the proposed architecture, we used densely connected blocks in both encoder and decoder paths to extract multi-contextual information from the concept of feature reusability. In addition, residual-inception blocks (RIB) are used to extract the local and global information by merging features of different kernel sizes. We validate the proposed architecture on the multi-modal brain tumor segmentation challenge (BRATS) 2020 testing dataset. The dice (DSC) scores of the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) are 89.12%, 84.74%, and 79.12%, respectively.

MS UNet: Multi-scale 3D UNet for Brain Tumor Segmentation

  • Ahmad, Parvez
  • Qamar, Saqib
  • Shen, Linlin
  • Rizvi, Syed Qasim Afser
  • Ali, Aamir
  • Chetty, Girija
2022 Book Section, cited 0 times
A deep convolutional neural network (CNN) achieves remarkable performance for medical image analysis. UNet is the primary source in the performance of 3D CNN architectures for medical imaging tasks, including brain tumor segmentation. The skip connection in the UNet architecture concatenates multi-scale features from image data. The multi-scaled features play an essential role in brain tumor segmentation. Researchers presented numerous multi-scale strategies that have been excellent for the segmentation task. This paper proposes a multi-scale strategy that can further improve the final segmentation accuracy. We propose three multi-scale strategies in MS UNet. Firstly, we utilize densely connected blocks in the encoder and decoder for multi-scale features. Next, the proposed residual-inception blocks extract local and global information by merging features of different kernel sizes. Lastly, we utilize the idea of deep supervision for multiple depths at the decoder. We validate the MS UNet on the BraTS 2021 validation dataset. The dice (DSC) scores of the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) are 91.938%, 86.268%, and 82.409%, respectively.

Hybrid Labels for Brain Tumor Segmentation

  • Ahmad, Parvez
  • Qamar, Saqib
  • Hashemi, Seyed Raein
  • Shen, Linlin
2020 Book Section, cited 11 times
Website
Accurate automatic segmentation of brain tumors enhances the probability of patient survival. The Convolutional Neural Network (CNN) is a popular automatic approach for image evaluation and provides excellent results compared with classical machine learning algorithms. In this paper, we present a unique approach to incorporate contextual information from multiple brain MRI labels. To address the problems of brain tumor segmentation, we implement combined strategies of residual-dense connections and atrous convolutional layers with multiple rates on the popular 3D U-Net architecture. To train and validate our proposed algorithm, we used the different BRATS 2019 datasets. The results are promising on the different evaluation metrics.

RD2A: densely connected residual networks using ASPP for brain tumor segmentation

  • Ahmad, Parvez
  • Jin, Hai
  • Qamar, Saqib
  • Zheng, Ran
  • Saeed, Adnan
Multimedia Tools and Applications 2021 Journal Article, cited 2 times
Website
The variations among shapes, sizes, and locations of tumors are obstacles to accurate automatic segmentation. U-Net is a simplified approach for automatic segmentation. Generally, convolutional or dilated convolutional layers are used for brain tumor segmentation; however, in existing segmentation methods, large dilation rates degrade the final accuracy. Moreover, parameter tuning and the imbalance ratio between the different tumor classes are further issues for segmentation. The proposed model, known as Residual-Dilated Dense Atrous-Spatial Pyramid Pooling (RD2A) 3D U-Net, is found adequate to solve these issues. The RD2A combines residual connections, dilation, and dense ASPP to preserve more contextual information about small tumors at each level of the encoder path. The multi-scale contextual information minimizes the ambiguities among the white matter (WM) and gray matter (GM) tissues of the infant's brain MRI. The BRATS 2018, BRATS 2019, and iSeg-2019 datasets are used with different evaluation metrics to validate the RD2A. On the BRATS 2018 validation dataset, the proposed model achieves average dice scores of 90.88, 84.46, and 78.18 for the whole tumor, the tumor core, and the enhancing tumor, respectively. We also evaluated on the iSeg-2019 testing set, where the proposed approach achieves average dice scores of 79.804, 77.925, and 80.569 for the cerebrospinal fluid (CSF), the gray matter (GM), and the white matter (WM), respectively. Furthermore, the presented work also obtains mean dice scores of 90.35, 82.34, and 71.93 for the whole tumor, the tumor core, and the enhancing tumor, respectively, on the BRATS 2019 validation dataset. Experimentally, the proposed approach is found to be ideal for exploiting the full contextual information of 3D brain MRI datasets.

Assessment of the global noise algorithm for automatic noise measurement in head CT examinations

  • Ahmad, M.
  • Tan, D.
  • Marisetty, S.
Med Phys 2021 Journal Article, cited 0 times
Website
PURPOSE: The global noise (GN) algorithm has been previously introduced as a method for automatic noise measurement in clinical CT images. The accuracy of the GN algorithm has been assessed in abdomen CT examinations, but not in any other body part until now. This work assesses the GN algorithm accuracy in automatic noise measurement in head CT examinations. METHODS: A publicly available image dataset of 99 head CT examinations was used to evaluate the accuracy of the GN algorithm in comparison to reference noise values. Reference noise values were acquired using a manual noise measurement procedure. The procedure used a consistent instruction protocol and multiple observers to mitigate the influence of intra- and interobserver variation, resulting in precise reference values. Optimal GN algorithm parameter values were determined. The GN algorithm accuracy and the corresponding statistical confidence interval were determined. The GN measurements were compared across the six different scan protocols used in this dataset. The correlation of GN to patient head size was also assessed using a linear regression model, and the CT scanner's X-ray beam quality was inferred from the model fit parameters. RESULTS: Across all head CT examinations in the dataset, the range of reference noise was 2.9-10.2 HU. A precision of ±0.33 HU was achieved in the reference noise measurements. After optimization, the GN algorithm had an RMS error of 0.34 HU corresponding to a percent RMS error of 6.6%. The GN algorithm had a bias of +3.9%. Statistically significant differences in GN were detected in 11 out of the 15 different pairs of scan protocols. The GN measurements were correlated with head size with a statistically significant regression slope parameter (p < 10⁻⁷). The CT scanner X-ray beam quality estimated from the slope parameter was 3.5 cm water HVL (2.8-4.8 cm 95% CI). CONCLUSION: The GN algorithm was validated for application in head CT examinations. The GN algorithm was accurate in comparison to reference manual measurement, with errors comparable to interobserver variation in manual measurement. The GN algorithm can detect noise differences in examinations performed on different scanner models or using different scan protocols. The trend in GN across patients of different head sizes closely follows that predicted by a physical model of X-ray attenuation.
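In outline, the GN algorithm takes the mode of a local standard-deviation map restricted to soft-tissue voxels; a rough sketch under that reading (the HU window, kernel size, and bin width are the tunable parameters this paper optimizes):

    import numpy as np
    from scipy.ndimage import generic_filter

    def global_noise(ct_hu, tissue_hu=(0, 100), kernel=5, bin_w=0.2):
        """Mode of the local-SD histogram over soft-tissue voxels (HU)."""
        local_sd = generic_filter(ct_hu.astype(float), np.std, size=kernel)
        sds = local_sd[(ct_hu >= tissue_hu[0]) & (ct_hu <= tissue_hu[1])]
        hist, edges = np.histogram(sds, bins=np.arange(0, sds.max(), bin_w))
        return float(edges[np.argmax(hist)] + bin_w / 2)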

AATSN: Anatomy Aware Tumor Segmentation Network for PET-CT volumes and images using a lightweight fusion-attention mechanism

  • Ahmad, I.
  • Xia, Y.
  • Cui, H.
  • Islam, Z. U.
Comput Biol Med 2023 Journal Article, cited 1 times
Website
Fluorodeoxyglucose Positron Emission Tomography (FDG-PET) provides metabolic information, while Computed Tomography (CT) provides the anatomical context of the tumors. Combined PET-CT segmentation helps in computer-assisted tumor diagnosis, staging, and treatment planning. Current state-of-the-art models mainly rely on early or late fusion techniques. These methods, however, rarely learn complementary PET-CT features and cannot efficiently correlate anatomical and metabolic features. These drawbacks can be removed by intermediate fusion; however, it produces inaccurate segmentations in the case of heterogeneous textures in the modalities, and it requires massive computation. In this work, we propose AATSN (Anatomy Aware Tumor Segmentation Network), which extracts anatomical CT features and then fuses them at an intermediate stage with PET features through a fusion-attention mechanism. Our anatomy-aware fusion-attention mechanism fuses the selectively useful CT and PET features instead of the full feature set. This not only improves the network performance but also requires fewer resources. Furthermore, our model is scalable to 2D images and 3D volumes. The proposed model is rigorously trained, tested, evaluated, and compared to the state of the art through several ablation studies on the largest available datasets. We achieved a 0.8104 dice score and a 2.11 median HD95 score in a 3D setup, and a 0.6756 dice score in a 2D setup. We demonstrate that AATSN achieves a significant performance gain while being lightweight compared to state-of-the-art methods. The implications of AATSN include improved tumor delineation for diagnosis, analysis, and radiotherapy treatment.

Convolutional Neural Network featuring VGG-16 Model for Glioma Classification

  • Agus, Minarno Eko
  • Bagas, Sasongko Yoni
  • Yuda, Munarko
  • Hanung, Nugroho Adi
  • Ibrahim, Zaidah
2022 Journal Article, cited 0 times
Website
Magnetic Resonance Imaging (MRI) is a body sensing technique that can produce detailed images of the condition of organs and tissues. Specifically related to brain tumors, the resulting images can be analyzed using image detection techniques so that tumor stages can be classified automatically. Detection of brain tumors requires a high level of accuracy because it is related to the effectiveness of medical actions and patient safety. So far, the Convolutional Neural Network (CNN) or its combination with genetic algorithms (GA) has given good results. For this reason, in this study we used a similar method but with a variant of the VGG-16 architecture. The VGG-16 variant adds 16 layers and modifies the dropout layer (using softmax activation) to reduce overfitting and avoid a large number of hyper-parameters. We also experimented with augmentation techniques to anticipate data limitations. Experiments used data from The Cancer Imaging Archive (TCIA) Repository of Molecular Brain Neoplasia Data (REMBRANDT), which contains 520 MRI images of 130 patients with different ailments, grades, races, and ages. The tumor type was glioma, and the images were divided into grades II, III, and IV, with 226, 101, and 193 images, respectively. The data were split 68%/32% for training and testing purposes. We found that VGG-16 was effective for brain tumor image classification, with an accuracy of up to 100%.

3D Semantic Segmentation of Brain Tumor for Overall Survival Prediction

  • Agravat, Rupal R.
  • Raval, Mehul S.
2021 Book Section, cited 17 times
Website
Glioma, a malignant brain tumor, requires immediate treatment to improve the survival of patients. The heterogeneous nature of Glioma makes the segmentation difficult, especially for sub-regions like necrosis, enhancing tumor, non-enhancing tumor, and edema. Deep neural networks like full convolution neural networks and an ensemble of fully convolution neural networks are successful for Glioma segmentation. The paper demonstrates the use of a 3D fully convolution neural network with a three-layer encoder-decoder approach. The dense connections within the layer help in diversified feature learning. The network takes 3D patches from T1, T2, T1c, and FLAIR modalities as input. The loss function combines dice loss and focal loss functions. The Dice similarity coefficient for training and validation set is 0.88, 0.83, 0.78 and 0.87, 0.75, 0.76 for the whole tumor, tumor core and enhancing tumor, respectively. The network achieves comparable performance with other state-of-the-art ensemble approaches. The random forest regressor trains on the shape, volumetric, and age features extracted from ground truth for overall survival prediction. The regressor achieves an accuracy of 56.8% and 51.7% on the training and validation sets.
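The combined dice-plus-focal objective mentioned above is a common pattern; a minimal PyTorch sketch for the binary case (the weighting alpha and focusing parameter gamma are assumptions; target is a float 0/1 tensor):

    import torch
    import torch.nn.functional as F

    def dice_focal_loss(logits, target, alpha=0.5, gamma=2.0, eps=1e-6):
        """Weighted sum of soft dice loss and focal loss."""
        p = torch.sigmoid(logits)
        inter = (p * target).sum()
        dice = 1 - (2 * inter + eps) / (p.sum() + target.sum() + eps)
        bce = F.binary_cross_entropy_with_logits(logits, target,
                                                 reduction="none")
        focal = ((1 - torch.exp(-bce)) ** gamma * bce).mean()
        return alpha * dice + (1 - alpha) * focal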

Brain Tumor Segmentation and Survival Prediction

  • Agravat, Rupal R.
  • Raval, Mehul S.
2020 Book Section, cited 40 times
Website
The paper demonstrates the use of the fully convolutional neural network for glioma segmentation on the BraTS 2019 dataset. Three-layers deep encoder-decoder architecture is used along with dense connection at the encoder part to propagate the information from the coarse layers to deep layers. This architecture is used to train three tumor sub-components separately. Sub-component training weights are initialized with whole tumor weights to get the localization of the tumor within the brain. In the end, three segmentation results were merged to get the entire tumor segmentation. Dice Similarity of training dataset with focal loss implementation for whole tumor, tumor core, and enhancing tumor is 0.92, 0.90, and 0.79, respectively. Radiomic features from the segmentation results predict survival. Along with these features, age and statistical features are used to predict the overall survival of patients using random forest regressors. The overall survival prediction method outperformed the other methods for the validation dataset on the leaderboard with 58.6% accuracy. This finding is consistent with the performance on the test set of BraTS 2019 with 57.9% accuracy.

Adaptive Multi-Column Deep Neural Networks with Application to Robust Image Denoising

  • Agostinelli, Forest
  • Anderson, Michael R
  • Lee, Honglak
2013 Conference Proceedings, cited 118 times
Website
Stacked sparse denoising auto-encoders (SSDAs) have recently been shown to be successful at removing noise from corrupted images. However, like most denoising techniques, the SSDA is not robust to variation in noise types beyond what it has seen during training. We present the multi-column stacked sparse denoising autoencoder, a novel technique of combining multiple SSDAs into a multi-column SSDA (MC-SSDA) by combining the outputs of each SSDA. We eliminate the need to determine the type of noise, let alone its statistics, at test time. We show that good denoising performance can be achieved with a single system on a variety of different noise types, including ones not seen in the training set. Additionally, we experimentally demonstrate the efficacy of MC-SSDA denoising by achieving MNIST digit error rates on denoised images at close to that of the uncorrupted images.

Automatic lung segmentation in low-dose chest CT scans using convolutional deep and wide network (CDWN)

  • Agnes, S Akila
  • Anitha, J
  • Peter, J Dinesh
Neural Computing and Applications 2018 Journal Article, cited 0 times
Website

Efficient multiscale fully convolutional UNet model for segmentation of 3D lung nodule from CT image

  • Agnes, S. A.
  • Anitha, J.
J Med Imaging (Bellingham) 2022 Journal Article, cited 0 times
Website
Purpose: Segmentation of lung nodules in chest CT images is essential for image-driven lung cancer diagnosis and follow-up treatment planning. Manual segmentation of lung nodules is subjective because the approach depends on the knowledge and experience of the specialist. We proposed a multiscale fully convolutional three-dimensional UNet (MF-3D UNet) model for automatic segmentation of lung nodules in CT images. Approach: The proposed model employs two strategies, fusion of multiscale features with Maxout aggregation and trainable downsampling, to improve the performance of nodule segmentation in 3D CT images. The fusion of multiscale (fine and coarse) features with the Maxout function allows the model to retain the most important features while suppressing the low-contribution features. The trainable downsampling process is used instead of fixed pooling-based downsampling. Results: The performance of the proposed MF-3D UNet model is examined by evaluating the model with CT scans obtained from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset. A quantitative and visual comparative analysis of the proposed work with various customized UNet models is also presented. The comparative analysis shows that the proposed model yields reliable segmentation results compared with other methods. The experimental result of 3D MF-UNet shows encouraging results in the segmentation of different types of nodules, including juxta-pleural, solitary pulmonary, and non-solid nodules, with an average Dice similarity coefficient of 0.83 ± 0.05, and it outperforms other CNN-based segmentation models. Conclusions: The proposed model accurately segments the nodules using multiscale feature aggregation and trainable downsampling approaches. Also, 3D operations enable precise segmentation of complex nodules using inter-slice connections.
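The two strategies named here, Maxout aggregation of multiscale features and trainable downsampling in place of fixed pooling, reduce to small building blocks; a generic PyTorch sketch (not the authors' exact layers):

    import torch
    import torch.nn as nn

    def maxout_fuse(fine, coarse_up):
        """Element-wise max over equal-shape multiscale feature maps,
        keeping the strongest response per channel and voxel."""
        return torch.maximum(fine, coarse_up)

    # Trainable downsampling: a strided convolution learned end-to-end,
    # used instead of fixed max pooling.
    downsample = nn.Conv3d(32, 32, kernel_size=2, stride=2)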

Patient-Wise Versus Nodule-Wise Classification of Annotated Pulmonary Nodules using Pathologically Confirmed Cases

  • Aggarwal, Preeti
  • Vig, Renu
  • Sardana, HK
Journal of Computers 2013 Journal Article, cited 5 times
Website
This paper presents a novel framework for combining well-known shape, texture, size and resolution informatics descriptors of solitary pulmonary nodules (SPNs) detected using CT scans. The proposed methodology evaluates the performance of a classifier in differentiating benign, malignant and metastatic SPNs using 246 chest CT scans of patients. Both patient-wise and nodule-wise diagnostic reports, available for 80 patients, were used in differentiating the SPNs, and the results were compared. For patient-wise data, a model with an efficiency of 62.55% was generated with labeled nodules; using a semi-supervised approach, the labels of the remaining unknown nodules were predicted, and a classification accuracy of 82.32% was finally achieved with all nodules labeled. For nodule-wise data, the ground-truth database of labeled nodules was expanded from a very small ground truth using a content-based image retrieval (CBIR) method, achieving a precision of 98%. The proposed methodology not only avoids unnecessary biopsies but also efficiently labels unknown nodules using pre-diagnosed cases, which can help physicians in diagnosis.

Automatic mass detection in mammograms using deep convolutional neural networks

  • Agarwal, Richa
  • Diaz, Oliver
  • Lladó, Xavier
  • Yap, Moi Hoon
  • Martí, Robert
Journal of Medical Imaging 2019 Journal Article, cited 0 times
Website
With recent advances in the field of deep learning, the use of convolutional neural networks (CNNs) in medical imaging has become very encouraging. The aim of our paper is to propose a patch-based CNN method for automated mass detection in full-field digital mammograms (FFDM). In addition to evaluating CNNs pretrained with the ImageNet dataset, we investigate the use of transfer learning for a particular domain adaptation. First, the CNN is trained using a large public database of digitized mammograms (CBIS-DDSM dataset), and then the model is transferred and tested onto the smaller database of digital mammograms (INbreast dataset). We evaluate three widely used CNNs (VGG16, ResNet50, InceptionV3) and show that the InceptionV3 obtains the best performance for classifying the mass and nonmass breast region for CBIS-DDSM. We further show the benefit of domain adaptation between the CBIS-DDSM (digitized) and INbreast (digital) datasets using the InceptionV3 CNN. Mass detection evaluation follows a fivefold cross-validation strategy using free-response operating characteristic curves. Results show that the transfer learning from CBIS-DDSM obtains a substantially higher performance with the best true positive rate (TPR) of 0.98 ± 0.02 at 1.67 false positives per image (FPI), compared with transfer learning from ImageNet with TPR of 0.91 ± 0.07 at 2.1 FPI. In addition, the proposed framework improves upon mass detection results described in the literature on the INbreast database, in terms of both TPR and FPI.
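
A hedged Keras sketch of this patch-level transfer-learning setup: InceptionV3 initialized from ImageNet with a new binary (mass vs. non-mass) head. The input size, head design, and training schedule are assumptions, not the paper's exact configuration.

```python
# InceptionV3 transfer learning for binary patch classification.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
out = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # mass probability
model = tf.keras.Model(base.input, out)

base.trainable = False  # train the new head first, then unfreeze to fine-tune
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
model.summary()
```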

An Augmentation in the Diagnostic Potency of Breast Cancer through A Deep Learning Cloud-Based AI Framework to Compute Tumor Malignancy & Risk

  • Agarwal, O
International Research Journal of Innovations in Engineering and Technology (IRJIET) 2019 Journal Article, cited 0 times
Website
This research project focuses on developing a web-based, multi-platform solution for augmenting prognostic strategies to diagnose breast cancer (BC) from a variety of different tests, including histology, mammography, cytopathology, and fine-needle aspiration cytology, all in an automated fashion. The application utilizes tensor-based data representations and deep learning architectures to produce optimized models for the prediction of novel instances against each of these medical tests. The system has been designed so that all of its computation can be integrated seamlessly into a clinical setting, without disrupting a clinician's productivity or workflow, but rather enhancing their capabilities. This software can make the diagnostic process automated, standardized, faster, and even more accurate than current benchmarks achieved by both pathologists and radiologists, which makes it invaluable from a clinical standpoint for making well-informed diagnostic decisions with nominal resources.

3D-MCN: A 3D Multi-scale Capsule Network for Lung Nodule Malignancy Prediction

  • Afshar, Parnian
  • Oikonomou, Anastasia
  • Naderkhani, Farnoosh
  • Tyrrell, Pascal N
  • Plataniotis, Konstantinos N
  • Farahani, Keyvan
  • Mohammadi, Arash
2020 Journal Article, cited 1 times
Website
Despite the advances in automatic lung cancer malignancy prediction, achieving high accuracy remains challenging. Existing solutions are mostly based on Convolutional Neural Networks (CNNs), which require a large amount of training data. Most of the developed CNN models are based only on the main nodule region, without considering the surrounding tissues. Obtaining high sensitivity is challenging with lung nodule malignancy prediction. Moreover, the interpretability of the proposed techniques should be a consideration when the end goal is to utilize the model in a clinical setting. Capsule networks (CapsNets) are new and revolutionary machine learning architectures proposed to overcome shortcomings of CNNs. Capitalizing on the success of CapsNet in biomedical domains, we propose a novel model for lung tumor malignancy prediction. The proposed framework, referred to as the 3D Multi-scale Capsule Network (3D-MCN), is uniquely designed to benefit from: (i) 3D inputs, providing information about the nodule in 3D; (ii) Multi-scale input, capturing the nodule's local features, as well as the characteristics of the surrounding tissues, and; (iii) CapsNet-based design, being capable of dealing with a small number of training samples. The proposed 3D-MCN architecture predicted lung nodule malignancy with a high accuracy of 93.12%, sensitivity of 94.94%, area under the curve (AUC) of 0.9641, and specificity of 90% when tested on the LIDC-IDRI dataset. When classifying patients as having a malignant condition (i.e., at least one malignant nodule is detected) or not, the proposed model achieved an accuracy of 83%, and a sensitivity and specificity of 84% and 81% respectively.
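
The multi-scale input design lends itself to a simple sketch: cubic crops of increasing extent centred on the nodule, so the model sees the nodule's local features as well as the surrounding tissue. A minimal numpy illustration, with the volume, centroid, and crop sizes as placeholder assumptions (the paper's actual preprocessing may differ):

```python
# Extract multi-scale 3D patches around a nodule centroid.
import numpy as np

def crop3d(volume, centre, size):
    """Extract a cubic patch of edge `size` centred on `centre` (clipped)."""
    lo = [max(0, c - size // 2) for c in centre]
    hi = [min(s, l + size) for s, l in zip(volume.shape, lo)]
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

ct = np.random.randn(128, 512, 512)      # stand-in CT volume (z, y, x)
centroid = (64, 256, 256)                # nodule centre from the annotation
scales = [crop3d(ct, centroid, s) for s in (16, 32, 64)]
print([p.shape for p in scales])
```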

Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach

  • Aerts, H. J.
  • Velazquez, E. R.
  • Leijenaar, R. T.
  • Parmar, C.
  • Grossmann, P.
  • Carvalho, S.
  • Bussink, J.
  • Monshouwer, R.
  • Haibe-Kains, B.
  • Rietveld, D.
  • Hoebers, F.
  • Rietbergen, M. M.
  • Leemans, C. R.
  • Dekker, A.
  • Quackenbush, J.
  • Gillies, R. J.
  • Lambin, P.
2014 Journal Article, cited 1029 times
Website
Human cancers exhibit strong phenotypic differences that can be visualized noninvasively by medical imaging. Radiomics refers to the comprehensive quantification of tumour phenotypes by applying a large number of quantitative image features. Here we present a radiomic analysis of 440 features quantifying tumour image intensity, shape and texture, which are extracted from computed tomography data of 1,019 patients with lung or head-and-neck cancer. We find that a large number of radiomic features have prognostic power in independent data sets of lung and head-and-neck cancer patients, many of which were not identified as significant before. Radiogenomics analysis reveals that a prognostic radiomic signature, capturing intratumour heterogeneity, is associated with underlying gene-expression patterns. These data suggest that radiomics identifies a general prognostic phenotype existing in both lung and head-and-neck cancer. This may have a clinical impact as imaging is routinely used in clinical practice, providing an unprecedented opportunity to improve decision-support in cancer treatment at low cost.
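
As an illustration of this kind of radiomics pipeline, the sketch below extracts intensity, shape, and texture features with PyRadiomics, a common open-source implementation; the study used its own feature definitions, so this is only an analogous workflow, and the file paths are placeholders.

```python
# Intensity, shape, and texture feature extraction with PyRadiomics.
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("firstorder")  # intensity statistics
extractor.enableFeatureClassByName("shape")
extractor.enableFeatureClassByName("glcm")        # texture

# Image and mask as NIfTI files; the mask delineates the tumour volume.
features = extractor.execute("ct_image.nii.gz", "tumour_mask.nii.gz")
for name, value in features.items():
    if not name.startswith("diagnostics"):
        print(name, value)
```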

Defining a Radiomic Response Phenotype: A Pilot Study using targeted therapy in NSCLC

  • Aerts, Hugo JWL
  • Grossmann, Patrick
  • Tan, Yongqiang
  • Oxnard, Geoffrey G
  • Rizvi, Naiyer
  • Schwartz, Lawrence H
  • Zhao, Binsheng
Scientific Reports 2016 Journal Article, cited 40 times
Website
Medical imaging plays a fundamental role in oncology and drug development, by providing a non-invasive method to visualize tumor phenotype. Radiomics can quantify this phenotype comprehensively by applying image-characterization algorithms, and may provide important information beyond tumor size or burden. In this study, we investigated if radiomics can identify a gefitinib response-phenotype, studying high-resolution computed-tomography (CT) imaging of forty-seven patients with early-stage non-small cell lung cancer before and after three weeks of therapy. On the baseline-scan, radiomic-feature Laws-Energy was significantly predictive for EGFR-mutation status (AUC = 0.67, p = 0.03), while volume (AUC = 0.59, p = 0.27) and diameter (AUC = 0.56, p = 0.46) were not. Although no features were predictive on the post-treatment scan (p > 0.08), the change in features between the two scans was strongly predictive (significant feature AUC-range = 0.74-0.91). A technical validation revealed that the associated features were also highly stable for test-retest (mean +/- std: ICC = 0.96 +/- 0.06). This pilot study shows that radiomic data before treatment is able to predict mutation status and associated gefitinib response non-invasively, demonstrating the potential of radiomics-based phenotyping to improve the stratification and response assessment between tyrosine kinase inhibitors (TKIs) sensitive and resistant patient populations.
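
The key step here is scoring the per-patient change in a feature between the two scans against mutation status via ROC AUC. A toy sketch with synthetic placeholder data (not the study's features or cohort):

```python
# Delta-radiomics: score the change in a feature between baseline and
# follow-up scans against a binary label with ROC AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
baseline = rng.normal(size=47)          # feature value before therapy
followup = baseline + rng.normal(size=47)
delta = followup - baseline             # per-patient feature change
mutation = rng.integers(0, 2, size=47)  # EGFR mutation status (0/1)

print("AUC of delta feature:", roc_auc_score(mutation, delta))
```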

A quantitative analysis of imaging features in lung CT images using the RW-T hybrid segmentation model

  • Adiraju, RamaVasantha
  • Elias, Susan
Multimedia Tools and Applications 2023 Journal Article, cited 0 times
Website
Lung cancer is the leading cause of cancer death worldwide. A lung nodule is the most common sign of lung cancer. The analysis of lung cancer relies heavily on the segmentation of nodules, which aids optimal treatment planning. However, because there are several types of lung nodules, accurate segmentation remains challenging. We propose an RW-T hybrid approach capable of segmenting all types of nodules, primarily externally attached nodules (juxta-pleural and juxta-vascular), and estimate the effect of nodule segmentation techniques on quantitative computed tomography (CT) imaging features in lung adenocarcinoma. On 301 lung CT images from 40 patients with lung adenocarcinoma from the LungCT-Diagnosis dataset, publicly available in The Cancer Imaging Archive (TCIA), we used a random-walk strategy and a thresholding method to implement nodule segmentation. We extracted two quantitative CT features from the segmented nodules using morphological techniques: convexity and entropy scores. The segmented nodules produced by the proposed method are compared with those of the single-click ensemble segmentation method and validated against ground-truth segmentations. Our proposed segmentation approach had a high level of agreement with the ground-truth delineations, with a Dice similarity coefficient of 0.7884, compared with 0.6407 for single-click ensemble segmentation.
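
For reference, the Dice similarity coefficient used for validation is DSC = 2|A ∩ B| / (|A| + |B|) for binary masks A and B; a small self-contained sketch with toy masks:

```python
# Dice similarity coefficient for two binary masks.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred = np.zeros((64, 64), bool); pred[20:40, 20:40] = True
truth = np.zeros((64, 64), bool); truth[22:42, 22:42] = True
print(round(dice(pred, truth), 4))
```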

Classification and Segmentation of Brain Tumor Using EfficientNet-B7 and U-Net

  • Adinegoro, Antonius Fajar
  • Sutapa, Gusti Ngurah
  • Gunawan, Anak Agung Ngurah
  • Anggarani, Ni Kadek Nova
  • Suardana, Putu
  • Kasmawan, I. Gde Antha
2023 Journal Article, cited 0 times
Website
Tumors are caused by uncontrolled growth of abnormal cells. Magnetic Resonance Imaging (MRI) is a modality widely used to produce highly detailed brain images. In addition, a surgical biopsy of the suspected tissue (tumor) is required to obtain more information about the type of tumor; laboratory testing of a biopsy takes 10 to 15 days. Based on a study conducted by Brady in 2016, errors in radiology practice are common, with an estimated daily error rate of 3-5%. The application of artificial intelligence is therefore expected to simplify diagnosis and improve its accuracy.

DNA-methylome-assisted classification of patients with poor prognostic subventricular zone associated IDH-wildtype glioblastoma

  • Adeberg, S.
  • Knoll, M.
  • Koelsche, C.
  • Bernhardt, D.
  • Schrimpf, D.
  • Sahm, F.
  • Konig, L.
  • Harrabi, S. B.
  • Horner-Rieber, J.
  • Verma, V.
  • Bewerunge-Hudler, M.
  • Unterberg, A.
  • Sturm, D.
  • Jungk, C.
  • Herold-Mende, C.
  • Wick, W.
  • von Deimling, A.
  • Debus, J.
  • Rieken, S.
  • Abdollahi, A.
Acta Neuropathol 2022 Journal Article, cited 0 times
Website
Glioblastoma (GBM) derived from the "stem cell" rich subventricular zone (SVZ) may constitute a therapy-refractory subgroup of tumors associated with poor prognosis. Risk stratification for these cases is necessary but is curtailed by error prone imaging-based evaluation. Therefore, we aimed to establish a robust DNA methylome-based classification of SVZ GBM and subsequently decipher underlying molecular characteristics. MRI assessment of SVZ association was performed in a retrospective training set of IDH-wildtype GBM patients (n = 54) uniformly treated with postoperative chemoradiotherapy. DNA isolated from FFPE samples was subject to methylome and copy number variation (CNV) analysis using Illumina Platform and cnAnalysis450k package. Deep next-generation sequencing (NGS) of a panel of 130 GBM-related genes was conducted (Agilent SureSelect/Illumina). Methylome, transcriptome, CNV, MRI, and mutational profiles of SVZ GBM were further evaluated in a confirmatory cohort of 132 patients (TCGA/TCIA). A 15 CpG SVZ methylation signature (SVZM) was discovered based on clustering and random forest analysis. One third of CpG in the SVZM were associated with MAB21L2/LRBA. There was a 14.8% (n = 8) discordance between SVZM vs. MRI classification. Re-analysis of these patients favored SVZM classification with a hazard ratio (HR) for OS of 2.48 [95% CI 1.35-4.58], p = 0.004 vs. 1.83 [1.0-3.35], p = 0.049 for MRI classification. In the validation cohort, consensus MRI based assignment was achieved in 62% of patients with an intraclass correlation (ICC) of 0.51 and non-significant HR for OS (2.03 [0.81-5.09], p = 0.133). In contrast, SVZM identified two prognostically distinct subgroups (HR 3.08 [1.24-7.66], p = 0.016). CNV alterations revealed loss of chromosome 10 in SVZM- and gains on chromosome 19 in SVZM- tumors. SVZM- tumors were also enriched for differentially mutated genes (p < 0.001). In summary, SVZM classification provides a novel means for stratifying GBM patients with poor prognosis and deciphering molecular mechanisms governing aggressive tumor phenotypes.

Automated lung tumor detection and diagnosis in CT Scans using texture feature analysis and SVM

  • Adams, Tim
  • Dörpinghaus, Jens
  • Jacobs, Marc
  • Steinhage, Volker
Communication Papers of the Federated Conference on Computer Science and Information Systems 2018 Journal Article, cited 0 times
Website

Adaptive Enhancement Technique for Cancerous Lung Nodule in Computed Tomography Images

  • AbuBaker, Ayman A
International Journal of Engineering and Technology 2016 Journal Article, cited 1 times
Website
Diagnosing computed tomography images (CT images) can take the radiologist a long time, increasing radiologist fatigue and the risk of missing cancerous lung nodule lesions. Therefore, an adaptive local-enhancement Computer Aided Diagnosis (CAD) system is proposed. The proposed technique is designed to enhance the suspicious cancerous regions in CT images. The visual characteristics of cancerous lung nodules in CT images were the main criterion in designing this technique. The new approach is divided into two phases: a pre-processing phase and an image-enhancement phase. Image noise reduction, thresholding, and extraction of the lung regions constitute the pre-processing phase, whereas the new adaptive local-enhancement method for CT images is implemented in the second phase. The proposed algorithm was tested and evaluated on 42 normal and cancerous lung nodule CT images. As a result, this new approach can enhance the cancerous lung nodules by 25% compared with the original images.

Repeatability of Automated Image Segmentation with BraTumIA in Patients with Recurrent Glioblastoma

  • Abu Khalaf, N.
  • Desjardins, A.
  • Vredenburgh, J. J.
  • Barboriak, D. P.
AJNR Am J Neuroradiol 2021 Journal Article, cited 0 times
Website
BACKGROUND AND PURPOSE: Despite high interest in machine-learning algorithms for automated segmentation of MRIs of patients with brain tumors, there are few reports on the variability of segmentation results. The purpose of this study was to obtain benchmark measures of repeatability for a widely accessible software program, BraTumIA (Versions 1.2 and 2.0), which uses a machine-learning algorithm to segment tumor features on contrast-enhanced brain MR imaging. MATERIALS AND METHODS: Automatic segmentation of enhancing tumor, tumor edema, nonenhancing tumor, and necrosis was performed on repeat MR imaging scans obtained approximately 2 days apart in 20 patients with recurrent glioblastoma. Measures of repeatability and spatial overlap, including repeatability and Dice coefficients, are reported. RESULTS: Larger volumes of enhancing tumor were obtained on later compared with earlier scans (mean, 26.3 versus 24.2 mL for BraTumIA 1.2; P < .05; and 24.9 versus 22.9 mL for BraTumIA 2.0, P < .01). In terms of percentage change, repeatability coefficients ranged from 31% to 46% for enhancing tumor and edema components and from 87% to 116% for nonenhancing tumor and necrosis. Dice coefficients were highest (>0.7) for enhancing tumor and edema components, intermediate for necrosis, and lowest for nonenhancing tumor and did not differ between software versions. Enhancing tumor and tumor edema were smaller, and necrotic tumor larger using BraTumIA 2.0 rather than 1.2. CONCLUSIONS: Repeatability and overlap metrics varied by segmentation type, with better performance for segmentations of enhancing tumor and tumor edema compared with other components. Incomplete washout of gadolinium contrast agents could account for increasing enhancing tumor volumes on later scans.

A novel CAD system to automatically detect cancerous lung nodules using wavelet transform and SVM

  • Abu Baker, Ayman A.
  • Ghadi, Yazeed
International Journal of Electrical and Computer Engineering (IJECE) 2020 Journal Article, cited 0 times
Website
A novel cancerous nodule detection algorithm for computed tomography (CT) images is presented in this paper. CT images are large, high-resolution images. In some cases, cancerous lung nodule lesions may be missed by the radiologist due to fatigue. The CAD system proposed in this paper can help the radiologist detect cancerous nodules in CT images. The proposed algorithm is divided into four stages. In the first stage, an enhancement algorithm is implemented to highlight the suspicious regions. In the second stage, the regions of interest are detected. Adaptive SVM and wavelet transform techniques are then used to reduce the detected false-positive regions. The algorithm was evaluated on 60 cases (normal and cancerous) and shows high sensitivity in detecting cancerous lung nodules, with a TP ratio of 94.5% and an FP ratio of 7 clusters/image.
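
A hedged sketch of a wavelet-plus-SVM false-positive reduction step in the spirit of this method; the wavelet family, patch size, features, and labels are illustrative assumptions, not the paper's configuration.

```python
# Wavelet features of candidate ROIs feeding an SVM classifier.
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_features(patch: np.ndarray) -> np.ndarray:
    # One-level 2D discrete wavelet transform; use subband variability
    # as a compact texture descriptor.
    cA, (cH, cV, cD) = pywt.dwt2(patch, "db4")
    return np.array([c.std() for c in (cA, cH, cV, cD)])

rng = np.random.default_rng(0)
patches = rng.normal(size=(60, 32, 32))   # candidate ROIs from CT slices
labels = rng.integers(0, 2, size=60)      # 1 = true nodule, 0 = false positive
X = np.array([wavelet_features(p) for p in patches])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))
```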

Multimodal Segmentation with MGF-Net and the Focal Tversky Loss Function

  • Abraham, Nabila
  • Khan, Naimul Mefraz
2020 Book Section, cited 3 times
Website
In neuro-imaging, MRI is commonly used to acquire multiple sequences simultaneously, including T1, T2 and FLAIR. Multimodal image segmentation involves learning an optimal, joint representation of these sequences for accurate delineation of the region of interest. The most commonly utilized fusion scheme for multimodal segmentation is early fusion, where each modality sequence is treated as an independent channel. In this work, we propose a fusion architecture termed the Moment Gated Fusion (MGF) network, which combines feature moments from individual modality sequences for the segmentation task. We supervise our network with a variant of the focal Tversky loss function. Our architecture promotes explainability and lightweight CNN design, and has achieved 0.687, 0.843 and 0.751 DSC scores on the BraTS 2019 test cohort, which is competitive with the commonly used vanilla U-Net.
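
A common formulation of the focal Tversky loss is TI = TP / (TP + α·FN + β·FP) with loss (1 − TI)^γ. The PyTorch sketch below uses typical parameter values from the focal Tversky literature, not necessarily this chapter's exact settings.

```python
# Focal Tversky loss for binary segmentation.
import torch

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-6):
    """pred: sigmoid probabilities; target: binary mask of the same shape."""
    tp = (pred * target).sum()
    fn = ((1 - pred) * target).sum()
    fp = (pred * (1 - target)).sum()
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1 - tversky) ** gamma  # focal exponent down-weights easy examples

pred = torch.rand(2, 1, 64, 64, requires_grad=True)
target = (torch.rand(2, 1, 64, 64) > 0.5).float()
loss = focal_tversky_loss(pred, target)
loss.backward()
print(loss.item())
```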

Automated grading of prostate cancer using convolutional neural network and ordinal class classifier

  • Abraham, Bejoy
  • Nair, Madhu S.
Informatics in Medicine Unlocked 2019 Journal Article, cited 0 times
Website
Prostate Cancer (PCa) is one of the most prominent cancers among men. Early diagnosis and treatment planning are significant in reducing the mortality rate due to PCa. Accurate prediction of grade is required to ensure prompt treatment for cancer. Grading of prostate cancer can be considered an ordinal class classification problem. This paper presents a novel method for the grading of prostate cancer from multiparametric magnetic resonance images using a VGG-16 Convolutional Neural Network and an Ordinal Class Classifier with J48 as the base classifier. Multiparametric magnetic resonance images of the PROSTATEx-2 2017 grand challenge dataset are employed for this work. The method achieved a moderate quadratic weighted kappa score of 0.4727 in grading PCa into 5 grade groups, which is higher than that of state-of-the-art methods. The method also achieved a positive predictive value of 0.9079 in predicting clinically significant prostate cancer.
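
The quadratic weighted kappa used as the grading metric can be computed directly with scikit-learn; a toy sketch on placeholder grade-group labels (1-5):

```python
# Quadratic weighted kappa between true and predicted grade groups.
from sklearn.metrics import cohen_kappa_score

y_true = [1, 2, 2, 3, 4, 5, 3, 2, 1, 4]
y_pred = [1, 2, 3, 3, 4, 4, 3, 2, 2, 5]
print(cohen_kappa_score(y_true, y_pred, weights="quadratic"))
```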

Computer-aided classification of prostate cancer grade groups from MRI images using texture features and stacked sparse autoencoder

  • Abraham, Bejoy
  • Nair, Madhu S
Computerized Medical Imaging and Graphics 2018 Journal Article, cited 1 times
Website

Computer-aided diagnosis of clinically significant prostate cancer from MRI images using sparse autoencoder and random forest classifier

  • Abraham, Bejoy
  • Nair, Madhu S
Biocybernetics and Biomedical Engineering 2018 Journal Article, cited 0 times
Website

Brain Tumor Segmentation Based on Deep Learning's Feature Representation

  • Aboussaleh, Ilyasse
  • Riffi, Jamal
  • Mahraz, Adnane Mohamed
  • Tairi, Hamid
2021 Journal Article, cited 0 times
Website
Brain tumors are considered one of the most serious causes of death in the world; thus, it is very important to detect them as early as possible. Many approaches have been proposed to predict and segment tumors. However, they suffer from different problems, such as the need for specialist intervention, long run-times, and the choice of an appropriate feature extractor. To address these issues, we propose an approach based on a convolutional neural network architecture that predicts and segments a cerebral tumor simultaneously. The proposal is divided into two phases. First, to avoid using labeled images, which imply subjective intervention by a specialist, we used a simple binary annotation that reflects whether a tumor exists or not. Second, the prepared image data were fed into our deep learning model, from which the final classification was obtained; if the classification indicated the existence of a tumor, the brain tumor was segmented based on the feature representations generated by the convolutional neural network architectures. The proposed method was trained on the BraTS 2017 dataset with different types of gliomas. The achieved results show the performance of the proposed approach in terms of accuracy, precision, recall and Dice similarity coefficient. Our model showed an accuracy of 91% in tumor classification and a Dice similarity coefficient of 82.35% in tumor segmentation.

Clinical implementation of artificial intelligence in neuroradiology with development of a novel workflow-efficient picture archiving and communication system-based automated brain tumor segmentation and radiomic feature extraction

  • Aboian, M.
  • Bousabarah, K.
  • Kazarian, E.
  • Zeevi, T.
  • Holler, W.
  • Merkaj, S.
  • Cassinelli Petersen, G.
  • Bahar, R.
  • Subramanian, H.
  • Sunku, P.
  • Schrickel, E.
  • Bhawnani, J.
  • Zawalich, M.
  • Mahajan, A.
  • Malhotra, A.
  • Payabvash, S.
  • Tocino, I.
  • Lin, M.
  • Westerhoff, M.
Front Neurosci 2022 Journal Article, cited 0 times
Website
Purpose: Personalized interpretation of medical images is critical for optimum patient care, but current tools available to physicians to perform quantitative analysis of patients' medical images in real time are significantly limited. In this work, we describe a novel platform within PACS for volumetric analysis of images, and thus for development of the large expert-annotated datasets, built in parallel with the radiologist's reading, that are critically needed for development of clinically meaningful AI algorithms. Specifically, we implemented a deep learning-based algorithm for automated brain tumor segmentation and radiomics extraction, and embedded it into PACS to accelerate a supervised, end-to-end workflow for image annotation and radiomic feature extraction. Materials and methods: An algorithm was trained to segment whole primary brain tumors on FLAIR images from the multi-institutional glioma BraTS 2021 dataset. The algorithm was validated using an internal dataset from Yale New Haven Health (YNHH) and compared (by Dice similarity coefficient [DSC]) with radiologist manual segmentation. A UNETR deep-learning model was embedded into the Visage 7 (Visage Imaging, Inc., San Diego, CA, United States) diagnostic workstation. The automatic brain tumor segmentation could be manually modified. PyRadiomics (Harvard Medical School, Boston, MA) was natively embedded into Visage 7 for feature extraction from the brain tumor segmentations. Results: UNETR brain tumor segmentation took on average 4 s and the median DSC was 86%, which is similar to published literature but lower than the RSNA ASNR MICCAI BraTS challenge 2021. Finally, extraction of 106 radiomic features within PACS took on average 5.8 +/- 0.01 s. The extracted radiomic features did not vary over time of extraction or whether they were extracted within PACS or outside of PACS. The ability to perform segmentation and feature extraction before the radiologist opens the study was made available in the workflow. Opening the study in PACS allows the radiologist to verify the segmentation and thus annotate the study. Conclusion: Integration of image processing algorithms for tumor auto-segmentation and feature extraction into PACS allows curation of large datasets of annotated medical images and can accelerate translation of research into development of personalized medicine applications in the clinic. The ability to use familiar clinical tools to revise AI segmentations, together with natively embedding the segmentation and radiomic feature-extraction tools on the diagnostic workstation, accelerates the generation of ground-truth data.

Comparison of MR Preprocessing Strategies and Sequences for Radiomics-Based MGMT Prediction

  • Abler, Daniel
  • Andrearczyk, Vincent
  • Oreiller, Valentin
  • Garcia, Javier Barranco
  • Vuong, Diem
  • Tanadini-Lang, Stephanie
  • Guckenberger, Matthias
  • Reyes, Mauricio
  • Depeursinge, Adrien
2022 Book Section, cited 0 times
Website
Hypermethylation of the O6-methylguanine-DNA-methyltransferase (MGMT) promoter in glioblastoma (GBM) is a predictive biomarker associated with improved treatment outcome. In clinical practice, MGMT methylation status is determined by biopsy or after surgical removal of the tumor. This study aims to investigate the feasibility of non-invasive medical imaging based “radio-genomic” surrogate markers of MGMT methylation status. The imaging dataset of the RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) challenge allows exploring radiomics strategies for MGMT prediction in a large and very heterogeneous dataset that represents a variety of real-world imaging conditions including different imaging protocols and devices. To characterize and optimize MGMT prediction strategies under these conditions, we examined different image preprocessing approaches and their effect on the average prediction performance of simple radiomics models. We found features derived from FLAIR images to be most informative for MGMT prediction, particularly if aggregated over the entire (enhancing and non-enhancing) tumor with or without inclusion of the edema. Our results also indicate that the imaging characteristics of the tumor region can distort MR-bias-field correction in a way that negatively affects the prediction performance of the derived models.
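
One preprocessing step this chapter examines is MR bias-field correction. A hedged sketch of N4 bias-field correction via SimpleITK follows; the file names are placeholders, and the study compares several preprocessing variants rather than prescribing this one.

```python
# N4 bias-field correction of a FLAIR volume with SimpleITK.
import SimpleITK as sitk

image = sitk.ReadImage("flair.nii.gz", sitk.sitkFloat32)
mask = sitk.OtsuThreshold(image, 0, 1, 200)   # rough foreground/head mask

corrector = sitk.N4BiasFieldCorrectionImageFilter()
corrected = corrector.Execute(image, mask)
sitk.WriteImage(corrected, "flair_n4.nii.gz")
```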

Automated classification of acute leukemia on a heterogeneous dataset using machine learning and deep learning techniques

  • Abhishek, Arjun
  • Jha, Rajib Kumar
  • Sinha, Ruchi
  • Jha, Kamlesh
Biomedical Signal Processing and Control 2022 Journal Article, cited 2 times
Website
Today, artificial intelligence and deep learning techniques constitute a prominent part of the medical sciences. These techniques help doctors detect diseases early and reduce their burden as well as the chance of errors. However, experiments based on deep learning techniques require large, well-annotated datasets. This paper introduces a novel dataset of 500 peripheral blood smear images containing normal, Acute Myeloid Leukemia and Acute Lymphoblastic Leukemia images. The dataset comprises almost 1700 cancerous blood cells. The size of the dataset is increased by adding images from a publicly available dataset, forming a heterogeneous dataset. The heterogeneous dataset is used for the automated binary classification task, which is one of the major tasks of the proposed work. The proposed work performs binary as well as three-class classification tasks using state-of-the-art techniques based on machine learning and deep learning. For binary classification, the proposed work achieved an accuracy of 97% when the fully connected layers along with the last three convolutional layers of VGG16 are fine-tuned, and 98% for DenseNet121 combined with a support vector machine. For the three-class classification task, an accuracy of 95% is obtained for ResNet50 combined with a support vector machine. The preparation of the novel dataset was guided by several experts, which will help the scientific community in medical research supported by machine learning models.

A generalized framework for medical image classification and recognition

  • Abedini, M
  • Codella, NCF
  • Connell, JH
  • Garnavi, R
  • Merler, M
  • Pankanti, S
  • Smith, JR
  • Syeda-Mahmood, T
IBM Journal of Research and Development 2015 Journal Article, cited 19 times
Website
In this work, we study the performance of a two-stage ensemble visual machine learning framework for classification of medical images. In the first stage, models are built for subsets of features and data, and in the second stage, models are combined. We demonstrate the performance of this framework in four contexts: 1) the public ImageCLEF (Cross Language Evaluation Forum) 2013 medical modality recognition benchmark, 2) echocardiography view and mode recognition, 3) dermatology disease recognition across two datasets, and 4) a broad medical image dataset, merged from multiple data sources into a collection of 158 categories covering both general and specific medical concepts, including modalities, body regions, views, and disease states. In the first context, the presented system achieves state-of-the-art performance of 82.2% multiclass accuracy. In the second context, the system attains 90.48% multiclass accuracy. In the third, state-of-the-art performance of 90% specificity and 90% sensitivity is obtained on a small standardized dataset of 200 images using a leave-one-out strategy. For a larger dataset of 2,761 images, 95% specificity and 98% sensitivity is obtained on a 20% held-out test set. Finally, in the fourth context, the system achieves sensitivity and specificity of 94.7% and 98.4%, respectively, demonstrating the ability to generalize over domains.

Robust Computer-Aided Detection of Pulmonary Nodules from Chest Computed Tomography

  • Abduh, Zaid
  • Wahed, Manal Abdel
  • Kadah, Yasser M
2016 Journal Article, cited 5 times
Website
Detection of pulmonary nodules in chest computed tomography scans plays an important role in the early diagnosis of lung cancer. A simple yet effective computer-aided detection system is developed to distinguish pulmonary nodules in chest CT scans. The proposed system includes feature extraction, normalization, selection and classification steps. One hundred forty-nine gray-level statistical features are extracted from selected regions of interest. Min-max normalization is used, followed by a sequential forward feature selection technique with a logistic regression model as the criterion function, which selected an optimal set of five features for classification. Classification was done using nearest-neighbor and support vector machine (SVM) classifiers with separate training and testing sets. Several measures were used to evaluate system performance, including the area under the ROC curve (AUC), sensitivity, specificity, precision, accuracy, F1 score and Cohen's kappa. Excellent performance with high sensitivity and specificity is reported using data from two reference datasets, as compared with previous work.
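
A minimal scikit-learn sketch of the described pipeline (min-max normalisation, forward selection of five features with a logistic-regression criterion, then an SVM), using synthetic stand-ins for the 149 grey-level features:

```python
# Min-max scaling + sequential forward selection + SVM classifier.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 149))         # ROI feature vectors (placeholder)
y = rng.integers(0, 2, size=120)        # nodule vs. non-nodule labels

pipe = make_pipeline(
    MinMaxScaler(),
    SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                              n_features_to_select=5, direction="forward"),
    SVC(kernel="rbf"),
)
pipe.fit(X, y)
print(pipe.score(X, y))
```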

Malignancy Classification of Lung Nodule Based on Accumulated Multi Planar Views and Canonical Correlation Analysis

  • Abdelrahman, Shimaa A.
  • Abdelwahab, Moataz M.
  • Sayed, Mohammed S.
2018 Conference Paper, cited 1 times
Website
The appearance of a small round or oval-shaped opacity in a computed tomography (CT) scan of the lung raises suspicion of lung cancer. To avoid misdiagnosis of lung cancer at an early stage, Computer Aided Diagnosis (CAD) assists oncologists in classifying pulmonary nodules as malignant (cancerous) or benign (non-cancerous). This paper introduces a novel approach to pulmonary nodule classification employing three accumulated views (top, front, and side) of CT slices and Canonical Correlation Analysis (CCA). The nodule is extracted from each 2D CT slice to obtain the Region of Interest (ROI) patch. All patches from sequential slices are accumulated in the three different views. The vector representation of each view is correlated with two training sets, malignant and benign, employing CCA in the spatial and Radon Transform (RT) domains. According to the correlation coefficients, each view is classified, and the final classification decision is taken by priority decision. For training and testing, scans of 1010 patients were downloaded from the Lung Image Database Consortium (LIDC). The final results show that the proposed method achieved the best performance, with an accuracy of 90.93%, compared with existing methods.

Three-dimensional visualization of brain tumor progression based accurate segmentation via comparative holographic projection

  • Abdelazeem, R. M.
  • Youssef, D.
  • El-Azab, J.
  • Hassab-Elnaby, S.
  • Agour, M.
PLoS One 2020 Journal Article, cited 0 times
Website
We propose a new optical method based on comparative holographic projection for visual comparison between two abnormal follow-up magnetic resonance (MR) exams of glioblastoma patients to effectively visualize and assess tumor progression. First, the brain tissue and tumor areas are segmented from the MR exams using the fast marching method (FMM). The FMM approach is implemented on a computed pixel-weight matrix based on an automated selection of a set of initialized target points. Thereafter, the associated phase holograms are calculated for the segmented structures based on an adaptive iterative Fourier transform algorithm (AIFTA). Within this approach, spatial multiplexing is applied to reduce speckle noise. Furthermore, hologram modulation is performed to represent two different reconstruction schemes. In both schemes, all calculated holograms are superimposed into a single two-dimensional (2D) hologram, which is then displayed on a reflective phase-only spatial light modulator (SLM) for optical reconstruction. The optical reconstruction of the first scheme displays a 3D map of the tumor, allowing visualization of the tumor volume after treatment and at progression, whereas the second scheme displays the follow-up exams in a side-by-side mode highlighting tumor areas, so that each case can be assessed quickly. The proposed system can be used as a valuable tool for interpretation and assessment of tumor progression with respect to the treatment method, providing an improvement in diagnosis and treatment planning.

Two-phase multi-model automatic brain tumour diagnosis system from magnetic resonance images using convolutional neural networks

  • Abd-Ellah, Mahmoud Khaled
  • Awad, Ali Ismail
  • Khalaf, Ashraf AM
  • Hamed, Hesham FA
EURASIP Journal on Image and Video Processing 2018 Journal Article, cited 0 times
Website

Brain Tumor Detection and Classification on MR Images by a Deep Wavelet Auto-Encoder Model

  • Abd El Kader, Isselmou
  • Xu, Guizhi
  • Shuai, Zhang
  • Saminu, Sani
  • Javaid, Imran
  • Ahmad, Isah Salim
  • Kamhi, Souha
Diagnostics 2021 Journal Article, cited 16 times
Website
The process of diagnosing brain tumors is very complicated for many reasons, including the brain's synaptic structure, size, and shape. Machine learning techniques are employed to help doctors detect brain tumors and support their decisions. In recent years, deep learning techniques have made great achievements in medical image analysis. This paper proposes a deep wavelet auto-encoder model, named the "DWAE model", employed to classify each input slice as tumor (abnormal) or non-tumor (normal). A high-pass filter was used to show the heterogeneity of the MRI images and was integrated with the input images, and a median filter was utilized to merge slices. We improved the quality of the output slices by highlighting edges and smoothing the input MR brain images. Then, we applied a 4-connected seed-growing method, since thresholding clusters equal pixels in the input MR data. The segmented MR image slices are fed to the proposed two-layer deep wavelet auto-encoder model, with 200 hidden units in the first layer and 400 hidden units in the second layer. A softmax layer is trained and tested to identify normal and abnormal MR images. The contribution of the deep wavelet auto-encoder model lies in the analysis of the pixel patterns of MR brain images and the ability to detect and classify tumors with high accuracy, short runtime, and low validation loss. To train and test the overall performance of the proposed model, we utilized 2500 MR brain images from BRATS2012, BRATS2013, BRATS2014, BRATS2015, the 2015 challenge, and ISLES, consisting of normal and abnormal images. The experimental results show that the proposed model achieved an accuracy of 99.3%, a validation loss of 0.1, and low FPR and FNR values. This result demonstrates that the proposed DWAE model can facilitate the automatic detection of brain tumors.

NS-HGlio: A Generalizable and Repeatable HGG Segmentation and Volumetric measurement AI Algorithm for the Longitudinal MRI Assessment to Inform RANO in Trials and Clinics

  • Abayazeed, Aly H.
  • Abbassy, Ahmed
  • Mueller, Michael
  • Hill, Michael
  • Qayati, Mohamed
  • Mohamed, Shady
  • Mekhaimar, Mahmoud
  • Raymond, Catalina
  • Dubey, Prachi
  • Nael, Kambiz
  • Rohatgi, Saurabh
  • Kapare, Vaishali
  • Kulkarni, Ashwini
  • Shiang, Tina
  • Kumar, Atul
  • Andratschke, Nicolaus
  • Willmann, Jonas
  • Brawanski, Alexander
  • De Jesus, Reordan
  • Tuna, Ibrahim
  • Fung, Steve H.
  • Landolfi, Joseph C.
  • Ellingson, Benjamin M.
  • Reyes, Mauricio
Neuro-oncology advances 2022 Journal Article, cited 0 times
Website
Background: Accurate and repeatable measurement of high-grade glioma (HGG) enhancing (Enh.) and T2/FLAIR hyperintensity/edema (Ed.) is required for monitoring treatment response. 3D measurements can be used to inform the modified Response Assessment in Neuro-oncology (mRANO) criteria. We aim to develop an HGG volumetric measurement and visualisation AI algorithm that is generalizable and repeatable. Material and methods: A single 3D convolutional neural network (CNN), NS-HGlio, analysing HGG on MRI, was developed with 5-fold cross-validation using a retrospective (557 MRIs), multicentre (38 sites) and multivendor (32 scanners) dataset divided into training (70%), validation (20%) and testing (10%) sets. Six neuroradiologists created the ground truth (GT). Additional internal validation (IV, three institutions) using 70 MRIs and external validation (EV, single institution) using 40 MRIs were performed through the Dice Similarity Coefficient (DSC) of the Enh., Ed. and Enh. + Ed. (Whole Lesion/WL) labels, together with repeatability testing on 14 subjects from the TCIA MGH-QIN-GBM dataset using volume correlations between timepoints. Results: IV preoperative median DSC: Enh. 0.89 (SD 0.11), Ed. 0.88 (0.28), WL 0.88 (0.11). EV preoperative median DSC: Enh. 0.82 (0.09), Ed. 0.83 (0.11), WL 0.86 (0.06). IV postoperative median DSC: Enh. 0.77 (SD 0.20), Ed. 0.78 (SD 0.09), WL 0.78 (SD 0.11). EV postoperative median DSC: Enh. 0.75 (0.21), Ed. 0.74 (0.12), WL 0.79 (0.07). Repeatability testing: intraclass correlation coefficient (ICC) of 0.95 for Enh. and 0.92 for Ed. Conclusion: NS-HGlio is accurate, repeatable, and generalizable. The output can be used for visualization, documentation, treatment response monitoring, radiation planning, intra-operative targeting, and estimation of Residual Tumor Volume (RTV), among others.

LiverNet: efficient and robust deep learning model for automatic diagnosis of sub-types of liver hepatocellular carcinoma cancer from H&E stained liver histopathology images

  • Aatresh, A. A.
  • Alabhya, K.
  • Lal, S.
  • Kini, J.
  • Saxena, P. U. P.
Int J Comput Assist Radiol Surg 2021 Journal Article, cited 0 times
Website
PURPOSE: Liver cancer is one of the most common types of cancers in Asia with a high mortality rate. A common method for liver cancer diagnosis is the manual examination of histopathology images. Due to its laborious nature, we focus on alternate deep learning methods for automatic diagnosis, providing significant advantages over manual methods. In this paper, we propose a novel deep learning framework to perform multi-class cancer classification of liver hepatocellular carcinoma (HCC) tumor histopathology images which shows improvements in inference speed and classification quality over other competitive methods. METHOD: The BreastNet architecture proposed by Togacar et al. shows great promise in using convolutional block attention modules (CBAM) for effective cancer classification in H&E stained breast histopathology images. As part of our experiments with this framework, we have studied the addition of atrous spatial pyramid pooling (ASPP) blocks to effectively capture multi-scale features in H&E stained liver histopathology data. We classify liver histopathology data into four classes, namely the non-cancerous class, low sub-type liver HCC tumor, medium sub-type liver HCC tumor, and high sub-type liver HCC tumor. To prove the robustness and efficacy of our models, we have shown results for two liver histopathology datasets: a novel KMC dataset and the TCGA dataset. RESULTS: Our proposed architecture outperforms state-of-the-art architectures for multi-class cancer classification of HCC histopathology images, not just in terms of quality of classification, but also in computational efficiency on the novel proposed KMC liver data and the publicly available TCGA-LIHC dataset. We have considered precision, recall, F1-score, intersection over union (IoU), accuracy, number of parameters, and FLOPs as metrics for comparison. The results of our meticulous experiments have shown improved classification performance along with added efficiency. LiverNet has been observed to outperform all other frameworks in all metrics under comparison with an approximate improvement of [Formula: see text] in accuracy and F1-score on the KMC and TCGA-LIHC datasets. CONCLUSION: To the best of our knowledge, our work is among the first to provide concrete proof and demonstrate results for a successful deep learning architecture to handle multi-class HCC histopathology image classification among various sub-types of liver HCC tumor. Our method shows a high accuracy of [Formula: see text] on the proposed KMC liver dataset requiring only 0.5739 million parameters and 1.1934 million floating point operations per second.
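
For illustration, the sketch below shows an atrous spatial pyramid pooling (ASPP) block of the general kind the paper adds for multi-scale feature capture; the channel counts and dilation rates are illustrative assumptions, not the paper's configuration.

```python
# ASPP block: parallel dilated convolutions concatenated and fused.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch=64, out_ch=64, rates=(1, 6, 12, 18)):
        super().__init__()
        # One 3x3 convolution per dilation rate; padding=rate keeps size.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 64, 56, 56)  # stand-in features from an H&E image patch
print(ASPP()(x).shape)          # torch.Size([1, 64, 56, 56])
```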