The image biomarker standardization initiative: standardized quantitative radiomics for high-throughput image-based phenotyping

  • Zwanenburg, Alex
  • Vallieres, Martin
  • Abdalah, Mahmoud A
  • Aerts, Hugo J W L
  • Andrearczyk, Vincent
  • Apte, Aditya
  • Ashrafinia, Saeed
  • Bakas, Spyridon
  • Beukinga, Roelof J
  • Boellaard, Ronald
  • Bogowicz, Marta
  • Boldrini, Luca
  • Buvat, Irene
  • Cook, Gary J R
  • Davatzikos, Christos
  • Depeursinge, Adrien
  • Desseroit, Marie-Charlotte
  • Dinapoli, Nicola
  • Dinh, Cuong Viet
  • Echegaray, Sebastian
  • El Naqa, Issam
  • Fedorov, Andriy Y
  • Gatta, Roberto
  • Gillies, Robert J
  • Goh, Vicky
  • Gotz, Michael
  • Guckenberger, Matthias
  • Ha, Sung Min
  • Hatt, Mathieu
  • Isensee, Fabian
  • Lambin, Philippe
  • Leger, Stefan
  • Leijenaar, Ralph T H
  • Lenkowicz, Jacopo
  • Lippert, Fiona
  • Losnegard, Are
  • Maier-Hein, Klaus H
  • Morin, Olivier
  • Muller, Henning
  • Napel, Sandy
  • Nioche, Christophe
  • Orlhac, Fanny
  • Pati, Sarthak
  • Pfaehler, Elisabeth A G
  • Rahmim, Arman
  • Rao, Arvind U K
  • Scherer, Jonas
  • Siddique, Muhammad Musib
  • Sijtsema, Nanna M
  • Socarras Fernandez, Jairo
  • Spezi, Emiliano
  • Steenbakkers, Roel J H M
  • Tanadini-Lang, Stephanie
  • Thorwarth, Daniela
  • Troost, Esther G C
  • Upadhaya, Taman
  • Valentini, Vincenzo
  • van Dijk, Lisanne V
  • van Griethuysen, Joost
  • van Velden, Floris H P
  • Whybra, Philip
  • Richter, Christian
  • Löck, Steffen
Radiology 2020 Journal Article, cited 247 times
Website

Assessing robustness of radiomic features by image perturbation

  • Zwanenburg, Alex
  • Leger, Stefan
  • Agolli, Linda
  • Pilz, Karoline
  • Troost, Esther G C
  • Richter, Christian
  • Löck, Steffen
2019 Journal Article, cited 0 times
Website
Image features need to be robust against differences in positioning, acquisition and segmentation to ensure reproducibility. Radiomic models that only include robust features can be used to analyse new images, whereas models with non-robust features may fail to predict the outcome of interest accurately. Test-retest imaging is recommended to assess robustness, but may not be available for the phenotype of interest. We therefore investigated 18 combinations of image perturbations to determine feature robustness, based on noise addition (N), translation (T), rotation (R), volume growth/shrinkage (V) and supervoxel-based contour randomisation (C). Test-retest and perturbation robustness were compared for a combined total of 4032 morphological, statistical and texture features that were computed from the gross tumour volume in two cohorts with computed tomography imaging: I) 31 non-small-cell lung cancer (NSCLC) patients; II) 19 head-and-neck squamous cell carcinoma (HNSCC) patients. Robustness was determined using the 95% confidence interval (CI) of the intraclass correlation coefficient ICC(1,1). Features with CI ≥ 0.90 were considered robust. The NTCV, TCV, RNCV and RCV perturbation chains produced similar results and identified the fewest false positive robust features (NSCLC: 0.2-0.9%; HNSCC: 1.7-1.9%). Thus, these perturbation chains may be used as an alternative to test-retest imaging to assess feature robustness.
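To make the robustness criterion concrete: a minimal numpy/scipy sketch of ICC(1,1) with its F-based 95% CI, following the Shrout and Fleiss formulas. The simulated feature matrix and the 0.90 lower-bound check are illustrative; they are not the study's data or code.

```python
import numpy as np
from scipy import stats

def icc_1_1(x, alpha=0.05):
    """One-way random-effects ICC(1,1) with a (1 - alpha) confidence interval.
    x: (n_subjects, k_measurements) array, e.g. a feature computed on the
    original and a perturbed image (k = 2)."""
    n, k = x.shape
    ms_between = k * ((x.mean(axis=1) - x.mean()) ** 2).sum() / (n - 1)
    ms_within = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    f = ms_between / ms_within                     # F-based CI (Shrout & Fleiss)
    f_low = f / stats.f.ppf(1 - alpha / 2, n - 1, n * (k - 1))
    f_up = f * stats.f.ppf(1 - alpha / 2, n * (k - 1), n - 1)
    return icc, (f_low - 1) / (f_low + k - 1), (f_up - 1) / (f_up + k - 1)

values = np.random.default_rng(0).normal(size=(31, 2))  # 31 patients, 2 measurements
icc, lower, upper = icc_1_1(values)
robust = lower >= 0.90    # the robustness threshold described above
```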

Automatic classification of lung nodule candidates based on a novel 3D convolution network and knowledge transferred from a 2D network

  • Zuo, Wangxia
  • Zhou, Fuqiang
  • He, Yuzhu
  • Li, Xiaosong
Med Phys 2019 Journal Article, cited 0 times
Website
OBJECTIVE: In the automatic lung nodule detection system, the authenticity of a large number of nodule candidates needs to be judged, which is a classification task. However, the variable shapes and sizes of the lung nodules have posed a great challenge to the classification of candidates. To solve this problem, we propose a method for classifying nodule candidates through a three-dimensional (3D) convolution neural network (ConvNet) model which is trained by transferring knowledge from a multiresolution two-dimensional (2D) ConvNet model. METHODS: In this scheme, a novel 3D ConvNet model is preweighted with the weights of the trained 2D ConvNet model, and then the 3D ConvNet model is trained with 3D image volumes. In this way, the knowledge transfer method can make the 3D network easier to converge and make full use of the spatial information of nodules with different sizes and shapes to improve the classification accuracy. RESULTS: The experimental results on 551,065 pulmonary nodule candidates in the LUNA16 dataset show that our method gains a competitive average score in the false-positive reduction track in lung nodule detection, with sensitivities of 0.619 and 0.642 at 0.125 and 0.25 FPs per scan, respectively. CONCLUSIONS: The proposed method can maintain satisfactory classification accuracy even when the false-positive rate is extremely small in the face of nodules of different sizes and shapes. Moreover, as a transfer learning idea, the method to transfer knowledge from a 2D ConvNet to a 3D ConvNet is the first attempt to carry out full migration of parameters of various layers including convolution layers, fully connected layers, and classifiers between different dimensional models, which is more conducive to utilizing the existing 2D ConvNet resources and generalizing transfer learning schemes.
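The abstract does not spell out how the 2D weights are mapped onto the 3D model, so the sketch below shows one common convention (I3D-style "inflation"): each 2D kernel is replicated along the depth axis and rescaled so the 3D layer initially responds to a constant-depth input like the 2D layer did. This illustrates the general idea, not the authors' exact scheme.

```python
import torch
import torch.nn as nn

def inflate_conv2d_to_3d(conv2d: nn.Conv2d, depth: int) -> nn.Conv3d:
    """Initialise a 3D convolution from a trained 2D one by replicating the
    2D kernel along depth and dividing by depth to preserve activation scale."""
    conv3d = nn.Conv3d(conv2d.in_channels, conv2d.out_channels,
                       kernel_size=(depth, *conv2d.kernel_size),
                       padding=(depth // 2, *conv2d.padding),
                       bias=conv2d.bias is not None)
    with torch.no_grad():
        w3d = conv2d.weight.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth
        conv3d.weight.copy_(w3d)
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d
```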

Prognostic value of baseline [18F]-fluorodeoxyglucose positron emission tomography parameters MTV, TLG and asphericity in an international multicenter cohort of nasopharyngeal carcinoma patients

  • Zschaeck, S.
  • Li, Y.
  • Lin, Q.
  • Beck, M.
  • Amthauer, H.
  • Bauersachs, L.
  • Hajiyianni, M.
  • Rogasch, J.
  • Ehrhardt, V. H.
  • Kalinauskaite, G.
  • Weingartner, J.
  • Hartmann, V.
  • van den Hoff, J.
  • Budach, V.
  • Stromberger, C.
  • Hofheinz, F.
PLoS One 2020 Journal Article, cited 1 times
Website
PURPOSE: [18F]-fluorodeoxyglucose (FDG) positron emission tomography (PET) parameters have shown prognostic value in nasopharyngeal carcinomas (NPC), mostly in monocenter studies. The aim of this study was to assess the prognostic impact of standard and novel PET parameters in a multicenter cohort of patients. METHODS: The established PET parameters metabolic tumor volume (MTV), total lesion glycolysis (TLG) and maximal standardized uptake value (SUVmax) as well as the novel parameter tumor asphericity (ASP) were evaluated in a retrospective multicenter cohort of 114 NPC patients with FDG-PET staging, treated with (chemo)radiation at 8 international institutions. Uni- and multivariable Cox regression and Kaplan-Meier analyses with respect to overall survival (OS), event-free survival (EFS), distant metastases-free survival (FFDM), and locoregional control (LRC) were performed for clinical and PET parameters. RESULTS: When analyzing metric PET parameters, ASP showed a significant association with EFS (p = 0.035) and a trend for OS (p = 0.058). MTV was significantly associated with EFS (p = 0.026), OS (p = 0.008) and LRC (p = 0.012) and TLG with LRC (p = 0.019). TLG and MTV showed a very high correlation (Spearman's rho = 0.95); therefore, TLG was subsequently not analysed further. Optimal cutoff values for defining high and low risk groups were determined by minimization of the p-value in univariate Cox regression considering all possible cutoff values. Generation of stable cutoff values was feasible for MTV (p<0.001), ASP (p = 0.023) and combination of both (MTV+ASP = occurrence of one or both risk factors, p<0.001) for OS and for MTV regarding the endpoints OS (p<0.001) and LRC (p<0.001). In multivariable Cox (age >55 years + one binarized PET parameter), MTV >11.1ml (hazard ratio (HR): 3.57, p<0.001) and ASP > 14.4% (HR: 3.2, p = 0.031) remained prognostic for OS. MTV additionally remained prognostic for LRC (HR: 4.86 p<0.001) and EFS (HR: 2.51 p = 0.004). Bootstrapping analyses showed that a combination of high MTV and ASP significantly improved prognostic value for OS compared to each single variable (p = 0.005 and p = 0.04, respectively). When using the cohort from China (n = 57 patients) for establishment of prognostic parameters and all other patients for validation (n = 57 patients), MTV could be successfully validated as a prognostic parameter regarding OS, EFS and LRC (all p-values <0.05 for both cohorts). CONCLUSIONS: In this analysis, PET parameters were associated with outcome of NPC patients. MTV showed a robust association with OS, EFS and LRC. Our data suggest that combination of MTV and ASP may potentially further improve the risk stratification of NPC patients.
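The cutoff optimisation described above can be sketched as an exhaustive scan over candidate thresholds (here scored with a log-rank test from lifelines; the function and variable names are ours). The minimal p-value over all cutoffs is optimistic, which is why the study backs it up with bootstrapping and a split-sample validation.

```python
import numpy as np
from lifelines.statistics import logrank_test

def best_cutoff(values, time, event):
    """Scan all candidate cutoffs of a metric PET parameter (e.g. MTV or ASP)
    and return the cutoff with the smallest log-rank p-value for survival."""
    best_cut, best_p = None, 1.0
    for cut in np.unique(values)[:-1]:          # keep both groups non-empty
        high = values > cut
        result = logrank_test(time[high], time[~high],
                              event_observed_A=event[high],
                              event_observed_B=event[~high])
        if result.p_value < best_p:
            best_cut, best_p = cut, result.p_value
    return best_cut, best_p
```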

Combination of tumor asphericity and an extracellular matrix-related prognostic gene signature in non-small cell lung cancer patients

  • Zschaeck, S.
  • Klinger, B.
  • van den Hoff, J.
  • Cegla, P.
  • Apostolova, I.
  • Kreissl, M. C.
  • Cholewinski, W.
  • Kukuk, E.
  • Strobel, H.
  • Amthauer, H.
  • Bluthgen, N.
  • Zips, D.
  • Hofheinz, F.
2023 Journal Article, cited 0 times
Website
One important aim of precision oncology is a personalized treatment of patients. This can be achieved by various biomarkers; especially imaging parameters and gene expression signatures are commonly used. So far, combination approaches are sparse. The aim of the study was to independently validate the prognostic value of the novel positron emission tomography (PET) parameter tumor asphericity (ASP) in non-small cell lung cancer (NSCLC) patients and to investigate associations between published gene expression profiles and ASP. This was a retrospective evaluation of PET imaging and gene expression data from three public databases and two institutional datasets. The whole cohort comprised 253 NSCLC patients, all treated with curative-intent surgery. Clinical parameters, standard PET parameters and ASP were evaluated in all patients. Additional gene expression data were available for 120 patients. Univariate Cox regression and Kaplan-Meier analyses were performed for the primary endpoint progression-free survival (PFS) and additional endpoints. Furthermore, multivariate Cox regression testing was performed including clinically significant parameters, ASP, and the extracellular matrix-related prognostic gene signature (EPPI). In the whole cohort, a significant association with PFS was observed for ASP (p < 0.001) and EPPI (p = 0.012). Upon multivariate testing, EPPI remained significantly associated with PFS (p = 0.018) in the subgroup of patients with additional gene expression data, while ASP was significantly associated with PFS in the whole cohort (p = 0.012). In stage II patients, ASP was significantly associated with PFS (p = 0.009), and a previously published cutoff value for ASP (19.5%) was successfully validated (p = 0.008). In patients with additional gene expression data, EPPI showed a significant association with PFS, too (p = 0.033). The exploratory combination of ASP and EPPI showed that the combinatory approach has potential to further improve patient stratification compared to the use of only one parameter. We report the first successful validation of EPPI and ASP in stage II NSCLC patients. The combination of both parameters seems to be a very promising approach for improvement of risk stratification in a group of patients with urgent need for a more personalized treatment approach.

Glioma Segmentation with 3D U-Net Backed with Energy-Based Post-Processing

  • Zsamboki, Richard
  • Takacs, Petra
  • Deak-Karancsi, Borbala
2021 Book Section, cited 0 times
This paper proposes a glioma segmentation method based on neural networks. The base of the network is a UNet, expanded by residual blocks. Several preprocessing steps were applied before training, such as intensity normalization, high intensity cutting, cropping, and random flips. 2D and 3D solutions were implemented and tested; the 3D network outperformed the 2D one, so the 3D approach was retained. The novelty of the method is the energy-based post-processing. Snakes [10] and conditional random fields (CRFs) [11] were applied to the neural network's predictions. A snake, or active contour, needs an initial outline around the object (e.g., the network's prediction outline) and corrects the tumor contour by finding an energy minimum based on the intensity values in a given area. A CRF is a specific type of graphical model; it uses the network's prediction and the raw image features to estimate the posterior distribution (the tumor contour) using energy function minimization. The proposed methods are evaluated within the framework of the BRATS 2020 challenge. Measured on the test dataset, the mean dice scores of the whole tumor (WT), tumor core (TC) and enhancing tumor (ET) are 86.9%, 83.2% and 81.8%, respectively. The results show high performance and promising future work in tumor segmentation, even outside of the brain.
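As an illustration of the snake step only: the sketch below seeds scikit-image's active contour with the network's predicted outline on a single 2D slice. The smoothing and energy weights are placeholders, not the paper's settings.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.measure import find_contours
from skimage.segmentation import active_contour

def refine_with_snake(image_slice: np.ndarray, pred_mask: np.ndarray) -> np.ndarray:
    """Refine a predicted tumor outline by letting an active contour settle
    on a nearby intensity-based energy minimum."""
    init = max(find_contours(pred_mask.astype(float), 0.5), key=len)  # longest outline
    return active_contour(gaussian(image_slice, sigma=2.0), init,
                          alpha=0.015, beta=10.0, gamma=0.001)  # (N, 2) row/col points
```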

Comparison of Active Learning Strategies Applied to Lung Nodule Segmentation in CT Scans

  • Zotova, Daria
  • Lisowska, Aneta
  • Anderson, Owen
  • Dilys, Vismantas
  • O’Neil, Alison
2019 Book Section, cited 0 times
Supervised machine learning techniques require large amounts of annotated training data to attain good performance. Active learning aims to ease the data collection process by automatically detecting which instances an expert should annotate in order to train a model as quickly and effectively as possible. Such strategies have previously been reported for medical imaging, but for tasks other than focal pathologies, where there is high class imbalance and heterogeneous background appearance. In this study we evaluate different data selection approaches (random, uncertain, and representative sampling) and a semi-supervised model training procedure (pseudo-labelling), in the context of lung nodule segmentation in CT volumes from the publicly available LIDC-IDRI dataset. We find that active learning strategies allow us to train a model with equal performance but less than half of the annotation effort; data selection by uncertainty sampling offers the most gain, with the incorporation of representativeness or the addition of pseudo-labelling giving further small improvements. We conclude that active learning is a valuable tool and that further development of these strategies can play a key role in making diagnostic algorithms viable.
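A minimal sketch of the uncertainty-sampling step, assuming an sklearn-style classifier with predict_proba over candidate instances; the batch size and loop are illustrative, not the paper's setup.

```python
import numpy as np

def uncertainty_sampling(model, pool_x, batch_size=10):
    """Select the pool instances whose predicted foreground probability is
    closest to 0.5, i.e. where the model is least certain."""
    probs = model.predict_proba(pool_x)[:, 1]
    uncertainty = -np.abs(probs - 0.5)            # higher = less certain
    return np.argsort(uncertainty)[-batch_size:]  # indices to send for annotation

# Loop: annotate the selected instances, retrain the model, and repeat until
# validation performance (e.g. Dice) stops improving.
```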

Generative Adversarial Networks for Brain MRI Synthesis: Impact of Training Set Size on Clinical Application

  • Zoghby, M. M.
  • Erickson, B. J.
  • Conte, G. M.
2024 Journal Article, cited 0 times
Website
We evaluated the impact of training set size on generative adversarial networks (GANs) to synthesize brain MRI sequences. We compared three sets of GANs trained to generate pre-contrast T1 (gT1) from post-contrast T1 and FLAIR (gFLAIR) from T2. The baseline models were trained on 135 cases; for this study, we used the same model architecture but a larger cohort of 1251 cases and two stopping rules, an early checkpoint (early models) and one after 50 epochs (late models). We tested all models on an independent dataset of 485 newly diagnosed gliomas. We compared the generated MRIs with the original ones using the structural similarity index (SSI) and mean squared error (MSE). We simulated scenarios where either the original T1, FLAIR, or both were missing and used their synthesized version as inputs for a segmentation model with the original post-contrast T1 and T2. We compared the segmentations using the dice similarity coefficient (DSC) for the contrast-enhancing area, non-enhancing area, and the whole lesion. For the baseline, early, and late models on the test set, for the gT1, median SSI was .957, .918, and .947; median MSE was .006, .014, and .008. For the gFLAIR, median SSI was .924, .908, and .915; median MSE was .016, .016, and .019. The DSC ranges were .625-.955, .420-.952, and .610-.954. Overall, GANs trained on a relatively small cohort performed similarly to those trained on a cohort ten times larger, making them a viable option for rare diseases or institutions with limited resources.
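The SSI/MSE comparison can be reproduced with scikit-image's metrics. The per-volume min-max normalisation below is our assumption, not necessarily the authors' preprocessing.

```python
import numpy as np
from skimage.metrics import mean_squared_error, structural_similarity

def compare_to_original(real: np.ndarray, generated: np.ndarray):
    """SSI and MSE between an acquired volume and its GAN-synthesized version."""
    norm = lambda v: (v - v.min()) / (v.max() - v.min() + 1e-8)
    real, generated = norm(real), norm(generated)
    ssi = structural_similarity(real, generated, data_range=1.0)
    mse = mean_squared_error(real, generated)
    return ssi, mse
```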

Upright walking has driven unique vascular specialization of the hominin ilium

  • Zirkle, Dexter
  • Meindl, Richard S
  • Lovejoy, C Owen
PeerJ 2021 Journal Article, cited 0 times
Website

New Diagnostics for Bipedality: The hominin ilium displays landmarks of a modified growth trajectory

  • Zirkle, Dexter
2022 Thesis, cited 0 times
Website

Distinct Radiomic Phenotypes Define Glioblastoma TP53-PTEN-EGFR Mutational Landscape

  • Zinn, Pascal O
  • Singh, Sanjay K
  • Kotrotsou, Aikaterini
  • Abrol, Srishti
  • Thomas, Ginu
  • Mosley, Jennifer
  • Elakkad, Ahmed
  • Hassan, Islam
  • Kumar, Ashok
  • Colen, Rivka R
Neurosurgery 2017 Journal Article, cited 3 times
Website

A novel volume-age-KPS (VAK) glioblastoma classification identifies a prognostic cognate microRNA-gene signature

  • Zinn, Pascal O
  • Sathyan, Pratheesh
  • Mahajan, Bhanu
  • Bruyere, John
  • Hegi, Monika
  • Majumder, Sadhan
  • Colen, Rivka R
PLoS One 2012 Journal Article, cited 63 times
Website
BACKGROUND: Several studies have established Glioblastoma Multiforme (GBM) prognostic and predictive models based on age and Karnofsky Performance Status (KPS), while very few studies evaluated the prognostic and predictive significance of preoperative MR-imaging. However, to date, there is no simple preoperative GBM classification that also correlates with a highly prognostic genomic signature. Thus, we present for the first time a biologically relevant and clinically applicable tumor Volume, patient Age, and KPS (VAK) GBM classification that can easily and non-invasively be determined upon patient admission. METHODS: We quantitatively analyzed the volumes of 78 GBM patient MRIs present in The Cancer Imaging Archive (TCIA) corresponding to patients in The Cancer Genome Atlas (TCGA) with VAK annotation. The variables were then combined using a simple 3-point scoring system to form the VAK classification. A validation set (N = 64) from both the TCGA and Rembrandt databases was used to confirm the classification. Transcription factor and genomic correlations were performed using the GenePattern suite and Ingenuity Pathway Analysis. RESULTS: VAK-A and VAK-B classes showed significant median survival differences in discovery (P = 0.007) and validation sets (P = 0.008). VAK-A is significantly associated with P53 activation, while VAK-B shows significant P53 inhibition. Furthermore, a molecular gene signature comprised of a total of 25 genes and microRNAs was significantly associated with the classes and predicted survival in an independent validation set (P = 0.001). A favorable MGMT promoter methylation status resulted in a 10.5 months additional survival benefit for VAK-A compared to VAK-B patients. CONCLUSIONS: The non-invasively determined VAK classification, with its implication of VAK-specific molecular regulatory networks, can serve as a very robust initial prognostic tool, clinical trial selection criterion, and important step toward the refinement of genomics-based personalized therapy for GBM patients.
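The 3-point scoring idea can be written in a few lines. The cutoffs below are placeholders to show the mechanics; the validated thresholds and point assignments are in the paper.

```python
def vak_class(volume_cm3: float, age_years: float, kps: int,
              volume_cut: float = 30.0, age_cut: float = 60, kps_cut: int = 100) -> str:
    """Toy VAK scoring: one point per adverse factor (large volume, older age,
    reduced KPS); low totals map to VAK-A, high totals to VAK-B.
    All cutoffs here are hypothetical, not the published ones."""
    score = (int(volume_cm3 > volume_cut)
             + int(age_years >= age_cut)
             + int(kps < kps_cut))
    return "VAK-A" if score <= 1 else "VAK-B"
```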

Radiogenomic mapping of edema/cellular invasion MRI-phenotypes in glioblastoma multiforme

  • Zinn, Pascal O
  • Majadan, Bhanu
  • Sathyan, Pratheesh
  • Singh, Sanjay K
  • Majumder, Sadhan
  • Jolesz, Ferenc A
  • Colen, Rivka R
PLoS One 2011 Journal Article, cited 192 times
Website
BACKGROUND: Despite recent discoveries of new molecular targets and pathways, the search for an effective therapy for Glioblastoma Multiforme (GBM) continues. A newly emerged field, radiogenomics, links gene expression profiles with MRI phenotypes. MRI-FLAIR is a noninvasive diagnostic modality and was previously found to correlate with cellular invasion in GBM. Thus, our radiogenomic screen has the potential to reveal novel molecular determinants of invasion. Here, we present the first comprehensive radiogenomic analysis using quantitative MRI volumetrics and large-scale gene- and microRNA expression profiling in GBM. METHODS: Based on The Cancer Genome Atlas (TCGA), discovery and validation sets with gene, microRNA, and quantitative MR-imaging data were created. Top concordant genes and microRNAs correlated with high FLAIR volumes from both sets were further characterized by Kaplan Meier survival statistics, microRNA-gene correlation analyses, and GBM molecular subtype-specific distribution. RESULTS: The top upregulated gene in both the discovery (4 fold) and validation (11 fold) sets was PERIOSTIN (POSTN). The top downregulated microRNA in both sets was miR-219, which is predicted to bind to POSTN. Kaplan Meier analysis demonstrated that above median expression of POSTN resulted in significantly decreased survival and shorter time to disease progression (P<0.001). High POSTN and low miR-219 expression were significantly associated with the mesenchymal GBM subtype (P<0.0001). CONCLUSION: Here, we propose a novel diagnostic method to screen for molecular cancer subtypes and genomic correlates of cellular invasion. Our findings also have potential therapeutic significance since successful molecular inhibition of invasion will improve therapy and patient survival in GBM.

Diffusion Weighted Magnetic Resonance Imaging Radiophenotypes and Associated Molecular Pathways in Glioblastoma

  • Zinn, Pascal O
  • Hatami, Masumeh
  • Youssef, Eslam
  • Thomas, Ginu A
  • Luedi, Markus M
  • Singh, Sanjay K
  • Colen, Rivka R
Neurosurgery 2016 Journal Article, cited 2 times
Website

The Utilization of Consignable Multi-Model in Detection and Classification of Pulmonary Nodules

  • Zia, Muhammad Bilal
  • Juan, Zhao Juan
  • Rehman, Zia Ur
  • Javed, Kamran
  • Rauf, Saad Abdul
  • Khan, Arooj
International Journal of Computer Applications 2019 Journal Article, cited 2 times
Website
Early-stage detection and classification of pulmonary nodules from CT images is a complicated task. The risk assessment for malignancy is usually used to assist the physician in assessing the cancer stage and creating a follow-up prediction strategy. Due to differences in the size, structure, and location of the nodules, their classification in computer-assisted diagnostic systems has been a great challenge. While deep learning is currently the most effective solution for image detection and classification, it requires large amounts of training data, which are typically not readily accessible in most routine medical imaging frameworks. Moreover, the opacity of deep neural networks makes their predictions difficult for radiologists to interpret. In this paper, a Consignable Multi-Model (CMM) is proposed for the detection and classification of lung nodules, which first detects nodules in CT images with different detection algorithms and then classifies them using the Multi-Output DenseNet (MOD) technique. In order to enhance the interpretability of the proposed CMM, two inputs with multiple early outputs have been introduced in the dense blocks. MOD accepts the patches identified in the detection phase at its two inputs and then classifies them as benign or malignant using the early outputs to gain more knowledge of the tumor. In addition, experimental results on the LIDC-IDRI dataset demonstrate a 92.10% accuracy of CMM for lung nodule classification. CMM makes substantial progress in the diagnosis of nodules in contrast to existing methods.

A Prediction Model for Deciphering Intratumoral Heterogeneity Derived from the Microglia/Macrophages of Glioma Using Non-Invasive Radiogenomics

  • Zhu, Yunyang
  • Song, Zhaoming
  • Wang, Zhong
Brain Sciences 2023 Journal Article, cited 0 times
Microglia and macrophages play a major role in glioma immune responses within the glioma microenvironment. We aimed to construct a prognostic prediction model for glioma based on microglia/macrophage-correlated genes. Additionally, we sought to develop a non-invasive radiogenomics approach for risk stratification evaluation. Microglia/macrophage-correlated genes were identified from four single-cell datasets. Hub genes were selected via lasso–Cox regression, and risk scores were calculated. The immunological characteristics of different risk stratifications were assessed, and radiomics models were constructed using the corresponding MR imaging to predict risk stratification. We identified eight hub genes and developed a relevant risk score formula. The risk score emerged as a significant prognostic predictor correlated with immune checkpoints, and a relevant nomogram was drawn. High-risk groups displayed an active microenvironment associated with microglia/macrophages. Furthermore, differences in somatic mutation rates, such as IDH1 missense variant and TP53 missense variant, were observed between high- and low-risk groups. Lastly, a radiogenomics model utilizing five features from magnetic resonance imaging (MRI) T2 fluid-attenuated inversion recovery (FLAIR) effectively predicted the risk groups under a random forest model. Our findings demonstrate that risk stratification based on microglia/macrophages can effectively predict prognosis and immune functions in glioma. Moreover, we have shown that risk stratification can be non-invasively predicted using an MRI T2-FLAIR-based radiomics model.
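A sketch of the hub-gene selection and risk-score construction with a lasso-penalised Cox model in lifelines. The synthetic data, penalty strength, coefficient threshold, and median split are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
genes = [f"gene_{i}" for i in range(20)]          # candidate M/M-correlated genes
df = pd.DataFrame(rng.normal(size=(100, 20)), columns=genes)
df["T"] = rng.exponential(24 * np.exp(-0.8 * df["gene_0"]))  # gene_0 carries signal
df["E"] = 1                                        # all events observed (toy data)

cph = CoxPHFitter(penalizer=0.5, l1_ratio=1.0)     # lasso-penalised Cox
cph.fit(df, duration_col="T", event_col="E")
hub = cph.params_[cph.params_.abs() > 0.05]        # genes surviving the penalty
risk_score = df[hub.index] @ hub                   # per-patient linear predictor
high_risk = risk_score > risk_score.median()
```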

Deciphering Genomic Underpinnings of Quantitative MRI-based Radiomic Phenotypes of Invasive Breast Carcinoma

  • Zhu, Yitan
  • Li, Hui
  • Guo, Wentian
  • Drukker, Karen
  • Lan, Li
  • Giger, Maryellen L
  • Ji, Yuan
Scientific Reports 2015 Journal Article, cited 52 times
Website
Magnetic Resonance Imaging (MRI) has been routinely used for the diagnosis and treatment of breast cancer. However, the relationship between the MRI tumor phenotypes and the underlying genetic mechanisms remains under-explored. We integrated multi-omics molecular data from The Cancer Genome Atlas (TCGA) with MRI data from The Cancer Imaging Archive (TCIA) for 91 breast invasive carcinomas. Quantitative MRI phenotypes of tumors (such as tumor size, shape, margin, and blood flow kinetics) were associated with their corresponding molecular profiles (including DNA mutation, miRNA expression, protein expression, pathway gene expression and copy number variation). We found that transcriptional activities of various genetic pathways were positively associated with tumor size, blurred tumor margin, and irregular tumor shape and that miRNA expressions were associated with the tumor size and enhancement texture, but not with other types of radiomic phenotypes. We provide all the association findings as a resource for the research community (available at http://compgenome.org/Radiogenomics/). These findings pave potential paths for the discovery of genetic mechanisms regulating specific tumor phenotypes and for improving MRI techniques as potential non-invasive approaches to probe the cancer molecular status.

AnatomyNet: Deep learning for fast and fully automated whole‐volume segmentation of head and neck anatomy

  • Zhu, Wentao
  • Huang, Yufang
  • Zeng, Liang
  • Chen, Xuming
  • Liu, Yong
  • Qian, Zhen
  • Du, Nan
  • Fan, Wei
  • Xie, Xiaohui
Medical Physics 2018 Journal Article, cited 4 times
Website

Deep Learning for Automated Medical Image Analysis

  • Wentao Zhu
2019 Thesis, cited 0 times
Website
Medical imaging is an essential tool in many areas of medical applications, used for both diagnosis and treatment. However, reading medical images and making diagnosis or treatment recommendations require specially trained medical specialists. The current practice of reading medical images is labor-intensive, time-consuming, costly, and error-prone. It would be more desirable to have a computer-aided system that can automatically make diagnosis and treatment recommendations. Recent advances in deep learning enable us to rethink the ways of clinician diagnosis based on medical images. Early detection has proven to be critical to give patients the best chance of recovery and survival. Advanced computer-aided diagnosis systems are expected to have high sensitivities and low false-positive rates. How to provide accurate diagnosis results and explore different types of clinical data is an important topic in current computer-aided diagnosis research. In this thesis, we will introduce 1) mammograms for detecting breast cancers, the most frequently diagnosed solid cancer for U.S. women, 2) lung Computed Tomography (CT) images for detecting lung cancers, the most frequently diagnosed malignant cancer, and 3) head and neck CT images for automated delineation of organs at risk in radiotherapy. First, we will show how to employ the adversarial concept to generate hard examples that improve mammogram mass segmentation. Second, we will demonstrate how to use weakly labelled data for mammogram breast cancer diagnosis by efficiently designing deep learning for multi-instance learning. Third, the thesis will walk through the DeepLung system, which combines deep 3D ConvNets and Gradient Boosting Machines (GBM) for automated lung nodule detection and classification. Fourth, we will show how to use weakly labelled data to improve an existing lung nodule detection system by integrating deep learning with a probabilistic graphical model. Lastly, we will demonstrate AnatomyNet, which is thousands of times faster and more accurate than previous methods on automated anatomy segmentation.

Multi-task Learning-Driven Volume and Slice Level Contrastive Learning for 3D Medical Image Classification

  • Zhu, Jiayuan
  • Wang, Shujun
  • He, Jinzheng
  • Schönlieb, Carola-Bibiane
  • Yu, Lequan
2022 Conference Proceedings, cited 0 times
Website

Preliminary Clinical Study of the Differences Between Interobserver Evaluation and Deep Convolutional Neural Network-Based Segmentation of Multiple Organs at Risk in CT Images of Lung Cancer

  • Zhu, Jinhan
  • Liu, Yimei
  • Zhang, Jun
  • Wang, Yixuan
  • Chen, Lixin
Frontiers in Oncology 2019 Journal Article, cited 0 times
Website
Background: In this study, publicly available datasets with organs at risk (OAR) structures were used as reference data to compare the differences among several observers. Convolutional neural network (CNN)-based auto-contouring was also used in the analysis. We evaluated the variations among observers and the effect of CNN-based auto-contouring in clinical applications. Materials and methods: A total of 60 publicly available lung cancer CT datasets with structures were used; 48 cases were used for training, and the other 12 cases were used for testing. The structures of the datasets were used as reference data. Three observers and a CNN-based program performed contouring for 12 testing cases, and the 3D dice similarity coefficient (DSC) and mean surface distance (MSD) were used to evaluate differences from the reference data. The three observers edited the CNN-based contours, and the results were compared to those of manual contouring. A value of P<0.05 was considered statistically significant. Results: Compared to the reference data, no statistically significant differences were observed for the DSCs and MSDs among the manual contouring performed by the three observers at the same institution for the heart, esophagus, spinal cord, and left and right lungs. The 95% confidence intervals (CI) and P-values of the CNN-based auto-contouring results compared to the manual results for the heart, esophagus, spinal cord, and left and right lungs were as follows: the DSCs were CNN vs. A: 0.914~0.939 (P = 0.004), 0.746~0.808 (P = 0.002), 0.866~0.887 (P = 0.136), 0.952~0.966 (P = 0.158) and 0.960~0.972 (P = 0.136); CNN vs. B: 0.913~0.936 (P = 0.002), 0.745~0.807 (P = 0.005), 0.864~0.894 (P = 0.239), 0.952~0.964 (P = 0.308), and 0.959~0.971 (P = 0.272); and CNN vs. C: 0.912~0.933 (P = 0.004), 0.748~0.804 (P = 0.002), 0.867~0.890 (P = 0.530), 0.952~0.964 (P = 0.308), and 0.958~0.970 (P = 0.480), respectively. The P-values for the MSDs were similar to those for the DSCs; those for the heart and esophagus were smaller than 0.05. No significant differences were found between the edited CNN-based auto-contouring results and the manual results. Conclusion: For the spinal cord and both lungs, no statistically significant differences were found between CNN-based auto-contouring and manual contouring. Further modifications to the contouring of the heart and esophagus are necessary. Overall, editing based on CNN-based auto-contouring can effectively shorten the contouring time without affecting the results. CNNs have considerable potential for automatic contouring applications.
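For reference, the two agreement metrics used above can be computed from binary masks as follows; this is a minimal numpy/scipy sketch, with surfaces extracted by erosion as one common convention.

```python
import numpy as np
from scipy import ndimage

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """3D Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def mean_surface_distance(a, b, spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric mean distance (in mm, given voxel spacing) between surfaces."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a ^ ndimage.binary_erosion(a)       # boundary voxels of each mask
    surf_b = b ^ ndimage.binary_erosion(b)
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    return (dist_to_b[surf_a].mean() + dist_to_a[surf_b].mean()) / 2.0
```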

Identifying molecular genetic features and oncogenic pathways of clear cell renal cell carcinoma through the anatomical (PADUA) scoring system

  • Zhu, H
  • Chen, H
  • Lin, Z
  • Shi, G
  • Lin, X
  • Wu, Z
  • Zhang, X
  • Zhang, X
Oncotarget 2016 Journal Article, cited 3 times
Website
Although the preoperative aspects and dimensions used for the PADUA scoring system were successfully applied in macroscopic clinical practice for renal tumors, the relevant molecular genetic basis remained unclear. To uncover meaningful correlations between the genetic aberrations and radiological features, we enrolled 112 patients with clear cell renal cell carcinoma (ccRCC) whose clinicopathological data, genomics data and CT data were obtained from The Cancer Genome Atlas (TCGA) and The Cancer Imaging Archive (TCIA). An overall PADUA score and several radiological features included in the PADUA system were assigned for each ccRCC. Despite having observed no significant association between the gene mutation frequency and the overall PADUA score, correlations between gene mutations and a few radiological features (tumor rim location and tumor size) were identified. A significant association between rim location and miRNA molecular subtypes was also observed. Survival analysis revealed that tumor size > 7 cm was significantly associated with poor survival. In addition, Gene Set Enrichment Analysis (GSEA) on mRNA expression revealed that a high PADUA score was related to numerous cancer-related networks, especially epithelial-to-mesenchymal transition (EMT) related pathways. This preliminary analysis of ccRCC revealed meaningful correlations between PADUA anatomical features and the molecular basis, including genomic aberrations and molecular subtypes.

Data sharing in clinical trials: An experience with two large cancer screening trials

  • Zhu, Claire S
  • Pinsky, Paul F
  • Moler, James E
  • Kukwa, Andrew
  • Mabie, Jerome
  • Rathmell, Joshua M
  • Riley, Tom
  • Prorok, Philip C
  • Berg, Christine D
PLoS Medicine 2017 Journal Article, cited 1 times
Website

Prior-aware Neural Network for Partially-Supervised Multi-Organ Segmentation

  • Zhou, Yuyin
  • Li, Zhe
  • Bai, Song
  • Wang, Chong
  • Chen, Xinlei
  • Han, Mei
  • Fishman, Elliot
  • Yuille, Alan L.
2019 Conference Paper, cited 0 times
Website
Accurate multi-organ abdominal CT segmentation is essential to many clinical applications such as computer-aided intervention. As data annotation requires massive human labor from experienced radiologists, it is common that training data are partially labeled, e.g., pancreas datasets only have the pancreas labeled while leaving the rest marked as background. However, these background labels can be misleading in multi-organ segmentation since the “background” usually contains some other organs of interest. To address the background ambiguity in these partially-labeled datasets, we propose Prior-aware Neural Network (PaNN) via explicitly incorporating anatomical priors on abdominal organ sizes, guiding the training process with domain-specific knowledge. More specifically, PaNN assumes that the average organ size distributions in the abdomen should approximate their empirical distributions, prior statistics obtained from the fully-labeled dataset. As our training objective is difficult to be directly optimized using stochastic gradient descent, we propose to reformulate it in a min-max form and optimize it via the stochastic primal-dual gradient algorithm. PaNN achieves state-of-the-art performance on the MICCAI2015 challenge “Multi-Atlas Labeling Beyond the Cranial Vault”, a competition on organ segmentation in the abdomen. We report an average Dice score of 84.97%, surpassing the prior art by a large margin of 3.27%.
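The paper optimises its objective in min-max form with a stochastic primal-dual scheme. As a simplified illustration of the prior-matching idea only, a direct KL penalty between the empirical organ-size prior and the batch-average prediction could look like this (PyTorch; the simplification is ours).

```python
import torch
import torch.nn.functional as F

def size_prior_penalty(logits: torch.Tensor, prior: torch.Tensor) -> torch.Tensor:
    """KL(prior || batch-average predicted class distribution).
    logits: (B, C, D, H, W) segmentation outputs; prior: (C,) empirical organ
    size proportions from the fully-labeled dataset (non-negative, sums to 1)."""
    probs = F.softmax(logits, dim=1)             # per-voxel class probabilities
    avg = probs.mean(dim=(0, 2, 3, 4))           # average class mass over the batch
    avg = avg / avg.sum()
    return torch.sum(prior * (torch.log(prior + 1e-8) - torch.log(avg + 1e-8)))
```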

MRLA-Net: A tumor segmentation network embedded with a multiple receptive-field lesion attention module in PET-CT images

  • Zhou, Y.
  • Jiang, H.
  • Diao, Z.
  • Tong, G.
  • Luan, Q.
  • Li, Y.
  • Li, X.
Comput Biol Med 2023 Journal Article, cited 0 times
Website
Tumor image segmentation is an important basis for doctors to diagnose and formulate treatment planning. PET-CT is an extremely important technology for assessing the systemic extent of disease, owing to the complementary advantages of the two modalities' information. However, current PET-CT tumor segmentation methods generally focus on the fusion of PET and CT features. The fusion of features can weaken the characteristics of each modality itself. Therefore, enhancing the modal features of the lesions can yield optimized feature sets, which is extremely necessary to improve the segmentation results. This paper proposed an attention module that integrates the PET-CT diagnostic visual field and the modality characteristics of the lesion, that is, the multiple receptive-field lesion attention module. This paper made full use of spatial-domain, frequency-domain, and channel attention, and proposed a large receptive-field lesion localization module and a small receptive-field lesion enhancement module, which together constitute the multiple receptive-field lesion attention module. In addition, a network embedded with the multiple receptive-field lesion attention module has been proposed for tumor segmentation. This paper conducted experiments on a private liver tumor dataset as well as two publicly available datasets: the soft tissue sarcoma dataset and the head and neck tumor segmentation dataset. The experimental results showed that the proposed method achieves excellent performance on multiple datasets and a significant improvement compared with DenseUNet: the tumor segmentation results on the above three PET-CT datasets improved by 7.25%, 6.5%, and 5.29% in Dice per case, respectively. Compared with the latest PET-CT liver tumor segmentation research, the proposed method improves by 8.32%.

Improving Classification with CNNs using Wavelet Pooling with Nesterov-Accelerated Adam

  • Zhou, Wenjin
  • Rossetto, Allison
2019 Conference Proceedings, cited 0 times
Website
Wavelet pooling methods can improve the classification accuracy of Convolutional Neural Networks (CNNs). Combining wavelet pooling with the Nesterov-accelerated Adam (NAdam) gradient calculation method can further improve the accuracy of the CNN. We have implemented wavelet pooling with NAdam in this work using both a Haar wavelet (WavPool-NH) and a Shannon wavelet (WavPool-NS). The WavPool-NH and WavPool-NS methods are the most accurate of the methods we considered for the MNIST and LIDC-IDRI lung tumor data-sets. The WavPool-NH and WavPool-NS implementations have an accuracy of 95.92% and 95.52%, respectively, on the LIDC-IDRI data-set. This is an improvement from the 92.93% accuracy obtained on this data-set with the max pooling method. The WavPool methods also avoid overfitting, which is a concern with max pooling. We also found WavPool performed fairly well on the CIFAR-10 data-set; however, overfitting was an issue with all the methods we considered. Wavelet pooling, especially when combined with an adaptive gradient and wavelets chosen specifically for the data, has the potential to outperform current methods.
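A single-level Haar wavelet pooling step can be sketched with PyWavelets: keep the approximation sub-band and discard the detail sub-bands, which halves spatial resolution like 2x2 pooling. Per-channel application and the training loop with NAdam are omitted; this is an illustration, not the authors' implementation.

```python
import numpy as np
import pywt

def haar_wavelet_pool(feature_map: np.ndarray) -> np.ndarray:
    """Single-level 2D Haar DWT; the approximation sub-band acts as the
    pooled feature map, the three detail sub-bands are discarded."""
    approx, _details = pywt.dwt2(feature_map, "haar")
    return approx

x = np.random.rand(32, 32).astype(np.float32)
assert haar_wavelet_pool(x).shape == (16, 16)    # halved, like 2x2 pooling
```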

Multiple-instance ensemble for construction of deep heterogeneous committees for high-dimensional low-sample-size data

  • Zhou, Q.
  • Wang, S.
  • Zhu, H.
  • Zhang, X.
  • Zhang, Y.
2023 Journal Article, cited 0 times
Website
Deep ensemble learning, where we combine knowledge learned from multiple individual neural networks, has been widely adopted to improve the performance of neural networks in deep learning. This field can be encompassed by committee learning, which includes the construction of neural network cascades. This study focuses on the high-dimensional low-sample-size (HDLS) domain and introduces multiple instance ensemble (MIE) as a novel stacking method for ensembles and cascades. In this study, our proposed approach reformulates the ensemble learning process as a multiple-instance learning problem. We utilise the multiple-instance learning solution of pooling operations to associate feature representations of base neural networks into joint representations as a method of stacking. This study explores various attention mechanisms and proposes two novel committee learning strategies with MIE. In addition, we utilise the capability of MIE to generate pseudo-base neural networks to provide a proof-of-concept for a "growing" neural network cascade that is unbounded by the number of base neural networks. We have shown that our approach provides (1) a class of alternative ensemble methods that performs comparably with various stacking ensemble methods and (2) a novel method for the generation of high-performing "growing" cascades. The approach has also been verified across multiple HDLS datasets, achieving high performance for binary classification tasks in the low-sample size regime.

WVALE: Weak variational autoencoder for localisation and enhancement of COVID-19 lung infections

  • Zhou, Q.
  • Wang, S.
  • Zhang, X.
  • Zhang, Y. D.
Comput Methods Programs Biomed 2022 Journal Article, cited 0 times
Website
BACKGROUND AND OBJECTIVE: The COVID-19 pandemic is a major global health crisis of this century. The use of neural networks with CT imaging can potentially improve clinicians' efficiency in diagnosis. Previous studies in this field have primarily focused on classifying the disease on CT images, while few studies targeted the localisation of disease regions. Developing neural networks for automating the latter task is impeded by the limited CT images with pixel-level annotations available to the research community. METHODS: This paper proposes a weakly-supervised framework named "Weak Variational Autoencoder for Localisation and Enhancement" (WVALE) to address this challenge for COVID-19 CT images. This framework includes two components: anomaly localisation with a novel WVAE model and enhancement of supervised segmentation models with WVALE. RESULTS: The WVAE model has been shown to produce high-quality post-hoc attention maps with fine borders around infection regions, while weakly supervised segmentation shows results comparable to conventional supervised segmentation models. The WVALE framework can enhance the performance of a range of supervised segmentation models, including state-of-the-art models for the segmentation of COVID-19 lung infection. CONCLUSIONS: Our study provides a proof-of-concept for weakly supervised segmentation and an alternative approach to alleviate the lack of annotation, while its independence from classification & segmentation frameworks makes it easily integrable with existing systems.

Radiomics in Brain Tumor: Image Assessment, Quantitative Feature Descriptors, and Machine-Learning Approaches

  • Zhou, M
  • Scott, J
  • Chaudhury, B
  • Hall, L
  • Goldgof, D
  • Yeom, KW
  • Iv, M
  • Ou, Y
  • Kalpathy-Cramer, J
  • Napel, S
American Journal of Neuroradiology 2017 Journal Article, cited 20 times
Website

HLA-DQA1 expression is associated with prognosis and predictable with radiomics in breast cancer

  • Zhou, J.
  • Xie, T.
  • Shan, H.
  • Cheng, G.
Radiat Oncol 2023 Journal Article, cited 0 times
Website
BACKGROUND: High HLA-DQA1 expression is associated with a better prognosis in many cancers. However, the association between HLA-DQA1 expression and prognosis of breast cancer and the noninvasive assessment of HLA-DQA1 expression are still unclear. This study aimed to reveal the association and investigate the potential of radiomics to predict HLA-DQA1 expression in breast cancer. METHODS: In this retrospective study, transcriptome sequencing data, medical imaging data, clinical and follow-up data were downloaded from the TCIA ( https://www.cancerimagingarchive.net/ ) and TCGA ( https://portal.gdc.cancer.gov/ ) databases. The clinical characteristic differences between the high HLA-DQA1 expression group (HHD group) and the low HLA-DQA1 expression group were explored. Gene set enrichment analysis, Kaplan-Meier survival analysis and Cox regression were performed. Then, 107 dynamic contrast-enhanced magnetic resonance imaging features were extracted, including size, shape and texture. Using recursive feature elimination and a gradient boosting machine, a radiomics model was established to predict HLA-DQA1 expression. Receiver operating characteristic (ROC) curves, precision-recall curves, calibration curves, and decision curves were used for model evaluation. RESULTS: The HHD group had better survival outcomes. The differentially expressed genes in the HHD group were significantly enriched in oxidative phosphorylation (OXPHOS) and estrogen response early and late signalling pathways. The radiomic score (RS) output from the model was associated with HLA-DQA1 expression. The area under the ROC curves (95% CI), accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of the radiomic model were 0.866 (0.775-0.956), 0.825, 0.939, 0.7, 0.775, and 0.913 in the training set and 0.780 (0.629-0.931), 0.659, 0.81, 0.5, 0.63, and 0.714 in the validation set, respectively, showing a good prediction effect. CONCLUSIONS: High HLA-DQA1 expression is associated with a better prognosis in breast cancer. Quantitative radiomics as a noninvasive imaging biomarker has potential value for predicting HLA-DQA1 expression.
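A compact sklearn sketch of the described pipeline: recursive feature elimination wrapped around a gradient boosting classifier, scored by AUC. The synthetic data and hyperparameters are placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import RFE
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 107))      # 107 DCE-MRI radiomic features per patient
y = rng.integers(0, 2, size=120)     # high vs. low HLA-DQA1 expression (toy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RFE(GradientBoostingClassifier(random_state=0), n_features_to_select=10)
model.fit(X_tr, y_tr)                # eliminates features recursively, then fits
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
```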

MRI features predict survival and molecular markers in diffuse lower-grade gliomas

  • Zhou, Hao
  • Vallieres, Martin
  • Bai, Harrison X
  • Su, Chang
  • Tang, Haiyun
  • Oldridge, Derek
  • Zhang, Zishu
  • Xiao, Bo
  • Liao, Weihua
  • Tao, Yongguang
  • Zhou, Jianhua
  • Zhang, Paul
  • Yang, Li
2017 Journal Article, cited 41 times
Website
Background: Previous studies have shown that MR imaging features can be used to predict survival and molecular profile of glioblastoma. However, no study of a similar type has been performed on lower-grade gliomas (LGGs). Methods: Presurgical MRIs of 165 patients with diffuse low- and intermediate-grade gliomas (histological grades II and III) were scored according to the Visually Accessible Rembrandt Images (VASARI) annotations. Radiomic models using automated texture analysis and VASARI features were built to predict isocitrate dehydrogenase 1 (IDH1) mutation, 1p/19q codeletion status, histological grade, and tumor progression. Results: Interrater analysis showed significant agreement in all imaging features scored (k = 0.703-1.000). On multivariate Cox regression analysis, no enhancement and a smooth non-enhancing margin were associated with longer progression-free survival (PFS), while a smooth non-enhancing margin was associated with longer overall survival (OS) after taking into account age, grade, tumor location, histology, extent of resection, and IDH1 1p/19q subtype. Using logistic regression and bootstrap testing evaluations, texture models were found to possess higher prediction potential for IDH1 mutation, 1p/19q codeletion status, histological grade, and progression of LGGs than VASARI features, with areas under the receiver-operating characteristic curves of 0.86 +/- 0.01, 0.96 +/- 0.01, 0.86 +/- 0.01, and 0.80 +/- 0.01, respectively. Conclusion: No enhancement and a smooth non-enhancing margin on MRI were predictive of longer PFS, while a smooth non-enhancing margin was a significant predictor of longer OS in LGGs. Textural analyses of MR imaging data predicted IDH1 mutation, 1p/19q codeletion, histological grade, and tumor progression with high accuracy.

Machine learning reveals multimodal MRI patterns predictive of isocitrate dehydrogenase and 1p/19q status in diffuse low-and high-grade gliomas.

  • Zhou, H.
  • Chang, K.
  • Bai, H. X.
  • Xiao, B.
  • Su, C.
  • Bi, W. L.
  • Zhang, P. J.
  • Senders, J. T.
  • Vallieres, M.
  • Kavouridis, V. K.
  • Boaro, A.
  • Arnaout, O.
  • Yang, L.
  • Huang, R. Y.
Journal of Neuro-Oncology 2019 Journal Article, cited 0 times
Website
PURPOSE: Isocitrate dehydrogenase (IDH) and 1p19q codeletion status are important in providing prognostic information as well as prediction of treatment response in gliomas. Accurate determination of the IDH mutation status and 1p19q co-deletion prior to surgery may complement invasive tissue sampling and guide treatment decisions. METHODS: Preoperative MRIs of 538 glioma patients from three institutions were used as a training cohort. Histogram, shape, and texture features were extracted from preoperative MRIs of T1 contrast enhanced and T2-FLAIR sequences. The extracted features were then integrated with age using a random forest algorithm to generate a model predictive of IDH mutation status and 1p19q codeletion. The model was then validated using MRIs from glioma patients in the Cancer Imaging Archive. RESULTS: Our model predictive of IDH achieved an area under the receiver operating characteristic curve (AUC) of 0.921 in the training cohort and 0.919 in the validation cohort. Age offered the highest predictive value, followed by shape features. Based on the top 15 features, the AUC was 0.917 and 0.916 for the training and validation cohort, respectively. The overall accuracy for 3 group prediction (IDH-wild type, IDH-mutant and 1p19q co-deletion, IDH-mutant and 1p19q non-codeletion) was 78.2% (155 correctly predicted out of 198). CONCLUSION: Using machine-learning algorithms, high accuracy was achieved in the prediction of IDH genotype in gliomas and moderate accuracy in a three-group prediction including IDH genotype and 1p19q codeletion.

Deep Learning for Prediction of N2 Metastasis and Survival for Clinical Stage I Non-Small Cell Lung Cancer

  • Zhong, Y.
  • She, Y.
  • Deng, J.
  • Chen, S.
  • Wang, T.
  • Yang, M.
  • Ma, M.
  • Song, Y.
  • Qi, H.
  • Wang, Y.
  • Shi, J.
  • Wu, C.
  • Xie, D.
  • Chen, C.
  • Multi-omics Classifier for Pulmonary Nodules (MISSION) Collaborative Group
Radiology 2022 Journal Article, cited 0 times
Website
Background Preoperative mediastinal staging is crucial for the optimal management of clinical stage I non-small cell lung cancer (NSCLC). Purpose To develop a deep learning signature for N2 metastasis prediction and prognosis stratification in clinical stage I NSCLC. Materials and Methods In this retrospective study conducted from May 2020 to October 2020 in a population with clinical stage I NSCLC, an internal cohort was adopted to establish a deep learning signature. Subsequently, the predictive efficacy and biologic basis of the proposed signature were investigated in an external cohort. A multicenter diagnostic trial (registration number: ChiCTR2000041310) was also performed to evaluate its clinical utility. Finally, on the basis of the N2 risk scores, the instructive significance of the signature in prognostic stratification was explored. The diagnostic efficiency was quantified with the area under the receiver operating characteristic curve (AUC), and the survival outcomes were assessed using the Cox proportional hazards model. Results A total of 3096 patients (mean age +/- standard deviation, 60 years +/- 9; 1703 men) were included in the study. The proposed signature achieved AUCs of 0.82, 0.81, and 0.81 in an internal test set (n = 266), external test cohort (n = 133), and prospective test cohort (n = 300), respectively. In addition, higher deep learning scores were associated with a lower frequency of EGFR mutation (P = .04), higher rate of ALK fusion (P = .02), and more activation of pathways of tumor proliferation (P < .001). Furthermore, in the internal test set and external cohort, higher deep learning scores were predictive of poorer overall survival (adjusted hazard ratio, 2.9; 95% CI: 1.2, 6.9; P = .02) and recurrence-free survival (adjusted hazard ratio, 3.2; 95% CI: 1.4, 7.4; P = .007). Conclusion The deep learning signature could accurately predict N2 disease and stratify prognosis in clinical stage I non-small cell lung cancer.

Prediction of Human Papillomavirus (HPV) Status in Oropharyngeal Squamous Cell Carcinoma Based on Radiomics and Machine Learning Algorithms: A Multi-Cohort Study

  • Zhinan, Liang
  • Wei, Zhang
  • Yudi, You
  • Yabing, Dong
  • Yuanzhe, Xiao
  • Xiulan, Liu
2022 Journal Article, cited 0 times
Website
Background: Human Papillomavirus status has significant implications for prognostic evaluation and clinical decision-making for Oropharyngeal Squamous Cell Carcinoma patients. As a novel method, radiomics provides a possibility for non-invasive diagnosis. The aim of this study was to examine whether Computed Tomography (CT) radiomics and machine learning classifiers can effectively predict Human Papillomavirus types and be validated in external data in patients with Oropharyngeal Squamous Cell Carcinoma based on imaging data from multi-institutional and multi-national cohorts. Materials and methods: 651 patients from three multi-institutional and multi-national cohorts were collected in this retrospective study: the OPC-Radiomics cohort (n=497), the MAASTRO cohort (n=74), and the SNPH cohort (n=80). The OPC-Radiomics cohort was randomized into a training cohort and a validation cohort with a ratio of 2:1. The MAASTRO and SNPH cohorts were used as independent external testing cohorts. 1316 quantitative features were extracted from the Computed Tomography images of primary tumors. After feature selection using Logistic Regression and Recursive Feature Elimination algorithms, 10 different machine-learning classifiers were trained and compared in different cohorts. Results: By comparing 10 kinds of machine-learning classifiers, we found that the best performance was achieved when using a Random Forest-based model, with Areas Under the Receiver Operating Characteristic (ROC) Curves (AUCs) of 0.97, 0.72, 0.63, and 0.78 in the training cohort, validation cohort, testing cohort 1 (MAASTRO cohort), and testing cohort 2 (SNPH cohort), respectively. Conclusion: The Random Forest-based radiomics model was effective in differentiating Human Papillomavirus status of Oropharyngeal Squamous Cell Carcinoma in a multi-national population, which provides the possibility for this non-invasive method to be widely applied in clinical practice.

Detecting MRI-Invisible Prostate Cancers Using a Weakly Supervised Deep Learning Model

  • Zheng, Y.
  • Zhang, J.
  • Huang, D.
  • Hao, X.
  • Qin, W.
  • Liu, Y.
2024 Journal Article, cited 0 times
Website
BACKGROUND: MRI is an important tool for accurate detection and targeted biopsy of prostate lesions. However, the imaging appearances of some prostate cancers are similar to those of the surrounding normal tissue on MRI, which are referred to as MRI-invisible prostate cancers (MIPCas). The detection of MIPCas remains challenging and requires extensive systematic biopsy for identification. In this study, we developed a weakly supervised UNet (WSUNet) to detect MIPCas. METHODS: The study included 777 patients (training set: 600; testing set: 177), all of whom underwent comprehensive prostate biopsies using an MRI-ultrasound fusion system. MIPCas were identified in MRI based on the Gleason grade (>/=7) from known systematic biopsy results. RESULTS: The WSUNet model underwent validation through systematic biopsy in the testing set with an AUC of 0.764 (95% CI: 0.728-0.798). Furthermore, WSUNet exhibited a statistically significant precision improvement of 91.3% (p < 0.01) over conventional systematic biopsy methods in the testing set. This improvement resulted in a substantial 47.6% (p < 0.01) decrease in unnecessary biopsy needles, while maintaining the same number of positively identified cores as in the original systematic biopsy. CONCLUSIONS: In conclusion, the proposed WSUNet could effectively detect MIPCas, thereby reducing unnecessary biopsies.

Spatial cellular architecture predicts prognosis in glioblastoma

  • Zheng, Y.
  • Carrillo-Perez, F.
  • Pizurica, M.
  • Heiland, D. H.
  • Gevaert, O.
2023 Journal Article, cited 0 times
Website
Intra-tumoral heterogeneity and cell-state plasticity are key drivers for the therapeutic resistance of glioblastoma. Here, we investigate the association between spatial cellular organization and glioblastoma prognosis. Leveraging single-cell RNA-seq and spatial transcriptomics data, we develop a deep learning model to predict transcriptional subtypes of glioblastoma cells from histology images. Employing this model, we phenotypically analyze 40 million tissue spots from 410 patients and identify consistent associations between tumor architecture and prognosis across two independent cohorts. Patients with poor prognosis exhibit higher proportions of tumor cells expressing a hypoxia-induced transcriptional program. Furthermore, a clustering pattern of astrocyte-like tumor cells is associated with worse prognosis, while dispersion and connection of the astrocytes with other transcriptional subtypes correlate with decreased risk. To validate these results, we develop a separate deep learning model that utilizes histology images to predict prognosis. Applying this model to spatial transcriptomics data reveals survival-associated regional gene expression programs. Overall, our study presents a scalable approach to unravel the transcriptional heterogeneity of glioblastoma and establishes a critical connection between spatial cellular architecture and clinical outcomes.

Identification of Novel Transcriptome Signature as a Potential Prognostic Biomarker for Anti-Angiogenic Therapy in Glioblastoma Multiforme

  • Zheng, S.
  • Tao, W.
Cancers (Basel) 2021 Journal Article, cited 3 times
Website
Glioblastoma multiforme (GBM) is the most common and devastating type of primary brain tumor, with a median survival time of only 15 months. A clinically applicable genetic biomarker would enable a paradigm shift in precise diagnosis, personalized therapeutic decisions, and prognostic prediction for GBM. Radiogenomic profiling connecting radiological imaging features with molecular alterations offers a noninvasive method for genomic studies of GBM. To this end, we analyzed over 3800 glioma and GBM cases across four independent datasets. The Chinese Glioma Genome Atlas (CGGA) and The Cancer Genome Atlas (TCGA) databases were employed for RNA-Seq analysis, whereas the Ivy Glioblastoma Atlas Project (Ivy-GAP) and The Cancer Imaging Archive (TCIA) provided clinicopathological data. The Clinical Proteomic Tumor Analysis Consortium Glioblastoma Multiforme (CPTAC-GBM) dataset was used for proteomic analysis. We identified a simple three-gene transcriptome signature (SOCS3, VEGFA, and TEK) that connects GBM's overall prognosis with gene expression and simultaneously correlates radiographical features of perfusion imaging with SOCS3 expression levels. More importantly, the rampant development of neovascularization in GBM offers a promising target for therapeutic intervention; however, treatment with bevacizumab has failed to improve overall survival. We identified SOCS3 expression levels as a potential selection marker for patients who may benefit from early initiation of angiogenesis inhibitors.

Age-related copy number variations and expression levels of F-box protein FBXL20 predict ovarian cancer prognosis

  • Zheng, S.
  • Fu, Y.
Translational Oncology 2020 Journal Article, cited 0 times
Website
About 70% of ovarian cancer (OvCa) cases are diagnosed at advanced stages (stage III/IV), and only 20-40% of these patients survive over 5 years after diagnosis. A reliable screening marker could enable a paradigm shift in OvCa early diagnosis and risk stratification. Age is one of the most significant risk factors for OvCa: older women have much higher rates of OvCa diagnosis and poorer clinical outcomes. In this article, we studied the correlation between aging and genetic alterations in The Cancer Genome Atlas Ovarian Cancer dataset. We demonstrated that copy number variations (CNVs) and expression levels of the F-Box and Leucine-Rich Repeat Protein 20 (FBXL20), a substrate-recognizing protein in the SKP1-Cullin1-F-box-protein E3 ligase, can predict OvCa overall survival, disease-free survival, and progression-free survival. More importantly, FBXL20 copy number loss predicts the diagnosis of OvCa at a younger age, with over 60% of patients in that subgroup diagnosed before age 60. Clinicopathological studies further demonstrated malignant histological and radiographical features associated with elevated FBXL20 expression levels. This study has thus identified a potential biomarker for OvCa prognosis.

Topology guided demons registration with local rigidity preservation

  • Zheng, Chaojie
  • Wang, Xiuying
  • Feng, Dagan
2016 Conference Proceedings, cited 1 times
Website

A statistical method for lung tumor segmentation uncertainty in PET images based on user inference

  • Zheng, Chaojie
  • Wang, Xiuying
  • Feng, Dagan
2015 Conference Proceedings, cited 0 times
Website

Bag of Tricks for 3D MRI Brain Tumor Segmentation

  • Zhao, Yuan-Xing
  • Zhang, Yan-Ming
  • Liu, Cheng-Lin
2020 Book Section, cited 0 times
3D brain tumor segmentation is essential for the diagnosis, monitoring, and treatment planning of brain diseases. In recent studies, the deep convolutional neural network (DCNN) has been one of the most potent methods for medical image segmentation. In this paper, we review the different kinds of tricks applied to 3D brain tumor segmentation with DCNNs. We divide such tricks into three main categories: data processing methods, including data sampling, random patch-size training, and semi-supervised learning; model devising methods, including architecture devising and result fusing; and optimizing processes, including warm-up learning and multi-task learning. Most of these approaches are not particular to brain tumor segmentation but are applicable to other medical image segmentation problems as well. Evaluated on the BraTS 2019 online testing set, we obtain Dice scores of 0.810, 0.883, and 0.861, and Hausdorff distances (95th percentile) of 2.447, 4.792, and 5.581 for enhanced tumor core, whole tumor, and tumor core, respectively. Our method won second place in the BraTS 2019 tumor segmentation challenge.

Recurrent Multi-Fiber Network for 3D MRI Brain Tumor Segmentation

  • Zhao, Yue
  • Ren, Xiaoqiang
  • Hou, Kun
  • Li, Wentao
Symmetry 2021 Journal Article, cited 0 times
Website
Automated brain tumor segmentation based on 3D magnetic resonance imaging (MRI) is critical to disease diagnosis. However, achieving robust and accurate automatic extraction of brain tumors is a big challenge because of the inherent heterogeneity of the tumor structure. In this paper, we present an efficient semantic segmentation network, the 3D recurrent multi-fiber network (RMFNet), which is based on an encoder-decoder architecture and segments brain tumors accurately. The 3D RMFNet comprises two building blocks: a 3D recurrent unit and a 3D multi-fiber unit. First, we connect recurrent units with convolutional layers, which enhances the model's ability to integrate contextual information. Then, 3D multi-fiber units are added to the overall network to reduce the high computational cost incurred by a 3D network architecture when capturing local features. The 3D RMFNet thus combines the advantages of both units. Extensive experiments on the Brain Tumor Segmentation (BraTS) 2018 challenge dataset show that our RMFNet remarkably outperforms state-of-the-art methods, achieving average Dice scores of 89.62%, 83.65% and 78.72% for the whole tumor, tumor core and enhancing tumor, respectively. The experimental results prove our architecture to be an efficient tool for accurate brain tumor segmentation.

Agile convolutional neural network for pulmonary nodule classification using CT images

  • Zhao, X.
  • Liu, L.
  • Qi, S.
  • Teng, Y.
  • Li, J.
  • Qian, W.
Int J Comput Assist Radiol Surg 2018 Journal Article, cited 6 times
Website
OBJECTIVE: Distinguishing benign from malignant pulmonary nodules using CT images is critical for their precise diagnosis and treatment. A new agile convolutional neural network (CNN) framework is proposed to overcome the challenges of a small-scale medical image database and the small size of the nodules, improving the performance of pulmonary nodule classification using CT images. METHODS: A hybrid CNN of LeNet and AlexNet is constructed by combining the layer settings of LeNet with the parameter settings of AlexNet. A dataset of 743 CT image nodule samples is built from the 1018 CT scans of LIDC to train and evaluate the agile CNN model. By adjusting the kernel size, learning rate, and other factors, the effect of these parameters on the performance of the CNN model is investigated, and an optimized setting of the CNN is finally obtained. RESULTS: After finely optimizing the settings of the CNN, the estimation accuracy and the area under the curve reach 0.822 and 0.877, respectively. The accuracy of the CNN is significantly dependent on the kernel size, learning rate, training batch size, dropout, and weight initialization. The best performance is achieved when the kernel size is set to [Formula: see text], the learning rate is 0.005, the batch size is 32, and dropout and Gaussian initialization are used. CONCLUSIONS: This competitive performance demonstrates that our proposed CNN framework and parameter-optimization strategy are suitable for pulmonary nodule classification characterized by small medical datasets and small targets. The classification model might help diagnose and treat pulmonary nodules effectively.
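
A minimal PyTorch sketch wiring up the reported training settings (learning rate 0.005, batch size 32, dropout, Gaussian weight initialization). The layer sizes and the 5x5 kernel are placeholder assumptions, since the abstract elides the optimal kernel size; this is not the paper's architecture.

    import torch
    import torch.nn as nn

    class AgileCNNSketch(nn.Module):
        """LeNet-style toy network with the reported training settings."""
        def __init__(self, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(), nn.Dropout(0.5),           # dropout, as reported
                nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
                nn.Linear(128, n_classes),
            )
            for m in self.modules():                     # Gaussian initialization, as reported
                if isinstance(m, (nn.Conv2d, nn.Linear)):
                    nn.init.normal_(m.weight, mean=0.0, std=0.01)
                    nn.init.zeros_(m.bias)

        def forward(self, x):
            return self.classifier(self.features(x))

    model = AgileCNNSketch()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.005)  # reported learning rate
    x = torch.randn(32, 1, 32, 32)                             # batch size 32; 32x32 nodule patches
    print(model(x).shape)                                      # torch.Size([32, 2])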

Bronchus Segmentation and Classification by Neural Networks and Linear Programming

  • Zhao, Tianyi
  • Yin, Zhaozheng
  • Wang, Jiao
  • Gao, Dashan
  • Chen, Yunqiang
  • Mao, Yunxiang
2019 Book Section, cited 0 times
Airway segmentation is a critical problem for lung disease analysis. However, building a complete airway tree is still challenging because of the complex tree structure, and tracing the deep bronchi is not trivial in CT images because there are numerous small airways with various orientations. In this paper, we develop two-stage 2D+3D neural networks and a linear-programming-based tracking algorithm for airway segmentation. Furthermore, we propose a bronchus classification algorithm based on the segmentation results. Our algorithm is evaluated on a dataset collected from four sources. We achieved a Dice coefficient of 0.94 and an F1 score of 0.86 under a centerline-based evaluation metric, compared against ground truth manually labeled by our radiologists.

Airway Anomaly Detection by Prototype-Based Graph Neural Network

  • Zhao, Tianyi
  • Yin, Zhaozheng
2021 Conference Proceedings, cited 0 times
Website

Two-stage fusion set selection in multi-atlas-based image segmentation

  • Zhao, Tingting
  • Ruan, Dan
2015 Conference Proceedings, cited 0 times
Website
Conventional multi-atlas-based segmentation demands pairwise full-fledged registration between each atlas image and the target image, which leads to high computational cost and poses a great challenge in the new era of big data. On the other hand, only the most relevant atlases should contribute to the final label fusion. In this work, we introduce a two-stage fusion set selection method that first trims the atlas collection into an augmented subset based on a low-cost registration and a preliminary relevance metric, followed by further refinement based on a full-fledged registration and the corresponding relevance metric. A statistical inference model is established to relate the preliminary and refined relevance metrics, and a proper augmented subset size is derived from it. Empirical evidence supported the inference model, and end-to-end performance assessment demonstrated the proposed scheme to be computationally efficient without compromising segmentation accuracy.

Improving Brain Tumor Segmentation in Multi-sequence MR Images Using Cross-Sequence MR Image Generation

  • Zhao, Guojing
  • Zhang, Jianpeng
  • Xia, Yong
2020 Book Section, cited 0 times
Accurate brain tumor segmentation using multi-sequence magnetic resonance (MR) imaging plays a pivotal role in clinical practice and research settings. Despite their prevalence, deep learning-based segmentation methods, which usually use multiple MR sequences as input, still have limited performance, partly due to their limited image representation ability. In this paper, we propose a brain tumor segmentation (BraTSeg) model that uses cross-sequence MR image generation as a self-supervision tool to improve segmentation accuracy. This model is an ensemble of three image segmentation and generation (ImgSG) models, designed for simultaneous segmentation of brain tumors and generation of T1, T2, and Flair sequences, respectively. We evaluated the proposed BraTSeg model on the BraTS 2019 dataset and achieved average Dice similarity coefficients (DSCs) of 81.93%, 87.80%, and 83.44% for the segmentation of enhancing tumor, whole tumor, and tumor core on the testing set, respectively. Our results suggest that cross-sequence MR image generation is an effective self-supervision method that can improve the accuracy of brain tumor segmentation, and the proposed BraTSeg model produces satisfactory segmentation of brain tumors and intra-tumor structures.

Segmentation then Prediction: A Multi-task Solution to Brain Tumor Segmentation and Survival Prediction

  • Zhao, Guojing
  • Jiang, Bowen
  • Zhang, Jianpeng
  • Xia, Yong
2021 Book Section, cited 0 times
Accurate brain tumor segmentation and survival prediction are two fundamental but challenging tasks in the computer-aided diagnosis of gliomas. Traditionally, these two tasks have been performed independently, without considering the correlation between them. We believe that both tasks should be performed under a unified framework so as to enable them to mutually benefit each other. In this paper, we propose a multi-task deep learning model called segmentation then prediction (STP) to segment brain tumors and predict patient overall survival time. The STP model is composed of a segmentation module and a survival prediction module. The former uses a 3D U-Net as its backbone, and the latter uses both local and global features. The local features are extracted by the last layer of the segmentation encoder, while the global features are produced by a global branch, which uses a 3D ResNet-50 as its backbone. The STP model is jointly optimized for the two tasks. We evaluated the proposed STP model on the BraTS 2020 validation dataset and achieved average Dice similarity coefficients (DSCs) of 0.790, 0.910, and 0.851 for the segmentation of enhanced tumor core, whole tumor, and tumor core, respectively, and an accuracy of 65.5% for survival prediction.

MVP U-Net: Multi-View Pointwise U-Net for Brain Tumor Segmentation

  • Zhao, Changchen
  • Zhao, Zhiming
  • Zeng, Qingrun
  • Feng, Yuanjing
2021 Book Section, cited 0 times
Segmenting brain tumors from multi-modality MRI scans is a challenging task, and how to segment and reconstruct brain tumors more accurately and faster remains an open question. The key is to effectively model the spatial-temporal information that resides in the input volumetric data. In this paper, we propose the multi-view pointwise U-Net (MVP U-Net) for brain tumor segmentation. Our segmentation approach follows an encoder-decoder 3D U-Net architecture, in which the 3D convolution is replaced by three 2D multi-view convolutions in the three orthogonal views (axial, sagittal, coronal) of the input data to learn spatial features, plus one pointwise convolution to learn channel features. Further, we modify the squeeze-and-excitation (SE) block appropriately and introduce it into our original MVP U-Net after the concatenation section. In this way, the generalization ability of the model can be improved while the number of parameters is reduced. On the BraTS 2020 testing dataset, the mean Dice scores of the proposed method were 0.715, 0.839, and 0.768 for enhanced tumor, whole tumor, and tumor core, respectively. The results show the effectiveness of the proposed MVP U-Net with the SE block for multi-modal brain tumor segmentation.
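
One plausible reading of the multi-view pointwise idea, sketched in PyTorch: three 2D convolutions over the orthogonal views, emulated as 3D convolutions with plane-shaped kernels, fused and followed by a 1x1x1 pointwise convolution for channel mixing. This is an interpretation of the abstract, not the authors' code.

    import torch
    import torch.nn as nn

    class MVPBlock(nn.Module):
        """Three plane-shaped 3D convs (one per orthogonal view) + a pointwise conv."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.axial    = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1))
            self.sagittal = nn.Conv3d(in_ch, out_ch, kernel_size=(3, 3, 1), padding=(1, 1, 0))
            self.coronal  = nn.Conv3d(in_ch, out_ch, kernel_size=(3, 1, 3), padding=(1, 0, 1))
            self.pointwise = nn.Conv3d(out_ch, out_ch, kernel_size=1)  # channel mixing
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            views = self.axial(x) + self.sagittal(x) + self.coronal(x)  # fuse the three views
            return self.act(self.pointwise(self.act(views)))

    block = MVPBlock(in_ch=4, out_ch=16)     # e.g. 4 MRI modalities in, 16 feature maps out
    vol = torch.randn(1, 4, 32, 64, 64)      # (batch, channels, D, H, W)
    print(block(vol).shape)                  # torch.Size([1, 16, 32, 64, 64])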

Contour interpolation by deep learning approach

  • Zhao, C.
  • Duan, Y.
  • Yang, D.
J Med Imaging (Bellingham) 2022 Journal Article, cited 0 times
Website
PURPOSE: Contour interpolation is an important tool for expediting manual segmentation of anatomical structures. The process allows users to contour manually on discontinuous slices and then automatically fills in the gaps, saving time and effort. The commonly used conventional shape-based interpolation (SBI) algorithm, which operates on shape information, often performs suboptimally near the superior and inferior borders of organs and for gastrointestinal structures. In this study, we present a generic deep learning solution to improve the robustness and accuracy of contour interpolation, especially for these historically difficult cases. APPROACH: A generic deep contour interpolation model was developed and trained using 16,796 publicly available cases from 5 different data libraries, covering 15 organs. The network inputs were a 128 x 128 x 5 image patch and the two-dimensional contour masks for the top and bottom slices of the patch. The outputs were the organ masks for the three middle slices. Performance was evaluated on both Dice scores and distance-to-agreement (DTA) values. RESULTS: The deep contour interpolation model achieved a Dice score of 0.95 +/- 0.05 and a mean DTA value of 1.09 +/- 2.30 mm, averaged over 3167 testing cases of all 15 organs. In comparison, the results of the conventional SBI method were 0.94 +/- 0.08 and 1.50 +/- 3.63 mm, respectively. For the difficult cases, the Dice score and DTA value were 0.91 +/- 0.09 and 1.68 +/- 2.28 mm for the deep interpolator, compared with 0.86 +/- 0.13 and 3.43 +/- 5.89 mm for SBI. The t-test results confirmed that the performance improvements were statistically significant (p < 0.05) for all cases in Dice scores and for small organs and difficult cases in DTA values. Ablation studies were also performed. CONCLUSIONS: A deep learning method was developed to enhance the process of contour interpolation. It could be useful for expediting the tasks of manual segmentation of organs and structures in medical images.
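
As a concrete reading of the stated input/output contract (a 128 x 128 x 5 image patch plus top and bottom contour masks in, masks for the three middle slices out), here is a minimal PyTorch sketch; the two-layer network body is a placeholder assumption, not the paper's architecture.

    import torch
    import torch.nn as nn

    class ContourInterpolatorSketch(nn.Module):
        def __init__(self):
            super().__init__()
            # 5 image slices + top mask + bottom mask = 7 input channels
            self.net = nn.Sequential(
                nn.Conv2d(7, 32, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 3, 3, padding=1),   # 3 output channels: the middle-slice masks
            )

        def forward(self, patch, top_mask, bottom_mask):
            x = torch.cat([patch, top_mask, bottom_mask], dim=1)
            return torch.sigmoid(self.net(x))     # per-pixel mask probabilities

    model = ContourInterpolatorSketch()
    patch = torch.randn(1, 5, 128, 128)           # the 128 x 128 x 5 image patch
    top, bottom = torch.zeros(1, 1, 128, 128), torch.zeros(1, 1, 128, 128)
    print(model(patch, top, bottom).shape)        # torch.Size([1, 3, 128, 128])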

Reproducibility of radiomics for deciphering tumor phenotype with imaging

  • Zhao, Binsheng
  • Tan, Yongqiang
  • Tsai, Wei-Yann
  • Qi, Jing
  • Xie, Chuanmiao
  • Lu, Lin
  • Schwartz, Lawrence H
Scientific Reports 2016 Journal Article, cited 91 times
Website

CNN-Based Fully Automatic Glioma Classification with Multi-modal Medical Images

  • Zhao, Bingchao
  • Huang, Jia
  • Liang, Changhong
  • Liu, Zaiyi
  • Han, Chu
2021 Book Section, cited 0 times
The accurate classification of gliomas is essential in clinical practice. It is valuable for clinical practitioners and patients to choose the appropriate management accordingly, promoting the development of personalized medicine. In the MICCAI 2020 Combined Radiology and Pathology Classification Challenge, 4 MRI sequences and a WSI image are provided for each patient, and participants are required to use the multi-modal images to predict the subtype of glioma. In this paper, we propose a fully automated pipeline for glioma classification. Our proposed model consists of two parts, feature extraction and feature fusion, which are respectively responsible for extracting representative image features and making the prediction. Specifically, we propose a segmentation-free, self-supervised feature extraction network for the 3D MRI volume, and a feature extraction model for the H&E-stained WSI that associates traditional image processing methods with a convolutional neural network. Finally, we fuse the extracted features from the multi-modal images and use a densely connected neural network to predict the final classification results. We evaluate the proposed model with F1-score, Cohen's kappa, and balanced accuracy on the validation set, achieving 0.943, 0.903, and 0.889, respectively.

Improving the fidelity of CT image colorization based on pseudo-intensity model and tumor metabolism enhancement

  • Zhang, Z.
  • Jiang, H.
  • Liu, J.
  • Shi, T.
Comput Biol Med 2021 Journal Article, cited 0 times
Website
BACKGROUND: Owing to the principles of their acquisition, most medical images are gray-scale. Human eyes are more sensitive to color images than to gray-scale images, yet state-of-the-art medical image colorization results are unnatural and unrealistic, especially in some organs such as the lung field. METHOD: We propose a CT image colorization network that consists of a pseudo-intensity model, tumor metabolic enhancement, and the MemoPainter-cGAN colorization network. First, the distributions of both the density of CT images and the intensity of anatomical images are analyzed with the aim of building a pseudo-intensity model. Then, PET images, which are sensitive to tumor metabolism, are used to highlight the tumor regions. Finally, the MemoPainter-cGAN is used to generate colorized anatomical images. RESULTS: Our experiment verified that the mean structural similarity between the colorized images and the original color images is 0.995, which indicates that the colorized images largely maintain the features of the original images. The average image information entropy is 6.62, which is 13.4% higher than that of the images before metabolism enhancement and colorization, indicating that the image fidelity is significantly improved. CONCLUSIONS: Our method can generate vivid anatomical images based on prior knowledge of tissue or organ intensity. The colorized PET/CT images, with abundant anatomical knowledge and high sensitivity to metabolic information, provide radiologists with a new modality that offers additional reference information.

Utility of Brain Parcellation in Enhancing Brain Tumor Segmentation and Survival Prediction

  • Zhang, Yue
  • Wu, Jiewei
  • Huang, Weikai
  • Chen, Yifan
  • Wu, Ed X.
  • Tang, Xiaoying
2021 Book Section, cited 0 times
In this paper, we proposed a UNet-based brain tumor segmentation method and a linear model-based survival prediction method. The effectiveness of UNet has been validated in automatically segmenting brain tumors from multimodal magnetic resonance (MR) images. Rather than network architecture, we focused more on making use of additional information (brain parcellation), training and testing strategy (coarse-to-fine), and ensemble technique to improve the segmentation performance. We then developed a linear classification model for survival prediction. Different from previous studies that mainly employ features from brain tumor segmentation, we also extracted features from brain parcellation, which further improved the prediction accuracy. On the challenge testing dataset, the proposed approach yielded average Dice scores of 88.43%, 84.51%, and 78.93% for the whole tumor, tumor core, and enhancing tumor in the segmentation task and an overall accuracy of 0.533 in the survival prediction task.

GenU-Net++: An Automatic Intracranial Brain Tumors Segmentation Algorithm on 3D Image Series with High Performance

  • Zhang, Yan
  • Liu, Xi
  • Wa, Shiyun
  • Liu, Yutong
  • Kang, Jiali
  • Lv, Chunli
Symmetry 2021 Journal Article, cited 0 times
Website
Automatic segmentation of intracranial brain tumors in three-dimensional (3D) image series is critical in screening and diagnosing related diseases. However, intracranial brain tumor images pose various challenges: (1) multiple brain tumor categories hold particular pathological features; (2) it is a thorny issue to locate and discern brain tumors from other non-brain regions due to their complicated structure; (3) traditional segmentation requires a noticeable difference in the brightness of the target of interest relative to the background; (4) brain tumor magnetic resonance images (MRI) have blurred boundaries, similar gray values, and low image contrast; and (5) image detail can be lost while suppressing noise. Existing methods and algorithms do not overcome these obstacles satisfactorily, and most share inadequate accuracy in brain tumor segmentation. Considering that the image segmentation task is a symmetric process in which downsampling and upsampling are performed sequentially, this paper proposes a segmentation algorithm based on U-Net++, aiming to address the aforementioned problems. This paper uses the BraTS 2018 dataset, which contains MR images of 245 patients. We propose a generative mask sub-network that can generate feature maps. This paper also uses the BiCubic interpolation method for upsampling to obtain segmentation results different from those of U-Net++. Subsequently, pixel-weighted fusion is adopted to fuse the two segmentation results, thereby improving the robustness and segmentation performance of the model. At the same time, we propose an auto-pruning mechanism that exploits the architectural features of U-Net++ itself. This mechanism deactivates a sub-network by zeroing its input and automatically prunes GenU-Net++ during the inference process, increasing the inference speed and improving the network performance by preventing overfitting. The algorithm's pixel accuracy (PA), mean intersection over union (MIoU), precision (P), and recall (R) on the validation dataset reach 0.9737, 0.9745, 0.9646, and 0.9527, respectively. The experimental results demonstrate that the proposed model outperforms the contrast models. Additionally, we encapsulate the model and develop a corresponding application based on the macOS platform to make the model further applicable.

DDTNet: A dense dual-task network for tumor-infiltrating lymphocyte detection and segmentation in histopathological images of breast cancer

  • Zhang, Xiaoxuan
  • Zhu, Xiongfeng
  • Tang, Kai
  • Zhao, Yinghua
  • Lu, Zixiao
  • Feng, Qianjin
Med Image Anal 2022 Journal Article, cited 1 times
Website
The morphological evaluation of tumor-infiltrating lymphocytes (TILs) in hematoxylin and eosin (H&E)-stained histopathological images is key to breast cancer (BCa) diagnosis, prognosis, and therapeutic response prediction. At present, the qualitative assessment of TILs is carried out by pathologists, and computer-aided automatic lymphocyte measurement is still a great challenge because of the small size and complex distribution of lymphocytes. In this paper, we propose a novel dense dual-task network (DDTNet) to simultaneously achieve automatic TIL detection and segmentation in histopathological images. DDTNet consists of a backbone network (i.e., a feature pyramid network) for extracting multi-scale morphological characteristics of TILs, a detection module for the localization of TIL centers, and a segmentation module for the delineation of TIL boundaries, where a boundary-aware branch is further used to provide a shape prior for segmentation. An effective feature fusion strategy is utilized to introduce multi-scale features with lymphocyte location information from highly correlated branches for precise segmentation. Experiments on three independent lymphocyte datasets of BCa demonstrate that DDTNet outperforms other advanced methods in detection and segmentation metrics. As part of this work, we also propose a semi-automatic method (TILAnno) to generate high-quality boundary annotations for TILs in H&E-stained histopathological images. TILAnno is used to produce a new lymphocyte dataset that contains 5029 annotated lymphocyte boundaries, which have been released to facilitate computational histopathology in the future.

Spline curve deformation model with prior shapes for identifying adhesion boundaries between large lung tumors and tissues around lungs in CT images

  • Zhang, Xin
  • Wang, Jie
  • Yang, Ying
  • Wang, Bing
  • Gu, Lixu
Med Phys 2020 Journal Article, cited 0 times
Website
PURPOSE: Automated segmentation of lung tumors attached to anatomic structures such as the chest wall or mediastinum remains a technical challenge because of the similar Hounsfield units of these structures. To address this challenge, we propose herein a spline curve deformation model that combines prior shapes to correct large spatially contiguous errors (LSCEs) in input shapes derived from image-appearance cues. The model is then used to identify the adhesion boundaries between large lung tumors and tissue around the lungs. METHODS: The deformation of the whole curve is driven by the transformation of the control points (CPs) of the spline curve, which are influenced by external and internal forces. The external force drives the model to fit the positions of the non-LSCEs of the input shapes while the internal force ensures the local similarity of the displacements of the neighboring CPs. The proposed model corrects the gross errors in the lung input shape caused by large lung tumors, where the initial lung shape for the model is inferred from the training shapes by shape group-based sparse prior information and the input lung shape is inferred by adaptive-thresholding-based segmentation followed by morphological refinement. RESULTS: The accuracy of the proposed model is verified by applying it to images of lungs with either moderate large-sized (ML) tumors or giant large-sized (GL) tumors. The quantitative results in terms of the averages of the Dice similarity coefficient (DSC) and the Jaccard similarity index (SI) are 0.982 +/- 0.006 and 0.965 +/- 0.012 for segmentation of lungs adhered by ML tumors, and 0.952 +/- 0.048 and 0.926 +/- 0.059 for segmentation of lungs adhered by GL tumors, which give 0.943 +/- 0.021 and 0.897 +/- 0.041 for segmentation of the ML tumors, and 0.907 +/- 0.057 and 0.888 +/- 0.091 for segmentation of the GL tumors, respectively. In addition, the bidirectional Hausdorff distances are 5.7 +/- 1.4 and 11.3 +/- 2.5 mm for segmentation of lungs with ML and GL tumors, respectively. CONCLUSIONS: When combined with prior shapes, the proposed spline curve deformation can deal with large spatially consecutive errors in object shapes obtained from image-appearance information. We verified this method by applying it to the segmentation of lungs with large tumors adhered to the tissue around the lungs and the large tumors. Both the qualitative and quantitative results are more accurate and repeatable than results obtained with current state-of-the-art techniques.
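
To make the control-point mechanic concrete, the following sketch fits a closed B-spline through a ring of control points with scipy and deforms the contour by displacing two of them. The circle geometry and the toy "external force" are illustrative assumptions, not the paper's model.

    import numpy as np
    from scipy.interpolate import splprep, splev

    # Control points on a closed contour (a circle; first and last points coincide).
    theta = np.linspace(0.0, 2.0 * np.pi, 21)
    cps = np.stack([np.cos(theta), np.sin(theta)])

    # Toy "external force": displace two neighboring control points outward.
    cps_moved = cps.copy()
    cps_moved[:, 3] *= 1.3
    cps_moved[:, 4] *= 1.3

    # Fit a periodic (closed) B-spline through the moved control points,
    # then evaluate the deformed contour densely.
    tck, _ = splprep([cps_moved[0], cps_moved[1]], s=0, per=True)
    x, y = splev(np.linspace(0, 1, 200), tck)
    print(x.shape, y.shape)   # (200,) (200,)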

Radiomics Strategy for Molecular Subtype Stratification of Lower‐Grade Glioma: Detecting IDH and TP53 Mutations Based on Multimodal MRI

  • Zhang, Xi
  • Tian, Qiang
  • Wang, Liang
  • Liu, Yang
  • Li, Baojuan
  • Liang, Zhengrong
  • Gao, Peng
  • Zheng, Kaizhong
  • Zhao, Bofeng
  • Lu, Hongbing
Journal of Magnetic Resonance Imaging 2018 Journal Article, cited 5 times
Website

A radiomics nomogram based on multiparametric MRI might stratify glioblastoma patients according to survival

  • Zhang, Xi
  • Lu, Hongbing
  • Tian, Qiang
  • Feng, Na
  • Yin, Lulu
  • Xu, Xiaopan
  • Du, Peng
  • Liu, Yang
European Radiology 2019 Journal Article, cited 0 times

Magnetic resonance imaging-based radiomic features for extrapolating infiltration levels of immune cells in lower-grade gliomas

  • Zhang, X.
  • Liu, S.
  • Zhao, X.
  • Shi, X.
  • Li, J.
  • Guo, J.
  • Niedermann, G.
  • Luo, R.
  • Zhang, X.
Strahlentherapie und Onkologie 2020 Journal Article, cited 3 times
Website
PURPOSE: To extrapolate the infiltration levels of immune cells in patients with lower-grade gliomas (LGGs) using magnetic resonance imaging (MRI)-based radiomic features. METHODS: A retrospective dataset of 516 patients with LGGs from The Cancer Genome Atlas (TCGA) database was analysed for the infiltration levels of six types of immune cells using the Tumor IMmune Estimation Resource (TIMER) based on RNA sequencing data. Radiomic features were extracted from 107 patients whose pre-operative MRI data are available in The Cancer Imaging Archive; 85 and 22 of these patients were assigned to the training and testing cohorts, respectively. The least absolute shrinkage and selection operator (LASSO) was applied to select optimal radiomic features and build radiomic signatures for extrapolating the infiltration levels of immune cells in the training cohort. The developed radiomic signatures were examined in the testing cohort using Pearson's correlation. RESULTS: The infiltration levels of B cells, CD4+ T cells, CD8+ T cells, macrophages, neutrophils and dendritic cells negatively correlated with overall survival in the 516-patient cohort in univariate Cox regression. Age, Karnofsky Performance Scale, WHO grade, isocitrate dehydrogenase mutant status and the infiltration of neutrophils correlated with survival in multivariate Cox regression analysis. The infiltration levels of the six cell types could be estimated by radiomic features in the training cohort, and their corresponding radiomic signatures were built. The infiltration levels of B cells, CD8+ T cells, neutrophils and macrophages estimated by radiomics correlated with those estimated by TIMER in the testing cohort. Combining clinical/genomic features with the radiomic signatures only slightly improved the prediction of immune cell infiltration. CONCLUSION: We developed MRI-based radiomic models for extrapolating the infiltration levels of immune cells in LGGs. Our results may have implications for treatment planning.
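
The LASSO step above can be sketched in a few lines of scikit-learn: fit LassoCV to select radiomic features that predict an infiltration level, then check the held-out Pearson correlation. All data and dimensions below are synthetic stand-ins, not values from the study.

    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.linear_model import LassoCV
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(107, 200))       # 107 patients x radiomic features (illustrative)
    # Synthetic "infiltration level" driven by the first five features plus noise.
    y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.5, size=107)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=22, random_state=0)

    lasso = LassoCV(cv=5, random_state=0).fit(X_tr, y_tr)
    selected = np.flatnonzero(lasso.coef_)   # features with non-zero coefficients
    r, p = pearsonr(y_te, lasso.predict(X_te))
    print(f"{selected.size} features selected; testing Pearson r = {r:.2f} (p = {p:.3f})")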

Ability of (18)F-FDG Positron Emission Tomography Radiomics and Machine Learning in Predicting KRAS Mutation Status in Therapy-Naive Lung Adenocarcinoma

  • Zhang, R.
  • Shi, K.
  • Hohenforst-Schmidt, W.
  • Steppert, C.
  • Sziklavari, Z.
  • Schmidkonz, C.
  • Atzinger, A.
  • Hartmann, A.
  • Vieth, M.
  • Forster, S.
Cancers (Basel) 2023 Journal Article, cited 0 times
Website
OBJECTIVE: Considering the essential role of KRAS mutation in NSCLC and the limited experience of PET radiomic features in KRAS mutation, a prediction model was built in our current analysis. Our model aims to evaluate the status of KRAS mutants in lung adenocarcinoma by combining PET radiomics and machine learning. METHOD: Patients were retrospectively selected from our database and screened from the NSCLC radiogenomic dataset from TCIA. The dataset was randomly divided into three subgroups. Two open-source software programs, 3D Slicer and Python, were used to segment lung tumours and extract radiomic features from (18)F-FDG-PET images. Feature selection was performed by the Mann-Whitney U test, Spearman's rank correlation coefficient, and RFE. Logistic regression was used to build the prediction models. AUCs from ROCs were used to compare the predictive abilities of the models. Calibration plots were obtained to examine the agreements of observed and predictive values in the validation and testing groups. DCA curves were performed to check the clinical impact of the best model. Finally, a nomogram was obtained to present the selected model. RESULTS: One hundred and nineteen patients with lung adenocarcinoma were included in our study. The whole group was divided into three datasets: a training set (n = 96), a validation set (n = 11), and a testing set (n = 12). In total, 1781 radiomic features were extracted from PET images. One hundred sixty-three predictive models were established according to each original feature group and their combinations. After model comparison and selection, one model, including wHLH_fo_IR, wHLH_glrlm_SRHGLE, wHLH_glszm_SAHGLE, and smoking habits, was validated with the highest predictive value. The model obtained AUCs of 0.731 (95% CI: 0.619~0.843), 0.750 (95% CI: 0.248~1.000), and 0.750 (95% CI: 0.448~1.000) in the training set, the validation set and the testing set, respectively. Results from calibration plots in validation and testing groups indicated that there was no departure between observed and predictive values in the two datasets (p = 0.377 and 0.861, respectively). CONCLUSIONS: Our model combining (18)F-FDG-PET radiomics and machine learning indicated a good predictive ability of KRAS status in lung adenocarcinoma. It may be a helpful non-invasive method to screen the KRAS mutation status of heterogenous lung adenocarcinoma before selected biopsy sampling.

A region-adaptive non-local denoising algorithm for low-dose computed tomography images

  • Zhang, Pengcheng
  • Liu, Yi
  • Gui, Zhiguo
  • Chen, Yang
  • Jia, Lina
Mathematical Biosciences and Engineering 2022 Journal Article, cited 0 times
Low-dose computed tomography (LDCT) can effectively reduce radiation exposure in patients. However, with such dose reductions, large increases in speckled noise and streak artifacts occur, resulting in seriously degraded reconstructed images. The non-local means (NLM) method has shown potential for improving the quality of LDCT images. In the NLM method, similar blocks are obtained using fixed directions over a fixed range. However, the denoising performance of this method is limited. In this paper, a region-adaptive NLM method is proposed for LDCT image denoising. In the proposed method, pixels are classified into different regions according to the edge information of the image. Based on the classification results, the adaptive searching window, block size and filter smoothing parameter could be modified in different regions. Furthermore, the candidate pixels in the searching window could be filtered based on the classification results. In addition, the filter parameter could be adjusted adaptively based on intuitionistic fuzzy divergence (IFD). The experimental results showed that the proposed method performed better in LDCT image denoising than several of the related denoising methods in terms of numerical results and visual quality.
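
For orientation, here is a fixed-parameter non-local means baseline using scikit-image. The paper's contribution, adapting the search window, block size, and smoothing parameter per region, is only gestured at here with a crude edge map that such a method could use to classify pixels; everything below is an illustrative stand-in, not the authors' algorithm.

    import numpy as np
    from skimage import data, filters, img_as_float
    from skimage.restoration import denoise_nl_means, estimate_sigma

    img = img_as_float(data.camera())                 # stand-in for an LDCT slice
    noisy = img + np.random.default_rng(0).normal(scale=0.08, size=img.shape)

    sigma = float(np.mean(estimate_sigma(noisy)))     # noise level estimate
    denoised = denoise_nl_means(noisy, patch_size=5, patch_distance=6,
                                h=0.8 * sigma, fast_mode=True)

    # Crude edge/flat region classification, the kind of map a region-adaptive
    # method could use to vary its window, block size, and filter parameter.
    edges = filters.sobel(img) > 0.05
    print(f"estimated sigma: {sigma:.3f}; edge pixels: {edges.mean():.1%}")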

Generalizing Deep Learning for Medical Image Segmentation to Unseen Domains via Deep Stacked Transformation

  • Zhang, Ling
  • Xu, Daguang
  • Xu, Ziyue
  • Wang, Xiaosong
  • Yang, Dong
  • Sanford, Thomas
  • Harmon, Stephanie
  • Turkbey, Baris
  • Wood, Bradford J
  • Roth, Holger
  • Myronenko, Andriy
IEEE Trans Med Imaging 2020 Journal Article, cited 0 times
Website
Recent advances in deep learning for medical image segmentation demonstrate expert-level accuracy. However, application of these models in clinically realistic environments can result in poor generalization and decreased accuracy, mainly due to the domain shift across different hospitals, scanner vendors, imaging protocols, patient populations, etc. Common transfer learning and domain adaptation techniques have been proposed to address this bottleneck. However, these solutions require data (and annotations) from the target domain to retrain the model, and are therefore restrictive in practice for widespread model deployment. Ideally, we wish to have a trained (locked) model that can work uniformly well across unseen domains without further training. In this paper, we propose a deep stacked transformation approach for domain generalization. Specifically, a series of n stacked transformations are applied to each image during network training. The underlying assumption is that the "expected" domain shift for a specific medical imaging modality could be simulated by applying extensive data augmentation on a single source domain, and consequently, a deep model trained on the augmented "big" data (BigAug) could generalize well on unseen domains. We exploit four surprisingly effective, but previously understudied, image-based characteristics for data augmentation to overcome the domain generalization problem. We train and evaluate the BigAug model (with n = 9 transformations) on three different 3D segmentation tasks (prostate gland, left atrium, left ventricle) covering two medical imaging modalities (MRI and ultrasound) involving eight publicly available challenge datasets. The results show that when training on a relatively small dataset (n=10~32 volumes, depending on the size of the available datasets) from a single source domain: (i) BigAug models degrade an average of 11% (Dice score change) from source to unseen domain, substantially better than conventional augmentation (degrading 39%) and the CycleGAN-based domain adaptation method (degrading 25%); (ii) BigAug is better than "shallower" stacked transforms (i.e., those with fewer transforms) on unseen domains and demonstrates modest improvement over conventional augmentation on the source domain; (iii) after training with BigAug on one source domain, performance on an unseen domain is similar to training a model from scratch on that domain when using the same number of training samples. When training on large datasets (n=465 volumes) with BigAug, (iv) application to unseen domains reaches the performance of state-of-the-art fully supervised models that are trained and tested on their source domains. These findings establish a strong benchmark for the study of domain generalization in medical imaging, and can be generalized to the design of highly robust deep segmentation models for clinical deployment.
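
A minimal sketch of the stacked-transformation idea: chain several randomized image-based augmentations per training sample. The paper stacks nine transforms spanning image quality, appearance, and spatial configuration; the three used here (blur/noise, gamma, rotation) are representative stand-ins, not the paper's exact set or parameter ranges.

    import numpy as np
    from scipy.ndimage import gaussian_filter, rotate

    rng = np.random.default_rng(0)

    def stacked_transform(img):
        """Apply a chain of randomized transforms to one 2D slice."""
        img = gaussian_filter(img, sigma=rng.uniform(0.0, 1.0))                # quality: blur
        img = img + rng.normal(scale=rng.uniform(0.0, 0.05), size=img.shape)  # quality: noise
        lo, hi = img.min(), img.max()
        img = ((img - lo) / (hi - lo + 1e-8)) ** rng.uniform(0.7, 1.5)        # appearance: gamma
        img = rotate(img, angle=rng.uniform(-20, 20), reshape=False, order=1)  # spatial: rotation
        return img

    slice_ = rng.normal(size=(128, 128)).astype(np.float32)  # stand-in for an image slice
    augmented = stacked_transform(slice_)
    print(augmented.shape)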

Comparison of CT and MRI images for the prediction of soft-tissue sarcoma grading and lung metastasis via a convolutional neural networks model

  • Zhang, L.
  • Ren, Z.
Clin Radiol 2019 Journal Article, cited 0 times
Website
AIM: To realise the automated prediction of soft-tissue sarcoma (STS) grading and lung metastasis based on computed tomography (CT), T1-weighted (T1W) magnetic resonance imaging (MRI), and fat-suppressed T2-weighted MRI (FST2W) via a convolutional neural network (CNN) model. MATERIALS AND METHODS: MRI and CT images of 51 patients diagnosed with STS were analysed retrospectively. The patients could be divided into three groups based on disease grading: a high-grade group (n=28), an intermediate-grade group (n=15), and a low-grade group (n=8). Among these patients, 32 had lung metastasis, while the remaining 19 had no lung metastasis. The data were divided into training, validation, and testing groups according to a ratio of 5:2:3. Receiver operating characteristic (ROC) curves and accuracy values were acquired using the testing dataset to evaluate the performance of the CNN model. RESULTS: For STS grading, the accuracy of the T1W, FST2W, CT, and fused T1W and FST2W testing data was 0.86, 0.89, 0.86, and 0.85, respectively, and the corresponding area under the curve (AUC) values were 0.96, 0.97, 0.97, and 0.94. For the prediction of lung metastasis, the accuracy of the T1W, FST2W, CT, and fused T1W and FST2W test data was 0.92, 0.93, 0.88, and 0.91, respectively, with corresponding AUC values of 0.97, 0.96, 0.95, and 0.95. FST2W MRI performed best for predicting both STS grading and lung metastasis. CONCLUSION: MRI and CT images combined with the CNN model can be useful for making predictions regarding STS grading and lung metastasis, thus providing help for patient diagnosis and treatment.

A Deep Generative Model-Integrated Framework for Three-Dimensional Time-Difference Electrical Impedance Tomography

  • Zhang, Ke
  • Wang, Lu
  • Guo, Rui
  • Lin, Zhichao
  • Li, Maokun
  • Yang, Fan
  • Xu, Shenheng
  • Abubakar, Aria
2022 Journal Article, cited 0 times
The time-difference image reconstruction problem of electrical impedance tomography (EIT) refers to reconstructing the conductivity change in a human body part between two time points using boundary impedance measurements. Conventionally, the problem can be formulated as a linear inverse problem. However, due to the physical properties of the forward process, the inverse problem is seriously ill-posed. As a result, traditional regularized least-squares-based methods usually produce low-resolution images that are difficult to interpret. This work proposes a framework that uses a deep generative model to constrain the unknown conductivity. Specifically, the framework allows the inclusion of a constraint that describes a mathematical relationship between the generative model and the unknown conductivity. The resultant constrained minimization problem is solved using an extended alternating direction method of multipliers (ADMM). The effectiveness of the framework is demonstrated on the example of three-dimensional time-difference chest EIT imaging. Numerical experiments show a significant improvement in image quality compared with the total-variation-regularized least-squares method (PSNR is improved by 4.3% for 10% noise and 4.6% for 30% noise; SSIM is improved by 4.8% for 10% noise and 6.0% for 30% noise). Human experiments show improved correlation between the reconstructed images and images from reference techniques.

AResU-Net: Attention Residual U-Net for Brain Tumor Segmentation

  • Zhang, J. X.
  • Lv, X. G.
  • Zhang, H. B.
  • Liu, B.
2020 Journal Article, cited 0 times
Automatic segmentation of brain tumors from magnetic resonance imaging (MRI) is a challenging task due to the uneven, irregular, and unstructured size and shape of tumors. Recently, brain tumor segmentation methods based on the symmetric U-Net architecture have achieved favorable performance, and the effectiveness of enhancing local responses for feature extraction and restoration has also been shown in recent works, which may encourage better performance on the brain tumor segmentation problem. Inspired by this, we introduce an attention mechanism into the existing U-Net architecture to explore the effects of locally important responses on this task. More specifically, we propose an end-to-end 2D brain tumor segmentation network, the attention residual U-Net (AResU-Net), which simultaneously embeds an attention mechanism and residual units into U-Net to further improve brain tumor segmentation. AResU-Net adds a series of attention units between corresponding down-sampling and up-sampling processes, adaptively rescaling features to effectively enhance the local responses of the down-sampled residual features used for feature recovery in the following up-sampling process. We extensively evaluate AResU-Net on two MRI brain tumor segmentation benchmarks, the BraTS 2017 and BraTS 2018 datasets. Experimental results illustrate that the proposed AResU-Net outperforms its baselines and achieves comparable performance with typical brain tumor segmentation methods.
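
One plausible form of the attention unit described, channel-wise rescaling of skip features in a squeeze-and-excitation style, sketched in PyTorch; the paper's exact unit may differ from this assumption.

    import torch
    import torch.nn as nn

    class AttentionUnit(nn.Module):
        """Rescale skip features adaptively before they are fused into the decoder."""
        def __init__(self, channels, reduction=8):
            super().__init__()
            self.gate = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),                       # squeeze: global context
                nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
            )

        def forward(self, skip):
            return skip * self.gate(skip)                      # rescale local responses

    att = AttentionUnit(channels=64)
    skip_feat = torch.randn(2, 64, 56, 56)   # encoder (down-sampling) features
    print(att(skip_feat).shape)              # torch.Size([2, 64, 56, 56])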

Comparing effectiveness of image perturbation and test retest imaging in improving radiomic model reliability

  • Zhang, J.
  • Teng, X.
  • Zhang, X.
  • Lam, S. K.
  • Lin, Z.
  • Liang, Y.
  • Yu, H.
  • Siu, S. W. K.
  • Chang, A. T. Y.
  • Zhang, H.
  • Kong, F. M.
  • Yang, R.
  • Cai, J.
2023 Journal Article, cited 0 times
Website
Image perturbation is a promising technique for assessing radiomic feature repeatability, but whether it can achieve the same effect as test-retest imaging on model reliability is unknown. This study aimed to compare radiomic model reliability based on repeatable features determined by the two methods, using four different classifiers. A 191-patient public breast cancer dataset with 71 test-retest scans was used, with 117 pre-determined training and 74 testing samples. We collected apparent diffusion coefficient images and manual tumor segmentations for radiomic feature extraction. Random translations, rotations, and contour randomizations were performed on the training images, and the intra-class correlation coefficient (ICC) was used to filter for highly repeatable features. We evaluated model reliability in terms of both internal generalizability and robustness, quantified by training and testing AUC and prediction ICC. Higher testing performance was found at higher feature ICC thresholds, but it dropped significantly at ICC = 0.95 for the test-retest model. Similar optimal reliability can be achieved, with testing AUC = 0.7-0.8 and prediction ICC > 0.9, at an ICC threshold of 0.9. We recommend including a feature repeatability analysis using image perturbation in any radiomic study when test-retest is not feasible, but care should be taken when deciding the optimal feature repeatability criteria.
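
A minimal sketch of the repeatability filter both methods rely on: compute a per-feature ICC between the original and perturbed (or retest) extractions and keep features above a threshold. It assumes the pingouin package's intraclass_corr; the data, the 0.2 perturbation noise, and the feature counts are synthetic stand-ins.

    import numpy as np
    import pandas as pd
    import pingouin as pg

    rng = np.random.default_rng(0)
    n_patients, n_features = 40, 10
    original = rng.normal(size=(n_patients, n_features))
    perturbed = original + rng.normal(scale=0.2, size=original.shape)  # perturbed "retest"

    robust = []
    for j in range(n_features):
        df = pd.DataFrame({
            "patient": np.tile(np.arange(n_patients), 2),
            "scan": ["orig"] * n_patients + ["perturb"] * n_patients,
            "value": np.concatenate([original[:, j], perturbed[:, j]]),
        })
        icc = pg.intraclass_corr(data=df, targets="patient", raters="scan", ratings="value")
        icc11 = icc.loc[icc["Type"] == "ICC1", "ICC"].item()  # ICC(1,1)
        if icc11 >= 0.9:                 # repeatability threshold used in the study
            robust.append(j)

    print(f"{len(robust)} / {n_features} features pass ICC >= 0.9")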

A fully automatic extraction of magnetic resonance image features in glioblastoma patients

  • Zhang, Jing
  • Barboriak, Daniel P
  • Hobbs, Hasan
  • Mazurowski, Maciej A
Medical Physics 2014 Journal Article, cited 21 times
Website
PURPOSE: Glioblastoma is the most common malignant brain tumor. It is characterized by low median survival time and high survival variability. Survival prognosis for glioblastoma is very important for optimized treatment planning. Imaging features observed in magnetic resonance (MR) images were shown to be a good predictor of survival. However, manual assessment of MR features is time-consuming and can be associated with a high inter-reader variability as well as inaccuracies in the assessment. In response to this limitation, the authors proposed and evaluated a computer algorithm that extracts important MR image features in a fully automatic manner. METHODS: The algorithm first automatically segmented the available volumes into a background region and four tumor regions. Then, it extracted ten features from the segmented MR imaging volumes, some of which were previously indicated as predictive of clinical outcomes. To evaluate the algorithm, the authors compared the extracted features for 73 glioblastoma patients to the reference standard established by manual segmentation of the tumors. RESULTS: The experiments showed that their algorithm was able to extract most of the image features with moderate to high accuracy. High correlation coefficients between the automatically extracted value and reference standard were observed for the tumor location, minor and major axis length as well as tumor volume. Moderately high correlation coefficients were also observed for proportion of enhancing tumor, proportion of necrosis, and thickness of enhancing margin. The correlation coefficients for all these features were statistically significant (p < 0.0001). CONCLUSIONS: The authors proposed and evaluated an algorithm that, given a set of MR volumes of a glioblastoma patient, is able to extract MR image features that correlate well with their reference standard. Future studies will evaluate how well the computer-extracted features predict survival.

DDU-Nets: Distributed Dense Model for 3D MRI Brain Tumor Segmentation

  • Zhang, Hanxiao
  • Li, Jingxiong
  • Shen, Mali
  • Wang, Yaqi
  • Yang, Guang-Zhong
2020 Book Section, cited 0 times
Segmentation of brain tumors and their subregions remains a challenging task due to their weak features and deformable shapes. In this paper, three patterns (cross-skip, skip-1 and skip-2) of distributed dense connections (DDCs) are proposed to enhance feature reuse and propagation of CNNs by constructing tunnels between key layers of the network. For better detecting and segmenting brain tumors from multi-modal 3D MR images, CNN-based models embedded with DDCs (DDU-Nets) are trained efficiently from pixel to pixel with a limited number of parameters. Postprocessing is then applied to refine the segmentation results by reducing the false-positive samples. The proposed method is evaluated on the BraTS 2019 dataset with results demonstrating the effectiveness of the DDU-Nets while requiring less computational cost.

Automatic lung tumor segmentation from CT images using improved 3D densely connected UNet

  • Zhang, G.
  • Yang, Z.
  • Jiang, S.
2022 Journal Article, cited 0 times
Website
Accurate lung tumor segmentation has great significance in the treatment planning of lung cancer. However, robust lung tumor segmentation is challenging due to the heterogeneity of tumors and the similar visual characteristics of tumors and surrounding tissues. Hence, we developed an improved 3D densely connected UNet (I-3D DenseUNet) to segment various lung tumors from CT images. The nested dense skip connections adopted in the I-3D DenseUNet aim to contribute similar feature maps between the encoder and decoder sub-networks, and the dense connections used in the encoder-decoder blocks also encourage feature propagation and reuse. A robust data augmentation strategy based on a 3D thin-plate spline (TPS) algorithm was employed to alleviate over-fitting. We evaluated our method on 938 lung tumors from three datasets: 421 tumors from The Cancer Imaging Archive (TCIA), 450 malignant tumors from the Lung Image Database Consortium (LIDC), and 67 tumors from a private dataset. Experimental results showed excellent Dice similarity coefficients (DSC) of 0.8316 for the TCIA and LIDC datasets and 0.8167 for the private dataset. The proposed method presents a strong ability in lung tumor segmentation and has the potential to help radiologists in lung cancer treatment planning.

Automatic segmentation of organs at risk and tumors in CT images of lung cancer from partially labelled datasets with a semi-supervised conditional nnU-Net

  • Zhang, G.
  • Yang, Z.
  • Huo, B.
  • Chai, S.
  • Jiang, S.
Comput Methods Programs Biomed 2021 Journal Article, cited 0 times
Website
BACKGROUND AND OBJECTIVE: Accurately and reliably defining organs at risk (OARs) and tumors is the cornerstone of radiation therapy (RT) treatment planning for lung cancer. Almost all segmentation networks based on deep learning techniques rely on fully annotated data with strong supervision. However, existing public imaging datasets encountered in the RT domain frequently include singly labelled tumors or partially labelled organs, because annotating full OARs and tumors in CT images is both rigorous and tedious. To utilize labelled data from different sources, we proposed a dual-path semi-supervised conditional nnU-Net for OAR and tumor segmentation that is trained on a union of partially labelled datasets. METHODS: The framework employs the nnU-Net as the base model and introduces a conditioning strategy by incorporating auxiliary information as an additional input layer into the decoder. The conditional nnU-Net efficiently leverages prior conditional information to classify the target class at the pixel level. Specifically, we employ the uncertainty-aware mean teacher (UA-MT) framework to assist in OAR segmentation, which can effectively leverage unlabelled data (images from a tumor-labelled dataset) by encouraging consistent predictions of the same input under different perturbations. Furthermore, we individually design different combinations of loss functions to optimize the segmentation of OARs (Dice loss and cross-entropy loss) and tumors (Dice loss and focal loss) in a dual path. RESULTS: The proposed method was evaluated on two publicly available datasets of the spinal cord, left and right lung, heart, esophagus, and lung tumor, on which satisfactory segmentation performance was achieved in terms of both the region-based Dice similarity coefficient (DSC) and the boundary-based Hausdorff distance (HD). CONCLUSIONS: The proposed semi-supervised conditional nnU-Net breaks down the barriers between non-overlapping labelled datasets and further alleviates the problem of "data hunger" and "data waste" in multi-class segmentation. The method has the potential to help radiologists with RT treatment planning in clinical practice.

AML leukocyte classification method for small samples based on ACGAN

  • Zhang, C.
  • Zhu, J.
2024 Journal Article, cited 0 times
Website
Leukemia is a class of hematologic malignancies, of which acute myeloid leukemia (AML) is the most common. Screening and diagnosis of AML are performed by microscopic examination or chemical testing of images of the patient's peripheral blood smear. In smear microscopy, the ability to quickly identify, count, and differentiate different types of blood cells is critical for disease diagnosis. With the development of deep learning (DL), classification techniques based on neural networks have been applied to the recognition of blood cells. However, DL methods have high requirements on the amount of valid data. This study aims to assess the applicability of the auxiliary classification generative adversarial network (ACGAN) to the classification of small samples of white blood cells. The method is trained on the TCIA dataset, and its classification accuracy is compared with two classical classifiers and the current state-of-the-art methods. The results are evaluated using accuracy, precision, recall, and F1 score. The accuracy of the ACGAN on the validation set is 97.1%, and the precision, recall, and F1 scores on the validation set are 97.5%, 97.3%, and 97.4%, respectively. In addition, the ACGAN scores higher than other advanced methods, indicating that it is competitive in classification accuracy.

A deep learning reconstruction framework for low dose phase contrast computed tomography via inter-contrast enhancement

  • Zhang, Changsheng
  • Zhu, Guogang
  • Fu, Jian
  • Zhao, Gang
Measurement 2023 Journal Article, cited 0 times
Website
Phase contrast computed tomography (PCCT) offers excellent imaging contrast for soft tissue while generating absorption, phase, and dark-field contrast tomographic images, and it has shown great potential in clinical diagnosis. However, existing PCCT methods require high radiation doses. Reducing the tube current is a universal low-dose approach, but it introduces quantum noise into the projections. In this paper, we report a deep learning (DL) framework for low-dose PCCT based on inter-contrast enhancement. It exploits the multi-contrast nature of PCCT and the varying effects of noise on each contrast: structure missing from the contrasts that are more affected by noise can be recovered from those that are less affected. Taking grating-based PCCT as an example, the proposed framework is validated with experiments, and a dramatic quality improvement of the multi-contrast tomographic images is obtained. This study shows the potential of DL techniques in the field of low-dose PCCT.

PanelNet: A Novel Deep Neural Network for Predicting Collective Diagnostic Ratings by a Panel of Radiologists for Pulmonary Nodules

  • Zhang, Chunyan
  • Xu, Songhua
  • Li, Zongfang
2020 Conference Paper, cited 0 times
Website
Reducing the misdiagnosis rate is a central concern in modern medicine. In clinical practice, group-based collective diagnosis is frequently exercised to curb the misdiagnosis rate. However, little effort has been dedicated to emulating the collective intelligence behind this group-based decision-making practice in computer-aided diagnosis research. To fill this gap, this study introduces a novel deep neural network, titled PanelNet, that is able to computationally model and reproduce the collective diagnosis capability demonstrated by a group of medical experts. To experimentally explore the validity of the new solution, we apply the proposed PanelNet to one of the key tasks in radiology---assessing malignancy ratings of pulmonary nodules. For each nodule and a given panel, PanelNet is able to predict the statistical distribution of malignancy ratings collectively judged by the panel of radiologists. Extensive experimental results consistently demonstrate that PanelNet outperforms multiple state-of-the-art computer-aided diagnosis methods applicable to this collective diagnostic task. To the best of our knowledge, no other collective computer-aided diagnosis method grounded in modern machine learning technologies has been previously proposed. By its design, PanelNet can also be easily applied to model collective diagnosis processes for other diseases.

A semantic fidelity interpretable-assisted decision model for lung nodule classification

  • Zhan, X.
  • Long, H.
  • Gou, F.
  • Wu, J.
Int J Comput Assist Radiol Surg 2023 Journal Article, cited 1 times
Website
PURPOSE: Early diagnosis of lung nodules is important for the treatment of lung cancer patients. Existing capsule network-based assisted diagnostic models for lung nodule classification have shown promising prospects in terms of interpretability. However, these models lack the ability to draw features robustly in shallow networks, which in turn limits their performance. Therefore, we propose a semantic fidelity capsule encoding and interpretable (SFCEI)-assisted decision model for lung nodule multi-class classification. METHODS: First, we propose a multilevel receptive field feature encoding block to capture multi-scale features of lung nodules of different sizes. Second, we embed the multilevel receptive field feature encoding blocks in the residual code-and-decode attention layer to extract fine-grained context features. Multi-scale features and contextual features are integrated to form semantic fidelity lung nodule attribute capsule representations, which consequently enhances the performance of the model. RESULTS: We implemented comprehensive experiments on the LIDC-IDRI dataset to validate the superiority of the model. The stratified fivefold cross-validation results show that the accuracy (94.17%) of our method exceeds existing advanced approaches in the multi-class classification of malignancy scores for lung nodules. CONCLUSION: The experiments confirm that the proposed methodology can effectively capture the multi-scale and contextual features of lung nodules. It enhances the feature extraction capability of the shallow structures in capsule networks, which in turn improves the classification performance on malignancy scores. The interpretable model can support physicians' confidence in clinical decision-making.

Convection enhanced delivery of anti-angiogenic and cytotoxic agents in combination therapy against brain tumour

  • Zhan, W.
Eur J Pharm Sci 2020 Journal Article, cited 0 times
Website
Convection enhanced delivery is an effective alternative to routine delivery methods for overcoming the blood brain barrier. However, its treatment efficacy remains disappointing in the clinic owing to the rapid drug elimination in tumour tissue. In this study, multiphysics modelling is employed to investigate the combined delivery of anti-angiogenic and cytotoxic drugs from the perspective of intratumoural transport. Simulations are based on a 3-D realistic brain tumour model that is reconstructed from patient magnetic resonance images. The tumour microvasculature is targeted by bevacizumab, and six cytotoxic drugs are included: doxorubicin, carmustine, cisplatin, fluorouracil, methotrexate and paclitaxel. The treatment efficacy is evaluated in terms of the distribution volume where the drug concentration is above the corresponding LD90. Results demonstrate that the infusion of bevacizumab can slightly improve interstitial fluid flow, but is markedly effective in reducing the fluid loss from the blood circulatory system and thereby inhibiting concentration dilution. As the transport of bevacizumab is dominated by convection, its spatial distribution and anti-angiogenic effectiveness are highly sensitive to the directional interstitial fluid flow. Infusing bevacizumab can enhance the delivery outcomes of all six drugs; however, the degree of enhancement differs. The delivery of doxorubicin is improved the most, whereas the impact on methotrexate and paclitaxel is limited. Fluorouracil covers a distribution volume comparable to that of paclitaxel in the combination therapy for effective cell killing. The results obtained in this study can serve as a guide for the design of this co-delivery treatment.

Effects of Focused-Ultrasound-and-Microbubble-Induced Blood-Brain Barrier Disruption on Drug Transport under Liposome-Mediated Delivery in Brain Tumour: A Pilot Numerical Simulation Study

  • Zhan, Wenbo
Pharmaceutics 2020 Journal Article, cited 0 times
Website

A multimodal radiomic machine learning approach to predict the LCK expression and clinical prognosis in high-grade serous ovarian cancer

  • Zhan, F.
  • He, L.
  • Yu, Y.
  • Chen, Q.
  • Guo, Y.
  • Wang, L.
2023 Journal Article, cited 0 times
Website
We developed and validated a multimodal radiomic machine learning approach to noninvasively predict the expression of lymphocyte cell-specific protein-tyrosine kinase (LCK) and the clinical prognosis of patients with high-grade serous ovarian cancer (HGSOC). We analyzed gene enrichment using 343 HGSOC cases extracted from The Cancer Genome Atlas. The corresponding biomedical computed tomography images accessed from The Cancer Imaging Archive were used to construct the radiomic signature (Radscore). A radiomic nomogram was built by combining the Radscore and clinical and genetic information based on multimodal analysis. We compared the model performances and clinical practicability via area under the curve (AUC), Kaplan-Meier survival, and decision curve analyses. LCK mRNA expression was associated with the prognosis of HGSOC patients, serving as a significant prognostic marker of the immune response and immune cell infiltration. Six radiomic characteristics were chosen to predict the expression of LCK and overall survival (OS) in HGSOC patients. The logistic regression (LR) radiomic model exhibited slightly better predictive abilities than the support vector machine model, as assessed by comparing combined results. With five-fold cross-validation, the LR radiomic model achieved AUCs of 0.879 and 0.834 for predicting the level of LCK expression in the training and validation sets, respectively. Decision curve analysis at 60 months demonstrated the high clinical utility of our model within thresholds of 0.25 and 0.7. The radiomic nomograms were robust and displayed effective calibration. Abnormally high expression of LCK in HGSOC patients is significantly correlated with the tumor immune microenvironment and can be used as an essential indicator for predicting the prognosis of HGSOC. The multimodal radiomic machine learning approach can capture the heterogeneity of HGSOC, noninvasively predict the expression of LCK, and substitute for LCK in predictive analysis, providing a new idea for predicting the clinical prognosis of HGSOC and formulating personalized treatment plans.
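
A minimal sketch of the reported logistic-regression step with five-fold cross-validated AUC, using scikit-learn; the file names and feature matrix are placeholders, not the study's data.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X = np.load("radscore_features.npy")    # (n_patients, 6) selected radiomic features
    y = np.load("lck_high_expression.npy")  # binary LCK expression label

    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    aucs = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"mean five-fold AUC: {aucs.mean():.3f}")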

The prognostic value of CT radiomic features from primary tumours and pathological lymph nodes in head and neck cancer patients

  • Zhai, Tiantian
2020 Thesis, cited 0 times
Website
Head and neck cancer (HNC) is responsible for about 0.83 million new cancer cases and 0.43 million cancer deaths worldwide every year. Around 30%-50% of patients with locally advanced HNC experience treatment failures, predominantly occurring at the site of the primary tumor, followed by regional failures and distant metastases. In order to optimize treatment strategy, the overall aim of this thesis is to identify the patients who are at high risk of treatment failures. We developed and externally validated a series of models on the different patterns of failure to predict the risk of local failures, regional failures, distant metastases and individual nodal failures in HNC patients. A new type of radiomic feature based on the CT image was included in our modelling analysis, and we showed for the first time that the radiomic features significantly improved the prognostic performance of the models containing clinical factors. Our studies provide clinicians with new tools to predict the risk of treatment failures. This may support optimization of the treatment strategy for this disease, and subsequently improve patient survival rates.

A lossless DWT-SVD domain watermarking for medical information security

  • Zermi, N.
  • Khaldi, A.
  • Kafi, M. R.
  • Kahlessenane, F.
  • Euschi, S.
Multimed Tools Appl 2021 Journal Article, cited 0 times
Website
The goal of this work is to protect, as far as possible, the images exchanged in telemedicine and to avoid any confusion between patients' radiographs. To this end, the images are watermarked with the patient's information as well as the acquisition data, so that during extraction the doctor can affirm with certainty that the images belong to the treated patient. The ultimate goal is to integrate the watermark with as little distortion as possible so as to retain the medical information in the image. In this approach, a DWT decomposition is applied to the image, which allows a satisfactory adjustment during the insertion. An SVD is then applied to the three subbands LL, LH and HL, which retains the maximum energy of the image in a minimal number of singular values. A specific combination of the three resulting singular value matrices is then performed for watermark integration. The proposed approach ensures data integrity, patient confidentiality when sharing data, and robustness to several conventional attacks.
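
A simplified sketch of the embedding idea, assuming pywt and NumPy: one-level Haar DWT, SVD on a subband, and additive embedding of the watermark's singular values. The paper combines the LL, LH and HL subbands; this sketch embeds in LL only, and the watermark is assumed pre-resized to the LL subband shape.

    import numpy as np
    import pywt

    def embed(image, watermark, alpha=0.05):
        LL, (LH, HL, HH) = pywt.dwt2(image.astype(float), "haar")
        U, S, Vt = np.linalg.svd(LL, full_matrices=False)
        Uw, Sw, Vtw = np.linalg.svd(watermark.astype(float), full_matrices=False)
        S_marked = S + alpha * Sw[: S.size]     # embed in the singular values
        LL_marked = (U * S_marked) @ Vt         # rebuild the LL subband
        return pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")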

Blockchain for Privacy Preserving and Trustworthy Distributed Machine Learning in Multicentric Medical Imaging (C-DistriM)

  • Zerka, Fadila
  • Urovi, Visara
  • Vaidyanathan, Akshayaa
  • Barakat, Samir
  • Leijenaar, Ralph T. H.
  • Walsh, Sean
  • Gabrani-Juma, Hanif
  • Miraglio, Benjamin
  • Woodruff, Henry C.
  • Dumontier, Michel
  • Lambin, Philippe
IEEE Access 2020 Journal Article, cited 0 times
The utility of Artificial Intelligence (AI) in healthcare strongly depends upon the quality of the data used to build models, and the confidence in the predictions they generate. Access to sufficient amounts of high-quality data to build accurate and reliable models remains problematic owing to substantive legal and ethical constraints in making clinically relevant research data available offsite. New technologies such as distributed learning offer a pathway forward, but unfortunately tend to suffer from a lack of transparency, which undermines trust in what data are used for the analysis. To address such issues, we hypothesized that a novel distributed learning approach combining sequential distributed learning with a blockchain-based platform, namely Chained Distributed Machine Learning (C-DistriM), would be feasible and would give results similar to a standard centralized approach. C-DistriM enables health centers to dynamically participate in training distributed learning models. We demonstrate C-DistriM using the NSCLC-Radiomics open data to predict two-year lung-cancer survival. A comparison of the performance of this distributed solution, evaluated in six different scenarios, and the centralized approach showed no statistically significant difference (AUCs between central and distributed models); all DeLong tests yielded p-values > 0.05. This methodology removes the need to blindly trust the computation on one specific server in a distributed learning network. This fusion of blockchain and distributed learning serves as a proof-of-concept to increase transparency, trust, and ultimately accelerate the adoption of AI in multicentric studies. We conclude that our blockchain-based model for sequential training on distributed datasets is a feasible approach and provides performance equivalent to the centralized approach.

Privacy preserving distributed learning classifiers - Sequential learning with small sets of data

  • Zerka, F.
  • Urovi, V.
  • Bottari, F.
  • Leijenaar, R. T. H.
  • Walsh, S.
  • Gabrani-Juma, H.
  • Gueuning, M.
  • Vaidyanathan, A.
  • Vos, W.
  • Occhipinti, M.
  • Woodruff, H. C.
  • Dumontier, M.
  • Lambin, P.
Comput Biol Med 2021 Journal Article, cited 0 times
Website
BACKGROUND: Artificial intelligence (AI) typically requires a significant amount of high-quality data to build reliable models, and gathering enough data within a single institution can be particularly challenging. In this study we investigated the impact of using sequential learning to exploit very small, siloed sets of clinical and imaging data to train AI models. Furthermore, we evaluated the capacity of such models to achieve equivalent performance when compared to models trained with the same data over a single centralized database. METHODS: We propose a privacy-preserving distributed learning framework that learns sequentially from each dataset. The framework is applied to three machine learning algorithms: Logistic Regression, Support Vector Machines (SVM), and Perceptron. The models were evaluated using four open-source datasets (Breast cancer, Indian liver, NSCLC-Radiomics dataset, and Stage III NSCLC). FINDINGS: The proposed framework ensured a comparable predictive performance against a centralized learning approach. Pairwise DeLong tests showed no significant difference between the compared pairs for each dataset. INTERPRETATION: Distributed learning helps preserve medical data privacy. We foresee this technology will increase the number of collaborative opportunities to develop robust AI, becoming the default solution in scenarios where collecting enough data from a single reliable source is logistically impossible. Distributed sequential learning provides a privacy-preserving means for institutions with small but clinically valuable datasets to collaboratively train predictive AI while preserving the privacy of their patients. Such models perform similarly to models that are built on a larger central dataset.
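
A minimal sketch of the sequential, institution-by-institution training idea using scikit-learn's partial_fit, so the model visits each silo without the data ever being pooled; the silo file names are placeholders.

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    silos = ["center_a.npz", "center_b.npz", "center_c.npz"]
    clf = SGDClassifier(loss="log_loss")   # logistic regression trained by SGD
    classes = np.array([0, 1])

    for path in silos:                     # the model travels; the data stays put
        data = np.load(path)
        clf.partial_fit(data["X"], data["y"], classes=classes)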

Starlight: A kernel optimizer for GPU processing

  • Zeni, Alberto
  • Del Sozzo, Emanuele
  • D'Arnese, Eleonora
  • Conficconi, Davide
  • Santambrogio, Marco D.
2024 Journal Article, cited 0 times
Website
Over the past few years, GPUs have found widespread adoption in many scientific domains, offering notable performance and energy efficiency advantages compared to CPUs. However, optimizing high-performance GPU kernels poses challenges given the complexities of GPU architectures and programming models. Moreover, current GPU development tools provide few high-level suggestions and overlook the underlying hardware. Here we present Starlight, an open-source, highly flexible tool for enhancing GPU kernel analysis and optimization. Starlight autonomously describes Roofline Models, examines performance metrics, and correlates these insights with GPU architectural bottlenecks. Additionally, Starlight predicts potential performance enhancements before the source code is altered. We demonstrate its efficacy by applying it to genomics and physics applications from the literature, attaining speedups from 1.1× to 2.5× over state-of-the-art baselines. Furthermore, Starlight supports the development of new GPU kernels, which we exemplify through an image processing application, showing speedups of 12.7× and 140× when compared against state-of-the-art FPGA- and GPU-based solutions.
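
A small sketch of the roofline arithmetic that tools like Starlight automate: attainable throughput is the minimum of peak compute and what the memory bandwidth can feed at a kernel's arithmetic intensity. The hardware numbers below are illustrative, not measured by Starlight.

    def attainable_gflops(intensity_flop_per_byte, peak_gflops, bandwidth_gbs):
        # Roofline model: compute-bound above the ridge point, memory-bound below.
        return min(peak_gflops, intensity_flop_per_byte * bandwidth_gbs)

    peak, bw = 19500.0, 1555.0   # illustrative A100-class peak FP16 GFLOP/s and GB/s
    for ai in (0.5, 4.0, 32.0):
        print(f"AI={ai:5.1f} flop/byte -> {attainable_gflops(ai, peak, bw):8.1f} GFLOP/s")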

An Attention Based Deep Learning Model for Direct Estimation of Pharmacokinetic Maps from DCE-MRI Images

  • Zeng, Qingyuan
  • Zhou, Wu
2021 Conference Paper, cited 0 times
Website
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a useful imaging technique that can quantitatively measure pharmacokinetic (PK) parameters to characterize the microvasculature of tissues. Typically, the PK parameters are extracted by fitting the MR signal intensity time series of each pixel with a nonlinear least-squares method. The main disadvantage is that there are thousands of voxels in a single MR slice, and the time required to fit the voxels to obtain the PK parameters is very large. Recently, deep learning methods based on convolutional neural networks (CNNs) and Long Short-Term Memory (LSTM) networks have been applied to directly estimate the PK parameters from the acquired DCE-MRI image-temporal series. However, how to effectively extract discriminative spatial and temporal features within DCE-MRI for the estimation of PK parameters is still a challenging problem, due to the large intensity variation of tissue images in different temporal phases of DCE-MRI during the injection of contrast agents. In this work, we propose an attention-based deep learning model for the estimation of PK parameters, which improves estimation performance by focusing on dominant spatial and temporal characteristics. Specifically, a temporal frame attention block (FAB) and a channel/spatial attention block (CSAB) are separately designed to focus on dominant features in specific temporal phases, channels and spatial areas for better estimation. Experimental results on clinical DCE-MRI from the open-source RIDER-NEURO dataset, with quantitative and qualitative evaluation, demonstrate that the proposed method outperforms previously reported CNN-based and LSTM-based deep learning models for the estimation of PK maps, and an ablation study also demonstrates the effectiveness of the proposed attention-based modules. In addition, the visualization of the attention mechanism reveals interesting findings that are consistent with clinical interpretation.
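
A sketch of the conventional voxelwise fitting the paper contrasts with: nonlinear least squares of the standard Tofts model for a single voxel, via scipy.optimize.curve_fit. The time grid and arterial input function are synthetic stand-ins; repeating this over thousands of voxels is what makes the classical approach slow.

    import numpy as np
    from scipy.optimize import curve_fit

    t = np.linspace(0, 5, 60)          # minutes, illustrative sampling grid
    dt = t[1] - t[0]
    Cp = np.exp(-t) * t ** 2           # toy arterial input function

    def tofts(t, ktrans, ve):
        # Standard Tofts: Ct(t) = Ktrans * conv(Cp, exp(-(Ktrans/ve) * t))
        irf = np.exp(-(ktrans / ve) * t)
        return ktrans * np.convolve(Cp, irf)[: t.size] * dt

    ct_voxel = tofts(t, 0.25, 0.4) + 0.01 * np.random.randn(t.size)  # one noisy voxel
    (ktrans_hat, ve_hat), _ = curve_fit(tofts, t, ct_voxel, p0=(0.1, 0.3),
                                        bounds=([0, 0.01], [5, 1]))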

Segmentation of gliomas in pre-operative and post-operative multimodal magnetic resonance imaging volumes based on a hybrid generative-discriminative framework

  • Zeng, Ke
  • Bakas, Spyridon
  • Sotiras, Aristeidis
  • Akbari, Hamed
  • Rozycki, Martin
  • Rathore, Saima
  • Pati, Sarthak
  • Davatzikos, Christos
2016 Conference Proceedings, cited 8 times
Website

Ensemble CNN Networks for GBM Tumors Segmentation Using Multi-parametric MRI

  • Zeineldin, Ramy A.
  • Karar, Mohamed E.
  • Mathis-Ullrich, Franziska
  • Burgert, Oliver
2022 Book Section, cited 0 times
Glioblastomas are the most aggressive fast-growing primary brain cancers, which originate in the glial cells of the brain. Accurate identification of the malignant brain tumor and its sub-regions is still one of the most challenging problems in medical image segmentation. The Brain Tumor Segmentation Challenge (BraTS) has been a popular benchmark for automatic brain glioblastoma segmentation algorithms since its initiation. This year, the BraTS 2021 challenge provided the largest multi-parametric MRI (mpMRI) dataset, comprising 2,000 pre-operative patients. In this paper, we propose a new aggregation of two deep learning frameworks, namely DeepSeg and nnU-Net, for automatic glioblastoma recognition in pre-operative mpMRI. Our ensemble method obtains Dice similarity scores of 92.00, 87.33, and 84.10 and Hausdorff distances of 3.81, 8.91, and 16.02 for the enhancing tumor, tumor core, and whole tumor regions, respectively, on the BraTS 2021 validation set, ranking us among the top ten teams. These experimental findings provide evidence that it can be readily applied clinically, thereby aiding in brain cancer prognosis, therapy planning, and therapy response monitoring. A docker image for reproducing our segmentation results is available online at (https://hub.docker.com/r/razeineldin/deepseg21).
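
A minimal sketch of the ensembling step, assuming each framework exports per-class softmax probability maps: average the maps and take the voxelwise argmax. The file names are placeholders.

    import numpy as np

    probs_deepseg = np.load("deepseg_softmax.npy")  # (C, D, H, W) class probabilities
    probs_nnunet = np.load("nnunet_softmax.npy")    # (C, D, H, W)
    labels = np.argmax((probs_deepseg + probs_nnunet) / 2.0, axis=0)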

Statistical Analysis of Haralick Texture Features to Discriminate Lung Abnormalities

  • Zayed, Nourhan
  • Elnemr, Heba A
International Journal of Biomedical Imaging 2015 Journal Article, cited 30 times
Website
The Haralick texture features are a well-known mathematical method to detect lung abnormalities and give the physician the opportunity to localize the abnormal tissue type, either lung tumor or pulmonary edema. In this paper, statistical evaluation of the different features represents the reported performance of the proposed method. Thirty-seven patients' CT datasets with either lung tumor or pulmonary edema were included in this study. The CT images are first preprocessed for noise reduction and image enhancement, followed by segmentation techniques to segment the lungs, and finally Haralick texture features to detect the type of abnormality within the lungs. In spite of the low contrast and high noise in the images, the proposed algorithms achieve promising results in detecting lung abnormalities in most of the patients in comparison with normal cases, and suggest that some of the features are significantly more discriminative than others.
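
A sketch of GLCM-based Haralick-type feature extraction with scikit-image; the ROI and parameter choices here are illustrative rather than the paper's exact settings.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    roi = np.random.randint(0, 64, (64, 64)).astype(np.uint8)  # placeholder lung ROI
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=64, symmetric=True, normed=True)
    features = {p: graycoprops(glcm, p).mean()
                for p in ("contrast", "homogeneity", "energy", "correlation")}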

Brain tumor detection based on Naïve Bayes Classification

  • Zaw, Hein Tun
  • Maneerat, Noppadol
  • Win, Khin Yadanar
2019 Conference Paper, cited 2 times
Website
Brain cancer is caused by a population of abnormal cells called glial cells in the brain. Over the years, the number of patients with brain cancer has been increasing with the aging population, making it a worldwide health problem. The objective of this paper is to develop a method to detect the brain tissues affected by cancer, especially grade-4 tumors, Glioblastoma multiforme (GBM). GBM is one of the most malignant cancerous brain tumors, as it is fast growing and more likely to spread to other parts of the brain. In this paper, Naïve Bayes classification is utilized for accurate recognition of a tumor region that contains all spreading cancerous tissues. A brain MRI database, preprocessing, morphological operations, pixel subtraction, maximum entropy thresholding, statistical feature extraction, and a Naïve Bayes classifier-based prediction algorithm are used in this research. The goal of this method is to detect the tumor area in different brain MRI images and to predict whether the detected area is a tumor or not. When compared to other methods, this method can properly detect tumors located in different regions of the brain, including the middle region (aligned with eye level), which is a significant advantage of this method. When tested on 50 MRI images, this method achieves an 81.25% detection rate on tumor images and a 100% detection rate on non-tumor images, with an overall accuracy of 94%.
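
A minimal sketch of the final classification stage, assuming statistical features have already been extracted per candidate region: a Gaussian naive Bayes classifier from scikit-learn. The file names are placeholders.

    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    X_train = np.load("region_features_train.npy")  # e.g. mean, variance, entropy per region
    y_train = np.load("labels_train.npy")           # 1 = tumor, 0 = non-tumor

    nb = GaussianNB().fit(X_train, y_train)
    is_tumor = nb.predict(np.load("region_features_test.npy"))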

Noise Reduction in CT Using Learned Wavelet-Frame Shrinkage Networks

  • Zavala-Mondragon, L. A.
  • Rongen, P.
  • Bescos, J. O.
  • de With, P. H. N.
  • van der Sommen, F.
IEEE Trans Med Imaging 2022 Journal Article, cited 0 times
Website
Encoding-decoding (ED) CNNs have demonstrated state-of-the-art performance for noise reduction over the past years. This has triggered the pursuit of better understanding the inner workings of such architectures, which has led to the theory of deep convolutional framelets (TDCF), revealing important links between signal processing and CNNs. Specifically, the TDCF demonstrates that ReLU CNNs induce low-rankness, since these models often do not satisfy the redundancy necessary to achieve perfect reconstruction (PR). In contrast, this paper explores CNNs that do meet the PR conditions. We demonstrate that in these types of CNNs soft shrinkage and PR can be assumed. Furthermore, based on our explorations we propose the learned wavelet-frame shrinkage network, or LWFSN, and its residual counterpart, the rLWFSN. The ED path of the (r)LWFSN complies with the PR conditions, while the shrinkage stage is based on the linear expansion of thresholds proposed by Blu and Luisier. In addition, the LWFSN has only a fraction of the training parameters (<1%) of conventional CNNs, very small inference times, and a low memory footprint, while still achieving performance close to state-of-the-art alternatives, such as the tight frame (TF) U-Net and FBPConvNet, in low-dose CT denoising.
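
A sketch of the classical operation the LWFSN learns end-to-end: wavelet-domain soft shrinkage of a noisy image, here with pywt and a hand-picked threshold in place of the network's learned ones.

    import numpy as np
    import pywt

    def wavelet_soft_denoise(img, wavelet="db2", level=3, thresh=20.0):
        coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
        shrunk = [coeffs[0]] + [
            tuple(pywt.threshold(d, thresh, mode="soft") for d in detail)
            for detail in coeffs[1:]                 # shrink only detail subbands
        ]
        return pywt.waverec2(shrunk, wavelet)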

Region-adaptive magnetic resonance image enhancement for improving CNN-based segmentation of the prostate and prostatic zones

  • Zaridis, D. I.
  • Mylona, E.
  • Tachos, N.
  • Pezoulas, V. C.
  • Grigoriadis, G.
  • Tsiknakis, N.
  • Marias, K.
  • Tsiknakis, M.
  • Fotiadis, D. I.
2023 Journal Article, cited 0 times
Website
Automatic segmentation of the prostate and the prostatic zones on MRI remains one of the most compelling research areas. While different image enhancement techniques are emerging as powerful tools for improving the performance of segmentation algorithms, their application still lacks consensus due to contrasting evidence regarding performance improvement and cross-model stability, further hampered by the inability to explain models' predictions. In particular, for prostate segmentation, the effectiveness of image enhancement on different Convolutional Neural Networks (CNNs) remains largely unexplored. The present work introduces a novel image enhancement method, named RACLAHE, to enhance the performance of CNN models for segmenting the prostate gland and the prostatic zones. The improvement in performance and consistency across five CNN models (U-Net, U-Net++, U-Net3+, ResU-net and USE-NET) is compared against four popular image enhancement methods. Additionally, a methodology is proposed to explain, both quantitatively and qualitatively, the relation between saliency maps and ground truth probability maps. Overall, RACLAHE was the most consistent image enhancement algorithm in terms of performance improvement across CNN models, with the mean increase in Dice Score ranging from 3% to 9% for the different prostatic regions, while achieving minimal inter-model variability. The integration of a feature-driven methodology to explain the predictions after applying image enhancement methods enables the development of a concrete, trustworthy automated pipeline for prostate segmentation on MR images.
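
For orientation, a sketch of the non-adaptive CLAHE baseline that RACLAHE extends, using OpenCV on an MR slice rescaled to 8-bit; RACLAHE's region-adaptive logic is the paper's contribution and is not reproduced here.

    import cv2
    import numpy as np

    slice8 = cv2.normalize(np.load("t2_slice.npy"), None, 0, 255,
                           cv2.NORM_MINMAX).astype(np.uint8)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(slice8)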

A Deep Learning-based cropping technique to improve segmentation of prostate's peripheral zone

  • Zaridis, Dimitris
  • Mylona, Eugenia
  • Tachos, Nikolaos
  • Marias, Kostas
  • Tsiknakis, Manolis
  • Fotiadis, Dimitios I.
2021 Conference Paper, cited 0 times
Automatic segmentation of the prostate peripheral zone on Magnetic Resonance Images (MRI) is a necessary but challenging step for accurate prostate cancer diagnosis. Deep learning (DL) based methods, such as U-Net, have recently been developed to segment the prostate and its sub-regions. Nevertheless, the presence of class imbalance in the image labels, where the background pixels dominate over the region to be segmented, may severely hamper the segmentation performance. In the present work, we propose a DL-based preprocessing pipeline for segmenting the peripheral zone of the prostate by cropping away unnecessary information without making a priori assumptions regarding the location of the region of interest. The effect of DL-cropping on segmentation performance was compared to standard center-cropping using three state-of-the-art DL networks, namely U-net, Bridged U-net and Dense U-net. The proposed method achieved an improvement of 24%, 12% and 15% for the U-net, Bridged U-net and Dense U-net, respectively, in terms of Dice score.

MuSA: a graphical user interface for multi-OMICs data integration in radiogenomic studies

  • Zanfardino, Mario
  • Castaldo, Rossana
  • Pane, Katia
  • Affinito, Ornella
  • Aiello, Marco
  • Salvatore, Marco
  • Franzese, Monica
Scientific Reports 2021 Journal Article, cited 0 times
Website

A functional artificial neural network for noninvasive pretreatment evaluation of glioblastoma patients

  • Zander, E.
  • Ardeleanu, A.
  • Singleton, R.
  • Bede, B.
  • Wu, Y.
  • Zheng, S.
Neurooncol Adv 2022 Journal Article, cited 0 times
Website
Background: Pretreatment assessments for glioblastoma (GBM) patients, especially elderly or frail patients, are critical for treatment planning. However, genetic profiling with intracranial biopsy carries a significant risk of permanent morbidity. We previously demonstrated that the CUL2 gene, encoding the scaffold cullin2 protein in the cullin2-RING E3 ligase (CRL2), can predict GBM radiosensitivity and prognosis. CUL2 expression levels are closely correlated with its copy number variations (CNVs). This study aims to develop artificial neural networks (ANNs) for pretreatment evaluation of GBM patients with inputs obtainable without intracranial surgical biopsies. Methods: Public datasets including Ivy-GAP, The Cancer Genome Atlas Glioblastoma (TCGA-GBM), and the Chinese Glioma Genome Atlas (CGGA) were used for training and testing of the ANNs. T1 images from corresponding cases were studied using automated segmentation for features of heterogeneity and tumor edge contouring. A ratio comparing the surface area of tumor borders versus the total volume (SvV) was derived from the DICOM-SEG conversions of segmented tumors. The edges of these borders were detected using the Canny edge detector. Packages including Keras, Pytorch, and TensorFlow were tested to build the ANNs. After extensive testing, a 4-layered ANN (8-8-8-2) with a binary output gave optimal performance. Results: The 4-layered deep learning ANN can identify a GBM patient's overall survival (OS) cohort with 80%-85% accuracy. The ANN requires 4 inputs: CUL2 copy number, the patient's age at GBM diagnosis, Karnofsky Performance Scale (KPS), and SvV ratio. Conclusion: Quantifiable image features can significantly improve the ability of ANNs to identify a GBM patient's survival cohort. Features such as clinical measures, genetic data, and image data can be integrated into a single ANN for GBM pretreatment evaluation.
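
A sketch of the described 4-layered (8-8-8-2) network with its four inputs (CUL2 copy number, age at diagnosis, KPS, and SvV ratio), here in Keras; the activations and optimizer are assumptions, as the abstract does not specify them.

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(4,)),               # CUL2 CNV, age, KPS, SvV ratio
        tf.keras.layers.Dense(8, activation="relu"),
        tf.keras.layers.Dense(8, activation="relu"),
        tf.keras.layers.Dense(8, activation="relu"),
        tf.keras.layers.Dense(2, activation="softmax"),  # binary OS cohort
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])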

Brain Tumor Detection and Classification Using Deep Learning and Sine-Cosine Fitness Grey Wolf Optimization

  • ZainEldin, Hanaa
  • Gamel, Samah A.
  • El-Kenawy, El-Sayed M.
  • Alharbi, Amal H.
  • Khafaga, Doaa Sami
  • Ibrahim, Abdelhameed
  • Talaat, Fatma M.
2023 Journal Article, cited 0 times
Diagnosing a brain tumor takes a long time and relies heavily on the radiologist's abilities and experience. The amount of data that must be handled has increased dramatically as the number of patients has increased, making old procedures both costly and ineffective. Many researchers have investigated a variety of algorithms for detecting and classifying brain tumors that are both accurate and fast. Deep Learning (DL) approaches have recently become popular in developing automated systems capable of accurately diagnosing or segmenting brain tumors in less time. DL enables the use of a pre-trained Convolutional Neural Network (CNN) model for medical images, specifically for classifying brain cancers. The proposed Brain Tumor Classification Model based on CNN (BCM-CNN) applies CNN hyperparameter optimization using an adaptive dynamic sine-cosine fitness grey wolf optimizer (ADSCFGWO) algorithm. Hyperparameter optimization is followed by training a model built with Inception-ResnetV2. The model employs a commonly used pre-trained network (Inception-ResnetV2) to improve brain tumor diagnosis, and its output is binary (0: Normal, 1: Tumor). There are primarily two types of hyperparameters: (i) hyperparameters that determine the underlying network structure; and (ii) hyperparameters that are responsible for training the network. The ADSCFGWO algorithm draws from both the sine-cosine and grey wolf algorithms in an adaptable framework that uses both algorithms' strengths. The experimental results show that the BCM-CNN classifier achieved the best results due to the enhancement of the CNN's performance by the hyperparameter optimization. The BCM-CNN achieved 99.98% accuracy on the BRaTS 2021 Task 1 dataset.

Predictive Modeling for Voxel-Based Quantification of Imaging-Based Subtypes of Pancreatic Ductal Adenocarcinoma (PDAC): A Multi-Institutional Study

  • Zaid, Mohamed
  • Widmann, Lauren
  • Dai, Annie
  • Sun, Kevin
  • Zhang, Jie
  • Zhao, Jun
  • Hurd, Mark W
  • Varadhachary, Gauri R
  • Wolff, Robert A
  • Maitra, Anirban
Cancers 2020 Journal Article, cited 0 times
Website

AUTOMATIC KIDNEY SEGMENTATION, RECONSTRUCTION, PREOPERATIVE PLANNING, AND 3D PRINTING

  • ZAGKOU, SPYRIDOULA
2021 Thesis, cited 0 times
Website
Renal cancer is the seventh most prevalent cancer among men and the tenth most frequent cancer among women, accounting for 5% and 3% of all adult malignancies, respectively. Kidney cancer is increasing dramatically in developing countries due to inadequate living conditions, and in developed countries due to unhealthy lifestyles, smoking, obesity, and hypertension. For decades, radical nephrectomy (RN) was the standard method to address the high incidence of kidney cancer. However, the utilization of minimally invasive partial nephrectomy (PN) for the treatment of localized small renal masses has increased with the advent of laparoscopic and robotic-assisted procedures. In this framework, certain factors must be considered in the surgical planning and decision-making of partial nephrectomies, such as the morphology and location of the tumor. Advanced technologies such as automatic image segmentation, image and surface reconstruction, and 3D printing have been developed to assess the tumor anatomy before surgery and its relationship to surrounding structures, such as the arteriovenous system, with the aim of preventing damage. Overall, 3D printed anatomical kidney models are very useful to urologists, surgeons, and researchers as a reference for preoperative planning and intraoperative visualization, enabling more efficient treatment and a high standard of care. Furthermore, they can provide considerable benefit in education, in patient counseling, and in delivering therapeutic methods customized to the needs of each individual patient. In this context, the fundamental objective of this thesis is to provide an analytical and general pipeline for the generation of a renal 3D printed model from CT images. In addition, methods are proposed to enhance preoperative planning and help surgeons prepare the surgical procedure with increased accuracy so as to improve their performance. Keywords: Medical Image, Computed Tomography (CT), Semantic Segmentation, Convolutional Neural Networks (CNNs), Surface Reconstruction, Mesh Processing, 3D Printing of Kidney, Operative assistance

Effect of color visualization and display hardware on the visual assessment of pseudocolor medical images

  • Zabala-Travers, Silvina
  • Choi, Mina
  • Cheng, Wei-Chung
  • Badano, Aldo
Medical Physics 2015 Journal Article, cited 4 times
Website
PURPOSE: Even though the use of color in the interpretation of medical images has increased significantly in recent years, the ad hoc manner in which color is handled and the lack of standard approaches have been associated with suboptimal and inconsistent diagnostic decisions with a negative impact on patient treatment and prognosis. The purpose of this study is to determine if the choice of color scale and display device hardware affects the visual assessment of patterns that have the characteristics of functional medical images. METHODS: Perfusion magnetic resonance imaging (MRI) was the basis for designing and performing experiments. Synthetic images resembling brain dynamic-contrast enhanced MRI consisting of scaled mixtures of white, lumpy, and clustered backgrounds were used to assess the performance of a rainbow ("jet"), a heated black-body ("hot"), and a gray ("gray") color scale with display devices of different quality on the detection of small changes in color intensity. The authors used a two-alternative, forced-choice design where readers were presented with 600 pairs of images. Each pair consisted of two images of the same pattern flipped along the vertical axis with a small difference in intensity. Readers were asked to select the image with the highest intensity. Three differences in intensity were tested on four display devices: a medical-grade three-million-pixel display, a consumer-grade monitor, a tablet device, and a phone. RESULTS: The estimates of percent correct show that jet outperformed hot and gray in the high and low range of the color scales for all devices with a maximum difference in performance of 18% (confidence intervals: 6%, 30%). Performance with hot was different for high and low intensity, comparable to jet for the high range, and worse than gray for lower intensity values. Similar performance was seen between devices using jet and hot, while gray performance was better for handheld devices. Time of performance was shorter with jet. CONCLUSIONS: Our findings demonstrate that the choice of color scale and display hardware affects the visual comparative analysis of pseudocolor images. Follow-up studies in clinical settings are being considered to confirm the results with patient images.

Radiomics study of lung tumor volume segmentation technique in contrast-enhanced Computed Tomography (CT) thorax images: A comparative study

  • Yunus, Mardhiyati Mohd
  • Sin, Ng Hui
  • Sabarudin, Akmal
  • Karim, Muhammad Khalis Abdul
  • Kechik, Mohd Mustafa Awang
  • Razali, Rosmizan Ahmad
  • Shamsul, Mohd Shahril Mohd
2023 Conference Paper, cited 0 times
Website
Medical image segmentation is crucial for extracting information regarding tumour characteristics, including in lung cancer. To obtain macroscopic information (tumour volume) and microscopic features (radiomics study), an image segmentation process is required. Various kinds of advanced segmentation algorithms are available nowadays, yet there is no single 'best segmentation technique' for medical imaging modalities. This study compared manual slice-by-slice segmentation and semi-automated segmentation of lung tumour volume measurement, together with radiomics features of shape analysis and first-order statistical measures of texture analysis. Manual slice-by-slice delineation and region-growing semi-automated segmentation using 3D Slicer software were performed on 45 sets of contrast-enhanced Computed Tomography (CT) thorax images downloaded from The Cancer Imaging Archive (TCIA). The results showed that the manual and semi-automated segmentations have high similarity, with an average Hausdorff distance (AHD) of 1.02 ± 0.71 mm, a high Dice similarity coefficient (DSC) of 0.83 ± 0.05, and p = 0.997 (p > 0.05). Overall, 84.62% of the features under shape analysis and 33.33% of the first-order statistical measures of texture analysis showed no significant difference between the two segmentation methods. In conclusion, semi-automated segmentation can perform as well as manual segmentation in lung tumour volume measurement, especially in terms of the ability to extract the shape order features in lung tumour radiomics analysis.
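
A minimal sketch of the region-overlap metric reported above, the Dice similarity coefficient, computed on binary masks with NumPy; the mask file names are placeholders.

    import numpy as np

    def dice(a, b):
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    manual = np.load("manual_mask.npy")          # manual slice-by-slice mask
    semi = np.load("region_growing_mask.npy")    # semi-automated mask
    print(f"DSC = {dice(manual, semi):.2f}")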

Evaluating Scale Attention Network for Automatic Brain Tumor Segmentation with Large Multi-parametric MRI Database

  • Yuan, Yading
2022 Book Section, cited 0 times
Automatic segmentation of brain tumors is an essential but challenging step for extracting quantitative imaging biomarkers for accurate tumor detection, diagnosis, prognosis, treatment planning and assessment. This is the 10th year of the Brain Tumor Segmentation (BraTS) Challenge, which utilizes multi-institutional multi-parametric magnetic resonance imaging (mpMRI) scans for two tasks: 1) evaluation of state-of-the-art methods for the segmentation of intrinsically heterogeneous brain glioblastoma sub-regions in mpMRI scans; and 2) evaluation of classification methods to predict the MGMT promoter methylation status at pre-operative baseline scans. We participated in the image segmentation task by applying a fully automated segmentation framework that we previously developed for BraTS 2020. This framework, named scale-attention network, incorporates a dynamic scale attention mechanism to integrate low-level details with high-level feature maps at different scales. Our framework was trained using the 1251 challenge training cases provided by BraTS 2021, and achieved an average Dice Similarity Coefficient (DSC) of 0.9277, 0.8851 and 0.8754, as well as 95% Hausdorff distances (in millimeters) of 4.2242, 15.3981 and 11.6925, on 570 testing cases for whole tumor, tumor core and enhancing tumor, respectively, which ranked second in the brain tumor segmentation task of the RSNA-ASNR-MICCAI BraTS 2021 Challenge (id: deepX).

Automatic Brain Tumor Segmentation with Scale Attention Network

  • Yuan, Yading
2021 Book Section, cited 0 times
Automatic segmentation of brain tumors is an essential but challenging step for extracting quantitative imaging biomarkers for accurate tumor detection, diagnosis, prognosis, treatment planning and assessment. The Multimodal Brain Tumor Segmentation Challenge 2020 (BraTS 2020) provides a common platform for comparing different automatic algorithms on multi-parametric Magnetic Resonance Imaging (mpMRI) in the tasks of 1) brain tumor segmentation in MRI scans; 2) prediction of patient overall survival (OS) from pre-operative MRI scans; 3) distinction of true tumor recurrence from treatment-related effects; and 4) evaluation of uncertainty measures in segmentation. We participated in the image segmentation challenge by developing a fully automatic segmentation network based on an encoder-decoder architecture. In order to better integrate information across different scales, we propose a dynamic scale attention mechanism that incorporates low-level details with high-level semantics from feature maps at different scales. Our framework was trained using the 369 challenge training cases provided by BraTS 2020, and achieved an average Dice Similarity Coefficient (DSC) of 0.8828, 0.8433 and 0.8177, as well as 95% Hausdorff distances (in millimeters) of 5.2176, 17.9697 and 13.4298, on 166 testing cases for whole tumor, tumor core and enhanced tumor, respectively, which ranked 3rd among 693 registrations in the BraTS 2020 challenge.

A 3D semi-automated co-segmentation method for improved tumor target delineation in 3D PET/CT imaging

  • Yu, Zexi
  • Bui, Francis M
  • Babyn, Paul
2015 Conference Proceedings, cited 1 times
Website
The planning of radiotherapy is increasingly based on multi-modal imaging techniques such as positron emission tomography (PET)-computed tomography (CT), since PET/CT provides not only anatomical but also functional assessment of the tumor. In this work, we propose a novel co-segmentation method, utilizing both the PET and CT images, to localize the tumor. The method formulates segmentation as minimization of a Markov random field model, which encapsulates features from both imaging modalities. The minimization problem can then be solved by the maximum flow algorithm, based on graph cuts theory. The proposed tumor delineation algorithm was validated both in a phantom with a high-radiation area and in patient data. The obtained results show significant improvement compared to existing segmentation methods with respect to various qualitative and quantitative metrics.
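
A toy sketch of the graph-cut machinery such a formulation reduces to, using the PyMaxflow package on a single 2D slice: PET uptake enters as unary (terminal) capacities and a constant smoothness weight as pairwise edges. The paper's joint PET/CT energy is simplified to one channel here, and the uptake map is a placeholder.

    import numpy as np
    import maxflow

    suv = np.load("pet_slice.npy")                  # placeholder PET uptake map
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(suv.shape)
    g.add_grid_edges(nodes, 0.5)                    # pairwise smoothness weight
    g.add_grid_tedges(nodes, suv, suv.max() - suv)  # unary: tumor vs background
    g.maxflow()
    tumor_mask = g.get_grid_segments(nodes)         # boolean min-cut labeling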

Co-Segmentation Methods for Improving Tumor Target Delineation in PET-CT Images

  • Yu, Zexi
2016 Thesis, cited 0 times
Website

Prediction of pathologic stage in non-small cell lung cancer using machine learning algorithm based on CT image feature analysis

  • Yu, L.
  • Tao, G.
  • Zhu, L.
  • Wang, G.
  • Li, Z.
  • Ye, J.
  • Chen, Q.
BMC Cancer 2019 Journal Article, cited 11 times
Website
PURPOSE: To explore imaging biomarkers that can be used for diagnosis and prediction of pathologic stage in non-small cell lung cancer (NSCLC) using multiple machine learning algorithms based on CT image feature analysis. METHODS: Patients with stage IA to IV NSCLC were included, and the whole dataset was divided into training and testing sets and an external validation set. To tackle the imbalanced class distribution in NSCLC, we generated a new dataset and achieved equilibrium of class distribution by using the SMOTE algorithm. The datasets were randomly split into training and testing sets. We calculated the importance of CT image features by means of the mean decrease in Gini impurity generated by the random forest algorithm and selected optimal features according to feature importance (mean decrease in Gini impurity > 0.005). The performance of the prediction model in the training and testing sets was evaluated from the perspectives of classification accuracy, average precision (AP) score and precision-recall curve. The predictive accuracy of the model was externally validated using lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC) samples from the TCGA database. RESULTS: The prediction model that incorporated nine image features exhibited high classification accuracy, precision and recall scores in the training and testing sets. In the external validation, the predictive accuracy of the model in LUAD outperformed that in LUSC. CONCLUSIONS: The pathologic stage of patients with NSCLC can be accurately predicted based on CT image features, especially for LUAD. Our findings extend the application of machine learning algorithms to CT image feature-based prediction of pathologic stage and identify potential imaging biomarkers that can be used for diagnosis of pathologic stage in NSCLC patients.
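
A minimal sketch of the stated pipeline: SMOTE (from imbalanced-learn) to balance the classes, then a random forest whose mean-decrease-in-Gini importances are thresholded at 0.005 to select features. Data arrays are placeholders.

    import numpy as np
    from imblearn.over_sampling import SMOTE
    from sklearn.ensemble import RandomForestClassifier

    X, y = np.load("ct_features.npy"), np.load("path_stage.npy")
    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)

    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_bal, y_bal)
    keep = rf.feature_importances_ > 0.005   # the paper's importance cut-off
    X_selected = X_bal[:, keep]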

Correlative hierarchical clustering-based low-rank dimensionality reduction of radiomics-driven phenotype in non-small cell lung cancer

  • Bardia Yousefi
  • Nariman Jahani
  • Michael J. LaRiviere
  • Eric Cohen
  • Meng-Kang Hsieh
  • José Marcio Luna
  • Rhea D. Chitalia
  • Jeffrey C. Thompson
  • Erica L. Carpenter
  • Sharyn I. Katz
  • Despina Kontos
2019 Conference Paper, cited 0 times
Website
Background: Lung cancer is one of the most common cancers in the United States and the most fatal, with 142,670 deaths in 2019. Accurately determining tumor response is critical to clinical treatment decisions, ultimately impacting patient survival. To better differentiate between non-small cell lung cancer (NSCLC) responders and non-responders to therapy, radiomic analysis is emerging as a promising approach to identify associated imaging features undetectable by the human eye. However, the plethora of variables extracted from an image may actually undermine the performance of computer-aided prognostic assessment, a phenomenon known as the curse of dimensionality. In the present study, we show that correlation-driven hierarchical clustering improves high-dimensional radiomics-based feature selection and dimensionality reduction, ultimately predicting overall survival in NSCLC patients. Methods: To select features from high-dimensional radiomics data, a correlation-incorporated hierarchical clustering algorithm automatically categorizes features into several groups. The truncation distance in the resulting dendrogram is used to control the categorization of the features, initiating low-rank dimensionality reduction in each cluster and providing descriptive features for Cox proportional hazards (CPH)-based survival analysis. Using a publicly available NSCLC radiogenomic dataset of 204 patients' CT images, 429 established radiomics features were extracted. Low-rank dimensionality reduction via principal component analysis (PCA) was employed (k=1, n<1) to find the representative components of each cluster of features and calculate cluster robustness using the relative weighted consistency metric. Results: Hierarchical clustering categorized radiomic features into several groups without initializing the number of clusters, using the correlation distance metric and truncating the resulting dendrogram at different distances. The dimensionality was reduced from 429 to 67 features (for a truncation distance of 0.1). The robustness within the features in clusters varied from -1.12 to -30.02 for truncation distances of 0.1 to 1.8, respectively, indicating that robustness decreases with increasing truncation distance when a smaller number of feature classes (i.e., clusters) is selected. The best multivariate CPH survival model had a C-statistic of 0.71 for a truncation distance of 0.1, outperforming conventional PCA approaches by 0.04, even when the same number of principal components was considered for feature dimensionality. Conclusions: The truncation distance of the correlative hierarchical clustering algorithm is directly associated with the robustness of the selected feature clusters and can effectively reduce feature dimensionality while improving outcome prediction.
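
A sketch of the dimensionality-reduction scheme, assuming SciPy and scikit-learn: cluster features hierarchically under a correlation distance, cut the dendrogram at a chosen truncation distance, and keep the first principal component of each cluster. The distance-metric details here are a plausible reading, not the authors' exact code.

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import squareform
    from sklearn.decomposition import PCA

    X = np.load("radiomics.npy")                       # (patients, 429 features), placeholder
    dist = 1 - np.abs(np.corrcoef(X, rowvar=False))    # correlation distance between features
    Z = linkage(squareform(dist, checks=False), method="average")
    labels = fcluster(Z, t=0.1, criterion="distance")  # truncation distance of 0.1

    # One representative component (k=1) per cluster of correlated features.
    components = [PCA(n_components=1).fit_transform(X[:, labels == c])
                  for c in np.unique(labels)]
    X_reduced = np.hstack(components)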

Incremental Learning Meets Transfer Learning: Application to Multi-site Prostate MRI Segmentation

  • You, Chenyu
  • Xiang, Jinlin
  • Su, Kun
  • Zhang, Xiaoran
  • Dong, Siyuan
  • Onofrey, John
  • Staib, Lawrence
  • Duncan, James S.
2022 Conference Paper, cited 9 times
Website
Many medical datasets have recently been created for medical image segmentation tasks, and it is natural to ask whether we can use them to sequentially train a single model that (1) performs better on all these datasets, and (2) generalizes well and transfers better to an unknown target site domain. Prior works have achieved this goal by jointly training one model on multi-site datasets, which achieves competitive performance on average, but such methods rely on the assumption that all training data are available, thus limiting their effectiveness in practical deployment. In this paper, we propose a novel multi-site segmentation framework called incremental-transfer learning (ITL), which learns a model from multi-site datasets in an end-to-end sequential fashion. Specifically, "incremental" refers to training on sequentially constructed datasets, and "transfer" is achieved by leveraging useful information from the linear combination of embedding features on each dataset. In addition, we introduce our ITL framework, in which we train a network comprising a site-agnostic encoder with pretrained weights and at most two segmentation decoder heads, and we design a novel site-level incremental loss in order to generalize well on the target domain. Second, we show for the first time that leveraging our ITL training scheme is able to alleviate the challenging catastrophic forgetting problem in incremental learning. We conduct experiments using five challenging benchmark datasets to validate the effectiveness of our incremental-transfer learning approach. Our approach makes minimal assumptions on computation resources and domain-specific expertise, and hence constitutes a strong starting point in multi-site medical image segmentation.

A feasibility study to estimate optimal rigid-body registration using combinatorial rigid registration optimization (CORRO)

  • Yorke, A. A.
  • Solis, D., Jr.
  • Guerrero, T.
J Appl Clin Med Phys 2020 Journal Article, cited 0 times
Website
PURPOSE: Clinical image pairs provide the most realistic test data for image registration evaluation. However, the optimal registration is unknown. Using combinatorial rigid registration optimization (CORRO) we demonstrate a method to estimate the optimal alignment for rigid registration of clinical image pairs. METHODS: Expert-selected landmark pairs were identified for each CT/CBCT image pair for six cases representing head and neck, thoracic, and pelvic anatomic regions. Combination subsets of k landmark pairs (k-combination sets) were generated without repeat to form a large collection of k-combination sets (k-set) for k = 4, 8, 12. The rigid transformation between the image pairs was calculated for each k-combination set, and the mean and standard deviation of these transformations were used to derive the final registration for each k-set. RESULTS: The standard deviation of the registration output decreased as the k-size increased for all cases. The joint entropy evaluated for each k-set of each case was smaller than those from two commercially available registration programs, indicating a stronger correlation between the image pair after CORRO was used. A joint histogram plot of all three algorithms showed high correlation between them. As further proof of the efficacy of CORRO, the joint entropy of each member of 30,000 k-combination sets for k = 4 was calculated for one of the thoracic cases. The minimum joint entropy was found to exist at the estimated registration mean, indicating that CORRO converges to the optimal rigid-registration result. CONCLUSIONS: We have developed a methodology called CORRO that allows us to estimate the optimal alignment for rigid registration of clinical image pairs using a large set of landmark points. The results for the rigid-body registration have been shown to be comparable to results from commercially available algorithms for all six cases. CORRO can serve as an excellent tool for testing and validating rigid registration algorithms.
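
A sketch of the combinatorial idea, assuming a Kabsch least-squares rigid fit per landmark subset: enumerate (or in practice sample, since the counts grow combinatorially) k-combinations of landmark pairs, solve each for a rigid transform, and summarize the transforms by mean and standard deviation. Landmark file names are placeholders.

    import itertools
    import numpy as np

    def kabsch(P, Q):
        # Least-squares rotation R and translation t mapping points P onto Q.
        Pc, Qc = P - P.mean(0), Q - Q.mean(0)
        U, _, Vt = np.linalg.svd(Pc.T @ Qc)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, Q.mean(0) - R @ P.mean(0)

    ct_pts = np.load("ct_landmarks.npy")       # (n, 3) landmark sets
    cbct_pts = np.load("cbct_landmarks.npy")

    # k = 4; capped here because the number of combinations grows rapidly with n.
    combos = itertools.islice(itertools.combinations(range(len(ct_pts)), 4), 20000)
    translations = [kabsch(ct_pts[list(i)], cbct_pts[list(i)])[1] for i in combos]
    t_mean, t_std = np.mean(translations, axis=0), np.std(translations, axis=0)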

Quality Assurance of Image Registration Using Combinatorial Rigid Registration Optimization (CORRO)

  • Yorke, Afua A.
  • McDonald, Gary C.
  • Solis, David
  • Guerrero, Thomas
2021 Journal Article, cited 0 times
Purpose: Expert-selected landmark points on clinical image pairs provide a basis for rigid registration validation. Using combinatorial rigid registration optimization (CORRO), we provide a statistically characterized reference dataset for image registration of the pelvis by estimating the optimal registration. Materials and Methods: Landmarks for each CT/CBCT image pair for 58 cases were identified. From the landmark pairs, combination subsets of k landmark pairs were generated without repeat, forming k-sets for k = 4, 8, and 12. A rigid registration between the image pairs was computed for each k-combination set (2,000-8,000,000). The mean and standard deviation of the registrations were used as the final registration for each image pair. Joint entropy was used to validate the output results. Results: An average of 154 (range: 91-212) landmark pairs were selected for each CT/CBCT image pair. The mean standard deviation of the registration output decreased as the k-size increased for all cases. In general, the joint entropy evaluated was found to be lower than results from commercially available software. Of all 58 cases, 58.3% of the k=4, 15% of the k=8, and 18.3% of the k=12 k-sets resulted in better registration using CORRO, compared to 8.3% for a commercial registration software. The minimum joint entropy was determined for one case and found to exist at the estimated registration mean, in agreement with the CORRO algorithm. Conclusion: The results demonstrate that CORRO works even in the extreme case of the pelvic anatomy, where the CBCT suffers from reduced quality due to increased noise levels. The estimated optimal registration using CORRO was found to be better than commercially available software for all k-sets tested. Additionally, the k-set of 4 resulted in the overall best outcomes compared to k=8 and 12, which is anticipated because k=8 and 12 are more likely to include combinations that degrade the accuracy of the registration.

Prognostic value of tumor metabolic imaging phenotype by FDG PET radiomics in HNSCC

  • Yoon, H.
  • Ha, S.
  • Kwon, S. J.
  • Park, S. Y.
  • Kim, J.
  • O, J. H.
  • Yoo, I. R.
Ann Nucl Med 2021 Journal Article, cited 1 times
Website
Objective: Tumor metabolic phenotype can be assessed with integrated image pattern analysis of 18F-fluoro-deoxy-glucose (FDG) Positron Emission Tomography/Computed Tomography (PET/CT), called radiomics. This study was performed to assess the prognostic value of radiomic PET parameters in head and neck squamous cell carcinoma (HNSCC) patients. Methods: 18F-FDG PET/CT data of 215 patients from the HNSCC collection in The Cancer Imaging Archive (TCIA) and of 122 patients from Seoul St. Mary's Hospital with baseline FDG PET/CT for locally advanced HNSCC were reviewed. Data from the TCIA database were used as a training cohort, and data from Seoul St. Mary's Hospital as a validation cohort. In the training cohort, primary tumors were segmented by Nestle's adaptive thresholding method. Segmented tumors in PET images were preprocessed using relative resampling of 64 bins. Forty-two PET parameters, including conventional parameters and texture parameters, were measured. Binary groups of homogeneous imaging phenotypes, clustered by the K-means method, were compared for overall survival (OS) and disease-free survival (DFS) by the log-rank test. Selected individual radiomic parameters were tested along with clinical factors, including age and sex, by Cox regression for OS and DFS, and the significant parameters were tested with multivariate analysis. Parameters significant on multivariate analysis were again tested with multivariate analysis in the validation cohort. Results: A total of 119 patients, 70 from the training and 49 from the validation cohort, were included in the study. The median follow-up period was 62 and 52 months for the training and the validation cohort, respectively. In the training cohort, binary groups with different metabolic radiomic phenotypes showed a significant difference in OS (p = 0.036) and a borderline difference in DFS (p = 0.086). Gray-Level Non-Uniformity for zone (GLNUGLZLM) was the most significant prognostic factor for both OS (hazard ratio [HR] 3.1, 95% confidence interval [CI] 1.4–7.3, p = 0.008) and DFS (HR 4.5, CI 1.3–16, p = 0.020). Multivariate analysis revealed GLNUGLZLM as an independent prognostic factor for OS (HR 3.7, 95% CI 1.1–7.5, p = 0.032). GLNUGLZLM remained an independent prognostic factor in the validation cohort (HR 14.8, 95% CI 3.3–66, p < 0.001). Conclusions: Baseline FDG PET radiomics contains risk information for survival prognosis in HNSCC patients. The metabolic heterogeneity parameter GLNUGLZLM may assist clinicians in patient risk assessment as a feasible prognostic factor.
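
A minimal sketch of the phenotype clustering and survival comparison, assuming scikit-learn and the lifelines package: K-means splits the radiomics matrix into two groups, which a log-rank test then compares. Data arrays are placeholders.

    import numpy as np
    from sklearn.cluster import KMeans
    from lifelines.statistics import logrank_test

    X = np.load("pet_radiomics.npy")       # (patients, 42 PET parameters)
    T = np.load("followup_months.npy")     # survival times
    E = np.load("event_observed.npy")      # 1 = event, 0 = censored

    grp = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    res = logrank_test(T[grp == 0], T[grp == 1],
                       event_observed_A=E[grp == 0], event_observed_B=E[grp == 1])
    print(f"log-rank p = {res.p_value:.3f}")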

MRI-Based Deep-Learning Method for Determining Glioma MGMT Promoter Methylation Status

  • Yogananda, C.G.B.
  • Shah, B.R.
  • Nalawade, S.S.
  • Murugesan, G.K.
  • Yu, F.F.
  • Pinho, M.C.
  • Wagner, B.C.
  • Mickey, B.
  • Patel, T.R.
  • Fei, B.
  • Madhuranthakam, A.J.
  • Maldjian, J.A.
American Journal of Neuroradiology 2021 Journal Article, cited 0 times
Website
BACKGROUND AND PURPOSE: O6-Methylguanine-DNA methyltransferase (MGMT) promoter methylation confers an improved prognosis and treatment response in gliomas. We developed a deep learning network for determining MGMT promoter methylation status using T2-weighted images (T2WI) only. MATERIALS AND METHODS: Brain MR imaging and corresponding genomic information were obtained for 247 subjects from The Cancer Imaging Archive and The Cancer Genome Atlas. One hundred sixty-three subjects had a methylated MGMT promoter. A T2WI-only network (MGMT-net) was developed to determine MGMT promoter methylation status and perform simultaneous single-label tumor segmentation. The network was trained using 3D-dense-UNets. Three-fold cross-validation was performed to generalize the performance of the networks. Dice scores were computed to determine tumor-segmentation accuracy. RESULTS: The MGMT-net demonstrated a mean cross-validation accuracy of 94.73% across the 3 folds (95.12%, 93.98%, and 95.12% [SD, 0.66%]) in predicting MGMT methylation status, with a sensitivity and specificity of 96.31% [SD, 0.04%] and 91.66% [SD, 2.06%], respectively, and a mean area under the curve of 0.93 [SD, 0.01]. The whole-tumor segmentation mean Dice score was 0.82 [SD, 0.008]. CONCLUSIONS: We demonstrate high classification accuracy in predicting MGMT promoter methylation status using only T2WI. Our network surpasses the sensitivity, specificity, and accuracy of histologic and molecular methods. This result represents an important milestone toward using MR imaging to predict prognosis and treatment response. Abbreviations: IDH = isocitrate dehydrogenase; MGMT = O6-methylguanine-DNA methyltransferase; PCR = polymerase chain reaction; T2WI = T2-weighted images; TCGA = The Cancer Genome Atlas; TCIA = The Cancer Imaging Archive.
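The Dice score used to grade the tumor segmentations has a standard closed form; a plain NumPy version for binary masks:

```python
# Dice similarity coefficient for binary segmentation masks.
import numpy as np

def dice(pred, truth, eps=1e-7):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)
```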

Non-invasive Profiling of Molecular Markers in Brain Gliomas using Deep Learning and Magnetic Resonance Images

  • Yogananda, Chandan Ganesh Bangalore
2021 Thesis, cited 0 times
Website
Gliomas account for the most common malignant primary brain tumors in both pediatric and adult populations. They arise from glial cells and are divided into low-grade and high-grade gliomas with significant differences in patient survival. Patients with aggressive high-grade gliomas have life expectancies of less than 2 years. Glioblastomas (GBM) are aggressive brain tumors classified by the World Health Organization (WHO) as grade IV brain cancer. The overall survival for GBM patients is poor and is in the range of 12 to 15 months. These tumors are typically treated by surgery, followed by radiotherapy and chemotherapy. Gliomas often consist of active tumor tissue, necrotic tissue, and surrounding edema. Magnetic Resonance Imaging (MRI) is the most commonly used modality to assess brain tumors because of its superior soft tissue contrast. MRI tumor segmentation is used to identify the subcomponents as enhancing, necrotic or edematous tissue. Due to the heterogeneity and tissue relaxation differences in these subcomponents, multi-parametric (or multi-contrast) MRI is often used for accurate segmentation. Manual brain tumor segmentation is a challenging and tedious task for human experts due to the variability of tumor appearance, unclear borders of the tumor and the need to evaluate multiple MR images with different contrasts simultaneously. In addition, manual segmentation is often prone to significant intra- and inter-rater variability. To address these issues, Chapter 2 of my dissertation aims at designing and developing a highly accurate, 3D Dense-Unet Convolutional Neural Network (CNN) for segmenting brain tumors into subcomponents that can easily be incorporated into a clinical workflow. Primary brain tumors demonstrate broad variations in imaging features, response to therapy, and prognosis. It has become evident that this heterogeneity is associated with specific molecular and genetic profiles. For example, isocitrate dehydrogenase 1 and 2 (IDH 1/2) mutated gliomas demonstrate increased survival compared to wild-type gliomas with the same histologic grade. Identification of the IDH mutation status as a marker for therapy and prognosis is considered one of the most important recent discoveries in brain glioma biology. Additionally, 1p/19q co-deletion and O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation are associated with differences in response to specific chemoradiation regimens. Currently, the only reliable way of determining a molecular marker is by obtaining glioma tissue either via an invasive brain biopsy or following open surgical resection. Although the molecular profiling of gliomas is now a routine part of the evaluation of specimens obtained at biopsy or tumor resection, it would be helpful to have this information prior to surgery. In some cases, the information would aid in planning the extent of tumor resection. In others, for tumors in locations where resection is not possible, and the risk of a biopsy is high, accurate delineation of the molecular and genetic profile of the tumor might be used to guide empiric treatment with radiation and/or chemotherapy. The ability to non-invasively profile these molecular markers using only T2w MRI has significant implications in determining therapy, predicting prognosis, and feasible clinical translation. Thus, Chapters 3, 4, and 5 of my dissertation focus on developing and evaluating deep learning algorithms for non-invasive profiling of molecular markers in brain gliomas using T2w MRI only.
This includes developing highly accurate, fully automated deep learning networks for: (i) classification of IDH mutation status (Chapter 3), (ii) classification of 1p/19q co-deletion status (Chapter 4), and (iii) classification of MGMT promoter status in brain gliomas (Chapter 5). An important caveat of using MRI is the effect of image degradation, such as motion artifact, on the performance of deep learning-based algorithms. Motion artifacts are an especially pervasive source of MR image quality degradation and can be due to gross patient movements as well as cardiac and respiratory motion. In clinical practice, these artifacts can interfere with diagnostic interpretation, necessitating repeat imaging. The effect of motion artifacts on medical images and deep learning-based molecular profiling algorithms has not been studied systematically. It is likely that motion corruption will also lead to reduced performance of deep-learning algorithms in classifying brain tumor images. Deep learning-based brain tumor segmentation and molecular profiling algorithms generally perform well only on specific datasets. Clinical translation of such algorithms has the potential to reduce interobserver variability, improve planning for radiation therapy, and improve the speed of and response to therapy. Although these algorithms perform very well on several publicly available datasets, their generalization to clinical datasets or tasks has been poor, preventing easy clinical translation. Thus, Chapter 6 of my dissertation focuses on evaluating the performance of the molecular profiling algorithms on motion-corrupted, motion-corrected, and clinical T2w MRI. This includes: (i) evaluating the effect of motion corruption on the molecular profiling algorithms, (ii) determining whether deep learning-based motion correction can recover the performance of these algorithms to levels similar to non-corrupted images, and (iii) evaluating the performance of these algorithms on clinical T2w MRI before and after motion correction. This chapter is an investigation of the effects of induced motion artifact on deep learning-based molecular classification, and of the relative importance of robust correction methods in recovering the accuracies for potential clinical applicability. Deep-learning studies typically require a very large amount of data to achieve good performance. The number of subjects available from the TCIA database is relatively small compared to the sample sizes typically required for deep learning. Despite this caveat, the data are representative of real-world clinical experience, with multiparametric MR images from multiple institutions, and represent one of the largest publicly available brain tumor databases. Additionally, the acquisition parameters and imaging vendor platforms are diverse across the imaging centers contributing data to TCIA. This study provides a framework for training, evaluating, and benchmarking any new artifact-correction architectures for potential insertion into a workflow. Although our results show promise for expeditious clinical translation, it will be essential to train and validate the algorithms using additional independent datasets. Thus, Chapter 7 of my dissertation discusses the limitations and possible future directions of this work.

Lung cancer deaths in the National Lung Screening Trial attributed to nonsolid nodules

  • Yip, Rowena
  • Yankelevitz, David F
  • Hu, Minxia
  • Li, Kunwei
  • Xu, Dong Ming
  • Jirapatnakul, Artit
  • Henschke, Claudia I
Radiology 2016 Journal Article, cited 0 times

Lung Cancers Manifesting as Part-Solid Nodules in the National Lung Screening Trial

  • Yip, Rowena
  • Henschke, Claudia I
  • Xu, Dong Ming
  • Li, Kunwei
  • Jirapatnakul, Artit
  • Yankelevitz, David F
American Journal of Roentgenology 2017 Journal Article, cited 13 times
Website

The Tumor Mix-Up in 3D Unet for Glioma Segmentation

  • Yin, Pengyu
  • Hu, Yingdong
  • Liu, Jing
  • Duan, Jiaming
  • Yang, Wei
  • Cheng, Kun
2020 Book Section, cited 0 times
Automated segmentation of glioma and its subregions has great importance throughout the clinical workflow, including diagnosis, monitoring, and treatment planning of brain cancer. The automatic delineation of tumours has drawn much attention in the past few years, particularly for neural-network-based supervised learning methods. Clinical data acquisition, however, is expensive and time-consuming, which is a key limitation of machine learning on medical data. We describe a solution for brain tumor segmentation in the context of the BraTS19 challenge. The main learning scheme is based on a 3D U-Net encoder and decoder with intense data augmentation followed by bias correction. At the moment we submitted this short paper, our solution achieved Dice scores of 76.84, 85.74, and 74.51 for the enhancing tumor, whole tumor, and tumor core, respectively, on the validation data.
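For readers unfamiliar with mixup: the entry builds on the standard formulation, a convex combination of two training pairs. The sketch below shows only that generic recipe; the tumor-specific variant used by the authors is not reproduced here:

```python
# Generic mixup augmentation: convex combination of two image/label pairs.
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.4, rng=None):
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)        # mixing coefficient in (0, 1)
    x = lam * x1 + (1.0 - lam) * x2     # blend 3D image volumes
    y = lam * y1 + (1.0 - lam) * y2     # blend (one-hot) label maps
    return x, y, lam
```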

The Effect of Heterogeneous Subregions in Glioblastomas on Survival Stratification: A Radiomics Analysis Using the Multimodality MRI

  • Yin, L.
  • Liu, Y.
  • Zhang, X.
  • Lu, H.
  • Liu, Y.
Technol Cancer Res Treat 2021 Journal Article, cited 0 times
Website
Intratumor heterogeneity is partly responsible for the poor prognosis of glioblastoma (GBM) patients. In this study, we aimed to assess the effect of different heterogeneous subregions of GBM on overall survival (OS) stratification. A total of 105 GBM patients were retrospectively enrolled and divided into long-term and short-term OS groups. Four MRI sequences, including contrast-enhanced T1-weighted imaging (T1C), T1, T2, and FLAIR, were collected for each patient. Then, 4 heterogeneous subregions, i.e., the region of entire abnormality (rEA) and the regions of contrast-enhanced tumor (rCET), necrosis (rNec), and edema/non-contrast-enhanced tumor (rE/nCET), were manually drawn from the 4 MRI sequences. For each subregion, 50 radiomics features were extracted. The stratification performance of the 4 heterogeneous subregions, as well as that of the 4 MRI sequences, was evaluated both alone and in combination. Our results showed that rEA was superior in stratifying long- and short-term OS. For the 4 MRI sequences used in this study, the FLAIR sequence demonstrated the best survival-stratification performance based on the manual delineation of heterogeneous subregions. Our results suggest that the heterogeneous subregions of GBMs contain different prognostic information, which should be considered when investigating survival stratification in patients with GBM.

CA-Net: Collaborative Attention Network for Multi-modal Diagnosis of Gliomas

  • Yin, Baocai
  • Cheng, Hu
  • Wang, Fengyan
  • Wang, Zengfu
2022 Book Section, cited 0 times
Deep neural network methods have led to impressive breakthroughs in medical imaging. Most of them focus on single-modal data, while diagnoses in clinical practice are usually determined based on multi-modal data, especially for tumor diseases. In this paper, we intend to find a way to effectively fuse radiology images and pathology images for the diagnosis of gliomas. To this end, we propose a collaborative attention network (CA-Net), which consists of three attention-based feature-fusion modules: multi-instance attention, cross attention, and attention fusion. We first take an individual network for each modality to extract the original features. Multi-instance attention combines different informative patches in the pathology image to form a holistic pathology feature. Cross attention interacts between the two modalities and enhances single-modality features by exploring complementary information from the other modality. The cross-attention matrices imply feature reliability, so they are further utilized to obtain a coefficient for each modality to linearly fuse the enhanced features into the final representation in the attention fusion module. The three attention modules collaborate to discover a comprehensive representation. Our result on CPM-RadPath outperforms other fusion methods by a large margin, which demonstrates the effectiveness of the proposed method.

Brain Tumor Classification Based on MRI Images and Noise Reduced Pathology Images

  • Yin, Baocai
  • Cheng, Hu
  • Wang, Fengyan
  • Wang, Zengfu
2021 Book Section, cited 0 times
Gliomas are the most common and severe malignant tumors of the brain. The diagnosis and grading of gliomas are typically based on MRI images and pathology images. To improve diagnostic accuracy and efficiency, we design a framework for computer-aided diagnosis combining the two modalities. Without loss of generality, we first take an individual network for each modality to extract features and fuse them to predict the subtype of gliomas. For MRI images, we directly use a 3D-CNN to extract features, supervised by a cross-entropy loss function. There are too many normal regions in abnormal whole-slide pathology images (WSI), which affect the training of pathology features. We call these normal regions noise regions and propose two ideas to reduce them. First, we introduce a nucleus segmentation model trained on several public datasets; regions with a small number of nuclei are excluded from the subsequent training of tumor classification. Second, we use a noise-rank module to further suppress the noise regions. After noise reduction, we train a glioma classification model on the remaining regions and obtain the features of the pathology images. Finally, we fuse the features of the two modalities with a linearly weighted module. We evaluate the proposed framework on CPM-RadPath2020 and achieve first rank on the validation set.

DIAGNOSIS OF LUNG CANCER USING MULTISCALE CONVOLUTIONAL NEURAL NETWORK

  • Yektai, Homayoon
  • Manthouri, Mohammad
Biomedical Engineering: Applications, Basis and Communications 2020 Journal Article, cited 0 times
Website
Lung cancer is one of the dangerous diseases that cause a large number of cancer deaths worldwide. Early detection of lung cancer is the only possible way to improve a patient's chance of survival. This study presents an innovative automated diagnostic classification method for computed tomography (CT) images of lungs. In this paper, CT scans of lung images were analyzed with multiscale convolution. The entire lung is segmented from the CT images and parameters are calculated from the segmented image. The use of image processing techniques and the identification of patterns in the detection of lung cancer from CT images reduce human error in detecting tumors and shorten diagnosis time. Artificial Neural Networks (ANN) have been widely used to detect lung cancer and have significantly reduced the percentage of errors. Therefore, in this paper, the Convolutional Neural Network (CNN), one of the most effective methods, is used for the detection of various types of cancers. This study presents a Multiscale Convolutional Neural Network (MCNN) approach for the classification of tumors. Based on the structure of the MCNN, which presents the CT image to several deep convolutional neural networks with different sizes and resolutions, the classical handcrafted feature-extraction step is avoided. The proposed approach gives better classification rates than classical state-of-the-art methods, allowing a safer computer-aided diagnosis of pleural cancer. This study reaches a diagnostic accuracy of 93.7±0.3 using the multiscale convolution technique, which reveals the efficiency of the proposed method.

Effects of phase aberration on transabdominal focusing for a large aperture, low f-number histotripsy transducer

  • Yeats, Ellen
  • Gupta, Dinank
  • Xu, Zhen
  • Hall, Timothy L
Physics in Medicine & Biology 2022 Journal Article, cited 4 times
Website

Development and Validation of an Automated Image-Based Deep Learning Platform for Sarcopenia Assessment in Head and Neck Cancer

  • Ye, Zezhong
  • Saraf, Anurag
  • Ravipati, Yashwanth
  • Hoebers, Frank
  • Catalano, Paul J.
  • Zha, Yining
  • Zapaishchykova, Anna
  • Likitlersuang, Jirapat
  • Guthier, Christian
  • Tishler, Roy B.
  • Schoenfeld, Jonathan D.
  • Margalit, Danielle N.
  • Haddad, Robert I.
  • Mak, Raymond H.
  • Naser, Mohamed
  • Wahid, Kareem A.
  • Sahlsten, Jaakko
  • Jaskari, Joel
  • Kaski, Kimmo
  • Mäkitie, Antti A.
  • Fuller, Clifton D.
  • Aerts, Hugo J. W. L.
  • Kann, Benjamin H.
2023 Journal Article, cited 0 times
Website
Importance: Sarcopenia is an established prognostic factor in patients with head and neck squamous cell carcinoma (HNSCC); the quantification of sarcopenia assessed by imaging is typically achieved through the skeletal muscle index (SMI), which can be derived from cervical skeletal muscle segmentation and cross-sectional area. However, manual muscle segmentation is labor intensive, prone to interobserver variability, and impractical for large-scale clinical use. Objective: To develop and externally validate a fully automated image-based deep learning platform for cervical vertebral muscle segmentation and SMI calculation and evaluate associations with survival and treatment toxicity outcomes. Design, Setting, and Participants: For this prognostic study, a model development data set was curated from publicly available and deidentified data from patients with HNSCC treated at MD Anderson Cancer Center between January 1, 2003, and December 31, 2013. A total of 899 patients undergoing primary radiation for HNSCC with abdominal computed tomography scans and complete clinical information were selected. An external validation data set was retrospectively collected from patients undergoing primary radiation therapy between January 1, 1996, and December 31, 2013, at Brigham and Women's Hospital. The data analysis was performed between May 1, 2022, and March 31, 2023. Exposures: C3 vertebral skeletal muscle segmentation during radiation therapy for HNSCC. Main Outcomes and Measures: Overall survival and treatment toxicity outcomes of HNSCC. Results: The total patient cohort comprised 899 patients with HNSCC (median [range] age, 58 [24-90] years; 140 female [15.6%] and 755 male [84.0%]). Dice similarity coefficients for the validation set (n = 96) and internal test set (n = 48) were 0.90 (95% CI, 0.90-0.91) and 0.90 (95% CI, 0.89-0.91), respectively, with a mean 96.2% acceptable rate between 2 reviewers on external clinical testing (n = 377). Estimated cross-sectional area and SMI values were associated with manually annotated values (Pearson r = 0.99; P < .001) across data sets. On multivariable Cox proportional hazards regression, SMI-derived sarcopenia was associated with worse overall survival (hazard ratio, 2.05; 95% CI, 1.04-4.04; P = .04) and longer feeding tube duration (median [range], 162 [6-1477] vs 134 [15-1255] days; hazard ratio, 0.66; 95% CI, 0.48-0.89; P = .006) than no sarcopenia. Conclusions and Relevance: This prognostic study's findings show external validation of a fully automated deep learning pipeline to accurately measure sarcopenia in HNSCC and an association with important disease outcomes. The pipeline could enable the integration of sarcopenia assessment into clinical decision making for individuals with HNSCC.

Optimizing interstitial photodynamic therapy with custom cylindrical diffusers

  • Yassine, Abdul‐Amir
  • Lilge, Lothar
  • Betz, Vaughn
Journal of Biophotonics 2018 Journal Article, cited 0 times
Website

Machine learning for real-time optical property recovery in interstitial photodynamic therapy: a simulation-based study

  • Yassine, Abdul-Amir
  • Lilge, Lothar
  • Betz, Vaughn
Biomedical Optics Express 2021 Journal Article, cited 1 times
Website

Automatic interstitial photodynamic therapy planning via convex optimization

  • Yassine, Abdul-Amir
  • Kingsford, William
  • Xu, Yiwen
  • Cassidy, Jeffrey
  • Lilge, Lothar
  • Betz, Vaughn
Biomedical Optics Express 2018 Journal Article, cited 3 times
Website

A NOVEL COMPARATIVE STUDY FOR AUTOMATIC THREE-CLASS AND FOUR-CLASS COVID-19 CLASSIFICATION ON X-RAY IMAGES USING DEEP LEARNING

  • Yaşar, Hüseyin
  • Ceylan, Murat
2022 Journal Article, cited 0 times
Website
The contagiousness rate of the COVID-19 virus, which is thought to have been transmitted from an animal to a human during the last months of 2019, is higher than that of the MERS-CoV and SARS-CoV viruses originating from the same family. The high rate of contagion has caused the COVID-19 virus to spread rapidly to all countries of the world. Detecting cases quickly is of great importance for controlling the spread of the COVID-19 virus. Therefore, the development of systems that make automatic COVID-19 diagnoses using artificial intelligence approaches based on X-ray, CT scan, and ultrasound images is an urgent and indispensable requirement. To increase the number of X-ray images used within the study, a mixed data set was created by combining eight different data sets, thus maximizing the scope of the study. In the study, a total of 9,667 X-ray images were used, including 3,405 COVID-19 samples, 2,780 bacterial pneumonia samples, 1,493 viral pneumonia samples, and 1,989 healthy samples. In this study, which aims to diagnose COVID-19 disease using X-ray images, automatic classification was performed using two different classification structures: COVID-19 Pneumonia/Other Pneumonia/Healthy and COVID-19 Pneumonia/Bacterial Pneumonia/Viral Pneumonia/Healthy. Convolutional Neural Networks (CNNs), a successful deep learning method, were used as classifiers within the study. A total of seven CNN architectures were used: MobileNetV2, ResNet101, GoogLeNet, Xception, DenseNet201, EfficientNetB0, and InceptionV3. Classification results were obtained both from the original X-ray images and from images processed with Local Binary Pattern and Local Entropy. New classification results were then calculated from the obtained results using a pipeline algorithm. Detailed results were obtained to meet the scope of the study. According to the experiments carried out, the three most successful CNN architectures for both three-class and four-class automatic classification were DenseNet201, Xception, and InceptionV3, respectively. In addition, the pipeline algorithm used in the study proved very useful for improving the results. The study results show that improvements of up to 1.57% were achieved in some comparison parameters.
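The two texture transforms fed to the CNNs, Local Binary Pattern and local entropy, are available in scikit-image; the parameter choices below (P, R, disk radius) are illustrative, not the paper's:

```python
# Texture maps named in the abstract, computed with scikit-image.
from skimage.feature import local_binary_pattern
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte

def texture_channels(xray):
    """Return (LBP map, local-entropy map) for a 2D grayscale image in [0, 1]."""
    img8 = img_as_ubyte(xray)
    lbp = local_binary_pattern(img8, 8, 1, method="uniform")  # P=8 neighbors, R=1
    ent = entropy(img8, disk(5))                              # entropy in a 5-px disk
    return lbp, ent
```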

A novel study for automatic two-class COVID-19 diagnosis (between COVID-19 and Healthy, Pneumonia) on X-ray images using texture analysis and 2-D/3-D convolutional neural networks

  • Yasar, H.
  • Ceylan, M.
Multimed Syst 2022 Journal Article, cited 0 times
Website
The pandemic caused by the COVID-19 virus affects the world widely and heavily. When examining CT, X-ray, and ultrasound images, radiologists must first determine whether there are signs of COVID-19 in the images; that is, COVID-19/Healthy detection is made. The second determination is the separation of pneumonia caused by the COVID-19 virus from pneumonia caused by a bacterium or a virus other than COVID-19. This distinction is key in determining the treatment and isolation procedure to be applied to the patient. In this study, which aims to diagnose COVID-19 early using X-ray images, automatic two-class classification was carried out under four different headings: COVID-19/Healthy, COVID-19 Pneumonia/Bacterial Pneumonia, COVID-19 Pneumonia/Viral Pneumonia, and COVID-19 Pneumonia/Other Pneumonia. For this study, 3,405 COVID-19, 2,780 bacterial pneumonia, 1,493 viral pneumonia, and 1,989 healthy images, obtained by combining eight different open-access data sets, were used. In the study, besides using the original X-ray images alone, classification results were obtained using images processed with Local Binary Pattern (LBP) and Local Entropy (LE). The classification procedures were repeated for images combining the original images, LBP images, and LE images in various combinations. 2-D CNNs (two-dimensional convolutional neural networks) and 3-D CNNs (three-dimensional convolutional neural networks) were used as classifiers within the scope of the study. MobileNetV2, ResNet101, and GoogLeNet architectures were used as 2-D CNNs. A 24-layer 3-D CNN architecture was also designed and used. Our study is the first to analyze the effect of diversifying the input data type on the classification results of 2-D/3-D CNN architectures. The results obtained within the scope of the study indicate that diversifying X-ray images with texture analysis methods in the diagnosis of COVID-19 and including them as CNN input provides significant improvements in the results. The results also suggest that the 3-D CNN architecture can be an important alternative for achieving a high classification result.

Deep Learning–Based Approaches to Improve Classification Parameters for Diagnosing COVID-19 from CT Images

  • Yasar, H.
  • Ceylan, M.
Cognit Comput 2021 Journal Article, cited 0 times
Website
Patients infected with the COVID-19 virus develop severe pneumonia, which generally leads to death. Radiological evidence has demonstrated that the disease causes interstitial involvement in the lungs and lung opacities, as well as bilateral ground-glass opacities and patchy opacities. In this study, new pipeline suggestions are presented, and their performance is tested to decrease the number of false-negative (FN), false-positive (FP), and total misclassified images (FN + FP) in the diagnosis of COVID-19 (COVID-19/non-COVID-19 and COVID-19 pneumonia/other pneumonia) from CT lung images. A total of 4320 CT lung images, of which 2554 were related to COVID-19 and 1766 to non-COVID-19, were used for the test procedures in COVID-19 and non-COVID-19 classifications. Similarly, a total of 3801 CT lung images, of which 2554 were related to COVID-19 pneumonia and 1247 to other pneumonia, were used for the test procedures in COVID-19 pneumonia and other pneumonia classifications. A 24-layer convolutional neural network (CNN) architecture was used for the classification processes. Within the scope of this study, the results of two experiments were obtained by using CT lung images with and without local binary pattern (LBP) application, and sub-band images were obtained by applying dual-tree complex wavelet transform (DT-CWT) to these images. Next, new classification results were calculated from these two results by using the five pipeline approaches presented in this study. For COVID-19 and non-COVID-19 classification, the highest sensitivity, specificity, accuracy, F-1, and AUC values obtained without using pipeline approaches were 0.9676, 0.9181, 0.9456, 0.9545, and 0.9890, respectively; using pipeline approaches, the values were 0.9832, 0.9622, 0.9577, 0.9642, and 0.9923, respectively. For COVID-19 pneumonia/other pneumonia classification, the highest sensitivity, specificity, accuracy, F-1, and AUC values obtained without using pipeline approaches were 0.9615, 0.7270, 0.8846, 0.9180, and 0.9370, respectively; using pipeline approaches, the values were 0.9915, 0.8140, 0.9071, 0.9327, and 0.9615, respectively. The results of this study show that the proposed pipeline approaches can increase classification success while keeping the time needed to obtain per-image results low.

Rhinological Status of Patients with Nasolacrimal Duct Obstruction

  • Yartsev, Vasily D.
  • Atkova, Eugenia L.
  • Rozmanov, Eugeniy O.
  • Yartseva, Nina D.
International Archives of Otorhinolaryngology 2021 Journal Article, cited 0 times
Website
Introduction: Studying the state of the nasal cavity and its sinuses and the morphometric parameters of the inferior nasal conchae, together with a comparative analysis of the values obtained in patients with primary (PANDO) and secondary (SALDO) acquired nasolacrimal duct obstruction, is relevant. Objective: To study the rhinological status of patients with PANDO and SALDO. Methods: The present study was based on the results of computed tomography (CT) dacryocystography in patients with PANDO (n = 45) and with SALDO due to exposure to radioactive iodine (n = 14). The control group included CT images of the paranasal sinuses in patients with no pathology (n = 49). Rhinological status according to the Newman and Lund-Mackay scales and the volume of the inferior nasal conchae were assessed. Statistical processing included nonparametric statistics, Pearson's χ2 test, and the Spearman rank correlation method. Results: The difference in values on the Newman and Lund-Mackay scales between the tested groups was significant. A significant difference in Newman scale scores was revealed when comparing the results of patients with SALDO and PANDO. Comparing the Lund-Mackay scale scores, a significant difference was found between the results of patients with SALDO and PANDO and between the results of patients with PANDO and the control group. Conclusion: It was demonstrated that the rhinological status of patients with PANDO was worse than that of patients with SALDO and of subjects in the control group. No connection was found between the volume of the inferior nasal conchae and the development of lacrimal duct obstruction. Keywords: nasolacrimal duct; sinus; computed tomography; dacryocystography; Newman scale; Lund-Mackay scale.

State-of-the-Art CNN Optimizer for Brain Tumor Segmentation in Magnetic Resonance Images

  • Yaqub, M.
  • Jinchao, F.
  • Zia, M. S.
  • Arshid, K.
  • Jia, K.
  • Rehman, Z. U.
  • Mehmood, A.
2020 Journal Article, cited 0 times
Website
Brain tumors have become a leading cause of death around the globe. The main reason for this epidemic is the difficulty of conducting a timely diagnosis of the tumor. Fortunately, magnetic resonance images (MRI) are utilized to diagnose tumors in most cases. The performance of a Convolutional Neural Network (CNN) depends on many factors (i.e., weight initialization, optimization, batches and epochs, learning rate, activation function, loss function, and network topology), data quality, and specific combinations of these model attributes. When we deal with a segmentation or classification problem, utilizing a single optimizer is considered weak validation unless the choice of optimizer is backed up by a strong argument. Therefore, an optimizer selection process is important to justify the use of a single optimizer for these decision problems. In this paper, we provide a comprehensive comparative analysis of popular CNN optimizers to benchmark segmentation for improvement. In detail, we perform a comparative analysis of 10 different state-of-the-art gradient-descent-based optimizers, namely Adaptive Gradient (Adagrad), Adaptive Delta (AdaDelta), Stochastic Gradient Descent (SGD), Adaptive Momentum (Adam), Cyclic Learning Rate (CLR), Adamax, Root Mean Square Propagation (RMSProp), Nesterov Adaptive Momentum (Nadam), and Nesterov accelerated gradient (NAG) for CNNs. The experiments were performed on the BraTS2015 data set. The Adam optimizer had the best accuracy, 99.2%, in enhancing the CNN's ability in classification and segmentation.
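The comparison protocol amounts to retraining the same network under each optimizer and recording the outcome. A minimal PyTorch sketch with placeholder hyperparameters follows; NAG is expressed as SGD with nesterov=True, and CLR (a learning-rate schedule layered on SGD) is omitted for brevity:

```python
# Sketch of an optimizer-comparison loop; model, loaders, and learning
# rates are placeholders, not the paper's setup.
import torch
import torch.nn as nn

OPTIMIZERS = {
    "SGD":      lambda p: torch.optim.SGD(p, lr=0.01, momentum=0.9),
    "NAG":      lambda p: torch.optim.SGD(p, lr=0.01, momentum=0.9, nesterov=True),
    "Adagrad":  lambda p: torch.optim.Adagrad(p, lr=0.01),
    "Adadelta": lambda p: torch.optim.Adadelta(p),
    "Adam":     lambda p: torch.optim.Adam(p, lr=1e-3),
    "Adamax":   lambda p: torch.optim.Adamax(p, lr=2e-3),
    "RMSprop":  lambda p: torch.optim.RMSprop(p, lr=1e-3),
    "NAdam":    lambda p: torch.optim.NAdam(p, lr=2e-3),
}

def benchmark(make_model, train_loader, epochs=10, device="cuda"):
    loss_fn = nn.CrossEntropyLoss()
    results = {}
    for name, make_opt in OPTIMIZERS.items():
        model = make_model().to(device)       # fresh weights for each optimizer
        opt = make_opt(model.parameters())
        for _ in range(epochs):
            for x, y in train_loader:
                x, y = x.to(device), y.to(device)
                opt.zero_grad()
                loss = loss_fn(model(x), y)
                loss.backward()
                opt.step()
        results[name] = loss.item()           # last-batch loss as a crude proxy
    return results
```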

Clinically relevant modeling of tumor growth and treatment response

  • Yankeelov, Thomas E
  • Atuegwu, Nkiruka
  • Hormuth, David
  • Weis, Jared A
  • Barnes, Stephanie L
  • Miga, Michael I
  • Rericha, Erin C
  • Quaranta, Vito
Science Translational Medicine 2013 Journal Article, cited 70 times
Website
Current mathematical models of tumor growth are limited in their clinical application because they require input data that are nearly impossible to obtain with sufficient spatial resolution in patients even at a single time point--for example, extent of vascularization, immune infiltrate, ratio of tumor-to-normal cells, or extracellular matrix status. Here we propose the use of emerging, quantitative tumor imaging methods to initialize a new generation of predictive models. In the near future, these models could be able to forecast clinical outputs, such as overall response to treatment and time to progression, which will provide opportunities for guided intervention and improved patient care.

An Improvement of Survival Stratification in Glioblastoma Patients via Combining Subregional Radiomics Signatures

  • Yang, Y.
  • Han, Y.
  • Hu, X.
  • Wang, W.
  • Cui, G.
  • Guo, L.
  • Zhang, X.
Front Neurosci 2021 Journal Article, cited 0 times
Website
Purpose: To investigate whether combining multiple radiomics signatures derived from the subregions of glioblastoma (GBM) can improve survival prediction of patients with GBM. Methods: In total, 129 patients were included in this study and split into training (n = 99) and test (n = 30) cohorts. Radiomics features were extracted from each tumor region, and radiomics scores were obtained separately using least absolute shrinkage and selection operator (LASSO) Cox regression. A clinical nomogram was also constructed using various clinical risk factors. Radiomics nomograms were constructed by combining a single radiomics signature from the whole tumor region with clinical risk factors, or by combining three radiomics signatures from three tumor subregions with clinical risk factors. The performance of these models was assessed by discrimination, calibration, and clinical usefulness metrics, and was compared with that of the clinical nomogram. Results: Incorporating the three radiomics signatures, i.e., the Radscores for ET, NET, and ED, into the radiomics-based nomogram improved the performance in estimating survival (C-index, training/test cohort: 0.717/0.655) compared with the clinical nomogram (C-index, training/test cohort: 0.633/0.560) and the radiomics nomogram based on single-region radiomics signatures (C-index, training/test cohort: 0.656/0.535). Conclusion: The multiregional radiomics nomogram exhibited a favorable survival stratification accuracy.
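A radiomics score of this kind is typically the linear predictor of an L1-penalized Cox model. Below is a sketch using lifelines, with illustrative column names and penalty strength; the paper's exact LASSO settings are not given in the abstract:

```python
# Sketch of deriving a radiomics score (Radscore) per tumor subregion
# with LASSO-Cox regression.
import pandas as pd
from lifelines import CoxPHFitter

def radscore(features: pd.DataFrame, time, event, penalty=0.1):
    """features: patients x radiomics features; returns a per-patient score."""
    df = features.copy()
    df["T"], df["E"] = time, event
    cph = CoxPHFitter(penalizer=penalty, l1_ratio=1.0)  # pure L1 => LASSO-Cox
    cph.fit(df, duration_col="T", event_col="E")
    coefs = cph.params_.reindex(features.columns).fillna(0.0)
    return features.values @ coefs.values               # linear risk score
```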

Cascaded Coarse-to-Fine Neural Network for Brain Tumor Segmentation

  • Yang, Shuojue
  • Guo, Dong
  • Wang, Lu
  • Wang, Guotai
2021 Book Section, cited 0 times
A cascaded framework of coarse-to-fine networks is proposed to segment brain tumors from multi-modality MR images into three subregions: enhancing tumor, whole tumor, and tumor core. The framework is designed to decompose this multi-class segmentation into two sequential tasks according to the hierarchical relationship among these regions. In the first task, a coarse-to-fine model based on the Global Context Network predicts the segmentation of the whole tumor, which provides a bounding box of all three substructures to crop the input MR images. In the second task, the cropped multi-modality MR images are fed into another two coarse-to-fine models based on NvNet, trained on small patches, to generate segmentations of the tumor core and enhancing tumor, respectively. Experiments on the BraTS 2020 validation set show that the proposed method achieves average Dice scores of 0.8003, 0.9123, and 0.8630 for the enhancing tumor, whole tumor, and tumor core, respectively. The corresponding values for the BraTS 2020 testing set were 0.81715, 0.88229, and 0.83085, respectively.

Snake-based interactive tooth segmentation for 3D mandibular meshes

  • Yang, Rui
  • Abdi, Amir H.
  • Eghbal, Atabak
  • Wang, Edward
  • Tran, Khanh Linh
  • Yang, David
  • Hodgson, Antony
  • Prisman, Eitan
  • Fels, Sidney
  • Linte, Cristian A.
  • Siewerdsen, Jeffrey H.
2021 Conference Paper, cited 0 times
Website
Mandibular meshes segmented from computed tomography (CT) images contain rich information about dentition conditions, which impairs the performance of shape-completion algorithms relying on such data but can benefit virtual planning for oral reconstructive surgeries. To locate the alveolar process and remove the dentition area, we propose a semiautomatic method using non-rigid registration, an active contour model, and constructive solid geometry (CSG) operations. An easy-to-use interactive tool was developed that allows users to adjust the tooth crown contour position. A validation study and a comparison study were conducted for method evaluation. In the validation study, we removed teeth for 28 models acquired from Vancouver General Hospital (VGH) and ran a shape completion test. Regarding the 95th-percentile Hausdorff distance (HD95), using edentulous models produced significantly better predictions of the premorbid shapes of diseased mandibles than using models with inconsistent dentition conditions (Z = −2.484, p = 0.01). The volumetric Dice score (DSC) showed no significant difference. In the second study, we compared the proposed method to manual removal in terms of manual processing time, symmetric HD95, and symmetric root mean square deviation (RMSD). The results indicate that our method reduced manual processing time by 40% on average and approached the accuracy of manual tooth segmentation, which is promising enough to warrant further efforts toward clinical usage. This work forms the basis of a useful tool for coupling jaw reconstruction and restorative dentition in patient treatment planning.

Learning Dynamic Convolutions for Multi-modal 3D MRI Brain Tumor Segmentation

  • Yang, Qiushi
  • Yuan, Yixuan
2021 Book Section, cited 0 times
Accurate automated brain tumor segmentation from 3D Magnetic Resonance Images (MRIs) liberates doctors from tedious annotation work and further supports monitoring and prompt treatment of the disease. Many recent Deep Convolutional Neural Networks (DCNNs) achieve tremendous success in medical image analysis, especially tumor segmentation, but they usually use static networks without considering the inherent diversity of multi-modal inputs. In this paper, we introduce a dynamic convolutional module into brain tumor segmentation that helps learn input-adaptive parameters for specific multi-modal images. To the best of our knowledge, this is the first work to adopt dynamic convolutional networks for segmenting brain tumors with 3D MRI data. In addition, we employ multiple branches to learn low-level features from multi-modal inputs in an end-to-end fashion. We further investigate boundary information and propose a boundary-aware module to force our model to pay more attention to important pixels. Experimental results on the testing dataset and a cross-validation dataset split from the training dataset of the BraTS 2020 Challenge demonstrate that our proposed framework obtains competitive Dice scores compared with state-of-the-art approaches.
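Dynamic convolution, in its generic form, mixes K candidate kernels with input-dependent attention weights so the effective filter varies per sample. The sketch below follows that generic recipe in 3D; the paper's exact module may differ:

```python
# Generic input-adaptive ("dynamic") 3D convolution sketch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv3d(nn.Module):
    def __init__(self, in_ch, out_ch, k=3, num_kernels=4):
        super().__init__()
        # K candidate kernels, each shaped like an ordinary conv3d weight
        self.weight = nn.Parameter(
            torch.randn(num_kernels, out_ch, in_ch, k, k, k) * 0.02)
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(in_ch, num_kernels))
        self.pad = k // 2

    def forward(self, x):
        a = F.softmax(self.attn(x), dim=1)         # (B, K) per-sample weights
        out = []
        for i in range(x.size(0)):                 # simple loop; can be batched
            w = torch.einsum("k,koiabc->oiabc", a[i], self.weight)
            out.append(F.conv3d(x[i:i + 1], w, padding=self.pad))
        return torch.cat(out, dim=0)
```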

A Novel Deep Learning Framework for Standardizing the Label of OARs in CT

  • Yang, Qiming
  • Chao, Hongyang
  • Nguyen, Dan
  • Jiang, Steve
2019 Conference Paper, cited 0 times
When organs at risk (OARs) are contoured in computed tomography (CT) images for radiotherapy treatment planning, the labels are often inconsistent, which severely hampers the collection and curation of clinical data for research purposes. Currently, data cleaning is mainly done manually, which is time-consuming. Existing methods for automatically relabeling OARs remain impractical with real patient data, due to inconsistent delineation and similar small-volume OARs. This paper proposes an improved data augmentation technique tailored to the characteristics of clinical data. In addition, a novel 3D non-local convolutional neural network is proposed, which includes a decision-making network with a voting strategy. The resulting model can automatically identify OARs and solve the problems of existing methods, achieving accurate OAR relabeling. We used partial data from a public head-and-neck dataset (HN_PETCT) for training, and then tested the model on datasets from three different medical institutions. We obtained state-of-the-art results for identifying 28 OARs in the head-and-neck region, and our model is also capable of handling multi-center datasets, indicating strong generalization ability. Compared to the baseline, the final result of our model achieved a significant improvement in the average true positive rate (TPR) on the three test datasets (+8.27%, +2.39%, and +5.53%, respectively). More importantly, the F1 score of a small-volume OAR with only 9 training samples increased from 28.63% to 91.17%.

Development of a radiomics nomogram based on the 2D and 3D CT features to predict the survival of non-small cell lung cancer patients

  • Yang, Lifeng
  • Yang, Jingbo
  • Zhou, Xiaobo
  • Huang, Liyu
  • Zhao, Weiling
  • Wang, Tao
  • Zhuang, Jian
  • Tian, Jie
European Radiology 2018 Journal Article, cited 0 times
Website

CT images with expert manual contours of thoracic cancer for benchmarking auto-segmentation accuracy

  • Yang, J.
  • Veeraraghavan, H.
  • van Elmpt, W.
  • Dekker, A.
  • Gooding, M.
  • Sharp, G.
Med Phys 2020 Journal Article, cited 0 times
Website
PURPOSE: Automatic segmentation offers many benefits for radiotherapy treatment planning; however, the lack of publicly available benchmark datasets limits the clinical use of automatic segmentation. In this work, we present a well-curated computed tomography (CT) dataset of high-quality manually drawn contours from patients with thoracic cancer that can be used to evaluate the accuracy of thoracic normal tissue auto-segmentation systems. ACQUISITION AND VALIDATION METHODS: Computed tomography scans of 60 patients undergoing treatment simulation for thoracic radiotherapy were acquired from three institutions: MD Anderson Cancer Center, Memorial Sloan Kettering Cancer Center, and the MAASTRO clinic. Each institution provided CT scans from 20 patients, including mean intensity projection four-dimensional CT (4D CT), exhale phase (4D CT), or free-breathing CT scans depending on their clinical practice. All CT scans covered the entire thoracic region with a 50-cm field of view and slice spacing of 1, 2.5, or 3 mm. Manual contours of left/right lungs, esophagus, heart, and spinal cord were retrieved from the clinical treatment plans. These contours were checked for quality and edited if necessary to ensure adherence to RTOG 1106 contouring guidelines. DATA FORMAT AND USAGE NOTES: The CT images and RTSTRUCT files are available in DICOM format. The regions of interest were named according to the nomenclature recommended by American Association of Physicists in Medicine Task Group 263 as Lung_L, Lung_R, Esophagus, Heart, and SpinalCord. This dataset is available on The Cancer Imaging Archive (funded by the National Cancer Institute) under Lung CT Segmentation Challenge 2017 (http://doi.org/10.7937/K9/TCIA.2017.3r3fvz08). POTENTIAL APPLICATIONS: This dataset provides CT scans with well-delineated manually drawn contours from patients with thoracic cancer that can be used to evaluate auto-segmentation systems. Additional anatomies could be supplied in the future to enhance the existing library of contours.

Autosegmentation for thoracic radiation treatment planning: A grand challenge at AAPM 2017

  • Yang, J.
  • Veeraraghavan, H.
  • Armato, S. G., 3rd
  • Farahani, K.
  • Kirby, J. S.
  • Kalpathy-Kramer, J.
  • van Elmpt, W.
  • Dekker, A.
  • Han, X.
  • Feng, X.
  • Aljabar, P.
  • Oliveira, B.
  • van der Heyden, B.
  • Zamdborg, L.
  • Lam, D.
  • Gooding, M.
  • Sharp, G. C.
Med Phys 2018 Journal Article, cited 172 times
Website
PURPOSE: This report presents the methods and results of the Thoracic Auto-Segmentation Challenge organized at the 2017 Annual Meeting of American Association of Physicists in Medicine. The purpose of the challenge was to provide a benchmark dataset and platform for evaluating performance of autosegmentation methods of organs at risk (OARs) in thoracic CT images. METHODS: Sixty thoracic CT scans provided by three different institutions were separated into 36 training, 12 offline testing, and 12 online testing scans. Eleven participants completed the offline challenge, and seven completed the online challenge. The OARs were left and right lungs, heart, esophagus, and spinal cord. Clinical contours used for treatment planning were quality checked and edited to adhere to the RTOG 1106 contouring guidelines. Algorithms were evaluated using the Dice coefficient, Hausdorff distance, and mean surface distance. A consolidated score was computed by normalizing the metrics against interrater variability and averaging over all patients and structures. RESULTS: The interrater study revealed the highest variability in Dice for the esophagus and spinal cord, and in surface distances for the lungs and heart. Five out of seven algorithms that participated in the online challenge employed deep-learning methods. Although the top three participants using deep learning produced the best segmentation for all structures, there was no significant difference in the performance among them. The fourth-place participant used a multi-atlas-based approach. The highest Dice scores were produced for the lungs, with averages ranging from 0.95 to 0.98, while the lowest Dice scores were produced for the esophagus, with a range of 0.55-0.72. CONCLUSION: The results of the challenge showed that the lungs and heart can be segmented fairly accurately by various algorithms, while deep-learning methods performed better on the esophagus. Our dataset together with the manual contours for all training cases continues to be available publicly as an ongoing benchmarking resource.
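The three evaluation metrics named above can be computed from binary masks with NumPy and SciPy. A compact sketch follows; the voxel spacing and the 95th-percentile variant of the Hausdorff distance are assumptions about usage, not challenge specifics:

```python
# Dice, mean surface distance, and percentile Hausdorff distance for
# binary 3D masks, via distance transforms of the mask surfaces.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface(mask):
    return mask & ~binary_erosion(mask)             # one-voxel-thick boundary

def seg_metrics(pred, truth, spacing=(1.0, 1.0, 1.0), pct=95):
    pred, truth = pred.astype(bool), truth.astype(bool)
    dice = 2.0 * (pred & truth).sum() / (pred.sum() + truth.sum())
    dt_p = distance_transform_edt(~surface(pred), sampling=spacing)
    dt_t = distance_transform_edt(~surface(truth), sampling=spacing)
    d_pt = dt_t[surface(pred)]     # pred-surface -> truth-surface distances
    d_tp = dt_p[surface(truth)]    # truth-surface -> pred-surface distances
    msd = (d_pt.mean() + d_tp.mean()) / 2.0
    hd = max(np.percentile(d_pt, pct), np.percentile(d_tp, pct))
    return dice, msd, hd
```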

Abdominal CT pancreas segmentation using multi-scale convolution with aggregated transformations

  • Yang, Jin
  • Marcus, Daniel S.
  • Sotiras, Aristeidis
  • Iftekharuddin, Khan M.
  • Chen, Weijie
2023 Conference Paper, cited 0 times
Convolutional neural networks (CNNs) are a popular choice for medical image segmentation. However, they may be challenged by large inter-subject variation in organ shapes and sizes, because CNNs typically employ convolutions with fixed-size local receptive fields. To address this limitation, we proposed the multi-scale aggregated residual convolution (MARC) and the iterative multi-scale aggregated residual convolution (iMARC) to capture finer and richer features at various scales. Our goal is to improve the representation capability of single convolutions. This is achieved by employing convolutions with receptive fields of varying size, combining multiple convolutions into a deeper one, and dividing single convolutions into a set of channel-independent sub-convolutions. These implementations increase their depth, width, and cardinality. The proposed MARC and iMARC can be easily integrated into general CNN architectures and trained end-to-end. To evaluate the improvements of MARC and iMARC on CNNs' segmentation capabilities, we integrated them into a standard 2D U-Net architecture for pancreas segmentation on abdominal computed tomography (CT) images. The results showed that our proposed MARC and iMARC enhanced the representation capabilities of single convolutions, resulting in improved segmentation performance with lower computational complexity.
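A rough PyTorch sketch of the multi-scale aggregated idea: parallel grouped convolutions with different receptive fields, aggregated and added to a residual path. Kernel sizes, group count, and layout are illustrative; this is not the paper's exact MARC/iMARC:

```python
# Multi-scale grouped convolutions fused onto a residual path.
import torch
import torch.nn as nn

class MultiScaleResConv(nn.Module):
    def __init__(self, channels, kernel_sizes=(3, 5, 7), groups=4):
        super().__init__()
        # channels must be divisible by groups (grouped = higher cardinality)
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, k, padding=k // 2, groups=groups)
            for k in kernel_sizes                   # varying receptive fields
        ])
        self.fuse = nn.Conv2d(len(kernel_sizes) * channels, channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        multi = torch.cat([b(x) for b in self.branches], dim=1)
        return self.act(x + self.fuse(multi))       # residual aggregation
```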

Combining Global Information with Topological Prior for Brain Tumor Segmentation

  • Yang, Hua
  • Shen, Zhiqiang
  • Li, Zhaopei
  • Liu, Jinqing
  • Xiao, Jinchao
2022 Book Section, cited 0 times
Gliomas are the most common and aggressive malignant primary brain tumors. Automatic brain tumor segmentation from multi-modality magnetic resonance images using deep learning methods is critical for glioma diagnosis. Deep learning segmentation architectures, especially those based on fully convolutional neural networks, have shown great performance on medical image segmentation. However, these approaches cannot explicitly model global information and overlook the topological structure of lesion regions, which leaves room for improvement. In this paper, we propose a convolution-and-transformer network (COTRNet) to explicitly capture global information and a topology-aware loss to constrain the network to learn topological information. Moreover, we exploit transfer learning by using parameters pretrained on ImageNet, and deep supervision by adding multi-level predictions, to further improve the segmentation performance. COTRNet achieved Dice scores of 78.08%, 76.18%, and 83.92% for the enhancing tumor, the tumor core, and the whole tumor segmentation in the Brain Tumor Segmentation Challenge 2021. Experimental results demonstrated the effectiveness of the proposed method.

Research on the Content-Based Classification of Medical Image

  • Yang, Hui
  • Liu, Feng
  • Wang, Zhiqi
  • Tang, Han
  • Sun, Shuyang
  • Sun, Shilei
2017 Journal Article, cited 1 times
Website
Medical images have increased tremendously in number and category in recent years as the devices generating them have become more and more advanced. In this paper, four classifiers for automatically identifying medical images of different body parts are explored and implemented. Classic and recognized image descriptors such as the wavelet transform and SIFT are utilized and combined with SVM and a proposed modified KNN to verify the validity of traditional classification methods when applied to medical images. In the process, a novel wavelet feature representation is advanced in combination with a proposed tuned KNN; this wavelet feature is also applied with SVM. SIFT and its variant, dense SIFT, are both employed to extract image features, which are formatted by the spatial pyramid model into a concatenated histogram. All these methods are compared with one another for accuracy and efficiency. Moreover, a convolutional network (CNN) is constructed to classify medical images. We show that, with regard to the various types and huge numbers of medical images, traditional methods and deep learning approaches such as CNNs can both achieve highly accurate results. The methods illustrated in this paper can all be reasonably applied to medical imaging applications, with variation in speed and accuracy.

Efficient diagnosis of hematologic malignancies using bone marrow microscopic images: A method based on MultiPathGAN and MobileViTv2

  • Yang, G.
  • Qin, Z.
  • Mu, J.
  • Mao, H.
  • Mao, H.
  • Han, M.
Comput Methods Programs Biomed 2023 Journal Article, cited 0 times
Website
BACKGROUND AND OBJECTIVES: Hematologic malignancies, including the associated multiple subtypes, are critically threatening to human health. The timely detection of malignancies is crucial for their effective treatment. In this regard, the examination of bone marrow smears constitutes a crucial step. Nonetheless, the conventional approach to cell identification and enumeration is laborious and time-intensive. Therefore, the present study aimed to develop a method for the efficient diagnosis of these malignancies directly from bone marrow microscopic images. METHODS: A deep learning-based framework was developed to facilitate the diagnosis of common hematologic malignancies. First, a total of 2033 microscopic images of bone marrow analysis, including the images for 6 disease types and 1 healthy control, were collected from two Chinese medical websites. Next, the collected images were classified into the training, validation, and test datasets in the ratio of 7:1:2. Subsequently, a method of stain normalization to multi-domains (stain domain augmentation) based on the MultiPathGAN model was developed to equalize the stain styles and expand the image datasets. Afterward, a lightweight hybrid model named MobileViTv2, which integrates the strengths of both CNNs and ViTs, was developed for disease classification. The resulting model was trained and utilized to diagnose patients based on multiple microscopic images of their bone marrow smears, obtained from a cohort of 61 individuals. RESULTS: MobileViTv2 exhibited an average accuracy of 94.28% when applied to the test set, with multiple myeloma, acute lymphocytic leukemia, and lymphoma revealed as the three diseases diagnosed with the highest accuracy values of 98%, 96%, and 96%, respectively. Regarding patient-level prediction, the average accuracy of MobileViTv2 was 96.72%. This model outperformed both CNN and ViT models in terms of accuracy, despite utilizing only 9.8 million parameters. When applied to two public datasets, MobileViTv2 exhibited accuracy values of 99.75% and 99.72%, respectively, and outperformed previous methods. CONCLUSIONS: The proposed framework could be applied directly to bone marrow microscopic images with different stain styles to efficiently establish the diagnosis of common hematologic malignancies.

MRI Brain Tumor Segmentation and Patient Survival Prediction Using Random Forests and Fully Convolutional Networks

  • Yang, Guang
  • Nigel Allinson
  • Xujiong Ye
2018 Book Section, cited 1 times
Website

Clinical application of mask region-based convolutional neural network for the automatic detection and segmentation of abnormal liver density based on hepatocellular carcinoma computed tomography datasets

  • Yang, C. J.
  • Wang, C. K.
  • Fang, Y. D.
  • Wang, J. Y.
  • Su, F. C.
  • Tsai, H. M.
  • Lin, Y. J.
  • Tsai, H. W.
  • Yeh, L. R.
PLoS One 2021 Journal Article, cited 0 times
Website
The aim of the study was to use a previously proposed mask region-based convolutional neural network (Mask R-CNN) for automatic abnormal liver density detection and segmentation based on hepatocellular carcinoma (HCC) computed tomography (CT) datasets from a radiological perspective. Training and testing datasets were acquired retrospectively from two hospitals in Taiwan. The training dataset contained 10,130 images of liver tumor densities with 11,258 regions of interest (ROIs). The positive testing dataset contained 1,833 images of liver tumor densities with 1,874 ROIs, and the negative testing dataset comprised 20,283 images without abnormal densities in liver parenchyma. The Mask R-CNN was used to generate a medical model, and areas under the curve, true positive rates, false positive rates, and Dice coefficients were evaluated. For abnormal liver CT density detection, the mean per-image area under the curve, true positive rate, and false positive rate were 0.9490, 91.99%, and 13.68%, respectively. For segmentation ability, the highest mean Dice coefficient obtained was 0.8041. This study trained a Mask R-CNN on various HCC images to construct a medical model that serves as an auxiliary tool for alerting radiologists to abnormal CT density in liver scans; this model can simultaneously detect liver lesions and perform automatic instance segmentation.

Source free domain adaptation for medical image segmentation with fourier style mining

  • Yang, C.
  • Guo, X.
  • Chen, Z.
  • Yuan, Y.
Med Image Anal 2022 Journal Article, cited 0 times
Website
Unsupervised domain adaptation (UDA) aims to exploit the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled target domain. Existing UDA techniques typically assume that samples from source and target domains are freely accessible during the training. However, it may be impractical to access source images due to privacy concerns, especially in medical imaging scenarios with the patient information. To tackle this issue, we devise a novel source free domain adaptation framework with fourier style mining, where only a well-trained source segmentation model is available for the adaptation to the target domain. Our framework is composed of two stages: a generation stage and an adaptation stage. In the generation stage, we design a Fourier Style Mining (FSM) generator to inverse source-like images through statistic information of the pretrained source model and mutual Fourier Transform. These generated source-like images can provide source data distribution and benefit the domain alignment. In the adaptation stage, we design a Contrastive Domain Distillation (CDD) module to achieve feature-level adaptation, including a domain distillation loss to transfer relation knowledge and a domain contrastive loss to narrow down the domain gap by a self-supervised paradigm. Besides, a Compact-Aware Domain Consistency (CADC) module is proposed to enhance consistency learning by filtering out noisy pseudo labels with shape compactness metric, thus achieving output-level adaptation. Extensive experiments on cross-device and cross-centre datasets are conducted for polyp and prostate segmentation, and our method delivers impressive performance compared with state-of-the-art domain adaptation methods. The source code is available at https://github.com/CityU-AIM-Group/SFDA-FSM.
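The Fourier ingredient is easiest to see in the generic amplitude-swap form: keep an image's phase and replace its low-frequency amplitude with style statistics. The sketch below shows that generic recipe only; the paper's FSM generator mines the style statistics from the pretrained source model rather than from source images:

```python
# Generic Fourier style transfer: swap the low-frequency amplitude band.
import numpy as np

def fourier_stylize(target, style_amplitude, beta=0.1):
    """target: 2D image; style_amplitude: |FFT| of the style to inject."""
    f = np.fft.fftshift(np.fft.fft2(target))
    amp, pha = np.abs(f), np.angle(f)
    h, w = target.shape
    bh, bw = int(h * beta), int(w * beta)       # low-frequency window size
    cy, cx = h // 2, w // 2
    amp[cy - bh:cy + bh, cx - bw:cx + bw] = \
        style_amplitude[cy - bh:cy + bh, cx - bw:cx + bw]
    styled = np.fft.ifft2(np.fft.ifftshift(amp * np.exp(1j * pha)))
    return np.real(styled)                      # stylized, content-preserving image
```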

Variation-Aware Federated Learning With Multi-Source Decentralized Medical Image Data

  • Yan, Z.
  • Wicaksana, J.
  • Wang, Z.
  • Yang, X.
  • Cheng, K. T.
IEEE J Biomed Health Inform 2021 Journal Article, cited 69 times
Website
Privacy concerns make it infeasible to construct a large medical image dataset by fusing small ones from different sources/institutions. Therefore, federated learning (FL) becomes a promising technique to learn from multi-source decentralized data with privacy preservation. However, the cross-client variation problem in medical image data would be the bottleneck in practice. In this paper, we propose a variation-aware federated learning (VAFL) framework, where the variations among clients are minimized by transforming the images of all clients onto a common image space. We first select one client with the lowest data complexity to define the target image space and synthesize a collection of images through a privacy-preserving generative adversarial network, called PPWGAN-GP. Then, a subset of those synthesized images, which effectively capture the characteristics of the raw images and are sufficiently distinct from any raw image, is automatically selected for sharing with other clients. For each client, a modified CycleGAN is applied to translate its raw images to the target image space defined by the shared synthesized images. In this way, the cross-client variation problem is addressed with privacy preservation. We apply the framework for automated classification of clinically significant prostate cancer and evaluate it using multi-source decentralized apparent diffusion coefficient (ADC) image data. Experimental results demonstrate that the proposed VAFL framework stably outperforms the current horizontal FL framework. As VAFL is independent of deep learning architectures for classification, we believe that the proposed framework is widely applicable to other medical image classification tasks.

Markerless Lung Tumor Localization From Intraoperative Stereo Color Fluoroscopic Images for Radiotherapy

  • Yan, Yongxuan
  • Fujii, Fumitake
  • Shiinoki, Takehiro
  • Liu, Shengping
IEEE Access 2024 Journal Article, cited 0 times
Website
Accurately determining tumor regions from stereo fluoroscopic images during radiotherapy is a challenging task. As a result, high-density fiducial markers are implanted around tumors in clinical practice as internal surrogates of the tumor, which leads to associated surgical risks. This study was conducted to achieve lung tumor localization without the use of fiducial markers. We propose training a cascade U-net system to perform color to grayscale conversion, enhancement, bone suppression, and tumor detection to determine the precise tumor region. We generated Digitally Reconstructed Radiographs (DRRs) and tumor labels from 4D planning CT images as training data. An improved maximum projection algorithm and a novel color-to-gray conversion algorithm were proposed to improve the quality of the generated training data. Training a bone suppression model using bone-enhanced and bone-suppressed DRRs enables the bone suppression model to achieve better bone suppression performance. The mean peak signal-to-noise ratios in the test sets of the trained translation and bone suppression models are 39.284 ± 0.034 dB and 37.713 ± 0.724 dB, respectively. The results indicate that our proposed markerless tumor localization method is applicable in seven out of ten cases; in applicable cases, the centroid position error of the tumor detection model is less than 1.13 mm; and the calculated tumor center motion trajectories using the proposed network highly coincide with the motion trajectories of implanted fiducial markers in over 60% of captured groups, providing a promising direction for markerless tumor localization tracking methods.

Accelerating Brain DTI and GYN MRI Studies Using Neural Network

  • Yan, Yuhao
Medical Physics 2021 Thesis, cited 0 times
Website
There always exists a demand to accelerate the time-consuming MRI acquisition process. Many methods have been proposed to achieve this goal, including deep learning, which appears to be a robust tool compared to conventional methods. While much work has been done to evaluate the performance of neural networks on standard anatomical MR images, little attention has been paid to accelerating other, less conventional MR image acquisitions. This work aims to evaluate the feasibility of neural networks for accelerating brain DTI and gynecological brachytherapy MRI. Three neural networks, U-net, Cascade-net, and PD-net, were evaluated. Brain DTI data were acquired from the public RIDER NEURO MRI database, while cervical gynecological MRI data were acquired from Duke University Hospital clinical data. A 25% Cartesian undersampling strategy was applied to all the training and test data. Diffusion-weighted images and quantitative functional maps in brain DTI, and T1-spgr and T2 images in the GYN studies, were reconstructed. The performance of the neural networks was evaluated by quantitatively calculating the similarity between the reconstructed images and the reference images, using the metric Total Relative Error (TRE). Results showed that, with the architectures and parameters set in this work, all three neural networks could accelerate brain DTI and GYN T2 MR imaging. Generally, PD-net slightly outperformed Cascade-net, and both outperformed U-net with respect to image reconstruction performance. While this was also true for the reconstruction of quantitative functional diffusion-weighted maps and GYN T1-spgr images, the overall performance of the three neural networks on these two tasks needed further improvement. In conclusion, PD-net is very promising for accelerating T2-weighted MR imaging. Future work can focus on adjusting the parameters and architectures of the neural networks to improve performance on accelerating GYN T1-spgr MR imaging, and on adopting a more robust undersampling strategy, such as radial undersampling, to further improve the overall acceleration performance.
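
As a rough illustration of the experimental setup, a numpy sketch of 25% Cartesian phase-encode undersampling with zero-filled reconstruction, plus one common definition of the total relative error. The sampling heuristic (fully kept centre, random outer lines) and the TRE normalisation are assumptions, not the thesis protocol.

    import numpy as np

    rng = np.random.default_rng(0)

    def undersample_cartesian(image: np.ndarray, keep_fraction: float = 0.25) -> np.ndarray:
        """Zero-filled reconstruction from 25% of k-space phase-encode lines,
        the kind of aliased input the evaluated networks learn to de-alias."""
        kspace = np.fft.fftshift(np.fft.fft2(image))
        ny = image.shape[0]
        n_keep = int(keep_fraction * ny)
        # keep a small central band fully, sample the rest at random (a common heuristic)
        centre = np.arange(ny // 2 - ny // 25, ny // 2 + ny // 25)
        others = rng.choice(np.setdiff1d(np.arange(ny), centre),
                            n_keep - centre.size, replace=False)
        mask = np.zeros(ny, dtype=bool)
        mask[centre] = True
        mask[others] = True
        kspace[~mask, :] = 0
        return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))

    def total_relative_error(recon: np.ndarray, reference: np.ndarray) -> float:
        # one common definition of TRE; the thesis may normalise differently
        return float(np.linalg.norm(recon - reference) / np.linalg.norm(reference))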

3D Deep Residual Encoder-Decoder CNNs with Squeeze-and-Excitation for Brain Tumor Segmentation

  • Yan, Kai
  • Sun, Qiuchang
  • Li, Ling
  • Li, Zhicheng
2020 Book Section, cited 0 times
Segmenting brain tumors from multimodal MR scans is thought to be highly beneficial for brain abnormality diagnosis, prognosis monitoring, and treatment evaluation. Due to their highly heterogeneous appearance and shape, segmentation of brain tumors in multimodal MRI scans is a challenging task in medical image analysis. In recent years, many segmentation algorithms based on neural network architectures have been proposed to address this task. Building on previous state-of-the-art algorithms, we explored multimodal brain tumor segmentation in 2D, 2.5D, and 3D space, and experimented extensively with attention blocks to improve the segmentation result. In this paper, we describe a 3D deep residual encoder-decoder CNN with a Squeeze-and-Excitation block for brain tumor segmentation. In order to learn more effective image features, we utilize an attention module after each Res-block to weight each channel, emphasizing useful features while suppressing invalid ones. To deal with class imbalance, we formulate a weighted Dice loss function. We find that a 3D segmentation network with attention blocks that enhance context features can significantly improve performance. In addition, the results of data preprocessing have a great impact on segmentation performance. Our method obtained Dice scores of 0.70, 0.85, and 0.80 for segmenting enhancing tumor, whole tumor, and tumor core, respectively, on the testing dataset.
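
The Squeeze-and-Excitation block attached after each Res-block is a standard construction; a minimal 3D PyTorch version (the reduction ratio is illustrative):

    import torch
    import torch.nn as nn

    class SEBlock3D(nn.Module):
        """Channel-wise Squeeze-and-Excitation for 3D feature maps, matching
        the channel-recalibration role described in the chapter."""
        def __init__(self, channels: int, reduction: int = 8):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool3d(1)             # squeeze: global context
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels), nn.Sigmoid(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c = x.shape[:2]
            w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
            return x * w                                    # excite: reweight channels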

Predicting 1p/19q co-deletion status from magnetic resonance imaging using deep learning in adult-type diffuse lower-grade gliomas: a discovery and validation study

  • Yan, J.
  • Zhang, S.
  • Sun, Q.
  • Wang, W.
  • Duan, W.
  • Wang, L.
  • Ding, T.
  • Pei, D.
  • Sun, C.
  • Wang, W.
  • Liu, Z.
  • Hong, X.
  • Wang, X.
  • Guo, Y.
  • Li, W.
  • Cheng, J.
  • Liu, X.
  • Li, Z. C.
  • Zhang, Z.
Lab Invest 2022 Journal Article, cited 0 times
Website
Determination of 1p/19q co-deletion status is important for the classification, prognostication, and personalized therapy of diffuse lower-grade gliomas (LGG). We developed and validated a deep learning imaging signature (DLIS) from preoperative magnetic resonance imaging (MRI) for predicting the 1p/19q status in patients with LGG. The DLIS was constructed on a training dataset (n = 330) and validated on both an internal validation dataset (n = 123) and a public TCIA dataset (n = 102). Receiver operating characteristic (ROC) analysis and precision recall curves (PRC) were used to measure the classification performance. The area under the ROC curve (AUC) of the DLIS was 0.999 for the training dataset, 0.986 for the validation dataset, and 0.983 for the testing dataset. The F1-score of the prediction model was 0.992 for the training dataset, 0.940 for the validation dataset, and 0.925 for the testing dataset. Our data suggest that the DLIS could be used to predict the 1p/19q status from preoperative imaging in patients with LGG. Imaging-based deep learning has the potential to be a noninvasive tool predictive of molecular markers in adult diffuse gliomas.

Classification of LGG/GBM Brain Tumor in MRI Using Deep-Learning Schemes: A Study

  • Yamuna, S.
  • Vijayakumar, K.
  • Valli, R.
2023 Conference Paper, cited 0 times
Website
Brain abnormalities require immediate medical attention, including diagnosis and treatment. One of the most severe brain disorders is the brain tumor, and magnetic resonance imaging (MRI) is frequently used for clinical-level screening of these illnesses. In order to categorize brain MRI images into low-grade glioma (LGG) and glioblastoma multiforme (GBM), a deep learning strategy is implemented in this work. The steps in this scheme are as follows: (i) gathering data and converting 3D to 2D; (ii) deep feature mining using a selected scheme; (iii) binary classification using SoftMax; and (iv) comparative analysis of selected deep learning techniques to determine the best model for additional refinement. The LGG/GBM images were gathered from The Cancer Imaging Archive (TCIA) database. The results of this study demonstrate that max-pooling offers higher accuracy than average-pooling-based models; the performance of the created scheme is validated using both average- and max-pooling. Among the chosen models, VGG16 performs best for the LGG/GBM detection task.

Deep learning method for brain tumor identification with multimodal 3D-MRI

  • Yakaiah, Potharaju
  • Srikar, D.
  • Kaushik, G.
  • Geetha, Y.
2023 Conference Paper, cited 0 times
Website
Among primary brain tumors, gliomas are the most frequent of all types. Accurate and detailed delineation of tumor borders is important for detection, treatment planning, and the discovery of risk factors. This paper presents a brain tumor segmentation system using a deep learning approach. A U-net, a type of deep learning network, is trained to segment the brain tumors. Essentially, the architecture is a nested, deeply supervised encoder-decoder network with skip connections. The BraTS dataset is used as training data for the model. On the validation dataset, the reported scores are 0.757, 0.17, and 0.89.

Automatic 3D Mesh-Based Centerline Extraction from a Tubular Geometry Form

  • Yahya-Zoubir, Bahia
  • Hamami, Latifa
  • Saadaoui, Llies
  • Ouared, Rafik
Information Technology And Control 2016 Journal Article, cited 0 times
Website

Morphological diagnosis of hematologic malignancy using feature fusion-based deep convolutional neural network

  • Yadav, D. P.
  • Kumar, D.
  • Jalal, A. S.
  • Kumar, A.
  • Singh, K. U.
  • Shah, M. A.
2023 Journal Article, cited 0 times
Website
Leukemia is a cancer of white blood cells characterized by immature lymphocytes. Many people die of blood cancer every year; hence, early detection of these blast cells is necessary to avoid such deaths. A novel deep convolutional neural network (CNN), 3SNet, which has depth-wise convolution blocks to reduce the computation cost, has been developed to aid the diagnosis of leukemia cells. The proposed method feeds three inputs to the deep CNN model: grayscale images and their corresponding histogram of oriented gradients (HOG) and local binary pattern (LBP) images. The HOG image captures local shape, and the LBP image describes the leukemia cell's texture pattern. The suggested model was trained and tested with images from the AML-Cytomorphology_LMU dataset. The mean average precision (MAP) for cell types with fewer than 100 images in the dataset was 84%, whereas for cell types with more than 100 images it was 93.83%. In addition, the area under the ROC curve for these cells is more than 98%. This confirms that the proposed model could serve as an adjunct tool providing a second opinion to a doctor.
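
The two handcrafted input channels can be reproduced with scikit-image; a small sketch (parameter values are illustrative, not the paper's):

    import numpy as np
    from skimage.feature import hog, local_binary_pattern

    def three_stream_inputs(gray: np.ndarray):
        """Build the three inputs described in the abstract: the grayscale cell
        image, its HOG visualisation (local shape), and its LBP map (texture)."""
        _, hog_image = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                           cells_per_block=(2, 2), visualize=True)
        lbp_image = local_binary_pattern(gray, P=8, R=1.0, method="uniform")
        return gray, hog_image, lbp_image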

A Multi-path Decoder Network for Brain Tumor Segmentation

  • Xue, Yunzhe
  • Xie, Meiyan
  • Farhat, Fadi G.
  • Boukrina, Olga
  • Barrett, A. M.
  • Binder, Jeffrey R.
  • Roshan, Usman W.
  • Graves, William W.
2020 Book Section, cited 0 times
The identification of brain tumor type, shape, and size from MRI images plays an important role in glioma diagnosis and treatment. Manually identifying the tumor is time-consuming and prone to error, and while information from different image modalities may help in principle, using these modalities for manual tumor segmentation may be even more time-consuming. Convolutional U-Net architectures with encoders and decoders are the state of the art in automated methods for image segmentation. Often only a single encoder and decoder is used, where different modalities and regions of the tumor share the same model parameters; this may lead to incorrect segmentations. We propose a convolutional U-Net that has separate, independent encoders for each image modality. The outputs from each encoder are concatenated and given to separate fusion and decoder blocks for each region of the tumor. The features from each decoder block are then calibrated in a final feature fusion block, after which the model gives its final predictions. Our network is an end-to-end model that simplifies training and reproducibility. On the BraTS 2019 validation dataset our model achieves average Dice values of 0.75, 0.90, and 0.83 for the enhancing tumor, whole tumor, and tumor core subregions respectively.

Deep hybrid neural-like P systems for multiorgan segmentation in head and neck CT/MR images

  • Xue, Jie
  • Wang, Yuan
  • Kong, Deting
  • Wu, Feiyang
  • Yin, Anjie
  • Qu, Jianhua
  • Liu, Xiyu
Expert Systems with Applications 2021 Journal Article, cited 0 times
Website
Automatic segmentation of organs-at-risk (OARs) of the head and neck, such as the brainstem, the left and right parotid glands, the mandible, the optic chiasm, and the left and right optic nerves, is crucial when formulating radiotherapy plans. However, there are difficulties due to (1) the small sizes of these organs (especially the optic chiasm and optic nerves) and (2) the different positions and phenotypes of the OARs. In this paper, we propose a novel, automatic multiorgan segmentation algorithm based on a new hybrid neural-like P system, to alleviate the above challenges. The new P system possesses the joint advantages of cell-like and neural-like P systems and includes new structures and rules, allowing it to solve more real-world problems in parallel. In the new P system, effective ensemble convolutional neural networks (CNNs) are implemented with different initializations simultaneously to perform pixel-wise segmentation of OARs, which can obtain more effective features and leverage the strength of ensemble learning. Evaluations on three public datasets show the effectiveness and robustness of the proposed algorithm for accurate OAR segmentation in various image modalities.

Lung cancer diagnosis in CT images based on Alexnet optimized by modified Bowerbird optimization algorithm

  • Xu, Yeguo
  • Wang, Yuhang
  • Razmjooy, Navid
Biomedical Signal Processing and Control 2022 Journal Article, cited 0 times
Objective: Cancer is the uncontrolled growth of abnormal cells that do not function as normal cells. Lung cancer is the leading cause of cancer death in the world, so early detection of lung disease will have a major impact on the likelihood of a definitive cure. Computed tomography (CT) has been identified as one of the best imaging techniques. Various tools available for medical image processing include data collection in the form of images and algorithms for image analysis and system testing. Methods: This study proposes a new diagnosis system for lung cancer based on image processing and artificial intelligence applied to CT-scan images. After noise reduction based on Wiener filtering, AlexNet is utilized for diagnosing healthy and cancerous cases. The system also uses an optimal subset of different features, including Gabor wavelet transform, GLCM, and GLRM features, to replace the network's feature-extraction part. The study also uses a new modified version of the Satin Bowerbird Optimization algorithm for optimal design of the AlexNet architecture and optimal selection of the features. Results: Simulation results of the proposed method on the RIDER Lung CT collection database, and comparison with some other state-of-the-art methods, show that the proposed method provides a satisfying tool for lung cancer diagnosis. With 95.96% accuracy, the proposed method achieves the highest value among the compared methods, along with a higher harmonic mean (F1-score) than the others. The highest test recall (98.06%) indicates a higher rate of relevant instances retrieved for the images. Conclusion: The proposed method provides an efficient tool for diagnosis of lung cancer from CT images. Significance: As a new deep-learning-based methodology, the proposed method provides higher accuracy and addresses the difficult problem of optimal hyperparameter selection in deep-learning techniques for the aimed case.
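
Of the handcrafted features mentioned, the GLCM descriptors are straightforward to reproduce with scikit-image (>= 0.19, where the functions are spelled graycomatrix/graycoprops); the distances, angles, and properties below are illustrative choices, and GLRM and Gabor features would be computed analogously:

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(gray_u8: np.ndarray) -> np.ndarray:
        """GLCM texture descriptors of the kind fused with the AlexNet features;
        expects an 8-bit grayscale image."""
        glcm = graycomatrix(gray_u8, distances=[1, 2],
                            angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                            levels=256, symmetric=True, normed=True)
        props = ["contrast", "homogeneity", "energy", "correlation"]
        # one value per (distance, angle, property) combination
        return np.concatenate([graycoprops(glcm, p).ravel() for p in props])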

Brain Tumor Segmentation Using Attention-Based Network in 3D MRI Images

  • Xu, Xiaowei
  • Zhao, Wangyuan
  • Zhao, Jun
2020 Book Section, cited 0 times
Gliomas are the most common primary brain malignancies. Identifying the sub-regions of gliomas before surgery is meaningful, which may extend the survival of patients. However, due to the heterogeneous appearance and shape of gliomas, it is a challenge to accurately segment the enhancing tumor, the necrotic, the non-enhancing tumor core and the peritumoral edema. In this study, an attention-based network was used to segment the glioma sub-regions in multi-modality MRI scans. Attention U-Net was employed as the basic architecture of the proposed network. The attention gates help the network focus on the task-relevant regions in the image. Besides the spatial-wise attention gates, the channel-wise attention gates proposed in SE Net were also embedded into the segmentation network. This attention mechanism in the feature dimension prompts the network to focus on the useful feature maps. Furthermore, in order to reduce false positives, a training strategy combined with a sampling strategy was proposed in our study. The segmentation performance of the proposed network was evaluated on the BraTS 2019 validation dataset and testing dataset. In the validation dataset, the dice similarity coefficients of enhancing tumor, tumor core and whole tumor were 0.759, 0.807 and 0.893 respectively. And in the testing dataset, the dice scores of enhancing tumor, tumor core and whole tumor were 0.794, 0.814 and 0.866 respectively.

CARes-UNet: Content-Aware residual UNet for lesion segmentation of COVID-19 from chest CT images

  • Xu, Xinhua
  • Wen, Yuhang
  • Zhao, Lu
  • Zhang, Yi
  • Zhao, Youjun
  • Tang, Zixuan
  • Yang, Ziduo
  • Chen, Calvin Yu-Chian
Medical Physics 2021 Journal Article, cited 0 times
Website

Computational Models for Automated Histopathological Assessment of Colorectal Liver Metastasis Progression

  • XU, Xiaoyang
2019 Thesis, cited 0 times
Website
Histopathology imaging is a type of microscopy imaging commonly used for the micro-level clinical examination of a patient's pathology. Due to the extremely large size of histopathology images, especially whole slide images (WSIs), it is difficult for pathologists to make a quantitative assessment by inspecting the details of a WSI. Hence, a computer-aided system is necessary to provide an objective and consistent assessment of the WSI for personalised treatment decisions. In this thesis, a deep learning framework for the automatic analysis of whole slide histopathology images is presented for the first time, which aims to address the challenging task of assessing and grading colorectal liver metastasis (CRLM). Quantitative evaluations of a patient's condition with CRLM are conducted through quantifying different tissue components in resected tumorous specimens. This study mimics the visual examination process of human experts by focusing on three levels of information, the tissue level, cell level, and pixel level, to achieve step-by-step segmentation of histopathology images. At the tissue level, patches with category information are utilised to analyse the WSIs. Both classification-based approaches and segmentation-based approaches are investigated to locate the metastasis region and quantify different components of the WSI. For the classification-based method, different factors that might affect the classification accuracy are explored using state-of-the-art deep convolutional neural networks (DCNNs). Furthermore, a novel network is proposed to merge the information from different magnification levels to include contextual information to support the final decision. With the support of the segmentation-based method, edge information from the image is integrated with the proposed fully convolutional neural network to further enhance the segmentation results. At the cell level, nuclei-related information is examined to tackle the challenge of inadequate annotations. The problem is approached from two aspects: a weakly supervised nuclei detection and classification method is presented to model the nuclei in the CRLM by integrating a traditional image processing method and a variational auto-encoder (VAE), and a novel nuclei instance segmentation framework is proposed to boost the accuracy of nuclei detection and segmentation using the idea of transfer learning. Afterwards, a fusion framework is proposed to enhance the tissue-level segmentation results by leveraging the statistical and spatial properties of the cells. At the pixel level, the segmentation problem is tackled by introducing information from immunohistochemistry (IHC) stained images. Firstly, two data augmentation approaches, synthesis-based and transfer-based, are proposed to address the problem of insufficient pixel-level annotations. Afterwards, with the paired images and masks having been obtained, an end-to-end model is trained to achieve pixel-level segmentation. Secondly, another novel weakly supervised approach based on the generative adversarial network (GAN) is proposed to explore the feasibility of transforming unpaired haematoxylin and eosin (HE) images to IHC-stained images. Extensive experiments reveal that the virtually stained images can also be used for pixel-level segmentation.

Dual-stream EfficientNet with adversarial sample augmentation for COVID-19 computer aided diagnosis

  • Xu, Weijie
  • Nie, Lina
  • Chen, Beijing
  • Ding, Weiping
2023 Journal Article, cited 0 times
Though a series of computer-aided measures have been taken for the rapid and definite diagnosis of 2019 coronavirus disease (COVID-19), they generally fail to achieve high enough accuracy, including the recently popular deep learning-based methods. The main reasons are that (a) they generally focus on improving the model structures while ignoring important information contained in the medical image itself, and (b) the existing small-scale datasets have difficulty in meeting the training requirements of deep learning. In this paper, a dual-stream network based on EfficientNet is proposed for COVID-19 diagnosis from CT scans. The dual-stream network takes into account the important information in both the spatial and frequency domains of CT scans. Besides, Adversarial Propagation (AdvProp) technology is used to address the insufficient training data usually faced by deep learning-based computer-aided diagnosis, as well as the overfitting issue. A Feature Pyramid Network (FPN) is utilized to fuse the dual-stream features. Experimental results on the public dataset COVIDx CT-2A demonstrate that the proposed method outperforms the existing 12 deep learning-based methods for COVID-19 diagnosis, achieving an accuracy of 0.9870 for multi-class classification and 0.9958 for binary classification. The source code is available at https://github.com/imagecbj/covid-efficientnet.

Radiomics-based survival risk stratification of glioblastoma is associated with different genome alteration

  • Xu, P. F.
  • Li, C.
  • Chen, Y. S.
  • Li, D. P.
  • Xi, S. Y.
  • Chen, F. R.
  • Li, X.
  • Chen, Z. P.
Comput Biol Med 2023 Journal Article, cited 0 times
Website
BACKGROUND: Glioblastoma (GBM) is a remarkably heterogeneous tumor with few non-invasive, repeatable, and cost-effective prognostic biomarkers reported. In this study, we aim to explore the association between radiomic features and prognosis and genomic alterations in GBM. METHODS: A total of 180 GBM patients (training cohort: n = 119; validation cohort 1: n = 37; validation cohort 2: n = 24) were enrolled and underwent preoperative MRI scans. From the multiparametric (T1, T1-Gd, T2, and T2-FLAIR) MR images, a radscore was developed to predict overall survival (OS) in a multistep postprocessing workflow and validated in two external validation cohorts. The prognostic accuracy of the radscore was assessed with the concordance index (C-index) and Brier scores. Furthermore, we used hierarchical clustering and enrichment analysis to explore the association between image features and genomic alterations. RESULTS: The MRI-based radscore was significantly correlated with OS in the training cohort (C-index: 0.70), validation cohort 1 (C-index: 0.66), and validation cohort 2 (C-index: 0.74). Multivariate analysis revealed that the radscore was an independent prognostic factor. Cluster analysis and enrichment analysis revealed two distinct phenotypic clusters involved in distinct biological processes and pathways, including the VEGFA-VEGFR2 signaling pathway (q-value = 0.033), the JAK-STAT signaling pathway (q-value = 0.049), and regulation of the MAPK cascade (q-value = 0.0015/0.025). CONCLUSIONS: Radiomic features and radiomics-derived radscores provided important phenotypic and prognostic information with great potential for risk stratification in GBM.
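
The concordance index used throughout can be computed with lifelines; a toy sketch in which the radscore values are made up (higher score = higher predicted risk, hence the sign flip when passing it as a survival-time surrogate):

    import numpy as np
    from lifelines.utils import concordance_index

    os_months = np.array([10.2, 34.1, 7.5, 22.0])   # observed follow-up times
    event = np.array([1, 0, 1, 1])                  # 1 = death observed, 0 = censored
    radscore = np.array([0.9, 0.1, 0.8, 0.5])       # hypothetical model-derived risk

    # concordance_index expects larger predictions to mean longer survival,
    # so a risk score is negated before being passed in
    c_index = concordance_index(os_months, -radscore, event_observed=event)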

Development and acceptability validation of a deep learning-based tool for whole-prostate segmentation on multiparametric MRI: a multicenter study

  • Xu, L.
  • Zhang, G.
  • Zhang, D.
  • Zhang, J.
  • Zhang, X.
  • Bai, X.
  • Chen, L.
  • Jin, R.
  • Mao, L.
  • Li, X.
  • Sun, H.
  • Jin, Z.
Quant Imaging Med Surg 2023 Journal Article, cited 0 times
Website
BACKGROUND: Accurate whole prostate segmentation on magnetic resonance imaging (MRI) is important in the management of prostatic diseases. In this multicenter study, we aimed to develop and evaluate a clinically applicable deep learning-based tool for automatic whole prostate segmentation on T2-weighted imaging (T2WI) and diffusion-weighted imaging (DWI). METHODS: In this retrospective study, 3-dimensional (3D) U-Net-based models in the segmentation tool were trained with 223 patients who underwent prostate MRI and subsequent biopsy from 1 hospital and validated in 1 internal testing cohort (n=95) and 3 external testing cohorts: PROSTATEx Challenge for T2WI and DWI (n=141), Tongji Hospital (n=30), and Beijing Hospital for T2WI (n=29). Patients from the latter 2 centers were diagnosed with advanced prostate cancer. The DWI model was further fine-tuned to compensate for the scanner variety in external testing. A quantitative evaluation, including Dice similarity coefficients (DSCs), 95% Hausdorff distance (95HD), and average boundary distance (ABD), and a qualitative analysis were used to evaluate the clinical usefulness. RESULTS: The segmentation tool showed good performance in the testing cohorts on T2WI (DSC: 0.922 for internal testing and 0.897-0.947 for external testing) and DWI (DSC: 0.914 for internal testing and 0.815 for external testing with fine-tuning). The fine-tuning process significantly improved the DWI model's performance in the external testing dataset (DSC: 0.275 vs. 0.815; P<0.01). Across all testing cohorts, the 95HD was <8 mm, and the ABD was <3 mm. The DSCs in the prostate midgland (T2WI: 0.949-0.976; DWI: 0.843-0.942) were significantly higher than those in the apex (T2WI: 0.833-0.926; DWI: 0.755-0.821) and base (T2WI: 0.851-0.922; DWI: 0.810-0.929) (all P values <0.01). The qualitative analysis showed that 98.6% of T2WI and 72.3% of DWI autosegmentation results in the external testing cohort were clinically acceptable. CONCLUSIONS: The 3D U-Net-based segmentation tool can automatically segment the prostate on T2WI with good and robust performance, especially in the prostate midgland. Segmentation on DWI was feasible, but fine-tuning might be needed for different scanners.
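
For reference, the headline DSC metric on binary masks is a one-liner; a minimal numpy sketch:

    import numpy as np

    def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
        """Dice similarity coefficient between two binary masks, the primary
        overlap metric (DSC) reported per region in the study."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        inter = np.logical_and(pred, truth).sum()
        return float(2.0 * inter / (pred.sum() + truth.sum() + eps))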

A Deep Supervised U-Attention Net for Pixel-Wise Brain Tumor Segmentation

  • Xu, Jia Hua
  • Teng, Wai Po Kevin
  • Wang, Xiong Jun
  • Nürnberger, Andreas
2021 Book Section, cited 0 times
Glioblastoma (GBM) is one of the leading causes of cancer death. The imaging diagnostics are critical for all phases in the treatment of brain tumor. However, manually-checked output by a radiologist has several limitations such as tedious annotation, time consuming and subjective biases, which influence the outcome of a brain tumor affected region. Therefore, the development of an automatic segmentation framework has attracted lots of attention from both clinical and academic researchers. Recently, most state-of-the-art algorithms are derived from deep learning methodologies such as the U-net, attention network. In this paper, we propose a deep supervised U-Attention Net framework for pixel-wise brain tumor segmentation, which combines the U-net, Attention network and a deep supervised multistage layer. Subsequently, we are able to achieve a low resolution and high resolution feature representations even for small tumor regions. Preliminary results of our method on training data have mean dice coefficients of about 0.75, 0.88, and 0.80; on the other hand, validation data achieve a mean dice coefficient of 0.67, 0.86, and 0.70, for enhancing tumor (ET), whole tumor (WT), and tumor core (TC) respectively .

Prostate cancer detection using residual networks

  • Xu, Helen
  • Baxter, John S H
  • Akin, Oguz
  • Cantor-Rivera, Diego
Int J Comput Assist Radiol Surg 2019 Journal Article, cited 0 times
Website
PURPOSE: To automatically identify regions where prostate cancer is suspected on multi-parametric magnetic resonance images (mp-MRI). METHODS: A residual network was implemented based on segmentations from an expert radiologist on T2-weighted, apparent diffusion coefficient map, and high b-value diffusion-weighted images. Mp-MRIs from 346 patients were used in this study. RESULTS: The residual network achieved a hit or miss accuracy of 93% for lesion detection, with an average Jaccard score of 71% that compared the agreement between network and radiologist segmentations. CONCLUSION: This paper demonstrated the ability for residual networks to learn features for prostate lesion segmentation.

Deep Generative Adversarial Reinforcement Learning for Semi-Supervised Segmentation of Low-Contrast and Small Objects in Medical Images

  • Xu, C.
  • Zhang, T.
  • Zhang, D.
  • Zhang, D.
  • Han, J.
IEEE Trans Med Imaging 2024 Journal Article, cited 0 times
Website
Deep reinforcement learning (DRL) has demonstrated impressive performance in medical image segmentation, particularly for low-contrast and small medical objects. However, current DRL-based segmentation methods face limitations due to the optimization of error propagation in two separate stages and the need for a significant amount of labeled data. In this paper, we propose a novel deep generative adversarial reinforcement learning (DGARL) approach that, for the first time, enables end-to-end semi-supervised medical image segmentation in the DRL domain. DGARL ingeniously establishes a pipeline that integrates DRL and generative adversarial networks (GANs) to optimize both detection and segmentation tasks holistically while mutually enhancing each other. Specifically, DGARL introduces two innovative components to facilitate this integration in semi-supervised settings. First, a task-joint GAN with two discriminators links the detection results to the GAN's segmentation performance evaluation, allowing simultaneous joint evaluation and feedback. This ensures that DRL and GAN can be directly optimized based on each other's results. Second, a bidirectional exploration DRL integrates backward exploration and forward exploration to ensure the DRL agent explores the correct direction when forward exploration is disabled due to lack of explicit rewards. This mitigates the issue of unlabeled data being unable to provide rewards and rendering DRL unexplorable. Comprehensive experiments on three generalization datasets, comprising a total of 640 patients, demonstrate that, compared to the ten most recent advanced methods, our novel DGARL achieves 85.02% Dice and improves by at least 1.91% for brain tumors, achieves 73.18% Dice and improves by at least 4.28% for liver tumors, and achieves 70.85% Dice and improves by at least 2.73% for pancreas. These results attest to the superiority of DGARL. Code is available at GitHub.

Optical breast atlas as a testbed for image reconstruction in optical mammography

  • Xing, Y.
  • Duan, Y.
  • Indurkar, P. P.
  • Qiu, A.
  • Chen, N.
Sci Data 2021 Journal Article, cited 0 times
Website
We present two optical breast atlases for optical mammography, aiming to advance the image reconstruction research by providing a common platform to test advanced image reconstruction algorithms. Each atlas consists of five individual breast models. The first atlas provides breast vasculature surface models, which are derived from human breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data using image segmentation. A finite element-based method is used to deform the breast vasculature models from their natural shapes to generate the second atlas, compressed breast models. Breast compression is typically done in X-ray mammography but also necessary for some optical mammography systems. Technical validation is presented to demonstrate how the atlases can be used to study the image reconstruction algorithms. Optical measurements are generated numerically with compressed breast models and a predefined configuration of light sources and photodetectors. The simulated data is fed into three standard image reconstruction algorithms to reconstruct optical images of the vasculature, which can then be compared with the ground truth to evaluate their performance.

UniMiSS: Universal Medical Self-supervised Learning via Breaking Dimensionality Barrier

  • Xie, Yutong
  • Zhang, Jianpeng
  • Xia, Yong
  • Wu, Qi
2022 Conference Proceedings, cited 1 times
Website
Self-supervised learning (SSL) opens up huge opportunities for medical image analysis, which is well known for its lack of annotations. However, aggregating massive (unlabeled) 3D medical images like computerized tomography (CT) remains challenging due to its high imaging cost and privacy restrictions. In this paper, we advocate bringing a wealth of 2D images like chest X-rays as compensation for the lack of 3D data, aiming to build a universal medical self-supervised representation learning framework, called UniMiSS. The key problem is then how to break the dimensionality barrier, i.e., how to perform SSL with both 2D and 3D images. To achieve this, we design a pyramid U-like medical Transformer (MiT). It is composed of the switchable patch embedding (SPE) module and Transformers. The SPE module adaptively switches to either 2D or 3D patch embedding, depending on the input dimension. The embedded patches are converted into a sequence regardless of their original dimensions. The Transformers model the long-term dependencies in a sequence-to-sequence manner, thus enabling UniMiSS to learn representations from both 2D and 3D images. With the MiT as the backbone, we perform the UniMiSS in a self-distillation manner. We conduct extensive experiments on six 3D/2D medical image analysis tasks, including segmentation and classification. The results show that the proposed UniMiSS achieves promising performance on various downstream tasks, outperforming ImageNet pre-training and other advanced SSL counterparts substantially. Code is available at https://github.com/YtongXie/UniMiSS-code.
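
A guess at the flavour of the switchable patch embedding, not the released implementation (see the linked repository for that): route 4-D image batches and 5-D volume batches through dimension-matched projections that share the token width, so both end up as sequences for one Transformer.

    import torch
    import torch.nn as nn

    class SwitchablePatchEmbedding(nn.Module):
        """Dispatch 2D (B, C, H, W) or 3D (B, C, D, H, W) inputs to separate
        projections with a shared embedding size; channel count, embedding
        width, and patch size here are illustrative."""
        def __init__(self, in_ch: int = 1, dim: int = 96, patch: int = 16):
            super().__init__()
            self.embed2d = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
            self.embed3d = nn.Conv3d(in_ch, dim, kernel_size=patch, stride=patch)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            proj = self.embed2d if x.dim() == 4 else self.embed3d
            return proj(x).flatten(2).transpose(1, 2)   # (B, N_patches, dim)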

Fusing texture, shape and deep model-learned information at decision level for automated classification of lung nodules on chest CT

  • Xie, Yutong
  • Zhang, Jianpeng
  • Xia, Yong
  • Fulham, Michael
  • Zhang, Yanning
Information Fusion 2018 Journal Article, cited 13 times
Website

Semi-supervised Adversarial Model for Benign-Malignant Lung Nodule Classification on Chest CT

  • Xie, Yutong
  • Zhang, Jianpeng
  • Xia, Yong
Medical Image Analysis 2019 Journal Article, cited 0 times
Classification of benign-malignant lung nodules on chest CT is the most critical step in the early detection of lung cancer and prolongation of patient survival. Despite their success in image classification, deep convolutional neural networks (DCNNs) always require a large number of labeled training data, which are not available for most medical image analysis applications due to the work required in image acquisition and particularly image annotation. In this paper, we propose a semi-supervised adversarial classification (SSAC) model that can be trained using both labeled and unlabeled data for benign-malignant lung nodule classification. This model consists of an adversarial autoencoder-based unsupervised reconstruction network R, a supervised classification network C, and learnable transition layers that enable the adaptation of the image representation ability learned by R to C. The SSAC model has been extended to multi-view knowledge-based collaborative learning (MK-SSAC), which employs three SSACs to characterize each nodule's overall appearance and its heterogeneity in shape and texture, respectively, and performs such characterization on nine planar views. The MK-SSAC model has been evaluated on the benchmark LIDC-IDRI dataset and achieves an accuracy of 92.53% and an AUC of 95.81%, which are superior to the performance of other lung nodule classification and semi-supervised learning approaches.

Low-complexity atlas-based prostate segmentation by combining global, regional, and local metrics

  • Xie, Qiuliang
  • Ruan, Dan
Medical Physics 2014 Journal Article, cited 15 times
Website
PURPOSE: To improve the efficiency of atlas-based segmentation without compromising accuracy, and to demonstrate the validity of the proposed method on an MRI-based prostate segmentation application. METHODS: Accurate and efficient automatic structure segmentation is an important task in medical image processing. Atlas-based methods, as the state-of-the-art, provide good segmentation at the cost of a large number of computationally intensive nonrigid registrations, for anatomical sites/structures that are subject to deformation. In this study, the authors propose to utilize a combination of global, regional, and local metrics to improve the accuracy yet significantly reduce the number of required nonrigid registrations. The authors first perform an affine registration to minimize the global mean squared error (gMSE) to coarsely align each atlas image to the target. Subsequently, a target-specific regional MSE (rMSE), demonstrated to be a good surrogate for the Dice similarity coefficient (DSC), is used to select a relevant subset from the training atlas. Only within this subset are nonrigid registrations performed between the training images and the target image, to minimize a weighted combination of gMSE and rMSE. Finally, structure labels are propagated from the selected training samples to the target via the estimated deformation fields, and label fusion is performed based on a weighted combination of rMSE and local MSE (lMSE) discrepancy, with proper total-variation-based spatial regularization. RESULTS: The proposed method was applied to a public database of 30 prostate MR images with expert-segmented structures. The authors' method, utilizing only eight nonrigid registrations, achieved a median/mean DSC of over 0.87/0.86, outperforming the state-of-the-art full-fledged atlas-based segmentation approach, whose median/mean DSC was 0.84/0.82 when applied to the same dataset. CONCLUSIONS: The proposed method requires a fixed number of nonrigid registrations, independent of atlas size, providing desirable scalability that is especially important for a large or growing atlas. When applied to prostate segmentation, the method achieved better performance than state-of-the-art atlas-based approaches, with significant improvement in computation efficiency. The proposed rationale of jointly utilizing global, regional, and local metrics, based on the information characteristics and surrogate behavior of the registration and fusion subtasks, can be extended naturally to similarity metrics beyond MSE, such as correlation or mutual information types.

An Automated Segmentation Method for Lung Parenchyma Image Sequences Based on Fractal Geometry and Convex Hull Algorithm

  • Xiao, Xiaojiao
  • Zhao, Juanjuan
  • Qiang, Yan
  • Wang, Hua
  • Xiao, Yingze
  • Zhang, Xiaolong
  • Zhang, Yudong
Applied Sciences 2018 Journal Article, cited 1 times
Website

CateNorm: Categorical Normalization for Robust Medical Image Segmentation

  • Xiao, Junfei
  • Yu, Lequan
  • Zhou, Zongwei
  • Bai, Yutong
  • Xing, Lei
  • Yuille, Alan
  • Zhou, Yuyin
2022 Conference Proceedings, cited 0 times
Website

Efficient copyright protection for three CT images based on quaternion polar harmonic Fourier moments

  • Xia, Zhiqiu
  • Wang, Xingyuan
  • Li, Xiaoxiao
  • Wang, Chunpeng
  • Unar, Salahuddin
  • Wang, Mingxu
  • Zhao, Tingting
Signal Processing 2019 Journal Article, cited 0 times

Volume fractions of DCE-MRI parameter as early predictor of histologic response in soft tissue sarcoma: A feasibility study

  • Xia, Wei
  • Yan, Zhuangzhi
  • Gao, Xin
European Journal of Radiology 2017 Journal Article, cited 2 times
Website

Predicting Microvascular Invasion in Hepatocellular Carcinoma Using CT-based Radiomics Model

  • Xia, T. Y.
  • Zhou, Z. H.
  • Meng, X. P.
  • Zha, J. H.
  • Yu, Q.
  • Wang, W. L.
  • Song, Y.
  • Wang, Y. C.
  • Tang, T. Y.
  • Xu, J.
  • Zhang, T.
  • Long, X. Y.
  • Liang, Y.
  • Xiao, W. B.
  • Ju, S. H.
Radiology 2023 Journal Article, cited 3 times
Website
Background Prediction of microvascular invasion (MVI) may help determine treatment strategies for hepatocellular carcinoma (HCC). Purpose To develop a radiomics approach for predicting MVI status based on preoperative multiphase CT images and to identify MVI-associated differentially expressed genes. Materials and Methods Patients with pathologically proven HCC from May 2012 to September 2020 were retrospectively included from four medical centers. Radiomics features were extracted from tumors and peritumor regions on preoperative registration or subtraction CT images. In the training set, these features were used to build five radiomics models via logistic regression after feature reduction. The models were tested using internal and external test sets against a pathologic reference standard to calculate area under the receiver operating characteristic curve (AUC). The optimal-AUC radiomics model and clinical-radiologic characteristics were combined to build the hybrid model. The log-rank test was used in the outcome cohort (Kunming center) to analyze early recurrence-free survival and overall survival based on high versus low model-derived score. RNA sequencing data from The Cancer Imaging Archive were used for gene expression analysis. Results A total of 773 patients (median age, 59 years; IQR, 49-64 years; 633 men) were divided into the training set (n = 334), internal test set (n = 142), external test set (n = 141), outcome cohort (n = 121), and RNA sequencing analysis set (n = 35). The AUCs from the radiomics and hybrid models, respectively, were 0.76 and 0.86 for the internal test set and 0.72 and 0.84 for the external test set. Early recurrence-free survival (P < .01) and overall survival (P < .007) could be stratified using the hybrid model. Differentially expressed genes in patients with findings positive for MVI were involved in glucose metabolism. Conclusion The hybrid model showed the best performance in prediction of MVI.
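
The modelling recipe (radiomics features, feature reduction, then logistic regression) maps naturally onto a scikit-learn pipeline; in this sketch the scaler, selector, and k are illustrative stand-ins for the paper's reduction steps:

    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # X: radiomics feature matrix (n_patients x n_features); y: MVI status (0/1)
    model = make_pipeline(
        StandardScaler(),                   # radiomics features live on very different scales
        SelectKBest(f_classif, k=20),       # illustrative univariate feature reduction
        LogisticRegression(max_iter=1000),  # the final classifier named in the abstract
    )
    # auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()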

Deep Domain Adaptation Learning Framework for Associating Image Features to Tumour Gene Profile

  • Xia, Tian
2018 Thesis, cited 0 times
Website

Research of Multimodal Medical Image Fusion Based on Parameter-Adaptive Pulse-Coupled Neural Network and Convolutional Sparse Representation

  • Xia, J.
  • Lu, Y.
  • Tan, L.
Comput Math Methods Med 2020 Journal Article, cited 0 times
Website
The visual quality of medical images has a great impact on computer-assisted clinical diagnosis. At present, medical image fusion has become a powerful means of clinical application. Traditional medical image fusion methods produce poor fusion results because detailed feature information is lost during fusion. To address this, this paper proposes a new multimodal medical image fusion method based on the imaging characteristics of medical images. In the proposed method, non-subsampled shearlet transform (NSST) decomposition is first performed on the source images to obtain high-frequency and low-frequency coefficients. The high-frequency coefficients are fused by a parameter-adaptive pulse-coupled neural network (PAPCNN) model, which adapts its parameters and optimizes the connection strength beta to improve performance. The low-frequency coefficients are merged by the convolutional sparse representation (CSR) model. The experimental results show that the proposed method solves the problems of difficult parameter setting and poor detail preservation of sparse representation during image fusion in traditional PCNN algorithms, and it has significant advantages in visual effect and objective indices compared with the existing mainstream fusion algorithms.

Automatic glioma segmentation based on adaptive superpixel

  • Wu, Yaping
  • Zhao, Zhe
  • Wu, Weiguo
  • Lin, Yusong
  • Wang, Meiyun
BMC Med Imaging 2019 Journal Article, cited 0 times
Website
BACKGROUND: Automatic glioma segmentation is of great significance for clinical practice. This study aims to propose an automatic method based on superpixels for glioma segmentation from T2-weighted magnetic resonance imaging. METHODS: The proposed method mainly includes three steps. First, we propose an adaptive superpixel generation algorithm based on the zero-parameter version of simple linear iterative clustering (ASLIC0). This algorithm can produce a superpixel image with fewer superpixels that better fits the boundary of the region of interest (ROI) by automatically selecting the optimal number of superpixels. Second, we compose a training set by calculating the statistical, texture, curvature, and fractal features for each superpixel. Third, a Support Vector Machine (SVM) is used to train a classification model based on the features of the second step. RESULTS: The experimental results on the Multimodal Brain Tumor Image Segmentation Benchmark 2017 (BraTS2017) show that the proposed method has good segmentation performance. The average Dice, Hausdorff distance, sensitivity, and specificity for the segmented tumor against the ground truth are 0.8492, 3.4697 pixels, 81.47%, and 99.64%, respectively. The proposed method shows good stability on high- and low-grade glioma samples. Comparative experimental results show that the proposed method has superior performance. CONCLUSIONS: This provides a close match to expert delineation across all grades of glioma, leading to a fast and reproducible method of glioma segmentation.
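
A compressed sketch of the pipeline's skeleton using scikit-image's SLIC0 mode and a toy per-superpixel feature set; the paper's adaptive choice of superpixel count and its full statistical/texture/curvature/fractal feature set are omitted here:

    import numpy as np
    from skimage.segmentation import slic
    from sklearn.svm import SVC

    def superpixel_features(image: np.ndarray, n_segments: int = 300):
        """Decompose a 2-D grayscale slice into SLIC0 superpixels and compute
        toy mean/std statistics per superpixel (skimage >= 0.19)."""
        labels = slic(image, n_segments=n_segments, slic_zero=True,
                      channel_axis=None)
        feats = [[image[labels == s].mean(), image[labels == s].std()]
                 for s in np.unique(labels)]
        return labels, np.asarray(feats)

    # With per-superpixel tumor/non-tumor labels available:
    # clf = SVC(kernel="rbf").fit(train_feats, train_labels)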

Joint model- and immunohistochemistry-driven few-shot learning scheme for breast cancer segmentation on 4D DCE-MRI

  • Wu, Youqing
  • Wang, Yihang
  • Sun, Heng
  • Jiang, Chunjuan
  • Li, Bo
  • Li, Lihua
  • Pan, Xiang
Applied Intelligence 2022 Journal Article, cited 0 times
Website
Automatic segmentation of breast cancer on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), which reveals both temporal and spatial profiles of the foundational anatomy, plays a crucial role in the clinical diagnosis and treatment of breast cancer. Recently, deep learning has witnessed great advances in tumour segmentation tasks. However, most of those high-performing models require a large number of annotated gold-standard samples, which remains a challenge in the accurate segmentation of 4D DCE-MRI breast cancer with high heterogeneity. To address this problem, we propose a joint immunohistochemistry- (IHC) and model-driven few-shot learning scheme for 4D DCE-MRI breast cancer segmentation. Specifically, a unique bidirectional convolutional recurrent graph attention autoencoder (BiCRGADer) is developed to exploit the spatiotemporal pharmacokinetic characteristics contained in 4D DCE-MRI sequences. Moreover, the IHC-driven strategy that employs a few-shot learning scenario optimizes BiCRGADer by learning the features of MR imaging phenotypes of specific molecular subtypes during training. In particular, a parameter-free module (PFM) is designed to adaptively enrich query features with support features and masks. The combined model- and IHC-driven scheme boosts performance with only a small training sample size. We conduct methodological analyses and empirical evaluations on datasets from The Cancer Imaging Archive (TCIA) to justify the effectiveness and adaptability of our scheme. Extensive experiments show that the proposed scheme outperforms state-of-the-art segmentation models and provides a potential and powerful noninvasive approach for the artificial intelligence community dealing with oncological applications.

DeepMMSA: A Novel Multimodal Deep Learning Method for Non-small Cell Lung Cancer Survival Analysis

  • Wu, Yujiao
  • Ma, Jie
  • Huang, Xiaoshui
  • Ling, Sai Ho
  • Weidong Su, Steven
2021 Conference Paper, cited 18 times
Website
Lung cancer is the leading cause of cancer death worldwide. The critical reason for the deaths is delayed diagnosis and poor prognosis. With the accelerated development of deep learning techniques, it has been successfully applied extensively in many real-world applications, including health sectors such as medical image interpretation and disease diagnosis. By combining more modalities that being engaged in the processing of information, multimodal learning can extract better features and improve the predictive ability. The conventional methods for lung cancer survival analysis normally utilize clinical data and only provide a statistical probability. To improve the survival prediction accuracy and help prognostic decision-making in clinical practice for medical experts, we for the first time propose a multimodal deep learning framework for non-small cell lung cancer (NSCLC) survival analysis, named DeepMMSA. This framework leverages CT images in combination with clinical data, enabling the abundant information held within medical images to be associate with lung cancer survival information. We validate our model on the data of 422 NSCLC patients from The Cancer Imaging Archive (TCIA). Experimental results support our hypothesis that there is an underlying relationship between prognostic information and radiomic images. Besides, quantitative results show that our method could surpass the state-of-the-art methods by 4% on concordance.

Mutual consistency learning for semi-supervised medical image segmentation

  • Wu, Yicheng
  • Ge, Zongyuan
  • Zhang, Donghao
  • Xu, Minfeng
  • Zhang, Lei
  • Xia, Yong
  • Cai, Jianfei
Medical Image Analysis 2022 Journal Article, cited 1 times
Website

Three-Plane–assembled Deep Learning Segmentation of Gliomas

  • Wu, Shaocheng
  • Li, Hongyang
  • Quang, Daniel
  • Guan, Yuanfang
Radiology: Artificial Intelligence 2020 Journal Article, cited 0 times
Website
An accurate and fast deep learning approach developed for automatic segmentation of brain glioma on multimodal MRI scans achieved Sørensen–Dice scores of 0.80, 0.83, and 0.91 for enhancing tumor, tumor core, and whole tumor, respectively. Purpose To design a computational method for automatic brain glioma segmentation of multimodal MRI scans with high efficiency and accuracy. Materials and Methods The 2018 Multimodal Brain Tumor Segmentation Challenge (BraTS) dataset was used in this study, consisting of routine clinically acquired preoperative multimodal MRI scans. Three subregions of glioma—the necrotic and nonenhancing tumor core, the peritumoral edema, and the contrast-enhancing tumor—were manually labeled by experienced radiologists. Two-dimensional U-Net models were built using a three-plane–assembled approach to segment three subregions individually (three-region model) or to segment only the whole tumor (WT) region (WT-only model). The term three-plane–assembled means that coronal and sagittal images were generated by reformatting the original axial images. The model performance for each case was evaluated in three classes: enhancing tumor (ET), tumor core (TC), and WT. Results On the internal unseen testing dataset split from the 2018 BraTS training dataset, the proposed models achieved mean Sørensen–Dice scores of 0.80, 0.84, and 0.91, respectively, for ET, TC, and WT. On the BraTS validation dataset, the proposed models achieved mean 95% Hausdorff distances of 3.1 mm, 7.0 mm, and 5.0 mm, respectively, for ET, TC, and WT and mean Sørensen–Dice scores of 0.80, 0.83, and 0.91, respectively, for ET, TC, and WT. On the BraTS testing dataset, the proposed models ranked fourth out of 61 teams. The source code is available at https://github.com/GuanLab/Brain_Glioma. Conclusion This deep learning method consistently segmented subregions of brain glioma with high accuracy, efficiency, reliability, and generalization ability on screening images from a large population, and it can be efficiently implemented in clinical practice to assist neuro-oncologists or radiologists. Supplemental material is available for this article.
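
The three-plane assembly amounts to reslicing the axial volume before feeding plane-specific 2D U-Nets; a numpy sketch of the reformatting step (per-plane predictions would then be mapped back and averaged in volume space):

    import numpy as np

    def three_plane_stacks(volume: np.ndarray):
        """Reformat an axial volume (Z, Y, X) into coronal and sagittal
        slice stacks, so each stack can be fed to its own 2D network."""
        axial = volume                        # slices along Z: shape (Z, Y, X)
        coronal = volume.transpose(1, 0, 2)   # slices along Y: shape (Y, Z, X)
        sagittal = volume.transpose(2, 0, 1)  # slices along X: shape (X, Z, Y)
        return axial, coronal, sagittal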

Whole Mammography Diagnosis via Multi-instance Supervised Discriminative Localization and Classification

  • Wu, Qingxia
  • Tan, Hongna
  • Wu, Yaping
  • Dong, Pei
  • Che, Jifei
  • Li, Zheren
  • Lei, Chenjin
  • Shen, Dinggang
  • Xue, Zhong
  • Wang, Meiyun
2022 Conference Proceedings, cited 0 times
Precise mammography diagnosis plays a vital role in breast cancer management, especially in identifying malignancy with computer assistance. Due to high resolution, large image size, and small lesion regions, it is challenging to localize lesions while classifying the whole mammogram, which also makes it difficult to annotate mammography datasets and to balance tumor and normal background regions during training. To fully use local lesion information and macroscopic malignancy information, we propose a two-step mammography classification method based on multi-instance learning. In step one, a multi-task encoder-decoder architecture (mt-ConvNext-Unet) is employed for instance-level lesion localization and lesion type classification. To enhance the feature-extraction ability, we adopt ConvNext as the encoder and add normalization layers and scSE attention blocks in the decoder to strengthen the localization of small lesions. A classification branch is used after the encoder to jointly train lesion classification and segmentation. The instance-based outputs are merged at the image level for both segmentation and classification (SegMap and ClsMap). In step two, a whole-mammography classification model is applied for breast-level cancer diagnosis by combining the results of the CC and MLO views with EfficientNet. Experimental results on the open dataset show that our method not only accurately classifies breast cancer on mammography but also highlights the suspicious regions.

Correlation coefficient based supervised locally linear embedding for pulmonary nodule recognition

  • Wu, Panpan
  • Xia, Kewen
  • Yu, Hengyong
Computer Methods and Programs in Biomedicine 2016 Journal Article, cited 5 times
Website
BACKGROUND AND OBJECTIVE: Dimensionality reduction techniques are developed to suppress the negative effects of the high-dimensional feature space of lung CT images on classification performance in computer-aided detection (CAD) systems for pulmonary nodule detection. METHODS: An improved supervised locally linear embedding (SLLE) algorithm is proposed based on the concept of correlation coefficient. The Spearman's rank correlation coefficient is introduced to adjust the distance metric in the SLLE algorithm to ensure that more suitable neighborhood points are identified, and thus to enhance the discriminating power of embedded data. The proposed Spearman's rank correlation coefficient based SLLE (SC²SLLE) is implemented and validated in our pilot CAD system using a clinical dataset collected from the publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). Particularly, a representative CAD system for solitary pulmonary nodule detection is designed and implemented. After a sequence of medical image processing steps, 64 nodules and 140 non-nodules are extracted, and 34 representative features are calculated. SC²SLLE, as well as the SLLE and LLE algorithms, is applied to reduce the dimensionality. Several quantitative measurements are also used to evaluate and compare the performances. RESULTS: Using a 5-fold cross-validation methodology, the proposed algorithm achieves 87.65% accuracy, 79.23% sensitivity, 91.43% specificity, and an 8.57% false positive rate, on average. Experimental results indicate that the proposed algorithm outperforms the original locally linear embedding and SLLE coupled with the support vector machine (SVM) classifier. CONCLUSIONS: Based on the preliminary results from a limited number of nodules in our dataset, this study demonstrates the great potential to improve the performance of a CAD system for nodule detection using the proposed SC²SLLE.
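
The distance adjustment at the heart of SC²SLLE can be sketched as follows; the exact weighting rule used in the paper may differ, so treat the scaling below as an assumption for illustration.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

def correlation_adjusted_distances(X):
    """Shrink pairwise Euclidean distances between samples whose feature
    vectors are strongly rank-correlated, so that they are more likely to
    be picked as embedding neighbours (sketch of the SC2SLLE idea)."""
    d = squareform(pdist(X))        # (n, n) Euclidean distances between samples
    rho, _ = spearmanr(X.T)         # (n, n) Spearman correlations between samples
    return d * (1.0 - np.abs(rho))  # strongly correlated pairs become closer
```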

Classification of Lung Nodules Based on Deep Residual Networks and Transfer Learning

  • Wu, Panpan
  • Sun, Xuanchao
  • Zhao, Ziping
  • Wang, Haishuai
  • Pan, Shirui
  • Schuller, Bjorn
Comput Intell Neurosci 2020 Journal Article, cited 0 times
Website
The classification process of lung nodule detection in a traditional computer-aided detection (CAD) system is complex, and the classification result is heavily dependent on the performance of each step in lung nodule detection, causing low classification accuracy and a high false positive rate. In order to alleviate these issues, a lung nodule classification method based on a deep residual network is proposed. Abandoning traditional image processing methods and taking the 50-layer ResNet network structure as the initial model, the deep residual network is constructed by combining residual learning and transfer learning. The proposed approach is verified by conducting experiments on the lung computed tomography (CT) images from the publicly available LIDC-IDRI database. An average accuracy of 98.23% and a false positive rate of 1.65% are obtained based on the ten-fold cross-validation method. Compared with the conventional support vector machine (SVM)-based CAD system, the accuracy of our method improved by 9.96% and the false positive rate decreased by 6.95%; compared with the VGG19 and InceptionV3 convolutional neural networks, the accuracy improved by 1.75% and 2.42%, and the false positive rate decreased by 2.07% and 2.22%, respectively. The experimental results demonstrate the effectiveness of our proposed method in lung nodule classification for CT images.
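
A generic PyTorch/torchvision sketch of the ResNet-50 transfer-learning setup the abstract describes; the backbone-freezing scheme and two-class head are assumptions for illustration, not the paper's exact fine-tuning recipe.

```python
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained ResNet-50 and retrain only the head
# for a binary nodule/non-nodule decision.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                     # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)   # new trainable classifier head
```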

Development and Validation of Pre- and Post-Operative Models to Predict Recurrence After Resection of Solitary Hepatocellular Carcinoma: A Multi-Institutional Study

  • Wu, Ming-Yu
  • Qiao, Qian
  • Wang, Ke
  • Ji, Gu-Wei
  • Cai, Bing
  • Li, Xiang-Cheng
Cancer Manag Res 2020 Journal Article, cited 1 times
Website
Background: Patients with solitary hepatocellular carcinoma (HCC) are the ideal candidates for resection; however, the postoperative recurrence rate remains high. We aimed to establish prognostic models to predict HCC recurrence based on readily accessible clinical parameters and multi-institutional databases. Patients and Methods: A total of 485 patients undergoing curative resection for solitary HCC were recruited from two independent institutions and the Cancer Imaging Archive database. We randomly divided the patients into training (n=323) and validation (n=162) cohorts. Two models were developed: one using pre-operative parameters and one using pre- and post-operative parameters. Performance of the models was compared with staging systems. Results: Using multivariable analysis, albumin-bilirubin grade, serum alpha-fetoprotein, and tumor size were selected into the pre-operative model; albumin-bilirubin grade, serum alpha-fetoprotein, tumor size, microvascular invasion, and cirrhosis were selected into the post-operative model. The two models exhibited better discriminative ability (concordance index: 0.673-0.728) and lower prediction error (integrated Brier score: 0.169-0.188) than currently used staging systems for predicting recurrence in both cohorts. Both models stratified patients into low- and high-risk subgroups of recurrence with distinct recurrence patterns. Conclusion: The two models, with corresponding user-friendly calculators, are useful tools to predict recurrence before and after resection that may facilitate individualized management of solitary HCC.
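
A hedged sketch of fitting such a pre-operative Cox model with the lifelines package; the toy data frame, column names, and values are invented for illustration and are not the study's data.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical columns mirroring the pre-operative model: ALBI grade,
# serum AFP, tumour size, plus follow-up time and recurrence flag.
df = pd.DataFrame({
    "albi_grade": [1, 2, 1, 2, 1, 2, 2, 1],
    "afp":        [12.0, 400.0, 8.5, 1500.0, 30.0, 220.0, 15.0, 900.0],
    "tumor_size": [3.2, 6.1, 2.4, 8.0, 4.5, 5.5, 2.9, 7.2],
    "time":       [48.0, 10.0, 60.0, 6.0, 20.0, 30.0, 52.0, 9.0],  # months
    "event":      [0, 1, 0, 1, 1, 0, 0, 1],   # 1 = recurrence observed
})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.concordance_index_)   # discrimination (C-index), as reported above
```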

A comprehensive texture feature analysis framework of renal cell carcinoma: pathological, prognostic, and genomic evaluation based on CT images

  • Wu, K.
  • Wu, P.
  • Yang, K.
  • Li, Z.
  • Kong, S.
  • Yu, L.
  • Zhang, E.
  • Liu, H.
  • Guo, Q.
  • Wu, S.
Eur Radiol 2022 Journal Article, cited 14 times
Website
OBJECTIVES: To determine whether CT texture features enable accurate pathological classification and evaluation of prognosis and genomic characteristics in renal cell carcinoma. METHODS: Patients with renal cell carcinoma from five open-source cohorts were analyzed retrospectively in this study. These data were randomly split to train and test machine learning algorithms to segment the lesion and predict the histological subtype, tumor stage, and pathological grade. The Dice coefficient and performance metrics such as accuracy and AUC were calculated to evaluate the segmentation and classification models. Quantitative decomposition of the predictive model was conducted to explore the contribution of each feature. Besides, survival analysis and the statistical correlation between CT texture features, pathological, and genomic signatures were investigated. RESULTS: A total of 569 enhanced CT images of 443 patients (mean age 59.4 years, 278 men) were included in the analysis. In the segmentation task, the mean Dice coefficient was 0.96 for the kidney and 0.88 for the cancer region. For classification of histologic subtype, tumor stage, and pathological grade, the model was on a par with radiologists, and the AUC was 0.83 ± 0.1, 0.80 ± 0.1, and 0.77 ± 0.1 at 95% confidence intervals, respectively. Moreover, specific quantitative CT features related to clinical prognosis were identified. A strong statistical correlation (R² = 0.83) between the feature crosses and genomic characteristics was shown. Structural equation modeling confirmed significant associations between CT features, pathological (β = −0.75), and molecular subtype (β = −0.30). CONCLUSIONS: The framework illustrates high performance in the pathological classification of renal cell carcinoma. Prognosis and genomic characteristics can be inferred by quantitative image analysis. KEY POINTS: * The analytical framework exhibits high-performance pathological classification of renal cell carcinoma and is on a par with human radiologists. * Quantitative decomposition of the predictive model shows that specific texture features contribute to histologic subtype and tumor stage classification. * Structural equation modeling shows the associations of genomic characteristics to CT texture features. Overall survival and molecular characteristics can be inferred by quantitative CT texture analysis in renal cell carcinoma.

Identifying relations between imaging phenotypes and molecular subtypes of breast cancer: Model discovery and external validation

  • Wu, Jia
  • Sun, Xiaoli
  • Wang, Jeff
  • Cui, Yi
  • Kato, Fumi
  • Shirato, Hiroki
  • Ikeda, Debra M
  • Li, Ruijiang
Journal of Magnetic Resonance Imaging 2017 Journal Article, cited 17 times
Website
Purpose: To determine whether dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) characteristics of the breast tumor and background parenchyma can distinguish molecular subtypes (ie, luminal A/B or basal) of breast cancer. Materials and methods: In all, 84 patients from one institution and 126 patients from The Cancer Genome Atlas (TCGA) were used for discovery and external validation, respectively. Thirty-five quantitative image features were extracted from DCE-MRI (1.5 or 3T) including morphology, texture, and volumetric features, which capture both tumor and background parenchymal enhancement (BPE) characteristics. Multiple testing was corrected using the Benjamini-Hochberg method to control the false-discovery rate (FDR). Sparse logistic regression models were built using the discovery cohort to distinguish each of the three studied molecular subtypes versus the rest, and the models were evaluated in the validation cohort. Results: On univariate analysis in the discovery and validation cohorts, two features characterizing tumor and two characterizing BPE were statistically significant in separating luminal A versus nonluminal A cancers; two features characterizing tumor were statistically significant for separating luminal B; one feature characterizing tumor and one characterizing BPE reached statistical significance for distinguishing basal (Wilcoxon P < 0.05, FDR < 0.25). In the discovery and validation cohorts, multivariate logistic regression models achieved an area under the receiver operating characteristic curve (AUC) of 0.71 and 0.73 for luminal A cancer, 0.67 and 0.69 for luminal B cancer, and 0.66 and 0.79 for basal cancer, respectively. Conclusion: DCE-MRI characteristics of breast cancer and BPE may potentially be used to distinguish among molecular subtypes of breast cancer. Level of Evidence: 3. Technical Efficacy: Stage 3.

Magnetic resonance imaging and molecular features associated with tumor-infiltrating lymphocytes in breast cancer

  • Wu, Jia
  • Li, Xuejie
  • Teng, Xiaodong
  • Rubin, Daniel L
  • Napel, Sandy
  • Daniel, Bruce L
  • Li, Ruijiang
Breast Cancer Research 2018 Journal Article, cited 0 times
Website

Heterogeneous Enhancement Patterns of Tumor-adjacent Parenchyma at MR Imaging Are Associated with Dysregulated Signaling Pathways and Poor Survival in Breast Cancer

  • Wu, Jia
  • Li, Bailiang
  • Sun, Xiaoli
  • Cao, Guohong
  • Rubin, Daniel L
  • Napel, Sandy
  • Ikeda, Debra M
  • Kurian, Allison W
  • Li, Ruijiang
Radiology 2017 Journal Article, cited 9 times
Website

Intratumor partitioning and texture analysis of dynamic contrast‐enhanced (DCE)‐MRI identifies relevant tumor subregions to predict pathological response of breast cancer to neoadjuvant chemotherapy

  • Wu, Jia
  • Gong, Guanghua
  • Cui, Yi
  • Li, Ruijiang
Journal of Magnetic Resonance Imaging 2016 Journal Article, cited 43 times
Website
PURPOSE: To predict pathological response of breast cancer to neoadjuvant chemotherapy (NAC) based on quantitative, multiregion analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). MATERIALS AND METHODS: In this Institutional Review Board-approved study, 35 patients diagnosed with stage II/III breast cancer were retrospectively investigated using 3T DCE-MR images acquired before and after the first cycle of NAC. First, principal component analysis (PCA) was used to reduce the dimensionality of the DCE-MRI data with high temporal resolution. We then partitioned the whole tumor into multiple subregions using k-means clustering based on the PCA-defined eigenmaps. Within each tumor subregion, we extracted four quantitative Haralick texture features based on the gray-level co-occurrence matrix (GLCM). The change in texture features in each tumor subregion between pre- and during-NAC was used to predict pathological complete response after NAC. RESULTS: Three tumor subregions were identified through clustering, each with distinct enhancement characteristics. In univariate analysis, all imaging predictors except one extracted from the tumor subregion associated with fast washout were statistically significant (P < 0.05) after correcting for multiple testing, with areas under the receiver operating characteristic (ROC) curve (AUCs) between 0.75 and 0.80. In multivariate analysis, the proposed imaging predictors achieved an AUC of 0.79 (P = 0.002) in leave-one-out cross-validation. This improved upon conventional imaging predictors such as tumor volume (AUC = 0.53) and texture features based on whole-tumor analysis (AUC = 0.65). CONCLUSION: The heterogeneity of the tumor subregion associated with fast washout on DCE-MRI predicted pathological response to NAC in breast cancer.
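
The PCA → k-means → GLCM pipeline can be outlined with scikit-learn and scikit-image as below; the specific texture property ("contrast") and all parameters are illustrative rather than the paper's exact choices.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from skimage.feature import graycomatrix, graycoprops

def partition_and_texture(voxels, patch):
    """voxels: (n_voxels, n_timepoints) DCE enhancement curves inside the tumour;
    patch: small 2D uint8 image of quantised intensities from one subregion."""
    eigenmaps = PCA(n_components=3).fit_transform(voxels)        # kinetic eigenmaps
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(eigenmaps)
    glcm = graycomatrix(patch, distances=[1], angles=[0],
                        levels=int(patch.max()) + 1, symmetric=True, normed=True)
    return labels, float(graycoprops(glcm, "contrast")[0, 0])    # one Haralick feature
```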

Unsupervised clustering of quantitative image phenotypes reveals breast cancer subtypes with distinct prognoses and molecular pathways

  • Wu, Jia
  • Cui, Yi
  • Sun, Xiaoli
  • Cao, Guohong
  • Li, Bailiang
  • Ikeda, Debra M
  • Kurian, Allison W
  • Li, Ruijiang
Clinical Cancer Research 2017 Journal Article, cited 14 times
Website

HarDNet-BTS: A Harmonic Shortcut Network for Brain Tumor Segmentation

  • Wu, Hung-Yu
  • Lin, Youn-Long
2022 Book Section, cited 0 times
Tumor segmentation of brain MRI image is an important and challenging computer vision task. With well-curated multi-institutional multi-parametric MRI (mpMRI) data, the RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge 2021 is a great bench-marking venue for world-wide researchers to contribute to the advancement of the state-of-the-art. HarDNet is a memory-efficient neural network backbone that has demonstrated excellent performance and efficiency in image classification, object detection, real-time semantic segmentation, and colonoscopy polyp segmentation. In this paper, we propose HarDNet-BTS, a U-Net-like encoder-decoder architecture with HarDNet backbone, for Brain Tumor Segmentation. We train it with the BraTS 2021 dataset using three training strategies and ensemble the resultant models to improve the prediction quality. Assessment reports from the BraTS 2021 validation server show that HarDNet-BTS delivers state-of-the-art performance (Dice_ET = 0.8442, Dice_TC = 0.8793, Dice_WT = 0.9260, HD95_ET = 12.592, HD95_TC = 7.073, HD95_WT = 3.884). It was ranked 8th in the validation phase. Its performance on the final testing dataset is consistent with that of the validation phase (Dice_ET = 0.8727, Dice_TC = 0.8665, Dice_WT = 0.9286, HD95_ET = 8.496, HD95_TC = 18.606, HD95_WT = 4.059). Inferencing an MRI case takes only 16 s of GPU time and 6GBs of GPU memory.

Optimal batch determination for improved harmonization and prognostication of multi-center PET/CT radiomics feature in head and neck cancer

  • Wu, Huiqin
  • Liu, Xiaohui
  • Peng, Lihong
  • Yang, Yuling
  • Zhou, Zidong
  • Du, Dongyang
  • Xu, Hui
  • Lv, Wenbing
  • Lu, Lijun
Phys Med Biol 2023 Journal Article, cited 0 times
Website
Objective: To determine the optimal approach for identifying and mitigating batch effects in PET/CT radiomics features and to further improve the prognosis of patients with head and neck cancer (HNC), this study investigated the performance of three batch harmonization methods. Approach: Unsupervised harmonization identified the batch labels by K-means clustering. Supervised harmonization regarded the image acquisition factors (center, manufacturer, scanner, filter kernel) as known batch labels, and ComBat harmonization was then implemented separately and sequentially based on the batch labels, i.e., harmonizing features among batches determined by each factor individually or by multiple factors successively. Extensive experiments were conducted to predict overall survival (OS) on public PET/CT datasets that contain 800 patients from 9 centers. Main results: In the external validation cohort, results show that compared to original models without harmonization, ComBat harmonization was beneficial in OS prediction with C-indices of 0.687-0.740 versus 0.684-0.767. Supervised harmonization slightly outperformed unsupervised harmonization in all models (C-index: 0.692-0.767 versus 0.684-0.750). Separate harmonization outperformed sequential harmonization in the CT_m+clinic and CT_cm+clinic models with C-indices of 0.752 and 0.722, respectively, while sequential harmonization involving clinical features in the PET_rs+clinic model further improved the performance and achieved the highest C-index of 0.767. Significance: Optimal batch determination, especially sequential harmonization for ComBat, holds the potential to improve the prognostic power of radiomics models in multi-center HNC datasets with PET/CT imaging.
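
A simplified stand-in for the harmonization step, assuming per-batch location/scale alignment rather than the full empirical-Bayes ComBat model; if acquisition labels are unknown, batches are discovered by k-means as in the unsupervised variant described above.

```python
import numpy as np
from sklearn.cluster import KMeans

def harmonize(features, batch=None, n_batches=3):
    """Align each batch's feature means/scales to the pooled statistics
    (a crude surrogate for ComBat, for illustration only)."""
    if batch is None:
        batch = KMeans(n_clusters=n_batches, n_init=10).fit_predict(features)
    out = features.astype(float).copy()
    grand_mean = features.mean(axis=0)
    grand_std = features.std(axis=0) + 1e-12
    for b in np.unique(batch):
        idx = batch == b
        mu = features[idx].mean(axis=0)
        sd = features[idx].std(axis=0) + 1e-12
        out[idx] = (features[idx] - mu) / sd * grand_std + grand_mean
    return out, batch
```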

Evaluating long-term outcomes via computed tomography in lung cancer screening

  • Wu, D
  • Liu, R
  • Levitt, B
  • Riley, T
  • Baumgartner, KB
J Biom Biostat 2016 Journal Article, cited 0 times

Predicting Genotype and Survival in Glioma Using Standard Clinical MR Imaging Apparent Diffusion Coefficient Images: A Pilot Study from The Cancer Genome Atlas

  • Wu, C-C
  • Jain, R
  • Radmanesh, A
  • Poisson, LM
  • Guo, W-Y
  • Zagzag, D
  • Snuderl, M
  • Placantonakis, DG
  • Golfinos, J
  • Chi, AS
American Journal of Neuroradiology 2018 Journal Article, cited 1 times
Website

Dosiomics improves prediction of locoregional recurrence for intensity modulated radiotherapy treated head and neck cancer cases

  • Wu, A.
  • Li, Y.
  • Qi, M.
  • Lu, X.
  • Jia, Q.
  • Guo, F.
  • Dai, Z.
  • Liu, Y.
  • Chen, C.
  • Zhou, L.
  • Song, T.
Oral Oncol 2020 Journal Article, cited 0 times
Website
OBJECTIVES: To investigate whether dosiomics can improve prediction of locoregional recurrence (LR) in IMRT-treated patients, through a comparison of prediction performance between radiomics-only models and models integrating dosiomics in head and neck cancer cases. MATERIALS AND METHODS: A cohort of 237 patients with head and neck cancer from four different institutions was obtained from The Cancer Imaging Archive and utilized to train and validate the radiomics-only prognostic model and the dosiomics-integrated prognostic model. For radiomics, the radiomics features were initially extracted from images, including CTs and PETs, selected on the basis of their concordance index (CI) values, and then condensed via principal component analysis. Lastly, multivariate Cox proportional hazards regression models were constructed with class-imbalance adjustment as the LR prediction models by inputting those condensed features. For the dosiomics-integrated model, the initial features were similar, but with the additional 3-dimensional dose distribution from radiation treatment plans. The CI and the Kaplan-Meier curves with log-rank analysis were used to assess and compare these models. RESULTS: Observed from the independent validation dataset, the CI of the dosiomics-integrated model (0.66) was significantly different from that of the radiomics-only model (0.59) (Wilcoxon test, p = 5.9 × 10⁻³¹). The integrated model successfully classified the patients into high- and low-risk groups (log-rank test, p = 2.5 × 10⁻²), whereas the radiomics model was not able to provide such classification (log-rank test, p = 0.37). CONCLUSION: Dosiomics improves LR prediction in IMRT-treated patients and should not be neglected in related investigations.

Determining patient abdomen thickness from a single digital radiograph with a computational model: clinical results from a proof of concept study

  • Worrall, M.
  • Vinnicombe, S.
  • Sutton, D.
Br J Radiol 2020 Journal Article, cited 0 times
Website
OBJECTIVE: A computational model has been created to estimate the abdominal thickness of a patient following an X-ray examination; its intended application is assisting with patient dose audit of paediatric X-ray examinations. This work evaluates the accuracy of the computational model in a clinical setting for adult patients undergoing anteroposterior (AP) abdomen X-ray examinations. METHODS: The model estimates patient thickness using the radiographic image, the exposure factors with which the image was acquired, a priori knowledge of the characteristics of the X-ray unit and detector, and the results of extensive Monte Carlo simulation of patient examinations. For 20 patients undergoing AP abdominal X-ray examinations, the model was used to estimate the patient thickness; these estimates were compared against a direct measurement made at the time of the examination. RESULTS: Estimates of patient thickness made using the model were on average within ±5.8% of the measured thickness. CONCLUSION: The model can be used to accurately estimate the thickness of a patient undergoing an AP abdominal X-ray examination where the patient's size falls within the range of the sizes of patients used to create the computational model. ADVANCES IN KNOWLEDGE: This work demonstrates that it is possible to accurately estimate the AP abdominal thickness of an adult patient using the digital X-ray image and a computational model.

Development of a method for automating effective patient diameter estimation for digital radiography

  • Worrall, Mark
2019 Thesis, cited 0 times
Website
National patient dose audit of paediatric radiographic examinations is complicated by a lack of data containing a direct measurement of the patient diameter in the examination orientation or height and weight. This has meant that National Diagnostic Reference Levels (NDRLs) for paediatric radiographic examinations have not been updated in the UK since 2000, despite significant changes in imaging technology over that period. This work is the first step in the development of a computational model intended to automate an estimate of paediatric patient diameter. Whilst the application is intended for a paediatric population, its development within this thesis uses an adult cohort. The computational model uses the radiographic image, the examination exposure factors and a priori information relating to the x-ray system and the digital detector. The computational model uses the Beer-Lambert law. A hypothesis was developed that this would work for clinical exposures despite its single energy photon basis. Values of initial air kerma are estimated from the examination exposure factors and measurements made on the x-ray system. Values of kerma at the image receptor are estimated from a measurement of pixel value made at the centre of the radiograph and the measured calibration between pixel value and kerma for the image receptor. Values of effective linear attenuation coefficient are estimated from Monte Carlo simulations. Monte Carlo simulations were created for two x-ray systems. The simulations were optimised and thoroughly validated to ensure that any result obtained is accurate. The validation process compared simulation results with measurements made on the x-ray units themselves, producing values for effective linear attenuation coefficient that were demonstrated to be accurate. Estimates of attenuator thickness can be made using the estimated values for each variable. The computational model was demonstrated to accurately estimate the thickness of single composition attenuators across a range of thicknesses and exposure factors on three different x-ray systems. The computational model was used in a clinical validation study of 20 adult patients undergoing AP abdominal x-ray examinations. For 19 of these examinations, it estimated the true patient thickness to within ±9%. This work presents a feasible computational model that could be used to automate the estimation of paediatric patient thickness during radiographic examinations allowing for automation of paediatric radiographic dose audit.
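
The core Beer-Lambert inversion described above reduces to a one-line calculation: from K_d = K_0 · exp(−μ_eff · t), the thickness is t = ln(K_0/K_d)/μ_eff. The numeric values below are illustrative, not taken from the thesis.

```python
import numpy as np

def estimate_thickness(k_incident, k_detector, mu_eff):
    """Invert the Beer-Lambert law for patient thickness t (cm).
    k_incident: estimated incident air kerma; k_detector: kerma derived
    from the central pixel value via the detector calibration;
    mu_eff: Monte Carlo-derived effective attenuation coefficient (1/cm)."""
    return np.log(k_incident / k_detector) / mu_eff

# Illustrative values only, e.g. mu_eff ~ 0.2 /cm:
print(estimate_thickness(10.0, 0.15, 0.2))   # ~21 cm
```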

Quantifying the reproducibility of lung ventilation images between 4-Dimensional Cone Beam CT and 4-Dimensional CT

  • Woodruff, Henry C.
  • Shieh, Chun-Chien
  • Hegi-Johnson, Fiona
  • Keall, Paul J.
  • Kipritidis, John
Medical Physics 2017 Journal Article, cited 2 times
Website

Deep learning for semi-automated unidirectional measurement of lung tumor size in CT

  • Woo, M.
  • Devane, A. M.
  • Lowe, S. C.
  • Lowther, E. L.
  • Gimbel, R. W.
Cancer Imaging 2021 Journal Article, cited 0 times
Website
BACKGROUND: Performing Response Evaluation Criteria in Solid Tumors (RECIST) measurement is a non-trivial task requiring much expertise and time. A deep learning-based algorithm has the potential to assist with rapid and consistent lesion measurement. PURPOSE: The aim of this study is to develop and evaluate a deep learning (DL) algorithm for semi-automated unidirectional CT measurement of lung lesions. METHODS: This retrospective study included 1617 lung CT images from 8 publicly open datasets. A convolutional neural network was trained using 1373 training and validation images annotated by two radiologists. Performance of the DL algorithm was evaluated on 244 test images annotated by one radiologist. The DL algorithm's measurement consistency with the human radiologist was evaluated using the Intraclass Correlation Coefficient (ICC) and Bland-Altman plotting. Bonferroni's method was used to analyze differences in their diagnostic behavior attributable to tumor characteristics. Statistical significance was set at p < 0.05. RESULTS: The DL algorithm yielded an ICC score of 0.959 with the human radiologist. Bland-Altman plotting showed that 240 (98.4%) measurements fell within the upper and lower limits of agreement (LOA). Some measurements outside the LOA revealed differences in clinical reasoning between the DL algorithm and the human radiologist. Overall, the algorithm marginally overestimated the size of lesions by 2.97% compared to human radiologists. Further investigation indicated tumor characteristics may be associated with the DL algorithm's tendency to over- or underestimate the lesion size compared to the human radiologist. CONCLUSIONS: The DL algorithm for unidirectional measurement of lung tumor size demonstrated excellent agreement with the human radiologist.
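
Bland-Altman limits of agreement, as used in this study, can be computed directly; a minimal sketch with invented inputs.

```python
import numpy as np

def bland_altman_limits(a, b):
    """Mean difference (bias) and 95% limits of agreement between two
    raters, e.g. algorithm vs radiologist lesion diameters in mm."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Toy example: algorithm vs radiologist measurements (mm).
print(bland_altman_limits([21.0, 34.5, 12.2, 40.1], [20.4, 35.0, 11.8, 39.0]))
```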

Deep Learning Frameworks to Improve Inter-Observer Variability in CT Measurement of Solid Tumor

  • Woo, MinJae
2021 Thesis, cited 0 times
Website

Training and Validation of Deep Learning-Based Auto-Segmentation Models for Lung Stereotactic Ablative Radiotherapy Using Retrospective Radiotherapy Planning Contours

  • Wong, J.
  • Huang, V.
  • Giambattista, J. A.
  • Teke, T.
  • Kolbeck, C.
  • Giambattista, J.
  • Atrchian, S.
Front Oncol 2021 Journal Article, cited 0 times
Website
Purpose: Deep learning-based auto-segmented contour (DC) models require high quality data for their development, and previous studies have typically used prospectively produced contours, which can be resource intensive and time consuming to obtain. The aim of this study was to investigate the feasibility of using retrospective peer-reviewed radiotherapy planning contours in the training and evaluation of DC models for lung stereotactic ablative radiotherapy (SABR). Methods: Using commercial deep learning-based auto-segmentation software, DC models for lung SABR organs at risk (OAR) and gross tumor volume (GTV) were trained using a deep convolutional neural network and a median of 105 contours per structure model obtained from 160 publicly available CT scans and 50 peer-reviewed SABR planning 4D-CT scans from center A. DCs were generated for 50 additional planning CT scans from center A and 50 from center B, and compared with the clinical contours (CC) using the Dice Similarity Coefficient (DSC) and 95% Hausdorff distance (HD). Results: Comparing DCs to CCs, the mean DSC and 95% HD were 0.93 and 2.85 mm for aorta, 0.81 and 3.32 mm for esophagus, 0.95 and 5.09 mm for heart, 0.98 and 2.99 mm for bilateral lung, 0.52 and 7.08 mm for bilateral brachial plexus, 0.82 and 4.23 mm for proximal bronchial tree, 0.90 and 1.62 mm for spinal cord, 0.91 and 2.27 mm for trachea, and 0.71 and 5.23 mm for GTV. DC to CC comparisons of center A and center B were similar for all OAR structures. Conclusions: The DCs developed with retrospective peer-reviewed treatment contours approximated CCs for the majority of OARs, including on an external dataset. DCs for structures with more variability tended to be less accurate and likely require using a larger number of training cases or novel training approaches to improve performance. Developing DC models from existing radiotherapy planning contours appears feasible and warrants further clinical workflow testing.
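
Plausible NumPy/SciPy implementations of the two reported metrics: DSC, and a 95% Hausdorff distance computed from surface distance transforms. The percentile-based HD95 definition is a common convention, assumed here rather than taken from the study.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dsc(a, b):
    """Dice Similarity Coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric surface distance (mm, given voxel spacing)."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a & ~binary_erosion(a)            # boundary voxels of a
    surf_b = b & ~binary_erosion(b)            # boundary voxels of b
    dt_a = distance_transform_edt(~surf_a, sampling=spacing)
    dt_b = distance_transform_edt(~surf_b, sampling=spacing)
    d = np.concatenate([dt_b[surf_a], dt_a[surf_b]])  # both directions
    return np.percentile(d, 95)
```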

Comparing deep learning-based auto-segmentation of organs at risk and clinical target volumes to expert inter-observer variability in radiotherapy planning

  • Wong, Jordan
  • Fong, Allan
  • McVicar, Nevin
  • Smith, Sally
  • Giambattista, Joshua
  • Wells, Derek
  • Kolbeck, Carter
  • Giambattista, Jonathan
  • Gondara, Lovedeep
  • Alexander, Abraham
Radiother Oncol 2019 Journal Article, cited 0 times
Website
BACKGROUND: Deep learning-based auto-segmented contours (DC) aim to alleviate labour intensive contouring of organs at risk (OAR) and clinical target volumes (CTV). Most previous DC validation studies have a limited number of expert observers for comparison and/or use a validation dataset related to the training dataset. We determine if DC models are comparable to Radiation Oncologist (RO) inter-observer variability on an independent dataset. METHODS: Expert contours (EC) were created by multiple ROs for central nervous system (CNS), head and neck (H&N), and prostate radiotherapy (RT) OARs and CTVs. DCs were generated using deep learning-based auto-segmentation software trained by a single RO on publicly available data. Contours were compared using Dice Similarity Coefficient (DSC) and 95% Hausdorff distance (HD). RESULTS: Sixty planning CT scans had 2-4 ECs, for a total of 60 CNS, 53 H&N, and 50 prostate RT contour sets. The mean DC and EC contouring times were 0.4 vs 7.7 min for CNS, 0.6 vs 26.6 min for H&N, and 0.4 vs 21.3 min for prostate RT contours. There were minimal differences in DSC and 95% HD involving DCs for OAR comparisons, but more noticeable differences for CTV comparisons. CONCLUSIONS: The accuracy of DCs trained by a single RO is comparable to expert inter-observer variability for the RT planning contours in this study. Use of deep learning-based auto-segmentation in clinical practice will likely lead to significant benefits to RT planning workflow and resources.

Small Lesion Segmentation in Brain MRIs with Subpixel Embedding

  • Wong, Alex
  • Chen, Allison
  • Wu, Yangchao
  • Cicek, Safa
  • Tiard, Alexandre
  • Hong, Byung-Woo
  • Soatto, Stefano
2022 Book Section, cited 0 times
We present a method to segment MRI scans of the human brain into ischemic stroke lesion and normal tissues. We propose a neural network architecture in the form of a standard encoder-decoder where predictions are guided by a spatial expansion embedding network. Our embedding network learns features that can resolve detailed structures in the brain without the need for high-resolution training images, which are often unavailable and expensive to acquire. Alternatively, the encoder-decoder learns global structures by means of striding and max pooling. Our embedding network complements the encoder-decoder architecture by guiding the decoder with fine-grained details lost to spatial downsampling during the encoder stage. Unlike previous works, our decoder outputs at 2× the input resolution, where a single pixel in the input resolution is predicted by four neighboring subpixels in our output. To obtain the output at the original scale, we propose a learnable downsampler (as opposed to hand-crafted ones e.g. bilinear) that combines subpixel predictions. Our approach improves the baseline architecture by ≈ 11.7% and achieves the state of the art on the ATLAS public benchmark dataset with a smaller memory footprint and faster runtime than the best competing method. Our source code has been made available at: https://github.com/alexklwong/subpixel-embedding-segmentation.
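
A sketch of the learnable-downsampler idea using torch.nn.PixelUnshuffle: each 2×2 neighbourhood of the 2×-resolution prediction is rearranged into channels and mixed by a learned 1×1 convolution. The actual layer in the released code may differ.

```python
import torch
import torch.nn as nn

class LearnableDownsampler(nn.Module):
    """Collapse a 2x-resolution prediction back to input resolution
    with learned subpixel mixing (illustrative, not the paper's exact layer)."""
    def __init__(self, n_classes):
        super().__init__()
        self.unshuffle = nn.PixelUnshuffle(2)              # (C,2H,2W) -> (4C,H,W)
        self.mix = nn.Conv2d(4 * n_classes, n_classes, kernel_size=1)

    def forward(self, x):
        return self.mix(self.unshuffle(x))

y = LearnableDownsampler(2)(torch.randn(1, 2, 64, 64))     # -> (1, 2, 32, 32)
```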

Improving breast cancer diagnostics with deep learning for MRI

  • Witowski, Jan
  • Heacock, Laura
  • Reig, Beatriu
  • Kang, Stella K
  • Lewin, Alana
  • Pysarenko, Kristine
  • Patel, Shalin
  • Samreen, Naziya
  • Rudnicki, Wojciech
  • Łuczyńska, Elżbieta
Science Translational Medicine 2022 Journal Article, cited 0 times
Website

Introducing the Medical Physics Dataset Article

  • Williamson, Jeffrey F
  • Das, Shiva K
  • Goodsitt, Mitchell S
  • Deasy, Joseph O
Medical Physics 2017 Journal Article, cited 7 times
Website

Effect of patient inhalation profile and airway structure on drug deposition in image-based models with particle-particle interactions

  • Williams, J.
  • Kolehmainen, J.
  • Cunningham, S.
  • Ozel, A.
  • Wolfram, U.
Int J Pharm 2022 Journal Article, cited 0 times
Website
For many of the one billion sufferers of respiratory diseases worldwide, managing their disease with inhalers improves their ability to breathe. Poor disease management and rising pollution can trigger exacerbations that require urgent relief. Higher drug deposition in the throat instead of the lungs limits the impact on patient symptoms. To optimise delivery to the lung, patient-specific computational studies of aerosol inhalation can be used. However, in many studies, inhalation modelling does not represent situations when the breathing is impaired, such as in recovery from an exacerbation, where the patient's inhalation is much faster and shorter. Here we compare differences in deposition of inhaler particles (10 and 4 μm) in the airways of three patients. We aimed to evaluate deposition differences between healthy and impaired breathing with image-based healthy and diseased patient models. We found that the ratio of drug in the lower to upper lobes was 35% larger with a healthy inhalation. For smaller particles the upper airway deposition was similar in all patients, but local deposition hotspots differed in size, location and intensity. Our results indicate that image-based airways must be used in respiratory modelling. Various inhalation profiles should be tested for optimal prediction of inhaler deposition.

Deep-Learning-based Segmentation of Organs-at-Risk in the Head for MR-assisted Radiation Therapy Planning

  • Wiesinger, Florian
  • Petit, Steven
  • Hideghéty, Katalin
  • Hernandez Tamames, Juan
  • McCallum, Hazel
  • Maxwell, Ross
  • Pearson, Rachel
  • Verduijn, Gerda
  • Darázs, Barbara
  • Kaushik, Sandeep
  • Cozzini, Cristina
  • Bobb, Chad
  • Fodor, Emese
  • Paczona, Viktor
  • Kószó, Renáta
  • Együd, Zsófia
  • Borzasi, Emőke
  • Végváry, Zoltán
  • Tan, Tao
  • Gyalai, Bence
  • Czabány, Renáta
  • Deák-Karancsi, Borbála
  • Kolozsvári, Bernadett
  • Czipczer, Vanda
  • Capala, Marta
  • Ruskó, László
2021 Journal Article, cited 0 times
Website
Segmentation of organs-at-risk (OAR) in MR images has several clinical applications, including radiation therapy (RT) planning. This paper presents a deep-learning-based method to segment 15 structures in the head region. The proposed method first applies 2D U-Net models to each of the three planes (axial, coronal, sagittal) to roughly segment the structure. Then, the results of the 2D models are combined into a fused prediction to localize the 3D bounding box of the structure. Finally, a 3D U-Net is applied to the volume of the bounding box to determine the precise contour of the structure. The model was trained on a public dataset and evaluated on both public and private datasets that contain T2-weighted MR scans of the head-and-neck region. For all cases, the contour of each structure was defined by operators trained by expert clinical delineators. The evaluation demonstrated that various structures can be accurately and efficiently localized and segmented using the presented framework. The contours generated by the proposed method were also qualitatively evaluated; the majority (92%) of the segmented OARs were rated as clinically useful for radiation therapy.

Supervised Machine Learning Approach Utilizing Artificial Neural Networks for Automated Prostate Zone Segmentation in Abdominal MR images

  • Wieser, Hans-Peter
2013 Thesis, cited 0 times
Website

Proton radiotherapy spot order optimization to maximize the FLASH effect

  • Widenfalk, Oscar
2023 Thesis, cited 0 times
Website
Cancer is a group of deadly diseases; radiotherapy is one of its treatment methods. Recent studies indicate advantages of delivering so-called FLASH treatments using ultra-high dose rates (> 40 Gy/s), with a normal-tissue-sparing FLASH effect. Delivering a high dose in a short time imposes requirements on both the treatment machine and the treatment plan. To see as much of the FLASH effect as possible, the delivery pattern should be optimized, which is the focus of this thesis. The optimization method was applied to 17 lung plans, and the results show that a local-search-based optimization performs well overall, achieving a mean FLASH coverage of 31.7% outside of the CTV after a mean optimization time of 8.75 s. This is faster than published results using a genetic algorithm.
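
The local search can be sketched as pairwise swaps of the spot delivery order against a user-supplied surrogate score; the thesis' true FLASH-coverage objective and move set are not reproduced here, so `score` is an assumed placeholder.

```python
import random

def local_search(order, score, iters=10000, seed=0):
    """Greedy pairwise-swap local search over a spot delivery order.
    `score(order)` is a hypothetical surrogate for FLASH coverage."""
    rng = random.Random(seed)
    best, best_val = list(order), score(order)
    for _ in range(iters):
        i, j = rng.sample(range(len(best)), 2)
        cand = list(best)
        cand[i], cand[j] = cand[j], cand[i]     # propose a swap
        val = score(cand)
        if val > best_val:                       # keep only improvements
            best, best_val = cand, val
    return best, best_val
```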

Customized Federated Learning for Multi-Source Decentralized Medical Image Classification

  • Wicaksana, J.
  • Yan, Z.
  • Yang, X.
  • Liu, Y.
  • Fan, L.
  • Cheng, K. T.
IEEE J Biomed Health Inform 2022 Journal Article, cited 4 times
Website
The performance of deep networks for medical image analysis is often constrained by limited medical data, which is privacy-sensitive. Federated learning (FL) alleviates the constraint by allowing different institutions to collaboratively train a federated model without sharing data. However, the federated model is often suboptimal with respect to the characteristics of each client's local data. Instead of training a single global model, we propose Customized FL (CusFL), for which each client iteratively trains a client-specific/private model based on a federated global model aggregated from all private models trained in the immediate previous iteration. Two overarching strategies employed by CusFL lead to its superior performance: 1) the federated model is mainly for feature alignment and thus only consists of feature extraction layers; 2) the federated feature extractor is used to guide the training of each private model. In that way, CusFL allows each client to selectively learn useful knowledge from the federated model to improve its personalized model. We evaluated CusFL on multi-source medical image datasets for the identification of clinically significant prostate cancer and the classification of skin lesions.
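
The encoder-only federation step could look like the following FedAvg-style averaging of feature-extraction weights, matching the CusFL idea that only the feature extractor is shared while classifiers stay private; the "encoder." key prefix is an assumption for illustration.

```python
import torch

def fedavg_encoder(state_dicts, shared_prefix="encoder."):
    """Average only the feature-extraction parameters across client
    state_dicts; each client keeps its own private classifier head."""
    keys = [k for k in state_dicts[0] if k.startswith(shared_prefix)]
    return {k: torch.stack([sd[k].float() for sd in state_dicts]).mean(dim=0)
            for k in keys}

# Each client then loads the averaged encoder with strict=False, e.g.:
# client_model.load_state_dict(fedavg_encoder(all_clients), strict=False)
```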

The Image Biomarker Standardization Initiative: Standardized Convolutional Filters for Reproducible Radiomics and Enhanced Clinical Insights

  • Whybra, P.
  • Zwanenburg, A.
  • Andrearczyk, V.
  • Schaer, R.
  • Apte, A. P.
  • Ayotte, A.
  • Baheti, B.
  • Bakas, S.
  • Bettinelli, A.
  • Boellaard, R.
  • Boldrini, L.
  • Buvat, I.
  • Cook, G. J. R.
  • Dietsche, F.
  • Dinapoli, N.
  • Gabrys, H. S.
  • Goh, V.
  • Guckenberger, M.
  • Hatt, M.
  • Hosseinzadeh, M.
  • Iyer, A.
  • Lenkowicz, J.
  • Loutfi, M. A. L.
  • Lock, S.
  • Marturano, F.
  • Morin, O.
  • Nioche, C.
  • Orlhac, F.
  • Pati, S.
  • Rahmim, A.
  • Rezaeijo, S. M.
  • Rookyard, C. G.
  • Salmanpour, M. R.
  • Schindele, A.
  • Shiri, I.
  • Spezi, E.
  • Tanadini-Lang, S.
  • Tixier, F.
  • Upadhaya, T.
  • Valentini, V.
  • van Griethuysen, J. J. M.
  • Yousefirizi, F.
  • Zaidi, H.
  • Muller, H.
  • Vallieres, M.
  • Depeursinge, A.
Radiology 2024 Journal Article, cited 1 times
Website
Filters are commonly used to enhance specific structures and patterns in images, such as vessels or peritumoral regions, to enable clinical insights beyond the visible image using radiomics. However, their lack of standardization restricts reproducibility and clinical translation of radiomics decision support tools. In this special report, teams of researchers who developed radiomics software participated in a three-phase study (September 2020 to December 2022) to establish a standardized set of filters. The first two phases focused on finding reference filtered images and reference feature values for commonly used convolutional filters: mean, Laplacian of Gaussian, Laws and Gabor kernels, separable and nonseparable wavelets (including decomposed forms), and Riesz transformations. In the first phase, 15 teams used digital phantoms to establish 33 reference filtered images of 36 filter configurations. In phase 2, 11 teams used a chest CT image to derive reference values for 323 of 396 features computed from filtered images using 22 filter and image processing configurations. Reference filtered images and feature values for Riesz transformations were not established. Reproducibility of standardized convolutional filters was validated on a public data set of multimodal imaging (CT, fluorodeoxyglucose PET, and T1-weighted MRI) in 51 patients with soft-tissue sarcoma. At validation, reproducibility of 486 features computed from filtered images using nine configurations × three imaging modalities was assessed using the lower bounds of 95% CIs of intraclass correlation coefficients. Out of 486 features, 458 were found to be reproducible across nine teams with lower bounds of 95% CIs of intraclass correlation coefficients greater than 0.75. In conclusion, eight filter types were standardized with reference filtered images and reference feature values for verifying and calibrating radiomics software packages. A web-based tool is available for compliance checking.
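
Applying one of the standardized filter classes, the Laplacian of Gaussian, is a one-liner in SciPy. Note that the IBSI reference configurations specify sigma in millimetres via the voxel spacing, which this minimal sketch omits.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Stand-in for a CT volume; sigma here is in voxels, not mm.
volume = np.random.rand(32, 64, 64).astype(np.float32)
log_image = gaussian_laplace(volume, sigma=1.5)

# A first-order feature of the filtered image, of the kind standardized above.
mean_log = float(log_image.mean())
```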

Sensitivity of standardised radiomics algorithms to mask generation across different software platforms

  • Whybra, Philip
  • Spezi, Emiliano
Scientific Reports 2023 Journal Article, cited 0 times
Website
The field of radiomics continues to converge on a standardised approach to image processing and feature extraction. Conventional radiomics requires a segmentation, and certain features can be sensitive to small contour variations. The industry standard for medical image communication stores contours as coordinate points that must be converted to a binary mask before image processing can take place. This study investigates the impact that the process of converting contours to masks can have on radiomic feature calculation. To this end we used a popular open dataset for radiomics standardisation and compared the impact of masks generated by importing the dataset into four medical imaging software platforms. We interfaced our previously standardised radiomics platform with these platforms using their published application programming interfaces to access the image volume, masks, and other data needed to calculate features. Additionally, we used super-sampling strategies to systematically evaluate the impact of contour data pre-processing methods on radiomic feature calculation. Finally, we evaluated the effect that using different mask generation approaches could have on patient clustering in a multi-centre radiomics study. The study shows that even when working on the same dataset, mask and feature discrepancies occur depending on the contour-to-mask conversion technique implemented in various medical imaging software. We show that this also affects patient clustering and potentially radiomics-based modelling in multi-centre studies where a mix of mask generation software is used. We provide recommendations to negate this issue and facilitate reproducible and reliable radiomics.
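
A typical contour-to-mask rasterisation with scikit-image is shown below; boundary inclusion rules differ between implementations, which is precisely the discrepancy the study measures, so this is one possible convention rather than a reference one.

```python
import numpy as np
from skimage.draw import polygon

def contour_to_mask(rows, cols, shape):
    """Rasterise one closed planar contour, given in pixel coordinates,
    to a binary mask. Whether boundary pixels are included depends on
    the rasterisation convention of the library used."""
    mask = np.zeros(shape, dtype=bool)
    rr, cc = polygon(rows, cols, shape=shape)
    mask[rr, cc] = True
    return mask

# Toy triangle contour on a 16x16 grid:
print(contour_to_mask([2, 12, 12], [8, 2, 14], (16, 16)).sum())
```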

CT-based radiomic analysis of hepatocellular carcinoma patients to predict key genomic information

  • West, Derek L
  • Kotrotsou, Aikaterini
  • Niekamp, Andrew Scott
  • Idris, Tagwa
  • Giniebra Camejo, Dunia
  • Mazal, Nicolas James
  • Cardenas, Nicolas James
  • Goldberg, Jackson L
  • Colen, Rivka R
Journal of Clinical Oncology 2017 Journal Article, cited 1 times
Website

Deep learning in CT colonography: differentiating premalignant from benign colorectal polyps

  • Wesp, P.
  • Grosu, S.
  • Graser, A.
  • Maurus, S.
  • Schulz, C.
  • Knosel, T.
  • Fabritius, M. P.
  • Schachtner, B.
  • Yeh, B. M.
  • Cyran, C. C.
  • Ricke, J.
  • Kazmierczak, P. M.
  • Ingrisch, M.
Eur Radiol 2022 Journal Article, cited 0 times
Website
OBJECTIVES: To investigate the differentiation of premalignant from benign colorectal polyps detected by CT colonography using deep learning. METHODS: In this retrospective analysis of an average-risk colorectal cancer screening sample, polyps of all size categories and morphologies were manually segmented on supine and prone CT colonography images and classified as premalignant (adenoma) or benign (hyperplastic polyp or regular mucosa) according to histopathology. Two deep learning models, SEG and noSEG, were trained on 3D CT colonography image subvolumes to predict polyp class, and model SEG was additionally trained with polyp segmentation masks. Diagnostic performance was validated in an independent external multicentre test sample. Predictions were analysed with the visualisation technique Grad-CAM++. RESULTS: The training set consisted of 107 colorectal polyps in 63 patients (mean age: 63 ± 8 years, 40 men) comprising 169 polyp segmentations. The external test set included 77 polyps in 59 patients comprising 118 polyp segmentations. Model SEG achieved a ROC-AUC of 0.83 and 80% sensitivity at 69% specificity for differentiating premalignant from benign polyps. Model noSEG yielded a ROC-AUC of 0.75, 80% sensitivity at 44% specificity, and an average Grad-CAM++ heatmap score of ≥ 0.25 in 90% of polyp tissue. CONCLUSIONS: In this proof-of-concept study, deep learning enabled the differentiation of premalignant from benign colorectal polyps detected with CT colonography and the visualisation of image regions important for predictions. The approach did not require polyp segmentation and thus has the potential to facilitate the identification of high-risk polyps as an automated second reader. KEY POINTS: * Non-invasive deep learning image analysis may differentiate premalignant from benign colorectal polyps found in CT colonography scans. * Deep learning autonomously learned to focus on polyp tissue for predictions without the need for prior polyp segmentation by experts. * Deep learning potentially improves the diagnostic accuracy of CT colonography in colorectal cancer screening by allowing for a more precise selection of patients who would benefit from endoscopic polypectomy, especially for patients with polyps of 6-9 mm size.

Multi-task Learning for Brain Tumor Segmentation

  • Weninger, Leon
  • Liu, Qianyu
  • Merhof, Dorit
2020 Book Section, cited 0 times
Accurate and reproducible detection of a brain tumor and segmentation of its sub-regions has high relevance in clinical trials and practice. Numerous recent publications have shown that deep learning algorithms are well suited for this application. However, fully supervised methods require a large amount of annotated training data. To obtain such data, time-consuming expert annotations are necessary. Furthermore, the enhancing core appears to be the most challenging to segment among the different sub-regions. Therefore, we propose a novel and straightforward method to improve brain tumor segmentation by joint learning of three related tasks with a partly shared architecture. Next to the tumor segmentation, image reconstruction and detection of enhancing tumor are learned simultaneously using a shared encoder. Meanwhile, different decoders are used for the different tasks, allowing for arbitrary switching of the loss function. In effect, this means that the architecture can partly learn on data without annotations by using only the autoencoder part. This makes it possible to train on bigger, but unannotated datasets, as only the segmenting decoder needs to be fine-tuned solely on annotated images. The second auxiliary task, detecting the presence of enhancing tumor tissue, is intended to provide a focus of the network on this area, and provides further information for postprocessing. The final prediction on the BraTS validation data using our method gives Dice scores of 0.89, 0.79 and 0.75 for the whole tumor, tumor core and the enhancing tumor region, respectively.

Automatic Segmentation of Brain Tumor from 3D MR Images Using SegNet, U-Net, and PSP-Net

  • Weng, Yan-Ting
  • Chan, Hsiang-Wei
  • Huang, Teng-Yi
2020 Book Section, cited 0 times
In this study, we used three two-dimensional convolutional neural networks, SegNet, U-Net, and PSP-Net, to design an automatic segmentation of brain tumors from three-dimensional MR datasets. We extracted 2D slices from three slice orientations as the input tensors of the networks in the training stage. In the prediction stage, we predict a volume several times, slicing along different angles. Based on the results, we found that volumes predicted more times yielded better outcomes than those predicted fewer times. We also implemented two ensemble methods to combine the results of the three networks. According to the results, the above strategies all contributed to improving the accuracy of segmentation.

General purpose radiomics for multi-modal clinical research

  • Wels, Michael G.
  • Suehling, Michael
  • Muehlberg, Alexander
  • Lades, Félix
2019 Conference Proceedings, cited 0 times
Website
In this paper we present an integrated software solution targeting clinical researchers for discovering relevant radiomic biomarkers, covering the entire value chain of clinical radiomics research. Its intention is to make this kind of research possible even for less experienced scientists. The solution provides means to create, collect, manage, and statistically analyze patient cohorts consisting of potentially multimodal 3D medical imaging data, associated volume-of-interest annotations, and radiomic features. Volumes of interest can be created by an extensive set of semi-automatic segmentation tools. Radiomic feature computation relies on the de facto standard library PyRadiomics and ensures comparability and reproducibility of carried-out studies. Tabular cohort studies containing the radiomics of the volumes of interest can be managed directly within the software solution. The integrated statistical analysis capabilities introduce an additional layer of abstraction allowing non-experts to benefit from radiomics research as well. There are ready-to-use methods for clustering, uni- and multivariate statistics, and machine learning to be applied to the collected cohorts. They are validated in two case studies: first, on a subset of the publicly available NSCLC-Radiomics data collection containing pretreatment CT scans of 317 non-small cell lung cancer (NSCLC) patients, and second, on the Lung Image Database Consortium imaging study with diagnostic and lung cancer screening CT scans including 2,753 distinct lesions from 870 patients. Integrated software solutions with optimized workflows like the one presented, and further developments thereof, may play an important role in making precision medicine come to life in clinical environments.
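
A minimal PyRadiomics extraction call of the kind such a solution wraps; the file paths and the binWidth setting are illustrative, not the paper's configuration.

```python
from radiomics import featureextractor

# Default PyRadiomics extraction from an image/mask pair
# (paths to any ITK-readable volumes; values shown are placeholders).
extractor = featureextractor.RadiomicsFeatureExtractor(binWidth=25)
features = extractor.execute("ct_volume.nii.gz", "tumour_mask.nii.gz")

# Keep only first-order features for a quick look.
firstorder = {k: v for k, v in features.items() if "firstorder" in k}
print(len(firstorder))
```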

Predicting Isocitrate Dehydrogenase Mutation Status in Glioma Using Structural Brain Networks and Graph Neural Networks

  • Wei, Yiran
  • Li, Yonghao
  • Chen, Xi
  • Schönlieb, Carola-Bibiane
  • Li, Chao
  • Price, Stephen J.
2022 Book Section, cited 0 times
Glioma is a common malignant brain tumor with distinct survival among patients. The isocitrate dehydrogenase (IDH) gene mutation provides critical diagnostic and prognostic value for glioma. It is of crucial significance to non-invasively predict IDH mutation based on pre-treatment MRI. Machine learning/deep learning models show reasonable performance in predicting IDH mutation using MRI. However, most models neglect the systematic brain alterations caused by tumor invasion, where widespread infiltration along white matter tracts is a hallmark of glioma. Structural brain network provides an effective tool to characterize brain organisation, which could be captured by the graph neural networks (GNN) to more accurately predict IDH mutation. Here we propose a method to predict IDH mutation using GNN, based on the structural brain network of patients. Specifically, we firstly construct a network template of healthy subjects, consisting of atlases of edges (white matter tracts) and nodes (cortical/subcortical brain regions) to provide regions of interest (ROIs). Next, we employ autoencoders to extract the latent multi-modal MRI features from the ROIs of edges and nodes in patients, to train a GNN architecture for predicting IDH mutation. The results show that the proposed method outperforms the baseline models using the 3D-CNN and 3D-DenseNet. In addition, model interpretation suggests its ability to identify the tracts infiltrated by tumor, corresponding to clinical prior knowledge. In conclusion, integrating brain networks with GNN offers a new avenue to study brain lesions using computational neuroscience and computer vision approaches.

A Gaussian Mixture Model based Level Set Method for Volume Segmentation in Medical Images

  • Webb, Grayson
2018 Thesis, cited 0 times
Website
This thesis proposes a probabilistic level set method to be used in segmentation of tumors with heterogeneous intensities. It models the intensities of the tumor and surrounding tissue using Gaussian mixture models. Through a contour-based initialization procedure, samples are gathered to be used in expectation maximization of the mixture model parameters. The proposed method is compared against a threshold-based segmentation method using MRI images retrieved from The Cancer Imaging Archive. The cases are manually segmented, and an automated testing procedure is used to find optimal parameters for the proposed method, which is then tested against the threshold-based method. Segmentation times, Dice coefficients, and volume errors are compared. The evaluation reveals that the proposed method has a mean segmentation time comparable to that of the threshold-based method, and performs faster in cases where the volume error does not exceed 40%. The mean Dice coefficient and volume error are also improved, while achieving lower deviation.
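
The EM fitting step can be sketched with scikit-learn's GaussianMixture; how the resulting densities enter the level-set speed function is not reproduced here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_region_model(intensities, n_components=3):
    """EM fit of a Gaussian mixture to voxel intensities sampled via the
    contour-based initialisation; the fitted densities can then drive
    the region terms of the level-set evolution."""
    x = np.asarray(intensities, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(x)
    return gmm   # gmm.score_samples(x) gives per-voxel log-likelihoods
```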

Cox models with time‐varying covariates and partly‐interval censoring–A maximum penalised likelihood approach

  • Webb, Annabel
  • Ma, Jun
2022 Journal Article, cited 0 times
Time-varying covariates can be important predictors when model-based predictions are considered. A Cox model that includes time-varying covariates is usually referred to as an extended Cox model. When only right censoring is present in the observed survival times, the conventional partial likelihood method is still applicable to estimate the regression coefficients of an extended Cox model. However, if there are interval-censored survival times, then the partial likelihood method is not directly available unless an imputation, such as middle-point imputation, is used to replace the left- and interval-censored data. However, such imputation methods are well known for causing biases. This paper considers fitting of extended Cox models using the maximum penalised likelihood method, allowing observed survival times to be partly interval-censored, where a penalty function is used to regularise the baseline hazard estimate. We present simulation studies to demonstrate the performance of our proposed method, and illustrate our method with applications to two real datasets from medical research.
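
For contrast with the paper's penalised-likelihood approach, a standard right-censored extended Cox fit with lifelines' CoxTimeVaryingFitter looks like this; the long-format toy data are invented, and note that lifelines does not handle the interval censoring the paper addresses.

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Long format: one row per interval over which covariates are constant.
df = pd.DataFrame({
    "id":        [1, 1, 2, 2, 3],
    "start":     [0.0, 6.0, 0.0, 4.0, 0.0],
    "stop":      [6.0, 14.0, 4.0, 9.0, 7.0],
    "biomarker": [1.1, 2.3, 0.8, 1.9, 0.5],   # time-varying covariate
    "event":     [0, 1, 0, 1, 0],             # event flag at end of interval
})
ctv = CoxTimeVaryingFitter(penalizer=0.1)
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()
```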

EfficientNetV2 based for MRI brain tumor image classification

  • Waskita, A. A.
  • Amda, Julfa Muhammad
  • Sihono, Dwi Seno Kuncoro
  • Prasetio, Heru
2023 Conference Paper, cited 1 times
Website
An accurate and timely diagnosis is of utmost importance when it comes to treating brain tumors effectively. To facilitate this process, we have developed a brain tumor classification approach that employs transfer learning using a pre-trained version of the EfficientNet V2 model. Our dataset comprises brain tumor images that have been categorized into four distinct labels: tumor (glioma, meningioma, pituitary) and normal. As our base model, we employed the EfficientNet V2 model with the B0, B1, B2, and B3 variants for experiments. To adapt the model to our number of label categories, we modified the final layer and retrained it on our dataset. Our optimization process used the Adam algorithm and the categorical cross-entropy loss function. We conducted experiments in multiple stages, involving dataset randomization, pre-processing, model training, and evaluation. During the evaluation, we used appropriate metrics to assess the accuracy and loss on the test data, and analyzed model performance by visualizing the loss and accuracy curves throughout the training process. This experimentation yielded strong accuracy and loss rates on the test data, and led to the successful classification of brain tumors with the EfficientNet V2 B0, B1, B2, and B3 variants. Additionally, a confusion matrix allowed us to assess the classification ability for each tumor category. This research has the potential to enhance medical diagnosis by utilizing transfer learning techniques and pre-trained models, and we hope the approach can help detect and treat brain tumors in their early stages, ultimately leading to better patient outcomes.
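The transfer-learning setup described here has a standard realization in Keras; the following sketch (our illustration, with an assumed 224×224 input size) loads a pretrained EfficientNetV2-B0 backbone and replaces the classifier head with a four-way softmax:

    import tensorflow as tf

    # Load EfficientNetV2-B0 pretrained on ImageNet, without its classifier head.
    base = tf.keras.applications.EfficientNetV2B0(
        include_top=False, weights="imagenet", input_shape=(224, 224, 3))

    # Replace the final layer with a 4-way head:
    # glioma, meningioma, pituitary, normal.
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(4, activation="softmax"),
    ])

    # Adam optimiser + categorical cross-entropy, as described in the abstract.
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])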

Quantifying the incremental value of deep learning: Application to lung nodule detection

  • Warsavage, Theodore Jr
  • Xing, Fuyong
  • Baron, Anna E
  • Feser, William J
  • Hirsch, Erin
  • Miller, York E
  • Malkoski, Stephen
  • Wolf, Holly J
  • Wilson, David O
  • Ghosh, Debashis
PLoS One 2020 Journal Article, cited 0 times
Website
We present a case study for implementing a machine learning algorithm with an incremental value framework in the domain of lung cancer research. Machine learning methods have often been shown to be competitive with prediction models in some domains; however, implementation of these methods is in early development. Often these methods are only directly compared to existing methods; here we present a framework for assessing the value of a machine learning model by assessing the incremental value. We developed a machine learning model to identify and classify lung nodules and assessed the incremental value added to existing risk prediction models. Multiple external datasets were used for validation. We found that our image model, trained on a dataset from The Cancer Imaging Archive (TCIA), improves upon existing models that are restricted to patient characteristics, but it was inconclusive about whether it improves on models that consider nodule features. Another interesting finding is the variable performance on different datasets, suggesting population generalization with machine learning models may be more challenging than is often considered.
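The incremental-value comparison reduces to fitting a risk model with and without the machine-learned image score and comparing discrimination; a minimal sketch on synthetic data (not the study's code) might look like:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 500
    clinical = rng.normal(size=(n, 3))      # e.g., age, smoking, nodule size
    image_score = rng.normal(size=(n, 1))   # output of the image model
    y = (clinical[:, 0] + image_score[:, 0] + rng.normal(size=n) > 0).astype(int)

    base = LogisticRegression().fit(clinical, y)
    full = LogisticRegression().fit(np.hstack([clinical, image_score]), y)

    auc_base = roc_auc_score(y, base.predict_proba(clinical)[:, 1])
    auc_full = roc_auc_score(
        y, full.predict_proba(np.hstack([clinical, image_score]))[:, 1])
    print(f"incremental AUC: {auc_full - auc_base:+.3f}")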

Survival analysis of pre-operative GBM patients by using quantitative image features

  • Wangaryattawanich, Pattana
  • Wang, Jixin
  • Thomas, Ginu A
  • Chaddad, Ahmad
  • Zinn, Pascal O
  • Colen, Rivka R
2014 Conference Proceedings, cited 1 times
Website
This paper presents a preliminary study of the relationship between both overall and progression-free survival time and multiple imaging features of patients with glioblastoma. The results showed that specific imaging features have significant prognostic value for predicting survival time in glioblastoma patients.

Multicenter imaging outcomes study of The Cancer Genome Atlas glioblastoma patient cohort: imaging predictors of overall and progression-free survival

  • Wangaryattawanich, Pattana
  • Hatami, Masumeh
  • Wang, Jixin
  • Thomas, Ginu
  • Flanders, Adam
  • Kirby, Justin
  • Wintermark, Max
  • Huang, Erich S.
  • Bakhtiari, Ali Shojaee
  • Luedi, Markus M.
  • Hashmi, Syed S.
  • Rubin, Daniel L.
  • Chen, James Y.
  • Hwang, Scott N.
  • Freymann, John
  • Holder, Chad A.
  • Zinn, Pascal O.
  • Colen, Rivka R.
2015 Journal Article, cited 40 times
Website
Despite an aggressive therapeutic approach, the prognosis for most patients with glioblastoma (GBM) remains poor. The aim of this study was to determine the significance of preoperative MRI variables, both quantitative and qualitative, with regard to overall and progression-free survival in GBM. We retrospectively identified 94 untreated GBM patients from the Cancer Imaging Archive who had pretreatment MRI and corresponding patient outcomes and clinical information in The Cancer Genome Atlas. Qualitative imaging assessments were based on the Visually Accessible Rembrandt Images feature-set criteria. Volumetric parameters were obtained for the specific tumor components: contrast enhancement, necrosis, and edema/invasion. Cox regression was used to assess the prognostic and survival significance of each imaging variable. Univariable Cox regression analysis demonstrated 10 imaging features and 2 clinical variables to be significantly associated with overall survival. Multivariable Cox regression analysis showed that tumor-enhancing volume (P = .03) and eloquent brain involvement (P < .001) were independent prognostic indicators of overall survival. In the multivariable Cox analysis of the volumetric features, an edema/invasion volume of more than 85 000 mm3 and the proportion of enhancing tumor were significantly correlated with higher mortality (P = .004 and .003, respectively). Preoperative MRI parameters have a significant prognostic role in predicting survival in patients with GBM, thus making them useful for patient stratification and endpoint biomarkers in clinical trials.
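A Cox regression of this kind is straightforward to reproduce with the lifelines package; the following sketch uses invented toy numbers purely to show the mechanics, not the study's data:

    import pandas as pd
    from lifelines import CoxPHFitter

    # Toy stand-in for the study design: volumetric features + outcome.
    df = pd.DataFrame({
        "enhancing_volume_mm3":  [12000, 45000, 30000, 8000, 60000, 25000, 55000, 35000],
        "edema_volume_mm3":      [50000, 90000, 70000, 30000, 110000, 60000, 80000, 95000],
        "overall_survival_days": [400, 150, 300, 600, 90, 350, 480, 220],
        "death_observed":        [1, 1, 0, 1, 1, 0, 1, 1],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="overall_survival_days", event_col="death_observed")
    cph.print_summary()  # hazard ratios and p-values per imaging variable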

Quantifying lung cancer heterogeneity using novel CT features: a cross-institute study

  • Wang, Z.
  • Yang, C.
  • Han, W.
  • Sui, X.
  • Zheng, F.
  • Xue, F.
  • Xu, X.
  • Wu, P.
  • Chen, Y.
  • Gu, W.
  • Song, W.
  • Jiang, J.
Insights Imaging 2022 Journal Article, cited 0 times
Website
BACKGROUND: Radiomics-based image metrics are not used in the clinic despite the rapidly growing literature. We selected eight promising radiomic features and validated their value in decoding lung cancer heterogeneity. METHODS: CT images of 236 lung cancer patients were obtained from three different institutes, whereupon radiomic features were extracted according to a standardized procedure. The predictive value for patient long-term prognosis and association with routinely used semantic, genetic (e.g., epidermal growth factor receptor (EGFR)), and histopathological cancer profiles were validated. Feature measurement reproducibility was assessed. RESULTS: All eight selected features were robust across repeat scans (intraclass correlation coefficient range: 0.81-0.99), and were associated with at least one of the cancer profiles: prognostic, semantic, genetic, and histopathological. For instance, "kurtosis" had high predictive value for early death (AUC at first year: 0.70-0.75 in two independent cohorts), a negative association with histopathological grade (Spearman's r: -0.30), and values that differed with EGFR mutation status and semantic characteristics (solid intensity, spiculated shape, juxtapleural location, and pleura tag; all p < 0.05). Combined as a radiomic score, the features had a higher area under the curve for predicting 5-year survival (train: 0.855, test: 0.780, external validation: 0.760) than routine characteristics (0.733, 0.622, 0.613, respectively), and a better capability in patient death risk stratification (hazard ratio: 5.828, 95% confidence interval: 2.915-11.561) than histopathological staging and grading. CONCLUSIONS: We highlighted the clinical value of radiomic features. Following confirmation, these features may change the way in which we approach CT imaging and improve the individualized care of lung cancer patients.

Single NMR image super-resolution based on extreme learning machine

  • Wang, Zhiqiong
  • Xin, Junchang
  • Wang, Zhongyang
  • Tian, Shuo
  • Qiu, Xuejun
Physica Medica 2016 Journal Article, cited 0 times
Website
Introduction: There is a strong contrast between the performance limits of MRI equipment and radiologists' demand for higher-resolution NMR images. It is therefore important to study super-resolution algorithms suited to NMR images, using low-cost software in place of expensive equipment upgrades. Methods and materials: First, a series of NMR images is generated, ranging from the original NMR images with original noise to the lowest-resolution images with the highest noise. Then, based on the extreme learning machine, a mapping relation model is constructed from the lower-resolution, higher-noise NMR image to the higher-resolution, lower-noise NMR image for each pair of adjacent images in the obtained sequence. Finally, the optimal mapping model is established by ensembling, to reconstruct higher-resolution NMR images with lower noise from the original-resolution NMR images with original noise. Experiments are carried out on 990,111 NMR brain images from the NITRC, REMBRANDT, RIDER NEURO MRI, TCGA-GBM and TCGA-LGG datasets. Results: The performance of the proposed method is compared with three approaches across 7 indexes, and the experimental results show that our proposed method yields a significant improvement. Discussion: Since our method considers the influence of noise, it achieves about 20% higher peak signal-to-noise ratio. As our method is sensitive to details and retains image characteristics better, it yields a 15% improvement in image quality in the additional evaluation. Finally, since the extreme learning machine has a very fast learning speed, our method is 46.1% faster. Keywords: Extreme learning machine; NMR; Single image; Super-resolution.
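An extreme learning machine of the sort used here can be written in a few lines of NumPy: the input weights are random and only the output weights are solved in closed form, which is what gives the method its speed. This is a generic sketch with made-up patch sizes, not the paper's implementation:

    import numpy as np

    def elm_fit(X, Y, n_hidden=256, reg=1e-3, seed=0):
        # Extreme learning machine: random input weights,
        # closed-form (ridge-regularised) output weights.
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(X.shape[1], n_hidden))
        b = rng.normal(size=n_hidden)
        H = np.tanh(X @ W + b)  # random feature map
        beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta

    # Sketch: map low-res patch vectors to high-res patch vectors.
    X = np.random.rand(1000, 64)    # e.g., flattened 8x8 low-res patches
    Y = np.random.rand(1000, 256)   # e.g., flattened 16x16 high-res patches
    W, b, beta = elm_fit(X, Y)
    Y_hat = elm_predict(X, W, b, beta)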

Radiomics features based on T2-weighted fluid-attenuated inversion recovery MRI predict the expression levels of CD44 and CD133 in lower-grade gliomas

  • Wang, Z.
  • Tang, X.
  • Wu, J.
  • Zhang, Z.
  • He, K.
  • Wu, D.
  • Chen, S.
  • Xiao, X.
Future Oncol 2021 Journal Article, cited 0 times
Website
Objective: To verify the association between CD44 and CD133 expression levels and the prognosis of patients with lower-grade gliomas (LGGs), and to construct radiomic models to predict those two genes' expression levels before surgery. Materials & methods: Genomic data of patients with LGG and the corresponding T2-weighted fluid-attenuated inversion recovery images were downloaded from the Cancer Genome Atlas and the Cancer Imaging Archive, and were utilized for prognosis analysis, radiomic feature extraction and model construction, respectively. Results & conclusion: CD44 and CD133 expression levels in LGG can significantly affect the prognosis of patients with LGG. Based on the T2-weighted fluid-attenuated inversion recovery images, the radiomic features can effectively predict the expression levels of CD44 and CD133 before surgery.

Using a deep learning prior for accelerating hyperpolarized (13) C MRSI on synthetic cancer datasets

  • Wang, Z.
  • Luo, G.
  • Li, Y.
  • Cao, P.
Magn Reson Med 2024 Journal Article, cited 0 times
Website
PURPOSE: We aimed to incorporate a deep learning prior with k-space data fidelity for accelerating hyperpolarized carbon-13 MRSI, demonstrated on synthetic cancer datasets. METHODS: A two-site exchange model, derived from the Bloch equation of MR signal evolution, was first used to simulate training and testing data, that is, synthetic phantom datasets. Five singular maps generated from each simulated dataset were used to train a deep learning prior, which was then employed with the fidelity term to reconstruct the undersampled MRI k-space data. The proposed method was assessed on synthetic human brain tumor images (N = 33), prostate cancer images (N = 72), and mouse tumor images (N = 58) for three undersampling factors and 2.5% additive Gaussian noise. Furthermore, varied levels of Gaussian noise with SDs of 2.5%, 5%, and 10% were added to the synthetic prostate cancer data, and the corresponding reconstruction results were evaluated. RESULTS: For quantitative evaluation, peak SNRs were approximately 32 dB, and accuracy was generally improved by 5 to 8 dB compared with compressed sensing with L1-norm regularization or total variation regularization. Reasonable normalized RMS errors were obtained. Our method also worked robustly against noise, even on data with a noise SD of 10%. CONCLUSION: The proposed singular value decomposition + iterative deep learning model could be considered a general framework that extends the application of deep learning MRI reconstruction to metabolic imaging. The morphology of tumors and metabolic images could be measured robustly at six-fold acceleration using our method.

Automated Detection of Clinically Significant Prostate Cancer in mp-MRI Images Based on an End-to-End Deep Neural Network

  • Wang, Z.
  • Liu, C.
  • Cheng, D.
  • Wang, L.
  • Yang, X.
  • Cheng, K. T.
IEEE Trans Med Imaging 2018 Journal Article, cited 127 times
Website
Automated methods for detecting clinically significant (CS) prostate cancer (PCa) in multi-parameter magnetic resonance images (mp-MRI) are of high demand. Existing methods typically employ several separate steps, each of which is optimized individually without considering the error tolerance of other steps. As a result, they could either involve unnecessary computational cost or suffer from errors accumulated over steps. In this paper, we present an automated CS PCa detection system, where all steps are optimized jointly in an end-to-end trainable deep neural network. The proposed neural network consists of concatenated subnets: 1) a novel tissue deformation network (TDN) for automated prostate detection and multimodal registration and 2) a dual-path convolutional neural network (CNN) for CS PCa detection. Three types of loss functions, i.e., classification loss, inconsistency loss, and overlap loss, are employed for optimizing all parameters of the proposed TDN and CNN. In the training phase, the two nets mutually affect each other and effectively guide registration and extraction of representative CS PCa-relevant features to achieve results with sufficient accuracy. The entire network is trained in a weakly supervised manner by providing only image-level annotations (i.e., presence/absence of PCa) without exact priors of lesions' locations. Compared with most existing systems which require supervised labels, e.g., manual delineation of PCa lesions, it is much more convenient for clinical usage. Comprehensive evaluation based on fivefold cross validation using 360 patient data demonstrates that our system achieves a high accuracy for CS PCa detection, i.e., a sensitivity of 0.6374 and 0.8978 at 0.1 and 1 false positives per normal/benign patient.

Semi-supervised mp-MRI data synthesis with StitchLayer and auxiliary distance maximization

  • Wang, Zhiwei
  • Lin, Yi
  • Cheng, Kwang-Ting Tim
  • Yang, Xin
Medical Image Analysis 2020 Journal Article, cited 0 times

CLCU-Net: Cross-level connected U-shaped network with selective feature aggregation attention module for brain tumor segmentation

  • Wang, Y. L.
  • Zhao, Z. J.
  • Hu, S. Y.
  • Chang, F. L.
Comput Methods Programs Biomed 2021 Journal Article, cited 0 times
Website
BACKGROUND AND OBJECTIVE: Brain tumors are among the most deadly cancers worldwide. Thanks to the development of deep convolutional neural networks, many brain tumor segmentation methods now help clinicians diagnose and operate. However, most of these methods make insufficient use of multi-scale features, reducing their ability to extract brain tumors' features and details. To assist clinicians in the accurate automatic segmentation of brain tumors, we built a new deep learning network that makes full use of multi-scale features to improve segmentation performance. METHODS: We propose a novel cross-level connected U-shaped network (CLCU-Net) to connect features of different scales for fully utilizing multi-scale features. Besides, we propose a generic attention module (Segmented Attention Module, SAM) on the connections of different-scale features for selectively aggregating features, which provides a more efficient connection between scales. Moreover, we employ deep supervision and spatial pyramid pooling (SPP) to improve the method's performance further. RESULTS: We evaluated our method on the BRATS 2018 dataset by five indexes and achieved excellent performance, with a Dice score of 88.5%, a precision of 91.98%, a recall of 85.62%, 36.34M parameters and an inference time of 8.89 ms for the whole tumor, which outperformed six state-of-the-art methods. Moreover, an analysis of different attention modules' heatmaps showed that the attention module proposed in this study is more suitable for segmentation tasks than other existing popular attention modules. CONCLUSION: Both the qualitative and quantitative experimental results indicate that our cross-level connected U-shaped network with selective feature aggregation attention module achieves accurate brain tumor segmentation and could be instrumental in clinical practice.

Modality-Pairing Learning for Brain Tumor Segmentation

  • Wang, Yixin
  • Zhang, Yao
  • Hou, Feng
  • Liu, Yang
  • Tian, Jiang
  • Zhong, Cheng
  • Zhang, Yang
  • He, Zhiqiang
2021 Book Section, cited 0 times
Automatic brain tumor segmentation from multi-modality Magnetic Resonance Images (MRI) using deep learning methods plays an important role in assisting the diagnosis and treatment of brain tumors. However, previous methods mostly ignore the latent relationship among different modalities. In this work, we propose a novel end-to-end Modality-Pairing learning method for brain tumor segmentation. Parallel branches are designed to exploit different modality features, and a series of layer connections is utilized to capture complex relationships and abundant information among modalities. We also use a consistency loss to minimize the prediction variance between the two branches. Besides, a learning-rate warmup strategy is adopted to mitigate training instability and early over-fitting. Lastly, we use an average ensemble of multiple models and some post-processing techniques to obtain the final results. Our method is tested on the BraTS 2020 online testing dataset, obtaining promising segmentation performance, with average Dice scores of 0.891, 0.842 and 0.816 for the whole tumor, tumor core and enhancing tumor, respectively. We won second place in the BraTS 2020 Challenge for the tumor segmentation task.
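The consistency term is simply a penalty on the disagreement between the two branches' predictions; a minimal PyTorch sketch (our notation, not the authors' code) is:

    import torch
    import torch.nn.functional as F

    def consistency_loss(logits_a, logits_b):
        # Penalise prediction variance between the two modality-pairing branches.
        return F.mse_loss(torch.softmax(logits_a, dim=1),
                          torch.softmax(logits_b, dim=1))

    # total_loss = seg_loss_a + seg_loss_b + lambda_c * consistency_loss(pa, pb)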

Novel convolutional neural network architecture for improved pulmonary nodule classification on computed tomography

  • Wang, Yi
  • Zhang, Hao
  • Chae, Kum Ju
  • Choi, Younhee
  • Jin, Gong Yong
  • Ko, Seok-Bum
Multidimensional Systems and Signal Processing 2020 Journal Article, cited 0 times
Website
Computed tomography (CT) is widely used to locate pulmonary nodules for preliminary diagnosis of lung cancer. However, due to high visual similarities between malignant (cancer) and benign (non-cancer) nodules, distinguishing malignant from benign nodules is not an easy task for a thoracic radiologist. In this paper, a novel convolutional neural network (ConvNet) architecture is proposed to classify pulmonary nodules as either benign or malignant. Due to the high variance of nodule characteristics in CT scans, such as size and shape, a multi-path, multi-scale architecture is proposed and applied in the ConvNet to improve classification performance. The multi-scale method utilizes filters with different sizes to extract nodule features from local regions more effectively, and the multi-path architecture combines features extracted from different ConvNet layers, thereby enhancing the nodule features with respect to global regions. The proposed ConvNet is trained and evaluated on the LUNGx Challenge database, and achieves a sensitivity of 0.887 and a specificity of 0.924 with an area under the curve (AUC) of 0.948. The proposed ConvNet achieves a 14% AUC improvement compared to the state-of-the-art unsupervised learning approach. The proposed ConvNet also outperforms the other state-of-the-art ConvNets explicitly designed for pulmonary nodule classification. For clinical usage, the proposed ConvNet could potentially assist radiologists in making diagnostic decisions in CT screening.
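The multi-scale idea, parallel convolutions with different kernel sizes whose outputs are concatenated, can be sketched as follows (a generic illustration, not the paper's exact architecture):

    import torch
    import torch.nn as nn

    class MultiScaleBlock(nn.Module):
        # Parallel convolutions with different kernel sizes capture nodule
        # features at several scales; outputs are concatenated channel-wise.
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.paths = nn.ModuleList([
                nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)
                for k in (1, 3, 5)
            ])

        def forward(self, x):
            return torch.cat([torch.relu(p(x)) for p in self.paths], dim=1)

    block = MultiScaleBlock(1, 16)
    out = block(torch.randn(2, 1, 64, 64))  # -> shape (2, 48, 64, 64)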

IILS: Intelligent imaging layout system for automatic imaging report standardization and intra-interdisciplinary clinical workflow optimization

  • Wang, Yang
  • Yan, Fangrong
  • Lu, Xiaofan
  • Zheng, Guanming
  • Zhang, Xin
  • Wang, Chen
  • Zhou, Kefeng
  • Zhang, Yingwei
  • Li, Hui
  • Zhao, Qi
  • Zhu, Hu
  • Chen, Fei
  • Gao, Cailiang
  • Qing, Zhao
  • Ye, Jing
  • Li, Aijing
  • Xin, Xiaoyan
  • Li, Danyan
  • Wang, Han
  • Yu, Hongming
  • Cao, Lu
  • Zhao, Chaowei
  • Deng, Rui
  • Tan, Libo
  • Chen, Yong
  • Yuan, Lihua
  • Zhou, Zhuping
  • Yang, Wen
  • Shao, Mingran
  • Dou, Xin
  • Zhou, Nan
  • Zhou, Fei
  • Zhu, Yue
  • Lu, Guangming
  • Zhang, Bing
EBioMedicine 2019 Journal Article, cited 1 times
Website
BACKGROUND: To achieve imaging report standardization and improve the quality and efficiency of the intra-interdisciplinary clinical workflow, we proposed an intelligent imaging layout system (IILS) for a clinical decision support system-based ubiquitous healthcare service, which is a lung nodule management system using medical images. METHODS: We created a lung IILS based on deep learning for imaging report standardization and workflow optimization for the identification of nodules. Our IILS utilized a deep learning plus adaptive auto layout tool, which trained and tested a neural network with imaging data from all the main CT manufacturers from 11,205 patients. Model performance was evaluated by the receiver operating characteristic curve (ROC) and calculating the corresponding area under the curve (AUC). The clinical application value for our IILS was assessed by a comprehensive comparison of multiple aspects. FINDINGS: Our IILS is clinically applicable due to the consistency with nodules detected by IILS, with its highest consistency of 0.94 and an AUC of 90.6% for malignant pulmonary nodules versus benign nodules with a sensitivity of 76.5% and specificity of 89.1%. Applying this IILS to a dataset of chest CT images, we demonstrate performance comparable to that of human experts in providing a better layout and aiding in diagnosis in 100% valid images and nodule display. The IILS was superior to the traditional manual system in performance, such as reducing the number of clicks from 14.45+/-0.38 to 2, time consumed from 16.87+/-0.38s to 6.92+/-0.10s, number of invalid images from 7.06+/-0.24 to 0, and missing lung nodules from 46.8% to 0%. INTERPRETATION: This IILS might achieve imaging report standardization, and improve the clinical workflow therefore opening a new window for clinical application of artificial intelligence. FUND: The National Natural Science Foundation of China.

SGPNet: A Three-Dimensional Multitask Residual Framework for Segmentation and IDH Genotype Prediction of Gliomas

  • Wang, Yao
  • Wang, Yan
  • Guo, Chunjie
  • Zhang, Shuangquan
  • Yang, Lili
  • Rakhshan, Vahid
Computational Intelligence and Neuroscience 2021 Journal Article, cited 0 times
Website
Glioma is the main type of malignant brain tumor in adults, and the status of isocitrate dehydrogenase (IDH) mutation highly affects the diagnosis, treatment, and prognosis of gliomas. Radiographic medical imaging provides a noninvasive platform for sampling both inter- and intralesion heterogeneity of gliomas, and previous research has shown that the IDH genotype can be predicted from the fusion of multimodality radiology images. The features of medical images and the IDH genotype are vital for medical treatment; however, a multitask framework for segmenting the lesion areas of gliomas and predicting the IDH genotype has been lacking. In this paper, we propose a novel three-dimensional (3D) multitask deep learning model for segmentation and genotype prediction (SGPNet). Residual units are also introduced into SGPNet, allowing the output blocks to extract hierarchical features for different tasks and facilitating information propagation. Our model reduces classification error rates by 26.6% compared with previous models on the Multimodal Brain Tumor Segmentation Challenge (BRATS) 2020 and The Cancer Genome Atlas (TCGA) glioma databases. Furthermore, we are the first to practically investigate the influence of lesion areas on the performance of IDH genotype prediction by setting different groups of learning targets. The experimental results indicate that the information of lesion areas is more important for IDH genotype prediction. Our framework is effective and generalizable, and can serve as a highly automated tool to be applied in clinical decision making.

Deep learning based time-to-event analysis with PET, CT and joint PET/CT for head and neck cancer prognosis

  • Wang, Y.
  • Lombardo, E.
  • Avanzo, M.
  • Zschaek, S.
  • Weingartner, J.
  • Holzgreve, A.
  • Albert, N. L.
  • Marschner, S.
  • Fanetti, G.
  • Franchin, G.
  • Stancanello, J.
  • Walter, F.
  • Corradini, S.
  • Niyazi, M.
  • Lang, J.
  • Belka, C.
  • Riboldi, M.
  • Kurz, C.
  • Landry, G.
Comput Methods Programs Biomed 2022 Journal Article, cited 0 times
Website
OBJECTIVES: Recent studies have shown that deep learning based on pre-treatment positron emission tomography (PET) or computed tomography (CT) is promising for distant metastasis (DM) and overall survival (OS) prognosis in head and neck cancer (HNC). However, lesion segmentation is typically required, resulting in a predictive power susceptible to variations in primary and lymph node gross tumor volume (GTV) segmentation. This study aimed at achieving prognosis without GTV segmentation, and extending single modality prognosis to joint PET/CT to allow investigating the predictive performance of combined- compared to single-modality inputs. METHODS: We employed a 3D-Resnet combined with a time-to-event outcome model to incorporate censoring information. We focused on the prognosis of DM and OS for HNC patients. For each clinical endpoint, five models with PET and/or CT images as input were compared: PET-GTV, PET-only, CT-GTV, CT-only, and PET/CT-GTV models, where -GTV indicates that the corresponding images were masked using the GTV contour. Publicly available delineated CT and PET scans from 4 different Canadian hospitals (293) and the MAASTRO clinic (74) were used for training by 3-fold cross-validation (CV). For independent testing, we used 110 patients from a collaborating institution. The predictive performance was evaluated via Harrell's Concordance Index (HCI) and Kaplan-Meier curves. RESULTS: In a 5-year time-to-event analysis, all models could produce CV HCIs with median values around 0.8 for DM and 0.7 for OS. The best performance was obtained with the PET-only model, achieving a median testing HCI of 0.82 for DM and 0.69 for OS. Compared with the PET/CT-GTV model, the PET-only still had advantages of up to 0.07 in terms of testing HCI. The Kaplan-Meier curves and corresponding log-rank test results also demonstrated significant stratification capability of our models for the testing cohort. CONCLUSION: Deep learning-based DM and OS time-to-event models showed predictive capability and could provide indications for personalized RT. The best predictive performance achieved by the PET-only model suggested GTV segmentation might be less relevant for PET-based prognosis.
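Harrell's concordance index used for evaluation here is available off the shelf in lifelines; a toy sketch (invented numbers) shows the convention that predictions should increase with survival time, so risk scores are negated:

    from lifelines.utils import concordance_index

    times = [100, 250, 400, 120, 500]   # observed times
    risk = [0.9, 0.4, 0.1, 0.8, 0.2]    # network risk scores (higher = earlier event)
    events = [1, 1, 0, 1, 0]            # 1 = event observed, 0 = censored

    # concordance_index expects predictions that increase with survival time.
    hci = concordance_index(times, [-r for r in risk], events)
    print(hci)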

Automatic Glioma Grading Based on Two-Stage Networks by Integrating Pathology and MRI Images

  • Wang, Xiyue
  • Yang, Sen
  • Wu, Xiyi
2021 Book Section, cited 0 times
Glioma, with its high incidence, is one of the most common brain cancers. In the clinic, pathologists diagnose the type of glioma by observing whole-slide images (WSIs) at different magnifications, which is time-consuming, laborious, and experience-dependent. Automatic grading of glioma based on WSIs can provide diagnostic assistance for clinicians. This paper proposes two fully convolutional networks, used respectively on WSIs and MRI images, to achieve automatic glioma grading (astrocytoma (lower-grade A), oligodendroglioma (middle-grade O), and glioblastoma (higher-grade G)). The final classification result is the average of the probabilities from the two networks. In the clinic, and also in our multi-modality image representation, grades A and O are difficult to distinguish. This work proposes a two-stage training strategy that excludes the distraction of grade G and focuses on the classification of grades A and O. The experimental results show that the proposed model achieves high glioma classification performance, with a balanced accuracy of 0.889, Cohen's Kappa of 0.903, and F1-score of 0.943 on the validation set.

Additional Value of PET/CT-Based Radiomics to Metabolic Parameters in Diagnosing Lynch Syndrome and Predicting PD1 Expression in Endometrial Carcinoma

  • Wang, X.
  • Wu, K.
  • Li, X.
  • Jin, J.
  • Yu, Y.
  • Sun, H.
Front Oncol 2021 Journal Article, cited 0 times
Website
Purpose: We aim to compare the radiomic features and parameters on 2-deoxy-2-[fluorine-18] fluoro-D-glucose (18F-FDG) positron emission tomography/computed tomography (PET/CT) between patients with endometrial cancer with Lynch syndrome and those with endometrial cancer without Lynch syndrome. We also hope to explore the biologic significance of selected radiomic features. Materials and Methods: We conducted a retrospective cohort study, first using the 18F-FDG PET/CT images and clinical data from 100 patients with endometrial cancer to construct a training group (70 patients) and a test group (30 patients). The metabolic parameters and radiomic features of each tumor were compared between patients with and without Lynch syndrome. An independent cohort of 23 patients with solid tumors was used to evaluate the value of selected radiomic features in predicting the expression of the programmed cell death 1 (PD1), using 18F-FDG PET/CT images and RNA-seq genomic data. Results: There was no statistically significant difference in the standardized uptake values on PET between patients with endometrial cancer with Lynch syndrome and those with endometrial cancer without Lynch syndrome. However, there were significant differences between the 2 groups in metabolic tumor volume and total lesion glycolysis (p < 0.005). There was a difference in the radiomic feature of gray level co-occurrence matrix entropy (GLCMEntropy; p < 0.001) between the groups: the area under the curve was 0.94 in the training group (sensitivity, 82.86%; specificity, 97.14%) and 0.893 in the test group (sensitivity, 80%; specificity, 93.33%). In the independent cohort of 23 patients, differences in GLCMEntropy were related to the expression of PD1 (rs =0.577; p < 0.001). Conclusions: In patients with endometrial cancer, higher metabolic tumor volumes, total lesion glycolysis values, and GLCMEntropy values on 18F-FDG PET/CT could suggest a higher risk for Lynch syndrome. The radiomic feature of GLCMEntropy for tumors is a potential predictor of PD1 expression.
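The GLCMEntropy feature central to this study can be approximated with scikit-image's co-occurrence matrix; the exact offsets, normalisation, and log base used in the paper may differ, so this is only a generic sketch:

    import numpy as np
    from skimage.feature import graycomatrix

    # Toy 8-bit patch standing in for a tumour ROI slice.
    patch = (np.random.rand(32, 32) * 255).astype(np.uint8)

    # Grey-level co-occurrence matrix, one offset/angle, normalised.
    glcm = graycomatrix(patch, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)

    p = glcm[:, :, 0, 0]
    p = p[p > 0]
    glcm_entropy = -np.sum(p * np.log2(p))  # GLCM entropy feature
    print(glcm_entropy)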

Multiple medical image encryption algorithm based on scrambling of region of interest and diffusion of odd-even interleaved points

  • Wang, Xingyuan
  • Wang, Yafei
Expert Systems with Applications 2023 Journal Article, cited 0 times
Website
Due to the security requirements brought by the rapid development of electronic medicine, this paper proposes an encryption algorithm for multiple medical images. The algorithm can not only encrypt grayscale medical images of any number and any size at the same time, but also achieves a good encryption effect when applied to color images. Considering the characteristics of medical images, we design an encryption algorithm based on the region of interest (ROI). Firstly, the regions of interest of the plaintext images are extracted and their coordinates obtained, and the hash value of the large image composed of all plaintext images is calculated. The coordinates and hash value are set as the secret key. This operation ties the whole encryption algorithm closely to the plaintext images, which greatly enhances the ability to resist chosen-plaintext attacks and improves the security of the algorithm. In the encryption process, chaotic sequences generated by a Logistic-Tent chaotic system (LTS) are used to perform two scramblings and one diffusion: pixel swapping based on the region of interest, Fisher-Yates scrambling, and our newly proposed diffusion algorithm based on odd-even interleaved points. Testing and performance analysis show that the algorithm achieves a good encryption effect, can resist various attacks, and has a higher security level and faster encryption speed.
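The Fisher-Yates scrambling step driven by a chaotic sequence can be sketched as below; the plain logistic map stands in for the paper's Logistic-Tent system, and the parameter values are illustrative only:

    import numpy as np

    def logistic_sequence(x0, r, n):
        # Logistic map: x_{k+1} = r * x_k * (1 - x_k); chaotic for r near 4.
        xs = np.empty(n)
        for k in range(n):
            x0 = r * x0 * (1.0 - x0)
            xs[k] = x0
        return xs

    def fisher_yates_scramble(img, x0=0.37, r=3.99):
        flat = img.flatten()
        keys = logistic_sequence(x0, r, flat.size)
        for i in range(flat.size - 1, 0, -1):
            # The chaotic value replaces the usual random draw in [0, i].
            j = min(int(keys[i] * (i + 1)), i)
            flat[i], flat[j] = flat[j], flat[i]
        return flat.reshape(img.shape)

    scrambled = fisher_yates_scramble(np.arange(16, dtype=np.uint8).reshape(4, 4))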

An Appraisal of Lung Nodules Automatic Classification Algorithms for CT Images

  • Wang, Xinqi
  • Mao, Keming
  • Wang, Lizhe
  • Yang, Peiyi
  • Lu, Duo
  • He, Ping
Sensors (Basel) 2019 Journal Article, cited 0 times
Website
Lung cancer is one of the most deadly diseases around the world representing about 26% of all cancers in 2017. The five-year cure rate is only 18% despite great progress in recent diagnosis and treatment. Before diagnosis, lung nodule classification is a key step, especially since automatic classification can help clinicians by providing a valuable opinion. Modern computer vision and machine learning technologies allow very fast and reliable CT image classification. This research area has become very hot for its high efficiency and labor saving. The paper aims to draw a systematic review of the state of the art of automatic classification of lung nodules. This research paper covers published works selected from the Web of Science, IEEEXplore, and DBLP databases up to June 2018. Each paper is critically reviewed based on objective, methodology, research dataset, and performance evaluation. Mainstream algorithms are conveyed and generic structures are summarized. Our work reveals that lung nodule classification based on deep learning becomes dominant for its excellent performance. It is concluded that the consistency of the research objective and integration of data deserves more attention. Moreover, collaborative works among developers, clinicians, and other parties should be strengthened.

A deep learning approach to remove contrast from contrast-enhanced CT for proton dose calculation

  • Wang, X.
  • Hao, Y.
  • Duan, Y.
  • Yang, D.
J Appl Clin Med Phys 2024 Journal Article, cited 0 times
Website
PURPOSE: Non-Contrast Enhanced CT (NCECT) is normally required for proton dose calculation, while Contrast Enhanced CT (CECT) is often scanned for tumor and organ delineation. Possible tissue motion between these two CTs raises dosimetry uncertainties, especially for moving tumors in the thorax and abdomen. Here we report a deep-learning approach to generate NCECT directly from CECT. This method could be useful to avoid the NCECT scan, reduce CT simulation time and imaging dose, and decrease the uncertainties caused by tissue motion between two otherwise different CT scans. METHODS: A deep network was developed to convert CECT to NCECT. The network receives a 3D image patch from CECT images as input and generates a corresponding contrast-removed NCECT image patch. Abdominal CECT and NCECT image pairs of 20 patients were deformably registered, and 8000 image patch pairs extracted from the registered image pairs were utilized to train and test the model. CTs of clinical proton patients and their treatment plans were employed to evaluate the dosimetric impact of using the generated NCECT for proton dose calculation. RESULTS: Our approach achieved a cosine similarity score of 0.988 and an MSE value of 0.002. A quantitative comparison of clinical proton dose plans computed on the CECT and the generated NCECT for five proton patients revealed significant dose differences at the distal end of the beam paths. V100% of PTV and GTV changed by 3.5% and 5.5%, respectively. The mean HU difference for all five patients between the generated and the scanned NCECTs was approximately 4.72, whereas the difference between CECT and the scanned NCECT was approximately 64.52, indicating an approximately 93% reduction in mean HU difference. CONCLUSIONS: A deep learning approach was developed to generate NCECTs from CECTs. This approach could be useful for proton dose calculation by reducing uncertainties caused by tissue motion between CECT and NCECT.

A prognostic analysis method for non-small cell lung cancer based on the computed tomography radiomics

  • Wang, Xu
  • Duan, Huihong
  • Li, Xiaobing
  • Ye, Xiaodan
  • Huang, Gang
  • Nie, Shengdong
Phys Med Biol 2020 Journal Article, cited 0 times
Website
In order to assist doctors in arranging the postoperative treatments and re-examinations for non-small cell lung cancer (NSCLC) patients, this study was initiated to explore a prognostic analysis method for NSCLC based on computed tomography (CT) radiomics. The data of 173 NSCLC patients were collected retrospectively and the clinically meaningful 3-year survival was used as the predictive limit to predict the patient's prognosis survival time range. Firstly, lung tumors were segmented and the radiomics features were extracted. Secondly, the feature weighting algorithm was used to screen and optimize the extracted original feature data. Then, the selected feature data combining with the prognosis survival of patients were used to train machine learning classification models. Finally, a prognostic survival prediction model and radiomics prognostic factors were obtained to predict the prognosis survival time range of NSCLC patients. The classification accuracy rate under cross-validation was up to 88.7% in the prognosis survival analysis model. When verifying on an independent data set, the model also yielded a high prediction accuracy which is up to 79.6%. Inverse different moment, lobulation sign and angular second moment were NSCLC prognostic factors based on radiomics. This study proved that CT radiomics features could effectively assist doctors to make more accurate prognosis survival prediction for NSCLC patients, so as to help doctors to optimize treatment and re-examination for NSCLC patients to extend their survival time.

Data Analysis of the Lung Imaging Database Consortium and Image Database Resource Initiative

  • Wang, Weisheng
  • Luo, Jiawei
  • Yang, Xuedong
  • Lin, Hongli
Academic Radiology 2015 Journal Article, cited 5 times
Website
RATIONALE AND OBJECTIVES: The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) is the largest publicly available computed tomography (CT) image reference data set of lung nodules. In this article, a comprehensive data analysis of the data set and a uniform data model are presented with the purpose of helping potential researchers gain an in-depth understanding of, and make efficient use of, the data set in their lung cancer-related investigations. MATERIALS AND METHODS: A uniform data model was designed for representation and organization of the various types of information contained in different source data files. A software tool was developed for the processing and analysis of the database, which 1) automatically aligns and graphically displays the nodule outlines marked manually by radiologists onto the corresponding CT images; 2) extracts diagnostic nodule characteristics annotated by radiologists; 3) calculates a variety of nodule image features based on the outlines of nodules, including diameter, volume, degree of roundness, and so forth; 4) integrates all the extracted nodule information into the uniform data model and stores it in a common and easy-to-access data format; and 5) analyzes and summarizes various feature distributions of nodules in several different categories. Using this data processing and analysis tool, all 1018 CT scans from the data set were processed and analyzed for their statistical distribution. RESULTS: The information contained in different source data files with different formats was extracted and integrated into a new and uniform data model. Based on the new data model, the statistical distributions of nodules in terms of geometric features and diagnostic characteristics were summarized. In the LIDC/IDRI data set, 2655 nodules ≥3 mm, 5875 nodules <3 mm, and 7411 non-nodules are identified, respectively. Among the 2655 nodules: 1) 775, 488, 481, and 911 were marked by one, two, three, or four radiologists, respectively; 2) most nodules ≥3 mm (85.7%) have a diameter <10.0 mm, with a mean value of 6.72 mm; and 3) 10.87%, 31.4%, 38.8%, 16.4%, and 2.6% of nodules were assessed with a malignancy score of 1, 2, 3, 4, and 5, respectively. CONCLUSIONS: This study demonstrates the usefulness of the proposed software tool for giving potential users an in-depth understanding of the LIDC/IDRI data set, and is therefore likely to be beneficial to their future investigations. The analysis results also demonstrate the diversity of nodule characteristic distributions, and are therefore useful as a reference resource for assessing the performance of new and existing nodule detection and/or segmentation schemes.

Deep Learning for Automatic Identification of Nodule Morphology Features and Prediction of Lung Cancer

  • Wang, Weilun
  • Chakraborty, Goutam
2019 Conference Paper, cited 0 times
Website
Lung cancer is the most common and deadly cancer in the world. Correct prognosis affects the survival rate of the patient. The most important sign for early diagnosis is the appearance of nodules in CT scans. Diagnosis performed in hospital is divided into 2 steps: (1) detect nodules in the CT scan; (2) evaluate the morphological features of the nodules and give the diagnostic results. In this work, we propose an automatic lung cancer prognosis system. The system has 3 steps: (1) In the first step, we trained two models, one based on a convolutional neural network (CNN) and the other on a recurrent neural network (RNN), to detect nodules in CT scans. (2) In the second step, convolutional neural networks (CNNs) are trained to evaluate the values of nine morphological features of nodules. (3) In the final step, a logistic regression between feature values and cancer probability is trained using an XGBoost model. In addition, we analyze which features are important for cancer prediction. Overall, we achieved 82.39% accuracy for lung cancer prediction. Through the logistic regression analysis, we find that the diameter, spiculation and lobulation features are useful for reducing false positives.
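The final step, regressing the nine feature scores onto cancer probability with XGBoost, might look like the following sketch on synthetic data (feature semantics and hyperparameters are assumptions, not the paper's settings):

    import numpy as np
    import xgboost as xgb

    rng = np.random.default_rng(0)
    # Nine morphology feature scores per nodule (diameter, spiculation, ...).
    X = rng.random((200, 9))
    y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.3, 200) > 1.0).astype(int)

    clf = xgb.XGBClassifier(n_estimators=100, max_depth=3)
    clf.fit(X, y)
    print(clf.predict_proba(X[:3])[:, 1])   # cancer probabilities
    print(clf.feature_importances_)         # which features matter most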

Evaluation of Malignancy of Lung Nodules from CT Image Using Recurrent Neural Network

  • Wang, Weilun
  • Chakraborty, Goutam
2019 Journal Article, cited 0 times
The efficacy of cancer treatment depends largely on early detection and correct prognosis. This is especially important in the case of pulmonary cancer, where detection is based on identifying malignant nodules in computed tomography (CT) scans of the lung. There are two problems in making a correct decision about malignancy: (1) At an early stage, the nodule size is small (5 to 10 mm in length). As the CT scan covers a volume of 30 cm × 30 cm × 40 cm, manually searching for nodules takes a very long time (approximately 10 minutes for an expert). (2) There are benign nodules as well as nodules due to other ailments such as bronchitis, pneumonia, and tuberculosis; identifying whether a nodule is carcinogenic requires long experience and expertise. In recent years, several works have been reported that classify lung cancer using not only the CT scan image but also other features causing or related to cancer. In all recent works on CT image analysis, a 3D convolutional neural network (CNN) is used to identify cancerous nodules. In spite of various preprocessing steps used to improve training efficiency, 3D CNNs are extremely slow. The aim of this work is to improve training efficiency by proposing a new deep NN model. It consists of a hierarchical (sliced) structure of recurrent neural networks (RNNs), where different layers of the hierarchy can be trained simultaneously, decreasing training time. In addition, selective attention (alignment) during training improves the convergence rate. The results show a 3-fold increase in training efficiency compared to recent state-of-the-art work using 3D CNNs.

Correlation between CT based radiomics features and gene expression data in non-small cell lung cancer

  • Wang, Ting
  • Gong, Jing
  • Duan, Hui-Hong
  • Wang, Li-Jia
  • Ye, Xiao-Dan
  • Nie, Sheng-Dong
Journal of X-ray science and technology 2019 Journal Article, cited 0 times
Website

A multi-model based on radiogenomics and deep learning techniques associated with histological grade and survival in clear cell renal cell carcinoma

  • Wang, S.
  • Zhu, C.
  • Jin, Y.
  • Yu, H.
  • Wu, L.
  • Zhang, A.
  • Wang, B.
  • Zhai, J.
Insights Imaging 2023 Journal Article, cited 0 times
Website
OBJECTIVES: This study aims to evaluate the efficacy of multi-model incorporated by radiomics, deep learning, and transcriptomics features for predicting pathological grade and survival in patients with clear cell renal cell carcinoma (ccRCC). METHODS: In this study, data were collected from 177 ccRCC patients, including radiomics features, deep learning (DL) features, and RNA sequencing data. Diagnostic models were then created using these data through least absolute shrinkage and selection operator (LASSO) analysis. Additionally, a multi-model was developed by combining radiomics, DL, and transcriptomics features. The prognostic performance of the multi-model was evaluated based on progression-free survival (PFS) and overall survival (OS) outcomes, assessed using Harrell's concordance index (C-index). Furthermore, we conducted an analysis to investigate the relationship between the multi-model and immune cell infiltration. RESULTS: The multi-model demonstrated favorable performance in discriminating pathological grade, with area under the ROC curve (AUC) values of 0.946 (95% CI: 0.912-0.980) and 0.864 (95% CI: 0.734-0.994) in the training and testing cohorts, respectively. Additionally, it exhibited statistically significant prognostic performance for predicting PFS and OS. Furthermore, the high-grade group displayed a higher abundance of immune cells compared to the low-grade group. CONCLUSIONS: The multi-model incorporated radiomics, DL, and transcriptomics features demonstrated promising performance in predicting pathological grade and prognosis in patients with ccRCC. CRITICAL RELEVANCE STATEMENT: We developed a multi-model to predict the grade and survival in clear cell renal cell carcinoma and explored the molecular biological significance of the multi-model of different histological grades. KEY POINTS: 1. The multi-model achieved an AUC of 0.864 for assessing pathological grade. 2. The multi-model exhibited an association with survival in ccRCC patients. 3. The high-grade group demonstrated a greater abundance of immune cells.
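The LASSO feature-selection step common to such multi-model pipelines is essentially a one-liner in scikit-learn; a sketch with synthetic features (dimensions chosen arbitrarily, not the study's data) is:

    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(0)
    X = rng.normal(size=(177, 50))   # radiomics + DL + transcriptomics features
    y = (X[:, 0] - X[:, 3] + rng.normal(size=177) > 0).astype(float)

    lasso = LassoCV(cv=5).fit(X, y)
    selected = np.flatnonzero(lasso.coef_)   # features surviving L1 shrinkage
    print(selected)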

Radiomics Analysis Based on Magnetic Resonance Imaging for Preoperative Overall Survival Prediction in Isocitrate Dehydrogenase Wild-Type Glioblastoma

  • Wang, S.
  • Xiao, F.
  • Sun, W.
  • Yang, C.
  • Ma, C.
  • Huang, Y.
  • Xu, D.
  • Li, L.
  • Chen, J.
  • Li, H.
  • Xu, H.
Front Neurosci 2021 Journal Article, cited 1 times
Website
Purpose: This study aimed to develop a radiomics signature for the preoperative prognosis prediction of isocitrate dehydrogenase (IDH)-wild-type glioblastoma (GBM) patients and to provide personalized assistance in the clinical decision-making for different patients. Materials and Methods: A total of 142 IDH-wild-type GBM patients classified using the new classification criteria of WHO 2021 from two centers were included in the study and randomly divided into a training set and a test set. Firstly, their clinical characteristics were screened using univariate Cox regression. Then, the radiomics features were extracted from the tumor and peritumoral edema areas on their contrast-enhanced T1-weighted image (CE-T1WI), T2-weighted image (T2WI), and T2-weighted fluid-attenuated inversion recovery (T2-FLAIR) magnetic resonance imaging (MRI) images. Subsequently, inter- and intra-class correlation coefficient (ICC) analysis, Spearman's correlation analysis, univariate Cox, and the least absolute shrinkage and selection operator (LASSO) Cox regression were used step by step for feature selection and the construction of a radiomics signature. The combined model was established by integrating the selected clinical factors. Kaplan-Meier analysis was performed for the validation of the discrimination ability of the model, and the C-index was used to evaluate consistency in the prediction. Finally, a Radiomics + Clinical nomogram was generated for personalized prognosis analysis and then validated using the calibration curve. Results: Analysis of the clinical characteristics resulted in the screening of four risk factors. The combination of ICC, Spearman's correlation, and univariate and LASSO Cox resulted in the selection of eight radiomics features, which made up the radiomics signature. Both the radiomics and combined models can significantly stratify high- and low-risk patients (p < 0.001 and p < 0.05 for the training and test sets, respectively) and obtained good prediction consistency (C-index = 0.74-0.86). The calibration plots exhibited good agreement in both 1- and 2-year survival between the prediction of the model and the actual observation. Conclusion: Radiomics is an independent preoperative non-invasive prognostic tool for patients who were newly classified as having IDH-wild-type GBM. The constructed nomogram, which combined radiomics features with clinical factors, can predict the overall survival (OS) of IDH-wild-type GBM patients and could be a new supplement to treatment guidelines.

Integrating clinical access limitations into iPDT treatment planning with PDT-SPACE

  • Wang, Shuran
  • Saeidi, Tina
  • Lilge, Lothar
  • Betz, Vaughn
Biomedical Optics Express 2023 Journal Article, cited 0 times
PDT-SPACE is an open-source software tool that automates interstitial photodynamic therapy treatment planning by providing patient-specific placement of light sources to destroy a tumor while minimizing healthy tissue damage. This work extends PDT-SPACE in two ways. The first enhancement allows specification of clinical access constraints on light source insertion to avoid penetrating critical structures and to minimize surgical complexity. Constraining fiber access to a single burr hole of adequate size increases healthy tissue damage by 10%. The second enhancement generates an initial placement of light sources as a starting point for refinement, rather than requiring entry of a starting solution by the clinician. This feature improves productivity and also leads to solutions with 4.5% less healthy tissue damage. The two features are used in concert to perform simulations of various surgery options of virtual glioblastoma multiforme brain tumors.

AI-based MRI auto-segmentation of brain tumor in rodents, a multicenter study

  • Wang, Shuncong
  • Pang, Xin
  • de Keyzer, Frederik
  • Feng, Yuanbo
  • Swinnen, Johan V.
  • Yu, Jie
  • Ni, Yicheng
2023 Journal Article, cited 0 times
Website
Automatic segmentation of rodent brain tumors on magnetic resonance imaging (MRI) may facilitate biomedical research. The current study aims to demonstrate the feasibility of automatic segmentation by artificial intelligence (AI) and the practicability of AI-assisted segmentation. MRI images, including T2WI, T1WI and CE-T1WI, of brain tumors from 57 WAG/Rij rats at KU Leuven and 46 mice from The Cancer Imaging Archive (TCIA) were collected. A 3D U-Net architecture was adopted for segmentation of the tumor-bearing brain and the brain tumor. After training, these models were tested on both datasets after Gaussian noise addition. The reduction of inter-observer disparity by AI-assisted segmentation was also evaluated. The AI model segmented the tumor-bearing brain well for both the Leuven and TCIA datasets, with Dice similarity coefficients (DSCs) of 0.87 and 0.85, respectively. After noise addition, the performance remained unchanged when the signal-to-noise ratio (SNR) was higher than two or eight, respectively. For the segmentation of tumor lesions, the AI-based model yielded DSCs of 0.70 and 0.61 for the Leuven and TCIA datasets, respectively. Similarly, the performance was uncompromised when the SNR was over two and eight, respectively. AI-assisted segmentation significantly reduced the inter-observer disparities and segmentation time in both rats and mice. Both AI models, for segmenting the brain or tumor lesions, improved inter-observer agreement and therefore contributed to the standardization of subsequent biomedical studies.
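The Dice similarity coefficient used to evaluate the masks is easy to compute directly; a self-contained sketch with toy masks:

    import numpy as np

    def dice(pred, truth):
        # Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|).
        pred, truth = pred.astype(bool), truth.astype(bool)
        inter = np.logical_and(pred, truth).sum()
        denom = pred.sum() + truth.sum()
        return 2.0 * inter / denom if denom else 1.0

    a = np.zeros((8, 8)); a[2:6, 2:6] = 1   # stand-in for the AI mask
    b = np.zeros((8, 8)); b[3:7, 3:7] = 1   # stand-in for the manual mask
    print(dice(a, b))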

MC-Net: multi-scale Swin transformer and complementary self-attention fusion network for pancreas segmentation

  • Wang, Shunan
  • Fan, Jiancong
  • Batista, Paulo
  • Bilas Pachori, Ram
2023 Conference Paper, cited 0 times
Website
The pancreas is located deep in the abdominal cavity, and its structure and adjacent relationships are complex, making accurate treatment very difficult. To solve the problem of automatic segmentation of pancreatic tissue in CT images, we apply the multi-scale idea of convolutional neural networks to the Transformer, and propose a multi-scale Swin Transformer and complementary self-attention fusion network for pancreas segmentation. Specifically, the multi-scale Swin Transformer module constructs different receptive fields through different window sizes to obtain multi-scale information, while the different features of the encoder and decoder are effectively fused through a complementary self-attention fusion module. In experimental evaluations on the NIH-TCIA dataset, our method improves Dice, sensitivity, and IoU by 3.9%, 6.4%, and 5.3%, respectively, compared to the baseline, and outperforms current state-of-the-art medical image segmentation methods.

Automatic Brain Tumour Segmentation and Biophysics-Guided Survival Prediction

  • Wang, Shuo
  • Dai, Chengliang
  • Mo, Yuanhan
  • Angelini, Elsa
  • Guo, Yike
  • Bai, Wenjia
2020 Book Section, cited 0 times
Gliomas are the most common malignant brain tumours with intrinsic heterogeneity. Accurate segmentation of gliomas and their sub-regions on multi-parametric magnetic resonance images (mpMRI) is of great clinical importance, which defines tumour size, shape and appearance and provides abundant information for preoperative diagnosis, treatment planning and survival prediction. Recent developments on deep learning have significantly improved the performance of automated medical image segmentation. In this paper, we compare several state-of-the-art convolutional neural network models for brain tumour image segmentation. Based on the ensembled segmentation, we present a biophysics-guided prognostic model for patient overall survival prediction which outperforms a data-driven radiomics approach. Our method won the second place of the MICCAI 2019 BraTS Challenge for the overall survival prediction.

Multi-Modality Automatic Lung Tumor Segmentation Method Using Deep Learning and Radiomics

  • Wang, Siqiu
Radiation Oncology 2022 Thesis, cited 0 times
Website
Delineation of the tumor volume is the initial and fundamental step in the radiotherapy planning process. The current clinical practice of manual delineation is time-consuming and suffers from observer variability. This work seeks to develop an effective automatic framework to produce clinically usable lung tumor segmentations. First, to facilitate the development and validation of our methodology, an expansive database of planning CTs, diagnostic PETs, and manual tumor segmentations was curated, and an image registration and preprocessing pipeline was established. Then a deep learning neural network was constructed and optimized to utilize dual-modality PET and CT images for lung tumor segmentation. The feasibility of incorporating radiomics and other mechanisms such as a tumor volume-based stratification scheme for training/validation/testing were investigated to improve the segmentation performance. The proposed methodology was evaluated both quantitatively with similarity metrics and clinically with physician reviews. In addition, external validation with an independent database was also conducted. Our work addressed some of the major limitations that restricted clinical applicability of the existing approaches and produced automatic segmentations that were consistent with the manually contoured ground truth and were highly clinically-acceptable according to both the quantitative and clinical evaluations. Both novel approaches of implementing a tumor volume-based training/validation/ testing stratification strategy as well as incorporating voxel-wise radiomics feature images were shown to improve the segmentation performance. The results showed that the proposed method was effective and robust, producing automatic lung tumor segmentations that could potentially improve both the quality and consistency of manual tumor delineation.

Direct three-dimensional segmentation of prostate glands with nnU-Net

  • Wang, R.
  • Chow, S. S. L.
  • Serafin, R. B.
  • Xie, W.
  • Han, Q.
  • Baraznenok, E.
  • Lan, L.
  • Bishop, K. W.
  • Liu, J. T. C.
2024 Journal Article, cited 0 times
Website
SIGNIFICANCE: In recent years, we and others have developed non-destructive methods to obtain three-dimensional (3D) pathology datasets of clinical biopsies and surgical specimens. For prostate cancer risk stratification (prognostication), standard-of-care Gleason grading is based on examining the morphology of prostate glands in thin 2D sections. This motivates us to perform 3D segmentation of prostate glands in our 3D pathology datasets for the purposes of computational analysis of 3D glandular features that could offer improved prognostic performance. AIM: To facilitate prostate cancer risk assessment, we developed a computationally efficient and accurate deep learning model for 3D gland segmentation based on open-top light-sheet microscopy datasets of human prostate biopsies stained with a fluorescent analog of hematoxylin and eosin (H&E). APPROACH: For 3D gland segmentation based on our H&E-analog 3D pathology datasets, we previously developed a hybrid deep learning and computer vision-based pipeline, called image translation-assisted segmentation in 3D (ITAS3D), which required a complex two-stage procedure and tedious manual optimization of parameters. To simplify this procedure, we use the 3D gland-segmentation masks previously generated by ITAS3D as training datasets for a direct end-to-end deep learning-based segmentation model, nnU-Net. The inputs to this model are 3D pathology datasets of prostate biopsies rapidly stained with an inexpensive fluorescent analog of H&E and the outputs are 3D semantic segmentation masks of the gland epithelium, gland lumen, and surrounding stromal compartments within the tissue. RESULTS: nnU-Net demonstrates remarkable accuracy in 3D gland segmentations even with limited training data. Moreover, compared with the previous ITAS3D pipeline, nnU-Net operation is simpler and faster, and it can maintain good accuracy even with lower-resolution inputs. CONCLUSIONS: Our trained DL-based 3D segmentation model will facilitate future studies to demonstrate the value of computational 3D pathology for guiding critical treatment decisions for patients with prostate cancer.

S2FLNet: Hepatic steatosis detection network with body shape

  • Wang, Q.
  • Xue, W.
  • Zhang, X.
  • Jin, F.
  • Hahn, J.
Comput Biol Med 2021 Journal Article, cited 0 times
Website
Fat accumulation in the liver cells can increase the risk of cardiac complications and cardiovascular disease mortality. Therefore, a way to quickly and accurately detect hepatic steatosis is critically important. However, current methods, e.g., liver biopsy, magnetic resonance imaging, and computerized tomography scan, are subject to high cost and/or medical complications. In this paper, we propose a deep neural network to estimate the degree of hepatic steatosis (low, mid, high) using only body shapes. The proposed network adopts dilated residual network blocks to extract refined features of input body shape maps by expanding the receptive field. Furthermore, to classify the degree of steatosis more accurately, we create a hybrid of the center loss and cross-entropy loss to compact intra-class variations and separate inter-class differences. We performed extensive tests on a public medical dataset with various network parameters. Our experimental results show that the proposed network achieves a total accuracy of over 82% and offers an accurate and accessible assessment of hepatic steatosis.
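A short sketch of the hybrid center loss plus cross-entropy described above, assuming PyTorch; the weighting and center initialisation are assumptions, not the paper's exact settings:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridCenterLoss(nn.Module):
    """Cross-entropy separates classes; the center term compacts each class."""
    def __init__(self, num_classes, feat_dim, lambda_center=0.01):
        super().__init__()
        # One learnable center per steatosis class (low, mid, high).
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.lambda_center = lambda_center

    def forward(self, features, logits, labels):
        ce = F.cross_entropy(logits, labels)
        # Mean squared distance between each feature and its class center.
        center = (features - self.centers[labels]).pow(2).sum(dim=1).mean()
        return ce + self.lambda_center * center
```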

Pixel-wise body composition prediction with a multi-task conditional generative adversarial network

  • Wang, Q.
  • Xue, W.
  • Zhang, X.
  • Jin, F.
  • Hahn, J.
J Biomed Inform 2021 Journal Article, cited 0 times
Website
The analysis of human body composition plays a critical role in health management and disease prevention. However, current medical technologies to accurately assess body composition such as dual energy X-ray absorptiometry, computed tomography, and magnetic resonance imaging have the disadvantages of prohibitive cost or ionizing radiation. Recently, body shape based techniques using body scanners and depth cameras, have brought new opportunities for improving body composition estimation by intelligently analyzing body shape descriptors. In this paper, we present a multi-task deep neural network method utilizing a conditional generative adversarial network to predict the pixel level body composition using only 3D body surfaces. The proposed method can predict 2D subcutaneous and visceral fat maps in a single network with a high accuracy. We further introduce an interpreted patch discriminator which optimizes the texture accuracy of the 2D fat maps. The validity and effectiveness of our new method are demonstrated experimentally on TCIA and LiTS datasets. Our proposed approach outperforms competitive methods by at least 41.3% for the whole body fat percentage, 33.1% for the subcutaneous and visceral fat percentage, and 4.1% for the regional fat predictions.

Simultaneous encryption and compression of medical images based on optimized tensor compressed sensing with 3D Lorenz

  • Wang, Qingzhu
  • Chen, Xiaoming
  • Wei, Mengying
  • Miao, Zhuang
BioMedical Engineering OnLine 2016 Journal Article, cited 1 times
Website

Simulated MRI Artifacts: Testing Machine Learning Failure Modes

  • Wang, Nicholas C.
  • Noll, Douglas C.
  • Srinivasan, Ashok
  • Gagnon-Bartsch, Johann
  • Kim, Michelle M.
  • Rao, Arvind
2022 Journal Article, cited 0 times
Website
Objective. Seven types of MRI artifacts, including acquisition and preprocessing errors, were simulated to test a machine learning brain tumor segmentation model for potential failure modes. Introduction. Real-world medical deployments of machine learning algorithms are less common than the number of medical research papers using machine learning. Part of the gap between the performance of models in research and deployment comes from a lack of hard test cases in the data used to train a model. Methods. These failure modes were simulated for a pretrained brain tumor segmentation model that utilizes standard MRI and used to evaluate the performance of the model under duress. These simulated MRI artifacts consisted of motion, susceptibility induced signal loss, aliasing, field inhomogeneity, sequence mislabeling, sequence misalignment, and skull stripping failures. Results. The artifact with the largest effect was the simplest, sequence mislabeling, though motion, field inhomogeneity, and sequence misalignment also caused significant performance decreases. The model was most susceptible to artifacts affecting the FLAIR (fluid attenuation inversion recovery) sequence. Conclusion. Overall, these simulated artifacts could be used to test other brain MRI models, but this approach could be used across medical imaging applications.
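For illustration, one common way to simulate a motion-like artifact is to corrupt the phase of a subset of k-space lines; this is a generic sketch, not the authors' simulation code:

```python
import numpy as np

def simulate_motion(image, corrupted_fraction=0.2, max_shift_px=3.0, seed=0):
    """Apply random in-plane shifts to a fraction of k-space lines."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftshift(np.fft.fft2(image))  # 2D k-space of one slice
    n_rows = k.shape[0]
    rows = rng.choice(n_rows, int(corrupted_fraction * n_rows), replace=False)
    # A rigid in-plane shift multiplies k-space by a linear phase ramp.
    freq = np.fft.fftshift(np.fft.fftfreq(k.shape[1]))
    for r in rows:
        shift = rng.uniform(-max_shift_px, max_shift_px)
        k[r, :] *= np.exp(-2j * np.pi * freq * shift)
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))
```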

Proteogenomic and metabolomic characterization of human glioblastoma

  • Wang, Liang-Bo
  • Karpova, Alla
  • Gritsenko, Marina A
  • Kyle, Jennifer E
  • Cao, Song
  • Li, Yize
  • Rykunov, Dmitry
  • Colaprico, Antonio
  • Rothstein, Joseph H
  • Hong, Runyu
Cancer Cell 2021 Journal Article, cited 0 times
Website

Weighted Schatten p-norm minimization for impulse noise removal with TV regularization and its application to medical images

  • Wang, Li
  • Xiao, Di
  • Hou, Wen S.
  • Wu, Xiao Y.
  • Chen, Lin
Biomedical Signal Processing and Control 2021 Journal Article, cited 1 times
Website
Impulse noise is common in medical images. In this paper, we model the impulse-noise denoising problem by weighted Schatten p-norm minimization (WSNM) with robust principal component analysis (RPCA). Anisotropic total variation (TV) regularization is incorporated to preserve edge information, which is important for clinical detection and diagnosis. The alternating direction method of multipliers (ADMM) algorithm is adopted to solve the formulated nonconvex optimization problem. We tested the performance on both standard natural images and medical images with additive impulse noise at different levels. Experimental results show that the method is competitive with traditional denoising algorithms that have been validated as state-of-the-art. The proposed algorithm restores images with better preservation of structural information and outperforms conventional techniques in terms of visual appearance. Quantitative metrics (PSNR, SSIM and FSIM) further objectively demonstrate that the proposed algorithm removes impulse noise more effectively than existing methods.
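A plausible form of the objective, written in standard RPCA notation; the paper's exact weights and constraints may differ:

```latex
\min_{L,\,S}\; \|L\|_{w,S_p}^{p} \;+\; \lambda \|S\|_{1} \;+\; \mu\, \mathrm{TV}(L)
\quad \text{s.t.}\quad X = L + S,
```

where \|L\|_{w,S_p}^{p} = \sum_i w_i\,\sigma_i(L)^p is the weighted Schatten p-norm of the low-rank part L, S is the sparse (impulse-noise) component, and TV(L) = \|\nabla_x L\|_1 + \|\nabla_y L\|_1 is the anisotropic total variation; ADMM then alternates over subproblems for L and S.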

A multi-objective radiomics model for the prediction of locoregional recurrence in head and neck squamous cell cancer

  • Wang, K.
  • Zhou, Z.
  • Wang, R.
  • Chen, L.
  • Zhang, Q.
  • Sher, D.
  • Wang, J.
Med Phys 2020 Journal Article, cited 0 times
Website
PURPOSE: Locoregional recurrence (LRR) is the predominant pattern of relapse after nonsurgical treatment of head and neck squamous cell cancer (HNSCC). Therefore, accurately identifying patients with HNSCC who are at high risk for LRR is important for optimizing personalized treatment plans. In this work, we developed a multi-classifier, multi-objective, and multi-modality (mCOM) radiomics-based outcome prediction model for HNSCC LRR. METHODS: In mCOM, we considered sensitivity and specificity simultaneously as the objectives to guide the model optimization. We used multiple classifiers, comprising support vector machine (SVM), discriminant analysis (DA), and logistic regression (LR), to build the model. We used features from multiple modalities as model inputs, comprising clinical parameters and radiomics feature extracted from X-ray computed tomography (CT) images and positron emission tomography (PET) images. We proposed a multi-task multi-objective immune algorithm (mTO) to train the mCOM model and used an evidential reasoning (ER)-based method to fuse the output probabilities from different classifiers and modalities in mCOM. We evaluated the effectiveness of the developed method using a retrospective public pretreatment HNSCC dataset downloaded from The Cancer Imaging Archive (TCIA). The input for our model included radiomics features extracted from pretreatment PET and CT using an open source radiomics software and clinical characteristics such as sex, age, stage, primary disease site, human papillomavirus (HPV) status, and treatment paradigm. In our experiment, 190 patients from two institutions were used for model training while the remaining 87 patients from the other two institutions were used for testing. RESULTS: When we built the predictive model using features from single modality, the multi-classifier (MC) models achieved better performance over the models built with the three base-classifiers individually. When we built the model using features from multiple modalities, the proposed method achieved area under the receiver operating characteristic curve (AUC) values of 0.76 for the radiomics-only model, and 0.77 for the model built with radiomics and clinical features, which is significantly higher than the AUCs of models built with single-modality features. The statistical analysis was performed using MATLAB software. CONCLUSIONS: Comparisons with other methods demonstrated the efficiency of the mTO algorithm and the superior performance of the proposed mCOM model for predicting HNSCC LRR.

Breast cancer cell-derived microRNA-155 suppresses tumor progression via enhancing immune cell recruitment and anti-tumor function

  • Wang, Junfeng
  • Wang, Quanyi
  • Guan, Yinan
  • Sun, Yulu
  • Wang, Xiaozhi
  • Lively, Kaylie
  • Wang, Yuzhen
  • Luo, Ming
  • Kim, Julian A
  • Murphy, E Angela
2022 Journal Article, cited 0 times
Website

Deep learning based image reconstruction algorithm for limited-angle translational computed tomography

  • Wang, Jiaxi
  • Liang, Jun
  • Cheng, Jingye
  • Guo, Yumeng
  • Zeng, Li
PLoS One 2020 Journal Article, cited 0 times
Website

A Novel Brain Tumor Segmentation Approach Based on Deep Convolutional Neural Network and Level Set

  • Wang, Jingjing
  • Gao, Jun
  • Ren, Jinwen
  • Zhao, Yanhua
  • Zhang, Liren
2020 Conference Paper, cited 0 times
In recent years, deep convolutional neural networks (DCNNs) have achieved great success in brain tumor segmentation. However, segmentation results produced by DCNNs contain artifacts in the border region. To address this problem, we propose a hybrid model combining DCNNs and traditional segmentation methods. First, we use the U-Net and ResU-Net networks for coarse segmentation. To deepen the network and improve its performance, we add residual modules to U-Net to form ResU-Net. Second, we use a level set for fine segmentation of the tumor boundary, taking the intersection of the coarse segmentation outputs of U-Net and ResU-Net as the input of the level set module. The aim of taking this intersection is to obtain better initialization information for the level set algorithm and to accelerate the evolution of the level set functions. The proposed approach is validated on the BraTS 2018 challenge dataset. The metrics used to evaluate the segmentation results are Dice, specificity, sensitivity, and Hausdorff distance (HD). We compare our approach with U-Net, ResU-Net and several other methods. The experimental results indicate our approach outperforms these deep networks.
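A small sketch of the intersection-based level-set initialisation described above (hypothetical; the paper's exact pipeline may differ):

```python
import numpy as np
import scipy.ndimage as ndi

def level_set_init(mask_unet, mask_resunet):
    """Signed distance function from the intersection of two coarse masks."""
    seed = np.logical_and(mask_unet > 0.5, mask_resunet > 0.5)
    # Convention: phi < 0 inside the seed region, phi > 0 outside.
    inside = ndi.distance_transform_edt(seed)
    outside = ndi.distance_transform_edt(~seed)
    return outside - inside  # initial level-set function phi
```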

A diagnostic classification of lung nodules using multiple-scale residual network

  • Wang, H.
  • Zhu, H.
  • Ding, L.
  • Yang, K.
2023 Journal Article, cited 0 times
Website
Computed tomography (CT) scans have been shown to be an effective way of improving diagnostic efficacy and reducing lung cancer mortality. However, distinguishing benign from malignant nodules in CT imaging remains challenging. This study aims to develop a multiple-scale residual network (MResNet) to automatically and precisely extract the general features of lung nodules, and classify lung nodules based on deep learning. The MResNet aggregates the advantages of residual units and a pyramid pooling module (PPM) to learn key features and extract the general features for lung nodule classification. Specifically, the MResNet uses ResNet as a backbone network to learn contextual information and discriminate feature representation. Meanwhile, the PPM is used to fuse features at four different scales, from the coarse scale to the fine-grained scale, to obtain more general lung features of the CT image. MResNet had an accuracy of 99.12%, a sensitivity of 98.64%, a specificity of 97.87%, a positive predictive value (PPV) of 99.92%, and a negative predictive value (NPV) of 97.87% in the training set. Additionally, its area under the receiver operating characteristic curve (AUC) was 0.9998 (0.99976-0.99991). MResNet's accuracy, sensitivity, specificity, PPV, NPV, and AUC in the testing set were 85.23%, 92.79%, 72.89%, 84.56%, 86.34%, and 0.9275 (0.91662-0.93833), respectively. The developed MResNet performed exceptionally well in estimating the malignancy risk of pulmonary nodules found on CT. The model has the potential to provide reliable and reproducible malignancy risk scores for clinicians and radiologists, thereby optimizing lung cancer screening management.
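A minimal pyramid pooling module (PPM) in PyTorch for reference; the pooling scales and channel split are assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    """Pool features at several grid sizes and fuse them with the input."""
    def __init__(self, in_ch, scales=(1, 2, 3, 6)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(s),
                          nn.Conv2d(in_ch, in_ch // len(scales), 1))
            for s in scales
        ])
        self.project = nn.Conv2d(in_ch * 2, in_ch, 1)  # assumes in_ch % 4 == 0

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [x] + [
            F.interpolate(b(x), size=(h, w), mode='bilinear', align_corners=False)
            for b in self.branches
        ]
        return self.project(torch.cat(feats, dim=1))
```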

RECISTSup: Weakly-Supervised Lesion Volume Segmentation Using RECIST Measurement

  • Wang, H.
  • Yi, F.
  • Wang, J.
  • Yi, Z.
  • Zhang, H.
IEEE Trans Med Imaging 2022 Journal Article, cited 0 times
Website
Lesion volume segmentation in medical imaging is an effective tool for assessing lesion/tumor sizes and monitoring changes in growth. Since manual segmentation of lesion volume is not only time-consuming but also requires radiological experience, current practices rely on an imprecise surrogate called response evaluation criteria in solid tumors (RECIST). Although RECIST measurement is coarse compared with voxel-level annotation, it can reflect the lesion's location, length, and width, making it possible to segment lesion volume directly from RECIST measurement. In this study, a novel weakly-supervised method called RECISTSup is proposed to automatically segment lesion volume via RECIST measurement. Based on the RECIST measurement, a new RECIST measurement propagation algorithm is proposed to generate pseudo masks, which are then used to train the segmentation networks. Due to the spatial prior knowledge provided by RECIST measurement, two new losses are also designed to make full use of it. In addition, the automatically segmented lesion results are used to supervise the model training iteratively to further improve segmentation performance. A series of experiments are carried out on three datasets to evaluate the proposed method, including ablation experiments, comparison of various methods, annotation cost analyses, and visualization of results. Experimental results show that the proposed RECISTSup achieves state-of-the-art results compared with other weakly-supervised methods. The results also demonstrate that RECIST measurement can produce performance similar to voxel-level annotation while significantly reducing the annotation cost.
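For illustration, a RECIST measurement (long and short axes) on a slice can be turned into an elliptical pseudo-mask as below; the paper's propagation algorithm is more elaborate, so treat this as a hypothetical starting point:

```python
import numpy as np

def recist_ellipse_mask(shape, center, long_axis_px, short_axis_px, angle_rad):
    """Binary ellipse aligned with the RECIST long axis."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    y, x = yy - center[0], xx - center[1]
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    u = c * x + s * y    # coordinate along the long axis
    v = -s * x + c * y   # coordinate along the short axis
    return (u / (long_axis_px / 2)) ** 2 + (v / (short_axis_px / 2)) ** 2 <= 1.0
```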

Global and Local Multi-scale Feature Fusion Enhancement for Brain Tumor Segmentation and Pancreas Segmentation

  • Wang, Huan
  • Wang, Guotai
  • Liu, Zijian
  • Zhang, Shaoting
2020 Book Section, cited 0 times
Fully convolutional networks (FCNs) have been widely applied in numerous medical image segmentation tasks. However, tissue regions usually have large variations in shape and scale, so the ability of neural networks to learn multi-scale features is important to segmentation performance. In this paper, we improve multi-scale feature fusion in medical image segmentation by introducing two feature fusion modules: i) a global attention multi-scale feature fusion module (GMF); ii) a local dense multi-scale feature fusion module (LMF). GMF uses global context information to guide the recalibration of low-level features from both spatial and channel aspects, so as to enhance the utilization of effective multi-scale features and suppress the noise of low-level features. LMF adopts a bottom-up, top-down structure to capture context information, generate semantic features, and fuse feature information at different scales. LMF can integrate local dense multi-scale context features layer by layer in the network, thus improving the ability of the network to encode interdependent relationships among boundary pixels. Based on these two modules, we propose a novel medical image segmentation framework (GLF-Net). We evaluated the proposed network and modules on challenging brain tumor segmentation and pancreas segmentation datasets, and very competitive performance was achieved.

3D U-Net Based Brain Tumor Segmentation and Survival Days Prediction

  • Wang, Feifan
  • Jiang, Runzhou
  • Zheng, Liqin
  • Meng, Chun
  • Biswal, Bharat
2020 Book Section, cited 0 times
The past few years have witnessed the prevalence of deep learning in many application scenarios, among which is medical image processing. Diagnosis and treatment of brain tumors requires an accurate and reliable segmentation of brain tumors as a prerequisite. However, such work conventionally requires a significant amount of brain surgeons' time. Computer vision techniques could relieve surgeons of the tedious marking procedure. In this paper, a 3D U-Net based deep learning model has been trained with the help of brain-wise normalization and patching strategies for the brain tumor segmentation task in the BraTS 2019 competition. Dice coefficients for enhancing tumor, tumor core, and the whole tumor are 0.737, 0.807 and 0.894, respectively, on the validation dataset. These three values on the test dataset are 0.778, 0.798 and 0.852. Furthermore, numerical features, including the ratio of tumor size to brain size and the area of the tumor surface, as well as the age of subjects, are extracted from predicted tumor labels and used for the overall survival days prediction task. The accuracy was 0.448 on the validation dataset, and 0.551 on the final test dataset.
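A rough sketch of the handcrafted survival features mentioned above (tumour-to-brain volume ratio and tumour surface area); the surface estimate and spacing handling are crude assumptions:

```python
import numpy as np
from scipy import ndimage

def survival_features(tumor_mask, brain_mask, voxel_volume_mm3=1.0):
    """Volume ratio plus an approximate surface area from boundary voxels.

    Both masks are boolean 3D arrays on the same grid.
    """
    ratio = tumor_mask.sum() / max(brain_mask.sum(), 1)
    # Surface voxels: tumour voxels with at least one non-tumour neighbour.
    eroded = ndimage.binary_erosion(tumor_mask)
    n_surface = (tumor_mask & ~eroded).sum()
    surface_area = n_surface * voxel_volume_mm3 ** (2 / 3)  # crude approximation
    return ratio, surface_area
```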

Robust High-dimensional Bioinformatics Data Streams Mining by ODR-ioVFDT

  • Wang, Dantong
  • Fong, Simon
  • Wong, Raymond K
  • Mohammed, Sabah
  • Fiaidhi, Jinan
  • Wong, Kelvin KL
Sci Rep 2017 Journal Article, cited 3 times
Website

Improving Generalizability in Limited-Angle CT Reconstruction with Sinogram Extrapolation

  • Wang, Ce
  • Zhang, Haimiao
  • Li, Qian
  • Shang, Kun
  • Lyu, Yuanyuan
  • Dong, Bin
  • Zhou, S. Kevin
2021 Conference Paper, cited 1 times
Website
Computed tomography (CT) reconstruction from X-ray projections acquired within a limited angle range is challenging, especially when the angle range is extremely small. Both analytical and iterative models need more projections for effective modeling. Deep learning methods have gained prevalence due to their excellent reconstruction performance, but such success is mainly limited to within the same dataset and does not generalize across datasets with different distributions. Hereby we propose ExtraPolationNetwork for limited-angle CT reconstruction via the introduction of a sinogram extrapolation module, which is theoretically justified. The module complements extra sinogram information and boosts model generalizability. Extensive experimental results show that our reconstruction model achieves state-of-the-art performance on the NIH-AAPM dataset, similar to existing approaches. More importantly, we show that using such a sinogram extrapolation module significantly improves the generalization capability of the model on unseen datasets (e.g., the COVID-19 and LIDC datasets) when compared to existing approaches. Keywords: limited-angle CT reconstruction; sinogram extrapolation; model generalizability.

An effective deep network for automatic segmentation of complex lung tumors in CT images

  • Wang, B.
  • Chen, K.
  • Tian, X.
  • Yang, Y.
  • Zhang, X.
Med Phys 2021 Journal Article, cited 0 times
Website
PURPOSE: Accurate segmentation of complex tumors in lung computed tomography (CT) images is essential to improve the effectiveness and safety of lung cancer treatment. However, the characteristics of heterogeneity, blurred boundaries, and large-area adhesion to tissues with similar gray-scale features always make the segmentation of complex tumors difficult. METHODS: This study proposes an effective deep network for the automatic segmentation of complex lung tumors (CLT-Net). The network architecture uses an encoder-decoder model that combines long and short skip connections and a global attention unit to identify target regions using multiscale semantic information. A boundary-aware loss function integrating Tversky loss and boundary loss based on the level-set calculation is designed to improve the network's ability to perceive boundary positions of difficult-to-segment (DTS) tumors. We use a dynamic weighting strategy to balance the contributions of the two parts of the loss function. RESULTS: The proposed method was verified on a dataset consisting of 502 lung CT images containing DTS tumors. The experiments show that the Dice similarity coefficient and Hausdorff distance metric of the proposed method are improved by 13.2% and 8.5% on average, respectively, compared with state-of-the-art segmentation models. Furthermore, we selected three additional medical image datasets with different modalities to evaluate the proposed model. Compared with mainstream architectures, the Dice similarity coefficient is also improved to a certain extent, which demonstrates the effectiveness of our method for segmenting medical images. CONCLUSIONS: Quantitative and qualitative results show that our method outperforms current mainstream lung tumor segmentation networks in terms of Dice similarity coefficient and Hausdorff distance. Note that the proposed method is not limited to the segmentation of complex lung tumors but also performs in different modalities of medical image segmentation.
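A hedged sketch of a boundary-aware loss combining Tversky and level-set boundary terms with a dynamic weight; the paper's exact formulation and schedule are not reproduced here:

```python
import torch

def tversky_loss(prob, target, alpha=0.7, beta=0.3, eps=1e-6):
    """Tversky index trades off false negatives (alpha) and positives (beta)."""
    tp = (prob * target).sum()
    fn = ((1 - prob) * target).sum()
    fp = (prob * (1 - target)).sum()
    return 1 - (tp + eps) / (tp + alpha * fn + beta * fp + eps)

def boundary_loss(prob, sdf):
    # sdf: precomputed signed distance map of the ground truth (negative inside).
    return (prob * sdf).mean()

def boundary_aware_loss(prob, target, sdf, epoch, max_epochs):
    w = min(epoch / max_epochs, 1.0)  # shift weight toward the boundary term
    return (1 - w) * tversky_loss(prob, target) + w * boundary_loss(prob, sdf)
```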

Methylation of L1RE1, RARB, and RASSF1 function as possible biomarkers for the differential diagnosis of lung cancer

  • Walter, RFH
  • Rozynek, P
  • Casjens, S
  • Werner, R
  • Mairinger, FD
  • Speel, EJM
  • Zur Hausen, A
  • Meier, S
  • Wohlschlaeger, J
  • Theegarten, D
PLoS One 2018 Journal Article, cited 1 times
Website

Segmentation of 71 Anatomical Structures Necessary for the Evaluation of Guideline-Conforming Clinical Target Volumes in Head and Neck Cancers

  • Walter, A.
  • Hoegen-Sassmannshausen, P.
  • Stanic, G.
  • Rodrigues, J. P.
  • Adeberg, S.
  • Jakel, O.
  • Frank, M.
  • Giske, K.
Cancers (Basel) 2024 Journal Article, cited 0 times
Website
The delineation of the clinical target volumes (CTVs) for radiation therapy is time-consuming, requires intensive training and shows high inter-observer variability. Supervised deep-learning methods depend heavily on consistent training data; thus, State-of-the-Art research focuses on making CTV labels more homogeneous and strictly bounding them to current standards. International consensus expert guidelines standardize CTV delineation by conditioning the extension of the clinical target volume on the surrounding anatomical structures. Training strategies that directly follow the construction rules given in the expert guidelines or the possibility of quantifying the conformance of manually drawn contours to the guidelines are still missing. Seventy-one anatomical structures that are relevant to CTV delineation in head- and neck-cancer patients, according to the expert guidelines, were segmented on 104 computed tomography scans, to assess the possibility of automating their segmentation by State-of-the-Art deep learning methods. All 71 anatomical structures were subdivided into three subsets of non-overlapping structures, and a 3D nnU-Net model with five-fold cross-validation was trained for each subset, to automatically segment the structures on planning computed tomography scans. We report the DICE, Hausdorff distance and surface DICE for 71 + 5 anatomical structures, for most of which no previous segmentation accuracies have been reported. For those structures for which prediction values have been reported, our segmentation accuracy matched or exceeded the reported values. The predictions from our models were always better than those predicted by the TotalSegmentator. The sDICE with 2 mm margin was larger than 80% for almost all the structures. Individual structures with decreased segmentation accuracy are analyzed and discussed with respect to their impact on the CTV delineation following the expert guidelines. No deviation is expected to affect the rule-based automation of the CTV delineation.

Muscle and adipose tissue segmentations at the third cervical vertebral level in patients with head and neck cancer

  • Wahid, K. A.
  • Olson, B.
  • Jain, R.
  • Grossberg, A. J.
  • El-Habashy, D.
  • Dede, C.
  • Salama, V.
  • Abobakr, M.
  • Mohamed, A. S. R.
  • He, R.
  • Jaskari, J.
  • Sahlsten, J.
  • Kaski, K.
  • Fuller, C. D.
  • Naser, M. A.
Sci Data 2022 Journal Article, cited 0 times
Website
The accurate determination of sarcopenia is critical for disease management in patients with head and neck cancer (HNC). Quantitative determination of sarcopenia is currently dependent on manually-generated segmentations of skeletal muscle derived from computed tomography (CT) cross-sectional imaging. This has prompted the increasing utilization of machine learning models for automated sarcopenia determination. However, extant datasets currently do not provide the necessary manually-generated skeletal muscle segmentations at the C3 vertebral level needed for building these models. In this data descriptor, a set of 394 HNC patients were selected from The Cancer Imaging Archive, and their skeletal muscle and adipose tissue was manually segmented at the C3 vertebral level using sliceOmatic. Subsequently, using publicly disseminated Python scripts, we generated corresponding segmentations files in Neuroimaging Informatics Technology Initiative format. In addition to segmentation data, additional clinical demographic data germane to body composition analysis have been retrospectively collected for these patients. These data are a valuable resource for studying sarcopenia and body composition analysis in patients with HNC.

Ultralow-parameter denoising: Trainable bilateral filter layers in computed tomography

  • Wagner, F.
  • Thies, M.
  • Gu, M.
  • Huang, Y.
  • Pechmann, S.
  • Patwari, M.
  • Ploner, S.
  • Aust, O.
  • Uderhardt, S.
  • Schett, G.
  • Christiansen, S.
  • Maier, A.
Med Phys 2022 Journal Article, cited 1 times
Website
BACKGROUND: Computed tomography (CT) is widely used as an imaging tool to visualize three-dimensional structures with expressive bone-soft tissue contrast. However, CT resolution can be severely degraded through low-dose acquisitions, highlighting the importance of effective denoising algorithms. PURPOSE: Most data-driven denoising techniques are based on deep neural networks, and therefore, contain hundreds of thousands of trainable parameters, making them incomprehensible and prone to prediction failures. Developing understandable and robust denoising algorithms achieving state-of-the-art performance helps to minimize radiation dose while maintaining data integrity. METHODS: This work presents an open-source CT denoising framework based on the idea of bilateral filtering. We propose a bilateral filter that can be incorporated into any deep learning pipeline and optimized in a purely data-driven way by calculating the gradient flow toward its hyperparameters and its input. Denoising in pure image-to-image pipelines and across different domains such as raw detector data and reconstructed volume, using a differentiable backprojection layer, is demonstrated. In contrast to other models, our bilateral filter layer consists of only four trainable parameters and constrains the applied operation to follow the traditional bilateral filter algorithm by design. RESULTS: Although only using three spatial parameters and one intensity range parameter per filter layer, the proposed denoising pipelines can compete with deep state-of-the-art denoising architectures with several hundred thousand parameters. Competitive denoising performance is achieved on x-ray microscope bone data and the 2016 Low Dose CT Grand Challenge data set. We report structural similarity index measures of 0.7094 and 0.9674 and peak signal-to-noise ratio values of 33.17 and 43.07 on the respective data sets. CONCLUSIONS: Due to the extremely low number of trainable parameters with well-defined effect, prediction reliance and data integrity is guaranteed at any time in the proposed pipelines, in contrast to most other deep learning-based denoising architectures.
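For reference, the classic bilateral filter that such a layer parameterises can be written as follows; with an anisotropic spatial kernel, the three spatial widths plus the one intensity range width give the four trainable parameters per layer mentioned in the abstract:

```latex
\hat{I}(p) = \frac{1}{W_p} \sum_{q \in \mathcal{N}(p)}
  \exp\!\left(-\sum_{d \in \{x,y,z\}} \frac{(p_d - q_d)^2}{2\sigma_d^2}\right)
  \exp\!\left(-\frac{\left(I(p) - I(q)\right)^2}{2\sigma_r^2}\right) I(q),
```

where W_p is the sum of the same weights over the neighbourhood \mathcal{N}(p), \sigma_x, \sigma_y, \sigma_z are the spatial widths, and \sigma_r is the intensity range width.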

Transfer Learning for Brain Tumor Segmentation

  • Wacker, Jonas
  • Ladeira, Marcelo
  • Nascimento, Jose Eduardo Vaz
2021 Book Section, cited 0 times
Gliomas are the most common malignant brain tumors that are treated with chemoradiotherapy and surgery. Magnetic Resonance Imaging (MRI) is used by radiotherapists to manually segment brain lesions and to observe their development throughout the therapy. The manual image segmentation process is time-consuming and results tend to vary among different human raters. Therefore, there is a substantial demand for automatic image segmentation algorithms that produce a reliable and accurate segmentation of various brain tissue types. Recent advances in deep learning have led to convolutional neural network architectures that excel at various visual recognition tasks. They have been successfully applied to the medical context including medical image segmentation. In particular, fully convolutional networks (FCNs) such as the U-Net produce state-of-the-art results in the automatic segmentation of brain tumors. MRI brain scans are volumetric and exist in various co-registered modalities that serve as input channels for these FCN architectures. Training algorithms for brain tumor segmentation on this complex input requires large amounts of computational resources and is prone to overfitting. In this work, we construct FCNs with pretrained convolutional encoders. We show that we can stabilize the training process this way and achieve an improvement with respect to dice scores and Hausdorff distances. We also test our method on a privately obtained clinical dataset.

Deep Learning Based Approach for Multiple Myeloma Detection

  • Vyshnav, M.T.
  • Sowmya, V.
  • Gopalakrishnan, E.A.
  • Variyar V.V., Sajith
  • Krishna Menon, Vijay
  • Soman, K.P.
2020 Conference Paper, cited 2 times
Website
Multiple myeloma is a cancer caused by the abnormal growth of plasma cells in the bone marrow. The most commonly used method for diagnosing multiple myeloma is bone marrow aspiration, where the aspirate slide images are either inspected visually or passed to existing digital image processing software for the detection of myeloma cells. The current work explores the effectiveness of deep learning based object detection/segmentation algorithms, such as Mask-RCNN and U-Net, for the detection of multiple myeloma. Manual polygon annotation of the dataset was performed using the VGG image annotation software. The deep learning models were trained by monitoring the training and validation loss per epoch, and the best model was selected based on the minimal loss on the validation data. Comparing the two models, Mask-RCNN yields more competitive results than U-Net and addresses most of the challenges in multiple myeloma segmentation.

Quantification of the spatial distribution of primary tumors in the lung to develop new prognostic biomarkers for locally advanced NSCLC

  • Vuong, Diem
  • Bogowicz, Marta
  • Wee, Leonard
  • Riesterer, Oliver
  • Vlaskou Badra, Eugenia
  • D’Cruz, Louisa Abigail
  • Balermpas, Panagiotis
  • van Timmeren, Janita E.
  • Burgermeister, Simon
  • Dekker, André
  • De Ruysscher, Dirk
  • Unkelbach, Jan
  • Thierstein, Sandra
  • Eboulet, Eric I.
  • Peters, Solange
  • Pless, Miklos
  • Guckenberger, Matthias
  • Tanadini-Lang, Stephanie
Sci Rep 2021 Journal Article, cited 0 times
Website
The anatomical location and extent of primary lung tumors have shown prognostic value for overall survival (OS). However, their manual assessment is prone to interobserver variability. This study aims at data-driven identification of image characteristics for OS in locally advanced non-small cell lung cancer (NSCLC) patients. Five stage IIIA/IIIB NSCLC patient cohorts were retrospectively collected. Patients were treated either with radiochemotherapy (RCT): RCT1* (n = 107), RCT2 (n = 95), RCT3 (n = 37) or with surgery combined with radiotherapy or chemotherapy: S1* (n = 135), S2 (n = 55). Based on a deformable image registration (MIM Vista, 6.9.2.), in-house developed software transferred each primary tumor to the CT scan of a reference patient while maintaining the original tumor shape. A frequency-weighted cumulative status map was created for both exploratory cohorts (indicated with an asterisk), where the spatial extent of the tumor was uniformly labeled with 2-year OS. For the exploratory cohorts, a permutation test with random assignment of patient status was performed to identify regions with statistically significantly worse OS, referred to as decreased survival areas (DSA). The minimal Euclidean distance between the primary tumor and the DSA was extracted from the independent cohorts (negative distance in case of overlap). To account for the tumor volume, the distance was scaled with the radius of the volume-equivalent sphere. For the S1 cohort, DSA were located at the right main bronchus whereas for the RCT1 cohort they further extended in the cranio-caudal direction. In the independent cohorts, the model based on distance to DSA achieved performance: AUC(RCT2) [95% CI] = 0.67 [0.55–0.78] and AUC(RCT3) = 0.59 [0.39–0.79] for RCT patients, but showed poor performance for the surgery cohort (AUC(S2) = 0.52 [0.30–0.74]). Shorter distance to DSA was associated with worse outcome (p = 0.0074). In conclusion, this exploratory analysis quantifies the value of primary tumor location for OS prediction based on cumulative status maps. Shorter distance of the primary tumor to a high-risk region was associated with worse prognosis in the RCT cohort.
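The scaling step described above corresponds to normalising the tumour-to-DSA distance by the radius of the volume-equivalent sphere:

```latex
r = \left(\frac{3V}{4\pi}\right)^{1/3}, \qquad
d_{\mathrm{scaled}} = \frac{d}{r},
```

where V is the primary tumour volume and d is the (possibly negative, in case of overlap) minimal Euclidean distance from the tumour to the decreased survival area.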

Multi-decoder Networks with Multi-denoising Inputs for Tumor Segmentation

  • Vu, Minh H.
  • Nyholm, Tufve
  • Löfstedt, Tommy
2021 Book Section, cited 0 times
Automatic segmentation of brain glioma from multimodal MRI scans plays a key role in clinical trials and practice. Unfortunately, manual segmentation is very challenging, time-consuming, costly, and often inaccurate despite human expertise due to the high variance and high uncertainty in the human annotations. In the present work, we develop an end-to-end deep-learning-based segmentation method using a multi-decoder architecture by jointly learning three separate sub-problems using a partly shared encoder. We also propose to apply smoothing methods to the input images to generate denoised versions as additional inputs to the network. The validation performance indicates an improvement when using the proposed method. The proposed method was ranked 2nd in the task of Quantification of Uncertainty in Segmentation in the Brain Tumors in Multimodal Magnetic Resonance Imaging Challenge 2020.

TuNet: End-to-End Hierarchical Brain Tumor Segmentation Using Cascaded Networks

  • Vu, Minh H.
  • Nyholm, Tufve
  • Löfstedt, Tommy
2020 Book Section, cited 0 times
Glioma is one of the most common types of brain tumors; it arises in the glial cells in the human brain and in the spinal cord. In addition to having a high mortality rate, glioma treatment is also very expensive. Hence, automatic and accurate segmentation and measurement from the early stages are critical in order to prolong the survival rates of the patients and to reduce the costs of the treatment. In the present work, we propose a novel end-to-end cascaded network for semantic segmentation in the Brain Tumors in Multimodal Magnetic Resonance Imaging Challenge 2019 that utilizes the hierarchical structure of the tumor sub-regions with ResNet-like blocks and Squeeze-and-Excitation modules after each convolution and concatenation block. By utilizing cross-validation, an average ensemble technique, and a simple post-processing technique, we obtained dice scores of 88.06, 80.84, and 80.29, and Hausdorff Distances (95th percentile) of 6.10, 5.17, and 2.21 for the whole tumor, tumor core, and enhancing tumor, respectively, on the online test set. The proposed method was ranked among the top in the task of Quantification of Uncertainty in Segmentation.

Auto‐segmentation of organs at risk for head and neck radiotherapy planning: from atlas‐based to deep learning methods

  • Vrtovec, Tomaž
  • Močnik, Domen
  • Strojan, Primož
  • Pernuš, Franjo
  • Ibragimov, Bulat
Medical Physics 2020 Journal Article, cited 2 times
Website

Mobile-based Application for COVID-19 Detection from Lung X-Ray Scans with Artificial Neural Networks (ANN)

  • Vong, Chanvichet
  • Chanchotisatien, Passara
2022 Conference Paper, cited 0 times
Website
In early 2020, the World Health Organization (WHO) identified a novel coronavirus referred to as SARS-CoV-2, which is associated with the now commonly known COVID-19 disease. COVID-19 was shortly afterwards characterized as a pandemic. All countries around the globe have been severely affected, and the disease has accumulated a total of over 200 million cases and more than five million deaths in the past two years. Symptoms associated with COVID-19 vary greatly in severity. Some infected with COVID-19 are asymptomatic, while others experience critical disease with life-threatening complications. In this paper, a mobile-based application has been created to help classify lungs as COVID-19 or non-COVID-19 when given a chest X-ray (CXR) image. A variety of artificial neural networks (ANNs), including our baseline model, InceptionV3, MobileNetV2, MobileNetV3, VGG16, and VGG19, were tested to see which would provide the optimal results. It is concluded that MobileNetV3 gives the best test accuracy of 95.49% and is a lightweight model suitable for a mobile-based application.

Iron commensalism of mesenchymal glioblastoma promotes ferroptosis susceptibility upon dopamine treatment

  • Vo, Vu T. A.
  • Kim, Sohyun
  • Hua, Tuyen N. M.
  • Oh, Jiwoong
  • Jeong, Yangsik
Communications Biology 2022 Journal Article, cited 0 times
The heterogeneity of glioblastoma multiforme (GBM) leads to poor patient prognosis. Here, we aim to investigate the mechanism through which GBM heterogeneity is coordinated to promote tumor progression. We find that proneural (PN)-GBM stem cells (GSCs) secrete dopamine (DA) and transferrin (TF), inducing the proliferation of mesenchymal (MES)-GSCs and enhancing their susceptibility toward ferroptosis. PN-GSC-derived TF stimulates MES-GSC proliferation in an iron-dependent manner. DA acts in an autocrine manner on PN-GSC growth in a DA receptor D1-dependent fashion, while in a paracrine manner it induces TF receptor 1 expression in MES-GSCs to assist iron uptake and thus enhance ferroptotic vulnerability. Analysis of public datasets reveals a worse prognosis for patients with heterogeneous GBM with high iron uptake than for those with other GBM subtypes. Collectively, the findings here provide evidence of commensalism symbiosis that causes MES-GSCs to become iron-addicted, which in turn provides a rationale for targeting ferroptosis to treat resistant MES GBM.

Inter-rater agreement in glioma segmentations on longitudinal MRI

  • Visser, M.
  • Muller, D. M. J.
  • van Duijn, R. J. M.
  • Smits, M.
  • Verburg, N.
  • Hendriks, E. J.
  • Nabuurs, R. J. A.
  • Bot, J. C. J.
  • Eijgelaar, R. S.
  • Witte, M.
  • van Herk, M. B.
  • Barkhof, F.
  • de Witt Hamer, P. C.
  • de Munck, J. C.
Neuroimage Clin 2019 Journal Article, cited 0 times
Website
BACKGROUND: Tumor segmentation of glioma on MRI is a technique to monitor, quantify and report disease progression. Manual MRI segmentation is the gold standard but very labor-intensive. At present the quality of this gold standard is not known for different stages of the disease, and prior work has mainly focused on treatment-naive glioblastoma. In this paper we studied the inter-rater agreement of manual MRI segmentation of glioblastoma and WHO grade II-III glioma for novices and experts at three stages of disease. We also studied the impact of inter-observer variation on extent of resection and growth rate. METHODS: In 20 patients with WHO grade IV glioblastoma and 20 patients with WHO grade II-III glioma (defined as non-glioblastoma) both the enhancing and non-enhancing tumor elements were segmented on MRI, using specialized software, by four novices and four experts before surgery, after surgery and at the time of tumor progression. We used the generalized conformity index (GCI) and the intra-class correlation coefficient (ICC) of tumor volume as main outcome measures for inter-rater agreement. RESULTS: For glioblastoma, segmentations by experts and novices were comparable. The inter-rater agreement of enhancing tumor elements was excellent before surgery (GCI 0.79, ICC 0.99), poor after surgery (GCI 0.32, ICC 0.92), and good at progression (GCI 0.65, ICC 0.91). For non-glioblastoma, the inter-rater agreement was generally higher between experts than between novices. The inter-rater agreement was excellent between experts before surgery (GCI 0.77, ICC 0.92), was reasonable after surgery (GCI 0.48, ICC 0.84), and good at progression (GCI 0.60, ICC 0.80). The inter-rater agreement was good between novices before surgery (GCI 0.66, ICC 0.73), was poor after surgery (GCI 0.33, ICC 0.55), and poor at progression (GCI 0.36, ICC 0.73). Further analysis showed that the lower inter-rater agreement of segmentation on postoperative MRI could only partly be explained by the smaller volumes and fragmentation of residual tumor. The median interquartile range of extent of resection between raters was 8.3% and of growth rate was 0.22 mm/year. CONCLUSION: Manual tumor segmentations on MRI have reasonable agreement for use in spatial and volumetric analysis. Agreement in spatial overlap is of concern with segmentation after surgery for glioblastoma and with segmentation of non-glioblastoma by non-experts.

3D Deep Learning for Anatomical Structure Segmentation in Multiple Imaging Modalities

  • Villarini, B.
  • Asaturyan, H.
  • Kurugol, S.
  • Afacan, O.
  • Bell, J. D.
  • Thomas, E. L.
2021 Journal Article, cited 3 times
Website
Accurate, quantitative segmentation of anatomical structures in radiological scans, such as Magnetic Resonance Imaging (MRI) and Computer Tomography (CT), can produce significant biomarkers and can be integrated into computer-aided assisted diagnosis (CADx) systems to support the interpretation of medical images from multi-protocol scanners. However, there are serious challenges towards developing robust automated segmentation techniques, including high variations in anatomical structure and size, the presence of edge-based artefacts, and heavy un-controlled breathing that can produce blurred motion-based artefacts. This paper presents a novel computing approach for automatic organ and muscle segmentation in medical images from multiple modalities by harnessing the advantages of deep learning techniques in a two-part process: (1) a 3D encoder-decoder, Rb-UNet, builds a localisation model and a 3D Tiramisu network generates a boundary-preserving segmentation model for each target structure; (2) the fully trained Rb-UNet predicts a 3D bounding box encapsulating the target structure of interest, after which the fully trained Tiramisu model performs segmentation to reveal detailed organ or muscle boundaries. The proposed approach is evaluated on six different datasets, including MRI, Dynamic Contrast Enhanced (DCE) MRI and CT scans targeting the pancreas, liver, kidneys and psoas-muscle and achieves quantitative measures of mean Dice similarity coefficient (DSC) that surpass or are comparable with the state-of-the-art. A qualitative evaluation performed by two independent radiologists verified the preservation of detailed organ and muscle boundaries.
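A schematic of the two-stage localise-then-segment pipeline, with placeholder callables rather than the authors' Rb-UNet/Tiramisu implementations:

```python
import numpy as np

def two_stage_segment(volume, localiser, segmenter, margin=8):
    """Stage 1 predicts a coarse mask; stage 2 segments inside its bounding box."""
    coarse = localiser(volume) > 0.5          # coarse localisation mask
    zyx = np.argwhere(coarse)
    lo = np.maximum(zyx.min(axis=0) - margin, 0)
    hi = np.minimum(zyx.max(axis=0) + margin + 1, volume.shape)
    crop = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    fine = np.zeros(volume.shape, dtype=float)
    fine[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = segmenter(crop)
    return fine
```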

Multi-label Classification in the Automatic Annotation of Solitary Pulmonary Nodules

  • Villani, Leonardo
  • Prati, Ronaldo Cristiano
2012 Conference Proceedings, cited 0 times

An intelligent lung tumor diagnosis system using whale optimization algorithm and support vector machine

  • Vijh, Surbhi
  • Gaur, Deepak
  • Kumar, Sushil
International Journal of System Assurance Engineering and Management 2019 Journal Article, cited 0 times
Medical image processing techniques are widely used for tumor detection to increase the survival rate of patients. The development of computer-aided diagnosis systems shows improvement in observing medical images and determining treatment stages. Earlier detection of tumors reduces the mortality of lung cancer by increasing the probability of successful treatment. In this paper, an intelligent lung tumor diagnosis system is developed using various image processing techniques. The pipeline involves image enhancement, image segmentation, post-processing, feature extraction, feature selection, and classification using support vector machine (SVM) kernels. The gray-level co-occurrence matrix (GLCM) method is used to extract 19 texture and statistical features from lung computed tomography (CT) images. The whale optimization algorithm (WOA) is used to select the most prominent feature subset. The contribution of this paper is the development of WOA_SVM to automate the aided diagnosis system, determining whether a lung CT image is normal or abnormal. An improved technique is developed using the whale optimization algorithm for optimal feature selection to obtain accurate results and construct a robust model. The performance of the proposed methodology is evaluated using accuracy, sensitivity and specificity, obtained as 95%, 100% and 92% with a radial basis function (RBF) support vector kernel.
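A hedged sketch of a GLCM-feature + SVM pipeline of this kind, using scikit-image and scikit-learn; the feature list and parameters below are assumptions (the paper extracts 19 features and selects among them with WOA):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(image_u8):
    """A few GLCM texture features from an 8-bit CT slice."""
    glcm = graycomatrix(image_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ('contrast', 'homogeneity', 'energy', 'correlation')
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# X: stacked feature vectors per image; y: normal/abnormal labels.
# clf = SVC(kernel='rbf').fit(X, y)   # RBF kernel as in the abstract
```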

Hyperthermia by Low Intensity Focused Ultrasound

  • Vielma, Manuel
  • Wahl, David
  • Wahl, François
2023 Conference Proceedings, cited 0 times
Website
We present the results of simulations of heating by low-intensity (non-ablating) focused ultrasound. The simulations are aimed at modelling hyperthermia treatment of organs affected by cancer [1] – particularly the prostate. The studies were carried out with the objective of developing low-cost medical devices for use in low- and middle-income countries (LMIC). Our innovation has been to favor the use of free and open-source tools, combining them so as to achieve realistic representations of the relevant tissue layers, regarding their geometric as well as their acoustic and thermal properties. The combination of tools we have selected are available to researchers in LMIC, to favor the emergence local research initiatives. To achieve precision in the shapes and locations of the models, we performed segmentation of Computed Tomography scan images obtained from public databases. The 3D representations thus generated were then inputted as voxelized matrix regions in a calculation grid of pressure field and heat simulations - using open source MATLAB® packages. We report on the results of simulations using this combination of software tools.

Detecting pulmonary diseases using deep features in X-ray images

  • Vieira, P.
  • Sousa, O.
  • Magalhaes, D.
  • Rabelo, R.
  • Silva, R.
Pattern Recognition 2021 Journal Article, cited 0 times
Website
COVID-19 leads to radiological evidence of lower respiratory tract lesions, which supports analysis to screen this disease using chest X-ray. In this scenario, deep learning techniques are applied to detect COVID-19 pneumonia in X-ray images, aiding a fast and precise diagnosis. Here, we investigate seven deep learning architectures associated with data augmentation and transfer learning techniques to detect different pneumonia types. We also propose an image resizing method with the maximum window function that preserves anatomical structures of the chest. The results are promising, reaching an accuracy of 99.8% considering COVID-19, normal, and viral and bacterial pneumonia classes. The differentiation between viral pneumonia and COVID-19 achieved an accuracy of 99.8%, and 99.9% accuracy between COVID-19 and bacterial pneumonia. We also evaluated the impact of the proposed image resizing method on classification performance compared with bilinear interpolation; this pre-processing increased the classification rate regardless of the deep learning architecture used. We compared our results with ten related works in the state-of-the-art using eight sets of experiments, which showed that the proposed method outperformed them in most cases. Therefore, we demonstrate that deep learning models trained with pre-processed X-ray images could precisely assist the specialist in COVID-19 detection.
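One plausible reading of the maximum-window resizing is a block-wise maximum that keeps thin bright structures when downsampling; this is an assumption, not necessarily the paper's exact definition:

```python
import numpy as np

def max_window_resize(img, factor):
    """Downsample by taking the maximum over each factor x factor window."""
    h, w = img.shape
    h2, w2 = h - h % factor, w - w % factor   # crop to a multiple of factor
    blocks = img[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.max(axis=(1, 3))
```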

Novel Framework for Breast Cancer Classification for Retaining Computational Efficiency and Precise Diagnosis

  • Vidya, K
  • Kurian, MZ
Communications Applied Electronics 2018 Journal Article, cited 0 times
Website
Classification of breast cancer is still an open challenge in medical image processing. A review of the existing literature shows that existing solutions focus more on classification accuracy and less on achieving computational efficiency in the classification process. Therefore, this paper presents a novel classification approach that bridges the trade-off between the computational performance of the classifier and its final response to disease criticality. An analytical framework is built that takes magnetic resonance imaging (MRI) of breast cancer as input, which is subjected to a non-linear map-based filter to enhance the pre-processing operation. The algorithm also offers a novel integral transformation scheme that transforms the filtered image, followed by precise extraction of foreground and background to assist reliable classification. A statistical approach is used for feature extraction, followed by classification using an unsupervised learning algorithm. The study outcome shows superior performance compared to existing classification schemes.

Using Radiomics to improve the 2-year survival of Non-Small Cell Lung Cancer Patients

  • Vial, Alanna Heather Therese
2022 Thesis, cited 0 times
Website
This thesis both exploits and contributes enhancements to the use of radiomics (extracted quantitative features of radiological imaging data) for improving cancer survival prediction. Several machine learning methods were compared in this analysis, including but not limited to support vector machines, convolutional neural networks and logistic regression. A technique is developed for analysing prognostic image characteristics of non-small cell lung cancer based on the tumour edge regions and the tissues immediately surrounding visible tumours. Regions external to and neighbouring a tumour were shown to also have prognostic value. By using the additional texture features, an increase in accuracy of 3% over previous approaches for predicting two-year survival is shown when the outside rind tissue including the tumour is examined, compared with the volume without the rind. This indicates that while the centre of the tumour is currently the main clinical target for radiotherapy treatment, the tissue immediately around the tumour is also clinically important for survival analysis. Further, improved prediction was found up to some 6 pixels outside the tumour volume, a distance of approximately 5 mm outside the original gross tumour volume (GTV), when applying a support vector machine, which achieved the highest accuracy of 71.18%. This research indicates the periphery of the tumour is highly predictive of survival. To our knowledge this is the first study that has concentrically expanded and analysed the NSCLC rind for radiomic analysis.

Assessing the prognostic impact of 3D CT image tumour rind texture features on lung cancer survival modelling

  • Vial, A.
  • Stirling, D.
  • Field, M.
  • Ros, M.
  • Ritz, C.
  • Carolan, M
  • Holloway, L.
  • Miller, A. A.
2017 Conference Paper, cited 1 times
Website
In this paper we examine a technique for developing prognostic image characteristics, termed radiomics, for non-small cell lung cancer based on a tumour edge region-based analysis. Texture features were extracted from the rind of the tumour in a publicly available 3D CT data set to predict two-year survival. The derived models were compared against the previous method of training radiomic signatures that are descriptive of the whole tumour volume. Radiomic features derived solely from regions external to, but neighbouring, the tumour were shown to also have prognostic value. By using additional texture features, an increase in accuracy of 3% over previous approaches for predicting two-year survival is shown when examining the rind together with the tumour volume compared to the volume without the rind. This indicates that while the centre of the tumour is currently the main clinical target for radiotherapy treatment, the tissue immediately around the tumour is also clinically important.
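A small sketch of extracting such a tumour rind by dilating the GTV mask and keeping the surrounding shell; the rind width is an assumption:

```python
import numpy as np
from scipy import ndimage

def tumour_rind(gtv_mask, width_px=6):
    """Shell of voxels immediately outside the GTV, width_px voxels thick."""
    dilated = ndimage.binary_dilation(gtv_mask, iterations=width_px)
    return dilated & ~gtv_mask
```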

Learning Shape Distributions from Large Databases of Healthy Organs: Applications to Zero-Shot and Few-Shot Abnormal Pancreas Detection

  • Vétil, Rebeca
  • Abi-Nader, Clément
  • Bône, Alexandre
  • Vullierme, Marie-Pierre
  • Rohé, Marc-Michel
  • Gori, Pietro
  • Bloch, Isabelle
2022 Conference Proceedings, cited 0 times
Website

Domain Generalization for Prostate Segmentation in Transrectal Ultrasound Images: A Multi-center Study

  • Vesal, S.
  • Gayo, I.
  • Bhattacharya, I.
  • Natarajan, S.
  • Marks, L. S.
  • Barratt, D. C.
  • Fan, R. E.
  • Hu, Y.
  • Sonn, G. A.
  • Rusu, M.
Med Image Anal 2022 Journal Article, cited 0 times
Website
Prostate biopsy and image-guided treatment procedures are often performed under the guidance of ultrasound fused with magnetic resonance images (MRI). Accurate image fusion relies on accurate segmentation of the prostate on ultrasound images. Yet, the reduced signal-to-noise ratio and artifacts (e.g., speckle and shadowing) in ultrasound images limit the performance of automated prostate segmentation techniques and generalizing these methods to new image domains is inherently difficult. In this study, we address these challenges by introducing a novel 2.5D deep neural network for prostate segmentation on ultrasound images. Our approach addresses the limitations of transfer learning and finetuning methods (i.e., drop in performance on the original training data when the model weights are updated) by combining a supervised domain adaptation technique and a knowledge distillation loss. The knowledge distillation loss allows the preservation of previously learned knowledge and reduces the performance drop after model finetuning on new datasets. Furthermore, our approach relies on an attention module that considers model feature positioning information to improve the segmentation accuracy. We trained our model on 764 subjects from one institution and finetuned our model using only ten subjects from subsequent institutions. We analyzed the performance of our method on three large datasets encompassing 2067 subjects from three different institutions. Our method achieved an average Dice Similarity Coefficient (Dice) of 94.0+/-0.03 and Hausdorff Distance (HD95) of 2.28 mm in an independent set of subjects from the first institution. Moreover, our model generalized well in the studies from the other two institutions (Dice: 91.0+/-0.03; HD95: 3.7 mm and Dice: 82.0+/-0.03; HD95: 7.1 mm). We introduced an approach that successfully segmented the prostate on ultrasound images in a multi-center study, suggesting its clinical potential to facilitate the accurate fusion of ultrasound and MRI images to drive biopsy and image-guided treatments.
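The knowledge-distillation term described above typically takes the following generic form; the paper's exact temperature and weighting are not shown here:

```latex
\mathcal{L} = \mathcal{L}_{\mathrm{seg}}\!\left(y,\, p_{\mathrm{student}}\right)
 + \lambda\, T^{2}\, \mathrm{KL}\!\left(
     \mathrm{softmax}\!\left(z_{\mathrm{teacher}}/T\right) \,\Big\|\,
     \mathrm{softmax}\!\left(z_{\mathrm{student}}/T\right)\right),
```

so that the finetuned (student) model matches the softened predictions of the original (teacher) model while fitting the new data, which is what preserves previously learned knowledge during finetuning.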

Stable and Discriminatory Radiomic Features from the Tumor and Its Habitat Associated with Progression-Free Survival in Glioblastoma: A Multi-Institutional Study

  • Verma, R.
  • Hill, V. B.
  • Statsevych, V.
  • Bera, K.
  • Correa, R.
  • Leo, P.
  • Ahluwalia, M.
  • Madabhushi, A.
  • Tiwari, P.
American Journal of Neuroradiology 2022 Journal Article, cited 0 times
Website
BACKGROUND AND PURPOSE: Glioblastoma is an aggressive brain tumor, with no validated prognostic biomarkers for survival before surgical resection. Although recent approaches have demonstrated the prognostic ability of tumor habitat (constituting necrotic core, enhancing lesion, T2/FLAIR hyperintensity subcompartments) derived radiomic features for glioblastoma survival on treatment-naive MR imaging scans, radiomic features are known to be sensitive to MR imaging acquisitions across sites and scanners. In this study, we sought to identify the radiomic features that are both stable across sites and discriminatory of poor and improved progression-free survival in glioblastoma tumors. MATERIALS AND METHODS: We used 150 treatment-naive glioblastoma MR imaging scans (Gadolinium-T1w, T2w, FLAIR) obtained from 5 sites. For every tumor subcompartment (enhancing tumor, peritumoral FLAIR-hyperintensities, necrosis), a total of 316 three-dimensional radiomic features were extracted. The training cohort constituted studies from 4 sites (n = 93) to select the most stable and discriminatory radiomic features for every tumor subcompartment. These features were used on a hold-out cohort (n = 57) to evaluate their ability to discriminate patients with poor survival from those with improved survival. RESULTS: Incorporating the most stable and discriminatory features within a linear discriminant analysis classifier yielded areas under the curve of 0.71, 0.73, and 0.76 on the test set for distinguishing poor and improved survival compared with discriminatory features alone (areas under the curve of 0.65, 0.54, 0.62) from the necrotic core, enhancing tumor, and peritumoral T2/FLAIR hyperintensity, respectively. CONCLUSIONS: Incorporating stable and discriminatory radiomic features extracted from tumors and associated habitats across multisite MR imaging sequences may yield robust prognostic classifiers of patient survival in glioblastoma tumors.
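
A sketch of the general two-step recipe (the stability criterion here is a simple one-way ANOVA proxy for cross-site variance, not necessarily the paper's exact statistic; X_train, y_train, site_train, and X_test are placeholder arrays):

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def site_variance_fraction(X, site):
        """Per-feature fraction of variance explained by site (lower = more stable)."""
        grand = X.mean(axis=0)
        ss_between = sum(((X[site == s].mean(axis=0) - grand) ** 2) * (site == s).sum()
                         for s in np.unique(site))
        return ss_between / ((X - grand) ** 2).sum(axis=0)

    stable = site_variance_fraction(X_train, site_train) < 0.1  # hypothetical cut-off
    clf = LinearDiscriminantAnalysis().fit(X_train[:, stable], y_train)
    risk_scores = clf.predict_proba(X_test[:, stable])[:, 1]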

Medical image thresholding using WQPSO and maximum entropy

  • Venkatesan, Anusuya
  • Parthiban, Latha
2012 Conference Proceedings, cited 1 times
Website
Image thresholding is an important image segmentation method for finding objects of interest. Maximum entropy thresholding exploits the entropy of the grey-level distribution of the image. Its performance can be improved by swarm intelligence techniques such as Particle Swarm Optimization (PSO) and Quantum PSO (QPSO). QPSO has attracted the research community due to its simplicity, easy implementation, and fast convergence; it converges faster than PSO and its global convergence is guaranteed. In this paper, we propose a new combination of mean-updated QPSO, referred to as weighted QPSO (WQPSO), with maximum entropy to find optimal thresholds for magnetic resonance images (MRI). This method outperforms existing methods in the literature in terms of convergence speed and accuracy.
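
The objective being optimized here is Kapur's maximum-entropy criterion; the brute-force sketch below shows that objective itself (the paper's contribution is to search this space with weighted QPSO rather than exhaustively):

    import numpy as np

    def max_entropy_threshold(img, nbins=256):
        """Kapur's maximum-entropy threshold via exhaustive search."""
        hist, edges = np.histogram(img, bins=nbins)
        p = hist.astype(float) / hist.sum()
        best_t, best_h = 1, -np.inf
        for t in range(1, nbins):
            p0, p1 = p[:t], p[t:]
            w0, w1 = p0.sum(), p1.sum()
            if w0 == 0 or w1 == 0:
                continue  # degenerate split: one class is empty
            h0 = -np.sum(p0[p0 > 0] / w0 * np.log(p0[p0 > 0] / w0))
            h1 = -np.sum(p1[p1 > 0] / w1 * np.log(p1[p1 > 0] / w1))
            if h0 + h1 > best_h:  # maximise summed class entropies
                best_t, best_h = t, h0 + h1
        return edges[best_t]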

Fully automatic GBM segmentation in the TCGA-GBM dataset: Prognosis and correlation with VASARI features

  • Velazquez, Emmanuel Rios
  • Meier, Raphael
  • Dunn Jr, William D
  • Alexander, Brian
  • Wiest, Roland
  • Bauer, Stefan
  • Gutman, David A
  • Reyes, Mauricio
  • Aerts, Hugo JWL
Scientific Reports 2015 Journal Article, cited 42 times
Website
Reproducible definition and quantification of imaging biomarkers is essential. We evaluated a fully automatic MR-based segmentation method by comparing it to manually defined sub-volumes by experienced radiologists in the TCGA-GBM dataset, in terms of sub-volume prognosis and association with VASARI features. MRI sets of 109 GBM patients were downloaded from The Cancer Imaging Archive. GBM sub-compartments were defined manually and automatically using the Brain Tumor Image Analysis (BraTumIA) software. Spearman's correlation was used to evaluate the agreement with VASARI features. Prognostic significance was assessed using the C-index. Auto-segmented sub-volumes showed moderate to high agreement with manually delineated volumes (range (r): 0.4-0.86). The auto and manual volumes also showed similar correlations with VASARI features (auto r = 0.35, 0.43 and 0.36; manual r = 0.17, 0.67, 0.41, for contrast-enhancing, necrosis and edema, respectively). The auto-segmented contrast-enhancing volume and post-contrast abnormal volume showed the highest AUC (0.66, CI: 0.55-0.77 and 0.65, CI: 0.54-0.76), comparable to manually defined volumes (0.64, CI: 0.53-0.75 and 0.63, CI: 0.52-0.74, respectively). BraTumIA and manual tumor sub-compartments showed comparable performance in terms of prognosis and correlation with VASARI features. This method can enable more reproducible definition and quantification of imaging-based biomarkers and has potential in high-throughput medical imaging research.
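
The two evaluation statistics used above are straightforward to reproduce; a sketch with placeholder arrays (auto_vol, manual_vol, survival_time, event), assuming the scipy and lifelines packages:

    from scipy.stats import spearmanr
    from lifelines.utils import concordance_index

    # Agreement between automatic and manual volumes.
    rho, pval = spearmanr(auto_vol, manual_vol)

    # C-index of a volume-based risk score: larger volume implies higher
    # risk, so the volume is negated to rank longer survival higher.
    c_index = concordance_index(survival_time, -auto_vol, event_observed=event)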

Robustness of Deep Networks for Mammography: Replication Across Public Datasets

  • Velarde, Osvaldo M.
  • Lin, Clarissa
  • Eskreis-Winkler, Sarah
  • Parra, Lucas C.
2024 Journal Article, cited 0 times
Deep neural networks have demonstrated promising performance in screening mammography with recent studies reporting performance at or above the level of trained radiologists on internal datasets. However, it remains unclear whether the performance of these trained models is robust and replicates across external datasets. In this study, we evaluate four state-of-the-art publicly available models using four publicly available mammography datasets (CBIS-DDSM, INbreast, CMMD, OMI-DB). Where test data was available, published results were replicated. The best-performing model, which achieved an area under the ROC curve (AUC) of 0.88 on internal data from NYU, achieved here an AUC of 0.9 on the external CMMD dataset (N = 826 exams). On the larger OMI-DB dataset (N = 11,440 exams), it achieved an AUC of 0.84 but did not match the performance of individual radiologists (at a specificity of 0.92, the sensitivity was 0.97 for the radiologist and 0.53 for the network for a 1-year follow-up). The network showed higher performance for in situ cancers, as opposed to invasive cancers. Among invasive cancers, it was relatively weaker at identifying asymmetries and was relatively stronger at identifying masses. The three other trained models that we evaluated all performed poorly on external datasets. Independent validation of trained models is an essential step to ensure safe and reliable use. Future progress in AI for mammography may depend on a concerted effort to make larger datasets publicly available that span multiple clinical sites.
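
The operating-point comparison quoted above (sensitivity at a fixed specificity of 0.92) can be read off the ROC curve; a scikit-learn sketch with y_true and y_score as placeholder arrays:

    import numpy as np
    from sklearn.metrics import roc_curve

    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    target_fpr = 1 - 0.92                       # specificity 0.92
    i = np.searchsorted(fpr, target_fpr, side="right") - 1
    sensitivity = tpr[i]                        # sensitivity at that operating point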

A methodology for the analysis and selection of features extracted using Deep Learning from lung Computed Tomography images

  • Vega Gonzalo, María
2018 Thesis, cited 0 times
Website
This project is part of the European research project IASIS, in which the Medical Data Analysis Laboratory (MEDAL) of the Centre for Biomedical Technology at UPM participates. The IASIS project aims to structure medical information related to lung cancer and Alzheimer's disease in order to analyse it and, from the extracted knowledge, improve the diagnosis and treatment of these diseases. The objective of this bachelor's thesis (TFG) is to establish a methodology for reducing the dimensionality of features extracted via Deep Learning from computed tomography images. The motivation for reducing the number of variables is that the extracted features are intended to be used to classify the nodules present in the images with a classifier; however, high-dimensional data can impair classification accuracy and entails a high computational cost.
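
Principal component analysis is one common dimensionality-reduction choice such a methodology would compare; it is shown here purely as an illustration, not as the thesis's selected method (deep_features is a placeholder n_nodules x n_features array):

    from sklearn.decomposition import PCA

    # Keep the smallest number of components explaining 95% of the
    # variance of the deep features.
    pca = PCA(n_components=0.95)
    reduced = pca.fit_transform(deep_features)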

Addressing architectural distortion in mammogram using AlexNet and support vector machine

  • Vedalankar, Aditi V.
  • Gupta, Shankar S.
  • Manthalkar, Ramchandra R.
Informatics in Medicine Unlocked 2021 Journal Article, cited 0 times
Website
Objective: To address architectural distortion (AD), an irregularity in the parenchymal pattern of the breast. The nature of AD is extremely complex, yet its study is essential because AD is viewed as an early sign of breast cancer. In this study, a new convolutional neural network (CNN) based system is developed that classifies AD-distorted mammograms against other mammograms. Methods: In the first stage, mammograms undergo pre-processing and image augmentation; in the second, learned and handcrafted features are extracted. The pretrained AlexNet CNN is used to extract the learned features, and a support vector machine (SVM) validates the presence of AD. The scheme is tested under various conditions for improved classification. Results: A CNN-based system is developed for stepwise analysis of AD, with maximum accuracy, sensitivity, and specificity of 92%, 81.50%, and 90.83%, respectively. The results outperform conventional methods. Conclusion: Based on the overall study, a combination of a pretrained CNN and a support vector machine is recommended for identification of AD. The study should motivate researchers to develop improved, higher-performing methods and will also assist radiologists. Significance: AD can develop up to two years before the growth of any other anomaly. The proposed system can therefore play an essential role in detecting early manifestations of breast cancer, supporting better treatment options for women worldwide and helping curtail the mortality rate.
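
A minimal sketch of the AlexNet-features-into-SVM pipeline using torchvision and scikit-learn; the layer tapped for features, the SVM kernel, and the input preprocessing are assumptions rather than the paper's exact choices, and patches/labels are placeholders:

    import torch
    from torchvision import models
    from sklearn.svm import SVC

    alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
    extractor = torch.nn.Sequential(alexnet.features, alexnet.avgpool,
                                    torch.nn.Flatten())
    with torch.no_grad():
        feats = extractor(patches).numpy()  # patches: (N, 3, 224, 224) mammogram ROIs
    clf = SVC(kernel="rbf").fit(feats, labels)  # labels: AD vs. normal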

Identification and classification of DICOM files with burned-in text content

  • Vcelak, Petr
  • Kryl, Martin
  • Kratochvil, Michal
  • Kleckova, Jana
International Journal of Medical Informatics 2019 Journal Article, cited 0 times
Website
Background: Protected health information burned into pixel data is not indicated in DICOM for various reasons, which complicates the secondary use of such data. In recent years, there have been several attempts to anonymize or de-identify DICOM files. Existing approaches have different constraints, and no completely reliable solution exists. Especially for large datasets, it is necessary to quickly analyse files and identify those potentially violating privacy. Methods: Classification is based on an adaptive-iterative algorithm designed to assign one of three classes. Several image transformations, optical character recognition, and filters are applied; a local decision is then made, and a confirmed local decision becomes the final one. The classifier was trained on a dataset composed of 15,334 images of various modalities. Results: The false positive rates are in all cases below 4.00%, and 1.81% in the mission-critical problem of detecting protected health information. The classifier's weighted average recall was 94.85%, the weighted average inverse recall was 97.42%, and Cohen's Kappa coefficient was 0.920. Conclusion: The proposed novel approach to classifying burned-in text is highly configurable and able to analyse images from different modalities with a noisy background. The solution was validated and is intended to identify DICOM files that need to have restricted access or be thoroughly de-identified due to privacy issues. Unlike existing tools, the recognised text, including its coordinates, can be used further for de-identification.
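
A much-simplified single-pass version of such a pipeline, using pydicom and Tesseract OCR (the adaptive-iterative transformations and three-class decision logic of the paper are omitted, and the file name is hypothetical):

    import numpy as np
    import pydicom
    import pytesseract
    from PIL import Image

    ds = pydicom.dcmread("study/slice001.dcm")
    px = ds.pixel_array.astype(float)
    # Rescale to 8-bit grey levels for the OCR engine.
    px = (255 * (px - px.min()) / max(np.ptp(px), 1)).astype(np.uint8)
    text = pytesseract.image_to_string(Image.fromarray(px))
    needs_review = bool(text.strip())  # candidate for restricted access / de-identification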

A repository of grade 1 and 2 meningioma MRIs in a public dataset for radiomics reproducibility tests

  • Vassantachart, April
  • Cao, Yufeng
  • Shen, Zhilei
  • Cheng, Karen
  • Gribble, Michael
  • Ye, Jason C.
  • Zada, Gabriel
  • Hurth, Kyle
  • Mathew, Anna
  • Guzman, Samuel
  • Yang, Wensha
Medical Physics 2023 Journal Article, cited 0 times
Purpose: Meningiomas are the most common primary brain tumors in adults, with management varying widely based on World Health Organization (WHO) grade. However, limited datasets are available for researchers to develop and validate radiomic models. The purpose of our manuscript is to report on the first dataset of meningiomas in The Cancer Imaging Archive (TCIA). Acquisition and validation methods: The dataset consists of pre-operative MRIs from 96 patients with meningiomas who underwent resection from 2010 to 2019 and includes axial T1post and T2-FLAIR sequences (55 grade 1 and 41 grade 2). Meningioma grade was confirmed based on the 2016 WHO Bluebook classification guideline by two neuropathologists and one neuropathology fellow. The hyperintense T1post tumor and hyperintense T2-FLAIR regions were manually contoured on both sequences and resampled to an isotropic resolution of 1 × 1 × 1 mm3. The entire dataset was reviewed by a certified medical physicist. Data format and usage notes: The data were imported into TCIA for storage and can be accessed at https://doi.org/10.7937/0TKV-1A36. The total size of the dataset is 8.8 GB, with 47,519 individual Digital Imaging and Communications in Medicine (DICOM) files comprising 384 image series and 192 structures. Potential applications: Grade 1 and 2 meningiomas have different treatment paradigms and are often treated based on radiologic diagnosis alone. Therefore, predicting grade prior to treatment is essential in clinical decision-making. This dataset will allow researchers to create models to auto-differentiate grade 1 and 2 meningiomas, as well as to evaluate other pathologic features including mitotic index, brain invasion, and atypical features. Limitations of this study are the small sample size and the inclusion of only two MRI sequences. However, there are no other meningioma datasets on TCIA and few elsewhere, although meningiomas are the most common intracranial tumor in adults.
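
Resampling to the dataset's 1 × 1 × 1 mm isotropic grid can be reproduced with SimpleITK; a sketch, with the DICOM directory name hypothetical:

    import SimpleITK as sitk

    def resample_isotropic(img, spacing=(1.0, 1.0, 1.0)):
        """Linearly resample an image to isotropic 1 mm voxels."""
        size = [int(round(sz * sp / ns))
                for sz, sp, ns in zip(img.GetSize(), img.GetSpacing(), spacing)]
        return sitk.Resample(img, size, sitk.Transform(), sitk.sitkLinear,
                             img.GetOrigin(), spacing, img.GetDirection(),
                             0, img.GetPixelID())

    reader = sitk.ImageSeriesReader()
    reader.SetFileNames(sitk.ImageSeriesReader.GetGDCMSeriesFileNames("T1post_dir"))
    iso = resample_isotropic(reader.Execute())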

Classification of benign and malignant lung nodules using image processing techniques

  • Vas, Moffy Crispin
  • Dessai, Amita
International Research Journal of Engineering and Technology 2017 Journal Article, cited 0 times
Website
Cancer is the second leading cause of death worldwide after heart disease, and lung cancer is the leading cause of death among all cancer types. The lung cancer problem is therefore of global concern, and this work addresses the detection of malignant lung nodules and their distinction from benign nodules by processing computed tomography (CT) images with Haar wavelet decomposition and Haralick feature extraction, followed by artificial neural networks (ANN).
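
A sketch of that feature pipeline on a single nodule ROI (roi is assumed to be an 8-bit 2D array); the wavelet level, GLCM offsets, and the exact Haralick subset are illustrative choices:

    import numpy as np
    import pywt
    from skimage.feature import graycomatrix, graycoprops

    approx, _ = pywt.dwt2(roi, "haar")              # level-1 Haar approximation
    approx = (255 * approx / approx.max()).astype(np.uint8)  # back to 8-bit grey levels
    glcm = graycomatrix(approx, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    features = [graycoprops(glcm, prop)[0, 0]       # Haralick-style GLCM features
                for prop in ("contrast", "homogeneity", "energy", "correlation")]
    # Feature vectors from many nodules are then fed to an ANN classifier
    # (e.g., sklearn.neural_network.MLPClassifier).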

Multi-centre radiomics for prediction of recurrence following radical radiotherapy for head and neck cancers: Consequences of feature selection, machine learning classifiers and batch-effect harmonization

  • Varghese, Amal Joseph
  • Gouthamchand, Varsha
  • Sasidharan, Balu Krishna
  • Wee, Leonard
  • Sidhique, Sharief K
  • Rao, Julia Priyadarshini
  • Dekker, Andre
  • Hoebers, Frank
  • Devakumar, Devadhas
  • Irodi, Aparna
  • Balasingh, Timothy Peace
  • Godson, Henry Finlay
  • Joel, T
  • Mathew, Manu
  • Gunasingam Isiah, Rajesh
  • Pavamani, Simon Pradeep
  • Thomas, Hannah Mary T
Phys Imaging Radiat Oncol 2023 Journal Article, cited 1 times
Website
BACKGROUND AND PURPOSE: Radiomics models trained with limited single-institution data are often not reproducible and generalisable. We developed radiomics models that predict loco-regional recurrence within two years of radiotherapy with private and public datasets and their combinations, to simulate small and multi-institutional studies and to study the responsiveness of the models to feature selection, machine learning algorithms, centre-effect harmonization, and increased dataset sizes. MATERIALS AND METHODS: 562 patients histologically confirmed and treated for locally advanced head-and-neck cancer (LA-HNC) were drawn from two public and two private datasets; one private dataset was exclusively reserved for validation. Clinical contours of primary tumours were not recontoured and were used for Pyradiomics-based feature extraction. ComBat harmonization was applied, and LASSO-Logistic Regression (LR) and Support Vector Machine (SVM) models were built. Predictive performance was reported as the 95% confidence interval (CI) of 1000 bootstrapped areas under the receiver operating characteristic curve (AUC). The responsiveness of the models' performance to the choice of feature selection method, ComBat harmonization, machine learning classifier, and single versus pooled data was evaluated. RESULTS: LASSO and SelectKBest selected 14 and 16 features, respectively; three overlapped. Without ComBat, the LR and SVM models for three-institution data showed AUCs (CI) of 0.513 (0.481-0.559) and 0.632 (0.586-0.665), respectively. Performance following ComBat revealed AUCs of 0.559 (0.536-0.590) and 0.662 (0.606-0.690), respectively. Compared to single-cohort AUCs (0.562-0.629), SVM models from pooled data performed significantly better, at AUC = 0.680. CONCLUSIONS: Multi-institutional retrospective data accentuate the existing variabilities that affect radiomics. Carefully designed prospective multi-institutional studies and data sharing are necessary for clinically relevant head-and-neck cancer prognostication models.
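
A sketch of the harmonization-plus-bootstrap evaluation, assuming the neuroCombat Python package (X: samples x features matrix; covars: a pandas DataFrame containing a 'centre' column; y and scores: validation labels and model outputs; all are placeholders):

    import numpy as np
    from neuroCombat import neuroCombat
    from sklearn.metrics import roc_auc_score

    # ComBat expects features x samples, hence the transposes.
    X_h = neuroCombat(dat=X.T, covars=covars, batch_col="centre")["data"].T

    # 95% CI of 1000 bootstrapped AUCs on the validation set.
    rng = np.random.default_rng(0)
    aucs = []
    while len(aucs) < 1000:
        idx = rng.integers(0, len(y), len(y))
        if len(np.unique(y[idx])) == 2:  # skip single-class resamples
            aucs.append(roc_auc_score(y[idx], scores[idx]))
    ci_low, ci_high = np.percentile(aucs, [2.5, 97.5])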

Radiogenomics of High-Grade Serous Ovarian Cancer: Multireader Multi-Institutional Study from the Cancer Genome Atlas Ovarian Cancer Imaging Research Group

  • Vargas, Hebert Alberto
  • Huang, Erich P
  • Lakhman, Yulia
  • Ippolito, Joseph E
  • Bhosale, Priya
  • Mellnick, Vincent
  • Shinagare, Atul B
  • Anello, Maria
  • Kirby, Justin
  • Fevrier-Sullivan, Brenda
Radiology 2017 Journal Article, cited 3 times
Website

An Optimized Deep Learning Technique for Detecting Lung Cancer from CT Images

  • Vanitha, M.
  • Mangayarkarasi, R.
  • Angulakshmi, M.
  • Deepa, M.
2023 Book Section, cited 0 times
Of all cancers, lung cancer is a leading contributor to deaths worldwide, and the number of people affected is increasing rapidly; India reports 70,000 cases per year. Technological improvements in the medical domain now help physicians detect disease-associated signs precisely and cost-effectively, but the asymptomatic nature of lung cancer makes early-stage detection difficult, and for any chronic disease early prediction is essential for saving lives. In this chapter, a novel optimized CNN-based classifier is presented to alleviate practical hindrances in existing techniques, such as overfitting. Pre-processing, data augmentation, and CNN-based detection of lung cancer from CT images are performed on the LIDC-IDRI dataset. Test results show that the presented CNN-based classifier compares favourably with machine learning techniques on quantitative metrics, achieving an accuracy of 98% for lung cancer detection.
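
A minimal Keras sketch of a CNN nodule classifier of this kind; the architecture, input size, and hyperparameters are illustrative, not the chapter's optimized network:

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 1)),           # nodule patch
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dropout(0.5),                       # counters overfitting
        tf.keras.layers.Dense(1, activation="sigmoid"),     # benign vs. malignant
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])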

Brain Tumor Classification using Support Vector Machine

  • Vani, N
  • Sowmya, A
  • Jayamma, N
International Research Journal of Engineering and Technology 2017 Journal Article, cited 0 times
Website

Evaluating Glioma Growth Predictions as a Forward Ranking Problem

  • van Garderen, Karin A.
  • van der Voort, Sebastian R.
  • Wijnenga, Maarten M. J.
  • Incekara, Fatih
  • Kapsas, Georgios
  • Gahrmann, Renske
  • Alafandi, Ahmad
  • Smits, Marion
  • Klein, Stefan
2022 Book Section, cited 0 times
The problem of tumor growth prediction is challenging, but promising results have been achieved with both model-driven and statistical methods. In this work, we present a framework for the evaluation of growth predictions that focuses on spatial infiltration patterns, and specifically on evaluating predictions of future growth. We propose to frame the problem as a ranking problem rather than a segmentation problem. Using average precision as the metric, we can evaluate the results against segmentations while using the full spatiotemporal prediction. Furthermore, by applying a biophysical tumor growth model to 21 patient cases, we compare two schemes for fitting and evaluating predictions. By carefully designing a scheme that separates the prediction from the observations used for fitting the model, we show that a better fit of model parameters does not guarantee better predictive power.
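
Framing the evaluation as ranking rather than segmentation reduces, in code, to scoring candidate voxels and computing average precision against the follow-up segmentation; a sketch with placeholder arrays:

    import numpy as np
    from sklearn.metrics import average_precision_score

    # Rank only voxels outside the current tumour by predicted infiltration;
    # the follow-up segmentation provides the binary ground truth.
    candidates = ~current_mask.ravel()
    ap = average_precision_score(followup_mask.ravel()[candidates],
                                 growth_score.ravel()[candidates])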

Combined molecular subtyping, grading, and segmentation of glioma using multi-task deep learning

  • van der Voort, S. R.
  • Incekara, F.
  • Wijnenga, M. M. J.
  • Kapsas, G.
  • Gahrmann, R.
  • Schouten, J. W.
  • Nandoe Tewarie, R.
  • Lycklama, G. J.
  • De Witt Hamer, P. C.
  • Eijgelaar, R. S.
  • French, P. J.
  • Dubbink, H. J.
  • Vincent, A. J. P. E.
  • Niessen, W. J.
  • van den Bent, M. J.
  • Smits, M.
  • Klein, S.
2022 Journal Article, cited 0 times
Website
BACKGROUND: Accurate characterization of glioma is crucial for clinical decision making. A delineation of the tumor is also desirable in the initial decision stages but is time-consuming. Previously, deep learning methods have been developed that can either non-invasively predict the genetic or histological features of glioma, or automatically delineate the tumor, but not both at the same time. Here, we present a method that predicts the molecular subtype and grade while simultaneously providing a delineation of the tumor. METHODS: We developed a single multi-task convolutional neural network that uses the full 3D, structural, pre-operative MRI scans to predict the IDH mutation status, the 1p/19q co-deletion status, and the grade of a tumor, while simultaneously segmenting the tumor. We trained our method on a patient cohort containing 1508 glioma patients from 16 institutes. We tested our method on an independent dataset of 240 patients from 13 different institutes. RESULTS: In the independent test set we achieved an IDH AUC of 0.90, a 1p/19q co-deletion AUC of 0.85, and a grade AUC of 0.81 (grade II/III/IV). For the tumor delineation, we achieved a mean whole-tumor Dice score of 0.84. CONCLUSIONS: We developed a method that non-invasively predicts multiple clinically relevant features of glioma. Evaluation on an independent dataset shows that the method achieves high performance and generalizes well to the broader clinical population. This first-of-its-kind method opens the door to more generalizable, instead of hyper-specialized, AI methods.
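
The multi-task training objective amounts to a weighted sum of one voxel-wise segmentation term and three per-scan classification terms; a PyTorch sketch, where the head names and the weights lam are placeholders rather than the paper's exact configuration:

    import torch.nn as nn

    seg_loss = nn.CrossEntropyLoss()  # voxel-wise, over the segmentation map
    cls_loss = nn.CrossEntropyLoss()  # per-scan classification heads

    def multitask_loss(out, tgt, lam=(1.0, 1.0, 1.0)):
        """Combine segmentation with IDH, 1p/19q, and grade prediction."""
        return (seg_loss(out["seg"], tgt["seg"])
                + lam[0] * cls_loss(out["idh"], tgt["idh"])
                + lam[1] * cls_loss(out["codel"], tgt["codel"])
                + lam[2] * cls_loss(out["grade"], tgt["grade"]))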

Predicting the 1p/19q co-deletion status of presumed low grade glioma with an externally validated machine learning algorithm

  • van der Voort, Sebastian R
  • Incekara, Fatih
  • Wijnenga, Maarten MJ
  • Kapsas, Georgios
  • Gardeniers, Mayke
  • Schouten, Joost W
  • Starmans, Martijn PA
  • Tewarie, Rishie Nandoe
  • Lycklama, Geert J
  • French, Pim J
Clinical Cancer Research 2019 Journal Article, cited 0 times

Generating Artificial Artifacts for Motion Artifact Detection in Chest CT

  • van der Ham, Guus
  • Latisenko, Rudolfs
  • Tsiaousis, Michail
  • van Tulder, Gijs
2022 Conference Proceedings, cited 0 times
Website

Eliminating biasing signals in lung cancer images for prognosis predictions with deep learning.

  • van Amsterdam, W. A. C.
  • Verhoeff, J. J. C.
  • de Jong, P. A.
  • Leiner, T.
  • Eijkemans, M. J. C.
NPJ Digit Med 2019 Journal Article, cited 0 times
Website
Deep learning has shown remarkable results for image analysis and is expected to aid individual treatment decisions in health care. Treatment recommendations are predictions with an inherently causal interpretation. To use deep learning for these applications in the setting of observational data, deep learning methods must be made compatible with the required causal assumptions. We present a scenario with real-world medical images (CT scans of lung cancer) and simulated outcome data. Through the data simulation scheme, the images contain two distinct factors of variation that are associated with survival but represent a collider (tumor size) and a prognostic factor (tumor heterogeneity), respectively. If a deep network used all the information available in the image to predict survival, it would condition on the collider and thereby introduce bias into the estimation of the treatment effect. We show that when this collider can be quantified, unbiased individual prognosis predictions are attainable with deep learning. This is achieved by (1) setting a dual task for the network to predict both the outcome and the collider, and (2) enforcing a form of linear independence on the activation distributions of the last layer. Our method provides an example of combining deep learning and structural causal models to achieve unbiased individual prognosis predictions. Extensions of machine learning methods for application to causal questions are required to attain the long-standing goal of personalized medicine supported by artificial intelligence.
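
A sketch of the two ingredients named above: a dual-headed network predicting both outcome and collider, plus a penalty pushing their last-layer activations toward linear independence, implemented here as a squared cross-covariance term (one plausible instantiation, not necessarily the paper's exact constraint; lam is a hypothetical weight):

    import torch

    def cross_cov_penalty(a, b):
        """Squared cross-covariance between two activation batches."""
        a = a - a.mean(dim=0, keepdim=True)
        b = b - b.mean(dim=0, keepdim=True)
        return ((a.T @ b) / a.shape[0]).pow(2).sum()

    loss = (outcome_loss                 # e.g. survival prediction term
            + collider_loss              # tumor-size (collider) prediction term
            + lam * cross_cov_penalty(feats_outcome, feats_collider))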

Enhancement of multimodality texture-based prediction models via optimization of PET and MR image acquisition protocols: a proof of concept

  • Vallières, Martin
  • Laberge, Sébastien
  • Diamant, André
  • El Naqa, Issam
Physics in Medicine & Biology 2017 Journal Article, cited 3 times
Website
Texture-based radiomic models constructed from medical images have the potential to support cancer treatment management via personalized assessment of tumour aggressiveness. While the identification of stable texture features under varying imaging settings is crucial for the translation of radiomics analysis into routine clinical practice, we hypothesize in this work that a complementary optimization of image acquisition parameters prior to texture feature extraction could enhance the predictive performance of texture-based radiomic models. As a proof of concept, we evaluated the possibility of enhancing a model constructed for the early prediction of lung metastases in soft-tissue sarcomas by optimizing PET and MR image acquisition protocols via computerized simulations of image acquisitions with varying parameters. Simulated PET images from 30 STS patients were acquired by varying the extent of axial data combined per slice ('span'). Simulated T 1-weighted and T 2-weighted MR images were acquired by varying the repetition time and echo time in a spin-echo pulse sequence, respectively. We analyzed the impact of the variations of P