LiverNet: efficient and robust deep learning model for automatic diagnosis of sub-types of liver hepatocellular carcinoma cancer from H&E stained liver histopathology images
Aatresh, A. A.
Alabhya, K.
Lal, S.
Kini, J.
Saxena, P. U. P.
Int J Comput Assist Radiol Surg, 2021, Journal Article, cited 0 times
TCGA-LIHC
Convolutional Neural Network (CNN)
Deep learning
H&E-stained slides
Classification
Algorithm Development
PURPOSE: Liver cancer is one of the most common types of cancers in Asia with a high mortality rate. A common method for liver cancer diagnosis is the manual examination of histopathology images. Due to its laborious nature, we focus on alternate deep learning methods for automatic diagnosis, providing significant advantages over manual methods. In this paper, we propose a novel deep learning framework to perform multi-class cancer classification of liver hepatocellular carcinoma (HCC) tumor histopathology images which shows improvements in inference speed and classification quality over other competitive methods. METHOD: The BreastNet architecture proposed by Togacar et al. shows great promise in using convolutional block attention modules (CBAM) for effective cancer classification in H&E stained breast histopathology images. As part of our experiments with this framework, we have studied the addition of atrous spatial pyramid pooling (ASPP) blocks to effectively capture multi-scale features in H&E stained liver histopathology data. We classify liver histopathology data into four classes, namely the non-cancerous class, low sub-type liver HCC tumor, medium sub-type liver HCC tumor, and high sub-type liver HCC tumor. To prove the robustness and efficacy of our models, we have shown results for two liver histopathology datasets: a novel KMC dataset and the TCGA dataset. RESULTS: Our proposed architecture outperforms state-of-the-art architectures for multi-class cancer classification of HCC histopathology images, not just in terms of quality of classification, but also in computational efficiency on the novel proposed KMC liver data and the publicly available TCGA-LIHC dataset. We have considered precision, recall, F1-score, intersection over union (IoU), accuracy, number of parameters, and FLOPs as metrics for comparison. The results of our meticulous experiments have shown improved classification performance along with added efficiency. LiverNet has been observed to outperform all other frameworks in all metrics under comparison with an approximate improvement of [Formula: see text] in accuracy and F1-score on the KMC and TCGA-LIHC datasets. CONCLUSION: To the best of our knowledge, our work is among the first to provide concrete proof and demonstrate results for a successful deep learning architecture to handle multi-class HCC histopathology image classification among various sub-types of liver HCC tumor. Our method shows a high accuracy of [Formula: see text] on the proposed KMC liver dataset requiring only 0.5739 million parameters and 1.1934 million floating point operations per second.
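As context for the ASPP blocks described above, a minimal PyTorch sketch of a generic atrous spatial pyramid pooling module follows; the channel counts and dilation rates are illustrative assumptions, not LiverNet's published configuration.

    import torch
    import torch.nn as nn

    class ASPP(nn.Module):
        # Parallel dilated 3x3 convolutions capture context at several scales.
        def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
            super().__init__()
            self.branches = nn.ModuleList([
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
                for r in rates
            ])
            # A 1x1 convolution fuses the concatenated multi-scale responses.
            self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

        def forward(self, x):
            feats = [torch.relu(b(x)) for b in self.branches]
            return torch.relu(self.project(torch.cat(feats, dim=1)))

    x = torch.randn(1, 64, 56, 56)  # dummy feature map from a histopathology patch
    y = ASPP(64, 32)(x)             # -> torch.Size([1, 32, 56, 56])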
NS-HGlio: A Generalizable and Repeatable HGG Segmentation and Volumetric measurement AI Algorithm for the Longitudinal MRI Assessment to Inform RANO in Trials and Clinics
Abayazeed, Aly H.
Abbassy, Ahmed
Mueller, Michael
Hill, Michael
Qayati, Mohamed
Mohamed, Shady
Mekhaimar, Mahmoud
Raymond, Catalina
Dubey, Prachi
Nael, Kambiz
Rohatgi, Saurabh
Kapare, Vaishali
Kulkarni, Ashwini
Shiang, Tina
Kumar, Atul
Andratschke, Nicolaus
Willmann, Jonas
Brawanski, Alexander
De Jesus, Reordan
Tuna, Ibrahim
Fung, Steve H.
Landolfi, Joseph C.
Ellingson, Benjamin M.
Reyes, Mauricio
Neuro-Oncology Advances, 2022, Journal Article, cited 0 times
Website
QIN GBM Treatment Response
BraTS-TCGA-GBM
BRAIN
Segmentation
Machine Learning
Convolutional Neural Network (CNN)
High grade glioma
Background: Accurate and repeatable measurement of high-grade glioma (HGG) enhancing (Enh.) and T2/FLAIR hyperintensity/edema (Ed.) is required for monitoring treatment response. 3D measurements can be used to inform the modified Response Assessment in Neuro-oncology criteria (mRANO). We aim to develop an HGG volumetric measurement and visualisation AI algorithm that is generalizable and repeatable.
Material and methods: A single 3D convolutional neural network (CNN), NS-HGlio, to analyse HGG on MRIs using 5-fold cross validation was developed using a retrospective (557 MRIs), multicentre (38 sites) and multivendor (32 scanners) dataset divided into training (70%), validation (20%) and testing (10%). Six neuroradiologists created the ground truth (GT). Additional internal validation (IV, three institutions) using 70 MRIs, external validation (EV, single institution) using 40 MRIs through the Dice Similarity Coefficient (DSC) of Enh., Ed. and Enh. + Ed. (WholeLesion/WL) labels, and repeatability testing on 14 subjects from the TCIA MGH-QIN-GBM dataset using volume correlations between timepoints were performed.
Results: IV preoperative median DSC: Enh. 0.89 (SD 0.11), Ed. 0.88 (0.28), WL 0.88 (0.11). EV preoperative median DSC: Enh. 0.82 (0.09), Ed. 0.83 (0.11), WL 0.86 (0.06). IV postoperative median DSC: Enh. 0.77 (SD 0.20), Ed. 0.78 (SD 0.09), WL 0.78 (SD 0.11). EV postoperative median DSC: Enh. 0.75 (0.21), Ed. 0.74 (0.12), WL 0.79 (0.07). Repeatability testing: intraclass correlation coefficient (ICC) of 0.95 for Enh. and 0.92 for Ed.
Conclusion: NS-HGlio is accurate, repeatable, and generalizable. The output can be used for visualization, documentation, treatment response monitoring, radiation planning, intra-operative targeting, and estimation of Residual Tumor Volume (RTV), among others.
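The DSC values reported above follow the standard Dice overlap definition; a minimal sketch for binary masks (the paper's full evaluation pipeline is not reproduced here):

    import numpy as np

    def dice(pred, gt, eps=1e-7):
        # pred, gt: binary segmentation masks of identical shape
        pred, gt = pred.astype(bool), gt.astype(bool)
        inter = np.logical_and(pred, gt).sum()
        return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)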
Brain Tumor Detection and Classification on MR Images by a Deep Wavelet Auto-Encoder Model
Abd El Kader, Isselmou
Xu, Guizhi
Shuai, Zhang
Saminu, Sani
Javaid, Imran
Ahmad, Isah Salim
Kamhi, Souha
Diagnostics, 2021, Journal Article, cited 16 times
Website
TCGA-GBM
TCGA-LGG
BraTS 2015
Magnetic Resonance Imaging (MRI)
BRAIN
Computer Aided Detection (CADe)
Algorithm Development
Classification
Segmentation
Deep Learning
Wavelet autoencoder
The process of diagnosing brain tumors is very complicated for many reasons, including the brain's synaptic structure, size, and shape. Machine learning techniques are employed to help doctors detect brain tumors and support their decisions. In recent years, deep learning techniques have achieved great success in medical image analysis. This paper proposes a deep wavelet autoencoder model, named the "DWAE model", employed to classify each input data slice as tumor (abnormal) or no tumor (normal). A high-pass filter was used to reveal the heterogeneity of the MRI images and was integrated with the input images, and a median filter was utilized to merge slices. We improved the quality of the output slices by highlighting edges and smoothing the input MR brain images. Then, we applied a 4-connected seed-growing method, in which thresholding clusters pixels of equal intensity in the input MR data. The segmented MR image slices were fed to the proposed two-layer deep wavelet auto-encoder model, with 200 hidden units in the first layer and 400 hidden units in the second layer. A softmax layer was trained and tested to identify normal and abnormal MR images. The contribution of the deep wavelet auto-encoder model lies in the analysis of the pixel patterns of MR brain images and the ability to detect and classify the tumor with high accuracy, short runtime, and low validation loss. To train and test the overall performance of the proposed model, we utilized 2500 MR brain images from BRATS2012, BRATS2013, BRATS2014, the BRATS2015 challenge, and ISLES, consisting of normal and abnormal images. The experimental results show that the proposed model achieved an accuracy of 99.3%, a validation loss of 0.1, and low FPR and FNR values. This result demonstrates that the proposed DWAE model can facilitate the automatic detection of brain tumors.
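A minimal PyTorch sketch of a two-layer stacked autoencoder classifier with the hidden sizes quoted above (200 then 400 units) feeding a softmax head; the wavelet preprocessing and the greedy layer-wise pretraining are not reproduced, and the input dimension is an assumption.

    import torch
    import torch.nn as nn

    class StackedAEClassifier(nn.Module):
        def __init__(self, in_dim, n_classes=2):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(in_dim, 200), nn.Sigmoid(),  # first autoencoder layer
                nn.Linear(200, 400), nn.Sigmoid(),     # second autoencoder layer
            )
            # Softmax is applied implicitly by nn.CrossEntropyLoss at train time.
            self.head = nn.Linear(400, n_classes)

        def forward(self, x):
            return self.head(self.encoder(x))

    logits = StackedAEClassifier(in_dim=4096)(torch.randn(8, 4096))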
A Robust EfficientNetV2-S Classifier for Predicting Acute Lymphoblastic Leukemia Based on Cross Validation
Abd El-Aziz, A. A.
Mahmood, M. A.
Abd El-Ghany, S.
Symmetry-Basel, 2025, Journal Article, cited 1 time
Website
C-NMC 2019
white blood cells
ALL
leukemia
blood cancer
deep learning
efficientnetv2-s
k fold
cross validation
c-nmc_leukemia
children
This research addresses the challenges of early detection of Acute Lymphoblastic Leukemia (ALL), a life-threatening blood cancer particularly prevalent in children. Manual diagnosis of ALL is often error-prone, time-consuming, and reliant on expert interpretation, leading to delays in treatment. This study proposes an automated binary classification model based on the EfficientNetV2-S architecture to overcome these limitations, enhanced with 5-fold cross-validation (5KCV) for robust performance. A novel aspect of this research lies in leveraging the symmetry concepts of symmetric and asymmetric patterns within the microscopic imagery of white blood cells. Symmetry plays a critical role in distinguishing typical cellular structures (symmetric) from the abnormal morphological patterns (asymmetric) characteristic of ALL. By integrating insights from generative modeling techniques, the study explores how asymmetric distortions in cellular structures can serve as key markers for disease classification. The EfficientNetV2-S model was trained and validated using the normalized C-NMC_Leukemia dataset, achieving exceptional metrics: 97.34% accuracy, recall, precision, specificity, and F1-score. Comparative analysis showed the model outperforms recent classifiers, making it highly effective for distinguishing abnormal white blood cells. This approach accelerates diagnosis, reduces costs, and improves patient outcomes, offering a transformative tool for early ALL detection and treatment planning.
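The 5-fold cross-validation (5KCV) protocol described above can be sketched as follows; the classifier is a simple placeholder, not EfficientNetV2-S, and the data are dummy values.

    import numpy as np
    from sklearn.model_selection import StratifiedKFold
    from sklearn.linear_model import LogisticRegression

    X = np.random.rand(100, 16)       # placeholder feature vectors
    y = np.random.randint(0, 2, 100)  # dummy ALL-vs.-normal labels
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    scores = []
    for train_idx, val_idx in skf.split(X, y):
        clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        scores.append(clf.score(X[val_idx], y[val_idx]))
    print(f"mean 5-fold accuracy: {np.mean(scores):.4f}")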
Two-phase multi-model automatic brain tumour diagnosis system from magnetic resonance images using convolutional neural networks
Abd-Ellah, Mahmoud Khaled
Awad, Ali Ismail
Khalaf, Ashraf AM
Hamed, Hesham FA
EURASIP Journal on Image and Video Processing, 2018, Journal Article, cited 0 times
Website
RIDER Neuro MRI
Convolutional Neural Network (CNN)
Deep Learning
Radiomics
Training Hacks and a Frugal Man’s Net with Application to Glioblastoma Segmentation
Abdallah, Jawher Ben
Marrakchi-Kacem, Linda
Rekik, Islem
2022, Conference Paper, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
In this paper, we investigate the effectiveness of training a sparse Neural Network on a limited number of samples in the context of brain tumor segmentation. Nowadays, Deep Learning architectures are getting deeper, more sophisticated and environmentally unfriendly in an effort to improve their segmentation performance. We use a brain tumor segmentation dataset and apply simple practices to reduce the needed computational resources to allow cheap and fast training. We also present a lighter, cheaper version of the U-Net dubbed Frugal U-Net stemming from our investigation on how far we can push the original U-Net by decreasing its parameter count using Depth-Wise Separable Convolutions instead of regular ones, all the while preserving the minimum levels of accuracy required in Medical Imaging. Our methodology is useful in clinical facilities where high-computation resources are limited.
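The parameter saving behind Frugal U-Net comes from swapping regular convolutions for depth-wise separable ones; a minimal PyTorch sketch (channel sizes are illustrative):

    import torch.nn as nn

    def separable_conv(in_ch, out_ch):
        return nn.Sequential(
            # Depth-wise: one 3x3 filter per input channel (groups=in_ch).
            nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch),
            # Point-wise: a 1x1 convolution mixes channels.
            nn.Conv2d(in_ch, out_ch, kernel_size=1),
        )

    # Weights: 3*3*in_ch + in_ch*out_ch, vs. 3*3*in_ch*out_ch for a regular conv.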
Three-dimensional visualization of brain tumor progression based accurate segmentation via comparative holographic projection
Abdelazeem, R. M.
Youssef, D.
El-Azab, J.
Hassab-Elnaby, S.
Agour, M.
PLoS One, 2020, Journal Article, cited 0 times
Website
Brain-Tumor-Progression
We propose a new optical method based on comparative holographic projection for visual comparison between two abnormal follow-up magnetic resonance (MR) exams of glioblastoma patients, to effectively visualize and assess tumor progression. First, the brain tissue and tumor areas are segmented from the MR exams using the fast marching method (FMM). The FMM approach is implemented on a computed pixel weight matrix based on an automated selection of a set of initialized target points. Thereafter, the associated phase holograms are calculated for the segmented structures based on an adaptive iterative Fourier transform algorithm (AIFTA). Within this approach, spatial multiplexing is applied to reduce speckle noise. Furthermore, hologram modulation is performed to represent two different reconstruction schemes. In both schemes, all calculated holograms are superimposed into a single two-dimensional (2D) hologram, which is then displayed on a reflective phase-only spatial light modulator (SLM) for optical reconstruction. The optical reconstruction of the first scheme displays a 3D map of the tumor, allowing visualization of the tumor volume after treatment and at progression. In contrast, the second scheme displays the follow-up exams in a side-by-side mode highlighting tumor areas, so the assessment of each case can be achieved quickly. The proposed system can be used as a valuable tool for interpretation and assessment of tumor progression with respect to the treatment method, providing an improvement in diagnosis and treatment planning.
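AIFTA belongs to the family of iterative Fourier transform (Gerchberg-Saxton-style) phase-retrieval algorithms; the generic loop below sketches how a phase-only hologram for an SLM can be computed, without the authors' adaptive refinements or spatial multiplexing.

    import numpy as np

    target = np.random.rand(256, 256)  # desired reconstruction amplitude (placeholder)
    field = np.exp(2j * np.pi * np.random.rand(256, 256))  # random initial phase
    for _ in range(50):
        far = np.fft.fft2(field)                   # propagate to the far field
        far = target * np.exp(1j * np.angle(far))  # impose the target amplitude
        near = np.fft.ifft2(far)                   # propagate back
        field = np.exp(1j * np.angle(near))        # keep a phase-only field
    hologram = np.angle(field)                     # phase map to display on the SLM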
Malignancy Classification of Lung Nodule Based on Accumulated Multi Planar Views and Canonical Correlation Analysis
The appearance of a small round or oval-shaped nodule in a computed tomography (CT) scan of the lung raises suspicion of lung cancer. To avoid misdiagnosis of lung cancer at an early stage, computer-aided diagnosis (CAD) assists oncologists in classifying pulmonary nodules as malignant (cancerous) or benign (noncancerous). This paper introduces a novel approach to pulmonary nodule classification employing three accumulated views (top, front, and side) of CT slices and canonical correlation analysis (CCA). The nodule is extracted from the 2D CT slice to obtain a region-of-interest (ROI) patch, and all patches from sequential slices are accumulated across the three views. The vector representation of each view is correlated with two training sets, malignant and benign, employing CCA in the spatial and Radon transform (RT) domains. According to the correlation coefficients, each view is classified, and the final classification decision is taken based on a priority decision. For training and testing, scans of 1,010 patients were downloaded from the Lung Image Database Consortium (LIDC). The final results show that the proposed method achieved the best performance, with an accuracy of 90.93%, compared with existing methods.
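A minimal sketch of the canonical correlation step, assuming paired feature matrices; the paper's exact scheme of scoring each test view against the malignant and benign training sets is not reproduced here.

    import numpy as np
    from sklearn.cross_decomposition import CCA

    X = np.random.rand(50, 64)  # accumulated-view vectors (one row per nodule)
    Y = np.random.rand(50, 64)  # paired reference vectors, e.g. a malignant training set
    cca = CCA(n_components=2).fit(X, Y)
    Xc, Yc = cca.transform(X, Y)
    # The correlation of the first canonical variate pair serves as a similarity score.
    r = np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1]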
Robust Computer-Aided Detection of Pulmonary Nodules from Chest Computed Tomography
Abduh, Zaid
Wahed, Manal Abdel
Kadah, Yasser M
Journal of Medical Imaging and Health Informatics, 2016, Journal Article, cited 5 times
Website
LIDC-IDRI
Computer Assisted Detection (CAD)
Classification
LUNG
Detection of pulmonary nodules in chest computed tomography scans plays an important role in the early diagnosis of lung cancer. A simple yet effective computer-aided detection system is developed to distinguish pulmonary nodules in chest CT scans. The proposed system includes feature extraction, normalization, selection and classification steps. One hundred forty-nine gray-level statistical features are extracted from selected regions of interest. A min-max normalization method is used, followed by a sequential forward feature selection technique with a logistic regression model as the criterion function, which selected an optimal set of five features for classification. The classification step was done using nearest neighbor and support vector machine (SVM) classifiers with separate training and testing sets. Several measures were used to evaluate system performance, including the area under the ROC curve (AUC), sensitivity, specificity, precision, accuracy, F1 score and Cohen's kappa. Excellent performance with high sensitivity and specificity is reported using data from two reference datasets, as compared to previous work.
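The described pipeline (min-max normalization, sequential forward selection of five features under a logistic-regression criterion, then an SVM) maps naturally onto scikit-learn; a sketch, with X_train and y_train as placeholders:

    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC

    pipe = make_pipeline(
        MinMaxScaler(),                                   # min-max normalization
        SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                  n_features_to_select=5, direction="forward"),
        SVC(kernel="rbf"),                                # final SVM classifier
    )
    # pipe.fit(X_train, y_train); pipe.score(X_test, y_test)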
Automated classification of acute leukemia on a heterogeneous dataset using machine learning and deep learning techniques
Abhishek, Arjun
Jha, Rajib Kumar
Sinha, Ruchi
Jha, Kamlesh
Biomedical Signal Processing and Control, 2022, Journal Article, cited 2 times
Website
SN-AM
Pathomics
Acute myeloid leukemia
Acute lymphoblastic leukemia
Computer Aided Detection (CADe)
Classification
Machine learning
Deep learning
Support Vector Machine (SVM)
TIFF
Today, artificial intelligence and deep learning techniques constitute a prominent part of medical science. These techniques help doctors detect diseases early and reduce their burden as well as the chance of errors. However, experiments based on deep learning techniques require large and well-annotated datasets. This paper introduces a novel dataset of 500 peripheral blood smear images, containing normal, Acute Myeloid Leukemia and Acute Lymphoblastic Leukemia images. The dataset comprises almost 1700 cancerous blood cells. The size of the dataset is increased by adding images from a publicly available dataset, forming a heterogeneous dataset. The heterogeneous dataset is used for the automated binary classification task, which is one of the major tasks of the proposed work. The proposed work performs binary as well as three-class classification tasks involving state-of-the-art techniques based on machine learning and deep learning. For binary classification, the proposed work achieved an accuracy of 97% when the fully connected layers along with the last three convolutional layers of VGG16 are fine-tuned, and 98% for DenseNet121 along with a support vector machine. For the three-class classification task, an accuracy of 95% is obtained for ResNet50 along with a support vector machine. The preparation of the novel dataset was done under the guidance of various experts, which will help the scientific community in medical research supported by machine learning models.
Comparison of MR Preprocessing Strategies and Sequences for Radiomics-Based MGMT Prediction
Hypermethylation of the O6-methylguanine-DNA-methyltransferase (MGMT) promoter in glioblastoma (GBM) is a predictive biomarker associated with improved treatment outcome. In clinical practice, MGMT methylation status is determined by biopsy or after surgical removal of the tumor. This study aims to investigate the feasibility of non-invasive medical imaging based "radio-genomic" surrogate markers of MGMT methylation status.
The imaging dataset of the RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) challenge allows exploring radiomics strategies for MGMT prediction in a large and very heterogeneous dataset that represents a variety of real-world imaging conditions including different imaging protocols and devices. To characterize and optimize MGMT prediction strategies under these conditions, we examined different image preprocessing approaches and their effect on the average prediction performance of simple radiomics models.
We found features derived from FLAIR images to be most informative for MGMT prediction, particularly if aggregated over the entire (enhancing and non-enhancing) tumor with or without inclusion of the edema. Our results also indicate that the imaging characteristics of the tumor region can distort MR-bias-field correction in a way that negatively affects the prediction performance of the derived models.
Clinical implementation of artificial intelligence in neuroradiology with development of a novel workflow-efficient picture archiving and communication system-based automated brain tumor segmentation and radiomic feature extraction
Aboian, M.
Bousabarah, K.
Kazarian, E.
Zeevi, T.
Holler, W.
Merkaj, S.
Cassinelli Petersen, G.
Bahar, R.
Subramanian, H.
Sunku, P.
Schrickel, E.
Bhawnani, J.
Zawalich, M.
Mahajan, A.
Malhotra, A.
Payabvash, S.
Tocino, I.
Lin, M.
Westerhoff, M.
Front Neurosci, 2022, Journal Article, cited 0 times
Website
BraTS-TCGA-GBM
IBSI
PACS (picture archiving and communication system)
artificial intelligence (AL)
brain tumor
feature extraction
glioma
machine learning (ML)
segmentation
Purpose: Personalized interpretation of medical images is critical for optimum patient care, but current tools available to physicians to perform quantitative analysis of patients' medical images in real time are significantly limited. In this work, we describe a novel platform within PACS for volumetric analysis of images and thus development of large expert-annotated datasets, in parallel with the radiologist performing the reading, that are critically needed for development of clinically meaningful AI algorithms. Specifically, we implemented a deep learning-based algorithm for automated brain tumor segmentation and radiomics extraction, and embedded it into PACS to accelerate a supervised, end-to-end workflow for image annotation and radiomic feature extraction.
Materials and methods: An algorithm was trained to segment whole primary brain tumors on FLAIR images from the multi-institutional glioma BraTS 2021 dataset. The algorithm was validated using an internal dataset from Yale New Haven Health (YHHH) and compared (by Dice similarity coefficient [DSC]) to radiologist manual segmentation. A UNETR deep-learning model was embedded into the Visage 7 (Visage Imaging, Inc., San Diego, CA, United States) diagnostic workstation. The automatically segmented brain tumor was pliable for manual modification. PyRadiomics (Harvard Medical School, Boston, MA) was natively embedded into Visage 7 for feature extraction from the brain tumor segmentations.
Results: UNETR brain tumor segmentation took on average 4 s and the median DSC was 86%, which is similar to published literature but lower than the RSNA ASNR MICCAI BraTS challenge 2021. Finally, extraction of 106 radiomic features within PACS took on average 5.8 ± 0.01 s. The extracted radiomic features did not vary over time of extraction or whether they were extracted within PACS or outside of PACS. The ability to perform segmentation and feature extraction before the radiologist opens the study was made available in the workflow. Opening the study in PACS allows the radiologists to verify the segmentation and thus annotate the study.
Conclusion: Integration of image processing algorithms for tumor auto-segmentation and feature extraction into PACS allows curation of large datasets of annotated medical images and can accelerate translation of research into development of personalized medicine applications in the clinic. The ability to use familiar clinical tools to revise the AI segmentations, and natively embedding the segmentation and radiomic feature extraction tools on the diagnostic workstation, accelerates the process of generating ground-truth data.
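The radiomic feature extraction step uses PyRadiomics, whose standard API looks like the sketch below; the file paths are placeholders, and the extractor settings the authors used inside Visage 7 are not reproduced.

    from radiomics import featureextractor

    extractor = featureextractor.RadiomicsFeatureExtractor()
    # Paths are hypothetical: a FLAIR volume and its tumor segmentation mask.
    features = extractor.execute("flair.nii.gz", "tumor_mask.nii.gz")
    for name, value in features.items():
        print(name, value)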
Brain Tumor Segmentation Based on Deep Learning's Feature Representation
Aboussaleh, Ilyasse
Riffi, Jamal
Mahraz, Adnane Mohamed
Tairi, Hamid
Journal of Imaging, 2021, Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Algorithm Development
BraTS 2017
Challenge
Classification
Brain tumors are considered one of the most serious causes of death in the world; thus, it is very important to detect them as early as possible. Many approaches have been proposed to predict and segment the tumor. However, they suffer from different problems, such as the necessity of specialist intervention, long run-times, and the choice of an appropriate feature extractor. To address these issues, we proposed an approach based on a convolutional neural network architecture aimed at simultaneously predicting and segmenting a cerebral tumor. The proposal was divided into two phases. First, to avoid labeled images that imply subjective intervention by a specialist, we used a simple binary annotation that reflects whether the tumor exists or not. Second, the prepared image data were fed into our deep learning model, in which the final classification was obtained; if the classification indicated the existence of a tumor, the brain tumor was segmented based on the feature representations generated by the convolutional neural network architecture. The proposed method was trained on the BraTS 2017 dataset with different types of gliomas. The achieved results show the performance of the proposed approach in terms of accuracy, precision, recall and Dice similarity coefficient. Our model showed an accuracy of 91% in tumor classification and a Dice similarity coefficient of 82.35% in tumor segmentation.
Computer-aided diagnosis of clinically significant prostate cancer from MRI images using sparse autoencoder and random forest classifier
Abraham, Bejoy
Nair, Madhu S
Biocybernetics and Biomedical Engineering, 2018, Journal Article, cited 0 times
Website
PROSTATEx
Prostate Cancer
machine learning
Computer Aided Diagnosis (CADx)
Computer-aided classification of prostate cancer grade groups from MRI images using texture features and stacked sparse autoencoder
Abraham, Bejoy
Nair, Madhu S
Computerized Medical Imaging and Graphics, 2018, Journal Article, cited 1 time
Website
PROSTATEx-2 2017 challenge
Classification
Automated grading of prostate cancer using convolutional neural network and ordinal class classifier
Abraham, Bejoy
Nair, Madhu S.
Informatics in Medicine Unlocked, 2019, Journal Article, cited 0 times
Website
Soft Tissue Sarcoma
PROSTATEx-2 2017 challenge
VGG-16 Convolutional Neural Network
Convolutional Neural Network (CNN)
Prostate Cancer (PCa) is one of the most prevalent cancers among men. Early diagnosis and treatment planning are significant in reducing the mortality rate due to PCa. Accurate prediction of grade is required to ensure prompt treatment for cancer. Grading of prostate cancer can be considered an ordinal class classification problem. This paper presents a novel method for the grading of prostate cancer from multiparametric magnetic resonance images using the VGG-16 Convolutional Neural Network and an Ordinal Class Classifier with J48 as the base classifier. Multiparametric magnetic resonance images of the PROSTATEx-2 2017 grand challenge dataset are employed for this work. The method achieved a moderate quadratic weighted kappa score of 0.4727 in the grading of PCa into 5 grade groups, which is higher than state-of-the-art methods. The method also achieved a positive predictive value of 0.9079 in predicting clinically significant prostate cancer.
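The quadratic weighted kappa used to score ordinal grade-group predictions is available directly in scikit-learn; a sketch with dummy labels:

    from sklearn.metrics import cohen_kappa_score

    y_true = [1, 2, 3, 4, 5, 2, 1]  # dummy grade groups
    y_pred = [1, 3, 3, 4, 4, 2, 2]
    qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")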
Computer-aided grading of prostate cancer from MRI images using Convolutional Neural Networks
Abraham, Bejoy
Nair, Madhu S.
2019, Journal Article, cited 0 times
PROSTATEx
Grading of prostate cancer is usually done using Transrectal Ultrasound (TRUS) biopsy followed by microscopic examination of histological images by the pathologist. TRUS is a painful procedure which can lead to infections of a severe nature. In the recent past, Magnetic Resonance Imaging (MRI) has emerged as a modality which can be used for the diagnosis of prostate cancer without subjecting patients to biopsies. A novel method for grading of prostate cancer based on MRI, utilizing Convolutional Neural Networks (CNN) and the LADTree classifier, is explored in this paper. T2-weighted (T2W), high B-value Diffusion Weighted (BVALDW) and Apparent Diffusion Coefficient (ADC) MRI images obtained from the training dataset of the PROSTATEx-2 2017 challenge are used for this study. A quadratic weighted Cohen's kappa score of 0.3772 is attained in predicting different grade groups of cancer, and a positive predictive value of 81.58% in predicting high-grade cancer. The method also attained an unweighted kappa score of 0.3993, and a weighted Area Under the Receiver Operating Characteristic Curve (AUC), accuracy and F-score of 0.74, 58.04% and 0.56, respectively. These results are better than those obtained by the winning method of the PROSTATEx-2 2017 challenge.
Multimodal Segmentation with MGF-Net and the Focal Tversky Loss Function
In neuro-imaging, MRI is commonly used to acquire multiple sequences simultaneously, including T1, T2 and FLAIR. Multimodal image segmentation involves learning an optimal, joint representation of these sequences for accurate delineation of the region of interest. The most commonly utilized fusion scheme for multimodal segmentation is early fusion, where each modality sequence is treated as an independent channel. In this work, we propose a fusion architecture termed the Moment Gated Fusion (MGF) network, which combines feature moments from individual modality sequences for the segmentation task. We supervise our network with a variant of the focal Tversky loss function. Our architecture promotes explainability and lightweight CNN design, and has achieved 0.687, 0.843 and 0.751 DSC scores on the BraTS 2019 test cohort, which is competitive with the commonly used vanilla U-Net.
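A minimal PyTorch sketch of a focal Tversky loss of the kind the abstract supervises with; alpha, beta and gamma below are one common parameterization, not necessarily the paper's values.

    import torch

    def focal_tversky_loss(probs, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
        # probs, target: tensors of the same shape with values in [0, 1]
        tp = (probs * target).sum()
        fn = ((1 - probs) * target).sum()
        fp = (probs * (1 - target)).sum()
        tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
        # The focal exponent down-weights easy examples relative to plain Tversky.
        return (1 - tversky) ** gamma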
A novel CAD system to automatically detect cancerous lung nodules using wavelet transform and SVM
Abu Baker, Ayman A.
Ghadi, Yazeed
International Journal of Electrical and Computer Engineering (IJECE), 2020, Journal Article, cited 0 times
Website
LIDC-IDRI
Support Vector Machine (SVM)
A novel cancerous nodule detection algorithm for computed tomography (CT) images is presented in this paper. CT images are large images with high resolution. In some cases, a number of cancerous lung nodule lesions may be missed by the radiologist due to fatigue. The CAD system proposed in this paper can help the radiologist detect cancerous nodules in CT images. The proposed algorithm is divided into four stages. In the first stage, an enhancement algorithm is implemented to highlight suspicious regions. Then, in the second stage, the region of interest is detected. Adaptive SVM and wavelet transform techniques are used to reduce the detected false-positive regions. The algorithm was evaluated using 60 cases (normal and cancerous), and it shows a high sensitivity in detecting cancerous lung nodules, with a TP ratio of 94.5% and an FP ratio of 7 clusters/image.
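A sketch of the 2D wavelet decomposition that such a false-positive-reduction stage typically feeds into an SVM; the wavelet family and feature layout here are assumptions.

    import numpy as np
    import pywt

    slice_ = np.random.rand(512, 512)             # placeholder CT slice
    cA, (cH, cV, cD) = pywt.dwt2(slice_, "haar")  # approximation + detail subbands
    feature_vector = np.concatenate([c.ravel() for c in (cA, cH, cV, cD)])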
Repeatability of Automated Image Segmentation with BraTumIA in Patients with Recurrent Glioblastoma
Abu Khalaf, N.
Desjardins, A.
Vredenburgh, J. J.
Barboriak, D. P.
AJNR Am J Neuroradiol, 2021, Journal Article, cited 0 times
Website
RIDER Neuro MRI
Machine Learning
Segmentation
BRAIN
Algorithm Development
GBM
BACKGROUND AND PURPOSE: Despite high interest in machine-learning algorithms for automated segmentation of MRIs of patients with brain tumors, there are few reports on the variability of segmentation results. The purpose of this study was to obtain benchmark measures of repeatability for a widely accessible software program, BraTumIA (Versions 1.2 and 2.0), which uses a machine-learning algorithm to segment tumor features on contrast-enhanced brain MR imaging. MATERIALS AND METHODS: Automatic segmentation of enhancing tumor, tumor edema, nonenhancing tumor, and necrosis was performed on repeat MR imaging scans obtained approximately 2 days apart in 20 patients with recurrent glioblastoma. Measures of repeatability and spatial overlap, including repeatability and Dice coefficients, are reported. RESULTS: Larger volumes of enhancing tumor were obtained on later compared with earlier scans (mean, 26.3 versus 24.2 mL for BraTumIA 1.2; P < .05; and 24.9 versus 22.9 mL for BraTumIA 2.0, P < .01). In terms of percentage change, repeatability coefficients ranged from 31% to 46% for enhancing tumor and edema components and from 87% to 116% for nonenhancing tumor and necrosis. Dice coefficients were highest (>0.7) for enhancing tumor and edema components, intermediate for necrosis, and lowest for nonenhancing tumor and did not differ between software versions. Enhancing tumor and tumor edema were smaller, and necrotic tumor larger using BraTumIA 2.0 rather than 1.2. CONCLUSIONS: Repeatability and overlap metrics varied by segmentation type, with better performance for segmentations of enhancing tumor and tumor edema compared with other components. Incomplete washout of gadolinium contrast agents could account for increasing enhancing tumor volumes on later scans.
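For reference, the repeatability coefficient reported above is conventionally derived from the within-subject standard deviation s_w of repeated measurements (Bland-Altman); the paper expresses it as a percentage change, but the underlying definition is:

    \[ \mathrm{RC} = 1.96\,\sqrt{2}\; s_w \approx 2.77\, s_w \]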
Automated lung tumor detection and diagnosis in CT Scans using texture feature analysis and SVM
Adams, Tim
Dörpinghaus, Jens
Jacobs, Marc
Steinhage, Volker
Communication Papers of the Federated Conference on Computer Science and Information Systems, 2018, Journal Article, cited 0 times
Website
SPIE-AAPM Lungx
Haralick texture features
support vector machine (SVM)
Radiomics
Technical note: Evaluation of a V-Net autosegmentation algorithm for pediatric CT scans: Performance, generalizability, and application to patient-specific CT dosimetry
Adamson, Philip M.
Bhattbhatt, Vrunda
Principi, Sara
Beriwal, Surabhi
Strain, Linda S.
Offe, Michael
Wang, Adam S.
Vo, Nghia-Jack
Schmidt, Taly Gilat
Jordan, Petr
Medical Physics, 2022, Journal Article, cited 0 times
Pediatric-CT-SEG
PURPOSE: This study developed and evaluated a fully convolutional network (FCN) for pediatric CT organ segmentation and investigated the generalizability of the FCN across image heterogeneities such as CT scanner model protocols and patient age. We also evaluated the autosegmentation models as part of a software tool for patient-specific CT dose estimation.
METHODS: A collection of 359 pediatric CT datasets with expert organ contours were used for model development and evaluation. Autosegmentation models were trained for each organ using a modified FCN 3D V-Net. An independent test set of 60 patients was withheld for testing. To evaluate the impact of CT scanner model protocol and patient age heterogeneities, separate models were trained using a subset of scanner model protocols and pediatric age groups. Train and test sets were split to answer questions about the generalizability of pediatric FCN autosegmentation models to unseen age groups and scanner model protocols, as well as the merit of scanner model protocol or age-group-specific models. Finally, the organ contours resulting from the autosegmentation models were applied to patient-specific dose maps to evaluate the impact of segmentation errors on organ dose estimation.
RESULTS: Results demonstrate that the autosegmentation models generalize to CT scanner acquisition and reconstruction methods which were not present in the training dataset. While models are not equally generalizable across age groups, age-group-specific models do not hold any advantage over combining heterogeneous age groups into a single training set. Dice similarity coefficient (DSC) and mean surface distance results are presented for 19 organ structures, for example, median DSC of 0.52 (duodenum), 0.74 (pancreas), 0.92 (stomach), and 0.96 (heart). The FCN models achieve a mean dose error within 5% of expert segmentations for all 19 organs except for the spinal canal, where the mean error was 6.31%.
CONCLUSIONS: Overall, these results are promising for the adoption of FCN autosegmentation models for pediatric CT, including applications for patient-specific CT dose estimation.
DNA-methylome-assisted classification of patients with poor prognostic subventricular zone associated IDH-wildtype glioblastoma
Adeberg, S.
Knoll, M.
Koelsche, C.
Bernhardt, D.
Schrimpf, D.
Sahm, F.
Konig, L.
Harrabi, S. B.
Horner-Rieber, J.
Verma, V.
Bewerunge-Hudler, M.
Unterberg, A.
Sturm, D.
Jungk, C.
Herold-Mende, C.
Wick, W.
von Deimling, A.
Debus, J.
Rieken, S.
Abdollahi, A.
Acta Neuropathol, 2022, Journal Article, cited 0 times
Website
TCGA-GBM
Radiomics
Radiogenomics
Methylation markers
Glioblastoma (GBM) derived from the "stem cell" rich subventricular zone (SVZ) may constitute a therapy-refractory subgroup of tumors associated with poor prognosis. Risk stratification for these cases is necessary but is curtailed by error prone imaging-based evaluation. Therefore, we aimed to establish a robust DNA methylome-based classification of SVZ GBM and subsequently decipher underlying molecular characteristics. MRI assessment of SVZ association was performed in a retrospective training set of IDH-wildtype GBM patients (n = 54) uniformly treated with postoperative chemoradiotherapy. DNA isolated from FFPE samples was subject to methylome and copy number variation (CNV) analysis using Illumina Platform and cnAnalysis450k package. Deep next-generation sequencing (NGS) of a panel of 130 GBM-related genes was conducted (Agilent SureSelect/Illumina). Methylome, transcriptome, CNV, MRI, and mutational profiles of SVZ GBM were further evaluated in a confirmatory cohort of 132 patients (TCGA/TCIA). A 15 CpG SVZ methylation signature (SVZM) was discovered based on clustering and random forest analysis. One third of CpG in the SVZM were associated with MAB21L2/LRBA. There was a 14.8% (n = 8) discordance between SVZM vs. MRI classification. Re-analysis of these patients favored SVZM classification with a hazard ratio (HR) for OS of 2.48 [95% CI 1.35-4.58], p = 0.004 vs. 1.83 [1.0-3.35], p = 0.049 for MRI classification. In the validation cohort, consensus MRI based assignment was achieved in 62% of patients with an intraclass correlation (ICC) of 0.51 and non-significant HR for OS (2.03 [0.81-5.09], p = 0.133). In contrast, SVZM identified two prognostically distinct subgroups (HR 3.08 [1.24-7.66], p = 0.016). CNV alterations revealed loss of chromosome 10 in SVZM- and gains on chromosome 19 in SVZM- tumors. SVZM- tumors were also enriched for differentially mutated genes (p < 0.001). In summary, SVZM classification provides a novel means for stratifying GBM patients with poor prognosis and deciphering molecular mechanisms governing aggressive tumor phenotypes.
An Enhanced Framework Employing Feature Fusion for Effective Classification of Digital Breast Tomosynthesis Scans
Breast cancer remains a prevalent health concern, with high incidence rates globally. It is impossible to overestimate the significance of early breast cancer detection, since it not only enhances patient outcomes and treatment efficacy but also considerably lowers the disease's total burden and increases the chances of a favourable outcome. Three-dimensional images of the breast tissue are provided by Digital Breast Tomosynthesis (DBT), which has become a highly effective imaging method in the fight against breast cancer. The complicated nature of breast anatomy and the existence of minor abnormalities make it difficult to classify DBT scans accurately. This paper presents an enhanced framework that combines deep learning models with feature fusion and selection models to categorise Digital Breast Tomosynthesis (DBT) data into benign, malignant, and normal classes. The proposed system integrates Histogram of Oriented Gradients (HOG) with the HSV colour scheme to enhance the extraction of the most prominent features. Breast lesions in DBT scans can be discriminated more effectively because of the collaborative use of the feature fusion and selection models. In addition to our previously developed deep learning model, Mod_AlexNet, two pre-trained models, ResNet-50 and SqueezeNet, were used to train on the DBT dataset. A sequential sequence of fusion and selection processes was implemented once the features were extracted from the deep learning models. To categorise the selected features, several classifiers were subsequently employed. The proposed integrated Mod_AlexNet system demonstrated superior performance compared to other systems in terms of classification accuracy, sensitivity, precision, F1-score, and specificity across various classifiers. Our developed integrated system demonstrated improvement rates of 49.35% and 25.04% in terms of sensitivity, compared to ResNet-50- and SqueezeNet-based systems, respectively.
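A sketch of the HOG-on-HSV feature extraction the abstract combines with deep features; the channel choice and HOG parameters are illustrative assumptions.

    import numpy as np
    from skimage import color, feature

    rgb = np.random.rand(128, 128, 3)   # placeholder DBT slice rendered as RGB
    hsv = color.rgb2hsv(rgb)
    hog_vec = feature.hog(hsv[..., 2],  # HOG computed on the value channel
                          orientations=9, pixels_per_cell=(8, 8),
                          cells_per_block=(2, 2))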
Exploring the Diagnostic Potential of Radiomics-Based PET Image Analysis for T-Stage Tumor Diagnosis
Cancer is a leading cause of death globally, and early detection is crucial for better outcomes. This research aims to improve region-of-interest (ROI) segmentation and feature extraction in medical image analysis using radiomics techniques with 3D Slicer, PyRadiomics, and Python. Dimension-reduction methods, including PCA, K-means, t-SNE, ISOMAP, and hierarchical clustering, were applied to high-dimensional features to enhance interpretability and efficiency. The study assessed the ability of the reduced feature set to predict T-staging, an essential component of the TNM system for cancer diagnosis. Multinomial logistic regression models were developed and evaluated using MSE, AIC, BIC, and the deviance test. The dataset consisted of CT and PET-CT DICOM images from 131 lung cancer patients. Results showed that PCA identified 14 features, hierarchical clustering 17, t-SNE 58, and ISOMAP 40, with texture-based features being the most critical. This study highlights the potential of integrating radiomics and unsupervised learning techniques to enhance cancer prediction from medical images.
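The dimension-reduction-plus-multinomial-regression step can be sketched as follows; the component count follows the PCA result quoted above, while the data and labels are placeholders.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression

    X = np.random.rand(131, 500)               # radiomic features per patient (dummy)
    y = np.random.randint(1, 5, 131)           # T-stage labels (dummy)
    Z = PCA(n_components=14).fit_transform(X)  # 14 components, as PCA identified
    # With multi-class y, scikit-learn fits a multinomial logistic model.
    clf = LogisticRegression(max_iter=2000).fit(Z, y)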
Classification and Segmentation of Brain Tumor Using EfficientNet-B7 and U-Net
Adinegoro, Antonius Fajar
Sutapa, Gusti Ngurah
Gunawan, Anak Agung Ngurah
Anggarani, Ni Kadek Nova
Suardana, Putu
Kasmawan, I. Gde Antha
Asian Journal of Research in Computer Science, 2023, Journal Article, cited 0 times
Website
TCGA-LGG
Brain-Tumor-Progression
Transfer learning
Tumors are caused by uncontrolled growth of abnormal cells. Magnetic Resonance Imaging (MRI) is a modality that is widely used to produce highly detailed brain images. In addition, a surgical biopsy of the suspected tissue (tumor) is required to obtain more information about the type of tumor; a biopsy takes 10 to 15 days for laboratory testing. Based on a study conducted by Brady in 2016, errors in radiology practice are common, with an estimated daily error rate of 3-5%. Therefore, the application of artificial intelligence is expected to simplify diagnosis and improve the accuracy of doctors' diagnoses.
A quantitative analysis of imaging features in lung CT images using the RW-T hybrid segmentation model
Adiraju, RamaVasantha
Elias, Susan
Multimedia Tools and Applications, 2023, Journal Article, cited 0 times
Website
LungCT-Diagnosis
Segmentation
LUNG
Automatic Segmentation
Ground truth
Radiomic features
Lung cancer is the leading cause of cancer death worldwide. A lung nodule is the most common symptom of lung cancer. The analysis of lung cancer relies heavily on the segmentation of nodules, which aids in optimal treatment planning. However, because there are several types of lung nodules, accurate segmentation remains challenging. We propose an RW-T hybrid approach capable of segmenting all types of nodules, primarily externally attached nodules (juxta-pleural and juxta-vascular), and estimate the effect of nodule segmentation techniques in assessing quantitative computed tomography (CT) imaging features in lung adenocarcinoma. On 301 lung CT images from 40 patients with lung adenocarcinoma from the LungCT-Diagnosis dataset, publicly available in The Cancer Imaging Archive (TCIA), we used a random-walk strategy and a thresholding method to implement nodule segmentation. We extracted two quantitative CT features from the segmented nodule using morphological techniques: convexity and entropy scores. The segmented nodules from the proposed method are compared to the single-click ensemble segmentation method and validated against ground-truth segmented nodules. Our proposed segmentation approach had a high level of agreement with ground-truth delineations, with a Dice similarity coefficient of 0.7884, compared to single-click ensemble segmentation with a Dice similarity of 0.6407.
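The random-walk half of an RW-T hybrid is available off the shelf in scikit-image; a sketch with illustrative seed placement rather than the paper's seeding strategy.

    import numpy as np
    from skimage.segmentation import random_walker

    img = np.random.rand(128, 128)         # placeholder lung CT patch
    seeds = np.zeros_like(img, dtype=int)  # 0 = unlabeled pixels
    seeds[64, 64] = 1                      # foreground (nodule) seed
    seeds[5, 5] = 2                        # background seed
    labels = random_walker(img, seeds, beta=130)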
A survey on lung CT datasets and research trends
Adiraju, Rama Vasantha
Elias, Susan
Research on Biomedical Engineering, 2021, Journal Article, cited 0 times
NSCLC-Radiomics
SPIE-AAPM Lung CT Challenge
Purpose: Lung cancer is the most dangerous of all forms of cancer and has the highest occurrence rate worldwide. Early detection of lung cancer is a difficult task. Medical images generated by computed tomography (CT) are being used extensively for lung cancer analysis and research. However, it is essential to have a well-organized image database in order to design a reliable computer-aided diagnosis (CAD) tool, and identifying the most appropriate dataset for research is another big challenge.
Literature review: The objective of this paper is to present a review of literature related to lung CT datasets. The Cancer Imaging Archive (TCIA) consortium collates different types of cancer datasets and permits public access through an integrated search engine. This survey summarizes the research work done using lung CT datasets maintained by TCIA. The motivation for this survey was to help the research community select the right lung dataset and to provide a comprehensive summary of research developments in the field.
Defining a Radiomic Response Phenotype: A Pilot Study using targeted therapy in NSCLC
Aerts, Hugo JWL
Grossmann, Patrick
Tan, Yongqiang
Oxnard, Geoffrey G
Rizvi, Naiyer
Schwartz, Lawrence H
Zhao, Binsheng
Scientific Reports, 2016, Journal Article, cited 40 times
Website
RIDER
radiomics
NSCLC
lung
Medical imaging plays a fundamental role in oncology and drug development, by providing a non-invasive method to visualize tumor phenotype. Radiomics can quantify this phenotype comprehensively by applying image-characterization algorithms, and may provide important information beyond tumor size or burden. In this study, we investigated whether radiomics can identify a gefitinib response phenotype, studying high-resolution computed tomography (CT) imaging of forty-seven patients with early-stage non-small cell lung cancer before and after three weeks of therapy. On the baseline scan, the radiomic feature Laws-Energy was significantly predictive of EGFR-mutation status (AUC = 0.67, p = 0.03), while volume (AUC = 0.59, p = 0.27) and diameter (AUC = 0.56, p = 0.46) were not. Although no features were predictive on the post-treatment scan (p > 0.08), the change in features between the two scans was strongly predictive (significant feature AUC range = 0.74-0.91). A technical validation revealed that the associated features were also highly stable for test-retest (mean ± std: ICC = 0.96 ± 0.06). This pilot study shows that radiomic data before treatment are able to predict mutation status and associated gefitinib response non-invasively, demonstrating the potential of radiomics-based phenotyping to improve stratification and response assessment between tyrosine kinase inhibitor (TKI)-sensitive and -resistant patient populations.
Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach
Aerts, H. J.
Velazquez, E. R.
Leijenaar, R. T.
Parmar, C.
Grossmann, P.
Carvalho, S.
Bussink, J.
Monshouwer, R.
Haibe-Kains, B.
Rietveld, D.
Hoebers, F.
Rietbergen, M. M.
Leemans, C. R.
Dekker, A.
Quackenbush, J.
Gillies, R. J.
Lambin, P.
Nat Commun, 2014, Journal Article, cited 1029 times
Website
NSCLC-Radiomics
NSCLC-Radiomics-Genomics
radiomic features
Computed Tomography (CT)
Human cancers exhibit strong phenotypic differences that can be visualized noninvasively by medical imaging. Radiomics refers to the comprehensive quantification of tumour phenotypes by applying a large number of quantitative image features. Here we present a radiomic analysis of 440 features quantifying tumour image intensity, shape and texture, which are extracted from computed tomography data of 1,019 patients with lung or head-and-neck cancer. We find that a large number of radiomic features have prognostic power in independent data sets of lung and head-and-neck cancer patients, many of which were not identified as significant before. Radiogenomics analysis reveals that a prognostic radiomic signature, capturing intratumour heterogeneity, is associated with underlying gene-expression patterns. These data suggest that radiomics identifies a general prognostic phenotype existing in both lung and head-and-neck cancer. This may have a clinical impact as imaging is routinely used in clinical practice, providing an unprecedented opportunity to improve decision-support in cancer treatment at low cost.
From Handcrafted to Deep-Learning-Based Cancer Radiomics
Afshar, Parnian
Mohammadi, Arash
Plataniotis, Konstantinos N.
Oikonomou, Anastasia
Benali, Habib
2019, Journal Article, cited 0 times
LIDC-IDRI
Recent advancements in signal processing (SP) and machine learning, coupled with electronic medical record keeping in hospitals and the availability of extensive sets of medical images through internal/external communication systems, have resulted in a recent surge of interest in radiomics. Radiomics, an emerging and relatively new research field, refers to extracting semiquantitative and/or quantitative features from medical images with the goal of developing predictive and/or prognostic models. In the near future, it is expected to be a critical component for integrating image-derived information used for personalized treatment. The conventional radiomics workflow is typically based on extracting predesigned features (also referred to as handcrafted or engineered features) from a segmented region of interest (ROI). Nevertheless, recent advancements in deep learning have inspired trends toward deep-learning-based radiomics (DLRs) (also referred to as discovery radiomics). In addition to the advantages of these two approaches, there are also hybrid solutions that exploit the potential of multiple data sources. Considering the variety of approaches to radiomics, further improvements require a comprehensive and integrated sketch, which is the goal of this article. This article provides a unique interdisciplinary perspective on radiomics by discussing state-of-the-art SP solutions in the context of radiomics.
3D-MCN: A 3D Multi-scale Capsule Network for Lung Nodule Malignancy Prediction
Despite the advances in automatic lung cancer malignancy prediction, achieving high accuracy remains challenging. Existing solutions are mostly based on Convolutional Neural Networks (CNNs), which require a large amount of training data. Most of the developed CNN models are based only on the main nodule region, without considering the surrounding tissues. Obtaining high sensitivity is challenging with lung nodule malignancy prediction. Moreover, the interpretability of the proposed techniques should be a consideration when the end goal is to utilize the model in a clinical setting. Capsule networks (CapsNets) are new and revolutionary machine learning architectures proposed to overcome shortcomings of CNNs. Capitalizing on the success of CapsNet in biomedical domains, we propose a novel model for lung tumor malignancy prediction. The proposed framework, referred to as the 3D Multi-scale Capsule Network (3D-MCN), is uniquely designed to benefit from: (i) 3D inputs, providing information about the nodule in 3D; (ii) Multi-scale input, capturing the nodule's local features, as well as the characteristics of the surrounding tissues, and; (iii) CapsNet-based design, being capable of dealing with a small number of training samples. The proposed 3D-MCN architecture predicted lung nodule malignancy with a high accuracy of 93.12%, sensitivity of 94.94%, area under the curve (AUC) of 0.9641, and specificity of 90% when tested on the LIDC-IDRI dataset. When classifying patients as having a malignant condition (i.e., at least one malignant nodule is detected) or not, the proposed model achieved an accuracy of 83%, and a sensitivity and specificity of 84% and 81% respectively.
Multi-Atlas Image Soft Segmentation via Computation of the Expected Label Value
Aganj, Iman
Fischl, Bruce
IEEE Transactions on Medical Imaging, 2021, Journal Article, cited 0 times
Pancreas-CT
The use of multiple atlases is common in medical image segmentation. This typically requires deformable registration of the atlases (or the average atlas) to the new image, which is computationally expensive and susceptible to entrapment in local optima. We propose to instead consider the probability of all possible atlas-to-image transformations and compute the expected label value (ELV), thereby not relying merely on the transformation deemed "optimal" by the registration method. Moreover, we do so without actually performing deformable registration, thus avoiding the associated computational costs. We evaluate our ELV computation approach by applying it to brain, liver, and pancreas segmentation on datasets of magnetic resonance and computed tomography images.
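In the notation of an atlas label map L and atlas-to-image transformations T with posterior p(T | I) given the image I, the expected label value described above can be written as follows (illustrative notation, not the paper's exact formulation):

    \[ \mathrm{ELV}(x) \;=\; \mathbb{E}_{T}\!\left[ L\big(T(x)\big) \right] \;=\; \int L\big(T(x)\big)\, p(T \mid I)\, \mathrm{d}T \]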
An Augmentation in the Diagnostic Potency of Breast Cancer through A Deep Learning Cloud-Based AI Framework to Compute Tumor Malignancy & Risk
Agarwal, O
International Research Journal of Innovations in Engineering and Technology (IRJIET), 2019, Journal Article, cited 0 times
CBIS-DDSM
This research project focuses on developing a web-based multi-platform solution for augmenting prognostic strategies to diagnose breast cancer (BC) from a variety of different tests, including histology, mammography, cytopathology, and fine-needle aspiration cytology, all in an automated fashion. The application utilizes tensor-based data representations and deep learning architectures to produce optimized models for the prediction of novel instances against each of these medical tests. This system has been designed so that all of its computation can be integrated seamlessly into a clinical setting without disrupting a clinician's productivity or workflow, instead enhancing their capabilities. This software can make the diagnostic process automated, standardized, faster, and even more accurate than current benchmarks achieved by both pathologists and radiologists, which makes it invaluable from a clinical standpoint for making well-informed diagnostic decisions with nominal resources.
Automatic mass detection in mammograms using deep convolutional neural networks
Agarwal, Richa
Diaz, Oliver
Lladó, Xavier
Yap, Moi Hoon
Martí, Robert
Journal of Medical Imaging2019Journal Article, cited 0 times
Website
CBIS-DDSM
BREAST
Mammography
Computer Aided Detection (CADe)
machine learning
With recent advances in the field of deep learning, the use of convolutional neural networks (CNNs) in medical imaging has become very encouraging. The aim of our paper is to propose a patch-based CNN method for automated mass detection in full-field digital mammograms (FFDM). In addition to evaluating CNNs pretrained with the ImageNet dataset, we investigate the use of transfer learning for a particular domain adaptation. First, the CNN is trained using a large public database of digitized mammograms (CBIS-DDSM dataset), and then the model is transferred and tested onto the smaller database of digital mammograms (INbreast dataset). We evaluate three widely used CNNs (VGG16, ResNet50, InceptionV3) and show that the InceptionV3 obtains the best performance for classifying the mass and nonmass breast region for CBIS-DDSM. We further show the benefit of domain adaptation between the CBIS-DDSM (digitized) and INbreast (digital) datasets using the InceptionV3 CNN. Mass detection evaluation follows a fivefold cross-validation strategy using free-response operating characteristic curves. Results show that the transfer learning from CBIS-DDSM obtains a substantially higher performance with the best true positive rate (TPR) of 0.98 +/- 0.02 at 1.67 false positives per image (FPI), compared with transfer learning from ImageNet with TPR of 0.91 +/- 0.07 at 2.1 FPI. In addition, the proposed framework improves upon mass detection results described in the literature on the INbreast database, in terms of both TPR and FPI.
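A minimal sketch of the two-stage transfer-learning recipe described above, using the Keras InceptionV3 backbone; the patch size, classification head, and optimizer are assumptions, not the paper's exact setup.

```python
import tensorflow as tf

# ImageNet-pretrained InceptionV3 backbone with a binary mass/non-mass head.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
out = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(base.input, out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])

# Stage 1: train on CBIS-DDSM patches (digitized film mammograms).
# model.fit(cbis_patches, cbis_labels, ...)
# Stage 2: domain adaptation -- fine-tune the same weights on INbreast
# (digital mammograms) before evaluating mass detection.
# model.fit(inbreast_patches, inbreast_labels, ...)
```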
Patient-Wise Versus Nodule-Wise Classification of Annotated Pulmonary Nodules using Pathologically Confirmed Cases
Aggarwal, Preeti
Vig, Renu
Sardana, HK
Journal of Computers2013Journal Article, cited 5 times
Website
LIDC-IDRI
Classification
Computer Aided Detection (CADe)
LUNG
This paper presents a novel framework for combining well-known shape, texture, size, and resolution informatics descriptors of solitary pulmonary nodules (SPNs) detected using CT scans. The proposed methodology evaluates the performance of a classifier in differentiating benign, malignant, and metastatic SPNs on chest CT scans of 246 patients. Both patient-wise and nodule-wise diagnostic reports, available for 80 patients, were used in differentiating the SPNs, and the results were compared. For patient-wise data, a model with an efficiency of 62.55% was generated from the labeled nodules; using a semi-supervised approach, the labels of the remaining unknown nodules were predicted, and a final classification accuracy of 82.32% was achieved with all nodules labeled. For nodule-wise data, the ground-truth database of labeled nodules was expanded from a very small ground truth using a content-based image retrieval (CBIR) method, achieving a precision of 98%. The proposed methodology not only avoids unnecessary biopsies but also efficiently labels unknown nodules using pre-diagnosed cases, which can certainly help physicians in diagnosis.
Boundary Aware Semantic Segmentation using Pyramid-dilated Dense U-Net for Lung Segmentation in Computed Tomography Images
Agnes, S. Akila
2023Journal Article, cited 0 times
LCTSC
Segmentation
lung
Aim: The main objective of this work is to propose an efficient segmentation model for accurate and robust lung segmentation from computed tomography (CT) images, even when the lung contains abnormalities such as juxtapleural nodules, cavities, and consolidation.
Methodology: A novel deep learning-based segmentation model, pyramid-dilated dense U-Net (PDD-U-Net), is proposed to directly segment lung regions from the whole CT image. The model is integrated with pyramid-dilated convolution blocks to capture and preserve multi-resolution spatial features effectively. In addition, shallow and deeper stream features are embedded in the nested U-Net structure at the decoder side to enhance the segmented output. The effect of three loss functions is investigated in this paper, as the medical image analysis method requires precise boundaries. The proposed PDD-U-Net model with shape-aware loss function is tested on the lung CT segmentation challenge (LCTSC) dataset with standard lung CT images and the lung image database consortium-image database resource initiative (LIDC-IDRI) dataset containing both typical and pathological lung CT images.
Results: The performance of the proposed method is evaluated using Intersection over Union, dice coefficient, precision, recall, and average Hausdorff distance metrics. Segmentation results showed that the proposed PDD-U-Net model outperformed other segmentation methods and achieved a 0.983 dice coefficient for the LIDC-IDRI dataset and a 0.994 dice coefficient for the LCTSC dataset.
Conclusions: The proposed PDD-U-Net model with shape-aware loss function is an effective and accurate method for lung segmentation from CT images, even in the presence of abnormalities such as cavities, consolidation, and nodules. The model's integration of pyramid-dilated convolution blocks and nested U-Net structure at the decoder side, along with shape-aware loss function, contributed to its high segmentation accuracy. This method could have significant implications for the computer-aided diagnosis system, allowing for quick and accurate analysis of lung regions.
Efficient multiscale fully convolutional UNet model for segmentation of 3D lung nodule from CT image
Agnes, S. A.
Anitha, J.
J Med Imaging (Bellingham)2022Journal Article, cited 0 times
LIDC-IDRI
Convolutional Neural Network (CNN)
Deep learning
maxout aggregation
multiscale fully convolutional UNet
semantic segmentation
3D segmentation
Purpose: Segmentation of lung nodules in chest CT images is essential for image-driven lung cancer diagnosis and follow-up treatment planning. Manual segmentation of lung nodules is subjective because the approach depends on the knowledge and experience of the specialist. We proposed a multiscale fully convolutional three-dimensional UNet (MF-3D UNet) model for automatic segmentation of lung nodules in CT images. Approach: The proposed model employs two strategies, fusion of multiscale features with Maxout aggregation and trainable downsampling, to improve the performance of nodule segmentation in 3D CT images. The fusion of multiscale (fine and coarse) features with the Maxout function allows the model to retain the most important features while suppressing the low-contribution features. The trainable downsampling process is used instead of fixed pooling-based downsampling. Results: The performance of the proposed MF-3D UNet model is examined by evaluating the model with CT scans obtained from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset. A quantitative and visual comparative analysis of the proposed work with various customized UNet models is also presented. The comparative analysis shows that the proposed model yields reliable segmentation results compared with other methods. The experimental results of MF-3D UNet show encouraging performance in the segmentation of different types of nodules, including juxta-pleural, solitary pulmonary, and non-solid nodules, with an average Dice similarity coefficient of 0.83 +/- 0.05, and it outperforms other CNN-based segmentation models. Conclusions: The proposed model accurately segments the nodules using multiscale feature aggregation and trainable downsampling approaches. Also, 3D operations enable precise segmentation of complex nodules using inter-slice connections.
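The Maxout fusion of fine and coarse feature maps reduces to an element-wise maximum over co-registered scales, as in the PyTorch sketch below; tensor shapes are assumptions for illustration.

```python
import torch

def maxout_fuse(fine, coarse):
    """Element-wise Maxout over co-registered multiscale feature maps:
    per voxel and channel, keep whichever scale responds strongest."""
    return torch.max(torch.stack([fine, coarse], dim=0), dim=0).values

# Dummy 3D feature maps shaped (batch, channels, D, H, W):
f = torch.randn(1, 16, 32, 32, 32)
c = torch.randn(1, 16, 32, 32, 32)
fused = maxout_fuse(f, c)   # same shape as each input
```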
Automatic lung segmentation in low-dose chest CT scans using convolutional deep and wide network (CDWN)
Agnes, S Akila
Anitha, J
Peter, J Dinesh
Neural Computing and Applications2018Journal Article, cited 0 times
Website
LIDC-IDRI
fuzzy C-means clustering (FCM)
Deep learning
Convolutional Neural Network (CNN)
lung Segmentation
Adaptive Multi-Column Deep Neural Networks with Application to Robust Image Denoising
Agostinelli, Forest
Anderson, Michael R
Lee, Honglak
2013Conference Proceedings, cited 118 times
Website
Head-Neck Cetuximab
Algorithm Development
Image denoising
Machine Learning
Deep Learning
Stacked sparse denoising auto-encoders (SSDAs) have recently been shown to be successful at removing noise from corrupted images. However, like most denoising techniques, the SSDA is not robust to variation in noise types beyond what it has seen during training. We present the multi-column stacked sparse denoising autoencoder (MC-SSDA), a novel technique that combines multiple SSDAs by merging their outputs. We eliminate the need to determine the type of noise, let alone its statistics, at test time. We show that good denoising performance can be achieved with a single system on a variety of different noise types, including ones not seen in the training set. Additionally, we experimentally demonstrate the efficacy of MC-SSDA denoising by achieving MNIST digit error rates on denoised images close to those of the uncorrupted images.
Prediction of Overall Survival of Brain Tumor Patients
Automated brain tumor segmentation plays an important role in the diagnosis and prognosis of the patient. In addition, features from the tumorous brain help in predicting patients' overall survival. The main focus of this paper is to segment tumor from BRATS 2018 benchmark dataset and use age, shape and volumetric features to predict overall survival of patients. The random forest classifier achieves overall survival accuracy of 59% on the test dataset and 67% on the dataset with resection status as gross total resection. The proposed approach uses fewer features but achieves better accuracy than state-of-the-art methods.
The paper demonstrates the use of a fully convolutional neural network for glioma segmentation on the BraTS 2019 dataset. A three-layer deep encoder-decoder architecture is used, along with dense connections at the encoder to propagate information from the coarse layers to the deep layers. This architecture is used to train the three tumor sub-components separately. Sub-component training weights are initialized with whole-tumor weights to obtain the localization of the tumor within the brain. Finally, the three segmentation results are merged to obtain the entire tumor segmentation. With focal loss, the Dice similarity on the training dataset is 0.92, 0.90, and 0.79 for the whole tumor, tumor core, and enhancing tumor, respectively. Radiomic features from the segmentation results predict survival. Along with these features, age and statistical features are used to predict the overall survival of patients using random forest regressors. The overall survival prediction method outperformed the other methods for the validation dataset on the leaderboard with 58.6% accuracy. This finding is consistent with the performance on the test set of BraTS 2019 with 57.9% accuracy.
3D Semantic Segmentation of Brain Tumor for Overall Survival Prediction
Glioma, a malignant brain tumor, requires immediate treatment to improve the survival of patients. The heterogeneous nature of Glioma makes the segmentation difficult, especially for sub-regions like necrosis, enhancing tumor, non-enhancing tumor, and edema. Deep neural networks like full convolution neural networks and an ensemble of fully convolution neural networks are successful for Glioma segmentation. The paper demonstrates the use of a 3D fully convolution neural network with a three-layer encoder-decoder approach. The dense connections within the layer help in diversified feature learning. The network takes 3D patches from T1, T2, T1c, and FLAIR modalities as input. The loss function combines dice loss and focal loss functions. The Dice similarity coefficient for training and validation set is 0.88, 0.83, 0.78 and 0.87, 0.75, 0.76 for the whole tumor, tumor core and enhancing tumor, respectively. The network achieves comparable performance with other state-of-the-art ensemble approaches. The random forest regressor trains on the shape, volumetric, and age features extracted from ground truth for overall survival prediction. The regressor achieves an accuracy of 56.8% and 51.7% on the training and validation sets.
Utilizing U-Net architectures with auxiliary information for scatter correction in CBCT across different field-of-view settings
Cone-beam computed tomography (CBCT) has become a vital imaging technique in various medical fields, but scatter artifacts are a major limitation in CBCT scanning. This challenge is exacerbated by the use of large flat-panel 2D detectors. The scatter-to-primary ratio increases significantly with the size of the field of view (FOV) being scanned. Several deep learning methods, particularly U-Net architectures, have shown promising capabilities in estimating the scatter directly from the CBCT projections. However, the influence of varying FOV sizes on these deep learning models remains unexplored. Having a single neural network for the scatter estimation of varying FOV projections can be of significant importance for real clinical applications. This study aims to train and evaluate the performance of a U-Net network on a simulated dataset with varying FOV sizes. We further propose a new method (Aux-Net) that provides auxiliary information, such as FOV size, to the U-Net encoder. We validate our method on 30 different FOV sizes and compare it with the U-Net. Our study demonstrates that providing auxiliary information to the network enhances the generalization capability of the U-Net. Our findings suggest that this novel approach outperforms the baseline U-Net, offering a significant step towards practical application in real clinical settings where CBCT systems are employed to scan a wide range of FOVs.
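One plausible reading of the auxiliary-information idea is to broadcast the FOV size as an extra channel at the encoder input, as sketched below in PyTorch; the paper's exact injection mechanism is not reproduced here, so treat this as an assumption.

```python
import torch
import torch.nn as nn

class FOVConditionedBlock(nn.Module):
    """U-Net-style encoder block that receives a scalar FOV size
    broadcast as an additional input channel (a conditioning scheme
    chosen here for illustration)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch + 1, out_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU())

    def forward(self, x, fov_size):
        b, _, h, w = x.shape
        fov = torch.full((b, 1, h, w), float(fov_size), device=x.device)
        return self.conv(torch.cat([x, fov], dim=1))

block = FOVConditionedBlock(1, 32)
y = block(torch.randn(2, 1, 128, 128), fov_size=0.4)  # normalised FOV size
```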
CbcErDL: Classification of breast cancer from mammograms using enhance image reduction and deep learning framework
Agrawal, Rohit
Singh, Navneet Pratap
Shelke, Nitin Arvind
Tripathi, Kuldeep Narayan
Singh, Ranjeet Kumar
Multimedia Tools and Applications2024Journal Article, cited 0 times
Website
CBIS-DDSM
Breast cancer is a major health concern for women worldwide, and early detection is vital to improve treatment outcomes. While existing techniques in mammogram classification have demonstrated promising results, their limitations become apparent when applied to larger datasets. The decline in performance with increased dataset size highlights the need for further research and advancements in the field to enhance the scalability and generalizability of these techniques. In this study, we propose a framework to classify breast cancer from mammograms using techniques such as mammogram enhancement, discrete cosine transform (DCT) dimensionality reduction, and a deep convolutional neural network (DCNN). The first step enhances the mammogram to improve the visibility of key features and reduce noise; for this, we use two-stage Contrast Limited Adaptive Histogram Equalization (CLAHE). DCT is then applied to the enhanced mammograms to reduce redundant data, providing effective reduction while preserving important diagnostic information. In this way, we reduce the computational complexity and improve the performance of the subsequent classification algorithms. Finally, a DCNN is trained on the reduced DCT coefficients to learn discriminative features and classify the mammograms. The DCNN architecture has been optimized with various techniques to improve its performance, including regularization and hyperparameter tuning. We perform experiments on the DDSM dataset, a large dataset containing approximately 55,000 mammogram images, and demonstrate the effectiveness of the proposed method. We assess the proposed model's performance by computing the precision, recall, accuracy, F1-score, and area under the receiver operating characteristic curve (AUC). We achieve precision and recall values of 0.929 and 0.963, respectively. The classification accuracy of the proposed model is 0.963. Moreover, the F1-score and AUC values are 0.962 and 0.987, respectively. These results are better than those of standard techniques and techniques from the literature. The proposed approach has the potential to assist radiologists in accurately diagnosing breast cancer, thereby facilitating early detection and timely intervention.
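A minimal sketch of the enhancement-plus-reduction front end (two-stage CLAHE followed by keeping low-frequency DCT coefficients); the clip limits, tile sizes, and number of retained coefficients are illustrative, not the paper's tuned values.

```python
import cv2
import numpy as np
from scipy.fft import dctn

def preprocess(mammogram, keep=64):
    """Two-stage CLAHE enhancement, then a DCT low-frequency crop that
    discards redundant high-frequency content before classification."""
    clahe1 = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    clahe2 = cv2.createCLAHE(clipLimit=1.5, tileGridSize=(4, 4))
    enhanced = clahe2.apply(clahe1.apply(mammogram))
    coeffs = dctn(enhanced.astype(np.float32), norm="ortho")
    return coeffs[:keep, :keep]          # top-left block = low frequencies

features = preprocess(np.random.randint(0, 256, (512, 512), dtype=np.uint8))
```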
Convolutional Neural Network featuring VGG-16 Model for Glioma Classification
Agus, Minarno Eko
Bagas, Sasongko Yoni
Yuda, Munarko
Hanung, Nugroho Adi
Ibrahim, Zaidah
JOIV : International Journal on Informatics Visualization2022Journal Article, cited 0 times
Website
REMBRANDT
Magnetic Resonance Imaging (MRI)
VGG-16 Convolutional Neural Network
Machine Learning
BRAIN
Computer Aided Detection (CADe)
Magnetic Resonance Imaging (MRI) is a body sensing technique that can produce detailed images of the condition of organs and tissues. Specifically related to brain tumors, the resulting images can be analyzed using image detection techniques so that tumor stages can be classified automatically. Detection of brain tumors requires a high level of accuracy because it is related to the effectiveness of medical actions and patient safety. So far, the Convolutional Neural Network (CNN), or its combination with GA, has given good results. For this reason, in this study we used a similar method but with a variant of the VGG-16 architecture. The VGG-16 variant adds 16 layers by modifying the dropout layer (using softmax activation) to reduce overfitting and avoid using a large number of hyper-parameters. We also experimented with augmentation techniques to anticipate data limitations. Experiments used data from The Cancer Imaging Archive (TCIA) Repository of Molecular Brain Neoplasia Data (REMBRANDT), which contains 520 MRI images of 130 patients with different ailments, grades, races, and ages. The tumor type was glioma, and the images were divided into grades II, III, and IV, with 226, 101, and 193 images, respectively. The data were split 68%/32% for training and testing. We found that VGG-16 was more effective for brain tumor image classification, with an accuracy of up to 100%.
AATSN: Anatomy Aware Tumor Segmentation Network for PET-CT volumes and images using a lightweight fusion-attention mechanism
Ahmad, I.
Xia, Y.
Cui, H.
Islam, Z. U.
Comput Biol Med2023Journal Article, cited 1 times
Website
Fluorodeoxyglucose Positron Emission Tomography (FDG-PET) provides metabolic information, while Computed Tomography (CT) provides the anatomical context of the tumors. Combined PET-CT segmentation helps in computer-assisted tumor diagnosis, staging, and treatment planning. Current state-of-the-art models mainly rely on early or late fusion techniques. These methods, however, rarely learn PET-CT complementary features and cannot efficiently co-relate anatomical and metabolic features. These drawbacks can be removed by intermediate fusion; however, it produces inaccurate segmentations in the case of heterogeneous textures in the modalities. Furthermore, it requires massive computation. In this work, we propose AATSN (Anatomy Aware Tumor Segmentation Network), which extracts anatomical CT features and then intermediately fuses them with PET features through a fusion-attention mechanism. Our anatomy-aware fusion-attention mechanism fuses the selectively useful CT and PET features instead of the full feature set. This not only improves network performance but also requires fewer resources. Furthermore, our model is scalable to 2D images and 3D volumes. The proposed model is rigorously trained, tested, evaluated, and compared to the state of the art through several ablation studies on the largest available datasets. We achieved a 0.8104 dice score and a 2.11 median HD95 score in a 3D setup, and a 0.6756 dice score in a 2D setup. We demonstrate that AATSN achieves a significant performance gain over state-of-the-art methods while remaining lightweight. The implications of AATSN include improved tumor delineation for diagnosis, analysis, and radiotherapy treatment.
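A hedged sketch of intermediate fusion-attention: PET features act as queries against CT (anatomical) features, so only the anatomy relevant to the metabolic signal is mixed in. The token layout, dimensions, and single-head choice are assumptions, not AATSN's published design.

```python
import torch
import torch.nn as nn

class FusionAttention(nn.Module):
    """Cross-attention from PET queries to CT keys/values, with a
    residual path that preserves the original metabolic features."""
    def __init__(self, dim=64):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)

    def forward(self, pet_tokens, ct_tokens):
        fused, _ = self.attn(query=pet_tokens, key=ct_tokens, value=ct_tokens)
        return fused + pet_tokens

pet = torch.randn(2, 100, 64)   # (batch, flattened patches/voxels, dim)
ct = torch.randn(2, 100, 64)
out = FusionAttention()(pet, ct)
```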
Assessment of the global noise algorithm for automatic noise measurement in head CT examinations
Ahmad, M.
Tan, D.
Marisetty, S.
Med Phys2021Journal Article, cited 0 times
Website
LDCT-and-Projection-data
Computed Tomography (CT)
Image processing
quality control
PURPOSE: The global noise (GN) algorithm has been previously introduced as a method for automatic noise measurement in clinical CT images. The accuracy of the GN algorithm has been assessed in abdomen CT examinations, but not in any other body part until now. This work assesses the GN algorithm accuracy in automatic noise measurement in head CT examinations. METHODS: A publicly available image dataset of 99 head CT examinations was used to evaluate the accuracy of the GN algorithm in comparison to reference noise values. Reference noise values were acquired using a manual noise measurement procedure. The procedure used a consistent instruction protocol and multiple observers to mitigate the influence of intra- and interobserver variation, resulting in precise reference values. Optimal GN algorithm parameter values were determined. The GN algorithm accuracy and the corresponding statistical confidence interval were determined. The GN measurements were compared across the six different scan protocols used in this dataset. The correlation of GN to patient head size was also assessed using a linear regression model, and the CT scanner's X-ray beam quality was inferred from the model fit parameters. RESULTS: Across all head CT examinations in the dataset, the range of reference noise was 2.9-10.2 HU. A precision of +/-0.33 HU was achieved in the reference noise measurements. After optimization, the GN algorithm had an RMS error of 0.34 HU, corresponding to a percent RMS error of 6.6%. The GN algorithm had a bias of +3.9%. Statistically significant differences in GN were detected in 11 out of the 15 different pairs of scan protocols. The GN measurements were correlated with head size, with a statistically significant regression slope parameter (p < 10^-7). The CT scanner X-ray beam quality estimated from the slope parameter was 3.5 cm water HVL (95% CI: 2.8-4.8 cm). CONCLUSION: The GN algorithm was validated for application in head CT examinations. The GN algorithm was accurate in comparison to reference manual measurement, with errors comparable to interobserver variation in manual measurement. The GN algorithm can detect noise differences in examinations performed on different scanner models or using different scan protocols. The trend in GN across patients of different head sizes closely follows that predicted by a physical model of X-ray attenuation.
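For orientation, one common formulation of the GN algorithm, a local standard-deviation map restricted to soft tissue and summarized by its histogram mode, can be sketched as follows; the HU window, kernel size, and bin count stand in for the parameters this paper optimizes.

```python
import numpy as np
from scipy.ndimage import generic_filter

def global_noise(ct_slice, tissue_hu=(0, 100), kernel=5, bins=100):
    """Global noise estimate: mode of the local standard-deviation map
    computed over soft-tissue voxels of a CT slice (values in HU)."""
    std_map = generic_filter(ct_slice.astype(float), np.std, size=kernel)
    mask = (ct_slice > tissue_hu[0]) & (ct_slice < tissue_hu[1])
    hist, edges = np.histogram(std_map[mask], bins=bins)
    return edges[np.argmax(hist)]        # mode of the noise distribution

gn = global_noise(np.random.normal(40, 5, (256, 256)))
```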
RD2A: densely connected residual networks using ASPP for brain tumor segmentation
Ahmad, Parvez
Jin, Hai
Qamar, Saqib
Zheng, Ran
Saeed, Adnan
Multimedia Tools and Applications2021Journal Article, cited 2 times
Website
BraTS-TCGA-GBM
BraTS-TCGA-LGG
BRAIN
Automatic segmentation
Machine Learning
The variations among shapes, sizes, and locations of tumors are obstacles to accurate automatic segmentation. U-Net is a simplified approach for automatic segmentation. Generally, convolutional or dilated convolutional layers are used for brain tumor segmentation. However, existing segmentation methods that use large dilation rates degrade the final accuracy. Moreover, parameter tuning and the imbalance ratio between the different tumor classes are further issues for segmentation. The proposed model, known as the Residual-Dilated Dense Atrous-Spatial Pyramid Pooling (RD2A) 3D U-Net, is designed to solve these issues. The RD2A combines residual connections, dilation, and dense ASPP to preserve more contextual information for small tumors at each level of the encoder path. The multi-scale contextual information minimizes the ambiguities among the tissues of the white matter (WM) and gray matter (GM) of the infant's brain MRI. The RD2A is validated on the BRATS 2018, BRATS 2019, and iSeg-2019 datasets using several evaluation metrics. On the BRATS 2018 validation dataset, the proposed model achieves average dice scores of 90.88, 84.46, and 78.18 for the whole tumor, the tumor core, and the enhancing tumor, respectively. We also evaluated on the iSeg-2019 testing set, where the proposed approach achieves average dice scores of 79.804, 77.925, and 80.569 for the cerebrospinal fluid (CSF), the gray matter (GM), and the white matter (WM), respectively. Furthermore, the presented work also obtains mean dice scores of 90.35, 82.34, and 71.93 for the whole tumor, the tumor core, and the enhancing tumor, respectively, on the BRATS 2019 validation dataset. Experimentally, the proposed approach is found to be ideal for exploiting the full contextual information of 3D brain MRI datasets.
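A minimal dense-ASPP building block in PyTorch conveys the core mechanism, parallel atrous convolutions at several rates whose outputs are densely concatenated with the input; the rates and channel counts are assumptions, not the RD2A configuration.

```python
import torch
import torch.nn as nn

class DenseASPP3D(nn.Module):
    """Parallel 3D atrous convolutions at multiple dilation rates,
    densely concatenated with the input to preserve multi-scale context."""
    def __init__(self, in_ch, branch_ch=16, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv3d(in_ch, branch_ch, 3, padding=r, dilation=r)
            for r in rates)

    def forward(self, x):
        feats = [x] + [torch.relu(b(x)) for b in self.branches]
        return torch.cat(feats, dim=1)   # in_ch + len(rates) * branch_ch

y = DenseASPP3D(8)(torch.randn(1, 8, 16, 16, 16))  # -> 56 channels
```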
The accurate automatic segmentation of brain tumors enhances the probability of survival. The Convolutional Neural Network (CNN) is a popular automatic approach for image evaluation. CNNs provide excellent results against classical machine learning algorithms. In this paper, we present a unique approach to incorporate contextual information from multiple brain MRI labels. To address the problems of brain tumor segmentation, we implement combined strategies of residual-dense connections and multiple rates of atrous convolutional layers on the popular 3D U-Net architecture. To train and validate our proposed algorithm, we used the BRATS 2019 datasets. The results are promising on the different evaluation metrics.
MS UNet: Multi-scale 3D UNet for Brain Tumor Segmentation
Ahmad, Parvez
Qamar, Saqib
Shen, Linlin
Rizvi, Syed Qasim Afser
Ali, Aamir
Chetty, Girija
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2021
BRAIN
Segmentation
Algorithm Development
Deep convolutional neural network (DCNN)
A deep convolutional neural network (CNN) achieves remarkable performance for medical image analysis. UNet underpins the performance of 3D CNN architectures for medical imaging tasks, including brain tumor segmentation. The skip connection in the UNet architecture concatenates multi-scale features from image data. The multi-scale features play an essential role in brain tumor segmentation. Researchers have presented numerous multi-scale strategies that perform well on the segmentation task. This paper proposes a multi-scale strategy that can further improve the final segmentation accuracy. We propose three multi-scale strategies in MS UNet. Firstly, we utilize densely connected blocks in the encoder and decoder for multi-scale features. Next, the proposed residual-inception blocks extract local and global information by merging features of different kernel sizes. Lastly, we utilize the idea of deep supervision at multiple depths of the decoder. We validate MS UNet on the BraTS 2021 validation dataset. The dice (DSC) scores of the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) are 91.938%, 86.268%, and 82.409%, respectively.
Context Aware 3D UNet for Brain Tumor Segmentation
Ahmad, Parvez
Qamar, Saqib
Shen, Linlin
Saeed, Adnan
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Convolutional Neural Network (CNN)
A deep convolutional neural network (CNN) achieves remarkable performance for medical image analysis. UNet underpins the performance of 3D CNN architectures for medical imaging tasks, including brain tumor segmentation. The skip connection in the UNet architecture concatenates features from both encoder and decoder paths to extract multi-contextual information from image data. The multi-scale features play an essential role in brain tumor segmentation. However, the limited use of features can degrade the performance of the UNet approach for segmentation. In this paper, we propose a modified UNet architecture for brain tumor segmentation. In the proposed architecture, we use densely connected blocks in both encoder and decoder paths to extract multi-contextual information through feature reusability. In addition, residual-inception blocks (RIB) are used to extract local and global information by merging features of different kernel sizes. We validate the proposed architecture on the multi-modal brain tumor segmentation challenge (BRATS) 2020 testing dataset. The dice (DSC) scores of the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) are 89.12%, 84.74%, and 79.12%, respectively.
Tumor Lesion Segmentation from 3D PET Using a Machine Learning Driven Active Surface
Discovery of a Generalization Gap of Convolutional Neural Networks on COVID-19 X-Rays Classification
Ahmed, Kaoutar Ben
Goldgof, Gregory M.
Paul, Rahul
Goldgof, Dmitry B.
Hall, Lawrence O.
IEEE Access2021Journal Article, cited 0 times
COVID-19-AR
A number of recent papers have shown experimental evidence that suggests it is possible to build highly accurate deep neural network models to detect COVID-19 from chest X-ray images. In this paper, we show that good generalization to unseen sources has not been achieved. Experiments with richer data sets than have previously been used show models have high accuracy on seen sources, but poor accuracy on unseen sources. The reason for the disparity is that the convolutional neural network model, which learns features, can focus on differences in X-ray machines or in positioning within the machines, for example. Any feature that a person would clearly rule out is called a confounding feature. Some of the models were trained on COVID-19 image data taken from publications, which may be different from raw images. Some data sets were of pediatric cases with pneumonia, whereas COVID-19 chest X-rays are almost exclusively from adults, so lung size becomes a spurious feature that can be exploited. In this work, we have eliminated many confounding features by working with as close to raw data as possible. Still, deep learned models may leverage source-specific confounders to differentiate COVID-19 from pneumonia, preventing generalization to new data sources (i.e., external sites). Our models have achieved an AUC of 1.00 on seen data sources but in the worst case only scored an AUC of 0.38 on unseen ones. This indicates that such models need further assessment/development before they can be broadly clinically deployed. An example of fine-tuning to improve performance at a new site is given.
Enhancing Brain Tumor Classification: A Comparative Study of Single-Model and Multi-Model Fusion Approaches
Brain tumors are among the leading causes of cancer-related death worldwide. Deep learning has been successful at tasks such as classification, but it is limited by reliance on a single imaging modality: a single modality may yield high benchmark performance yet is unreliable as a basis for accurate diagnosis and treatment. This study aims to improve brain tumor classification using deep learning and fusion techniques across multiple modalities. The study employs three fusion approaches: image-level fusion, feature-level fusion, and wavelet-based fusion. Extensive experiments were conducted on the BRATS2020 dataset. Initially, we train and evaluate 21 baseline models, encompassing 20 CNN-based architectures alongside a vision transformer model. We then identify the highest-performing models within each class for fusion. Furthermore, inspired by the baseline models, we introduce each modality as input to its respective best-performing model and fuse the outputs for multi-modality model-level fusion. Finally, we employ wavelet-based fusion to optimize information integration, implementing the Discrete Wavelet Transform on our dataset. Model-level fusion outperformed image fusion across all evaluation metrics, by 1% accuracy, 4.7% precision, 6.6% recall, and 0.7% F1-score.
Artificial neural network-assisted prediction of radiobiological indices in head and neck cancer
Ahmed, S. B. S.
Naeem, S.
Khan, A. M. H.
Qureshi, B. M.
Hussain, A.
Aydogan, B.
Muhammad, W.
Front Artif Intell2024Journal Article, cited 0 times
Website
Head-Neck-CT-Atlas
HNSCC-3DCT-RT
Artificial Neural Network (ANN)
head and neck cancer
normal tissue complication probability
radiation therapy
tumor control probability
BACKGROUND AND PURPOSE: We proposed an artificial neural network model to predict radiobiological parameters for head and neck squamous cell carcinoma patients treated with radiation therapy. The model uses the tumor specification, demographics, and radiation dose distribution to predict the tumor control probability and the normal tissue complication probability. These indices are crucial for the assessment and clinical management of cancer patients during treatment planning. METHODS: Two publicly available datasets of 31 and 215 head and neck squamous cell carcinoma patients treated with conformal radiation therapy were selected. The demographics, tumor specifications, and radiation therapy treatment parameters were extracted from the datasets and used as inputs for training the perceptron. Radiobiological indices were calculated by open-source software using dose-volume histograms from radiation therapy treatment plans. Those indices were used as outputs in the training of a single-layer neural network. The distribution of data used for training, validation, and testing purposes was 70, 15, and 15%, respectively. RESULTS: The best performance of the neural network was noted at epoch 32, with a mean squared error of 0.0465. The accuracy of the prediction of radiobiological indices by the artificial neural network in the training, validation, and test phases was determined to be 0.89, 0.87, and 0.82, respectively. We also found that the percentage volume of parotid inside the planning target volume is the significant parameter for the prediction of normal tissue complication probability. CONCLUSION: We believe that the model has significant potential to predict radiobiological indices and help clinicians in treatment plan evaluation and treatment management of head and neck squamous cell carcinoma patients.
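A minimal sklearn sketch of this workflow; the placeholder arrays stand in for the demographic, tumor-specification, and dose-distribution features, the 70/15/15 split matches the paper, and the network size is an assumption.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(215, 12))   # placeholder clinical + dosimetric features
y = rng.uniform(size=215)        # placeholder TCP/NTCP index from DVHs

# 70% train, 15% validation, 15% test, as in the paper.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, train_size=0.70)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.50)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000))
model.fit(X_train, y_train)
print("validation R^2:", model.score(X_val, y_val))
```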
Increased robustness in reference region model analysis of DCE MRI using two‐step constrained approaches
Ahmed, Zaki
Levesque, Ives R
Magnetic Resonance in Medicine2016Journal Article, cited 1 times
Website
DCE-MRI
Algorithm development
QIN Breast DCE-MRI
An extended reference region model for DCE‐MRI that accounts for plasma volume
Ahmed, Zaki
Levesque, Ives R
NMR in Biomedicine2018Journal Article, cited 0 times
Website
DCE-MRI
TCGA-GBM
reference region model (RRM)
extended reference region model (ERRM)
constrained ERRM (CERRM)
Pharmacokinetic modeling of dynamic contrast-enhanced MRI using a reference region and input function tail
Ahmed, Z.
Levesque, I. R.
Magn Reson Med2020Journal Article, cited 0 times
Website
TCGA-GBM
Dynamic Contrast-Enhanced (DCE)-MRI
Dynamic contrast-enhanced magnetic resonance imaging
Extended Tofts model (ETM)
PURPOSE: Quantitative analysis of dynamic contrast-enhanced MRI (DCE-MRI) requires an arterial input function (AIF), which is difficult to measure. We propose the reference region and input function tail (RRIFT) approach, which uses a reference tissue and the washout portion of the AIF. METHODS: RRIFT was evaluated in simulations with 100 parameter combinations at various temporal resolutions (5-30 s) and noise levels (sigma = 0.01-0.05 mM). RRIFT was compared against the extended Tofts model (ETM) in 8 studies from patients with glioblastoma multiforme. Two versions of RRIFT were evaluated: one using measured patient-specific AIF tails, and another assuming a literature-based AIF tail. RESULTS: RRIFT estimated the transfer constant Ktrans and interstitial volume ve with median errors within 20% across all simulations. RRIFT was more accurate and precise than the ETM at temporal resolutions slower than 10 s. The percentage error of Ktrans had a median and interquartile range of -9 +/- 45% with the ETM and -2 +/- 17% with RRIFT at a temporal resolution of 30 s under noiseless conditions. RRIFT was in excellent agreement with the ETM in vivo, with concordance correlation coefficients (CCC) of 0.95 for Ktrans, 0.96 for ve, and 0.73 for the plasma volume vp using a measured AIF tail. With the literature-based AIF tail, the CCC was 0.89 for Ktrans, 0.93 for ve, and 0.78 for vp. CONCLUSIONS: Quantitative DCE-MRI analysis using the input function tail and a reference tissue yields absolute kinetic parameters with the RRIFT method. This approach was viable in simulation and in vivo for temporal resolutions as low as 30 s.
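For reference, the extended Tofts model that RRIFT is benchmarked against relates the tissue concentration C_t(t) to the AIF C_p(t) (standard notation, not taken verbatim from the paper):

C_t(t) = Ktrans \int_0^t C_p(\tau) \, e^{-k_{ep}(t-\tau)} \, d\tau + vp \, C_p(t), with k_{ep} = Ktrans / ve.

RRIFT sidesteps the need for a fully measured C_p by combining a reference-tissue curve with only the washout (tail) portion of the AIF.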
ResMLP_GGR: Residual Multilayer Perceptrons- Based Genotype-Guided Recurrence Prediction of Non-small Cell Lung Cancer
Ai, Yang
Li, Yinhao
Chen, Yen-Wei
Aonpong, Panyanat
Han, Xianhua
Journal of Image and Graphics2023Journal Article, cited 1 times
Website
NSCLC Radiogenomics
Deep Learning
Non-Small Cell Lung Cancer (NSCLC)
Predictive model
Radiogenomics
residual neural network
Algorithm Development
Imaging features
Non-small Cell Lung Cancer (NSCLC) is one of the malignant tumors with the highest morbidity and mortality. The postoperative recurrence rate in patients with NSCLC is high, which directly endangers the lives of patients. In recent years, many studies have used Computed Tomography (CT) images to predict NSCLC recurrence. Although this approach is inexpensive, it has low prediction accuracy. Gene expression data can achieve high accuracy. However, gene acquisition is expensive and invasive, and cannot meet the recurrence prediction requirements of all patients. In this study, a low-cost, high-accuracy residual multilayer perceptrons-based genotype-guided recurrence (ResMLP_GGR) prediction method is proposed that uses a gene estimation model to guide recurrence prediction. First, a gene estimation model is proposed to construct a mapping function of mixed features (handcrafted and deep features) and gene data to estimate the genetic information of tumor heterogeneity. Then, from gene estimation data obtained using a regression model, representations related to recurrence are learned to realize NSCLC recurrence prediction. In the testing phase, NSCLC recurrence prediction can be achieved with only CT images. The experimental results show that the proposed method has few parameters, strong generalization ability, and is suitable for small datasets. Compared with state-of-the-art methods, the proposed method significantly improves recurrence prediction accuracy by 3.39% with only 1% of parameters.
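The genotype-guided two-stage idea can be sketched with stand-in sklearn models: image features are first regressed onto gene data, and recurrence is then predicted from the estimated genes, so inference needs CT-derived features only. All data and estimators below are placeholders for the paper's residual MLPs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(1)
img_feats = rng.normal(size=(120, 50))   # handcrafted + deep CT features
genes = rng.normal(size=(120, 20))       # gene expression (training only)
recurred = rng.integers(0, 2, size=120)  # recurrence labels

# Stage 1: map image features to gene data (the "gene estimation" model).
gene_model = Ridge().fit(img_feats, genes)
# Stage 2: predict recurrence from the *estimated* genes.
clf = LogisticRegression(max_iter=1000).fit(
    gene_model.predict(img_feats), recurred)

# Test time: CT-only inference, no gene assay required.
pred = clf.predict(gene_model.predict(rng.normal(size=(5, 50))))
```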
SAMA: A Self-and-Mutual Attention Network for Accurate Recurrence Prediction of Non-Small Cell Lung Cancer Using Genetic and CT Data
Ai, Y.
Liu, J.
Li, Y.
Wang, F.
Du, X.
Jain, R. K.
Lin, L.
Chen, Y. W.
IEEE J Biomed Health Inform2024Journal Article, cited 2 times
Website
TCGA-LUAD
TCGA-LUSC
Accurate preoperative recurrence prediction for non-small cell lung cancer (NSCLC) is a challenging issue in the medical field. Existing studies primarily conduct image and molecular analyses independently or directly fuse multimodal information through radiomics and genomics, which fail to fully exploit and effectively utilize the highly heterogeneous cross-modal information at different levels and model the complex relationships between modalities, resulting in poor fusion performance and becoming the bottleneck of precise recurrence prediction. To address these limitations, we propose a novel unified framework, the Self-and-Mutual Attention (SAMA) Network, designed to efficiently fuse and utilize macroscopic CT images and microscopic gene data for precise NSCLC recurrence prediction, integrating handcrafted features, deep features, and gene features. Specifically, we design a Self-and-Mutual Attention Module that performs three-stage fusion: the self-enhancement stage enhances modality-specific features; the gene-guided and CT-guided cross-modality fusion stages perform bidirectional cross-guidance on the self-enhanced features, complementing and refining each modality, enhancing heterogeneous feature expression; and the optimized feature aggregation stage ensures the refined interactive features for precise prediction. Extensive experiments on both publicly available datasets from The Cancer Imaging Archive (TCIA) and The Cancer Genome Atlas (TCGA) demonstrate that our method achieves state-of-the-art performance and exhibits broad applicability to various cancers.
Brain Tumor Segmentation and Classification Using ResNet50 and U-Net with TCGA-LGG and TCIA MRI Scans
Glioblastoma is a type of malignant tumor that varies significantly in size, shape, and location. Studies of this tumor type, including prediction of patient survival, are beneficial for the treatment of patients. However, the supporting data for the survival prediction model are minimal, so methods that handle limited data well are needed. In this study, we propose an architecture for predicting patient survival using MobileNet combined with a linear survival prediction model (SPM). Several variations of MobileNet are tested to obtain the best results. Variations tested include modifications of MobileNet V1 with frozen or unfrozen layers, and modifications of MobileNet V2 with frozen or unfrozen layers connected to the SPM. The dataset used for the trial came from BraTS 2020. A modification based on the MobileNet V2 architecture with frozen layers was selected from the test results. Testing this proposed architecture with 95 training samples and 23 validation samples resulted in an MSE loss of 78374.17. Online testing with the 29-case validation dataset resulted in an MSE loss of 149764.866 with an accuracy of 0.345. Testing with the testing dataset resulted in an improved accuracy of 0.402. These results are promising for further architectural development.
Unet3D with Multiple Atrous Convolutions Attention Block for Brain Tumor Segmentation
Brain tumor segmentation by automated computation is still an exciting challenge. The UNet architecture has been widely used for medical image segmentation, with several modifications. Attention blocks have been used to modify skip connections in the UNet architecture, resulting in improved performance. In this study, we propose a development of UNet for brain tumor image segmentation that modifies its contraction and expansion blocks by adding attention, multiple atrous convolutions, and a residual pathway, a combination we call the Multiple Atrous convolutions Attention Block (MAAB). The expansion path also forms pyramid features, taken from each level, to produce the final segmentation output. The architecture is trained using patches and a batch size of 2 to save GPU memory. Online validation of the segmentation results on the BraTS 2021 validation dataset yielded dice scores of 78.02, 80.73, and 89.07 for ET, TC, and WT, respectively. These results indicate that the proposed architecture is promising for further development.
Automated and Manual Quantification of Tumour Cellularity in Digital Slides for Tumour Burden Assessment
The residual cancer burden index is an important quantitative measure used for assessing treatment response following neoadjuvant therapy for breast cancer. It has been shown to be predictive of overall survival and is composed of two key metrics: qualitative assessment of lymph nodes and the percentage of invasive or in situ tumour cellularity (TC) in the tumour bed (TB). Currently, TC is assessed through eyeballing of routine histopathology slides, estimating the proportion of tumour cells within the TB. With advances in the production of digitized slides and the increasing availability of slide scanners in pathology laboratories, there is potential to measure TC using automated algorithms with greater precision and accuracy. We describe two methods for automated TC scoring: 1) a traditional approach to image analysis development, whereby we mimic the pathologists' workflow, and 2) a recent development in artificial intelligence in which features are learned automatically in deep neural networks using image data alone. We show strong agreement between automated and manual analysis of digital slides. Agreement between our trained deep neural networks and experts in this study (0.82) approaches the inter-rater agreement between pathologists (0.89). We also reveal properties that are captured when we apply deep neural networks to whole slide images, and discuss the potential of using such visualisations to improve upon TC assessment in the future.
Map-Reduce based tipping point scheduler for parallel image processing
Akhtar, Mohammad Nishat
Saleh, Junita Mohamad
Awais, Habib
Bakar, Elmi Abu
Expert Systems with Applications2020Journal Article, cited 0 times
Website
LIDC-IDRI
Algorithm Development
Segmentation
Nowadays, Big Data image processing is very much in demand due to its proven success in the fields of business information systems, medical science, and social media. However, the computation involved in Big Data image processing is becoming more complex, which ultimately results in complex resource management and higher task execution times. Researchers have been using a combination of CPU- and GPU-based computing to cut down the execution time; however, when it comes to scaling of compute nodes, the combination of CPU- and GPU-based computing still remains a challenge due to the high communication cost. To tackle this issue, the Map-Reduce framework has come out to be a viable option, as its workflow optimization can be enhanced by changing its underlying job scheduling mechanism. This paper presents a comparative study of job scheduling algorithms that can be deployed over various Big Data based image processing applications and also proposes a tipping point scheduling algorithm to optimize the workflow for job execution on multiple nodes. The proposed scheduling algorithm is evaluated by implementing a parallel image segmentation algorithm to detect lung tumors on image datasets of up to 3 GB. In terms of performance, comprising task execution time and throughput, the proposed tipping point scheduler came out best, followed by the Map-Reduce based Fair scheduler. The proposed tipping point scheduler is 1.14 times better than the Map-Reduce based Fair scheduler and 1.33 times better than the Map-Reduce based FIFO scheduler in terms of task execution time and throughput. In terms of speedup between single-node and multi-node execution, the proposed tipping point scheduler attained a speedup of 4.5x for the multi-node architecture.
Automatic Detection and Segmentation of Colorectal Cancer with Deep Residual Convolutional Neural Network
Akilandeswari, A.
Sungeetha, D.
Joseph, C.
Thaiyalnayaki, K.
Baskaran, K.
Jothi Ramalingam, R.
Al-Lohedan, H.
Al-Dhayan, D. M.
Karnan, M.
Meansbo Hadish, K.
Evid Based Complement Alternat Med2022Journal Article, cited 0 times
Website
CT COLONOGRAPHY
Deep convolutional neural network (DCNN)
Early and automatic detection of colorectal tumors is essential for cancer analysis, and the same is implemented using computer-aided diagnosis (CAD). A computerized tomography (CT) image of the colon is used to identify colorectal carcinoma. Digital imaging and communications in medicine (DICOM) is a standard medical imaging format used to process and analyze images digitally. Accurate detection of tumor cells in the complex digestive tract is necessary for optimal treatment. The proposed work is divided into two phases. The first phase involves segmentation, and the second phase is the extraction of the colon lesions with the observed segmentation parameters. A deep convolutional neural network (DCNN) based residual network approach for colon and polyp segmentation is applied over the 2D CT images. Residual stack blocks with short skip connections are added to the hidden layers, which helps retain spatial information. A ResNet-enabled CNN is employed in the current work to achieve complete boundary segmentation of the colon cancer region. The results obtained through segmentation serve as features for further extraction and classification of benign as well as malignant colon cancer. Performance evaluation metrics indicate that the proposed network model has effectively segmented and classified colorectal tumors, with an average dice score of 91.57%, sensitivity of 98.28%, specificity of 98.68%, and accuracy of 98.82%.
A review of lung cancer screening and the role of computer-aided detection
Al Mohammad, B
Brennan, PC
Mello-Thoms, C
Clinical Radiology2017Journal Article, cited 23 times
Website
LIDC-IDRI
Lung screening
Radiologist performance in the detection of lung cancer using CT
Al Mohammad, B
Hillis, SL
Reed, W
Alakhras, M
Brennan, PC
Clinical Radiology2019Journal Article, cited 2 times
Website
LIDC-IDRI
Lung Cancer
CT
Automated multi-class high-grade glioma segmentation based on T1Gd and FLAIR images
Al-Bashir, Areen K.
Al Obeid, Abeer N.
Al-Abed, Mohammad A.
Athamneh, Imad S.
Banihani, Maysoon A. R.
Al Abdi, Rabah M.
Informatics in Medicine Unlocked2024Journal Article, cited 0 times
Website
BraTS-TCGA-GBM
BraTS-TCGA-LGG
BraTS 2019
Glioma is the most prevalent primary malignant brain tumor. Segmentation of glioma regions using magnetic resonance imaging (MRI) is essential for treatment planning. However, segmentation of glioma regions is usually based on four MRI modalities: T1, T2, T1Gd, and FLAIR. Acquiring all four modalities increases patients' time inside the scanner and drives up the processing time of the segmentation process. Moreover, not all of these modalities are acquired in some cases, due to limited available time on the MRI scanner or uncooperative patients. Therefore, U-Net-based fully convolutional neural networks were employed for automated segmentation to answer the urgent question: does a smaller number of MRI modalities limit segmentation accuracy? The proposed approach was trained, validated, and tested on 100 high-grade glioma (HGG) cases twice, once with all MRI sequences and once with only FLAIR and T1Gd. The results on the test set showed that the baseline U-Net model gave a mean Dice score of 0.9166 using all MRI sequences and 0.9190 using only FLAIR and T1Gd. To check for possible performance improvement of the U-Net on the FLAIR and T1Gd modalities, an ensemble of pre-trained VGG16, VGG19, and ResNet50 networks as modified U-Net encoders was employed for automated glioma segmentation based on the T1Gd and FLAIR modalities only and compared with the baseline U-Net. The proposed models were trained, validated, and tested on 259 high-grade glioma (HGG) cases. The results showed that the proposed baseline U-Net model and the ensembles with pre-trained VGG16, VGG19, or ResNet50 as modified U-Net encoders achieved mean Dice scores of 0.9395, 0.9360, 0.9359, and 0.9356, respectively. The results were also compared to other studies based on four MRI modalities. The work indicates that FLAIR and T1Gd are the most prominent contributors to the segmentation process. The proposed baseline U-Net is robust enough for segmenting HGG sub-tumoral structures and competitive with other state-of-the-art works.
Breast Cancer Diagnostic System Based on MR images Using KPCA-Wavelet Transform and Support Vector Machine
Glioblastomas brain tumour segmentation based on convolutional neural networks
Al-Hadidi, Moh'd Rasoul
AlSaaidah, Bayan
Al-Gawagzeh, Mohammed
International Journal of Electrical and Computer Engineering (IJECE)2020Journal Article, cited 0 times
REMBRANDT
Machine Learning
Brain tumour segmentation can improve diagnostic efficiency, raise the prediction rate, and aid treatment planning. This will help doctors and experts in their work. While many types of brain tumour may be classified easily, gliomas are challenging to segment because of the diffusion between the tumour and the surrounding edema. Another important challenge with this type of brain tumour is that the tumour may grow anywhere in the brain, with different shapes and sizes. Brain cancer is one of the most widespread diseases in the world, which encourages researchers to find high-throughput systems for tumour detection and classification. Several approaches have been proposed to design automatic detection and classification systems. This paper presents an integrated framework to segment the glioma brain tumour automatically, using pixel clustering for the MRI image foreground and background, and to classify its type based on a deep learning mechanism, the convolutional neural network. In this work, a novel segmentation and classification system is proposed to detect tumour cells and classify whether the brain image is healthy or not. After collecting data for healthy and non-healthy brain images, satisfactory results were found and registered using computer vision approaches. This approach can be used as part of a bigger diagnosis system for brain tumour detection and manipulation.
Automated liver tissues delineation techniques: A systematic survey on machine learning current trends and future orientations
Al-Kababji, Ayman
Bensaali, Faycal
Dakua, Sarada Prasad
Himeur, Yassine
2023Journal Article, cited 0 times
CT-ORG
Pancreas-CT
Machine Learning
Machine learning and computer vision techniques have grown rapidly in recent years due to their automation, suitability, and ability to generate astounding results. Hence, in this paper we survey the key studies published between 2014 and 2022, showcasing the different machine learning algorithms researchers have used to segment the liver, hepatic tumors, and hepatic vasculature structures. We divide the surveyed studies based on the tissue of interest (hepatic parenchyma, hepatic tumors, or hepatic vessels), highlighting the studies that tackle more than one task simultaneously. Additionally, the machine learning algorithms are classified as either supervised or unsupervised, and they are further partitioned if the amount of work that falls under a certain scheme is significant. Moreover, different datasets and challenges found in the literature and on websites containing masks of the aforementioned tissues are thoroughly discussed, highlighting the organizers' original contributions and those of other researchers. Also, the metrics used extensively in the literature are mentioned in our review, stressing their relevance to the task at hand. Finally, critical challenges and future directions are emphasized for innovative researchers to tackle, exposing gaps that need addressing, such as the scarcity of studies on the vessel segmentation challenge and why this absence needs to be dealt with sooner rather than later.
A Novel Approach to Improving Brain Image Classification Using Mutual Information-Accelerated Singular Value Decomposition
Al-Saffar, Zahraa A
Yildirim, Tülay
IEEE Access2020Journal Article, cited 0 times
Website
REMBRANDT
machine learning
A hybrid approach based on multiple Eigenvalues selection (MES) for the automated grading of a brain tumor using MRI
Al-Saffar, Z. A.
Yildirim, T.
Comput Methods Programs Biomed2021Journal Article, cited 5 times
Website
REMBRANDT
Algorithms
Radiomic features
BRAIN
Magnetic Resonance Imaging (MRI)
Artificial neural network (ANN)
Classification
Segmentation
Clustering
Image processing
Machine learning
Mutual information (MI)
Singular value decomposition (SVD)
Support vector machine (SVM)
BACKGROUND AND OBJECTIVE: The manual segmentation, identification, and classification of brain tumors using magnetic resonance (MR) images are essential for making a correct diagnosis. It is, however, an exhausting and time-consuming task performed by clinical experts, and the accuracy of the results is subject to their point of view. Computer-aided technology has therefore been developed to computerize these procedures. METHODS: In order to improve the outcomes and decrease the complications involved in the process of analysing medical images, this study has investigated several methods. These include: a Local Difference in Intensity - Means (LDI-Means) based brain tumor segmentation, Mutual Information (MI) based feature selection, Singular Value Decomposition (SVD) based dimensionality reduction, and both Support Vector Machine (SVM) and Multi-Layer Perceptron (MLP) based brain tumor classification. Also, this study presents a new method, named Multiple Eigenvalues Selection (MES), to choose the most meaningful features as inputs to the classifiers. This combination of unsupervised and supervised techniques forms an effective system for the grading of brain glioma. RESULTS: The experimental results of the proposed method showed excellent performance in terms of accuracy, recall, specificity, precision, and error rate: 91.02%, 86.52%, 94.26%, 87.07%, and 0.0897, respectively. CONCLUSION: The obtained results prove the significance and effectiveness of the proposed method in comparison to other state-of-the-art techniques, and it can contribute to an early diagnosis of brain glioma.
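The MI-based selection, SVD reduction, and SVM classification chain maps naturally onto an sklearn pipeline; the feature count and component number below are placeholders for the eigenvalues chosen by MES.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))   # placeholder intensity/texture features
y = rng.integers(0, 3, size=200)  # placeholder glioma grades

pipe = make_pipeline(
    SelectKBest(mutual_info_classif, k=100),  # MI-based feature selection
    TruncatedSVD(n_components=20),            # SVD dimensionality reduction
    SVC(kernel="rbf"))                        # SVM grading
pipe.fit(X, y)
print("training accuracy:", pipe.score(X, y))
```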
SwarmDeepSurv: swarm intelligence advances deep survival network for prognostic radiomics signatures in four solid cancers
Al-Tashi, Qasem
Saad, Maliazurina B.
Sheshadri, Ajay
Wu, Carol C.
Chang, Joe Y.
Al-Lazikani, Bissan
Gibbons, Christopher
Vokes, Natalie I.
Zhang, Jianjun
Lee, J. Jack
Heymach, John V.
Jaffray, David
Mirjalili, Seyedali
Wu, Jia
Patterns2023Journal Article, cited 0 times
Website
Non-Small Cell Lung Cancer (NSCLC)
TCGA-LUAD
TCGA-LUSC
NSCLC Radiogenomics
NSCLC-Radiomics
Head and neck squamous cell carcinoma (HNSCC)
HEAD-NECK-RADIOMICS-HN1
ISPY1/ACRIN 6657
TCGA-GBM
PyRadiomics
Cox proportional hazard model
Radiomics
cancer
survival analysis
Swarm
Imaging features
Transfer learning
Survival models exist to study relationships between biomarkers and treatment effects. Deep learning-powered survival models supersede the classical Cox proportional hazards (CoxPH) model, but substantial performance drops were observed on high-dimensional features because of irrelevant/redundant information. To fill this gap, we proposed SwarmDeepSurv by integrating swarm intelligence algorithms with the deep survival model. Furthermore, four objective functions were designed to optimize prognostic prediction while regularizing selected feature numbers. When testing on multicenter sets (n = 1,058) of four different cancer types, SwarmDeepSurv was less prone to overfitting and achieved optimal patient risk stratification compared with popular survival modeling algorithms. Strikingly, SwarmDeepSurv selected different features compared with classical feature selection algorithms, including the least absolute shrinkage and selection operator (LASSO), with nearly no feature overlapping across these models. Taken together, SwarmDeepSurv offers an alternative approach to model relationships between radiomics features and survival endpoints, which can further extend to study other input data types including genomics.
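The deep survival networks discussed above optimize the Cox negative log partial likelihood. A minimal PyTorch sketch of that loss follows; it is not SwarmDeepSurv itself and omits the swarm-based feature selection, and all inputs are synthetic.

```python
import torch

def cox_ph_loss(risk: torch.Tensor, time: torch.Tensor, event: torch.Tensor) -> torch.Tensor:
    """Negative log partial likelihood of the Cox model.
    risk:  (n,) predicted log-risk scores from the network
    time:  (n,) observed follow-up times
    event: (n,) 1.0 if the event occurred, 0.0 if censored
    """
    order = torch.argsort(time, descending=True)       # risk sets grow as time decreases
    risk, event = risk[order], event[order]
    log_cum_hazard = torch.logcumsumexp(risk, dim=0)   # log sum of exp(risk) over each risk set
    # Only uncensored samples contribute terms to the partial likelihood.
    return -((risk - log_cum_hazard) * event).sum() / event.sum().clamp(min=1.0)

# Toy usage with random scores:
n = 8
loss = cox_ph_loss(torch.randn(n), torch.rand(n), torch.randint(0, 2, (n,)).float())
print(loss.item())
```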
Brain Tumor Segmentation from Multiparametric MRI Using a Multi-encoder U-Net Architecture
This paper describes our submission to Task 1 of the RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge 2021, where the goal is to segment brain glioblastoma sub-regions in multi-parametric MRI scans. Glioblastoma patients have a very high mortality rate; robust and precise segmentation of the whole tumor, tumor core, and enhancing tumor subregions plays a vital role in patient management. We design a novel multi-encoder, shared decoder U-Net architecture aimed at reducing the effect of signal artefacts that can appear in single channels of the MRI recordings. We train multiple such models on the training images made available from the challenge organizers, collected from 1251 subjects. The ensemble model achieves Dice Scores of 0.9274 +/- 0.0930, 0.8717 +/- 0.2456, and 0.8750 +/- 0.1798; and Hausdorff distances of 4.77 +/- 17.05, 17.97 +/- 71.54, and 10.66 +/- 55.52; for whole tumor, tumor core, and enhancing tumor, respectively; on the 570 test subjects assessed by the organizer. We investigate the robustness of our automated segmentation system and discuss its possible relevance to existing and future clinical workflows for tumor evaluation and radiation therapy planning.
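For reference, the two evaluation metrics quoted above can be computed from binary masks roughly as follows; NumPy and SciPy are assumed, and the masks here are synthetic placeholders rather than BraTS data.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the voxel coordinates of two masks."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

pred = np.zeros((32, 32), bool); pred[8:20, 8:20] = True   # toy prediction
gt   = np.zeros((32, 32), bool); gt[10:22, 10:22] = True   # toy ground truth
print(dice(pred, gt), hausdorff(pred, gt))
```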
FEATURE EXTRACTION OF LUNG CANCER USING IMAGE ANALYSIS TECHNIQUES
Alayue, L. T.
Goshu, B. S.
Taju, Endris
Romanian Journal of Biophysics2022Journal Article, cited 0 times
Website
TCGA-LUSC
Computed Tomography (CT)
Lung Cancer
Computer Aided Detection (CADe)
MATLAB
Lung cancer is one of the most life-threatening diseases. It is a medical problem that needs accurate diagnosis and timely treatment by healthcare professionals. Although CT is preferred over other imaging modalities, visual interpretation of CT scan images may be subject to error and can cause a delay in lung cancer detection. Therefore, image processing techniques are widely used for early-stage detection of lung tumors. This study was conducted to perform pre-processing, segmentation, and feature extraction of lung CT images using image processing techniques. We used the MATLAB programming language to devise a stepwise approach that included image acquisition, pre-processing, segmentation, and feature extraction. A total of 14 lung CT scan images in the age group of 55–75 years were downloaded from an open access repository. The analyzed images were grayscale, 8 bits, with a resolution ranging from 151 × 213 to 721 × 900 pixels, in Digital Imaging and Communications in Medicine (DICOM) format. In the pre-processing stage, a median filter was used to remove noise from the original image since it preserved the edges of the image, whereas segmentation was done through edge detection and threshold analysis. The results show that solid tumors were detected in three CT images corresponding to patients aged between 71 and 75 years old. Our study indicates that image processing plays a significant role in lung cancer recognition and early-stage treatment. Health professionals need to work closely with medical physicists to improve the accuracy of diagnosis.
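The stepwise pipeline the authors describe (median filtering, threshold analysis, edge detection) maps onto standard library calls. Below is a rough Python analogue of such a MATLAB workflow, with a synthetic image standing in for a CT slice and every parameter an illustrative choice.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.filters import threshold_otsu, sobel

# Stand-in for a grayscale lung CT slice loaded from a DICOM file.
ct_slice = np.random.rand(512, 512).astype(np.float32)

denoised = median_filter(ct_slice, size=3)   # noise removal, edge-preserving
thresh = threshold_otsu(denoised)            # global threshold analysis
candidates = denoised > thresh               # candidate foreground regions
edges = sobel(denoised)                      # edge map for boundary detection
print(candidates.sum(), edges.max())
```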
Quantitative assessment of colorectal morphology: Implications for robotic colonoscopy
Alazmani, A
Hood, A
Jayne, D
Neville, A
Culmer, P
Medical Engineering & Physics2016Journal Article, cited 11 times
Website
CT COLONOGRAPHY
Segmentation
This paper presents a method of characterizing the distribution of colorectal morphometrics. It uses three-dimensional region growing and topological thinning algorithms to determine and visualize the luminal volume and centreline of the colon, respectively. Total and segmental lengths, diameters, volumes, and tortuosity angles were then quantified. The effects of body orientations on these parameters were also examined. Variations in total length were predominately due to differences in the transverse colon and sigmoid segments, and did not significantly differ between body orientations. The diameter of the proximal colon was significantly larger than the distal colon, with the largest value at the ascending and cecum segments. The volume of the transverse colon was significantly the largest, while those of the descending colon and rectum were the smallest. The prone position showed a higher frequency of high angles and was consequently found to be more tortuous than the supine position. This study yielded a method for complete segmental measurements of healthy colorectal anatomy and its tortuosity. The transverse and sigmoid colons were the major determinant in tortuosity and morphometrics between body orientations. Quantitative understanding of these parameters may potentially help to facilitate colonoscopy techniques, accuracy of polyp spatial distribution detection, and design of novel endoscopic devices.
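A minimal illustration of the centreline step (topological thinning of a binary lumen volume) follows; recent scikit-image versions of skeletonize stand in for the authors' algorithm, and the volume here is a synthetic tube, not a segmented colon.

```python
import numpy as np
from skimage.morphology import skeletonize  # handles 3-D input in recent scikit-image

# Synthetic binary lumen volume standing in for a segmented colon.
vol = np.zeros((40, 40, 40), dtype=bool)
vol[5:35, 18:23, 18:23] = True   # a simple straight tube

centreline = skeletonize(vol)         # topological thinning to a 1-voxel-wide line
length_estimate = centreline.sum()    # crude length proxy: number of skeleton voxels
print(length_estimate)
```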
Deep learning for segmentation of brain tumors: Impact of cross-institutional training and testing
AlBadawy, E. A.
Saha, A.
Mazurowski, M. A.
Med Phys2018Journal Article, cited 5 times
Website
TCGA-GBM
MICCAI BraTS challenge
Convolutional neural network (CNN)
FMRIB Software Library (FSL)
Dice similarity coefficient
Average Hausdorff Distance
BRAIN
Segmentation
Glioblastoma Multiforme (GBM)
magnetic resonance imaging (MRI)
BACKGROUND AND PURPOSE: Convolutional neural networks (CNNs) are commonly used for segmentation of brain tumors. In this work, we assess the effect of cross-institutional training on the performance of CNNs. METHODS: We selected 44 glioblastoma (GBM) patients from two institutions in The Cancer Imaging Archive dataset. The images were manually annotated by outlining each tumor component to form ground truth. To automatically segment the tumors in each patient, we trained three CNNs: (a) one using data for patients from the same institution as the test data, (b) one using data for the patients from the other institution and (c) one using data for the patients from both of the institutions. The performance of the trained models was evaluated using Dice similarity coefficients as well as Average Hausdorff Distance between the ground truth and automatic segmentations. The 10-fold cross-validation scheme was used to compare the performance of different approaches. RESULTS: Performance of the model significantly decreased (P < 0.0001) when it was trained on data from a different institution (dice coefficients: 0.68 +/- 0.19 and 0.59 +/- 0.19) as compared to training with data from the same institution (dice coefficients: 0.72 +/- 0.17 and 0.76 +/- 0.12). This trend persisted for segmentation of the entire tumor as well as its individual components. CONCLUSIONS: There is a very strong effect of selecting data for training on performance of CNNs in a multi-institutional setting. Determination of the reasons behind this effect requires additional comprehensive investigation.
Self-organizing Approach to Learn a Level-set Function for Object Segmentation in Complex Background Environments
Boundary extraction for object region segmentation is one of the most challenging tasks in image processing and computer vision areas. The complexity of large variations in the appearance of the object and the background in a typical image causes the performance degradation of existing segmentation algorithms. One of the goals of computer vision studies is to produce algorithms to segment object regions to produce accurate object boundaries that can be utilized in feature extraction and classification.
This dissertation research considers the incorporation of prior knowledge of the intensity/color of objects of interest within a segmentation framework to enhance the performance of object region and boundary extraction of targets in unconstrained environments. The information about the intensity/color of the object of interest is taken from small patches used as seeds that are fed to learn a neural network. The main challenge is accounting for the projection transformation between the limited amount of prior information and the appearance of the real object of interest in the testing data. We address this problem by the use of a Self-Organizing Map (SOM), which is an unsupervised learning neural network. The segmentation process is achieved by the construction of a local fitted image level-set cost function, in which the dynamic variable is a Best Matching Unit (BMU) coming from the SOM map.
The proposed method is demonstrated on the challenging PASCAL 2011 dataset, in which images contain objects with variations of illumination, shadows, occlusions and clutter. In addition, our method is tested on different types of imagery including thermal, hyperspectral, and medical imagery. Metrics illustrate the effectiveness and accuracy of the proposed algorithm in improving the efficiency of boundary extraction and object region detection.
In order to reduce computational time, a lattice Boltzmann method (LBM) convergence criterion is used along with the proposed self-organized active contour model for producing faster and effective segmentation. The lattice Boltzmann method is utilized to evolve the level-set function rapidly and terminate the evolution of the curve at the most optimal region. Experiments performed on our test datasets show promising results in terms of time and quality of the segmentation when compared to other state-of-the-art learning-based active contour model approaches. Our method is more than 53% faster than other state-of-the-art methods. Research is in progress to employ a Time Adaptive Self-Organizing Map (TASOM) for improved segmentation and to utilize the parallelization property of the LBM to achieve real-time segmentation.
Multi-modal Multi-temporal Brain Tumor Segmentation, Growth Analysis and Texture-based Classification
Brain tumor analysis is an active field of research, which has received a lot of attention from both the medical and the technical communities in the past decades. The purpose of this thesis is to investigate brain tumor segmentation, growth analysis and tumor classification based on multi-modal magnetic resonance (MR) image datasets of low- and high-grade glioma making use of computer vision and machine learning methodologies. Brain tumor segmentation involves the delineation of tumorous structures, such as edema, active tumor and necrotic tumor core, and healthy brain tissues, often categorized in gray matter, white matter and cerebro-spinal fluid. Deep learning frameworks have proven to be among the most accurate brain tumor segmentation techniques, performing particularly well when large accurately annotated image datasets are available. A first project is designed to build a more flexible model, which allows for intuitive semi-automated user-interaction, is less dependent on training data, and can handle missing MR modalities. The framework is based on a Bayesian network with hidden variables optimized by the expectation-maximization algorithm, and is tailored to handle non-Gaussian multivariate distributions using the concept of Gaussian copulas. To generate reliable priors for the generative probabilistic model and to spatially regularize the segmentation results, it is extended with an initialization and a post-processing module, both based on supervoxels classified by random forests. Brain tumor segmentation allows to assess tumor volumetry over time, which is important to identify disease progression (tumor regrowth) after therapy. In a second project, a dataset of temporal MR sequences is analyzed. To that end, brain tumor segmentation and brain tumor growth assessment are unified within a single framework using a conditional random field (CRF). The CRF extends over the temporal patient datasets and includes directed links with infinite weight in order to incorporate growth or shrinkage constraints. The model is shown to obtain temporally coherent tumor segmentation and aids in estimating the likelihood of disease progression after therapy. Recent studies classify brain tumors based on their genotypic parameters, which are reported to have an important impact on the prognosis and the therapy of patients. A third project is aimed to investigate whether the genetic profile of glioma can be predicted based on the MR images only, which would eliminate the need to take biopsies. A multi-modal medical image classification framework is built, classifying glioma in three genetic classes based on DNA methylation status. The framework makes use of short local image descriptors as well as deep-learned features acquired by denoising auto-encoders to generate meaningful image features. The framework is successfully validated and shown to obtain high accuracies even though the same image-based classification task is hardly possible for medical experts.
Automatic intensity windowing of mammographic images based on a perceptual metric
Albiol, Alberto
Corbi, Alberto
Albiol, Francisco
Medical Physics2017Journal Article, cited 0 times
Website
Algorithm Development
Computer Aided Diagnosis (CADx)
BI-RADS
mutual information
Mammography
Gabor filter
BREAST
Radiomic feature
PURPOSE: Initial auto-adjustment of the window level WL and width WW applied to mammographic images. The proposed intensity windowing (IW) method is based on the maximization of the mutual information (MI) between a perceptual decomposition of the original 12-bit sources and their screen displayed 8-bit version. Besides zoom, color inversion and panning operations, IW is the most commonly performed task in daily screening and has a direct impact on diagnosis and the time involved in the process. METHODS: The authors present a human visual system and perception-based algorithm named GRAIL (Gabor-relying adjustment of image levels). GRAIL initially measures a mammogram's quality based on the MI between the original instance and its Gabor-filtered derivations. From this point on, the algorithm performs an automatic intensity windowing process that outputs the WL/WW that best displays each mammogram for screening. GRAIL starts with the default, high contrast, wide dynamic range 12-bit data, and then maximizes the graphical information presented in ordinary 8-bit displays. Tests have been carried out with several mammogram databases. They comprise correlations and an ANOVA analysis with the manual IW levels established by a group of radiologists. A complete MATLAB implementation of GRAIL is available at https://github.com/TheAnswerIsFortyTwo/GRAIL. RESULTS: Auto-leveled images show superior quality both perceptually and objectively compared to their full intensity range and compared to the application of other common methods like global contrast stretching (GCS). The correlations between the human determined intensity values and the ones estimated by our method surpass that of GCS. The ANOVA analysis with the upper intensity thresholds also reveals a similar outcome. GRAIL has also proven to specially perform better with images that contain micro-calcifications and/or foreign X-ray-opaque elements and with healthy BI-RADS A-type mammograms. It can also speed up the initial screening time by a mean of 4.5 s per image. CONCLUSIONS: A novel methodology is introduced that enables a quality-driven balancing of the WL/WW of mammographic images. This correction seeks the representation that maximizes the amount of graphical information contained in each image. The presented technique can contribute to the diagnosis and the overall efficiency of the breast screening session by suggesting, at the beginning, an optimal and customized windowing setting for each mammogram.
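The intensity-windowing operation being optimized in the entry above has a simple closed form: pixels are clipped to the interval [WL - WW/2, WL + WW/2] and rescaled to 8 bits. A sketch of that mapping follows; the WL/WW values are arbitrary examples, not GRAIL's output, and the image is synthetic.

```python
import numpy as np

def apply_window(img12: np.ndarray, wl: float, ww: float) -> np.ndarray:
    """Map a 12-bit image to the 8-bit display range given window level/width."""
    lo, hi = wl - ww / 2.0, wl + ww / 2.0
    out = (np.clip(img12, lo, hi) - lo) / (hi - lo)   # normalize to [0, 1]
    return (out * 255).astype(np.uint8)

mammo = np.random.randint(0, 4096, (256, 256))   # stand-in for a 12-bit mammogram
display = apply_window(mammo, wl=2048, ww=1024)  # hypothetical WL/WW setting
print(display.min(), display.max())
```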
Semi-automatic classification of prostate cancer on multi-parametric MR imaging using a multi-channel 3D convolutional neural network
Aldoj, Nader
Lukas, Steffen
Dewey, Marc
Penzkofer, Tobias
Eur Radiol2020Journal Article, cited 1 times
Website
PROSTATEx
Convolutional Neural Network (CNN)
Deep learning
Multi-parametric MRI
Prostate
OBJECTIVE: To present a deep learning-based approach for semi-automatic prostate cancer classification based on multi-parametric magnetic resonance (MR) imaging using a 3D convolutional neural network (CNN). METHODS: Two hundred patients with a total of 318 lesions for which histological correlation was available were analyzed. A novel CNN was designed, trained, and validated using different combinations of distinct MRI sequences as input (e.g., T2-weighted, apparent diffusion coefficient (ADC), diffusion-weighted images, and K-trans) and the effect of different sequences on the network's performance was tested and discussed. The particular choice of modeling approach was justified by testing all relevant data combinations. The model was trained and validated using eightfold cross-validation. RESULTS: In terms of detection of significant prostate cancer defined by biopsy results as the reference standard, the 3D CNN achieved an area under the curve (AUC) of the receiver operating characteristics ranging from 0.89 (88.6% and 90.0% for sensitivity and specificity respectively) to 0.91 (81.2% and 90.5% for sensitivity and specificity respectively) with an average AUC of 0.897 for the ADC, DWI, and K-trans input combination. The other combinations scored less in terms of overall performance and average AUC, where the difference in performance was significant with a p value of 0.02 when using T2w and K-trans; and 0.00025 when using T2w, ADC, and DWI. Prostate cancer classification performance is thus comparable to that reported for experienced radiologists using the prostate imaging reporting and data system (PI-RADS). Lesion size and largest diameter had no effect on the network's performance. CONCLUSION: The diagnostic performance of the 3D CNN in detecting clinically significant prostate cancer is characterized by a good AUC and sensitivity and high specificity. KEY POINTS: * Prostate cancer classification using a deep learning model is feasible and it allows direct processing of MR sequences without prior lesion segmentation. * Prostate cancer classification performance as measured by AUC is comparable to that of an experienced radiologist. * Perfusion MR images (K-trans), followed by DWI and ADC, have the highest effect on the overall performance; whereas T2w images show hardly any improvement.
Radiogenomics in renal cell carcinoma
Alessandrino, Francesco
Shinagare, Atul B
Bossé, Dominick
Choueiri, Toni K
Krajewski, Katherine M
Abdominal Radiology2018Journal Article, cited 0 times
Website
TCGA_RCC
renal cancer
radiogenomics
Automatic Segmentation and Overall Survival Prediction in Gliomas Using Fully Convolutional Neural Network and Texture Analysis
In this paper, we use a Fully Convolutional Neural Network (FCNN) for the segmentation of gliomas from Magnetic Resonance Images (MRI). A fully automatic, voxel-based classification was achieved by training a 23-layer deep FCNN on 2-D slices extracted from patient volumes. The network was trained on slices extracted from 130 patients and validated on 50 patients. For the task of survival prediction, texture and shape based features were extracted from the T1 post-contrast volume to train an eXtreme Gradient Boosting (XGBoost) regressor. On the BraTS 2017 validation set, the proposed scheme achieved mean whole tumor, tumor core and active tumor Dice scores of 0.83, 0.69 and 0.69 respectively, while for the task of overall survival prediction, the proposed scheme achieved an accuracy of 52%.
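The survival-prediction stage above fits a gradient-boosted regressor on handcrafted features. A minimal sketch, assuming the xgboost Python package and random stand-in features (not the paper's texture/shape descriptors or hyperparameters):

```python
import numpy as np
from xgboost import XGBRegressor

# Stand-ins for texture/shape features from T1 post-contrast volumes
# and overall survival in days.
X = np.random.rand(130, 40)
y = np.random.uniform(50, 1500, size=130)

reg = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
reg.fit(X, y)                 # regress survival time on radiomic features
print(reg.predict(X[:5]))     # predicted survival for the first five patients
```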
Applying Deep Transfer Learning to Assess the Impact of Imaging Modalities on Colon Cancer Detection
Alhazmi, Wael
Turki, Turki
Diagnostics2023Journal Article, cited 1 times
Website
TCGA-COAD
ACRIN 6664
Deep Learning
colon cancer
Deep Feature Selection and Decision Level Fusion for Lungs Nodule Classification
Ali, Imdad
Muzammil, Muhammad
Haq, Ihsan Ul
Khaliq, Amir A.
Abdullah, Suheel
IEEE Access2021Journal Article, cited 0 times
SPIE-AAPM Lung CT Challenge
The presence of pulmonary nodules can indicate lung cancer. Computer-Aided Diagnostic (CAD) detection and classification of such nodules in CT images leads to improved lung cancer screening. Classic CAD systems utilize a nodule detector and a feature-based classifier. In this work, we proposed a decision-level fusion technique to improve the performance of the CAD system for lung nodule classification. First, we evaluated the performance of Support Vector Machine (SVM) and AdaBoostM2 algorithms based on the deep features from state-of-the-art transferable architectures (such as VGG-16, VGG-19, GoogLeNet, Inception-V3, ResNet-18, ResNet-50, ResNet-101 and InceptionResNet-V2). Then, we analyzed the performance of the SVM and AdaBoostM2 classifiers as a function of deep features. We also extracted the deep features by identifying the optimal layers, which improved the performance of the classifiers. The classification accuracy increased from 76.88% to 86.28% for ResNet-101 and from 67.37% to 83.40% for GoogLeNet. Similarly, the error rate was also reduced significantly. Moreover, the results showed that SVM is more robust and efficient for deep features as compared to AdaBoostM2. The results are based on 4-fold cross-validation and are presented for the publicly available LUNGx challenge dataset. We showed that the proposed technique outperforms state-of-the-art techniques; the achieved accuracy score was 90.46 ± 0.25%.
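The core idea above, a frozen pretrained CNN used as a feature extractor feeding an SVM, can be sketched as follows. The backbone choice, layer, and parameters are illustrative rather than the paper's exact configuration; recent torchvision and scikit-learn are assumed, and the patches are random stand-ins.

```python
import torch
import torch.nn as nn
import torchvision.models as models
from sklearn.svm import SVC

# Pretrained backbone with the classification head removed.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Identity()     # expose the 2048-d penultimate features
backbone.eval()

@torch.no_grad()
def deep_features(batch: torch.Tensor) -> torch.Tensor:
    return backbone(batch)      # (n, 2048) feature vectors

# Stand-ins for preprocessed nodule patches and their benign/malignant labels.
patches = torch.randn(16, 3, 224, 224)
labels = torch.randint(0, 2, (16,)).numpy()

feats = deep_features(patches).numpy()
svm = SVC(kernel="rbf").fit(feats, labels)   # SVM on deep features
print(svm.score(feats, labels))
```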
A Feasibility Study on Deep Learning Based Brain Tumor Segmentation Using 2D Ellipse Box Areas
In most deep learning-based brain tumor segmentation methods, training the deep network requires annotated tumor areas. However, accurate tumor annotation puts high demands on medical personnel. The aim of this study is to train a deep network for segmentation by using ellipse box areas surrounding the tumors. In the proposed method, the deep network is trained by using a large number of unannotated tumor images with foreground (FG) and background (BG) ellipse box areas surrounding the tumor and background, and a small number of patients (<20) with annotated tumors. The training is conducted by initial training on two ellipse boxes on unannotated MRIs, followed by refined training on a small number of annotated MRIs. We use a multi-stream U-Net for conducting our experiments, which is an extension of the conventional U-Net. This enables the use of complementary information from multi-modality (e.g., T1, T1ce, T2, and FLAIR) MRIs. To test the feasibility of the proposed approach, experiments and evaluation were conducted on two datasets for glioma segmentation. Segmentation performance on the test sets is then compared with those used on the same network but trained entirely by annotated MRIs. Our experiments show that the proposed method has obtained good tumor segmentation results on the test sets, wherein the dice score on tumor areas is (0.8407, 0.9104), and segmentation accuracy on tumor areas is (83.88%, 88.47%) for the MICCAI BraTS’17 and US datasets, respectively. Comparing the segmented results by using the network trained by all annotated tumors, the drop in the segmentation performance from the proposed approach is (0.0594, 0.0159) in the dice score, and (8.78%, 2.61%) in segmented tumor accuracy for MICCAI and US test sets, which is relatively small. Our case studies have demonstrated that training the network for segmentation by using ellipse box areas in place of all annotated tumors is feasible, and can be considered as an alternative, which is a trade-off between saving medical experts’ time annotating tumors and a small drop in segmentation performance.
Prediction of glioma-subtypes: comparison of performance on a DL classifier using bounding box areas versus annotated tumors
Ali, M. B.
Gu, I. Y.
Lidemar, A.
Berger, M. S.
Widhalm, G.
Jakola, A. S.
BMC Biomed Eng2022Journal Article, cited 0 times
TCGA-GBM
TCGA-LGG
Radiogenomics
Magnetic Resonance Imaging (MRI)
1p/19q codeletion
Brain tumor
Deep learning
Ellipse bounding box
IDH genotype
Convolutional Neural Networks (CNN)
BACKGROUND: For brain tumors, identifying the molecular subtypes from magnetic resonance imaging (MRI) is desirable, but remains a challenging task. Recent machine learning and deep learning (DL) approaches may help the classification/prediction of tumor subtypes through MRIs. However, most of these methods require annotated data with ground truth (GT) tumor areas manually drawn by medical experts. The manual annotation is a time-consuming process with high demand on medical personnel. As an alternative, automatic segmentation is often used. However, it does not guarantee the quality and could lead to improper or failed segmented boundaries due to differences in MRI acquisition parameters across imaging centers, as segmentation is an ill-defined problem. Analogous to visual object tracking and classification, this paper shifts the paradigm by training a classifier using tumor bounding box areas in MR images. The aim of our study is to see whether it is possible to replace GT tumor areas with tumor bounding box areas (e.g. ellipse-shaped boxes) for classification without a significant drop in performance. METHOD: In patients with diffuse gliomas, we trained a deep learning classifier for subtype prediction employing tumor regions of interest (ROIs) defined by ellipse bounding boxes versus manually annotated data. Experiments were conducted on two datasets (US and TCGA) consisting of multi-modality MRI scans, where the US dataset contained patients with diffuse low-grade gliomas (dLGG) exclusively. RESULTS: Prediction rates were obtained on 2 test datasets: 69.86% for 1p/19q codeletion status on the US dataset and 79.50% for IDH mutation/wild-type on the TCGA dataset. Comparisons with using annotated GT tumor data for training showed an average degradation of 3.0% (2.92% for 1p/19q codeletion status and 3.23% for IDH genotype). CONCLUSION: Using tumor ROIs, i.e., ellipse bounding box tumor areas, to replace annotated GT tumor areas for training a deep learning scheme causes only a modest decline in performance in terms of subtype prediction. With more data that can be made available, this may be a reasonable trade-off where the decline in performance may be counteracted with more data.
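Generating the ellipse bounding-box areas used as weak labels is straightforward; a scikit-image sketch follows, with a hypothetical tumor center and radii rather than values from the study.

```python
import numpy as np
from skimage.draw import ellipse

h, w = 240, 240
fg_mask = np.zeros((h, w), dtype=bool)

# Hypothetical foreground ellipse loosely enclosing the tumor.
rr, cc = ellipse(r=120, c=110, r_radius=35, c_radius=25, shape=(h, w))
fg_mask[rr, cc] = True

bg_mask = ~fg_mask   # everything outside serves as the background area
print(fg_mask.sum(), bg_mask.sum())
```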
A novel federated deep learning scheme for glioma and its subtype classification
Ali, Muhaddisa Barat
Gu, Irene Yu-Hua
Berger, Mitchel S.
Jakola, Asgeir Store
Frontiers in Neuroscience2023Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Deep Learning
MRI
glioma
Background: Deep learning (DL) has shown promising results in molecular-based classification of glioma subtypes from MR images. DL requires a large amount of training data to achieve good generalization performance. Since brain tumor datasets are usually small in size, combining such datasets from different hospitals is needed. Data privacy issues from hospitals often pose a constraint on such a practice. Federated learning (FL) has gained much attention lately as it trains a central DL model without requiring data sharing from different hospitals.
Method: We propose a novel 3D FL scheme for glioma and its molecular subtype classification. In the scheme, a slice-based DL classifier, EtFedDyn, is exploited, which is an extension of FedDyn, with the key differences of using a focal loss cost function to tackle severe class imbalances in the datasets, and a multi-stream network to exploit MRIs in different modalities. By combining EtFedDyn with domain mapping as the pre-processing and 3D scan-based post-processing, the proposed scheme performs 3D brain scan-based classification on datasets from different dataset owners. To examine whether the FL scheme could replace the central learning (CL) one, we then compare the classification performance between the proposed FL and the corresponding CL schemes. Furthermore, detailed empirical analyses were also conducted to examine the effect of using domain mapping, 3D scan-based post-processing, different cost functions and different FL schemes.
Results: Experiments were done on two case studies: classification of glioma subtypes (IDH mutation and wild-type on TCGA and US datasets in case A) and glioma grades (high/low grade glioma HGG and LGG on MICCAI dataset in case B). The proposed FL scheme obtained good performance on the test sets (85.46%, 75.56%) for IDH subtypes and (89.28%, 90.72%) for glioma LGG/HGG, all averaged over five runs. Compared with the corresponding CL scheme, the drop in test accuracy from the proposed FL scheme is small (-1.17%, -0.83%), indicating its good potential to replace the CL scheme. Furthermore, the empirical tests have shown an increased classification test accuracy by applying: domain mapping (0.4%, 1.85%) in case A; the focal loss function (1.66%, 3.25%) in case A and (1.19%, 1.85%) in case B; 3D post-processing (2.11%, 2.23%) in case A and (1.81%, 2.39%) in case B; and EtFedDyn over the FedAvg classifier (1.05%, 1.55%) in case A and (1.23%, 1.81%) in case B with fast convergence, all of which contributed to the improvement of the overall performance of the proposed FL scheme.
Conclusion: The proposed FL scheme is shown to be effective in predicting glioma and its subtypes by using MR images from test sets, with great potential to replace the conventional CL approaches for training deep networks. This could help hospitals to maintain their data privacy while using a federated trained classifier with nearly similar performance to that from a centrally trained one. Further detailed experiments have shown that different parts of the proposed 3D FL scheme, such as domain mapping (which makes datasets more uniform) and post-processing (scan-based classification), are essential.
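The focal loss used in EtFedDyn above counters class imbalance by down-weighting well-classified examples with a factor (1 - p_t)^gamma. A minimal multiclass PyTorch sketch follows; the gamma and alpha values are conventional defaults, not necessarily the paper's settings.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, target: torch.Tensor,
               gamma: float = 2.0, alpha: float = 0.25) -> torch.Tensor:
    """Multiclass focal loss: FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t)."""
    ce = F.cross_entropy(logits, target, reduction="none")  # -log(p_t)
    p_t = torch.exp(-ce)                                    # probability of the true class
    return (alpha * (1.0 - p_t) ** gamma * ce).mean()

logits = torch.randn(8, 2)               # e.g., IDH mutation vs. wild-type scores
target = torch.randint(0, 2, (8,))
print(focal_loss(logits, target).item())
```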
Glioma Segmentation Using Ensemble of 2D/3D U-Nets and Survival Prediction Using Multiple Features Fusion
Automatic segmentation of gliomas from brain Magnetic Resonance Imaging (MRI) volumes is an essential step for tumor detection. Various 2D Convolutional Neural Network (2D-CNN) and its 3D variant, known as 3D-CNN based architectures, have been proposed in previous studies, which are used to capture contextual information. The 3D models capture depth information, making them an automatic choice for glioma segmentation from 3D MRI images. However, the 2D models can be trained in a relatively shorter time, making their parameter tuning relatively easier. Considering these facts, we tried to propose an ensemble of 2D and 3D models to utilize their respective benefits better. After segmentation, prediction of Overall Survival (OS) time was performed on segmented tumor sub-regions. For this task, multiple radiomic and image-based features were extracted from MRI volumes and segmented sub-regions. In this study, radiomic and image-based features were fused to predict the OS time of patients. Experimental results on BraTS 2020 testing dataset achieved a dice score of 0.79 on Enhancing Tumor (ET), 0.87 on Whole Tumor (WT), and 0.83 on Tumor Core (TC). For OS prediction task, results on BraTS 2020 testing leaderboard achieved an accuracy of 0.57, Mean Square Error (MSE) of 392,963.189, Median SE of 162,006.3, and Spearman R correlation score of −0.084.
Automated apparent diffusion coefficient analysis for genotype prediction in lower grade glioma: association with the T2-FLAIR mismatch sign
Aliotta, E.
Dutta, S. W.
Feng, X.
Tustison, N. J.
Batchala, P. P.
Schiff, D.
Lopes, M. B.
Jain, R.
Druzgal, T. J.
Mukherjee, S.
Patel, S. H.
J Neurooncol2020Journal Article, cited 0 times
Website
TCGA-LGG
Radiomics
Radiogenomics
BRAIN
PURPOSE: The prognosis of lower grade glioma (LGG) patients depends (in large part) on both isocitrate dehydrogenase (IDH) gene mutation and chromosome 1p/19q codeletion status. IDH-mutant LGG without 1p/19q codeletion (IDHmut-Noncodel) often exhibit a unique imaging appearance that includes high apparent diffusion coefficient (ADC) values not observed in other subtypes. The purpose of this study was to develop an ADC analysis-based approach that can automatically identify IDHmut-Noncodel LGG. METHODS: Whole-tumor ADC metrics, including fractional tumor volume with ADC > 1.5 x 10^-3 mm^2/s (VADC>1.5), were used to identify IDHmut-Noncodel LGG in a cohort of N = 134 patients. Optimal threshold values determined in this dataset were then validated using an external dataset containing N = 93 cases collected from The Cancer Imaging Archive. Classifications were also compared with radiologist-identified T2-FLAIR mismatch sign and evaluated concurrently to identify added value from a combined approach. RESULTS: VADC>1.5 classified IDHmut-Noncodel LGG in the internal cohort with an area under the curve (AUC) of 0.80. An optimal threshold value of 0.35 led to sensitivity/specificity = 0.57/0.93. Classification performance was similar in the validation cohort, with VADC>1.5 >= 0.35 achieving sensitivity/specificity = 0.57/0.91 (AUC = 0.81). Across both groups, 37 cases exhibited positive T2-FLAIR mismatch sign, all of which were IDHmut-Noncodel. Of these, 32/37 (86%) also exhibited VADC>1.5 >= 0.35, as did 23 additional IDHmut-Noncodel cases which were negative for T2-FLAIR mismatch sign. CONCLUSION: Tumor subregions with high ADC were a robust indicator of IDHmut-Noncodel LGG, with VADC>1.5 achieving > 90% classification specificity in both internal and validation cohorts. VADC>1.5 exhibited strong concordance with the T2-FLAIR mismatch sign and the combination of both parameters improved sensitivity in detecting IDHmut-Noncodel LGG.
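The key imaging metric here, VADC>1.5, is simply the fraction of tumor voxels whose ADC exceeds 1.5 x 10^-3 mm^2/s. A sketch of the computation and the 0.35 decision rule from the abstract follows; the arrays are synthetic placeholders.

```python
import numpy as np

# Stand-ins for an ADC map (mm^2/s) and a binary whole-tumor mask.
adc = np.random.uniform(0.5e-3, 2.5e-3, size=(64, 64, 24))
tumor_mask = np.zeros(adc.shape, dtype=bool)
tumor_mask[20:40, 20:40, 8:16] = True

v_adc = (adc[tumor_mask] > 1.5e-3).mean()   # fractional tumor volume with ADC > 1.5e-3
is_idhmut_noncodel = v_adc >= 0.35          # threshold reported in the study
print(v_adc, is_idhmut_noncodel)
```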
Challenges in predicting glioma survival time in multi-modal deep networks
Prediction of cancer survival time is of considerable interest in medicine as it leads to better patient care and reduces health care costs. In this study, we propose a multi-path multi-modal neural network that predicts Glioblastoma Multiforme (GBM) survival time at the 14-month threshold. We obtained images, gene expression, and SNP variants from whole-exome sequences, all from The Cancer Genome Atlas portal, for a total of 126 patients. We perform a 10-fold cross-validation experiment on each of the data sources separately as well as on the model with all data combined. From post-contrast T1 MRI data, we used 3D scans and 2D slices that we selected manually to show the tumor region. We find that the model with 2D MRI slices and genomic data combined gives the highest accuracies over individual sources, but by a modest margin. We see considerable variation in accuracies across the 10 folds, and our model achieves 100% accuracy on the training data but lags behind in test accuracy. With dropout, our training accuracy falls considerably. This shows that predicting glioma survival time is a challenging task, but it is unclear if this is also a symptom of insufficient data. A clear direction here is to augment our data, which we plan to explore with generative models. Overall, we present a novel multi-modal network that incorporates SNP, gene expression, and MRI image data for glioma survival time prediction.
Strong semantic segmentation for Covid-19 detection: Evaluating the use of deep learning models as a performant tool in radiography
Allioui, Hanane
Mourdi, Youssef
Sadgal, Mohamed
2022Journal Article, cited 0 times
LCTSC
INTRODUCTION: With the increasing number of Covid-19 cases as well as care costs, chest diseases have gained increasing interest in several communities, particularly the medical and computer vision communities. Clinical and analytical exams are widely recognized techniques for diagnosing and handling Covid-19 cases. However, strong detection tools can help avoid damage to chest tissues. The proposed method provides an important way to enhance the semantic segmentation process using combined potential deep learning (DL) modules to increase consistency. Based on Covid-19 CT images, this work hypothesized that a novel model for semantic segmentation might be able to extract definite graphical features of Covid-19 and afford an accurate clinical diagnosis while optimizing the classical test and saving time.
METHODS: CT images were collected considering different cases (normal chest CT, pneumonia, typical viral causes, and Covid-19 cases). The study presents an advanced DL method to deal with chest semantic segmentation issues. The approach employs a modified version of the U-net to enable and support Covid-19 detection from the studied images.
RESULTS: The validation tests demonstrated competitive results with important performance rates: Precision (90.96% ± 2.5) with an F-score of (91.08% ± 3.2), an accuracy of (93.37% ± 1.2), a sensitivity of (96.88% ± 2.8) and a specificity of (96.91% ± 2.3). In addition, the visual segmentation results are very close to the Ground truth.
CONCLUSION: The findings of this study reveal the proof-of-principle for using cooperative components to strengthen the semantic segmentation modules for effective and truthful Covid-19 diagnosis.
IMPLICATIONS FOR PRACTICE: This paper has highlighted that DL based approach, with several modules, may be contributing to provide strong support for radiographers and physicians, and that further use of DL is required to design and implement performant automated vision systems to detect chest diseases.
King Abdullah International Medical Research Center (KAIMRC)’s breast cancer big images data set
Almazroa, Ahmed A.
Bin Saleem, Ghaida
Alotaibi, Aljoharah
Almasloukh, Mudhi
Al Otaibi, Um Klthoum
Al Balawi, Wejdan
Alabdulmajeed, Ghufran
Alamri, Suhailah
Alsomaie, Barrak
Fahim, Mohammed
Alluhaydan, Najd
Almatar, Hessa
Park, Brian J.
Deserno, Thomas M.
2022Conference Paper, cited 0 times
BREAST-DIAGNOSIS
CBIS-DDSM
ACRIN 6698
ACRIN 6698/I-SPY2 Breast DWI
BMMR2 Challenge
BCS-DBT
BREAST
The purpose of this project is to prepare an image data set for developing AI systems to serve the breast cancer screening and diagnosis research field, as early detection could have a positive impact on decreasing mortality by offering more options for successful interventions and therapies that reduce the chance of malignant and metastatic progression. Six students, one research technologist, and one consultant in radiology collected the images and the patients' information. The images were extracted from three imaging modalities: the Hologic 3D Mammography, Philips and Super Sonic ultrasound machines, and GE and Philips machines for MRI. The cases were graded by a trained radiologist. A total of 3085 DICOM format images were collected for the period between 2008 and 2020 for 890 female patients aged 18 to 85. The largest portion of the data is dedicated to mammograms (51.3%), followed by ultrasound (31.7%) and MRI exams (17%). There were 593 malignant cases and 2492 benign cases. The diagnosis was confirmed by biopsy after mammogram and ultrasound exams. The data will be continually collected in the future to serve the artificial intelligence research field and the public health community. The updated information about the data will be available on: https://kaimrc.med.sa/?page_id=11767072
Segmentation of male pelvic organs on computed tomography with a deep neural network fine-tuned by a level-set method
Almeida, Gonçalo
Figueira, Ana Rita
Lencart, Joana
Tavares, João Manuel R S
Computers in Biology and Medicine2021Journal Article, cited 0 times
PROSTATEx
Computed Tomography (CT) imaging is used in Radiation Therapy planning, where the treatment is carefully tailored to each patient in order to maximize radiation dose to the target while decreasing adverse effects to nearby healthy tissues. A crucial step in this process is manual organ contouring, which if performed automatically could considerably decrease the time to starting treatment and improve outcomes. Computerized segmentation of male pelvic organs has been studied for decades and deep learning models have brought considerable advances to the field, but improvements are still demanded. A two-step framework for automatic segmentation of the prostate, bladder and rectum is presented: a convolutional neural network enhanced with attention gates performs an initial segmentation, followed by a region-based active contour model to fine-tune the segmentations to each patient's specific anatomy. The framework was evaluated on a large collection of planning CTs of patients who had Radiation Therapy for prostate cancer. The Surface Dice Coefficient improved from 79.41% to 81.00% on segmentation of the prostate, from 94.03% to 95.36% on the bladder, and from 82.17% to 83.68% on the rectum, comparing the proposed framework with the baseline convolutional neural network. This study shows that traditional image segmentation algorithms can help improve the immense gains that deep learning models have brought to the medical imaging segmentation field.
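The second stage above, a region-based active contour refining the CNN output, can be approximated with scikit-image's morphological Chan-Vese implementation. The sketch below uses a faked CNN mask and an arbitrary iteration count; it illustrates the general technique, not the paper's exact contour model.

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

# Stand-ins: a CT slice and a coarse CNN-predicted organ mask.
ct_slice = np.random.rand(128, 128)
cnn_mask = np.zeros((128, 128), dtype=np.int8)
cnn_mask[40:90, 40:90] = 1

# Region-based active contour initialized from the network's prediction.
# Note: older scikit-image versions name this parameter `iterations`.
refined = morphological_chan_vese(ct_slice, num_iter=50, init_level_set=cnn_mask)
print(refined.sum())
```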
Versatile Convolutional Networks Applied to Computed Tomography and Magnetic Resonance Image Segmentation
Almeida, Gonçalo
Tavares, João Manuel R. S.
Journal of Medical Systems2021Journal Article, cited 0 times
LCTSC
Segmentation
Deep Learning
Medical image segmentation has seen positive developments in recent years but remains challenging with many practical obstacles to overcome. The applications of this task are wide-ranging in many fields of medicine, and used in several imaging modalities which usually require tailored solutions. Deep learning models have gained much attention and have been lately recognized as the most successful for automated segmentation. In this work we show the versatility of this technique by means of a single deep learning architecture capable of successfully performing segmentation on two very different types of imaging: computed tomography and magnetic resonance. The developed model is fully convolutional with an encoder-decoder structure and high-resolution pathways which can process whole three-dimensional volumes at once, and learn directly from the data to find which voxels belong to the regions of interest and localize those against the background. The model was applied to two publicly available datasets achieving equivalent results for both imaging modalities, as well as performing segmentation of different organs in different anatomic regions with comparable success.
A Framework for Performance Optimization of Internet of Things Applications
A framework to support optimised application placement across the cloud-edge continuum is described, making use of the Optimized-Greedy Nominator Heuristic (EO-GNH). The framework can be employed across a range of different Internet of Things (IoT) applications, such as smart agriculture and healthcare. The framework uses asynchronous MapReduce and parallel meta-heuristics to support the management of IoT applications, focusing on metrics such as execution performance, resource utilization and system resilience. We evaluate EO-GNH using service quality achieved through real-time resource management, across multiple application domains. Performance analysis and optimisation of EO-GNH have also been carried out to demonstrate how it can be configured for use across different IoT usage contexts.
Predicting methylation class from diffusely infiltrating adult gliomas using multi-modality MRI data
Alom, Zahangir
Tran, Quynh T.
Bag, Asim K.
Lucas, John T.
Orr, Brent A.
Neuro-Oncology Advances2023Journal Article, cited 0 times
Website
TCGA-LGG
TCGA-GBM
Radiomics
Radiogenomics
IDH mutation
glioma
Magnetic Resonance Imaging (MRI)
DNA methylation profiling
Brain tumor
Classification
supervised deep neural network
Background: Radiogenomic studies of adult-type diffuse gliomas have used magnetic resonance imaging (MRI) data to infer tumor attributes, including abnormalities such as IDH-mutation status and 1p19q deletion. This approach is effective but does not generalize to tumor types that lack highly recurrent alterations. Tumors have intrinsic DNA methylation patterns and can be grouped into stable methylation classes even when lacking recurrent mutations or copy number changes. The purpose of this study was to prove the principle that a tumor's DNA-methylation class could be used as a predictive feature for radiogenomic modeling.
Methods: Using a custom DNA methylation-based classification model, molecular classes were assigned to diffuse gliomas in The Cancer Genome Atlas (TCGA) dataset. We then constructed and validated machine learning models to predict a tumor's methylation family or subclass from matched multisequence MRI data using either extracted radiomic features or directly from MRI images.
Results: For models using extracted radiomic features, we demonstrated top accuracies above 90% for predicting IDH-glioma and GBM-IDHwt methylation families, IDH-mutant tumor methylation subclasses, or GBM-IDHwt molecular subclasses. Classification models utilizing MRI images directly demonstrated average accuracies of 80.6% for predicting methylation families, compared to 87.2% and 89.0% for differentiating IDH-mutated astrocytomas from oligodendrogliomas and glioblastoma molecular subclasses, respectively.
Conclusions: These findings demonstrate that MRI-based machine learning models can effectively predict the methylation class of brain tumors. Given appropriate datasets, this approach could generalize to most brain tumor types, expanding the number and types of tumors that could be used to develop radiomic or radiogenomic models.
Simulating the behaviour of glioblastoma multiforme based on patient MRI during treatments
Alonzo, Flavien
Serandour, Aurelien A.
Saad, Mazen
2022Journal Article, cited 0 times
CPTAC-GBM
Glioblastoma multiforme is a brain cancer that still shows poor prognosis for patients despite the active research for new treatments. In this work, the goal is to model and simulate the evolution of tumour-associated angiogenesis and the therapeutic response to glioblastoma multiforme. Multiple phenomena are modelled in order to fit different biological pathways, such as the cellular cycle, apoptosis, hypoxia or angiogenesis. This leads to a nonlinear system with 4 equations and 4 unknowns: the density of tumour cells, the O2 concentration, the density of endothelial cells and the vascular endothelial growth factor concentration. This system is solved numerically on a mesh fitting the geometry of the brain and the tumour of a patient based on a 2D slice of MRI. We show that our numerical scheme is positive, and we give the energy estimates on the discrete solution to ensure its existence. The numerical scheme uses nonlinear control volume finite elements in space and is implicit in time. Numerical simulations have been done using the different standard treatments: surgery, chemotherapy and radiotherapy, in order to conform to the behaviour of a tumour in response to treatments according to empirical clinical knowledge. We find that our theoretical model exhibits realistic behaviours.
Nakagami-Fuzzy imaging framework for precise lesion segmentation in MRI
Alpar, Orcan
Dolezal, Rafael
Ryska, Pavel
Krejcar, Ondrej
Pattern Recognition2022Journal Article, cited 0 times
Website
CPTAC-GBM
MRI
Segmentation
GLCM and CNN Deep Learning Model for Improved MRI Breast Tumors Detection
Alsalihi, Aya A
Aljobouri, Hadeel K
ALTameemi, Enam Azez Khalel
International Journal of Online & Biomedical Engineering2022Journal Article, cited 0 times
Website
BREAST-DIAGNOSIS
Deep convolution neural network
Leukemia Detection Performance: A Comparative Study of EfficientNetB3 and EfficientNetB5
Leukemia, a complex hematologic malignancy, demands accurate and timely detection for effective treatment. This study investigates the performance of two deep learning models, EfficientNetB3 and EfficientNetB5, in the context of leukemia detection. Leveraging a dataset comprising diverse leukemia cell images, we conduct a comprehensive comparative analysis to evaluate the efficacy of these models. Our study assesses detection sensitivity, specificity, and overall accuracy through careful experimentation and performance-metric assessment. The findings highlight notable differences in detection capabilities between EfficientNetB3 and EfficientNetB5, shedding light on their strengths and weaknesses. Insights gathered from this research endeavor not only contribute to advancing the field of leukemia detection but also offer valuable guidance for healthcare practitioners and researchers aiming to leverage deep learning techniques for improved disease diagnosis and management.
Leukemia Classification Using EfficientNetB5: A Deep Learning Approach
Alshoraihy, Aseel
Ibrahim, Anagheem
Issa, Housam Hasan Bou
2024Conference Paper, cited 0 times
C-NMC 2019
Pathomics
Deep Learning
Blood cancer
Convolutional Neural Network (CNN)
Classification
Computer Aided Detection (CADe)
Computer Aided Diagnosis (CADx)
Leukemia is a critical disease that requires early and accurate diagnosis. Leukemia is a type of blood cancer that mainly occurs when the bone marrow produces excess white blood cells in the human body. This disease affects adults and is a common cancer type among children. This paper presents a deep-learning approach using EfficientNetB5 to classify leukemia using The Cancer Imaging Archive (TCIA) data with more than 10,000 images from 118 patients. The reported confusion matrix will contribute to improving research on diagnosing cancer.
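A typical way to set up the transfer-learning model described above, assuming recent torchvision's EfficientNet-B5; the head size, freezing policy, and input resolution are illustrative, since the paper's exact training configuration is not given here.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# ImageNet-pretrained EfficientNet-B5 backbone.
model = models.efficientnet_b5(weights=models.EfficientNet_B5_Weights.DEFAULT)

for p in model.parameters():          # freeze the pretrained feature extractor
    p.requires_grad = False

# Replace the classification head for two classes (leukemic vs. normal cells).
in_features = model.classifier[1].in_features
model.classifier[1] = nn.Linear(in_features, 2)

x = torch.randn(4, 3, 456, 456)       # B5's nominal input resolution
print(model(x).shape)                 # -> torch.Size([4, 2])
```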
Liver, kidney and spleen segmentation from CT scans and MRI with deep learning: A survey
Altini, Nicola
Prencipe, Berardino
Cascarano, Giacomo Donato
Brunetti, Antonio
Brunetti, Gioacchino
Triggiani, Vito
Carnimeo, Leonarda
Marino, Francescomaria
Guerriero, Andrea
Villani, Laura
Scardapane, Arnaldo
Bevilacqua, Vitoantonio
Neurocomputing2022Journal Article, cited 0 times
CT-ORG
Pancreas-CT
Deep Learning approaches for automatic segmentation of organs from CT scans and MRI are providing promising results, leading towards a revolution in the radiologists' workflow. Precise delineations of abdominal organ boundaries are fundamental for a variety of purposes: surgical planning, volumetric estimation (e.g. Total Kidney Volume – TKV – assessment in Autosomal Dominant Polycystic Kidney Disease – ADPKD), and diagnosis and monitoring of pathologies. Fundamental imaging techniques exploited for these tasks are Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), which enable clinicians to perform 3D analyses of all Regions of Interest (ROIs). Among existing methods for segmentation and classification of these zones, Convolutional Neural Networks (CNNs) are emerging as the reference approach. In the last five years an enormous research effort has been devoted to the possibility of applying CNNs in Medical Imaging, resulting in more than 8000 documents on Scopus and more than 80000 results on Google Scholar. The high accuracy provided by those systems is undeniable, though there are still problems to be addressed. In this survey, major article databases, such as Scopus, were systematically investigated for different kinds of Deep Learning approaches to segmentation of abdominal organs, with a particular focus on the liver, kidneys and spleen. In this work, approaches are accurately classified both by the relevance of each organ (for instance, segmentation of the liver has specific properties compared to other organs) and by the type of computational approach, as well as the architecture of the employed network. For this purpose, a case study of segmentation for each of these organs is presented.
Interpretable radiomics method for predicting human papillomavirus status in oropharyngeal cancer using Bayesian networks
Altinok, Oya
Guvenis, Albert
Physica Medica2023Journal Article, cited 0 times
Oropharyngeal-Radiomics-Outcomes
Human Papillomavirus Viruses
OBJECTIVES: To develop a simple interpretable Bayesian Network (BN) to classify HPV status in patients with oropharyngeal cancer.
METHODS: Two hundred forty-six patients, 216 of whom were HPV positive, were used in this study. We extracted 851 radiomics markers from patients' contrast-enhanced Computed Tomography (CT) images and applied the Mens eX Machina (MXM) approach for feature selection. The area under the curve (AUC) was used to assess BN model performance on the 30% of the data reserved for testing. A Support Vector Machine (SVM) based method was also implemented for comparison purposes.
RESULTS: The Mens eX Machina (MXM) approach selected two most relevant predictors: sphericity and max2DDiameterRow. Areas under the Curves (AUC) were found 0.78 and 0.72 on the training and test data, respectively. When using support vector machine (SVM) and 25 features, the AUC was found 0.83 on the test data.
CONCLUSIONS: The straightforward structure and power of interpretability of our BN model will help clinicians make treatment decisions and enable the non-invasive detection of HPV status from contrast-enhanced CT images. Higher accuracy can be obtained using more complex structures at the expense of lower interpretability.
ADVANCES IN KNOWLEDGE: Radiomics is being studied lately as a simple imaging data based HPV status detection technique which can be an alternative to laboratory approaches. However, it generally lacks interpretability. This work demonstrated the feasibility of using Bayesian networks based radiomics for predicting HPV positivity in an interpretable way.
Robust Detection of Circles in the Vessel Contours and Application to Local Probability Density Estimation
Identifying Cross-Scale Associations between Radiomic and Pathomic Signatures of Non-Small Cell Lung Cancer Subtypes: Preliminary Results
Alvarez-Jimenez, Charlems
Sandino, Alvaro A.
Prasanna, Prateek
Gupta, Amit
Viswanath, Satish E.
Romero, Eduardo
Cancers2020Journal Article, cited 0 times
NSCLC-Radiomics-Genomics
(1) Background: Despite the complementarity between radiology and histopathology, both from a diagnostic and a prognostic perspective, quantitative analyses of these modalities are usually performed in disconnected silos. This work presents initial results for differentiating two major non-small cell lung cancer (NSCLC) subtypes by exploring cross-scale associations between Computed Tomography (CT) images and corresponding digitized pathology images. (2) Methods: The analysis comprised three phases, (i) a multi-resolution cell density quantification to identify discriminant pathomic patterns for differentiating adenocarcinoma (ADC) and squamous cell carcinoma (SCC), (ii) radiomic characterization of CT images by using Haralick descriptors to quantify tumor textural heterogeneity as represented by gray-level co-occurrences to discriminate the two pathological subtypes, and (iii) quantitative correlation analysis between the multi-modal features to identify potential associations between them. This analysis was carried out using two publicly available digitized pathology databases (117 cases from TCGA and 54 cases from CPTAC) and a public radiological collection of CT images (101 cases from NSCLC-R). (3) Results: The top-ranked cell density pathomic features from the histopathology analysis were correlation, contrast, homogeneity, sum of entropy and difference of variance; which yielded a cross-validated AUC of 0.72 ± 0.02 on the training set (CPTAC) and hold-out validation AUC of 0.77 on the testing set (TCGA). Top-ranked co-occurrence radiomic features within NSCLC-R were contrast, correlation and sum of entropy which yielded a cross-validated AUC of 0.72 ± 0.01. Preliminary but significant cross-scale associations were identified between cell density statistics and CT intensity values using matched specimens available in the TCGA cohort, which were used to significantly improve the overall discriminatory performance of radiomic features in differentiating NSCLC subtypes (AUC = 0.78 ± 0.01). (4) Conclusions: Initial results suggest that cross-scale associations may exist between digital pathology and CT imaging which can be used to identify relevant radiomic and histopathology features to accurately distinguish lung adenocarcinomas from squamous cell carcinomas.
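The co-occurrence (Haralick-style) descriptors named in the abstract above, such as contrast, correlation, and homogeneity, are derived from a gray-level co-occurrence matrix. A scikit-image sketch on a synthetic patch follows; the distances and angles are arbitrary choices, and note that "sum of entropy" is not built into graycoprops and would need a custom reduction.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Stand-in for an 8-bit tumor patch from a CT scan.
patch = np.random.randint(0, 256, (64, 64), dtype=np.uint8)

glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

for prop in ("contrast", "correlation", "homogeneity", "energy"):
    # graycoprops returns one value per (distance, angle) pair; average them.
    print(prop, graycoprops(glcm, prop).mean())
```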
Fully Automatic Deep Learning Framework for Pancreatic Ductal Adenocarcinoma Detection on Computed Tomography
Alves, N.
Schuurmans, M.
Litjens, G.
Bosma, J. S.
Hermans, J.
Huisman, H.
Cancers (Basel)2022Journal Article, cited 0 times
Website
Pancreas-CT
Deep Learning
U-Net
Pancreatic ductal adenocarcinoma
PANCREAS
Early detection improves prognosis in pancreatic ductal adenocarcinoma (PDAC), but is challenging as lesions are often small and poorly defined on contrast-enhanced computed tomography scans (CE-CT). Deep learning can facilitate PDAC diagnosis; however, current models still fail to identify small (<2 cm) lesions. In this study, state-of-the-art deep learning models were used to develop an automatic framework for PDAC detection, focusing on small lesions. Additionally, the impact of integrating the surrounding anatomy was investigated. CE-CT scans from a cohort of 119 pathology-proven PDAC patients and a cohort of 123 patients without PDAC were used to train a nnUnet for automatic lesion detection and segmentation (nnUnet_T). Two additional nnUnets were trained to investigate the impact of anatomy integration: (1) segmenting the pancreas and tumor (nnUnet_TP), and (2) segmenting the pancreas, tumor, and multiple surrounding anatomical structures (nnUnet_MS). An external, publicly available test set was used to compare the performance of the three networks. The nnUnet_MS achieved the best performance, with an area under the receiver operating characteristic curve of 0.91 for the whole test set and 0.88 for tumors <2 cm, showing that state-of-the-art deep learning can detect small PDAC and benefits from anatomy information.
Comparative Analysis of Lossless Image Compression Algorithms based on Different Types of Medical Images
In the medical field, there is a demand for high-speed transmission and efficient storage of medical images between healthcare organizations, so image compression techniques are essential. In this study, we conducted an experimental comparison between two well-known lossless algorithms: the lossless Discrete Cosine Transform (DCT) and the lossless Haar Wavelet Transform (HWT). The comparison covers three datasets containing different types of medical images (MRI, CT, and gastrointestinal endoscopic images) in different formats (PNG, JPG, and TIF). In terms of compressed image size and compression ratio, DCT outperforms HWT for the PNG and TIF formats, which represent the grey-scale CT and color MRI images. For the JPG format, which represents the color gastrointestinal endoscopic images, DCT performs well on grey-scale images, whereas HWT outperforms DCT on color images. However, HWT outperforms DCT in compression time across all image types and formats.
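As a rough illustration of the two transforms compared above, the following sketch (an assumption, not the paper's implementation) contrasts how compactly a 2-D DCT and a single-level Haar wavelet transform concentrate image energy, a crude proxy for compressibility:

```python
# Minimal sketch: energy compaction of a 2-D DCT vs. a Haar wavelet transform
# on a stand-in grey-scale image.
import numpy as np
from scipy.fft import dctn
import pywt

def energy_compaction(coeffs, keep=0.05):
    """Fraction of total energy captured by the largest `keep` share of coefficients."""
    flat = np.sort(np.abs(coeffs.ravel()))[::-1]
    k = max(1, int(keep * flat.size))
    return (flat[:k] ** 2).sum() / (flat ** 2).sum()

image = np.random.rand(256, 256)          # stand-in for a grey-scale CT/MRI slice

dct_coeffs = dctn(image, norm='ortho')
cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')
haar_coeffs = np.concatenate([c.ravel() for c in (cA, cH, cV, cD)])

print('DCT  energy in top 5% coeffs:', energy_compaction(dct_coeffs))
print('Haar energy in top 5% coeffs:', energy_compaction(haar_coeffs))
```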
SAM-UNETR: Clinically Significant Prostate Cancer Segmentation Using Transfer Learning From Large Model
Alzate-Grisales, Jesus Alejandro
Mora-Rubio, Alejandro
García-García, Francisco
Tabares-Soto, Reinel
De La Iglesia-Vayá, Maria
IEEE Access2023Journal Article, cited 0 times
PROSTATEx
prostate
Deep Learning
Prostate cancer (PCa) is one of the leading causes of cancer-related mortality among men worldwide. Accurate and efficient segmentation of clinically significant prostate cancer (csPCa) regions from magnetic resonance imaging (MRI) plays a crucial role in diagnosis, treatment planning, and monitoring of the disease; however, this is a challenging task even for specialized clinicians. This study presents SAM-UNETR, a novel model for segmenting csPCa regions from MRI images. SAM-UNETR combines a transformer encoder from the Segment Anything Model (SAM), a versatile segmentation model trained on 11 million images, with a residual-convolution decoder inspired by UNETR. The model uses multiple image modalities and applies prostate zone segmentation, normalization, and data augmentation as preprocessing steps. The performance of SAM-UNETR is compared with three other models using the same strategy and preprocessing. The results show that SAM-UNETR achieves superior reliability and accuracy in csPCa segmentation, especially when using transfer learning for the image encoder, demonstrating the adaptability of large-scale models to different tasks. SAM-UNETR attains a Dice score of 0.467 and an AUROC of 0.77 for csPCa prediction.
Kidney Tumor Detection and Classification Based on Deep Learning Approaches: A New Dataset in CT Scans
Alzu’bi, Dalia
Abdullah, Malak
Hmeidi, Ismail
AlAzab, Rami
Gharaibeh, Maha
El-Heis, Mwaffaq
Almotairi, Khaled H.
Forestiero, Agostino
Hussein, Ahmad MohdAziz
Abualigah, Laith
Kumar, Senthil
Journal of Healthcare Engineering2022Journal Article, cited 0 times
Website
TCGA-KIRC
TCGA-KICH
TCGA-KIRP
CPTAC-CCRCC
Algorithm Development
Classification
C4KC-KiTS
Retrospective Studies
Kidney tumor (KT) is one of the diseases affecting our society and is the seventh most common tumor in both men and women worldwide. Early detection of KT has significant benefits in reducing death rates, enabling preventive measures, and overcoming the tumor. Compared to the tedious and time-consuming traditional diagnosis, automatic deep learning (DL) detection algorithms can save diagnosis time, improve test accuracy, reduce costs, and reduce the radiologist's workload. In this paper, we present detection models for diagnosing the presence of KTs in computed tomography (CT) scans. Toward detecting and classifying KT, we proposed 2D-CNN models: three models for KT detection, namely a 2D convolutional neural network with six layers (CNN-6), a ResNet50 with 50 layers, and a VGG16 with 16 layers, and a fourth model for KT classification, a 2D convolutional neural network with four layers (CNN-4). In addition, a novel dataset was collected from the King Abdullah University Hospital (KAUH), consisting of 8,400 images from 120 adult patients who underwent CT scans for suspected kidney masses. The dataset was divided into 80% for training and 20% for testing. The accuracy results for the detection models CNN-6, ResNet50, and VGG16 reached 97%, 96%, and 60%, respectively, while the accuracy of the CNN-4 classification model reached 92%. Our novel models achieved promising results; they enhance the diagnosis of patient conditions with high accuracy, reducing the radiologist's workload and providing a tool that can automatically assess the condition of the kidneys, reducing the risk of misdiagnosis. Furthermore, increasing the quality of healthcare service and early detection can change the disease's track and preserve the patient's life.
Transferable HMM probability matrices in multi‐orientation geometric medical volumes segmentation
AlZu'bi, Shadi
AlQatawneh, Sokyna
ElBes, Mohammad
Alsmirat, Mohammad
Concurrency and Computation: Practice and Experience2019Journal Article, cited 0 times
Website
Head-Neck Cetuximab
Hidden Markov Model
Segmentation
machine learning
High error rates, low segmentation quality, and time complexity are major problems in image segmentation that need to be addressed. A variety of acceleration techniques have been applied and achieve real-time results, but they remain limited in 3D. HMM is one of the statistical techniques that has recently played a significant role; its main drawback is time complexity, which has been addressed using different accelerators. In this research, we propose a methodology for transferring HMM matrices from one image to another, skipping the training time for the rest of the 3D volume: one HMM is trained and generalized to the whole volume. The concepts behind multi-orientation geometric segmentation are employed to improve the quality of HMM segmentation. Axial, sagittal, and coronal orientations are considered individually and together to achieve accurate segmentation results in less processing time and with superior detection accuracy.
Multi-orientation geometric medical volumes segmentation using 3D multiresolution analysis
AlZu’bi, Shadi
Jararweh, Yaser
Al-Zoubi, Hassan
Elbes, Mohammed
Kanan, Tarek
Gupta, Brij
Multimedia Tools and Applications2018Journal Article, cited 40 times
Website
Lung Phantom
QIN-LungCT-Seg
Medical images have a very significant impact on the diagnosis and treatment of patient ailments and on radiology applications. For many reasons, processing medical images can greatly improve the quality of radiologists' work. While 2D models have been in use for medical applications for decades, widespread utilization of 3D models appeared only in recent years. The work proposed in this paper aims to segment medical volumes under various conditions and in different axis representations. We propose an algorithm for segmenting medical volumes based on multiresolution analysis, considering different reconstructed versions of the 3D volume to obtain robust and accurate segmentation results. The proposed algorithm is validated using real medical and phantom data. Processing time, segmentation accuracy on predefined data sets, and radiologists' opinions were the key factors for method validation.
Automatic detection of main pancreatic duct dilation and pancreatic parenchymal atrophy based on a shape feature in abdominal contrast-enhanced CT images
Ambo, Shintaro
Hirano, Ryo
Hattori, Chihiro
Journal of Medical Imaging2025Journal Article, cited 0 times
Website
CTPRED-SUNITINIB-PANNET
TCGA-LIHC
PANCREAS-CT
Imaging Biomarker Ontology (IBO): A Biomedical Ontology to Annotate and Share Imaging Biomarker Data
Amdouni, Emna
Gibaud, Bernard
Journal on Data Semantics2018Journal Article, cited 0 times
Website
TCGA-GBM
dicom
Biomarker Retrieval and Knowledge Reasoning System (BiomRKRS)
ontology
Multi-resolution 3D CNN for MRI Brain Tumor Segmentation and Survival Prediction
In this study, an automated three-dimensional (3D) deep segmentation approach for detecting gliomas in 3D pre-operative MRI scans is proposed, followed by a classification algorithm based on random forests for survival prediction. The objective is to segment the glioma area and produce segmentation labels for its different sub-regions, i.e., the necrotic and non-enhancing tumor core, the peritumoral edema, and the enhancing tumor. The proposed deep architecture for the segmentation task encompasses two parallel streams with two different resolutions: one deep convolutional neural network learns local features of the input data while the other makes a global observation of the whole image. Deemed complementary, the outputs of the two streams are merged to provide complete ensemble learning of the input image. The proposed network takes the whole image as input instead of using patch-based approaches, in order to consider semantic features throughout the whole volume. The algorithm is trained on BraTS 2019, which includes 335 training cases, and validated on 127 unseen cases from the validation dataset using a blind testing approach. The proposed method was also evaluated on the BraTS 2019 challenge test dataset of 166 cases. The results show that the proposed methods provide promising segmentations as well as survival prediction. The mean Dice overlap measures of automatic brain tumor segmentation for the validation set were 0.86, 0.77, and 0.71 for the whole tumor, core, and enhancing tumor, respectively; the corresponding results for the challenge test dataset were 0.82, 0.72, and 0.70. The overall accuracy of the proposed model for the survival prediction task is 55% on the validation and 49% on the test dataset.
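For reference, a minimal implementation of the Dice overlap measure reported above, on placeholder binary masks (not BraTS volumes):

```python
# Minimal sketch of the Dice similarity coefficient (DSC) on binary masks.
import numpy as np

def dice(pred, truth, eps=1e-7):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

pred = np.zeros((64, 64, 64), dtype=np.uint8); pred[20:40, 20:40, 20:40] = 1
truth = np.zeros_like(pred);                   truth[25:45, 22:42, 20:40] = 1
print(f"DSC = {dice(pred, truth):.3f}")
```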
Fuzzy information granulation towards benign and malignant lung nodules classification
Amini, Fatemeh
Amjadifard, Roya
Mansouri, Azadeh
Computer Methods and Programs in Biomedicine Update2024Journal Article, cited 0 times
Website
LIDC-IDRI
SPIE-AAPM Lung CT Challenge
Algorithm Development
Classification
Radiomic features
Lung cancer is the second most common cancer and has the highest death rate in the world. Diagnosis in the early stages is a critical factor for speeding up treatment. This paper proposes a new machine learning method based on a fuzzy approach to distinguish benign from malignant lung nodules, enabling early diagnosis of lung cancer from computed tomography (CT) images. First, the lung nodule images are pre-processed via the Gabor wavelet transform. Then, texture features are extracted from the transformed domain based on the statistical characteristics and histogram of the local patterns of the images. Finally, based on the fuzzy information granulation (FIG) method, which is widely recognized for its ability to distinguish between similar textures, a FIG-based classifier is introduced to classify benign and malignant lung nodules. The clinical data set used for this research is a combination of 150 CT scans from the LIDC and SPIE-AAPM data sets; the LIDC data set is also analyzed alone. The results show that the proposed method can be an innovative alternative for classifying benign and malignant nodules in CT images.
Hybrid Mass Detection in Breast MRI Combining Unsupervised Saliency Analysis and Deep Learning
To interpret a breast MRI study, a radiologist has to examine over 1000 images, and integrate spatial and temporal information from multiple sequences. The automated detection and classification of suspicious lesions can help reduce the workload and improve accuracy. We describe a hybrid mass-detection algorithm that combines unsupervised candidate detection with deep learning-based classification. The detection algorithm first identifies image-salient regions, as well as regions that are cross-salient with respect to the contralateral breast image. We then use a convolutional neural network (CNN) to classify the detected candidates into true-positive and false-positive masses. The network uses a novel multi-channel image representation; this representation encompasses information from the anatomical and kinetic image features, as well as saliency maps. We evaluated our algorithm on a dataset of MRI studies from 171 patients, with 1957 annotated slices of malignant (59%) and benign (41%) masses. Unsupervised saliency-based detection provided a sensitivity of 0.96 with 9.7 false-positive detections per slice. Combined with CNN classification, the number of false positive detections dropped to 0.7 per slice, with 0.85 sensitivity. The multi-channel representation achieved higher classification performance compared to single-channel images. The combination of domain-specific unsupervised methods and general-purpose supervised learning offers advantages for medical imaging applications, and may improve the ability of automated algorithms to assist radiologists.
CellSighter: a neural network to classify cells in highly multiplexed images
Amitay, Yael
Bussi, Yuval
Feinstein, Ben
Bagon, Shai
Milo, Idan
Keren, Leeat
Nature Communications2023Journal Article, cited 0 times
CRC_FFPE-CODEX_CellNeighs
Machine Learning
Multiplexed imaging enables measurement of multiple proteins in situ, offering an unprecedented opportunity to chart various cell types and states in tissues. However, cell classification, the task of identifying the type of individual cells, remains challenging, labor-intensive, and limiting to throughput. Here, we present CellSighter, a deep-learning based pipeline to accelerate cell classification in multiplexed images. Given a small training set of expert-labeled images, CellSighter outputs the label probabilities for all cells in new images. CellSighter achieves over 80% accuracy for major cell types across imaging platforms, which approaches inter-observer concordance. Ablation studies and simulations show that CellSighter is able to generalize its training data and learn features of protein expression levels, as well as spatial features such as subcellular expression patterns. CellSighter’s design reduces overfitting, and it can be trained with only thousands or even hundreds of labeled examples. CellSighter also outputs a prediction confidence, allowing downstream experts control over the results. Altogether, CellSighter drastically reduces hands-on time for cell classification in multiplexed images, while improving accuracy and consistency across datasets.
Breast Cancer Response Prediction in Neoadjuvant Chemotherapy Treatment Based on Texture Analysis
Ammar, Mohammed
Mahmoudi, Saïd
Stylianos, Drisis
Procedia Computer Science2016Journal Article, cited 2 times
Website
QIN Breast DCE-MRI
texture analysis
Computer Aided Diagnosis (CADx)
BREAST
MRI is one of the most common techniques used for diagnosis and treatment planning of breast cancer. The aim of this study is to show that texture-based features, such as co-occurrence matrix features extracted from MRI images, can be used to quantify tumor treatment response. To this end, we use a dataset composed of two breast MRI examinations for each of 9 patients, three of whom were responders and six non-responders. The first exam was acquired before the initiation of treatment (baseline); the latter was acquired after the first cycle of chemotherapy (control). A set of texture parameters was selected and calculated for each exam: cluster shade, dissimilarity, entropy, and homogeneity. The p-values estimated for pathologic complete responders (pCR) and non-pathologic complete responders (pNCR) show that homogeneity (p = 0.027) and cluster shade (p = 0.0013) are the parameters most relevant to pathologic complete response.
A Predictive Clinical-Radiomics Nomogram for Survival Prediction of Glioblastoma Using MRI
Ammari, Samy
Sallé de Chou, Raoul
Balleyguier, Corinne
Chouzenoux, Emilie
Touat, Mehdi
Quillent, Arnaud
Dumont, Sarah
Bockel, Sophie
Garcia, Gabriel C. T. E.
Elhaik, Mickael
Francois, Bidault
Borget, Valentin
Lassau, Nathalie
Khettab, Mohamed
Assi, Tarek
Diagnostics2021Journal Article, cited 8 times
Website
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Glioblastoma Multiforme (GBM)
Machine Learning
Radiomics
Glioblastoma (GBM) is the most common and aggressive primary brain tumor in adult patients, with a median survival of around one year. Prediction of survival outcomes in GBM patients could represent a huge step in treatment personalization. The objective of this study was to develop machine learning (ML) algorithms for survival prediction of GBM patients. We identified a radiomic signature on a training set composed of data from the 2019 BraTS challenge (210 patients) from MRI retrieved at diagnosis. Then, using this signature along with the age of the patients to train classification models, we obtained test-set AUCs of 0.85, 0.74, and 0.58 (0.92, 0.88, and 0.75 on the training sets) for survival at 9, 12, and 15 months, respectively. This signature was then validated on an independent cohort of 116 GBM patients with confirmed disease relapse for predicting which patients survive less or more than the median OS of 22 months; our model achieved an AUC of 0.71 (0.65 on the training set). The Kaplan-Meier method showed a significant OS difference between groups (log-rank p = 0.05). These results suggest that radiomic signatures may improve survival outcome predictions in GBM, thus creating a solid clinical tool for tailoring therapy in this population.
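A minimal sketch of the Kaplan-Meier/log-rank comparison described above, using the lifelines package on synthetic risk groups (the data are placeholders, not the study cohort):

```python
# Minimal sketch: compare overall survival between two predicted risk groups
# with Kaplan-Meier curves and a log-rank test.  Requires `lifelines`.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
t_low = rng.exponential(30, size=50)     # survival in months, predicted low-risk group
t_high = rng.exponential(18, size=50)    # survival in months, predicted high-risk group
e_low = rng.random(50) < 0.8             # 1 = death observed, 0 = censored
e_high = rng.random(50) < 0.8

km = KaplanMeierFitter()
km.fit(t_low, event_observed=e_low, label="low risk")
print("median OS (low risk):", km.median_survival_time_)

res = logrank_test(t_low, t_high, event_observed_A=e_low, event_observed_B=e_high)
print(f"log-rank p = {res.p_value:.3g}")
```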
Medical Image Classification Algorithm Based on Weight Initialization-Sliding Window Fusion Convolutional Neural Network
An, Feng-Ping
Complexity2019Journal Article, cited 0 times
Website
Radiomics
CT
Classification
ADNI
OASIS
Convolutional Neural Network (CNN)
Due to the complexity of medical images, traditional medical image classification methods have been unable to meet actual application needs. In recent years, the rapid development of deep learning theory has provided a technical approach for solving medical image classification tasks. However, deep learning faces the following problems in medical image classification. First, it is difficult to construct a deep learning model hierarchy suited to medical image properties; second, the network initialization weights of deep learning models are not well optimized. Therefore, this paper starts from the perspective of network optimization and improves the nonlinear modeling ability of the network through optimization methods. A new network weight initialization method is proposed, which alleviates the problem that existing deep learning model initialization is limited by the type of nonlinear unit adopted, and increases the potential of the neural network to handle different visual tasks. Moreover, through an in-depth study of the multicolumn convolutional neural network framework, this paper finds that the number of features and the convolution kernel size differ across levels of a convolutional neural network. Building on this, the proposed method constructs convolutional neural network models that adapt better to the characteristics of the medical images of interest and can thus better train the resulting heterogeneous multicolumn convolutional neural networks. Finally, using the adaptive sliding window fusion mechanism proposed in this paper, both methods jointly complete the classification task of medical images. Based on the above ideas, this paper proposes a medical image classification algorithm based on weight initialization and sliding window fusion for multilevel convolutional neural networks. The proposed methods were applied to breast mass, brain tumor tissue, and medical image database classification experiments. The results show that the proposed method not only achieves higher average accuracy than traditional machine learning and other deep learning methods but is also more stable and more robust.
Detection of Leukemia Using Convolutional Neural Network
Anagha, V.
Disha, A.
Aishwarya, B. Y.
Nikkita, R.
Biradar, Vidyadevi G.
2022Book Section, cited 0 times
C-NMC 2019
Pathomics
Computer Aided Detection (CADe)
Leukemia
Convolutional Neural Network (CNN)
Keras
TensorFlow
Deep Learning
Leukemia, commonly known as blood cancer, is a fatal type of cancer that affects white blood cells. It usually originates in the bone marrow and causes the development of abnormal blood cells called blasts. Diagnosis is made by blood tests and bone marrow biopsy, which involve manual work and are time-consuming, so there is a need for an automatic tool for the detection of white blood cell cancer. Therefore, in this work, a classification model based on a convolutional neural network with deep learning techniques is proposed. The work was implemented using the Keras library with TensorFlow as the backend. The model was trained and evaluated on the cancer cell dataset C_NMC_2019, which includes white blood cell regions segmented from microscopic blood smear images, and achieves a satisfactory accuracy of 91% for training and 87% for testing.
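A minimal sketch of the kind of Keras/TensorFlow CNN classifier the entry describes; the architecture and input size are assumptions, not the authors' exact model:

```python
# Minimal sketch: a small binary CNN classifier in Keras/TensorFlow.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),            # segmented blood-cell image
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),        # leukemic vs. normal cell
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, validation_split=0.1, epochs=10)
```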
Application of Fuzzy c-means and Neural networks to categorize tumor affected breast MR Images
Anand, Shruthi
Vinod, Viji
Rampure, Anand
International Journal of Applied Engineering Research2015Journal Article, cited 4 times
Website
TCGA-BRCA
Machine learning
Brain Tumor Segmentation and Survival Prediction Using Automatic Hard Mining in 3D CNN Architecture
We utilize 3D fully convolutional neural networks (CNNs) to segment gliomas and their constituents from multimodal Magnetic Resonance Images (MRI). The architecture uses dense connectivity patterns and residual connections to reduce the number of weights, and is initialized with weights obtained from training this model on the BraTS 2018 dataset. Hard mining is done during training to focus on the difficult cases of the segmentation task, by increasing the Dice similarity coefficient (DSC) threshold used to choose hard cases as epochs increase. On the BraTS 2020 validation data (n = 125), this architecture achieved tumor core, whole tumor, and active tumor Dice scores of 0.744, 0.876, and 0.714, respectively. On the test dataset, the DSC of the tumor core and active tumor increased by approximately 7%; in terms of DSC, our network's performance on the BraTS 2020 test data is 0.775, 0.815, and 0.85 for enhancing tumor, tumor core, and whole tumor, respectively. Overall survival of a subject is determined using conventional machine learning on radiomics features obtained from the generated segmentation mask. Our approach achieved accuracies of 0.448 and 0.452 on the validation and test datasets, respectively.
Data Augmentation and Transfer Learning for Brain Tumor Detection in Magnetic Resonance Imaging
Anaya-Isaza, Andres
Mera-Jimenez, Leonel
IEEE Access2022Journal Article, cited 1 times
Website
TCGA-LGG
ResNet50
Computer Aided Detection (CADe)
The exponential growth of deep learning networks has allowed us to tackle complex tasks, even in fields as complicated as medicine. However, using these models requires a large corpus of data for the networks to be highly generalizable and performant. In this sense, data augmentation methods are widely used strategies for training networks with small data sets, and they are vital in medicine due to limited access to data; a clear example is magnetic resonance imaging in pathology scans associated with cancer. In this vein, we compare the effect of several conventional data augmentation schemes on the ResNet50 network for brain tumor detection, and include our own strategy based on principal component analysis. Training was performed both from scratch and with transfer learning from the ImageNet dataset. The investigation achieved an F1 detection score of 92.34%, obtained with the ResNet50 network using the proposed method and transfer learning. In addition, the Kruskal-Wallis test showed that the proposed method differs from the other conventional methods at a significance level of 0.05.
Comparison of Current Deep Convolutional Neural Networks for the Segmentation of Breast Masses in Mammograms
Anaya-Isaza, Andrés
Mera-Jiménez, Leonel
Cabrera-Chavarro, Johan Manuel
Guachi-Guachi, Lorena
Peluffo-Ordóñez, Diego
Rios-Patiño, Jorge Ivan
IEEE Access2021Journal Article, cited 0 times
CBIS-DDSM
Breast cancer causes approximately 684,996 deaths worldwide, making it the leading cause of female cancer mortality. However, these figures can be reduced with early diagnosis through mammographic imaging, allowing for the timely and effective treatment of this disease. To establish the best tools for contributing to the automatic diagnosis of breast cancer, different deep learning (DL) architectures were compared in terms of breast lesion segmentation, lesion type classification, and degree of suspicion of malignancy tests. The tasks were completed with state-of-the-art architectures and backbones. Initially, during segmentation, the base UNet, Visual Geometry Group 19 (VGG19), InceptionResNetV2, EfficientNet, MobileNetv2, ResNet, ResNeXt, MultiResUNet, linkNet-VGG19, DenseNet, SEResNet and SeResNeXt architectures were compared, where “Res” denotes a residual network. In addition, training was performed with 5 of the most advanced loss functions and validated by the Dice coefficient, sensitivity, and specificity. The proposed models achieved Dice values above 90%, with the EfficientNet architecture achieving 94.75% and 99% accuracy on the two tasks. Subsequently, classification was addressed with the ResNet50V2, VGG19, InceptionResNetV2, DenseNet121, InceptionV3, Xception and EfficientNetB7 networks. The proposed models achieved 96.97% and 97.73% accuracy through the VGG19 and ResNet50V2 networks on the lesion classification and degree of suspicion tasks, respectively. All three tasks were addressed with open-access databases, including the Digital Database for Screening Mammography (DDSM), the Mammographic Image Analysis Society (MIAS) database, and INbreast.
Multi-modality GLCM image texture feature for segmentation and tissue classification
Andrade, Diego
Gifford, Howard C.
Das, Mini
2023Conference Proceedings, cited 0 times
Duke-Breast-Cancer-MRI
Humans and computer observer models often rely on feature analysis from a single imaging modality. We examine the benefits of new features that assist in image classification and detection of malignancies in MRI and X-ray tomographic images. While the image formation principles differ between these modalities, there are common features, such as contrast, that are often employed by humans (radiologists) in each of them when making decisions. We also examine features that may not be as well understood or explored, such as grey level co-occurrence matrix (GLCM) texture features. As preliminary data, we show the utility of some of these features along with classification methods aided by Gaussian mixture models (GMM) and fuzzy C-means dimensionality reduction. GLCM maps characterize image texture and provide a numerical and spatial description of the texture signatures present in an image. We present pathways for using these in tissue classification, segmentation, and the development of task-based assessments.
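A minimal sketch of GLCM texture feature extraction with scikit-image (an assumed implementation; the paper does not tie itself to this library):

```python
# Minimal sketch: grey level co-occurrence matrix (GLCM) texture features.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

image = (np.random.rand(128, 128) * 255).astype(np.uint8)  # stand-in for an MRI/X-ray slice

# Co-occurrence matrix at one-pixel offset in four directions.
glcm = graycomatrix(image, distances=[1],
                    angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                    levels=256, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()
            for prop in ('contrast', 'dissimilarity', 'homogeneity',
                         'correlation', 'energy')}
print(features)
```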
A Multi Brain Tumor Classification Using a Deep Reinforcement Learning Model
A brain tumor is a disease in which abnormal cells grow in the human brain; tumors of different types also occur in the spinal cord. Doctors use several techniques to treat these tumors, so the first task is to classify the different tumor types and give the corresponding treatment. In general, Magnetic Resonance Imaging (MRI) is used to determine whether a tumor is present in an image and to identify its position. Tumors are either benign or malignant: benign tumors are non-cancerous and can be treated with medication, while malignant tumors cannot be cured with medication and can lead to death. Manual MRI evaluation takes more time and varies between doctors, which motivates deep learning techniques for classifying brain tumor images. The model uses a convolutional neural network for feature extraction and for classifying brain tumor images into glioma, meningioma, and pituitary classes, on a dataset of 3,064 images covering these three tumor types. A reinforcement learning mechanism, defined in terms of agent, reward, policy, and state and implemented with a Deep Q-network, is used to classify the images with better accuracy. Reinforcement learning achieved higher classification accuracy than supervised and unsupervised mechanisms, increasing brain tumor classification accuracy to 95.4% compared with supervised learning. The results indicate improved classification of brain tumors.
Imaging Genomics in Glioblastoma Multiforme: A Predictive Tool for Patients Prognosis, Survival, and Outcome
Anil, Rahul
Colen, Rivka R
Magnetic Resonance Imaging Clinics of North America2016Journal Article, cited 3 times
Website
Radiogenomics
Glioblastoma Multiforme (GBM)
The integration of imaging characteristics and genomic data has started a new trend in the management of glioblastoma (GBM). Many ongoing studies are investigating imaging phenotypic signatures that could explain more about the behavior of GBM and its outcome. The discovery of biomarkers has played an adjuvant role in treating and predicting the outcome of patients with GBM. Discovering these imaging phenotypic signatures and dysregulated pathways/genes is required to engineer treatment based on specific GBM manifestations. Characterizing these parameters will establish well-defined criteria, so researchers can build on the treatment of GBM through personalized medicine.
Lung nodule detection using 3D convolutional neural networks trained on weakly labeled data
Anirudh, Rushil
Thiagarajan, Jayaraman J
Bremer, Timo
Kim, Hyojin
2016Conference Proceedings, cited 33 times
Website
Convolutional Neural Network (CNN)
LUNG
A deep learning study on osteosarcoma detection from histological images
Anisuzzaman, D.M.
Barzekar, Hosein
Tong, Ling
Luo, Jake
Yu, Zeyun
Biomedical Signal Processing and Control2021Journal Article, cited 0 times
Osteosarcoma-Tumor-Assessment
In the U.S., 5–10% of new pediatric cancer cases are primary bone tumors, the most common type of which is osteosarcoma. The intention of the present work is to improve the detection and diagnosis of osteosarcoma using computer-aided detection (CAD) and diagnosis (CADx). Tools such as convolutional neural networks (CNNs) can significantly decrease the surgeon's workload and improve the prognosis of patient conditions. CNNs need to be trained on a large amount of data in order to achieve trustworthy performance. In this study, transfer learning techniques with pre-trained CNNs are adapted to a public dataset of osteosarcoma histological images to distinguish necrotic images from non-necrotic and healthy tissues. First, the dataset was preprocessed and different classifications were applied. Then, transfer learning models including VGG19 and Inception V3 were trained on Whole Slide Images (WSI) without patching, to improve the accuracy of the outputs. Finally, the models were applied to different classification problems, including binary and multi-class classifiers. Experimental results show that VGG19 achieved the highest accuracy, 96%, across all binary and multi-class classification tasks. Our fine-tuned model demonstrates state-of-the-art performance in detecting the malignancy of osteosarcoma from histologic images.
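A minimal Keras transfer-learning sketch in the spirit of this entry: an ImageNet-pretrained VGG19 backbone with a new classification head; the input size and head are assumptions, not the authors' configuration:

```python
# Minimal sketch: fine-tuning a pretrained VGG19 for 3-class tissue classification.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG19(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False                      # freeze pretrained features first

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(3, activation="softmax"),  # e.g. necrotic / non-necrotic / healthy
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(images, labels, epochs=5)       # then optionally unfreeze and fine-tune
```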
Brain tumour classification using two-tier classifier with adaptive segmentation technique
Anitha, V
Murugavalli, S
IET Computer Vision2016Journal Article, cited 46 times
Website
TCGA-GBM
Radiomics
BRAIN
Texture features
Magnetic resonance imaging (MRI)
A brain tumour is a mass of tissue formed by the gradual addition of anomalous cells, and it is important to classify brain tumours from magnetic resonance imaging (MRI) for treatment. Human investigation is the routine technique for brain MRI tumour detection and classification, and interpretation of images relies on organised and explicit classification of brain MRI, for which various techniques have been proposed. Brain tumour segmentation on MRI provides information about anatomical structures and potentially abnormal tissues that are noteworthy to treat. The proposed system uses the adaptive pillar K-means algorithm for successful segmentation, and classification is done by a two-tier approach: first, a self-organising map neural network is trained on features extracted with discrete wavelet transform blend wavelets, and the resulting filter factors are subsequently trained by a K-nearest neighbour classifier; the testing process is likewise accomplished in two stages. The proposed two-tier classification system classifies brain tumours in a double training process, which gives preferable performance over traditional classification methods. The system has been validated on real data sets, and the experimental results showed enhanced performance.
A Bi-FPN-Based Encoder–Decoder Model for Lung Nodule Image Segmentation
Annavarapu, Chandra Sekhara Rao
Parisapogu, Samson Anosh Babu
Keetha, Nikhil Varma
Donta, Praveen Kumar
Rajita, Gurindapalli
Diagnostics2023Journal Article, cited 0 times
Website
QIN-LungCT-Seg
Segmentation
Algorithm Development
Computed Tomography (CT)
LUNA16 Challenge
Encoder-decoder
Early detection and analysis of lung cancer require precise and efficient lung nodule segmentation in computed tomography (CT) images. However, the indistinct shapes, visual features, and surroundings of nodules as observed in CT images pose a challenging and critical problem for robust segmentation. This article proposes a resource-efficient model architecture: an end-to-end deep learning approach for lung nodule segmentation that incorporates a Bi-FPN (bidirectional feature pyramid network) between an encoder and a decoder. Furthermore, it uses the Mish activation function and mask class weights to enhance segmentation efficiency. The proposed model was extensively trained and evaluated on the publicly available LUNA16 dataset, consisting of 1186 lung nodules. To increase the probability of assigning the correct class to each voxel in the mask, a weighted binary cross-entropy loss was utilized for each training sample. Moreover, for a further evaluation of robustness, the proposed model was evaluated on the QIN Lung CT dataset. The results show that the proposed architecture outperforms existing deep learning models such as U-Net, with Dice similarity coefficients of 82.82% and 81.66% on the two datasets.
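A minimal sketch of a class-weighted binary cross-entropy loss of the kind described above, in TensorFlow; the weighting scheme shown is an assumption, not the paper's exact formulation:

```python
# Minimal sketch: class-weighted binary cross-entropy for voxel-wise segmentation.
import tensorflow as tf

def weighted_bce(pos_weight):
    """Up-weights foreground (nodule) voxels, which are rare in lung CT masks."""
    def loss(y_true, y_pred_logits):
        return tf.reduce_mean(
            tf.nn.weighted_cross_entropy_with_logits(
                labels=y_true, logits=y_pred_logits, pos_weight=pos_weight))
    return loss

# e.g. model.compile(optimizer="adam", loss=weighted_bce(pos_weight=50.0))
```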
Integrating imaging and omics data: A review
Antonelli, Laura
Guarracino, Mario Rosario
Maddalena, Lucia
Sangiovanni, Mara
Biomedical Signal Processing and Control2019Journal Article, cited 0 times
NSCLC Radiogenomics-Stanford
We refer to omics imaging as an emerging interdisciplinary field concerned with the integration of data collected from biomedical images and omics analyses. Bringing together information coming from different sources, it permits to reveal hidden genotype–phenotype relationships, with the aim of better understanding the onset and progression of many diseases, and identifying new diagnostic and prognostic biomarkers. More in detail, biomedical images, generated by anatomical or functional techniques, are processed to extract hundreds of numerical features describing visual aspects – as in solid cancer imaging – or functional elements – as in neuroimaging. These imaging features are then complemented and integrated with genotypic and phenotypic information, such as DNA mutations, RNA expression levels, and protein abundances. Apart from the difficulties arising from imaging and omics analyses alone, the process of integrating, combining, processing, and making sense of the omics imaging data is quite challenging, owed to the heterogeneity of the sources, the high dimensionality of the resulting feature space, and the reduced availability of freely accessible, large, and well-curated datasets containing both images and omics data for each sample. In this review, we present the state of the art of omics imaging, with the aim of providing the interested reader a unique source of information, with links for further detailed information. Based on the existing literature, we describe both the omics and imaging data that have been adopted, provide a list of curated databases of integrated resources, discuss the types of adopted features, give hints on the used data analysis methods, and overview current research in this field.
The Medical Segmentation Decathlon
Antonelli, M.
Reinke, A.
Bakas, S.
Farahani, K.
Kopp-Schneider, A.
Landman, B. A.
Litjens, G.
Menze, B.
Ronneberger, O.
Summers, R. M.
van Ginneken, B.
Bilello, M.
Bilic, P.
Christ, P. F.
Do, R. K. G.
Gollub, M. J.
Heckers, S. H.
Huisman, H.
Jarnagin, W. R.
McHugo, M. K.
Napel, S.
Pernicka, J. S. G.
Rhode, K.
Tobon-Gomez, C.
Vorontsov, E.
Meakin, J. A.
Ourselin, S.
Wiesenfarth, M.
Arbelaez, P.
Bae, B.
Chen, S.
Daza, L.
Feng, J.
He, B.
Isensee, F.
Ji, Y.
Jia, F.
Kim, I.
Maier-Hein, K.
Merhof, D.
Pai, A.
Park, B.
Perslev, M.
Rezaiifar, R.
Rippel, O.
Sarasua, I.
Shen, W.
Son, J.
Wachinger, C.
Wang, L.
Wang, Y.
Xia, Y.
Xu, D.
Xu, Z.
Zheng, Y.
Simpson, A. L.
Maier-Hein, L.
Cardoso, M. J.
Nat Commun2022Journal Article, cited 79 times
Website
TCGA-GBM
TCGA-LGG
BraTS-TCGA-GBM
BraTS-TCGA-LGG
NSCLC Radiogenomics: Initial Stanford Study of 26 Cases
Challenge
*Algorithms
*Image Processing, Computer-Assisted/methods
International challenges have become the de facto standard for comparative assessment of image analysis algorithms. Although segmentation is the most widely investigated medical image processing task, the various challenges have been organized to focus only on specific clinical tasks. We organized the Medical Segmentation Decathlon (MSD), a biomedical image analysis challenge in which algorithms compete in a multitude of both tasks and modalities, to investigate the hypothesis that a method capable of performing well on multiple tasks will generalize well to a previously unseen task and potentially outperform a custom-designed solution. MSD results confirmed this hypothesis; moreover, the MSD winner continued to generalize well to a wide range of other clinical problems over the following two years. Three main conclusions can be drawn from this study: (1) state-of-the-art image segmentation algorithms generalize well when retrained on unseen tasks; (2) consistent algorithmic performance across multiple tasks is a strong surrogate for algorithmic generalizability; (3) the training of accurate AI segmentation models is now commoditized to scientists who are not versed in AI model training.
Classification of lung adenocarcinoma transcriptome subtypes from pathological images using deep convolutional networks
Antonio, Victor Andrew A
Ono, Naoaki
Saito, Akira
Sato, Tetsuo
Altaf-Ul-Amin, Md
Kanaya, Shigehiko
International Journal of Computer Assisted Radiology and Surgery2018Journal Article, cited 0 times
Website
TCGA-LUAD
Machine learning
histopathology imaging features
PURPOSE: Convolutional neural networks have rapidly become popular for image recognition and image analysis because of their powerful potential. In this paper, we developed a method for classifying subtypes of lung adenocarcinoma from pathological images using a neural network that can evaluate phenotypic features over a wider area so as to take cellular distributions into account. METHODS: In order to recognize the types of tumors, we need not only detailed features of cells but also the statistical distribution of the different types of cells. Variants of autoencoders are implemented as building blocks of the pre-trained convolutional layers of neural networks. A sparse deep autoencoder which minimizes local information entropy on the encoding layer is then proposed and applied to images of size [Formula: see text]. We applied this model for feature extraction from pathological images of lung adenocarcinoma, which comprises three transcriptome subtypes previously defined by the Cancer Genome Atlas network. Since the tumor tissue is composed of heterogeneous cell populations, recognition of tumor transcriptome subtypes requires more information than the local pattern of cells. The parameters extracted using this approach are then used in multiple reduction stages to perform classification on larger images. RESULTS: We demonstrate that these networks successfully recognize morphological features of lung adenocarcinoma. We also performed classification and reconstruction experiments to compare the outputs of the variants. The results showed that a larger input image covering a certain area of the tissue is required to recognize transcriptome subtypes. The sparse autoencoder network with [Formula: see text] input provides 98.9% classification accuracy. CONCLUSION: This study shows the potential of autoencoders as a feature extraction paradigm and paves the way for a whole slide image analysis tool to predict molecular subtypes of tumors from pathological features.
Fast wavelet based image characterization for content based medical image retrieval
Health care centers and hospitals hold large collections of medical images. Medical images produced by different modalities such as magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and X-rays have increased dramatically with the advent of the latest image acquisition technologies, and retrieving clinical images of interest from these large data sets is a challenging and demanding task. In this paper, a fast wavelet-based medical image retrieval system is proposed that can aid physicians in the identification or analysis of medical images. The image signature is calculated using kurtosis and standard deviation as features. A possible use case: when the radiologist has some suspicion about a diagnosis and wants further case histories, the acquired clinical images (e.g., MRI images of the brain) are sent as a query to the content-based medical image retrieval system, which is tuned to retrieve the images most relevant to the query. The proposed system is computationally efficient and more accurate in terms of the quality of retrieved images.
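A minimal sketch (assumed, not the paper's exact pipeline) of building an image signature from the standard deviation and kurtosis of wavelet sub-bands and ranking database images by signature distance:

```python
# Minimal sketch: wavelet-based image signature for content-based retrieval.
import numpy as np
import pywt
from scipy.stats import kurtosis

def wavelet_signature(image, wavelet='db2', levels=3):
    """Concatenate (std, kurtosis) of each sub-band of a multi-level 2-D DWT."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    bands = [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]
    return np.array([(band.std(), kurtosis(band.ravel())) for band in bands]).ravel()

query = np.random.rand(256, 256)           # stand-in for a query MRI slice
database = [np.random.rand(256, 256) for _ in range(10)]

q_sig = wavelet_signature(query)
dists = [np.linalg.norm(q_sig - wavelet_signature(img)) for img in database]
ranked = np.argsort(dists)                 # most similar database images first
print(ranked[:3])
```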
Breast Density Transformations Using CycleGANs for Revealing Undetected Findings in Mammograms
Breast cancer is the most common cancer in women, a leading cause of morbidity and mortality, and a significant health issue worldwide. According to the World Health Organization’s cancer awareness recommendations, mammographic screening should be regularly performed on middle-aged or older women to increase the chances of early cancer detection. Breast density is widely known to be related to the risk of cancer development. The American College of Radiology Breast Imaging Reporting and Data System categorizes mammography into four levels based on breast density, ranging from ACR-A (least dense) to ACR-D (most dense). Computer-aided diagnostic (CAD) systems can now detect suspicious regions in mammograms and identify abnormalities more quickly and accurately than human readers. However, their performance is still influenced by the tissue density level, which must be considered when designing such systems. In this paper, we propose a novel method that uses CycleGANs to transform suspicious regions of mammograms from ACR-B, -C, and -D levels to ACR-A level. This transformation aims to reduce the masking effect caused by thick tissue and separate cancerous regions from surrounding tissue. Our proposed system enhances the performance of conventional CNN-based classifiers significantly by focusing on regions of interest that would otherwise be misidentified due to fatty masking. Extensive testing on different types of mammograms (digital and scanned X-ray film) demonstrates the effectiveness of our system in identifying normal, benign, and malignant regions of interest.
Genotype-Guided Radiomics Signatures for Recurrence Prediction of Non-Small Cell Lung Cancer
Aonpong, Panyanat
Iwamoto, Yutaro
Han, Xian-Hua
Lin, Lanfen
Chen, Yen-Wei
IEEE Access2021Journal Article, cited 0 times
NSCLC Radiogenomics
Non-small cell lung cancer (NSCLC) is a serious disease with a high recurrence rate after surgery. Recently, many machine learning methods have been proposed for recurrence prediction. Methods using gene expression data achieve high accuracy but are expensive, while radiomics features from computed tomography (CT) images are cost-effective but not competitive in accuracy. In this paper, we propose a genotype-guided radiomics method (GGR) for obtaining high prediction accuracy at low cost. We used a public radiogenomics dataset of NSCLC, which includes CT images and gene expression data. Our proposed method is a two-step method that uses two models. The first is a gene estimation model, which estimates gene expression from radiomics features and deep features extracted from CT images; the second predicts recurrence using the estimated gene expression. The proposed GGR method is designed on hybrid features, i.e., the fusion of handcrafted and deep-learning-based features. The experiments demonstrated that prediction accuracy can be improved significantly, from 78.61% (existing radiomics method) and 79.09% (ResNet50) to 83.28% with the proposed GGR.
Improved Genotype-Guided Deep Radiomics Signatures for Recurrence Prediction of Non-Small Cell Lung Cancer
Aonpong, P.
Iwamoto, Y.
Han, X. H.
Lin, L.
Chen, Y. W.
Annu Int Conf IEEE Eng Med Biol Soc2021Journal Article, cited 0 times
NSCLC Radiogenomics
Radiomic features
*Carcinoma, Non-Small-Cell Lung/diagnostic imaging/genetics
Genotype
LUNG
Humans
*Lung Neoplasms/diagnostic imaging/genetics
Tomography, X-Ray Computed
Non-small cell lung cancer (NSCLC) is a type of lung cancer with a high recurrence rate after surgery. Precise preoperative prediction of NSCLC recurrence contributes to suitable treatment planning. Many studies have been conducted to predict NSCLC recurrence based on computed tomography (CT) images or genetic data; CT images are inexpensive but less accurate, while gene data are more expensive but highly accurate. In this study, we proposed genotype-guided radiomics methods, called GGR and GGR_Fusion, to build higher-accuracy prediction models that require only CT images. GGR is a two-step method consisting of two models: a gene estimation model using deep learning and a recurrence prediction model using the estimated genes. We further propose an improved model based on GGR, called GGR_Fusion, which uses the features extracted by the gene estimation model to enhance the recurrence prediction model. The experiments showed that prediction performance can be improved significantly, from 78.61% accuracy, AUC = 0.66 (existing radiomics method) and 79.09% accuracy, AUC = 0.68 (deep learning method), to 83.28% accuracy, AUC = 0.77 with the proposed GGR and 84.39% accuracy, AUC = 0.79 with the proposed GGR_Fusion. Clinical Relevance - This study improved preoperative NSCLC recurrence prediction accuracy from 78.61% with the conventional method to 84.39% with our proposed method using only CT images.
Hand-Crafted and Deep Learning-Based Radiomics Models for Recurrence Prediction of Non-Small Cells Lung Cancers
Aonpong, Panyanat
Iwamoto, Yutaro
Wang, Weibin
Lin, Lanfen
Chen, Yen-Wei
Innovation in Medicine and Healthcare2020Journal Article, cited 0 times
Website
NSCLC Radiogenomics
Deep Learning
LUNG
This research examines recurrence prediction for non-small cell lung cancer (NSCLC) using computed tomography (CT) images, with the aim of avoiding biopsies, whose sampling of unevenly distributed cancer cells can lead to investigation errors. This work compares two different methods, a hand-crafted radiomics model and a deep-learning-based radiomics model, using 88 patient samples from the open-access non-small cell lung cancer dataset in The Cancer Imaging Archive (TCIA) Public Access. In the hand-crafted radiomics models, patterns in the NSCLC CT images are summarized by various statistics serving as radiomics features; the features associated with recurrence are selected through three statistical methods (LASSO, Chi-2, and ANOVA) and then processed by different models. In the deep-learning-based radiomics model, the proposed artificial neural network is used to enhance recurrence prediction. The hand-crafted radiomics models with no selection, LASSO, Chi-2, and ANOVA give accuracies of 76.56% (AUC 0.6361), 76.83% (AUC 0.6375), 78.64% (AUC 0.6778), and 78.17% (AUC 0.6556), respectively, while the deep-learning-based radiomics models, ResNet50 and DenseNet121, give 79.00% (AUC 0.6714) and 79.31% (AUC 0.6712), respectively.
Genomics-Based Models for Recurrence Prediction of Non-small Cells Lung Cancers
This research examines prediction of non-small cell lung cancer (NSCLC) recurrence using genomic information, aiming for maximum accuracy. Raw gene data show very good performance but require more precise examination; this work studies how to reduce the complexity of the gene data with minimal information loss, since the processed gene data can achieve reasonable prediction results with faster processing. This work compares two operations, gene selection and gene quantization (linear quantization and K-means quantization), using genes selected from 88 patient samples from the open-access non-small cell lung cancer dataset in The Cancer Imaging Archive Public Access. We vary the number of groups in the splitting and compare recurrence prediction performance for both operations. The results show that the F-test method provides the gene set best related to NSCLC recurrence: with the F-test and without quantization, prediction accuracy improved from 81.41% (using 5,587 genes) to 91.83% (using 294 selected genes), and with a suitable separation of gene groups, K-means quantization maximized the accuracy at 93.42%.
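A minimal sketch of F-test-based gene selection with scikit-learn, analogous to the selection step described above (synthetic data, hypothetical shapes):

```python
# Minimal sketch: select the genes most associated with recurrence via an F-test.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(2)
X = rng.normal(size=(88, 5587))          # 88 patients x 5,587 gene expression values
y = rng.integers(0, 2, size=88)          # recurrence (1) vs. no recurrence (0)

selector = SelectKBest(score_func=f_classif, k=294)
X_sel = selector.fit_transform(X, y)     # keep the 294 most discriminative genes
print(X_sel.shape)                       # (88, 294)
```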
Investigation of radiomics and deep convolutional neural networks approaches for glioma grading
Aouadi, S.
Torfeh, T.
Arunachalam, Y.
Paloor, S.
Riyas, M.
Hammoud, R.
Al-Hammadi, N.
Biomed Phys Eng Express2023Journal Article, cited 0 times
Website
TCGA-GBM
TCGA-LGG
BraTS 2021
Humans
*Gastrointestinal Stromal Tumors
*Fibromatosis, Aggressive
*Glioma/diagnostic imaging
Algorithm Development
*Melanoma
Computed Tomography (CT)
benchmarking
Deep learning
glioma grading
multi-contrast MRI
Radiomics
Purpose. To determine glioma grading by applying radiomic analysis or deep convolutional neural networks (DCNN) and to benchmark both approaches on broader validation sets. Methods. Seven public datasets were considered: (1) low-grade glioma or high-grade glioma (369 patients, BraTS'20); (2) well-differentiated liposarcoma or lipoma (115, LIPO); (3) desmoid-type fibromatosis or extremity soft-tissue sarcomas (203, Desmoid); (4) primary solid liver tumors, either malignant or benign (186, LIVER); (5) gastrointestinal stromal tumors (GISTs) or intra-abdominal gastrointestinal tumors radiologically resembling GISTs (246, GIST); (6) colorectal liver metastases (77, CRLM); and (7) lung metastases of metastatic melanoma (103, Melanoma). Radiomic analysis was performed on 464 radiomic features for the BraTS'20 dataset and 2016 features for the others. Random forests (RF), Extreme Gradient Boosting (XGBOOST), and a voting algorithm comprising both classifiers were tested. The parameters of the classifiers were optimized using a repeated nested stratified cross-validation process. The feature importance of each classifier was computed using the Gini index or permutation feature importance. DCNN was performed on 2D axial and sagittal slices encompassing the tumor. A balanced database was created, when necessary, using smart slice selection. ResNet50, Xception, EfficientNetB0, and EfficientNetB3 were transferred from the ImageNet application to tumor classification and fine-tuned. Five-fold stratified cross-validation was performed to evaluate the models. The classification performance of the models was measured using multiple indices, including the area under the receiver operating characteristic curve (AUC). Results. The best radiomic approach was based on XGBOOST for all datasets; AUC was 0.934 (BraTS'20), 0.86 (LIPO), 0.73 (LIVER), 0.844 (Desmoid), 0.76 (GIST), 0.664 (CRLM), and 0.577 (Melanoma). The best DCNN was based on EfficientNetB0; AUC was 0.99 (BraTS'20), 0.982 (LIPO), 0.977 (LIVER), 0.961 (Desmoid), 0.926 (GIST), 0.901 (CRLM), and 0.89 (Melanoma). Conclusion. Tumor classification can be accurately determined by adapting state-of-the-art machine learning algorithms to the medical context.
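A minimal sketch of stratified cross-validated AUC estimation for a radiomics classifier, with a random forest standing in for the paper's RF/XGBOOST/voting pipeline (synthetic features):

```python
# Minimal sketch: 5-fold stratified cross-validated AUC for a radiomics classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(369, 464))          # cases x radiomic features (BraTS'20-sized)
y = rng.integers(0, 2, size=369)         # low- vs. high-grade glioma labels

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(RandomForestClassifier(n_estimators=300, random_state=0),
                       X, y, cv=cv, scoring="roc_auc")
print(f"AUC = {aucs.mean():.3f} +/- {aucs.std():.3f}")
```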
Classification of lung nodule malignancy in computed tomography imaging utilising generative adversarial networks and semi-supervised transfer learning
Apostolopoulos, Ioannis D.
Papathanasiou, Nikolaos D.
Panayiotakis, George S.
Biocybernetics and Biomedical Engineering2021Journal Article, cited 2 times
Website
LIDC-IDRI
LUNG
Convolutional Neural Network (CNN)
Deep Learning
Pulmonary nodule malignancy rating is commonly confined to patient follow-up; the nodule's activity is assessed with a Positron Emission Tomography (PET) system or biopsy. However, these strategies usually follow the initial detection of malignant nodules on a Computed Tomography (CT) scan. In this study, a deep learning methodology is proposed to address the challenge of automatically characterising Solitary Pulmonary Nodules (SPNs) detected in CT scans. The methodology is based on Convolutional Neural Networks (CNNs), which have proven to be excellent automatic feature extractors for medical images. The publicly available CT dataset called the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), together with a small CT dataset derived from a PET/CT system, is considered the classification target. New, realistic nodule representations are generated employing Deep Convolutional Generative Adversarial Networks (DC-GANs) to circumvent the shortage of large-scale data for training robust CNNs. Besides, a hierarchical CNN called Feature Fusion VGG19 (FF-VGG19) was developed to enhance the feature extraction of the CNN proposed by the Visual Geometry Group (VGG). Moreover, the generated nodule images are separated into two classes utilising a semi-supervised approach called self-training, to tackle weak labelling due to DC-GAN inefficiencies. The DC-GAN can generate realistic SPNs: the experts could distinguish only 23% of the synthetic nodule images. As a result, the classification accuracy of FF-VGG19 increases by 7% on the LIDC-IDRI dataset, reaching 92.07%, and by 5% on the PET/CT dataset, reaching 84.3%.
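A generic self-training sketch of the kind the entry describes: confident predictions on unlabeled samples become pseudo-labels for retraining (placeholder features, not the FF-VGG19 pipeline):

```python
# Minimal sketch: self-training with confidence-thresholded pseudo-labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X_lab = rng.normal(size=(100, 20)); y_lab = rng.integers(0, 2, size=100)
X_unlab = rng.normal(size=(300, 20))        # e.g. features of GAN-generated nodules

clf = LogisticRegression(max_iter=1000)
for _ in range(3):                          # a few self-training rounds
    clf.fit(X_lab, y_lab)
    if X_unlab.shape[0] == 0:
        break
    proba = clf.predict_proba(X_unlab)
    mask = proba.max(axis=1) > 0.9          # keep only confident pseudo-labels
    if not mask.any():
        break
    X_lab = np.vstack([X_lab, X_unlab[mask]])
    y_lab = np.concatenate([y_lab, proba[mask].argmax(axis=1)])
    X_unlab = X_unlab[~mask]
print(len(y_lab), "labeled samples after self-training")
```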
Automatic classification of solitary pulmonary nodules in PET/CT imaging employing transfer learning techniques
Apostolopoulos, Ioannis D
Pintelas, Emmanuel G
Livieris, Ioannis E
Apostolopoulos, Dimitris J
Papathanasiou, Nikolaos D
Pintelas, Panagiotis E
Panayiotakis, George S
Medical & Biological Engineering & Computing2021Journal Article, cited 0 times
Website
LIDC-IDRI
machine learning
Transfer learning
End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography
With an estimated 160,000 deaths in 2018, lung cancer is the most common cause of cancer death in the United States [1]. Lung cancer screening using low-dose computed tomography has been shown to reduce mortality by 20-43% and is now included in US screening guidelines [1-6]. Existing challenges include inter-grader variability and high false-positive and false-negative rates [7-10]. We propose a deep learning algorithm that uses a patient's current and prior computed tomography volumes to predict the risk of lung cancer. Our model achieves a state-of-the-art performance (94.4% area under the curve) on 6,716 National Lung Cancer Screening Trial cases, and performs similarly on an independent clinical validation set of 1,139 cases. We conducted two reader studies. When prior computed tomography imaging was not available, our model outperformed all six radiologists with absolute reductions of 11% in false positives and 5% in false negatives. Where prior computed tomography imaging was available, the model performance was on-par with the same radiologists. This creates an opportunity to optimize the screening process via computer assistance and automation. While the vast majority of patients remain unscreened, we show the potential for deep learning models to increase the accuracy, consistency and adoption of lung cancer screening worldwide.
Potentials of radiomics for cancer diagnosis and treatment in comparison with computer-aided diagnosis
Arimura, Hidetaka
Soufi, Mazen
Ninomiya, Kenta
Kamezawa, Hidemi
Yamada, Masahiro
Radiological Physics and Technology2018Journal Article, cited 0 times
Website
RIDER
non-small cell lung cancer (NSCLC)
Computer Aided Diagnosis (CADx)
Radiomics
Segmentation
Computer-aided diagnosis (CAD) is a field that is essentially based on pattern recognition that improves the accuracy of a diagnosis made by a physician who takes into account the computer’s “opinion” derived from the quantitative analysis of radiological images. Radiomics is a field based on data science that massively and comprehensively analyzes a large number of medical images to extract a large number of phenotypic features reflecting disease traits, and explores the associations between the features and patients’ prognoses for precision medicine. According to the definitions for both, you may think that radiomics is not a paraphrase of CAD, but you may also think that these definitions are “image manipulation”. However, there are common and different features between the two fields. This review paper elaborates on these common and different features and introduces the potential of radiomics for cancer diagnosis and treatment by comparing it with CAD.
Artificial Intelligence in Prostate Imaging
Arlova, Alena
Choyke, Peter L.
Turkbey, Baris
2021Journal Article, cited 0 times
PROSTATEx
The lung image database consortium (LIDC) and image database resource initiative (IDRI): a completed reference database of lung nodules on CT scans
Armato III, Samuel G
McLennan, Geoffrey
Bidaut, Luc
McNitt-Gray, Michael F
Meyer, Charles R
Reeves, Anthony P
Zhao, Binsheng
Aberle, Denise R
Henschke, Claudia I
Hoffman, Eric A
Kazerooni, E. A.
MacMahon, H.
Van Beeke, E. J.
Yankelevitz, D.
Biancardi, A. M.
Bland, P. H.
Brown, M. S.
Engelmann, R. M.
Laderach, G. E.
Max, D.
Pais, R. C.
Qing, D. P.
Roberts, R. Y.
Smith, A. R.
Starkey, A.
Batrah, P.
Caligiuri, P.
Farooqi, A.
Gladish, G. W.
Jude, C. M.
Munden, R. F.
Petkovska, I.
Quint, L. E.
Schwartz, L. H.
Sundaram, B.
Dodd, L. E.
Fenimore, C.
Gur, D.
Petrick, N.
Freymann, J.
Kirby, J.
Hughes, B.
Casteele, A. V.
Gupte, S.
Sallamm, M.
Heath, M. D.
Kuhn, M. H.
Dharaiya, E.
Burns, R.
Fryd, D. S.
Salganicoff, M.
Anand, V.
Shreter, U.
Vastagh, S.
Croft, B. Y.
Medical Physics2011Journal Article, cited 546 times
Website
LIDC-IDRI
Computer Aided Diagnosis (CADx)
LUNG
PURPOSE: The development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process. METHODS: Seven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC/IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories ("nodule ≥3 mm," "nodule <3 mm," and "non-nodule ≥3 mm"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus. RESULTS: The Database contains 7371 lesions marked "nodule" by at least one radiologist. 2669 of these lesions were marked "nodule ≥3 mm" by at least one radiologist, of which 928 (34.7%) received such marks from all four radiologists. These 2669 lesions include nodule outlines and subjective nodule characteristic ratings. CONCLUSIONS: The LIDC/IDRI Database is expected to provide an essential medical imaging research resource to spur CAD development, validation, and dissemination in clinical practice.
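A hedged sketch of reading one LIDC-IDRI annotation file to tally per-reader marks; the XML tag names and namespace follow the public LIDC schema as commonly documented, but treat them as assumptions and verify against an actual file.

import xml.etree.ElementTree as ET

NS = {"nih": "http://www.nih.gov"}  # LIDC XML default namespace (assumption)

def count_marks(xml_path):
    root = ET.parse(xml_path).getroot()
    counts = []
    for session in root.findall("nih:readingSession", NS):
        nodules = session.findall("nih:unblindedReadNodule", NS)
        # Nodules >= 3 mm carry a <characteristics> block; smaller marks do not.
        large = [n for n in nodules
                 if n.find("nih:characteristics", NS) is not None]
        counts.append({"reader_marks": len(nodules), "ge3mm": len(large)})
    return counts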
Collaborative projects
Armato, S
McNitt-Gray, M
Meyer, C
Reeves, A
Clarke, L
Int J CARS2012Journal Article, cited 307 times
Website
LIDC-IDRI
Special Section Guest Editorial: LUNGx Challenge for computerized lung nodule classification: reflections and lessons learned
Armato, Samuel G
Hadjiiski, Lubomir
Tourassi, Georgia D
Drukker, Karen
Giger, Maryellen L
Li, Feng
Redmond, George
Farahani, Keyvan
Kirby, Justin S
Clarke, Laurence P
Journal of Medical Imaging2015Journal Article, cited 20 times
Website
The purpose of this work is to describe the LUNGx Challenge for the computerized classification of lung nodules on diagnostic computed tomography (CT) scans as benign or malignant and report the performance of participants' computerized methods along with that of six radiologists who participated in an observer study performing the same Challenge task on the same dataset. The Challenge provided sets of calibration and testing scans, established a performance assessment process, and created an infrastructure for case dissemination and result submission. Ten groups applied their own methods to 73 lung nodules (37 benign and 36 malignant) that were selected to achieve approximate size matching between the two cohorts. Area under the receiver operating characteristic curve (AUC) values for these methods ranged from 0.50 to 0.68; only three methods performed statistically better than random guessing. The radiologists' AUC values ranged from 0.70 to 0.85; three radiologists performed statistically better than the best-performing computer method. The LUNGx Challenge compared the performance of computerized methods in the task of differentiating benign from malignant lung nodules on CT scans, placed in the context of the performance of radiologists on the same task. The continued public availability of the Challenge cases will provide a valuable resource for the medical imaging research community.
Brain Tumor Segmentation of MRI Images Using Processed Image Driven U-Net Architecture
Arora, Anuja
Jayal, Ambikesh
Gupta, Mayank
Mittal, Prakhar
Satapathy, Suresh Chandra
Computers2021Journal Article, cited 6 times
Website
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2018
BRAIN
Segmentation
Algorithm Development
Brain tumor segmentation seeks to separate healthy tissue from tumorous regions. This is an essential step in diagnosis and treatment planning to maximize the likelihood of successful treatment. Magnetic resonance imaging (MRI) provides detailed information about brain tumor anatomy, making it an important tool for effective diagnosis and a prerequisite for replacing the existing manual detection process, in which patients rely on the skills and expertise of a human. To solve this problem, a brain tumor segmentation and detection system is proposed, with experiments conducted on the BraTS 2018 dataset. This dataset contains four different MRI modalities for each patient (T1, T2, T1Gd, and FLAIR), and as an outcome, a segmented image and the ground truth of tumor segmentation, i.e., the class label, are provided. A fully automatic methodology to handle the task of segmentation of gliomas in pre-operative MRI scans is developed using a U-Net-based deep learning model. The first step is to transform the input image data, which is further processed through various techniques: subset division, narrow object region, category brain slicing, the watershed algorithm, and feature scaling. All these steps are applied before entering the data into the U-Net deep learning model, which performs pixel-label segmentation of the tumor region. The algorithm reached high performance accuracy on the BraTS 2018 training, validation, as well as testing datasets. The proposed model achieved a Dice coefficient of 0.9815, 0.9844, 0.9804, and 0.9954 on the testing dataset for sets HGG-1, HGG-2, HGG-3, and LGG-1, respectively.
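Since the reported results are Dice coefficients, a minimal reference implementation may help; inputs are binary masks of equal shape, and the epsilon guard is a common convention rather than anything specified in the paper.

import numpy as np

def dice(pred, truth, eps=1e-7):
    # Dice = 2 * |A ∩ B| / (|A| + |B|) on boolean masks
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)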
Discovery of pre-therapy 2-deoxy-2-18F-fluoro-D-glucose positron emission tomography-based radiomics classifiers of survival outcome in non-small-cell lung cancer patients
Arshad, Mubarik A
Thornton, Andrew
Lu, Haonan
Tam, Henry
Wallitt, Kathryn
Rodgers, Nicola
Scarsbrook, Andrew
McDermott, Garry
Cook, Gary J
Landau, David
European Journal of Nuclear Medicine and Molecular Imaging2018Journal Article, cited 0 times
Website
Viable and necrotic tumor assessment from whole slide images of osteosarcoma using machine-learning and deep-learning models
Arunachalam, Harish Babu
Mishra, Rashika
Daescu, Ovidiu
Cederberg, Kevin
Rakheja, Dinesh
Sengupta, Anita
Leonard, David
Hallac, Rami
Leavey, Patrick
PLoS One2019Journal Article, cited 0 times
Osteosarcoma-Tumor-Assessment
Deep Learning
Support Vector Machine
Pathological estimation of tumor necrosis after chemotherapy is essential for patients with osteosarcoma. This study reports the first fully automated tool to assess viable and necrotic tumor in osteosarcoma, employing advances in histopathology digitization and automated learning. We selected 40 digitized whole slide images representing the heterogeneity of osteosarcoma and chemotherapy response. With the goal of labeling the diverse regions of the digitized tissue into viable tumor, necrotic tumor, and non-tumor, we trained 13 machine-learning models and selected the top performing one (a Support Vector Machine) based on reported accuracy. We also developed a deep-learning architecture and trained it on the same data set. We computed the receiver-operator characteristic for discrimination of non-tumor from tumor followed by conditional discrimination of necrotic from viable tumor and found our models performing exceptionally well. We then used the trained models to identify regions of interest on image-tiles generated from test whole slide images. The classification output is visualized as a tumor-prediction map, displaying the extent of viable and necrotic tumor in the slide image. Thus, we lay the foundation for a complete tumor assessment pipeline from original histology images to tumor-prediction map generation. The proposed pipeline can also be adopted for other types of tumor.
Context-Aware Self-Supervised Learning of Whole Slide Images
Aryal, Milan
Soltani, Nasim Yahya
2024Journal Article, cited 0 times
TCGA-KICH
TCGA-KIRC
TCGA-KIRP
Machine Learning
Presenting whole slide images (WSIs) as graphs will enable a more efficient and accurate learning framework for cancer diagnosis. Because a single WSI consists of billions of pixels and there is a lack of the vast annotated datasets required for computational pathology, learning from WSIs using typical deep learning approaches such as convolutional neural networks (CNNs) is challenging. Additionally, WSI down-sampling may lead to the loss of data that is essential for cancer detection. A novel two-stage learning technique is presented in this work. Since context, such as topological features in the tumor surroundings, may hold important information for cancer grading and diagnosis, a graph representation capturing all dependencies among regions in the WSI is very intuitive. A graph convolutional network (GCN) is deployed to include context from the tumor and adjacent tissues, and self-supervised learning is used to enhance training through unlabeled data. More specifically, the entire slide is presented as a graph, where the nodes correspond to the patches from the WSI. The proposed framework is then tested using WSIs from prostate and kidney cancers.
Effect of Applying Leakage Correction on rCBV Measurement Derived From DSC-MRI in Enhancing and Nonenhancing Glioma
Arzanforoosh, Fatemeh
Croal, Paula L.
van Garderen, Karin A.
Smits, Marion
Chappell, Michael A.
Warnert, Esther A. H.
Frontiers in Oncology2021Journal Article, cited 0 times
Website
QIN-BRAIN-DSC-MRI
Prognostic model
Purpose: Relative cerebral blood volume (rCBV) is the most widely used parameter derived from DSC perfusion MR imaging for predicting brain tumor aggressiveness. However, accurate rCBV estimation is challenging in enhancing glioma, because of contrast agent extravasation through a disrupted blood-brain barrier (BBB), and even for nonenhancing glioma with an intact BBB, due to an elevated steady-state contrast agent concentration in the vasculature after first passage. In this study a thorough investigation of the effects of two different leakage correction algorithms on rCBV estimation for enhancing and nonenhancing tumors was conducted.; ; Methods: Two datasets were used retrospectively in this study: 1. A publicly available TCIA dataset (49 patients with 35 enhancing and 14 nonenhancing glioma); 2. A dataset acquired clinically at Erasmus MC (EMC, Rotterdam, NL) (47 patients with 20 enhancing and 27 nonenhancing glial brain lesions). The leakage correction algorithms investigated in this study were: a unidirectional model-based algorithm with flux of contrast agent from the intra- to the extravascular extracellular space (EES); and a bidirectional model-based algorithm additionally including flow from EES to the intravascular space.; ; Results: In enhancing glioma, the estimated average contrast-enhanced tumor rCBV significantly (Bonferroni corrected Wilcoxon Signed Rank Test, p < 0.05) decreased across the patients when applying unidirectional and bidirectional correction: 4.00 ± 2.11 (uncorrected), 3.19 ± 1.65 (unidirectional), and 2.91 ± 1.55 (bidirectional) in TCIA dataset and 2.51 ± 1.3 (uncorrected), 1.72 ± 0.84 (unidirectional), and 1.59 ± 0.9 (bidirectional) in EMC dataset. In nonenhancing glioma, a significant but smaller difference in observed rCBV was found after application of both correction methods used in this study: 1.42 ± 0.60 (uncorrected), 1.28 ± 0.46 (unidirectional), and 1.24 ± 0.37 (bidirectional) in TCIA dataset and 0.91 ± 0.49 (uncorrected), 0.77 ± 0.37 (unidirectional), and 0.67 ± 0.34 (bidirectional) in EMC dataset.; ; Conclusion: Both leakage correction algorithms were found to change rCBV estimation with BBB disruption in enhancing glioma, and to a lesser degree in nonenhancing glioma. Stronger effects were found for bidirectional leakage correction than for unidirectional leakage correction.
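A rough sketch of a unidirectional model-based leakage correction in the spirit of Boxerman-Schmainda: each voxel's Delta-R2* curve is fit as a combination of the whole-brain average curve and its running integral, and the leakage term is then added back before integrating for rCBV. Variable names and the lack of normalization are illustrative assumptions, not the paper's exact implementation.

import numpy as np

def correct_leakage(dr2_voxel, dr2_avg, dt):
    integral = np.cumsum(dr2_avg) * dt          # running integral of the average curve
    A = np.column_stack([dr2_avg, -integral])   # design matrix for [K1, K2]
    (k1, k2), *_ = np.linalg.lstsq(A, dr2_voxel, rcond=None)
    corrected = dr2_voxel + k2 * integral       # remove the extravasation effect
    rcbv = np.trapz(corrected, dx=dt)           # area under the corrected curve
    return rcbv, k1, k2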
Evaluating synthetic neuroimaging data augmentation for automatic brain tumour segmentation with a deep fully-convolutional network
Asadi, Fawad
Angsuwatanakul, Thanate
O’Reilly, Jamie A.
IBRO Neuroscience Reports2024Journal Article, cited 2 times
Website
TCGA-LGG
Segmentation
Glioblastoma
Generative Adversarial Network (GAN)
Magnetic Resonance Imaging (MRI)
Synthetic images
Gliomas observed in medical images require expert neuro-radiologist evaluation for treatment planning and monitoring, motivating development of intelligent systems capable of automating aspects of tumour evaluation. Deep learning models for automatic image segmentation rely on the amount and quality of training data. In this study we developed a neuroimaging synthesis technique to augment data for training fully-convolutional networks (U-nets) to perform automatic glioma segmentation. We used StyleGAN2-ada to simultaneously generate fluid-attenuated inversion recovery (FLAIR) magnetic resonance images and corresponding glioma segmentation masks. Synthetic data were successively added to real training data (n = 2751) in fourteen rounds of 1000 and used to train U-nets that were evaluated on held-out validation (n = 590) and test sets (n = 588). U-nets were trained with and without geometric augmentation (translation, zoom and shear), and Dice coefficients were computed to evaluate segmentation performance. We also monitored the number of training iterations before stopping, total training time, and time per iteration to evaluate computational costs associated with training each U-net. Synthetic data augmentation yielded marginal improvements in Dice coefficients (validation set +0.0409, test set +0.0355), whereas geometric augmentation improved generalization (standard deviation between training, validation and test set performances of 0.01 with, and 0.04 without geometric augmentation). Based on the modest performance gains for automatic glioma segmentation we find it hard to justify the computational expense of developing a synthetic image generation pipeline. Future work may seek to optimize the efficiency of synthetic data generation for augmentation of neuroimaging data.
Technical note: Performance evaluation of volumetric imaging based on motion modeling by principal component analysis
Asano, Suzuka
Oseki, Keishi
Takao, Seishin
Miyazaki, Koichi
Yokokawa, Kohei
Matsuura, Taeko
Taguchi, Hiroshi
Katoh, Norio
Aoyama, Hidefumi
Umegaki, Kikuo
Miyamoto, Naoki
Medical Physics2022Journal Article, cited 0 times
4D-Lung
PURPOSE: To quantitatively evaluate the achievable performance of volumetric imaging based on lung motion modeling by principal component analysis (PCA).
METHODS: In volumetric imaging based on PCA, internal deformation was represented as a linear combination of the eigenvectors derived by PCA of the deformation vector fields evaluated from patient-specific four-dimensional computed tomography (4DCT) datasets. The volumetric image was synthesized by warping the reference CT image with a deformation vector field that was evaluated using optimal principal component coefficients (PCs). Larger PCs were hypothesized to reproduce deformations larger than those included in the original 4DCT dataset. To evaluate the reproducibility of PCA-reconstructed volumetric images synthesized to be as close to the ground truth as possible, mean absolute error (MAE), structural similarity index measure (SSIM), and discrepancy of diaphragm position were evaluated using 22 4DCT datasets of nine patients.
RESULTS: Mean MAE and SSIM values for the PCA-reconstructed volumetric images were approximately 80 HU and 0.88, respectively, regardless of the respiratory phase. In most test cases, including data whose motion range exceeded that of the modeling data, the positional error of the diaphragm was less than 5 mm. The results suggested that large deformations not included in the modeling 4DCT dataset could be reproduced. Furthermore, since the first PC correlated with the displacement of the diaphragm position, the first eigenvector became the dominant factor representing the respiration-associated deformations. However, other PCs did not necessarily change with the same trend as the first PC, and no correlation was observed between the coefficients. Hence, randomly allocating or sampling these PCs over expanded ranges may reasonably generate an augmented dataset with various deformations.
CONCLUSIONS: Reasonable image synthesis accuracy, comparable to that of previous research, was demonstrated using clinical data. These results indicate the potential of PCA-based volumetric imaging for clinical applications.
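As a rough illustration of the PCA motion-modeling step described above, the following sketch builds a PCA model from flattened deformation vector fields (DVFs) and synthesizes a new DVF from chosen principal component coefficients; array shapes and function names are assumptions, not the authors' implementation.

import numpy as np
from sklearn.decomposition import PCA

def build_motion_model(dvfs, n_components=3):
    # dvfs: list of DVF arrays, one per 4DCT phase; each flattened to one row
    X = np.stack([d.ravel() for d in dvfs])      # (n_phases, 3 * n_voxels)
    pca = PCA(n_components=n_components)
    pca.fit(X)
    return pca

def synthesize_dvf(pca, coeffs, shape):
    # coefficients beyond the range seen in the phases extrapolate the motion
    flat = pca.mean_ + np.asarray(coeffs) @ pca.components_
    return flat.reshape(shape)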
Morphological and multi-level geometrical descriptor analysis in CT and MRI volumes for automatic pancreas segmentation
Asaturyan, Hykoush
Gligorievski, Antonio
Villarini, Barbara
Computerized Medical Imaging and Graphics2019Journal Article, cited 3 times
Website
Pancreas-CT
segmentation
Automatic pancreas segmentation in 3D radiological scans is a critical, yet challenging task. As a prerequisite for computer-aided diagnosis (CADx) systems, accurate pancreas segmentation could generate both quantitative and qualitative information towards establishing the severity of a condition, and thus provide additional guidance for therapy planning. Since the pancreas is an organ of high inter-patient anatomical variability, previous segmentation approaches report lower quantitative accuracy scores in comparison to abdominal organs such as the liver or kidneys. This paper presents a novel approach for automatic pancreas segmentation in magnetic resonance imaging (MRI) and computed tomography (CT) scans. This method exploits 3D segmentation that, when coupled with geometrical and morphological characteristics of abdominal tissue, classifies distinct contours in tight pixel-range proximity as “pancreas” or “non-pancreas”. There are three main stages to this approach: (1) identify a major pancreas region and apply contrast enhancement to differentiate between pancreatic and surrounding tissue; (2) perform 3D segmentation via a continuous max-flow and min-cuts approach, structured forest edge detection, and a training dataset of annotated pancreata; (3) eliminate non-pancreatic contours from the resultant segmentation via morphological operations on area, structure, and connectivity between distinct contours. The proposed method is evaluated on a dataset containing 82 CT image volumes, achieving a mean Dice Similarity Coefficient (DSC) of 79.3 ± 4.4%. Two MRI datasets containing 216 and 132 image volumes are evaluated, achieving mean DSC of 79.6 ± 5.7% and 81.6 ± 5.1%, respectively. This approach is statistically stable, reflected by lower metrics in standard deviation in comparison to state-of-the-art approaches.
The image semantic segmentation challenge consists of classifying each pixel of an image (or just several ones) into an instance, where each instance (or category) corresponds to an object. This task is a part of the concept of scene understanding or better explaining the global context of an image. In the medical image analysis domain, image segmentation can be used for image-guided interventions, radiotherapy, or improved radiological diagnostics. Following a comprehensive review of state-of-the-art deep learning-based medical and non-medical image segmentation solutions, we make the following contributions. A typical deep learning-based (medical) image segmentation pipeline includes designing layers (A), designing an architecture (B), and defining a loss function (C). A clean/modified (D)/adversarially perturbed (E) image is fed into a model (consisting of layers and a loss function) to predict a segmentation mask for scene understanding etc. In some cases where the number of segmentation annotations is limited, weakly supervised approaches (F) are leveraged. For some applications where further analysis is needed, e.g., predicting volumes and object burden, the segmentation mask is fed into another post-processing step (G). In this thesis, we tackle each of the steps (A-G). I) As for steps (A and E), we studied the effect of adversarial perturbation on image segmentation models and proposed a method that improves segmentation performance via a non-linear radial basis convolutional feature mapping by learning a Mahalanobis-like distance function on both adversarially perturbed and unperturbed images. Our method then maps the convolutional features onto a linearly well-separated manifold, which prevents small adversarial perturbations from forcing a sample to cross the decision boundary. II) As for step (B), we propose light, learnable skip connections which learn first to select the most discriminative channels and then aggregate the selected ones as a single channel attending to the most discriminative regions of the input. Compared to heavy classical skip connections, our method reduces computation cost and memory usage while improving segmentation performance. III) As for step (C), we examined the critical choice of a loss function in order to handle the notorious imbalance problem that plagues both the input and output of a learning model. In order to tackle both types of imbalance during training and inference, we introduce a new curriculum learning-based loss function. Specifically, we leverage the Dice similarity coefficient to deter model parameters from being held at bad local minima and, at the same time, gradually learn better model parameters by penalizing for false positives/negatives using a cross-entropy term. IV) As for step (D), we propose a new segmentation performance-boosting paradigm that relies on optimally modifying the network's input instead of the network itself. In particular, we leverage the gradients of a trained segmentation network with respect to the input to transfer it into a space where the segmentation accuracy improves. V) As for step (F), we propose a weakly supervised image segmentation model with a learned spatial masking mechanism to filter out irrelevant background signals from attention maps. The proposed method minimizes mutual information between a masked variational representation and the input while maximizing the information between the masked representation and class labels. VI) Although many semi-automatic segmentation-based methods have been developed, as for step (G), we introduce a method that completely eliminates the segmentation step and directly estimates the volume and activity of the lesions from positron emission tomography scans.
Low-Rank Convolutional Networks for Brain Tumor Segmentation
Ashtari, Pooya
Maes, Frederik
Van Huffel, Sabine
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
The automated segmentation of brain tumors is crucial for various clinical purposes from diagnosis to treatment planning to follow-up evaluations. The vast majority of effective models for tumor segmentation are based on convolutional neural networks with millions of parameters being trained. Such complex models can be highly prone to overfitting especially in cases where the amount of training data is insufficient. In this work, we devise a 3D U-Net-style architecture with residual blocks, in which low-rank constraints are imposed on weights of the convolutional layers in order to reduce overfitting. Within the same architecture, this helps to design networks with several times fewer parameters. We investigate the effectiveness of the proposed technique on the BraTS 2020 challenge.
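One common way to impose a low-rank structure on convolutional weights is to factor each convolution into a rank-r bottleneck; the PyTorch sketch below is a generic surrogate for the idea, not necessarily the exact constraint used by the authors, and the rank is illustrative.

import torch.nn as nn

class LowRankConv3d(nn.Module):
    def __init__(self, c_in, c_out, kernel_size=3, rank=8, padding=1):
        super().__init__()
        self.reduce = nn.Conv3d(c_in, rank, kernel_size=1)           # project to rank r
        self.conv = nn.Conv3d(rank, c_out, kernel_size, padding=padding)
        self.norm = nn.BatchNorm3d(c_out)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # the 1x1x1 projection caps the effective rank of the composed operator
        return self.act(self.norm(self.conv(self.reduce(x))))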
Extension Network of Radiomics-Based Deeply Supervised U-Net (ERDU) for Prostate Image Segmentation
Automatic prostate segmentation from MRI images is important in disease diagnosis and treatment. The main challenges are the complex boundaries, the spatial and morphological heterogeneity, and the variety of prostate shapes. This paper proposes a deep CNN network based on 2D Res-Unet, with equalization and noise reduction using a median filter for preprocessing. Additionally, residual connections and batch normalization are used in the UNet-based network to improve gradient flow and avoid overfitting. The 2D Res-Unet method showed promising results on the PROSTATEx prostate MRI dataset. It achieves a Dice similarity coefficient of 82.7% with a small number of parameters while outperforming the standard benchmark algorithms. Our results show that the ERDU network achieves more accurate results than the state-of-the-art U-Net for prostate gland segmentation.
Fusion of CT and MR Liver Images by SURF-Based Registration
Aslan, Muhammet Fatih
Durdu, Akif
International Journal of Intelligent Systems and Applications in Engineering2019Journal Article, cited 3 times
Website
TCGA-LIHC
Prior-aware autoencoders for lung pathology segmentation
Astaraki, M.
Smedby, O.
Wang, C.
Med Image Anal2022Journal Article, cited 0 times
LIDC-IDRI
NSCLC-Radiomics
*COVID-19/diagnostic imaging
*Carcinoma
Non-Small-Cell Lung
Humans
Image Processing
Computer-Assisted/methods
LUNG
Tomography
X-Ray Computed
Healthy image generation
Lung pathology segmentation
Prior-aware deep learning
Segmentation of lung pathology in Computed Tomography (CT) images is of great importance for lung disease screening. However, the presence of different types of lung pathologies with a wide range of heterogeneities in size, shape, location, and texture, on one side, and their visual similarity with respect to surrounding tissues, on the other side, make it challenging to perform reliable automatic lesion segmentation. To leverage segmentation performance, we propose a deep learning framework comprising a Normal Appearance Autoencoder (NAA) model to learn the distribution of healthy lung regions and reconstruct pathology-free images from the corresponding pathological inputs by replacing the pathological regions with the characteristics of healthy tissues. Detected regions that represent prior information regarding the shape and location of pathologies are then integrated into a segmentation network to guide the attention of the model into more meaningful delineations. The proposed pipeline was tested on three types of lung pathologies, including pulmonary nodules, Non-Small Cell Lung Cancer (NSCLC), and Covid-19 lesion on five comprehensive datasets. The results show the superiority of the proposed prior model, which outperformed the baseline segmentation models in all the cases with significant margins. On average, adding the prior model improved the Dice coefficient for the segmentation of lung nodules by 0.038, NSCLCs by 0.101, and Covid-19 lesions by 0.041. We conclude that the proposed NAA model produces reliable prior knowledge regarding the lung pathologies, and integrating such knowledge into a prior segmentation network leads to more accurate delineations.
Early survival prediction in non-small cell lung cancer from PET/CT images using an intra-tumor partitioning method
Astaraki, Mehdi
Wang, Chunliang
Buizza, Giulia
Toma-Dasu, Iuliana
Lazzeroni, Marta
Smedby, Orjan
Physica Medica2019Journal Article, cited 0 times
Website
RIDER Lung CT
Radiomics
PURPOSE: To explore prognostic and predictive values of a novel quantitative feature set describing intra-tumor heterogeneity in patients with lung cancer treated with concurrent and sequential chemoradiotherapy. METHODS: Longitudinal PET-CT images of 30 patients with non-small cell lung cancer were analysed. To describe tumor cell heterogeneity, the tumors were partitioned into one to ten concentric regions depending on their sizes, and, for each region, the change in average intensity between the two scans was calculated for PET and CT images separately to form the proposed feature set. To validate the prognostic value of the proposed method, radiomics analysis was performed and a combination of the proposed novel feature set and the classic radiomic features was evaluated. A feature selection algorithm was utilized to identify the optimal features, and a linear support vector machine was trained for the task of overall survival prediction in terms of area under the receiver operating characteristic curve (AUROC). RESULTS: The proposed novel feature set was found to be prognostic and even outperformed the radiomics approach with a significant difference (AUROC_SALoP = 0.90 vs. AUROC_radiomic = 0.71) when feature selection was not employed, whereas with feature selection, a combination of the novel feature set and radiomics led to the highest prognostic values. CONCLUSION: A novel feature set designed for capturing intra-tumor heterogeneity was introduced. Judging by their prognostic power, the proposed features have a promising potential for early survival prediction.
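The intra-tumor partitioning described above can be approximated with a distance transform that splits the tumor mask into concentric regions, recording the mean intensity change between the two scans per region. The fixed region count and names below are assumptions (the paper scales the count with tumor size).

import numpy as np
from scipy.ndimage import distance_transform_edt

def shell_features(img_t0, img_t1, mask, n_regions=5):
    mask = mask.astype(bool)
    depth = distance_transform_edt(mask)           # distance to the tumor border
    edges = np.linspace(0.0, depth.max() + 1e-6, n_regions + 1)
    feats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        region = mask & (depth > lo) & (depth <= hi)
        feats.append(float((img_t1[region] - img_t0[region]).mean())
                     if region.any() else 0.0)
    return np.array(feats)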
Multimodal Brain Tumor Segmentation with Normal Appearance Autoencoder
We propose a hybrid segmentation pipeline based on the capability of autoencoders for anomaly detection. To this end, we first introduce a new augmentation technique to generate synthetic paired images. Taking advantage of the paired images, we propose a Normal Appearance Autoencoder (NAA) that is able to remove tumors and thus reconstruct realistic-looking, tumor-free images. After estimating the regions where abnormalities potentially exist, a segmentation network is guided toward the candidate region. We tested the proposed pipeline on the BraTS 2019 database. The preliminary results indicate that the proposed model improved the segmentation accuracy of brain tumor subregions compared to the U-Net model.
Neural image compression for non-small cell lung cancer subtype classification in H&E stained whole-slide images
Aswolinskiy, Witali
Tellez, David
Raya, Gabriel
van der Woude, Lieke
Looijen-Salamon, Monika
van der Laak, Jeroen
Grunberg, Katrien
Ciompi, Francesco
2021Conference Proceedings, cited 0 times
Pathomics
Convolutional Neural Network (CNN)
TCGA-LUAD
CPTAC-LUAD
TCGA-LUSC
Computer Aided Detection Scheme To Improve The Prognosis Assessment Of Early Stage Lung Cancer Patients
Athira, KV
Nithin, SS
Computer2018Journal Article, cited 0 times
Website
Radiomics
non-small cell lung cancer
Machine learning
To develop a computer aided detection scheme to predict the stage 1 non-small cell lung cancer recurrence risk in lung cancer patients after surgery. Using chest computed tomography (CT) images taken before surgery, this system automatically segments the tumor seen on the CT images and extracts tumor-related morphological and texture-based image features. We trained a Naïve Bayesian network classifier using six image features and an ANN classifier using two genomic biomarkers to predict the cancer recurrence risk; these biomarkers are protein expression of the excision repair cross-complementing 1 gene (ERCC1) and a regulatory subunit of ribonucleotide reductase (RRM1). We developed a new approach that has high potential to assist doctors in more effectively managing stage 1 NSCLC patients to reduce the cancer recurrence risk.
Visualizing the association between the location and prognosis of isocitrate dehydrogenase wild-type glioblastoma: a voxel-wise Cox regression analysis with open-source datasets
Atsukawa, N.
Tatekawa, H.
Ueda, D.
Oura, T.
Matsushita, S.
Horiuchi, D.
Takita, H.
Mitsuyama, Y.
Baba, R.
Tsukamoto, T.
Shimono, T.
Miki, Y.
Neuroradiology2024Journal Article, cited 0 times
Website
UCSF-PDGM
UPENN-GBM
Brain atlas
Glioblastoma
Magnetic Resonance Imaging (MRI)
Survival analysis
Tumor location
PURPOSE: This study examined the correlation between tumor location and prognosis in patients with glioblastoma using magnetic resonance images of various isocitrate dehydrogenase (IDH) wild-type glioblastomas from The Cancer Imaging Archive (TCIA). The relationship between tumor location and prognosis was visualized using voxel-wise Cox regression analysis. METHODS: Participants with IDH wild-type glioblastoma were selected, and their survival and demographic data and tumor characteristics were collected from TCIA datasets. Post-contrast-enhanced T1-weighted imaging, T2-fluid attenuated inversion recovery imaging, and tumor segmentation data were also compiled. Following affine registration of each image and tumor segmentation region of interest to the MNI standard space, a voxel-wise Cox regression analysis was conducted. This analysis determined the association of the presence or absence of the tumor with the prognosis in each voxel after adjusting for the covariates. RESULTS: The study included 769 participants (464 men and 305 women; mean age, 63 years +/- 12 [standard deviation]). The hazard ratio map indicated that tumors in the medial frontobasal region and around the third and fourth ventricles were associated with poorer prognoses, underscoring the challenges of complete resection and treatment accessibility in these areas regardless of the tumor volume. Conversely, tumors located in the right temporal and occipital lobes had favorable prognoses. CONCLUSION: This study showed an association between tumor location and prognosis. These findings may assist clinicians in developing more precise and effective treatment plans for patients with glioblastoma to improve their management.
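A hedged sketch of the voxel-wise Cox regression idea using the lifelines package: for each voxel in atlas space, the covariate of interest is tumor presence per patient, adjusted for other covariates. Column names, the adjustment set, and the function name are illustrative.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def voxelwise_hr(tumor_masks, survival_df, voxel_indices):
    # tumor_masks: (n_patients, x, y, z) binary array registered to MNI space
    hr_map = {}
    for idx in voxel_indices:
        df = survival_df[["time", "event", "age", "sex"]].copy()
        df["tumor_here"] = tumor_masks[(slice(None),) + idx].astype(float)
        if df["tumor_here"].nunique() < 2:
            continue                       # skip voxels with no variation
        cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
        hr_map[idx] = np.exp(cph.params_["tumor_here"])  # hazard ratio per voxel
    return hr_map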
Oropharyngeal cancer prognosis based on clinicopathologic and quantitative imaging biomarkers with multiparametric model and machine learning methods
Aim: There is an unmet need for integrating quantitative imaging biomarkers into current risk stratification tools and for investigating relationships between various clinical characteristics, radiomics features, and other clinical prognosticators for oropharyngeal cancer (OPC). Multivariate analysis and ML algorithms can be used to predict recurrence-free survival in patients with OPC. Method: Open-access clinical metadata and matched baseline contrast-enhanced computed tomography (CECT) scans were accessed for a cohort of 495 OPC patients treated between 2005 and 2012, available at the Head and Neck Cancer CT Atlas (DOI: 10.7937/K9/TCIA.2017.umz8dv6s). The Cox proportional hazards (CPH) model was used to evaluate a large number of prognostic variables with respect to the survival of cancer patients. The Kaplan-Meier method was deployed to estimate mean and median survival with 95% CIs, compared using the log-rank test. ML algorithms using random forest (RF) classifiers were used for prediction. Variables used in the models were age, gender, smoking status, smoking, TNM characteristics, AJCC staging, acks, subsite of origin, therapeutic combination, radiation dose, radiation duration, relapse-free survival, and vital status. Results: The performance of the CPH and RSF models was compared in terms of Harrell's c-index (95% confidence interval); the RSF model had an error rate of 38.94%, or a c-index of 0.61, compared with a CPH c-index of 0.62, which indicates a medium-level prediction. Conclusion: ML is a promising toolset for improving prediction of oral cancer outcomes. However, it offers only a medium-level prediction, and additional work is needed to improve its accuracy and consistency. Additional refinements in the model may provide useful inputs for improved personalized care and improving outcomes in HNSCC patients.
Multi-threshold Attention U-Net (MTAU) Based Model for Multimodal Brain Tumor Segmentation in MRI Scans
Gliomas are one of the most frequent brain tumors and are classified into high-grade and low-grade gliomas. The segmentation of various regions, such as the tumor core and enhancing tumor, plays an important role in determining severity and prognosis. Here, we have developed a multi-threshold model based on attention U-Net for identification of various regions of the tumor in magnetic resonance imaging (MRI). We propose multi-path segmentation and build three separate models for the different regions of interest. The proposed model achieved mean Dice coefficients of 0.59, 0.72, and 0.61 for enhancing tumor, whole tumor, and tumor core, respectively, on the training dataset. The same model gave mean Dice coefficients of 0.57, 0.73, and 0.61 on the validation dataset and 0.59, 0.72, and 0.57 on the test dataset.
Analysis of dual tree M‐band wavelet transform based features for brain image classification
Ayalapogu, Ratna Raju
Pabboju, Suresh
Ramisetty, Rajeswara Rao
Magnetic Resonance in Medicine2018Journal Article, cited 1 times
Website
REMBRANDT
brain cancer
A novel adaptive momentum method for medical image classification using convolutional neural network
Aytac, U. C.
Gunes, A.
Ajlouni, N.
BMC Med Imaging2022Journal Article, cited 0 times
Website
REMBRANDT
BRAIN
COVID-19
Diagnostic Imaging
LUNG
Computed Tomography (CT)
*Adaptive momentum methods
*Backpropagation algorithm
*Convolutional neural networks
*Medical image classification
*Nonconvex optimization
BACKGROUND: AI for medical diagnosis has made a tremendous impact by applying convolutional neural networks (CNNs) to medical image classification, and momentum plays an essential role in stochastic gradient optimization algorithms for accelerating or improving the training of convolutional neural networks. In traditional optimizers in CNNs, the momentum is usually weighted by a constant. However, tuning hyperparameters for momentum can be computationally complex. In this paper, we propose a novel adaptive momentum for fast and stable convergence. METHOD: The proposed adaptive momentum rate increases or decreases based on each epoch's error changes, eliminating the need for momentum hyperparameter optimization. We tested the proposed method with three different datasets: REMBRANDT Brain Cancer, NIH Chest X-ray, and COVID-19 CT scan. We compared the performance of the novel adaptive momentum optimizer with stochastic gradient descent (SGD) and other adaptive optimizers such as Adam and RMSprop. RESULTS: The proposed method improves SGD performance by reducing classification error from 6.12% to 5.44%, and it achieved the lowest error and highest accuracy compared with other optimizers. To strengthen the outcomes of this study, we investigated the performance comparison for state-of-the-art CNN architectures with adaptive momentum. The results show that the proposed method achieved the highest accuracy, 95%, compared with state-of-the-art CNN architectures while using the same dataset. The proposed method improves convergence performance by reducing classification error and achieves high accuracy compared with other optimizers.
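A minimal sketch of the core idea, assuming a PyTorch SGD optimizer: raise the momentum while the epoch error keeps falling and lower it when the error rises. The step size and bounds are assumptions, not the paper's exact schedule.

import torch

def adjust_momentum(optimizer, prev_error, curr_error,
                    step=0.05, lo=0.1, hi=0.99):
    for group in optimizer.param_groups:
        m = group.get("momentum", 0.9)
        # error fell: accelerate; error rose: damp the momentum
        m = m + step if curr_error < prev_error else m - step
        group["momentum"] = float(min(max(m, lo), hi))

# usage inside a training loop, assuming
# optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9):
# adjust_momentum(optimizer, prev_epoch_error, epoch_error)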
Advances in medical image analysis with vision Transformers: A comprehensive review
Azad, Reza
Kazerouni, Amirhossein
Heidari, Moein
Aghdam, Ehsan Khodapanah
Molaei, Amirali
Jia, Yiwei
Jose, Abin
Roy, Rijo
Merhof, Dorit
Medical Image Analysis2023Journal Article, cited 0 times
COVID-19-NY-SBU
TCGA-KIRC-Radiogenomics
TCGA-LUAD
TCGA-LUSC
The remarkable performance of the Transformer architecture in natural language processing has recently also triggered broad interest in Computer Vision. Among other merits, Transformers are witnessed as capable of learning long-range dependencies and spatial correlations, which is a clear advantage over convolutional neural networks (CNNs), which have been the de facto standard in Computer Vision problems so far. Thus, Transformers have become an integral part of modern medical image analysis. In this review, we provide an encyclopedic review of the applications of Transformers in medical imaging. Specifically, we present a systematic and thorough review of relevant recent Transformer literature for different medical image analysis tasks, including classification, segmentation, detection, registration, synthesis, and clinical report generation. For each of these applications, we investigate the novelty, strengths and weaknesses of the different proposed strategies and develop taxonomies highlighting key properties and contributions. Further, if applicable, we outline current benchmarks on different datasets. Finally, we summarize key challenges and discuss different future research directions. In addition, we have provided cited papers with their corresponding implementations in https://github.com/mindflow-institue/Awesome-Transformer.
Deep Learning-Based Anomaly Detection for Early Cancer Detection in CT Scans
Early cancer identification is essential for patients’ prognosis and survival rates to be improved. Due to their capacity to record precise anatomical details, computed tomography (CT) scans are often utilized for cancer screening and diagnosis. However, detecting subtle anomalies indicative of early-stage cancer in CT scans can be challenging for radiologists, often leading to delayed diagnoses. This study suggests a Deep Learning-Based Anomaly diagnosis method for CT scan-based early cancer diagnosis. We use deep convolutional neural networks’ strength to automatically learn and extract beneficial characteristics from CT scans, enabling the detection of slight abnormalities that might not be visible to human observers. The proposed model is trained on a carefully curated dataset extracted from the Cancer Imaging Archive, comprising middle slices from 475 CT series obtained from 69 patients. We assess the deep learning model's performance by utilizing various metrics, such as sensitivity, specificity, and area under the receiver operating characteristic curve (AUC-ROC). Our findings show that the suggested approach outperforms conventional computer-aided diagnosis methods in detecting early cancer abnormalities with high accuracy and sensitivity. The results of this study have the potential to substantially influence clinical practice by assisting radiologists in quickly identifying cases of early-stage cancer and enabling timely and focused therapies, eventually improving patient outcomes and survival rates.
A Pre-study on the Layer Number Effect of Convolutional Neural Networks in Brain Tumor Classification
Convolutional Neural Networks significantly influenced the revolution of Artificial Intelligence and Deep Learning, and it has become a basic model for image classification processes. However, Convolutional Neural Networks can be applied in different architectures and has many other parameters that require several experiments to reach the optimal results in applications. The number of images used, the input size of the images, the number of layers, and their parameters are the main factors that directly affect the success of the models. In this study, seven CNN architectures with different convolutional layers and dense layers were applied to the Brain Tumor Progression dataset. The CNN architectures are designed by gradually decreasing and increasing the layers, and the performance results on the considered dataset have been analyzed using five-fold cross-validation. The results showed that deeper architectures in binary classification tasks could reduce the performance rates up to 7%. It has been observed that models with the lowest number of layers are more successful in sensitivity results. General results demonstrated that networks with two convolutional and fully connected layers produced superior results depending on the filter and neuron number adjustments within their layers. The results might support the researchers to determine the initial architecture in binary classification studies.
Addressing label noise in leukemia image classification using small loss approach and pLOF with weighted-average ensemble
Aziz, Md Tarek
Mahmud, S. M. Hasan
Goh, Kah Ong Michael
Nandi, Dip
Egyptian Informatics Journal2024Journal Article, cited 0 times
Website
C-NMC 2019
Algorithm Development
Pathomics
Classification
Label noise
Model uncertainty
Feature extraction
Shapley values
Ensemble learning
Explainable AI
Acute Lymphoblastic Leukemia (ALL)
Machine learning (ML) and deep learning (DL) models have been extensively explored for the early diagnosis of various cancer diseases, including Leukemia, with many of them achieving significant performance improvements comparable to those of human experts. However, challenges like limited image data, inaccurate annotations, and prediction reliability still hinder their broad implementation to establish a trustworthy computer-aided diagnosis (CAD) system. This paper introduces a novel weighted-average ensemble model for classifying Acute Lymphoblastic Leukemia, along with a reliable CAD system that combines the strengths of both ML and DL approaches. First, a variety of filtering methods are extensively analyzed to determine the most suitable image representation, with subsequent data augmentation techniques to expand the training data. Second, a modified, fine-tuned VGG-19 model is utilized as a feature extractor to extract meaningful features from the training samples. Third, a small-loss approach and probabilistic local outlier factor (pLOF) are applied to the extracted features to address the label noise issue. Fourth, we propose a weighted-average ensemble model based on the top five models as base learners, with weights calculated from their model uncertainty to ensure reliable predictions. Fifth, we calculate Shapley values based on cooperative game theory and perform feature selection with different feature combinations to determine the optimal number of features using SHAP. Finally, we integrate these strategies to develop an interpretable CAD system. This system not only predicts the disease but also generates Grad-CAM images to visualize potentially affected areas, enhancing both clarity and diagnostic insight. All of our code is provided in the following repository: https://github.com/taareek/leukemia-classification
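A minimal sketch of an uncertainty-weighted average ensemble: each base learner's predicted class probabilities are combined with weights inversely proportional to its uncertainty estimate. The exact uncertainty measure and weighting used in the paper may differ.

import numpy as np

def weighted_average_ensemble(prob_list, uncertainties):
    # prob_list: list of (n_samples, n_classes) probability arrays, one per model
    w = 1.0 / (np.asarray(uncertainties, dtype=float) + 1e-12)
    w /= w.sum()                                         # normalize the weights
    probs = np.tensordot(w, np.stack(prob_list), axes=1) # weighted mean over models
    return probs.argmax(axis=1), probs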
Hybrid optimized MRF based lung lobe segmentation and lung cancer classification using Shufflenet
B, Spoorthi
Mahesh, Shanthi
Multimedia Tools and Applications2023Journal Article, cited 1 times
Website
LIDC-IDRI
Radiomics
Lung cancer is a kind of harmful cancer type that originates from the lungs. In this research, lung lobe segmentation is carried out using a Markov Random Field (MRF)-based Artificial Hummingbird Cuckoo algorithm (AHCA). The AHCA algorithm is modelled by combining the benefits of the Artificial Hummingbird algorithm (AHA) and the Cuckoo search (CS) algorithm. Moreover, lung cancer classification is done with ShuffleNet, which is trained by the Artificial Hummingbird Firefly optimization algorithm (AHFO), an integration of AHA and the Firefly algorithm (FA). In this research, two algorithms are devised, one for segmentation and one for classification; in both, the AHA algorithm is used for updating the location. The AHA algorithm has three phases, namely foraging, guided foraging, and migration foraging, of which the guided foraging stage is selected to update the location for both segmentation and classification. The developed AHFO-based ShuffleNet scheme attained superior performance, with a testing accuracy of 0.9071, sensitivity of 0.9137, and specificity of 0.9039. The improvement of the proposed method in testing accuracy is 6.615%, 3.197%, 2.756%, and 1.764% over the existing methods. In future work, performance will be boosted by an advanced scheme for identifying the grade of disease.
OpenKBP: The open‐access knowledge‐based planning grand challenge and dataset
Babier, A.
Zhang, B.
Mahmood, R.
Moore, K. L.
Purdie, T. G.
McNiven, A. L.
Chan, T. C. Y.
Medical Physics2021Journal Article, cited 0 times
Website
Head-Neck Cetuximab
Head-Neck-PET-CT
Head-Neck-CT-Atlas
TCGA-HNSC
Radiation Therapy
Machine Learning
Contouring
Computed Tomography (CT)
PURPOSE: To advance fair and consistent comparisons of dose prediction methods for knowledge-based planning (KBP) in radiation therapy research. METHODS: We hosted OpenKBP, a 2020 AAPM Grand Challenge, and challenged participants to develop the best method for predicting the dose of contoured computed tomography (CT) images. The models were evaluated according to two separate scores: (a) dose score, which evaluates the full three-dimensional (3D) dose distributions, and (b) dose-volume histogram (DVH) score, which evaluates a set of DVH metrics. We used these scores to quantify the quality of the models based on their out-of-sample predictions. To develop and test their models, participants were given the data of 340 patients who were treated for head-and-neck cancer with radiation therapy. The data were partitioned into training (n = 200), validation (n = 40), and testing (n = 100) datasets. All participants performed training and validation with the corresponding datasets during the first (validation) phase of the Challenge. In the second (testing) phase, the participants used their model on the testing data to quantify the out-of-sample performance, which was hidden from participants and used to determine the final competition ranking. Participants also responded to a survey to summarize their models. RESULTS: The Challenge attracted 195 participants from 28 countries, and 73 of those participants formed 44 teams in the validation phase, which received a total of 1750 submissions. The testing phase garnered submissions from 28 of those teams, which represents 28 unique prediction methods. On average, over the course of the validation phase, participants improved the dose and DVH scores of their models by a factor of 2.7 and 5.7, respectively. In the testing phase one model achieved the best dose score (2.429) and DVH score (1.478), which were both significantly better than the dose score (2.564) and the DVH score (1.529) that was achieved by the runner-up models. Lastly, many of the top performing teams reported that they used generalizable techniques (e.g., ensembles) to achieve higher performance than their competition. CONCLUSION: OpenKBP is the first competition for knowledge-based planning research. The Challenge helped launch the first platform that enables researchers to compare KBP prediction methods fairly and consistently using a large open-source dataset and standardized metrics. OpenKBP has also democratized KBP research by making it accessible to everyone, which should help accelerate the progress of KBP research. The OpenKBP datasets are available publicly to help benchmark future KBP research.
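A hedged sketch of the dose score as described above: mean absolute error between predicted and reference 3D dose distributions. Restricting the comparison to a possible-dose mask is an assumption about the Challenge's exact evaluation.

import numpy as np

def dose_score(pred_dose, ref_dose, possible_dose_mask):
    # mean absolute voxel-wise dose error inside the evaluable region
    m = possible_dose_mask.astype(bool)
    return float(np.abs(pred_dose[m] - ref_dose[m]).mean())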
Analysis of Classification Methods for Diagnosis of Pulmonary Nodules in CT Images
Baboo, Capt Dr S Santhosh
Iyyapparaj, E
IOSR Journal of Electrical and Electronics Engineering2017Journal Article, cited 0 times
Website
LIDC-IDRI
Computed Tomography (CT)
LUNG
Classification
Random Forest
Computer Aided Detection (CADe)
The main aim of this work is to propose a novel computer-aided detection (CAD) system, based on contextual clustering combined with region growing, for assisting radiologists in the early identification of lung cancer from computed tomography (CT) scans. Instead of using a conventional thresholding approach, this work uses contextual clustering, which yields a more accurate segmentation of the lungs from the chest volume. Following segmentation, gray-level co-occurrence matrix (GLCM) features are extracted and then classified using three different classifiers, namely random forest, SVM, and k-NN.
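A minimal sketch of the GLCM feature extraction step using scikit-image (graycomatrix/graycoprops in recent releases; older versions spell them greycomatrix/greycoprops); the distances, angles, and chosen properties are illustrative, not the paper's configuration.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch_u8):
    # patch_u8: 2D uint8 image patch from the segmented lung region
    glcm = graycomatrix(patch_u8, distances=[1],
                        angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}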
Detection of Brain Tumour in MRI Scan Images using Tetrolet Transform and SVM Classifier
Babu, B Shoban
Varadarajan, S
Indian Journal of Science and Technology2017Journal Article, cited 1 times
Website
REMBRANDT
Classification
Support Vector Machine (SVM)
Brain
BIOMEDICAL IMAGE RETRIEVAL USING LBWP
Babu, Joyce Sarah
Mathew, Soumya
Simon, Rini
International Research Journal of Engineering and Technology2017Journal Article, cited 0 times
Website
Algorithm Development
local bit-plane wavelet pattern (LBWP)
bit-plane
wavelet
Content based image retrieval (CBIR)
Detection of Liver Tumor Using Gradient Vector Flow Algorithm
Baby, Jisha
Rajalakshmi, T.
Snekhalatha, U.
2019Book Section, cited 0 times
LungCT-Diagnosis
Liver tumor, also known as hepatic tumor, is a type of growth found in or on the liver. Identifying the tumor location can be tedious and error-prone, and requires expert study. This paper presents a segmentation technique to segment the liver tumor using the Gradient Vector Flow (GVF) snakes algorithm. Because the snakes algorithm requires images that are insensitive to noise, a Wiener filter is applied to remove the noise. The GVF snake starts its process by initially extending to create an initial boundary. The GVF forces are calculated and help drive the algorithm to stretch and bend the initial contour towards the region of interest due to the difference in intensity. The images were classified into tumor and non-tumor categories by an Artificial Neural Network classifier depending on the extracted features, which showed notable dissimilarity between normal and abnormal images.
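A rough sketch of the gradient vector flow field of Xu and Prince, computed from an edge map by fixed-point iteration; the regularization weight mu and iteration count are illustrative, and this is a generic GVF implementation rather than the paper's code.

import numpy as np
from scipy.ndimage import laplace

def gvf(edge_map, mu=0.2, iters=200):
    fy, fx = np.gradient(edge_map.astype(float))   # gradient of the edge map
    u, v = fx.copy(), fy.copy()                    # initialize the flow field
    mag2 = fx**2 + fy**2
    for _ in range(iters):
        # diffuse the field while pulling it toward the edge-map gradient
        u += mu * laplace(u) - (u - fx) * mag2
        v += mu * laplace(v) - (v - fy) * mag2
    return u, v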
Optimized convolutional neural network by firefly algorithm for magnetic resonance image classification of glioma brain tumor grade
Bacanin, Nebojsa
Bezdan, Timea
Venkatachalam, K.
Al-Turjman, Fadi
Journal of Real-Time Image Processing2021Journal Article, cited 0 times
Website
TCGA-GBM
Classification
REMBRANDT
Magnetic Resonance Imaging (MRI)
Convolutional Neural Network (CNN)
Computer Aided Diagnosis (CADx)
The most frequent brain tumor types are gliomas. Magnetic resonance imaging helps in the diagnosis of brain tumors, yet diagnosing glioma in its early stages is difficult even for experienced specialists. A reliable and efficient system for magnetic resonance image interpretation is therefore required to help the doctor make the diagnosis in early stages. Convolutional neural networks, which have demonstrated excellent performance in image classification tasks, can be used to classify the grade of a glioma. Tuning the hyperparameters of a convolutional network is a very important issue for achieving high classification accuracy; however, this task takes a lot of computational time. Addressing this issue, in this manuscript we propose a metaheuristic method, based on a modified firefly algorithm, to automatically find near-optimal values of convolutional neural network hyperparameters, and develop a system for automatic image classification of glioma brain tumor grades from magnetic resonance imaging. First, we tested the proposed modified algorithm on a set of standard unconstrained benchmark functions and compared its performance to the original algorithm and other modified variants. Upon verifying the efficiency of the proposed approach in general, it was applied to hyperparameter optimization of the convolutional neural network. The IXI dataset and The Cancer Imaging Archive collections were used for evaluation, and the method was additionally evaluated on axial brain tumor images. The experimental results and comparative analysis with other state-of-the-art algorithms tested under the same conditions show the robustness and efficiency of the proposed method.
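A generic firefly optimizer of the kind the paper modifies can be sketched in a few lines: each dimmer firefly moves toward every brighter one with distance-decayed attractiveness plus a random step. `validation_error` is a hypothetical stand-in that would train the CNN with the given hyperparameters and return its validation error; all constants are illustrative.

```python
import numpy as np

def firefly(objective, bounds, n=15, iters=50, beta0=1.0, gamma=1.0, alpha=0.2):
    """Minimise `objective` over box `bounds` with the firefly heuristic."""
    lo, hi = np.array(bounds, dtype=float).T
    X = lo + np.random.rand(n, len(lo)) * (hi - lo)
    F = np.array([objective(x) for x in X])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if F[j] < F[i]:                      # j is brighter: move i toward j
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) + alpha * (np.random.rand(len(lo)) - 0.5)
                    X[i] = np.clip(X[i], lo, hi)
                    F[i] = objective(X[i])
    return X[np.argmin(F)]

# e.g. tune (log10 learning rate, dropout); objective = CNN validation error
best = firefly(lambda x: validation_error(lr=10 ** x[0], dropout=x[1]),
               bounds=[(-5, -1), (0.0, 0.6)])
```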
Virtual clinical trial for task-based evaluation of a deep learning synthetic mammography algorithm
Image processing algorithms based on deep learning techniques are being developed for a wide range of medical applications. Processed medical images are typically evaluated with the same kind of image similarity metrics used for natural scenes, disregarding the medical task for which the images are intended. We propose a computational framework to estimate the clinical performance of image processing algorithms using virtual clinical trials. The proposed framework may provide an alternative method for regulatory evaluation of non-linear image processing algorithms. To illustrate this application of virtual clinical trials, we evaluated three algorithms to compute synthetic mammograms from digital breast tomosynthesis (DBT) scans based on convolutional neural networks previously used for denoising low dose computed tomography scans. The inputs to the networks were one or more noisy DBT projections, and the networks were trained to minimize the difference between the output and the corresponding high dose mammogram. DBT and mammography images simulated with the Monte Carlo code MC-GPU using realistic breast phantoms were used for network training and validation. The denoising algorithms were tested in a virtual clinical trial by generating 3000 synthetic mammograms from the public VICTRE dataset of simulated DBT scans. The detectability of a calcification cluster and a spiculated mass present in the images was calculated using an ensemble of 30 computational channelized Hotelling observers. The signal detectability results, which took into account anatomic and image reader variability, showed that the visibility of the mass was not affected by the post-processing algorithm, but that the resulting slight blurring of the images severely impacted the visibility of the calcification cluster. The evaluation of the algorithms using the pixel-based metrics peak signal to noise ratio and structural similarity in image patches was not able to predict the reduction in performance in the detectability of calcifications. These two metrics are computed over the whole image and do not consider any particular task, and might not be adequate to estimate the diagnostic performance of the post-processed images.
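The whole-image fidelity metrics the study found inadequate for task-based assessment are one-liners in scikit-image; a sketch follows, with `reference` and `processed` assumed to be matching 2-D float arrays.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Whole-image fidelity scores: the study shows these do NOT track
# calcification detectability after post-processing.
rng = reference.max() - reference.min()
psnr = peak_signal_noise_ratio(reference, processed, data_range=rng)
ssim = structural_similarity(reference, processed, data_range=rng)
```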
Mammography and breast tomosynthesis simulator for virtual clinical trials
Badal, Andreu
Sharma, Diksha
Graff, Christian G.
Zeng, Rongping
Badano, Aldo
Computer Physics Communications2021Journal Article, cited 0 times
Website
VICTRE
mammography
Breast
Computer modeling and simulations are increasingly being used to predict the clinical performance of x-ray imaging devices in silico, and to generate synthetic patient images for training and testing of machine learning algorithms. We present a detailed description of the computational models implemented in the open source GPU-accelerated Monte Carlo x-ray imaging simulation code MC-GPU. This code, originally developed to simulate radiography and computed tomography, has been extended to replicate a commercial full-field digital mammography and digital breast tomosynthesis (DBT) device. The code was recently used to image 3000 virtual breast models with the aim of reproducing in silico a clinical trial used in support of the regulatory approval of DBT as a replacement of mammography for breast cancer screening. The updated code implements a more realistic x-ray source model (extended 3D focal spot, tomosynthesis acquisition trajectory, tube motion blurring) and an improved detector model (direct-conversion Selenium detector with depth-of-interaction effects, fluorescence tracking, electronic noise and anti-scatter grid). The software uses a high resolution voxelized geometry model to represent the breast anatomy. To reduce the GPU memory requirements, the code stores the voxels in memory within a binary tree structure. The binary tree is an efficient compression mechanism because many voxels with the same composition are combined in common tree branches while preserving random access to the phantom composition at any location. A delta scattering ray-tracing algorithm which does not require computing ray-voxel interfaces is used to minimize memory access. Multiple software verification and validation steps intended to establish the credibility of the implemented computational models are reported. The software verification was done using a digital quality control phantom and an ideal pinhole camera. The validation was performed by reproducing standard bench testing experiments used in clinical practice and comparing with experimental measurements. A sensitivity study intended to assess the robustness of the simulated results to variations in some of the input parameters was performed using an in silico clinical trial pipeline with simulated lesions and mathematical observers. We show that MC-GPU is able to simulate x-ray projections that incorporate many of the sources of variability found in clinical images, and that the simulated results are robust to some uncertainty in the input parameters. Limitations of the implemented computational models are discussed.
Program summary
Program title: MCGPU_VICTRE
CPC Library link to program files: http://dx.doi.org/10.17632/k5x2bsf27m.1
Licensing provisions: CC0 1.0
Programming language: C (with NVIDIA CUDA extensions)
Nature of problem: The health risks associated with ionizing radiation impose a limit to the amount of clinical testing that can be done with x-ray imaging devices. In addition, radiation dose cannot be directly measured inside the body. For these reasons, a computational replica of an x-ray imaging device that simulates radiographic images of synthetic anatomical phantoms is of great value for device evaluation. The simulated radiographs and dosimetric estimates can be used for system design and optimization, task-based evaluation of image quality, machine learning software training, and in silico imaging trials.
Solution method: Computational models of a mammography x-ray source and detector have been implemented. X-ray transport through matter is simulated using Monte Carlo methods customized for parallel execution in multiple Graphics Processing Units. The input patient anatomy is represented by voxels, which are efficiently stored in the video memory using a new binary tree structure compression mechanism.
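The voxel-compression idea can be illustrated with a toy recursive structure: uniform spans collapse to a single leaf, and random access descends by midpoint without decompressing. This 1-D Python sketch is an illustration of the concept only, not the CUDA implementation.

```python
def build_tree(vox):
    """Recursively merge spans of identical composition into leaves.

    vox: sequence of material IDs along one axis of the phantom.
    Uniform regions collapse to a single leaf; lookup stays O(log n).
    """
    if len(set(vox)) == 1:
        return vox[0]                        # leaf: one material for the span
    mid = len(vox) // 2
    return (build_tree(vox[:mid]), build_tree(vox[mid:]))

def lookup(node, i, n):
    """Return the material at flat index i without decompressing."""
    while isinstance(node, tuple):
        mid = n // 2
        node, i, n = (node[0], i, mid) if i < mid else (node[1], i - mid, n - mid)
    return node
```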
Diagnosing Prostate Cancer: An Implementation of Deep Machine Learning Fusion Network in MRI Using a Transfer Learning Approach
Of all the terminal cancers that plague men, prostate cancer remains one of the most prevalent. Data show that prostate cancer is the second leading cause of cancer death worldwide among men, and about 11% of men have prostate cancer at some time during their lives. This research develops an approach intended to improve the precision of prostate cancer diagnosis. We apply a transfer learning approach to a deep learning model and compare classification accuracy across machine learning classifiers. In addition, we evaluate the individual classification performance of the pre-trained deep learning network VGG16 with several evaluation measures, including precision, recall, F1 score, and loss versus accuracy. With the transfer learning approach, we recorded the best performance using the VGG16 architecture compared with other popular deep learning models such as MobileNet and ResNet. The convolutional blocks and dense layers of the VGG16 architecture are used to extract features from our image dataset; those features are then forwarded to machine learning classifiers to produce the final classification result, which achieved significant accuracy.
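The feature-extraction scheme the abstract describes, a frozen VGG16 convolutional base feeding a classical classifier, is easily sketched; `images` and `labels` are assumed preprocessed arrays, and the choice of SVM here is only one of the several classifiers such studies compare.

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.svm import SVC

# Frozen VGG16 convolutional base as a fixed feature extractor
base = VGG16(weights="imagenet", include_top=False, pooling="avg")

# images: (N, 224, 224, 3) float array; labels: (N,) class labels
feats = base.predict(preprocess_input(images))   # (N, 512) feature vectors
clf = SVC(kernel="rbf").fit(feats, labels)       # any sklearn classifier works
```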
Automated Classification of Axial CT Slices Using Convolutional Neural Network
Badura, Paweł
Juszczyk, Jan
Bożek, Paweł
Smoliński, Michał
2020Book Section, cited 0 times
Head-Neck Cetuximab
LIDC-IDRI
Machine Learning
This study addresses the automated recognition of axial computed tomography (CT) slice content in terms of a predefined region of the body, for computer-aided diagnosis purposes. A 23-layer convolutional neural network was designed, trained, and tested for axial CT slice classification. The system was validated over 120 CT studies from publicly available databases containing 21,704 images in two experiments with different definitions of classes. The classification accuracy reached 93.6% and 97.0% for the database partitions into 9 and 5 classes, respectively.
Survival time prediction by integrating cox proportional hazards network and distribution function network
Baek, Eu-Tteum
Yang, Hyung Jeong
Kim, Soo Hyung
Lee, Guee Sang
Oh, In-Jae
Kang, Sae-Ryung
Min, Jung-Joon
BMC Bioinformatics2021Journal Article, cited 0 times
Website
NSCLC-Radiomics
Cox proportional hazard model
Deep Learning
BACKGROUND: The Cox proportional hazards model is commonly used to predict the hazard ratio, which is the risk or probability of occurrence of an event of interest. However, the Cox proportional hazards model cannot directly generate an individual survival time. To do this, survival analysis in the Cox model converts the hazard ratio to survival times through distributions such as the exponential, Weibull, Gompertz, or log-normal distributions; in other words, to generate the survival time, the Cox model has to select a specific distribution over time. RESULTS: This study presents a method to predict the survival time by integrating a hazard network and a distribution function network. A Cox proportional hazards network, adapted from DeepSurv, predicts the hazard ratio, and a distribution function network generates the survival time. To evaluate the performance of the proposed method, a new evaluation metric that calculates the intersection over union between the predicted curve and the ground truth was proposed. To further understand significant prognostic factors, we use the 1D gradient-weighted class activation mapping method to highlight the network activations as a heat-map visualization over the input data. The performance of the proposed method was experimentally verified and the results compared to other existing methods. CONCLUSIONS: Our results confirmed that the combination of the two networks, a Cox proportional hazards network and a distribution function network, can effectively generate accurate survival times.
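The hazard branch follows DeepSurv, whose training objective is the negative log Cox partial likelihood; a numpy sketch of that loss is shown below under the common no-ties assumption.

```python
import numpy as np

def neg_log_partial_likelihood(risk, time, event):
    """DeepSurv-style Cox loss.

    risk:  predicted log-hazard per subject
    time:  observed time per subject
    event: 1 if the event occurred, 0 if censored
    """
    order = np.argsort(-time)                    # descending time
    risk, event = risk[order], event[order]
    log_cumsum = np.logaddexp.accumulate(risk)   # log-sum-exp over each risk set
    return -np.sum((risk - log_cumsum)[event.astype(bool)])
```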
Technical and Clinical Factors Affecting Success Rate of a Deep Learning Method for Pancreas Segmentation on CT
Bagheri, Mohammad Hadi
Roth, Holger
Kovacs, William
Yao, Jianhua
Farhadi, Faraz
Li, Xiaobai
Summers, Ronald M
Acad Radiol2019Journal Article, cited 0 times
Website
Pancreas-CT
PURPOSE: Accurate pancreas segmentation has application in surgical planning, assessment of diabetes, and detection and analysis of pancreatic tumors. Factors that affect pancreas segmentation accuracy have not been previously reported. The purpose of this study is to identify technical and clinical factors that adversely affect the accuracy of pancreas segmentation on CT. METHOD AND MATERIALS: In this IRB- and HIPAA-compliant study, a deep convolutional neural network was used for pancreas segmentation in a publicly available archive of 82 portal-venous phase abdominal CT scans of 53 men and 29 women. The accuracies of the segmentations were evaluated by the Dice similarity coefficient (DSC). The DSC was then correlated with demographic and clinical data (age, gender, height, weight, body mass index), CT technical factors (image pixel size, slice thickness, presence or absence of oral contrast), and CT imaging findings (volume and attenuation of pancreas, visceral abdominal fat, and CT attenuation of the structures within a 5 mm neighborhood of the pancreas). RESULTS: The average DSC was 78% +/- 8%. Factors that were statistically significantly correlated with DSC included body mass index (r = 0.34, p < 0.01), visceral abdominal fat (r = 0.51, p < 0.0001), volume of the pancreas (r = 0.41, p = 0.001), standard deviation of CT attenuation within the pancreas (r = 0.30, p = 0.01), and median and average CT attenuation in the immediate neighborhood of the pancreas (r = -0.53, p < 0.0001 and r = -0.52, p < 0.0001). There were no significant correlations between the DSC and the height, gender, or mean CT attenuation of the pancreas. CONCLUSION: Increased visceral abdominal fat and accumulation of fat within or around the pancreas are major factors associated with more accurate segmentation of the pancreas. Potential applications of our findings include assessment of pancreas segmentation difficulty of a particular scan or dataset and identification of methods that work better for more challenging pancreas segmentations.
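The Dice similarity coefficient used throughout this study has the standard definition 2|A∩B| / (|A| + |B|); a minimal sketch:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())
```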
Brain Tumor Segmentation Based on Zernike Moments, Enhanced Ant Lion Optimization, and Convolutional Neural Network in MRI Images
Gliomas, which form in the glial cells of the spinal cord and brain, are the most aggressive and common kinds of brain tumors (intra-axial brain tumors) due to their rapid progression and infiltrative nature. Recognizing tumor margins from healthy tissue is still an arduous and time-consuming task in the clinical routine. In this study, a robust and efficient machine learning-based pipeline is suggested for brain tumor segmentation. We employ four MRI modalities, namely Flair, T1, T2, and T1ce, to increase the final segmentation accuracy. First, eight feature maps are extracted from each modality using the Zernike moments approach. The Zernike moments create a feature map using two parameters, n and m, so by changing these values we are able to generate different sets of edge feature maps. Then, eight edge feature maps for each modality are selected to produce a final feature map. Next, the four original images are encoded into four new images that represent more unique and key information using the Local Directional Number Pattern (LDNP). As different encoded images lead to different final results and accuracies, Enhanced Ant Lion Optimization (EALO) is employed to find the best possible set of feature maps for creating the best possible encoded image. Finally, a CNN model that accepts four input patches is utilized to explore significant details of the brain tissue more efficiently. Overall, the suggested framework outperforms the baseline methods regarding Dice score and Recall.
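Patch-wise Zernike descriptors of the kind described can be computed with mahotas; the sliding-window layout and parameter values below are illustrative assumptions, not the authors' exact (n, m) schedule.

```python
import mahotas
import numpy as np

def zernike_map(img, patch=32, radius=16, degree=8):
    """Describe each non-overlapping patch by its Zernike moments;
    varying `degree`/`radius` yields different feature maps."""
    h, w = img.shape
    feats = [mahotas.features.zernike_moments(
                 img[r:r + patch, c:c + patch], radius, degree=degree)
             for r in range(0, h - patch + 1, patch)
             for c in range(0, w - patch + 1, patch)]
    return np.array(feats)
```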
Imaging genomics in cancer research: limitations and promises
Bai, Harrison X
Lee, Ashley M
Yang, Li
Zhang, Paul
Davatzikos, Christos
Maris, John M
Diskin, Sharon J
The British journal of radiology2016Journal Article, cited 28 times
Website
Radiogenomics
Feature fusion Siamese network for breast cancer detection comparing current and prior mammograms
Bai, J.
Jin, A.
Wang, T.
Yang, C.
Nabavi, S.
Med Phys2022Journal Article, cited 0 times
CBIS-DDSM
CMMD
BCS-DBT
BREAST
Automatic detection
Artificial Intelligence
*Breast Neoplasms/diagnostic imaging
Female
Humans
Machine Learning
Mammography/methods
Neural Networks, Computer
Siamese
deep learning
prior mammogram
PURPOSE: Automatic detection of very small and nonmass abnormalities from mammogram images has remained challenging. In clinical practice, radiologists commonly not only screen the mammogram images obtained during the examination for each patient, but also compare them with previous mammogram images to make a clinical decision. To design an artificial intelligence (AI) system that mimics radiologists for better cancer detection, in this work we proposed an end-to-end enhanced Siamese convolutional neural network to detect breast cancer using previous-year and current-year mammogram images. METHODS: The proposed Siamese-based network uses high-resolution mammogram images and fuses features of pairs of previous-year and current-year mammogram images to predict cancer probabilities. The proposed approach is developed based on the concept of one-shot learning, which learns the abnormal differences between current and prior images instead of abnormal objects, and as a result can perform better with small sample-size data sets. We developed two variants of the proposed network. In the first model, to fuse the features of current and previous images, we designed an enhanced distance learning network that considers not only the overall distance, but also the pixel-wise distances between the features. In the other model, we concatenated the features of current and previous images to fuse them. RESULTS: We compared the performance of the proposed models with those of some baseline models that use current images only (ResNet and VGG) and also use current and prior images (long short-term memory [LSTM] and vanilla Siamese) in terms of accuracy, sensitivity, precision, F1 score, and area under the curve (AUC). Results show that the proposed models outperform the baseline models and the proposed model with the distance learning network performs the best (accuracy: 0.92, sensitivity: 0.93, precision: 0.91, specificity: 0.91, F1: 0.92 and AUC: 0.95). CONCLUSIONS: Integrating prior mammogram images improves automatic cancer classification, especially for very small and nonmass abnormalities. For classification models that integrate current and prior mammogram images, using an enhanced and effective distance learning network can advance the performance of the models.
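The core Siamese idea, one shared encoder applied to both time points with their feature difference classified, fits in a short PyTorch module; this sketch uses a toy encoder and a plain absolute difference rather than the paper's enhanced pixel-wise distance network.

```python
import torch
import torch.nn as nn

class SiameseFusion(nn.Module):
    """Shared encoder for prior/current mammograms; classify their
    feature difference (a simple stand-in for the distance network)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32, 1)

    def forward(self, current, prior):
        d = torch.abs(self.encoder(current) - self.encoder(prior))
        return torch.sigmoid(self.head(d))   # cancer probability
```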
A Novel Framework for Improving Pulse-Coupled Neural Networks With Fuzzy Connectedness for Medical Image Segmentation
Bai, Peirui
Yang, Kai
Min, Xiaolin
Guo, Ziyang
Li, Chang
Fu, Yingxia
Han, Chao
Lu, Xiang
Liu, Qingyi
IEEE Access2020Journal Article, cited 0 times
TCGA-LIHC
Machine Learning
A pulse-coupled neural network (PCNN) is a promising image segmentation approach that requires no training. However, it is challenging to successfully apply a PCNN to medical image segmentation due to common but difficult scenarios such as irregular object shapes, blurred boundaries, and intensity inhomogeneity. To improve this situation, a novel framework incorporating fuzzy connectedness (FC) is proposed. First, a comparative study of the traditional PCNN models is carried out to analyze the framework and firing mechanism. Then, the characteristic matrix of fuzzy connectedness (CMFC) is presented for the first time. The CMFC can provide more intensity information and spatial relationships at the pixel level, which is helpful for producing a more reasonable firing mechanism in the PCNN models. Third, by integrating the CMFC into the PCNN framework models, a construction scheme for FC-PCNN models is designed. To illustrate this concept, a general solution that can be applied to different PCNN models is developed. Next, the segmentation performance of the proposed FC-PCNN models is evaluated by comparison with the traditional PCNN models, traditional segmentation methods, and deep learning methods. The test images include synthetic and real medical images from the Internet and three public medical image datasets. The quantitative and visual comparative analysis demonstrates that the proposed FC-PCNN models outperform the traditional PCNN models and the traditional segmentation methods, and achieve competitive performance relative to the deep learning methods. In addition, the proposed FC-PCNN models have a favorable capability to eliminate interference from surrounding artifacts.
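For reference, the traditional PCNN that the FC-PCNN framework builds on iterates five coupled update equations; a minimal numpy sketch follows (the constants are illustrative, and the paper's contribution replaces this firing mechanism with one driven by the fuzzy-connectedness matrix).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def pcnn(S, steps=10, beta=0.2, aF=0.1, aL=1.0, aT=0.3, VF=0.5, VL=0.2, VT=20.0):
    """Minimal pulse-coupled neural network over an image S scaled to [0, 1]."""
    F = L = Y = np.zeros_like(S)
    T = np.ones_like(S)
    fired = np.zeros_like(S)
    for _ in range(steps):
        W = uniform_filter(Y, size=3)        # linking input from neighbours
        F = np.exp(-aF) * F + VF * W + S     # feeding channel
        L = np.exp(-aL) * L + VL * W         # linking channel
        U = F * (1 + beta * L)               # internal activity
        Y = (U > T).astype(float)            # pulse output
        T = np.exp(-aT) * T + VT * Y         # dynamic threshold
        fired = np.maximum(fired, Y)         # record which pixels ever fired
    return fired
```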
Development and validation of a radiomic prediction model for TACC3 expression and prognosis in non-small cell lung cancer using contrast-enhanced CT imaging
Bai, Weichao
Zhao, Xinhan
Ning, Qian
Translational Oncology2025Journal Article, cited 0 times
Website
NSCLC Radiogenomics
Overall Survival Prediction in Glioblastoma With Radiomic Features Using Machine Learning
Baid, Ujjwal
Rane, Swapnil U.
Talbar, Sanjay
Gupta, Sudeep
Thakur, Meenakshi H.
Moiyadi, Aliasgar
Mahajan, Abhishek
Frontiers in Computational Neuroscience2020Journal Article, cited 0 times
BraTS-TCGA-GBM
Glioblastoma is a WHO grade IV brain tumor, which leads to poor overall survival (OS) of patients. For precise surgical and treatment planning, OS prediction of glioblastoma (GBM) patients is highly desired by clinicians and oncologists. Radiomic research attempts to predict disease prognosis, thus providing beneficial information for personalized treatment, from a variety of imaging features extracted from multiple MR images. In this study, first-order, intensity-based, volume- and shape-based, and textural radiomic features are extracted from fluid-attenuated inversion recovery (FLAIR) and T1ce MRI data. The region of interest is further decomposed with the stationary wavelet transform using low-pass and high-pass filtering. Radiomic features are then extracted from these decomposed images, which helps in acquiring directional information. The efficiency of the proposed algorithm is evaluated on the Brain Tumor Segmentation (BraTS) challenge training, validation, and test datasets. The proposed approach achieved 0.695, 0.571, and 0.558 on the BraTS training, validation, and test datasets, respectively. The proposed approach secured the third position in the BraTS 2018 challenge for the OS prediction task.
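The wavelet-radiomics step can be sketched with PyWavelets: decompose the ROI with an undecimated (stationary) transform and compute first-order statistics per sub-band. The wavelet choice and the statistics below are illustrative, not the study's full feature panel.

```python
import numpy as np
import pywt

def swt_firstorder(roi, wavelet="db1", level=1):
    """Stationary wavelet decomposition of a 2-D ROI slice followed by
    simple first-order statistics on every sub-band.

    roi: 2-D array whose side lengths are divisible by 2**level.
    """
    feats = []
    for cA, (cH, cV, cD) in pywt.swt2(roi, wavelet, level=level):
        for band in (cA, cH, cV, cD):
            feats += [band.mean(), band.std(), np.abs(band).sum()]
    return np.array(feats)
```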
Brain Tumor Segmentation with Cascaded Deep Convolutional Neural Network
Cancer is the second leading cause of death globally and was responsible for an estimated 9.6 million deaths in 2018. Approximately 70% of deaths from cancer occur in low- and middle-income countries. One defining feature of cancer is the rapid creation of abnormal cells that grow uncontrollably, causing tumors. Gliomas are brain tumors that arise from the glial cells in the brain and comprise 80% of all malignant brain tumors. Accurate delineation of tumor cells from healthy tissue is important for precise treatment planning. Because of the differing forms, shapes, and sizes of tumor tissue and its similarity to the rest of the brain, segmentation of glial tumors is challenging. In this study we propose a fully automatic two-step approach for glioblastoma (GBM) brain tumor segmentation with a cascaded U-Net. Training patches are extracted from 335 cases from the Brain Tumor Segmentation (BraTS) Challenge, and results are validated on 125 patients. The proposed approach is evaluated quantitatively in terms of Dice Similarity Coefficient (DSC) and Hausdorff95 distance.
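The Hausdorff95 metric reported here is the 95th percentile of symmetric surface distances, which a k-d tree computes directly; a sketch over surface-voxel coordinate arrays:

```python
import numpy as np
from scipy.spatial import cKDTree

def hausdorff95(pred_pts, truth_pts):
    """95th-percentile symmetric Hausdorff distance between two point sets
    (e.g. surface voxel coordinates of predicted and reference masks)."""
    d_pt = cKDTree(truth_pts).query(pred_pts)[0]   # pred -> truth distances
    d_tp = cKDTree(pred_pts).query(truth_pts)[0]   # truth -> pred distances
    return max(np.percentile(d_pt, 95), np.percentile(d_tp, 95))
```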
A Novel Approach for Fully Automatic Intra-Tumor Segmentation With 3D U-Net Architecture for Gliomas
Baid, Ujjwal
Talbar, Sanjay
Rane, Swapnil
Gupta, Sudeep
Thakur, Meenakshi H.
Moiyadi, Aliasgar
Sable, Nilesh
Akolkar, Mayuresh
Mahajan, Abhishek
Frontiers in Computational Neuroscience2020Journal Article, cited 0 times
BraTS-TCGA-GBM
Purpose: Gliomas are the most common primary brain malignancies, with varying degrees of aggressiveness and prognosis. Understanding of tumor biology and intra-tumor heterogeneity is necessary for planning personalized therapy and predicting response to therapy. Accurate tumoral and intra-tumoral segmentation on MRI is the first step toward understanding the tumor biology through computational methods. The purpose of this study was to design a segmentation algorithm and evaluate its performance on pre-treatment brain MRIs obtained from patients with gliomas. Materials and Methods: In this study, we have designed a novel 3D U-Net architecture that segments various radiologically identifiable sub-regions like edema, enhancing tumor, and necrosis. A weighted patch extraction scheme from the tumor border regions is proposed to address the problem of class imbalance between tumor and non-tumorous patches. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. The Deep Convolutional Neural Network (DCNN) based architecture is trained on 285 patients, validated on 66 patients, and tested on 191 patients with glioma from the Brain Tumor Segmentation (BraTS) 2018 challenge dataset. Three-dimensional patches are extracted from the multi-channel BraTS training dataset to train the 3D U-Net architecture. The efficacy of the proposed approach is also tested on an independent dataset of 40 patients with high-grade glioma from our tertiary cancer center. Segmentation results are assessed in terms of Dice score, sensitivity, specificity, and Hausdorff95 distance. Results: Our proposed architecture achieved Dice scores of 0.88, 0.83, and 0.75 for the whole tumor, tumor core, and enhancing tumor, respectively, on the BraTS validation dataset and 0.85, 0.77, 0.67 on the test dataset. The results were similar on the independent patients' dataset from our hospital, achieving Dice scores of 0.92, 0.90, and 0.81 for the whole tumor, tumor core, and enhancing tumor, respectively. Conclusion: The results of this study show the potential of patch-based 3D U-Net for accurate intra-tumor segmentation. From experiments, it is observed that the weighted patch-based segmentation approach gives comparable performance to the pixel-based approach when there is a thin boundary between tumor subparts.
2017Conference Proceedings, cited 2030 times
Website
Algorithm Development
BraTS
brain
glioma
glioma sub-region segmentation
brain tumors
image mapping into colors
clinical decision support
Radiomics
Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features
Bakas, Spyridon
Akbari, Hamed
Sotiras, Aristeidis
Bilello, Michel
Rozycki, Martin
Kirby, Justin S.
Freymann, John B.
Farahani, Keyvan
Davatzikos, Christos
Scientific Data2017Journal Article, cited 1036 times
Website
TCGA-GBM
TCGA-LGG
Radiomic feature
Segmentation
Gliomas belong to a group of central nervous system tumors, and consist of various sub-regions. Gold standard labeling of these sub-regions in radiographic imaging is essential for both clinical and computational studies, including radiomic and radiogenomic analyses. Towards this end, we release segmentation labels and radiomic features for all pre-operative multimodal magnetic resonance imaging (MRI) (n=243) of the multi-institutional glioma collections of The Cancer Genome Atlas (TCGA), publicly available in The Cancer Imaging Archive (TCIA). Pre-operative scans were identified in both glioblastoma (TCGA-GBM, n=135) and low-grade-glioma (TCGA-LGG, n=108) collections via radiological assessment. The glioma sub-region labels were produced by an automated state-of-the-art method and manually revised by an expert board-certified neuroradiologist. An extensive panel of radiomic features was extracted based on the manually-revised labels. This set of labels and features should enable i) direct utilization of the TCGA/TCIA glioma collections towards repeatable, reproducible and comparative quantitative studies leading to new predictive, prognostic, and diagnostic assessments, as well as ii) performance evaluation of computer-aided segmentation methods, and comparison to our state-of-the-art method.
The University of Pennsylvania glioblastoma (UPenn-GBM) cohort: advanced MRI, clinical, genomics, & radiomics
Bakas, S.
Sako, C.
Akbari, H.
Bilello, M.
Sotiras, A.
Shukla, G.
Rudie, J. D.
Santamaria, N. F.
Kazerooni, A. F.
Pati, S.
Rathore, S.
Mamourian, E.
Ha, S. M.
Parker, W.
Doshi, J.
Baid, U.
Bergman, M.
Binder, Z. A.
Verma, R.
Lustig, R. A.
Desai, A. S.
Bagley, S. J.
Mourelatos, Z.
Morrissette, J.
Watt, C. D.
Brem, S.
Wolf, R. L.
Melhem, E. R.
Nasrallah, M. P.
Mohan, S.
O'Rourke, D. M.
Davatzikos, C.
Sci Data2022Journal Article, cited 0 times
UPENN-GBM
Magnetic Resonance Imaging (MRI)
radiomics
Genomics
MRI
Glioblastoma is the most common aggressive adult brain tumor. Numerous studies have reported results from either private institutional data or publicly available datasets. However, current public datasets are limited in terms of: (a) number of subjects, (b) lack of consistent acquisition protocol, (c) data quality, or (d) accompanying clinical, demographic, and molecular information. Toward alleviating these limitations, we contribute the "University of Pennsylvania Glioblastoma Imaging, Genomics, and Radiomics" (UPenn-GBM) dataset, which describes the currently largest publicly available comprehensive collection of 630 patients diagnosed with de novo glioblastoma. The UPenn-GBM dataset includes (a) advanced multi-parametric magnetic resonance imaging scans acquired during routine clinical practice at the University of Pennsylvania Health System, (b) accompanying clinical, demographic, and molecular information, (c) perfusion and diffusion derivative volumes, (d) computationally-derived and manually-revised expert annotations of tumor sub-regions, as well as (e) quantitative imaging (also known as radiomic) features corresponding to each of these regions. This collection describes our contribution towards repeatable, reproducible, and comparative quantitative studies leading to new predictive, prognostic, and diagnostic assessments.
GLISTRboost: Combining Multimodal MRI Segmentation, Registration, and Biophysical Tumor Growth Modeling with Gradient Boosting Machines for Glioma Segmentation.
Bakas, S.
Zeng, K.
Sotiras, A.
Rathore, S.
Akbari, H.
Gaonkar, B.
Rozycki, M.
Pati, S.
Davatzikos, C.
Brainlesion2016Journal Article, cited 49 times
Website
Algorithm Development
Challenge
Segmentation
BRAIN
BraTS
Lower-grade glioma (LGG)
Glioblastoma Multiforme (GBM)
We present an approach for segmenting low- and high-grade gliomas in multimodal magnetic resonance imaging volumes. The proposed approach is based on a hybrid generative-discriminative model. Firstly, a generative approach based on an Expectation-Maximization framework that incorporates a glioma growth model is used to segment the brain scans into tumor, as well as healthy tissue labels. Secondly, a gradient boosting multi-class classification scheme is used to refine tumor labels based on information from multiple patients. Lastly, a probabilistic Bayesian strategy is employed to further refine and finalize the tumor segmentation based on patient-specific intensity statistics from the multiple modalities. We evaluated our approach in 186 cases during the training phase of the BRAin Tumor Segmentation (BRATS) 2015 challenge and report promising results. During the testing phase, the algorithm was additionally evaluated in 53 unseen cases, achieving the best performance among the competing methods.
VGG16 Feature Extractor with Extreme Gradient Boost Classifier for Pancreas Cancer Prediction
Bakasa, W.
Viriri, S.
J Imaging2023Journal Article, cited 29 times
Website
Pancreas-CT
Vgg16
XGBoost
classification
Computed Tomography (CT)
feature extraction
The prognosis of patients with pancreatic ductal adenocarcinoma (PDAC) is greatly improved by an early and accurate diagnosis. Several studies have created automated methods to forecast PDAC development utilising various medical imaging modalities, generally addressing the classification, segmentation, or grading of many cancer types, including pancreatic cancer, with conventional machine learning techniques and hand-engineered characteristics. This study uses deep learning techniques to identify PDAC from computerised tomography (CT) imaging. This work proposes the hybrid model VGG16-XGBoost (a VGG16 backbone feature extractor with an Extreme Gradient Boosting classifier) for PDAC images. The proposed hybrid model performs well, obtaining an accuracy of 0.97 and a weighted F1 score of 0.97 for the dataset under study. The experimental validation of the VGG16-XGBoost model uses The Cancer Imaging Archive (TCIA) public access dataset, which has pancreas CT images. The results of this study can be extremely helpful for PDAC diagnosis from CT pancreas images, categorising them into the tumour (T), node (N), and metastases (M) (TNM) staging system class labels T0, T1, T2, T3, and T4.
Predicting Lung Cancer Survival Time Using Deep Learning Techniques
Lung cancer is one of the most commonly diagnosed cancers. Most studies find that lung cancer patients have a survival time of up to 5 years after the cancer is found. An accurate prognosis is the most critical aspect of the clinical decision-making process for patients: predicting patients' survival time helps healthcare professionals make treatment recommendations based on the prediction. In this paper, we used various deep learning methods to predict the survival time, in days, of Non-Small Cell Lung Cancer (NSCLC) patients, evaluated on a clinical and radiomics dataset. The dataset was extracted from computerized tomography (CT) images and contains data for 300 patients. The concordance index (C-index) was used to evaluate the models. We applied several deep learning approaches; the best accuracy gained is 70.05% on the OWKIN task using a Multilayer Perceptron (MLP), which outperforms the baseline model provided by the OWKIN task organizers.
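The C-index used for evaluation counts correctly ordered comparable pairs; a pure-numpy sketch (assuming higher predictions mean longer predicted survival, with ties counting one half):

```python
import numpy as np

def concordance_index(time, pred, event):
    """Fraction of comparable pairs whose predicted ordering matches
    the observed survival ordering."""
    num = den = 0.0
    for i in range(len(time)):
        if not event[i]:
            continue                      # i must be an observed event
        for j in range(len(time)):
            if time[j] > time[i]:         # j survived longer than i
                den += 1
                num += (pred[j] > pred[i]) + 0.5 * (pred[j] == pred[i])
    return num / den
```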
A radiogenomic dataset of non-small cell lung cancer
Bakr, Shaimaa
Gevaert, Olivier
Echegaray, Sebastian
Ayers, Kelsey
Zhou, Mu
Shafiq, Majid
Zheng, Hong
Benson, Jalen Anthony
Zhang, Weiruo
Leung, Ann NC
Scientific Data2018Journal Article, cited 1 times
Website
non-small cell lung cancer (NSCLC)
Radiogenomics
Secure telemedicine using RONI halftoned visual cryptography without pixel expansion
Bakshi, Arvind
Patel, Anoop Kumar
Journal of Information Security and Applications2019Journal Article, cited 0 times
Website
BRAIN
Algorithm Development
Telemedicine is a well-known technique for providing quality healthcare services worldwide by delivering healthcare remotely. For the diagnosis of disease and prescription by the doctor, a lot of information needs to be shared over public and private channels. Medical images like MRI, X-ray, and CT scans contain very personal information and need to be secured. Security properties like confidentiality, privacy, and integrity of medical data are still a challenge, and existing security techniques like digital watermarking and encryption are observed to be inefficient for real-time use. This paper investigates the problem and provides a security solution covering the major aspects, using Visual Cryptography (VC). The proposed algorithm creates shares for the parts of the image which do not have relevant information. All the information related to the disease is considered relevant and is marked as the region of interest (ROI). The integrity of the image is maintained by inserting some information in the region of non-interest (RONI). All the generated shares are transmitted over different channels, and the embedded information is decrypted by overlapping the shares (in XOR fashion) in Θ(1) time. Visual perception of all the results discussed in this article is very clear. The proposed algorithm achieves PSNR (peak signal-to-noise ratio), SSIM (structural similarity index), and accuracy values of 22.9452, 0.9701, and 99.8740, respectively.
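XOR-based visual cryptography without pixel expansion reduces to a one-time pad per pixel: one share is random, the other is the secret XORed with it, and overlaying the shares XORs them back. A numpy sketch:

```python
import numpy as np

def make_shares(secret_bits, rng=np.random.default_rng()):
    """Split a binary image into two shares; either share alone is pure noise."""
    share1 = rng.integers(0, 2, size=secret_bits.shape, dtype=np.uint8)
    share2 = np.bitwise_xor(secret_bits, share1)   # no pixel expansion
    return share1, share2

def reveal(share1, share2):
    """XOR-overlapping the shares restores the secret, one operation per pixel."""
    return np.bitwise_xor(share1, share2)
```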
An efficient brain tumor image classifier by combining multi-pathway cascaded deep neural network and handcrafted features in MR images
Bal, A.
Banerjee, M.
Chaki, R.
Sharma, P.
Med Biol Eng Comput2021Journal Article, cited 0 times
Website
BraTS-TCGA-LGG
Magnetic Resonance Imaging (MRI)
Neural Networks, Computer
Brain tumor
Segmentation
Deep convolution neural network
Manual segmentation
Radiomic features
Two-pathway CNN
Accurate segmentation and delineation of the sub-tumor regions are very challenging tasks due to the nature of the tumor. Traditionally, convolutional neural networks (CNNs) have achieved the most promising performance for the segmentation of brain tumors; however, handcrafted features remain very important for accurately identifying the tumor's boundary regions. The present work proposes a robust deep learning-based model with three different CNN architectures along with pre-defined handcrafted features for brain tumor segmentation, mainly to find more prominent boundaries of the core and enhanced tumor regions. Generally, automatic CNN architectures do not use pre-defined handcrafted features because they extract features automatically. In this present work, several pre-defined handcrafted features are computed from four MRI modalities (T2, FLAIR, T1c, and T1) with the help of additional handcrafted masks according to user interest and fed to the convolutional features (automatic features) to improve the overall performance of the proposed CNN model for tumor segmentation. Multi-pathway CNN is explored in this present work along with single-pathway CNN, which extracts both local and global features simultaneously to identify the accurate sub-regions of the tumor with the help of handcrafted features. The present work uses a cascaded CNN architecture, where the outcome of one CNN is considered as additional input information to the next subsequent CNNs. To extract the handcrafted features, a convolutional operation was applied on the four MRI modalities with the help of several pre-defined masks to produce a pre-defined set of handcrafted features. The present work also investigates the usefulness of intensity normalization and data augmentation in the pre-processing stage in order to handle the difficulties related to the imbalance of tumor labels. The proposed method is experimented on the BraTS 2018 datasets and achieved more promising results than the existing (currently published) methods with respect to different metrics such as specificity, sensitivity, and dice similarity coefficient (DSC) for complete, core, and enhanced tumor regions. Quantitatively, a notable gain is achieved around the boundaries of the sub-tumor regions using the proposed two-pathway CNN along with the handcrafted features.
Test–Retest Reproducibility Analysis of Lung CT Image Features
Balagurunathan, Yoganand
Kumar, Virendra
Gu, Yuhua
Kim, Jongphil
Wang, Hua
Liu, Ying
Goldgof, Dmitry B
Hall, Lawrence O
Korn, Rene
Zhao, Binsheng
Journal of Digital Imaging2014Journal Article, cited 85 times
Website
RIDER Lung CT
Non Small Cell Lung Cancer (NSCLC)
Quantitative size, shape, and texture features derived from computed tomographic (CT) images may be useful as predictive, prognostic, or response biomarkers in non-small cell lung cancer (NSCLC). However, to be useful, such features must be reproducible, non-redundant, and have a large dynamic range. We developed a set of quantitative three-dimensional (3D) features to describe segmented tumors and evaluated their reproducibility to select features with high potential to have prognostic utility. Thirty-two patients with NSCLC were subjected to unenhanced thoracic CT scans acquired within 15 min of each other under an approved protocol. Primary lung cancer lesions were segmented using semi-automatic 3D region growing algorithms. Following segmentation, 219 quantitative 3D features were extracted from each lesion, corresponding to size, shape, and texture, including features in transformed spaces (laws, wavelets). The most informative features were selected using the concordance correlation coefficient across test–retest, the biological range, and a feature-independence measure. There were 66 (30.14%) features with a concordance correlation coefficient ≥ 0.90 across test–retest and an acceptable dynamic range. Of these, 42 features were non-redundant after grouping features with R2Bet ≥ 0.95. These reproducible features were found to be predictive of radiological prognosis. The area under the curve (AUC) was 91% for a size-based feature and 92% for the texture features (runlength, laws). We tested the ability of image features to predict a radiological prognostic score on an independent NSCLC sample (39 adenocarcinomas); the AUC for texture features (runlength emphasis, energy) was 0.84, while for the conventional size-based features (volume, longest diameter) it was 0.80. Test–retest and correlation analyses have identified non-redundant CT image features with both high intra-patient reproducibility and inter-patient biological range, making the case that quantitative image features are informative and prognostic biomarkers for NSCLC.
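Lin's concordance correlation coefficient, the test-retest selection criterion used here, has a closed form; a numpy sketch:

```python
import numpy as np

def concordance_correlation(x, y):
    """Lin's concordance correlation coefficient between test and retest values."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```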
Quantitative Imaging features Improve Discrimination of Malignancy in Pulmonary nodules
Pulmonary nodules are frequently detected radiological abnormalities in lung cancer screening. While nodules of the highest and lowest risk for cancer are often easily diagnosed by a trained radiologist, there is still a high rate of indeterminate pulmonary nodules (IPN) of unknown risk. Here, we test the hypothesis that computer-extracted quantitative features ("radiomics") can provide improved risk assessment in the diagnostic setting. Nodules were segmented in 3D and 219 quantitative features were extracted from these volumes. Using these features, novel malignancy risk predictors were formed with various stratifications based on size, shape, and texture feature categories. We used images and data from the National Lung Screening Trial (NLST), curating a subset of 479 participants (244 for training and 235 for testing) that included incident lung cancers and nodule-positive controls. After removing redundant and non-reproducible features, optimal linear classifiers with area under the receiver operating characteristic (AUROC) curves were used with an exhaustive search approach to find a discriminant set of image features, which were validated in an independent test dataset. We identified several strong predictive models: using size and shape features, the highest AUROC was 0.80; using non-size-based features, the highest AUROC was 0.85; combining features from all the categories, the highest AUROC was 0.83.
Comput Biol Med2024Journal Article, cited 0 times
Website
LDCT-and-Projection-data
Humans
*Tomography, X-Ray Computed/methods
Neural Networks, Computer
Convolutional operations
Distillation
Image quality assessment
Low-dose computed tomography
No-reference quality assessment
Self-attention mechanism
Vision transformers
No-reference image quality assessment (IQA) is a critical step in medical image analysis, with the objective of predicting perceptual image quality without the need for a pristine reference image. The application of no-reference IQA to CT scans is valuable in providing an automated and objective approach to assessing scan quality, optimizing radiation dose, and improving overall healthcare efficiency. In this paper, we introduce DistilIQA, a novel distilled Vision Transformer network designed for no-reference CT image quality assessment. DistilIQA integrates convolutional operations and multi-head self-attention mechanisms by incorporating a powerful convolutional stem at the beginning of the traditional ViT network. Additionally, we present a two-step distillation methodology aimed at improving network performance and efficiency. In the initial step, a "teacher ensemble network" is constructed by training five vision Transformer networks using a five-fold division schema. In the second step, a "student network", comprising of a single Vision Transformer, is trained using the original labeled dataset and the predictions generated by the teacher network as new labels. DistilIQA is evaluated in the task of quality score prediction from low-dose chest CT scans obtained from the LDCT and Projection data of the Cancer Imaging Archive, along with low-dose abdominal CT images from the LDCTIQAC2023 Grand Challenge. Our results demonstrate DistilIQA's remarkable performance in both benchmarks, surpassing the capabilities of various CNNs and Transformer architectures. Moreover, our comprehensive experimental analysis demonstrates the effectiveness of incorporating convolutional operations within the ViT architecture and highlights the advantages of our distillation methodology.
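The second distillation step amounts to a regression loss that mixes the original labels with the teacher ensemble's averaged prediction; a PyTorch-style sketch follows, with the mixing weight `alpha` an illustrative assumption rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_pred, quality_label, teacher_preds, alpha=0.5):
    """Regression distillation: fit the labelled quality score and the
    averaged prediction of the five-member teacher ensemble."""
    teacher = torch.stack(teacher_preds).mean(dim=0)   # ensemble consensus
    return (1 - alpha) * F.mse_loss(student_pred, quality_label) \
         + alpha * F.mse_loss(student_pred, teacher)
```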
MRI Brain Tumor Segmentation and Uncertainty Estimation Using 3D-UNet Architectures
Ballestar, Laura Mora
Vilaplana, Veronica
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Convolutional Neural Network (CNN)
Automation of brain tumor segmentation in 3D magnetic resonance images (MRIs) is key to assessing the diagnosis and treatment of the disease. In recent years, convolutional neural networks (CNNs) have shown improved results in the task. However, high memory consumption is still a problem in 3D-CNNs. Moreover, most methods do not include uncertainty information, which is especially critical in medical diagnosis. This work studies 3D encoder-decoder architectures trained with patch-based techniques to reduce memory consumption and decrease the effect of unbalanced data. The different trained models are then used to create an ensemble that leverages the properties of each model, thus increasing the performance. We also introduce voxel-wise uncertainty information, both epistemic and aleatoric, using test-time dropout (TTD) and test-time data augmentation (TTA), respectively. In addition, a hybrid approach is proposed that helps increase the accuracy of the segmentation. The model and uncertainty estimation measurements proposed in this work have been used in the BraTS'20 Challenge for tasks 1 and 3, tumor segmentation and uncertainty estimation.
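Epistemic uncertainty via test-time dropout keeps dropout layers stochastic at inference and summarizes repeated forward passes; a PyTorch sketch (the number of passes is an illustrative choice):

```python
import torch

def mc_dropout_predict(model, x, passes=20):
    """Epistemic uncertainty via test-time dropout: re-enable dropout at
    inference and summarise repeated stochastic forward passes."""
    model.eval()
    for m in model.modules():                       # re-enable dropout layers only
        if isinstance(m, torch.nn.Dropout):
            m.train()
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(passes)])
    return preds.mean(0), preds.var(0)              # prediction and voxel-wise variance
```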
Bone-Cancer Assessment and Destruction Pattern Analysis in Long-Bone X-ray Image
Bandyopadhyay, Oishila
Biswas, Arindam
Bhattacharya, Bhargab B
J Digit Imaging2018Journal Article, cited 0 times
Website
Algorithm Development
Support Vector Machine (SVM)
Bone cancer originates from bone and rapidly spreads to the rest of the body. A quick, preliminary diagnosis of bone cancer begins with the analysis of bone X-ray or MRI images. Compared to MRI, an X-ray image provides a low-cost tool for diagnosis and visualization of bone cancer. In this paper, a novel technique for the assessment of cancer stage and grade in long bones based on X-ray image analysis has been proposed. Cancer-affected bone images usually appear with a variation in bone texture in the affected region. A fusion of different methodologies is used for the purpose of our analysis. In the proposed approach, we extract certain features from bone X-ray images and use a support vector machine (SVM) to discriminate healthy and cancerous bones. A technique based on digital geometry is deployed for localizing cancer-affected regions. Characterization of the present stage and grade of the disease and identification of the underlying bone-destruction pattern are performed using a decision tree classifier. Furthermore, the method leads to the development of a computer-aided diagnostic tool that can readily be used by paramedics and doctors. Experimental results on a number of test cases reveal satisfactory diagnostic inferences when compared with ground truth known from clinical findings.
Ensemble of CNNs for Segmentation of Glioma Sub-regions with Survival Prediction
Gliomas are the most common malignant brain tumors, having varying levels of aggressiveness, with Magnetic Resonance Imaging (MRI) used for their diagnosis. As these tumors are highly heterogeneous in shape and appearance, their segmentation becomes a challenging task. In this paper we propose an ensemble of three Convolutional Neural Network (CNN) architectures, viz. (i) P-Net, (ii) U-Net with spatial pooling, and (iii) ResInc-Net, for glioma sub-region segmentation. The segmented tumor Volume of Interest (VOI) is further used for extracting spatial habitat features for the prediction of Overall Survival (OS) of patients. A new aggregated loss function helps in effectively handling the data imbalance problem. The concepts of modeling predictive distributions, test-time augmentation, and ensembling are used to reduce uncertainty and increase the confidence of the model prediction. The proposed integrated system (for segmentation and OS prediction) is trained and validated on the Brain Tumor Segmentation (BraTS) Challenge 2019 dataset. We ranked among the top performing methods on segmentation and overall survival prediction on the validation dataset, as observed from the leaderboard. We also ranked among the top four in the Uncertainty Quantification task on the testing dataset.
Novel Volumetric Sub-region Segmentation in Brain Tumors
Banerjee, Subhashis
Mitra, Sushmita
Frontiers in Computational Neuroscience2020Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
A novel deep learning based model called Multi-Planar Spatial Convolutional Neural Network (MPS-CNN) is proposed for effective, automated segmentation of different sub-regions, viz. peritumoral edema (ED), necrotic core (NCR), enhancing and non-enhancing tumor core (ET/NET), from multi-modal MR images of the brain. An encoder-decoder type CNN model is designed for pixel-wise segmentation of the tumor along three anatomical planes (axial, sagittal, and coronal) at the slice level. These are then combined, by incorporating a consensus fusion strategy with a fully connected Conditional Random Field (CRF) based post-refinement, to produce the final volumetric segmentation of the tumor and its constituent sub-regions. Concepts such as spatial-pooling and unpooling are used to preserve the spatial locations of the edge pixels, for reducing segmentation error around the boundaries. A new aggregated loss function is also developed for effectively handling data imbalance. The MPS-CNN is trained and validated on the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2018 dataset. The Dice scores obtained on the validation set for whole tumor (WT: NCR/NET + ET + ED), tumor core (TC: NCR/NET + ET), and enhancing tumor (ET) are 0.90216, 0.87247, and 0.82445, respectively. The proposed MPS-CNN is found to perform the best (based on leaderboard scores) for the ET and TC segmentation tasks, in terms of both Dice and Hausdorff measures. For WT segmentation it also achieved the second highest accuracy, with a score only 1% less than that of the best performing method.
Glioma Classification Using Deep Radiomics
Banerjee, Subhashis
Mitra, Sushmita
Masulli, Francesco
Rovetta, Stefano
SN Computer Science2020Journal Article, cited 1 times
Website
TCGA-GBM
LGG-1p19qDeletion
Convolutional Neural Network (CNN)
Glioma constitutes 80% of malignant primary brain tumors in adults, and is usually classified as high-grade glioma (HGG) and low-grade glioma (LGG). The LGG tumors are less aggressive, with slower growth rate as compared to HGG, and are responsive to therapy. Tumor biopsy being challenging for brain tumor patients, noninvasive imaging techniques like magnetic resonance imaging (MRI) have been extensively employed in diagnosing brain tumors. Therefore, development of automated systems for the detection and prediction of the grade of tumors based on MRI data becomes necessary for assisting doctors in the framework of augmented intelligence. In this paper, we thoroughly investigate the power of deep convolutional neural networks (ConvNets) for classification of brain tumors using multi-sequence MR images. We propose novel ConvNet models, which are trained from scratch, on MRI patches, slices, and multi-planar volumetric slices. The suitability of transfer learning for the task is next studied by applying two existing ConvNet models (VGGNet and ResNet) trained on the ImageNet dataset, through fine-tuning of the last few layers. Leave-one-patient-out testing, and testing on the holdout dataset, are used to evaluate the performance of the ConvNets. The results demonstrate that the proposed ConvNets achieve better accuracy in all cases where the model is trained on the multi-planar volumetric dataset. Unlike conventional models, it obtains a testing accuracy of 95% for the low/high grade glioma classification problem. A score of 97% is generated for classification of LGG with/without 1p/19q codeletion, without any additional effort toward extraction and selection of features. We study the properties of self-learned kernels/filters in different layers, through visualization of the intermediate layer outputs. We also compare the results with that of state-of-the-art methods, demonstrating a maximum improvement of 7% on the grading performance of ConvNets and 9% on the prediction of 1p/19q codeletion status.
Multi-planar Spatial-ConvNet for Segmentation and Survival Prediction in Brain Cancer
A new deep learning method is introduced for the automatic delineation/segmentation of brain tumors from multi-sequence MR images. A Radiomic model for predicting the Overall Survival (OS) is designed, based on the features extracted from the segmented Volume of Interest (VOI). An encoder-decoder type ConvNet model is designed for pixel-wise segmentation of the tumor along three anatomical planes (axial, sagittal and coronal) at the slice level. These are then combined, using a consensus fusion strategy, to produce the final volumetric segmentation of the tumor and its sub-regions. Novel concepts such as spatial-pooling and unpooling are introduced to preserve the spatial locations of the edge pixels for reducing segmentation error around the boundaries. We also incorporate shortcut connections to copy and concatenate the receptive fields from the encoder to the decoder part, for helping the decoder network localize and recover the object details more effectively. These connections allow the network to simultaneously incorporate high-level features along with pixel-level details. A new aggregated loss function helps in effectively handling data imbalance. The integrated segmentation and OS prediction system is trained and validated on the BraTS 2018 dataset.
Deep Active Learning for Glioblastoma Quantification
Generating pixel- or voxel-wise annotations of radiological images to train deep learning-based segmentation models is a time-consuming and expensive job involving precious time and effort of radiologists. Other challenges include obtaining diverse annotated training data that covers the entire spectrum of potential situations. In this paper, we propose an Active Learning (AL) based segmentation strategy involving a human annotator or "Oracle" to annotate interactively. The deep learning-based segmentation model learns in parallel by training in iterations with the annotated samples. A publicly available MRI dataset of brain tumors (glioma) is used for the experimental studies. The efficiency of the proposed AL-based segmentation model is demonstrated in terms of annotation time requirement compared with conventional Passive Learning (PL) based strategies. It is also demonstrated experimentally, through quantitative and qualitative evaluations of the segmentation results, that the proposed AL-based segmentation strategy achieves comparable or enhanced segmentation performance with far fewer annotations.
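A generic pool-based active learning loop with entropy sampling captures the interaction pattern described; here a logistic regression and `y_oracle` stand in for the segmentation network and the human annotator, and the initial seed is assumed to cover both classes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning(X_pool, y_oracle, n_init=10, rounds=20):
    """Pool-based AL: retrain after each oracle query, always asking for
    the sample the current model is least certain about."""
    labeled = list(np.random.choice(len(X_pool), n_init, replace=False))
    for _ in range(rounds):
        clf = LogisticRegression(max_iter=1000).fit(X_pool[labeled],
                                                    y_oracle[labeled])
        proba = clf.predict_proba(X_pool)
        entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)
        entropy[labeled] = -np.inf              # never re-query labelled samples
        labeled.append(int(entropy.argmax()))   # the oracle annotates this one
    return clf, labeled
```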
A Fully Automated Deep Learning Network for Brain Tumor Segmentation
Bangalore Yogananda, C. G.
Shah, B. R.
Vejdani-Jahromi, M.
Nalawade, S. S.
Murugesan, G. K.
Yu, F. F.
Pinho, M. C.
Wagner, B. C.
Emblem, K. E.
Bjornerud, A.
Fei, B.
Madhuranthakam, A. J.
Maldjian, J. A.
Tomography2020Journal Article, cited 40 times
Website
BraTS 2018
BraTS 2017
*Deep Learning
Humans
Image Processing
Computer-Assisted
Magnetic Resonance Imaging (MRI)
Segmentation
Convolutional Neural Network (CNN)
Dense U-Net
Machine learning
We developed a fully automated method for brain tumor segmentation using deep learning; 285 brain tumor cases with multiparametric magnetic resonance images from the BraTS2018 data set were used. We designed 3 separate 3D-Dense-UNets to simplify the complex multiclass segmentation problem into individual binary-segmentation problems for each subcomponent. We implemented a 3-fold cross-validation to generalize the network's performance. The mean cross-validation Dice-scores for whole tumor (WT), tumor core (TC), and enhancing tumor (ET) segmentations were 0.92, 0.84, and 0.80, respectively. We then retrained the individual binary-segmentation networks using 265 of the 285 cases, with 20 cases held-out for testing. We also tested the network on 46 cases from the BraTS2017 validation data set, 66 cases from the BraTS2018 validation data set, and 52 cases from an independent clinical data set. The average Dice-scores for WT, TC, and ET were 0.90, 0.84, and 0.80, respectively, on the 20 held-out testing cases. The average Dice-scores for WT, TC, and ET on the BraTS2017 validation data set, the BraTS2018 validation data set, and the clinical data set were as follows: 0.90, 0.80, and 0.78; 0.90, 0.82, and 0.80; and 0.85, 0.80, and 0.77, respectively. A fully automated deep learning method was developed to segment brain tumors into their subcomponents, which achieved high prediction accuracy on the BraTS data set and on the independent clinical data set. This method is promising for implementation into a clinical workflow.
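The recombination step implied by the binary decomposition (three independent WT/TC/ET masks merged into one label map) can be sketched as below; the thresholds, the nesting enforcement (ET within TC within WT), and the label codes follow the common BraTS convention and are illustrative rather than taken from the paper.

```python
# Sketch of fusing three binary segmentations (WT, TC, ET) into one
# label map while enforcing the nesting ET within TC within WT;
# threshold and label codes (BraTS convention) are illustrative.
import numpy as np

def fuse_binary_masks(wt_prob, tc_prob, et_prob, thr=0.5):
    wt = wt_prob > thr
    tc = (tc_prob > thr) & wt          # tumor core must lie inside WT
    et = (et_prob > thr) & tc          # enhancing tumor must lie inside TC
    labels = np.zeros(wt.shape, dtype=np.uint8)
    labels[wt] = 2                     # WT voxels outside TC end up as edema
    labels[tc] = 1                     # non-enhancing core overwrites edema
    labels[et] = 4                     # enhancing tumor overwrites core
    return labels
```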
Fully Automated Brain Tumor Segmentation and Survival Prediction of Gliomas Using Deep Learning and MRI
Tumor segmentation of magnetic resonance images is a critical step in providing objective measures of predicting aggressiveness and response to therapy in gliomas. It has valuable applications in diagnosis, monitoring, and treatment planning of brain tumors. The purpose of this work was to develop a fully-automated deep learning method for tumor segmentation and survival prediction. Well curated brain tumor cases with multi-parametric MR Images from the BraTS2019 dataset were used. A three-group framework was implemented, with each group consisting of three 3D-Dense-UNets to segment whole-tumor (WT), tumor-core (TC) and enhancing-tumor (ET). Each group was trained using different approaches and loss-functions. The output segmentations of a particular label from their respective networks from the three groups were ensembled and post-processed. For survival analysis, a linear regression model based on imaging texture features and wavelet texture features extracted from each of the segmented components was implemented. The networks were tested on both the BraTS2019 validation and testing datasets. The segmentation networks achieved average dice-scores of 0.901, 0.844 and 0.801 for WT, TC and ET respectively on the validation dataset and achieved dice-scores of 0.877, 0.835 and 0.803 for WT, TC and ET respectively on the testing dataset. The survival prediction network achieved an accuracy score of 0.55 and mean squared error (MSE) of 119244 on the validation dataset and achieved an accuracy score of 0.51 and MSE of 455500 on the testing dataset. This method could be implemented as a robust tool to assist clinicians in primary brain tumor management and follow-up.
MRI-Based Deep Learning Method for Classification of IDH Mutation Status
Bangalore Yogananda, C. G.
Wagner, B. C.
Truong, N. C. D.
Holcomb, J. M.
Reddy, D. D.
Saadat, N.
Hatanpaa, K. J.
Patel, T. R.
Fei, B.
Lee, M. D.
Jain, R.
Bruce, R. J.
Pinho, M. C.
Madhuranthakam, A. J.
Maldjian, J. A.
Bioengineering (Basel)2023Journal Article, cited 0 times
Website
TCGA-GBM
TCGA-LGG
Ivy GAP
UCSF-PDGM
Convolutional Neural Network (CNN)
Isocitrate dehydrogenase (IDH) mutation
Magnetic Resonance Imaging (MRI)
U-net
brain tumor
Deep learning
Glioma
Isocitrate dehydrogenase (IDH) mutation status has emerged as an important prognostic marker in gliomas. This study sought to develop deep learning networks for non-invasive IDH classification using T2w MR images while comparing their performance to a multi-contrast network. Methods: Multi-contrast brain tumor MRI and genomic data were obtained from The Cancer Imaging Archive (TCIA) and The Erasmus Glioma Database (EGD). Two separate 2D networks were developed using nnU-Net, a T2w-image-only network (T2-net) and a multi-contrast network (MC-net). Each network was separately trained using TCIA (227 subjects) or TCIA + EGD data (683 subjects combined). The networks were trained to classify IDH mutation status and implement single-label tumor segmentation simultaneously. The trained networks were tested on over 1100 held-out datasets including 360 cases from UT Southwestern Medical Center, 136 cases from New York University, 175 cases from the University of Wisconsin-Madison, 456 cases from EGD (for the TCIA-trained network), and 495 cases from the University of California, San Francisco public database. A receiver operating characteristic curve (ROC) was drawn to calculate the AUC value to determine classifier performance. Results: T2-net trained on TCIA and TCIA + EGD datasets achieved an overall accuracy of 85.4% and 87.6% with AUCs of 0.86 and 0.89, respectively. MC-net trained on TCIA and TCIA + EGD datasets achieved an overall accuracy of 91.0% and 92.8% with AUCs of 0.94 and 0.96, respectively. We developed reliable, high-performing deep learning algorithms for IDH classification using both a T2-image-only and a multi-contrast approach. The networks were tested on more than 1100 subjects from diverse databases, making this the largest study on image-based IDH classification to date.
The involvement of brain regions associated with lower KPS and shorter survival time predicts a poor prognosis in glioma
Bao, Hongbo
Wang, Huan
Sun, Qian
Wang, Yujie
Liu, Hui
Liang, Peng
Lv, Zhonghua
Frontiers in Neurology2023Journal Article, cited 0 times
Website
TCGA-LGG
LGG-1p19qDeletion
TCGA-GBM
Radiogenomics
Radiomics
Isocitrate dehydrogenase (IDH) mutation
Algorithm Development
Background: Isocitrate dehydrogenase-wildtype glioblastoma (IDH-wildtype GBM) and IDH-mutant astrocytoma have distinct biological behaviors and clinical outcomes. The location of brain tumors is closely associated not only with clinical symptoms and prognosis but also with key molecular alterations such as IDH. Therefore, we hypothesize that the key brain regions influencing the prognosis of glioblastoma and astrocytoma are likely to differ. This study aims to (1) identify specific regions that are associated with the Karnofsky Performance Scale (KPS) or overall survival (OS) in IDH-wildtype GBM and IDH-mutant astrocytoma and (2) test whether the involvement of these regions could act as a prognostic indicator. Methods: A total of 111 patients with IDH-wildtype GBM and 78 patients with IDH-mutant astrocytoma from the Cancer Imaging Archive database were included in the study. Voxel-based lesion-symptom mapping (VLSM) was used to identify key brain areas for lower KPS and shorter OS. Next, we analyzed the structural and cognitive dysfunction associated with these regions. The survival analysis was carried out using Kaplan–Meier survival curves. Another 72 GBM patients and 48 astrocytoma patients from Harbin Medical University Cancer Hospital were used as a validation cohort. Results: Tumors located in the insular cortex, parahippocampal gyrus, and middle and superior temporal gyrus of the left hemisphere tended to lead to lower KPS and shorter OS in IDH-wildtype GBM. The regions that were significantly correlated with lower KPS in IDH-mutant astrocytoma included the subcallosal cortex and cingulate gyrus. These regions were associated with diverse structural and cognitive impairments. The involvement of these regions was an independent predictor for shorter survival in both GBM and astrocytoma. Conclusion: This study identified the specific regions that were significantly associated with OS or KPS in glioma. The results may help neurosurgeons evaluate patient survival before surgery and understand the pathogenic mechanisms of glioma in depth.
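The core VLSM computation is simple to state: at each voxel, patients are split by lesion coverage and their KPS (or survival) values are compared with a two-sample test. Below is a minimal NumPy/SciPy sketch under the assumption of lesion masks already registered to a common template; the t-test and the minimum group size are illustrative choices, not necessarily the paper's exact statistics.

```python
# Minimal voxel-based lesion-symptom mapping (VLSM) sketch: compare KPS
# between patients whose lesion covers each voxel and those whose lesion
# does not. FDR or permutation correction would be applied afterwards.
import numpy as np
from scipy import stats

def vlsm(lesions, kps, min_n=5):
    """lesions: (n_patients, x, y, z) binary masks; kps: (n_patients,)."""
    n, *shape = lesions.shape
    flat = lesions.reshape(n, -1).astype(bool)
    pvals = np.ones(flat.shape[1])
    for v in range(flat.shape[1]):
        inside, outside = kps[flat[:, v]], kps[~flat[:, v]]
        if len(inside) >= min_n and len(outside) >= min_n:
            pvals[v] = stats.ttest_ind(inside, outside).pvalue
    return pvals.reshape(shape)
```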
Precision Lung Cancer Segmentation from CT & PET Images Using Mask2Former
Lung cancer is a leading cause of death worldwide, highlighting the critical need for early diagnosis. Lung image analysis and segmentation are essential steps in this process, but manual segmentation of medical images is extremely time-consuming for radiation oncologists. The complexity of this task is heightened by the significant variability in lung tumors, which can differ greatly in size, shape, and texture due to factors like tumor subtype, stage, and patient-specific characteristics. Traditional segmentation methods often struggle to accurately capture this diversity. To address these challenges, we propose a lung cancer diagnosis system based on Mask2Former, utilizing CT (Computed Tomography) and PET (Positron Emission Tomography) images. This system excels in generating high-quality instance segmentation masks, enabling it to better adapt to the heterogeneous nature of lung tumors compared to traditional methods. Additionally, our system classifies the segmented output as either benign or malignant, leveraging a self-supervised network. The proposed approach offers a powerful tool for early diagnosis and effective management of lung cancer using CT and PET data. Extensive experiments demonstrate its effectiveness in achieving improved segmentation and classification results.
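For readers unfamiliar with the Mask2Former machinery the paper builds on, the sketch below shows only the inference plumbing via the HuggingFace transformers library, using a generic COCO-trained checkpoint as a stand-in; the authors' model is trained on CT/PET data, so this is not their network or weights.

```python
# Mask2Former instance-segmentation inference via HuggingFace
# transformers, with a generic COCO checkpoint as a placeholder for a
# model fine-tuned on CT/PET slices. The input path is a placeholder.
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

ckpt = "facebook/mask2former-swin-tiny-coco-instance"
processor = AutoImageProcessor.from_pretrained(ckpt)
model = Mask2FormerForUniversalSegmentation.from_pretrained(ckpt)

image = Image.open("ct_slice.png").convert("RGB")   # placeholder input
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Post-process into per-instance masks at the original resolution.
result = processor.post_process_instance_segmentation(
    outputs, target_sizes=[image.size[::-1]])[0]
masks, info = result["segmentation"], result["segments_info"]
```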
Isodoses-a set theory-based patient-specific QA measure to compare planned and delivered isodose distributions in photon radiotherapy
Baran, M.
Tabor, Z.
Kabat, D.
Tulik, M.
Jelen, K.
Rzecki, K.
Forostianyi, B.
Balabuszek, K.
Koziarski, R.
Waligorski, M. P. R.
Strahlenther Onkol2022Journal Article, cited 0 times
Website
Head-Neck Cetuximab
Radiation Therapy
Confusion matrix
Dose distribution
Dose-volume histogram
Gamma analysis
Quality assurance
BACKGROUND: The gamma index and dose-volume histogram (DVH)-based patient-specific quality assurance (QA) measures commonly applied in radiotherapy planning are unable to simultaneously deliver detailed locations and magnitudes of discrepancy between isodoses of planned and delivered dose distributions. By exploiting statistical classification performance measures such as sensitivity or specificity, compliance between a planned and delivered isodose may be evaluated locally, both for organs-at-risk (OAR) and the planning target volume (PTV), at any specified isodose level. Thus, a patient-specific QA tool may be developed to supplement those presently available in clinical radiotherapy. MATERIALS AND METHODS: A method was developed to locally establish and report dose delivery errors in three-dimensional (3D) isodoses of planned (reference) and delivered (evaluated) dose distributions simultaneously as a function the dose level and of spatial location. At any given isodose level, the total volume of delivered dose containing the reference and the evaluated isodoses is locally decomposed into four subregions: true positive-subregions within both reference and evaluated isodoses, true negative-outside of both of these isodoses, false positive-inside the evaluated isodose but not the reference isodose, and false negatives-inside the reference isodose but not the evaluated isodose. Such subregions may be established over the whole volume of delivered dose. This decomposition allows the construction of a confusion matrix and calculation of various indices to quantify the discrepancies between the selected planned and delivered isodose distributions, over the complete range of values of dose delivered. The 3D projection and visualization of the spatial distribution of these discrepancies facilitates the application of the developed method in clinical practice. RESULTS: Several clinical photon radiotherapy plans were analyzed using the developed method. In some plans at certain isodose levels, dose delivery errors were found at anatomically significant locations. These errors were not otherwise highlighted-neither by gamma analysis nor by DVH-based QA measures. A specially developed 3D projection tool to visualize the spatial distribution of such errors against anatomical features of the patient aids in the proposed analysis of therapy plans. CONCLUSIONS: The proposed method is able to spatially locate delivery errors at selected isodose levels and may supplement the presently applied gamma analysis and DVH-based QA measures in patient-specific radiotherapy planning.
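The decomposition is straightforward to implement given two co-registered 3D dose grids. The sketch below is a minimal NumPy rendering of the idea rather than the authors' tool: it forms the four subregions at one isodose level and derives sensitivity and specificity from the resulting confusion matrix.

```python
# Voxelwise confusion matrix between planned (reference) and delivered
# (evaluated) isodose volumes at a single isodose level, as described
# in the abstract; inputs are co-registered 3D dose arrays in Gy.
import numpy as np

def isodose_confusion(planned, delivered, level):
    ref, ev = planned >= level, delivered >= level
    tp = np.sum(ref & ev)       # inside both isodoses
    tn = np.sum(~ref & ~ev)     # outside both isodoses
    fp = np.sum(~ref & ev)      # delivered but not planned
    fn = np.sum(ref & ~ev)      # planned but not delivered
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    return {"TP": int(tp), "TN": int(tn), "FP": int(fp), "FN": int(fn),
            "sensitivity": sens, "specificity": spec}
```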
A New Adaptive-Weighted Fusion Rule for Wavelet based PET/CT Fusion
Barani, R
Sumathi, M
International Journal of Signal Processing, Image Processing and Pattern Recognition2016Journal Article, cited 1 times
Website
RIDER Lung PET-CT
Image fusion
In recent years the Wavelet Transform (WT) has played an important role in various applications of signal and image processing. In image processing, WT is useful in many domains such as image denoising, feature segmentation, compression, restoration, and image fusion. In WT-based image fusion, the source images are first decomposed into approximation and detail coefficients, the coefficients are then combined using suitable fusion rules, and the fused image is reconstructed by applying the inverse WT on the combined coefficients. This paper proposes a new adaptive fusion rule for combining the approximation coefficients of CT and PET images. The effectiveness of the proposed fusion rule is demonstrated by measuring the image information metrics EOG, SD, and ENT on the decomposed approximation coefficients. The detail coefficients, on the other hand, are combined using several existing fusion rules. The resultant fused images are quantitatively analyzed using non-reference image quality, image fusion, and error metrics. The analysis shows that the newly proposed fusion rule is more suitable for extracting the complementary information from CT and PET images and produces a fused image that is rich in content with good contrast and sharpness.
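A minimal PyWavelets sketch of this fusion pipeline follows. The energy-based weighting of the approximation band is a stand-in for the paper's adaptive rule, and the max-absolute rule for the detail bands is one of the existing rules the abstract mentions; co-registered, equally sized 2D slices are assumed.

```python
# WT-based CT/PET fusion sketch: decompose both slices, combine the
# approximation band with energy-adaptive weights (stand-in for the
# paper's rule), take max-absolute detail coefficients, reconstruct.
import numpy as np
import pywt

def fuse_ct_pet(ct, pet, wavelet="db2", level=2):
    ca = pywt.wavedec2(ct, wavelet, level=level)
    cb = pywt.wavedec2(pet, wavelet, level=level)
    # Energy-weighted average of the approximation sub-bands.
    ea, eb = np.sum(ca[0] ** 2), np.sum(cb[0] ** 2)
    fused = [(ea * ca[0] + eb * cb[0]) / (ea + eb)]
    # Max-absolute selection for each detail sub-band (H, V, D per level).
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```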
Interreader Variability of Dynamic Contrast-enhanced MRI of Recurrent Glioblastoma: The Multicenter ACRIN 6677/RTOG 0625 Study
Barboriak, Daniel P
Zhang, Zheng
Desai, Pratikkumar
Snyder, Bradley S
Safriel, Yair
McKinstry, Robert C
Bokstein, Felix
Sorensen, Gregory
Gilbert, Mark R
Boxerman, Jerrold L
Radiology2019Journal Article, cited 2 times
Website
ACRIN-DSC-MR-Brain
ACRIN 6677
Purpose To evaluate factors contributing to interreader variation (IRV) in parameters measured at dynamic contrast material-enhanced (DCE) MRI in patients with glioblastoma who were participating in a multicenter trial. Materials and Methods A total of 18 patients (mean age, 57 years +/- 13 [standard deviation]; 10 men) who volunteered for the advanced imaging arm of ACRIN 6677, a substudy of the RTOG 0625 clinical trial for recurrent glioblastoma treatment, underwent analyzable DCE MRI at one of four centers. The 78 imaging studies were analyzed centrally to derive the volume transfer constant (K(trans)) for gadolinium between blood plasma and tissue extravascular extracellular space, fractional volume of the extracellular extravascular space (ve), and initial area under the gadolinium concentration curve (IAUGC). Two independently trained teams consisting of a neuroradiologist and a technologist segmented the enhancing tumor on three-dimensional spoiled gradient-recalled acquisition in the steady-state images. Mean and median parameter values in the enhancing tumor were extracted after registering segmentations to parameter maps. The effect of imaging time relative to treatment, map quality, imager magnet and sequence, average tumor volume, and reader variability in tumor volume on IRV was studied by using intraclass correlation coefficients (ICCs) and linear mixed models. Results Mean interreader variations (+/- standard deviation) (difference as a percentage of the mean) for mean and median IAUGC, mean and median K(trans), and median ve were 18% +/- 24, 17% +/- 23, 27% +/- 34, 16% +/- 27, and 27% +/- 34, respectively. ICCs for these metrics ranged from 0.90 to 1.0 for baseline and from 0.48 to 0.76 for posttreatment examinations. Variability in reader-derived tumor volume was significantly related to IRV for all parameters. Conclusion Differences in reader tumor segmentations are a significant source of interreader variation for all dynamic contrast-enhanced MRI parameters.
A transformer-based deep neural network for detection and classification of lung cancer via PET/CT images
Barbouchi, Khalil
Hamdi, Dhekra El
Elouedi, Ines
Aïcha, Takwa Ben
Echi, Afef Kacem
Slim, Ihsen
International Journal of Imaging Systems and Technology2023Journal Article, cited 0 times
Website
Lung-PET-CT-Dx
Algorithm Development
LUNG
Deep Learning
Radiomics
Classification
Lung cancer is the leading cause of death for men and women worldwide and the second most frequent cancer. Therefore, early detection of the disease increases the cure rate. This paper presents a new approach to evaluate the ability of positron emission tomography/computed tomography (PET/CT) images to classify and detect lung cancer using deep learning techniques. Our approach aims to fully automate the anatomical localization of lung cancer from PET/CT images. It also seeks to classify the tumor, which is essential as it makes it possible to determine the disease's speed of progression and the best treatments to adopt. In this work, we have built an approach based on transformers by implementing the DETR model as a tool to detect the tumor and assist physicians in staging patients with lung cancer. The TNM staging system and histologic subtype classification were both taken as a standard for classification. Experimental results demonstrated that our approach achieves sound results on tumor localization, T staging, and histology classification. Our proposed approach detects tumors with an intersection over union (IOU) of 0.8 when tested on the Lung-PET-CT-Dx dataset. It also yielded better accuracy than state-of-the-art T-staging and histologic classification methods, classifying T-stage and histologic subtypes with an accuracy of 0.97 and 0.94, respectively.
Variational Quantum Denoising Technique for Medical Images
A novel variational restoration framework for medical images corrupted by quantum (Poisson) noise is proposed in this paper. The approach uses a variational scheme that leads to a nonlinear fourth-order PDE-based model. The partial differential equation model is then solved numerically by developing a consistent finite-difference approximation scheme that converges to its variational solution. The resulting numerical algorithm successfully removes quantum noise from medical images, preserves their details, and outperforms other shot-noise filtering solutions.
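For concreteness, the sketch below implements an explicit finite-difference iteration of a generic fourth-order diffusion of You-Kaveh type, u_t = -Lap[c(|Lap u|) * Lap u]; the paper derives its own variational fourth-order model for Poisson noise, so treat this only as a structural illustration of how such PDEs are stepped numerically.

```python
# Illustrative fourth-order diffusion step (You-Kaveh style), NOT the
# paper's exact variational model; periodic boundaries are a sketch
# simplification, and dt/k are illustrative parameters.
import numpy as np

def laplacian(u):
    """5-point Laplacian with periodic boundary handling."""
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)

def fourth_order_denoise(img, n_iter=100, dt=0.05, k=10.0):
    u = img.astype(float)
    for _ in range(n_iter):
        lap = laplacian(u)
        c = 1.0 / (1.0 + (np.abs(lap) / k) ** 2)   # edge-stopping function
        u -= dt * laplacian(c * lap)               # explicit Euler step
    return u
```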
Geometric and Dosimetric Evaluation of a Commercially Available Auto-segmentation Tool for Gross Tumour Volume Delineation in Locally Advanced Non-small Cell Lung Cancer: a Feasibility Study
Barrett, S.
Simpkin, A.J.
Walls, G.M.
Leech, M.
Marignol, L.
2020Journal Article, cited 0 times
4D-Lung
AIMS: To quantify the reliability of a commercially available auto-segmentation tool in locally advanced non-small cell lung cancer using serial four-dimensional computed tomography (4DCT) scans during conventionally fractionated radiotherapy.
MATERIALS AND METHODS: Eight patients with serial 4DCT scans (n = 44) acquired over the course of radiotherapy were assessed. Each 4DCT had a physician-defined primary tumour manual contour (MC). An auto-contour (AC) and a user-adjusted auto-contour (UA-AC) were created for each scan. Geometric agreement of the AC and the UA-AC to the MC was assessed using the dice similarity coefficient (DSC), the centre of mass (COM) shift from the MC and the structure volume difference from the MC. Bland Altman analysis was carried out to assess agreement between contouring methods. Dosimetric reliability was assessed by comparison of planning target volume dose coverage on the MC and UA-AC. The time trend analysis of the geometric accuracy measures from the initial planning scan through to the final scan for each patient was evaluated using a Wilcoxon signed ranks test to assess the reliability of the UA-AC over the duration of radiotherapy.
RESULTS: User adjustment significantly improved all geometric comparison metrics over the AC alone. Improved agreement was observed in smaller tumours not abutting normal soft tissue and median values for geometric comparisons to the MC for DSC, tumour volume difference and COM offset were 0.80 (range 0.49-0.89), 0.8 cm3 (range 0.0-5.9 cm3) and 0.16 cm (range 0.09-0.69 cm), respectively. There were no significant differences in dose metrics measured from the MC and the UA-AC after Bonferroni correction. Variation in geometric agreement between the MC and the UA-AC was observed over the course of radiotherapy with both DSC (P = 0.035) and COM shift from the MC (ns) worsening. The median tumour volume difference from the MC improved at the later time point.
CONCLUSIONS: These findings suggest that the UA-AC can produce geometrically and dosimetrically acceptable contours for appropriately selected patients with non-small cell lung cancer. Larger studies are required to confirm the findings.
Pathologically-Validated Tumor Prediction Maps in MRI
Glioblastoma (GBM) is an aggressive cancer with an average 5-year survival rate of about 5%. Following treatment with surgery, radiation, and chemotherapy, diagnosing tumor recurrence requires serial magnetic resonance imaging (MRI) scans. Infiltrative tumor cells beyond gadolinium enhancement on T1-weighted MRI are difficult to detect. This study therefore aims to improve tumor detection beyond traditional tumor margins. To accomplish this, a neural network model was trained to classify tissue samples as ‘tumor’ or ‘not tumor’. This model was then used to classify thousands of tiles from histology samples acquired at autopsy with known MRI locations on the patient’s final clinical MRI scan. This combined radiological-pathological (rad-path) dataset was then treated as a ground truth to train a second model for predicting tumor presence from MRI alone. Predictive maps were created for seven patients left out of the training steps, and tissue samples were tested to determine the model’s accuracy. The final model produced a receiver operator characteristic (ROC) area under the curve (AUC) of 0.70. This study demonstrates a new method for detecting infiltrative tumor beyond conventional radiologist defined margins based on neural networks applied to rad-path datasets in glioblastoma.
Equating quantitative emphysema measurements on different CT image reconstructions
Bartel, Seth T
Bierhals, Andrew J
Pilgram, Thomas K
Hong, Cheng
Schechtman, Kenneth B
Conradi, Susan H
Gierada, David S
Medical Physics2011Journal Article, cited 15 times
Website
National Lung Screening Trial (NLST)
LUNG
LDCT
PURPOSE: To mathematically model the relationship between CT measurements of emphysema obtained from images reconstructed using different section thicknesses and kernels and to evaluate the accuracy of the models for converting measurements to those of a reference reconstruction. METHODS: CT raw data from the lung cancer screening examinations of 138 heavy smokers were reconstructed at 15 different combinations of section thickness and kernel. An emphysema index was quantified as the percentage of the lung with attenuation below -950 HU (EI950). Linear, quadratic, and power functions were used to model the relationship between EI950 values obtained with a reference 1 mm, medium smooth kernel reconstruction and values from each of the other 14 reconstructions. Preferred models were selected using the corrected Akaike information criterion (AICc), coefficients of determination (R2), and residuals (conversion errors), and cross-validated by a jackknife approach using the leave-one-out method. RESULTS: The preferred models were power functions, with model R2 values ranging from 0.949 to 0.998. The errors in converting EI950 measurements from other reconstructions to the 1 mm, medium smooth kernel reconstruction in leave-one-out testing were less than 3.0 index percentage points for all reconstructions, and less than 1.0 index percentage point for five reconstructions. Conversion errors were related in part to image noise, emphysema distribution, and attenuation histogram parameters. Conversion inaccuracy related to increased kernel sharpness tended to be reduced by increased section thickness. CONCLUSIONS: Image reconstruction-related differences in quantitative emphysema measurements were successfully modeled using power functions.
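Since the preferred models are ordinary power functions, converting measurements between reconstructions reduces to a two-parameter nonlinear fit. Below is a SciPy sketch under the assumption of paired emphysema-index measurements from the two reconstructions; EI950 itself is just the fraction of lung voxels below -950 HU.

```python
# Sketch of the power-function mapping EI_ref ~ a * EI_other**b between
# emphysema indices from two CT reconstructions, fitted by nonlinear
# least squares; initial guesses are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def ei950(lung_hu):
    """Emphysema index: percent of lung voxels with attenuation < -950 HU."""
    return 100.0 * np.mean(lung_hu < -950)

def fit_power_model(ei_other, ei_ref):
    power = lambda x, a, b: a * np.power(x, b)
    (a, b), _ = curve_fit(power, ei_other, ei_ref, p0=(1.0, 1.0))
    return a, b   # convert new measurements via a * ei**b
```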
A Heterogeneous and Multi-Range Soft-Tissue Deformation Model for Applications in Adaptive Radiotherapy
Bartelheimer, Kathrin
2020Thesis, cited 0 times
Dissertation
Thesis
Head-Neck Cetuximab
Segmentation
Model
Skeletonization
During fractionated radiotherapy, anatomical changes result in uncertainties in the applied dose distribution. With increasing steepness of applied dose gradients, the relevance of patient deformations increases. Especially in proton therapy, small anatomical changes on the order of millimeters can result in large range uncertainties and therefore in substantial deviations from the planned dose. To quantify the anatomical changes, deformation models are required. With upcoming MR-guidance, soft-tissue deformations gain visibility, but so far only few soft-tissue models meeting the requirements of high-precision radiotherapy exist. Most state-of-the-art models either lack anatomical detail or exhibit long computation times.
In this work, a fast soft-tissue deformation model is developed which is capable of considering tissue properties of heterogeneous tissue. The model is based on the chainmail (CM) concept, which is improved by three basic features. For the first time, rotational degrees of freedom are introduced into the CM concept to improve the characteristic deformation behavior. A novel concept for handling multiple deformation initiators is developed to cope with global deformation input. And finally, a concept for handling various shapes of deformation input is proposed to provide a high flexibility concerning the design of deformation input.
To demonstrate the model flexibility, it was coupled to a kinematic skeleton model for the head and neck region, which provides anatomically correct deformation input for the bones. For exemplary patient CTs, the combined model was shown to be capable of generating artificially deformed CT images with realistic appearance. This was achieved for small-range deformations in the order of interfractional deformations, as well as for large-range deformations like an arms-up to arms-down deformation, as can occur between images of different modalities. The deformation results showed a strong improvement in biofidelity, compared to the original chainmail concept, as well as compared to clinically used image-based deformation methods. The computation times for the model are in the order of 30 min for single-threaded calculations; by simple code parallelization, times in the order of 1 min can be achieved.
Applications that require realistic forward deformations of CT images will benefit from the improved biofidelity of the developed model. Envisioned applications are the generation of plan libraries and virtual phantoms, as well as data augmentation for deep learning approaches. Due to the low computation times, the model is also well suited for image registration applications. In this context, it will contribute to an improved calculation of accumulated dose, as is required in high-precision adaptive radiotherapy.
Removing Mixture Noise from Medical Images Using Block Matching Filtering and Low-Rank Matrix Completion
In the current work, linear and non-linear manifold learning techniques, specifically Principal Component Analysis (PCA) and Laplacian Eigenmaps, are studied in detail. Their applications in medical image and shape analysis are investigated. In the first contribution, a manifold learning-based multi-modal image registration technique is developed, which results in a unified intensity system through intensity transformation between the reference and sensed images. The transformation eliminates intensity variations in multi-modal medical scans and hence facilitates employing well-studied monomodal registration techniques. The method can be used for registering multi-modal images with full and partial data. Next, a manifold learning-based scale invariant global shape descriptor is introduced. The proposed descriptor benefits from the capability of Laplacian Eigenmap in dealing with high dimensional data by introducing an exponential weighting scheme. It eliminates the limitations tied to the well-known cotangent weighting scheme, namely dependency on triangular mesh representation and high intra-class quality of 3D models. In the end, a novel descriptive model for diagnostic classification of pulmonary nodules is presented. The descriptive model benefits from structural differences between benign and malignant nodules for automatic and accurate prediction of a candidate nodule. It extracts concise and discriminative features automatically from the 3D surface structure of a nodule using spectral features studied in the previous work combined with a point cloud-based deep learning network. Extensive experiments have been conducted and have shown that the proposed algorithms based on manifold learning outperform several state-of-the-art methods. Advanced computational techniques with a combination of manifold learning and deep networks can play a vital role in effective healthcare delivery by providing a framework for several fundamental tasks in image and shape processing, namely, registration, classification, and detection of features of interest.
Neuroimaging-Based Classification Algorithm for Predicting 1p/19q-Codeletion Status in IDH-Mutant Lower Grade Gliomas
Batchala, P.P.
Muttikkal, T.J.E.
Donahue, J.H.
Patrie, J.T.
Schiff, D.
Fadul, C.E.
Mrachek, E.K.
Lopes, M.-B.
Jain, R.
Patel, S.H.
American Journal of Neuroradiology2019Journal Article, cited 0 times
TCGA-LGG
MRI
Oligodendroglioma
BACKGROUND AND PURPOSE: Isocitrate dehydrogenase (IDH)-mutant lower grade gliomas are classified as oligodendrogliomas or diffuse astrocytomas based on 1p/19q-codeletion status. We aimed to test and validate neuroradiologists' performances in predicting the codeletion status of IDH-mutant lower grade gliomas based on simple neuroimaging metrics.
MATERIALS AND METHODS: One hundred two IDH-mutant lower grade gliomas with preoperative MR imaging and known 1p/19q status from The Cancer Genome Atlas composed a training dataset. Two neuroradiologists in consensus analyzed the training dataset for various imaging features: tumor texture, margins, cortical infiltration, T2-FLAIR mismatch, tumor cyst, T2* susceptibility, hydrocephalus, midline shift, maximum dimension, primary lobe, necrosis, enhancement, edema, and gliomatosis. Statistical analysis of the training data produced a multivariate classification model for codeletion prediction based on a subset of MR imaging features and patient age. To validate the classification model, 2 different independent neuroradiologists analyzed a separate cohort of 106 institutional IDH-mutant lower grade gliomas.
RESULTS: Training dataset analysis produced a 2-step classification algorithm with 86.3% codeletion prediction accuracy, based on the following: 1) the presence of the T2-FLAIR mismatch sign, which was 100% predictive of noncodeleted lower grade gliomas, (n = 21); and 2) a logistic regression model based on texture, patient age, T2* susceptibility, primary lobe, and hydrocephalus. Independent validation of the classification algorithm rendered codeletion prediction accuracies of 81.1% and 79.2% in 2 independent readers. The metrics used in the algorithm were associated with moderate-substantial interreader agreement (κ = 0.56-0.79).
CONCLUSIONS: We have validated a classification algorithm based on simple, reproducible neuroimaging metrics and patient age that demonstrates a moderate prediction accuracy of 1p/19q-codeletion status among IDH-mutant lower grade gliomas.
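The two-step algorithm lends itself to a very small implementation: a deterministic rule for the T2-FLAIR mismatch sign, followed by a logistic regression over the remaining metrics and age. The sketch below reflects that structure with scikit-learn; the feature encoding and decision cutoff are illustrative, not the study's fitted model.

```python
# Sketch of the two-step 1p/19q codeletion classifier: step 1 applies
# the T2-FLAIR mismatch rule (100% predictive of non-codeleted LGG in
# the training data), step 2 a logistic regression on simple metrics.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_step2(X, y):
    """X columns (illustrative): texture, age, T2* susceptibility,
    primary lobe, hydrocephalus; y: 1 = codeleted, 0 = non-codeleted."""
    return LogisticRegression(max_iter=1000).fit(X, y)

def predict_codeletion(mismatch_sign, x, step2_model):
    if mismatch_sign:                  # step 1: deterministic rule
        return "non-codeleted"
    p = step2_model.predict_proba(np.asarray(x).reshape(1, -1))[0, 1]
    return "codeleted" if p >= 0.5 else "non-codeleted"
```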
A novel decentralized model for storing and sharing neuroimaging data using ethereum blockchain and the interplanetary file system
Batchu, Sai
Henry, Owen S.
Hakim, Abraham A.
International Journal of Information Technology2021Journal Article, cited 0 times
Website
REMBRANDT
Information Storage and Retrieval
Current methods to store and transfer medical neuroimaging data raise issues with security and transparency, and novel protocols are needed. Ethereum smart contracts present an encouraging new option. Ethereum is an open-source platform that allows users to construct smart contracts—self-executable packages of code that exist in the Ethereum state and allow transactions under programmed conditions. The present study developed a proof-of-concept smart contract that stores patient brain tumor data such as patient identifier, disease, grade, chemotherapy drugs, and Karnofsky score. The InterPlanetary file system was used to efficiently store the image files, and the corresponding content identifier hashes were stored within the smart contracts. Testing with a private, proof-of-authority network required only 889 MB of memory per insertion to insert 350 patient records, while retrieval required 910 MB. Inserting 350 patient records required 907 ms. The concept presented in this study exemplifies the use of smart contracts and off chain data storage for efficient retrieval/insertion of medical neuroimaging data.
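The storage pattern (image off-chain in IPFS, only the content identifier and clinical fields on-chain) can be sketched with web3.py and ipfshttpclient as below. The contract address, ABI, and addRecord function are hypothetical stand-ins for the paper's smart contract, not its actual interface.

```python
# Hedged sketch of the off-chain/on-chain split described above. The
# contract address, ABI, and addRecord() are hypothetical placeholders.
import ipfshttpclient
from web3 import Web3

CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder
CONTRACT_ABI = []  # hypothetical ABI exposing addRecord(string x4)

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # local PoA test node
contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=CONTRACT_ABI)

def store_record(patient_id, disease, grade, image_path):
    # Add the neuroimaging file to IPFS; keep only its CID on-chain.
    cid = ipfshttpclient.connect().add(image_path)["Hash"]
    tx = contract.functions.addRecord(patient_id, disease, grade, cid) \
                           .transact({"from": w3.eth.accounts[0]})
    return w3.eth.wait_for_transaction_receipt(tx)
```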
Brain Tumor Automatic Detection from MRI Images Using Transfer Learning Model with Deep Convolutional Neural Network
Bayoumi, Esraa
Abd-Ellah, mahmoud
Khalaf, Ashraf A. M.
Gharieb, Reda
Journal of Advanced Engineering Trends2021Journal Article, cited 1 times
Website
Brain-Tumor-Progression
RIDER NEURO MRI
Computer Aided Detection (CADe)
Convolutional Neural Network (CNN)
Transfer learning
Resnet50
Vgg16
AlexNet
Inceptionv3
Successful detection of brain tumors at an early stage plays an important role in improving patient treatment and survival. Evaluating magnetic resonance imaging (MRI) images manually is a very difficult task due to the large number of images produced routinely in the clinic. There is therefore a need for a computer-aided diagnosis (CAD) system for early detection and classification of brain tumors as normal and abnormal. This paper designs and evaluates transfer learning based on state-of-the-art convolutional neural network (CNN) architectures proposed for image classification in recent years. Five different modifications have been applied to five well-known CNNs to determine the most effective modification. Five-layer modifications with parameter tuning are applied to each architecture, providing a new CNN architecture for brain tumor detection. Most brain tumor datasets contain only a small number of images for training a deep learning model. Therefore, two datasets are used in the evaluation to ensure the effectiveness of the proposed structures: first, a standard dataset from the RIDER Neuro MRI database comprising 349 brain MRI images (109 normal and 240 abnormal); second, a collection of 120 brain MRI images (60 abnormal and 60 normal). The results show that the proposed CNN transfer learning with MRI can learn significant biomarkers of brain tumors, with the best models achieving 100% accuracy, specificity, and sensitivity.
Variability of manual segmentation of the prostate in axial T2-weighted MRI: A multi-reader study
Becker, A. S.
Chaitanya, K.
Schawkat, K.
Müehlematter, U. J.
Hotker, A. M.
Konukoglu, E.
Donati, O. F.
Eur J Radiol2019Journal Article, cited 3 times
Website
Prostate-3T
PROSTATE
Segmentation
Magnetic Resonance Imaging (MRI)
PURPOSE: To evaluate the interreader variability in prostate and seminal vesicle (SV) segmentation on T2w MRI. METHODS: Six readers segmented the peripheral zone (PZ), transitional zone (TZ) and SV slice-wise on axial T2w prostate MRI examinations of n=80 patients. Twenty different similarity scores, including dice score (DS), Hausdorff distance (HD) and volumetric similarity coefficient (VS), were computed with the VISCERAL EvaluateSegmentation software for all structures combined and separately for the whole gland (WG=PZ+TZ), TZ and SV. Differences between base, midgland and apex were evaluated with DS slice-wise. Descriptive statistics for similarity scores were computed. Wilcoxon testing to evaluate differences of DS, HD and VS was performed. RESULTS: Overall segmentation variability was good with a mean DS of 0.859 (+/-SD=0.0542), HD of 36.6 (+/-34.9 voxels) and VS of 0.926 (+/-0.065). The WG showed a DS, HD and VS of 0.738 (+/-0.144), 36.2 (+/-35.6 vx) and 0.853 (+/-0.143), respectively. The TZ showed generally lower variability with a DS of 0.738 (+/-0.144), HD of 24.8 (+/-16 vx) and VS of 0.908 (+/-0.126). The lowest variability was found for the SV with DS of 0.884 (+/-0.0407), HD of 17 (+/-10.9 vx) and VS of 0.936 (+/-0.0509). We found a markedly lower DS of the segmentations in the apex (0.85+/-0.12) compared to the base (0.87+/-0.10, p<0.01) and the midgland (0.89+/-0.10, p<0.001). CONCLUSIONS: We report baseline values for interreader variability of prostate and SV segmentation on T2w MRI. Variability was highest in the apex, lower in the base, and lowest in the midgland.
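For reference, the two volume-overlap metrics reported most often here are easy to recompute from binary masks. This is a plain NumPy reimplementation of their standard definitions, not the VISCERAL EvaluateSegmentation software the study actually used.

```python
# Dice score (DS) and volumetric similarity (VS) between two binary
# segmentation masks, following their standard definitions.
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def volumetric_similarity(a, b):
    # VS = 1 - |Va - Vb| / (Va + Vb); insensitive to overlap location.
    va, vb = int(a.astype(bool).sum()), int(b.astype(bool).sum())
    return 1.0 - abs(va - vb) / (va + vb) if va + vb else 1.0
```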
Semantic Composition of Data Analytical Processes
Bednár, Peter
Ivančáková, Juliana
Sarnovský, Martin
Acta Polytechnica Hungarica2024Journal Article, cited 0 times
Website
C_NMC_2019 Dataset: ALL Challenge dataset of ISBI 2019
Algorithm Development
Semantic features
Ontology
This paper presents a semantic framework for the description and automatic composition of data analytical processes. The framework specifies how to describe goals, input data, outputs, and the various data operators for data pre-processing and modelling that can be applied to achieve the goals. The main contribution of this paper is the formal language for the specification of the preconditions, postconditions, inputs, and outputs of the data operators. The formal description of the operators with logical expressions allows automatic composition of operators into complex workflows achieving the specified goals of the data analysis. The evaluation of the semantic framework was performed on two real-world use cases from the medical domain, where the automatically generated workflow was compared with the implementation manually programmed by a data scientist.
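The composition idea, operators annotated with preconditions and postconditions that are chained until the goal holds, can be miniaturized as below. The fact names and the naive forward-chaining search are illustrative; the paper's ontology-based language is considerably richer.

```python
# Toy forward-chaining composer: each operator declares preconditions
# and postconditions as sets of facts; a plan is a sequence of operator
# names whose accumulated effects satisfy the goal.
def compose(operators, start, goal):
    """operators: list of (name, preconds, postconds) with set values."""
    state, plan = set(start), []
    changed = True
    while changed and not goal <= state:
        changed = False
        for name, pre, post in operators:
            if pre <= state and not post <= state:
                state |= post
                plan.append(name)
                changed = True
    return plan if goal <= state else None

ops = [("impute", {"raw_table"}, {"complete_table"}),
       ("normalize", {"complete_table"}, {"scaled_table"}),
       ("train_classifier", {"scaled_table", "labels"}, {"model"})]
print(compose(ops, {"raw_table", "labels"}, {"model"}))
# -> ['impute', 'normalize', 'train_classifier']
```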
Integration of proteomics with CT-based qualitative and radiomic features in high-grade serous ovarian cancer patients: an exploratory analysis
Beer, Lucian
Sahin, Hilal
Bateman, Nicholas W
Blazic, Ivana
Vargas, Hebert Alberto
Veeraraghavan, Harini
Kirby, Justin
Fevrier-Sullivan, Brenda
Freymann, John B
Jaffe, C Carl
European Radiology2020Journal Article, cited 1 times
Website
TCGA-OV
CPTAC
OVARY
radiomics
CT
Anatomical DCE-MRI phantoms generated from glioma patient data
Multi‐site quality and variability analysis of 3D FDG PET segmentations based on phantom and clinical image data
Beichel, Reinhard R
Smith, Brian J
Bauer, Christian
Ulrich, Ethan J
Ahmadvand, Payam
Budzevich, Mikalai M
Gillies, Robert J
Goldgof, Dmitry
Grkovski, Milan
Hamarneh, Ghassan
Medical Physics2017Journal Article, cited 7 times
Website
QIN PET Phantom
PURPOSE: Radiomics utilizes a large number of image-derived features for quantifying tumor characteristics that can in turn be correlated with response and prognosis. Unfortunately, extraction and analysis of such image-based features is subject to measurement variability and bias. The challenge for radiomics is particularly acute in Positron Emission Tomography (PET) where limited resolution, a high noise component related to the limited stochastic nature of the raw data, and the wide variety of reconstruction options confound quantitative feature metrics. Extracted feature quality is also affected by tumor segmentation methods used to define regions over which to calculate features, making it challenging to produce consistent radiomics analysis results across multiple institutions that use different segmentation algorithms in their PET image analysis. Understanding each element contributing to these inconsistencies in quantitative image feature and metric generation is paramount for ultimate utilization of these methods in multi-institutional trials and clinical oncology decision making. METHODS: To assess segmentation quality and consistency at the multi-institutional level, we conducted a study of seven institutional members of the National Cancer Institute Quantitative Imaging Network. For the study, members were asked to segment a common set of phantom PET scans acquired over a range of imaging conditions as well as a second set of head and neck cancer (HNC) PET scans. Segmentations were generated at each institution using their preferred approach. In addition, participants were asked to repeat segmentations with a time interval between initial and repeat segmentation. This procedure resulted in overall 806 phantom insert and 641 lesion segmentations. Subsequently, the volume was computed from the segmentations and compared to the corresponding reference volume by means of statistical analysis. RESULTS: On the two test sets (phantom and HNC PET scans), the performance of the seven segmentation approaches was as follows. On the phantom test set, the mean relative volume errors ranged from 29.9 to 87.8% of the ground truth reference volumes, and the repeat difference for each institution ranged between -36.4 to 39.9%. On the HNC test set, the mean relative volume error ranged between -50.5 to 701.5%, and the repeat difference for each institution ranged between -37.7 to 31.5%. In addition, performance measures per phantom insert/lesion size categories are given in the paper. On phantom data, regression analysis resulted in coefficient of variation (CV) components of 42.5% for scanners, 26.8% for institutional approaches, 21.1% for repeated segmentations, 14.3% for relative contrasts, 5.3% for count statistics (acquisition times), and 0.0% for repeated scans. Analysis showed that the CV components for approaches and repeated segmentations were significantly larger on the HNC test set with increases by 112.7% and 102.4%, respectively. CONCLUSION: Analysis results underline the importance of PET scanner reconstruction harmonization and imaging protocol standardization for quantification of lesion volumes. In addition, to enable a distributed multi-site analysis of FDG PET images, harmonization of analysis approaches and operator training in combination with highly automated segmentation methods seems to be advisable. Future work will focus on quantifying the impact of segmentation variation on radiomics system performance.
FDG PET based prediction of response in head and neck cancer treatment: Assessment of new quantitative imaging features
Beichel, Reinhard R.
Ulrich, Ethan J.
Smith, Brian J.
Bauer, Christian
Brown, Bartley
Casavant, Thomas
Sunderland, John J.
Graham, Michael M.
Buatti, John M.
PLoS One2019Journal Article, cited 0 times
QIN-HEADNECK
Head and Neck
PET
INTRODUCTION: 18 F-fluorodeoxyglucose (FDG) positron emission tomography (PET) is now a standard diagnostic imaging test performed in patients with head and neck cancer for staging, re-staging, radiotherapy planning, and outcome assessment. Currently, quantitative analysis of FDG PET scans is limited to simple metrics like maximum standardized uptake value, metabolic tumor volume, or total lesion glycolysis, which have limited predictive value. The goal of this work was to assess the predictive potential of new (i.e., nonstandard) quantitative imaging features on head and neck cancer outcome.
METHODS: This retrospective study analyzed fifty-eight pre- and post-treatment FDG PET scans of patients with head and neck squamous cell cancer to calculate five standard and seventeen new features at baseline and post-treatment. Cox survival regression was used to assess the predictive potential of each quantitative imaging feature on disease-free survival.
RESULTS: Analysis showed that the post-treatment change of the average tracer uptake in the rim background region immediately adjacent to the tumor normalized by uptake in the liver represents a novel PET feature that is associated with disease-free survival (HR 1.95; 95% CI 1.27, 2.99) and has good discriminative performance (c index 0.791).
CONCLUSION: The reported findings define a promising new direction for quantitative imaging biomarker research in head and neck squamous cell cancer and highlight the potential role of new radiomics features in oncology decision making as part of precision medicine.
Radiogenomic-Based Survival Risk Stratification of Tumor Habitat on Gd-T1w MRI Is Associated with Biological Processes in Glioblastoma
Beig, Niha
Bera, Kaustav
Prasanna, Prateek
Antunes, Jacob
Correa, Ramon
Singh, Salendra
Saeed Bamashmos, Anas
Ismail, Marwa
Braman, Nathaniel
Verma, Ruchika
Hill, Virginia B
Statsevych, Volodymyr
Ahluwalia, Manmeet S
Varadan, Vinay
Madabhushi, Anant
Tiwari, Pallavi
Clin Cancer Res2020Journal Article, cited 0 times
Website
TCGA-GBM
Ivy GAP
Magnetic Resonance Imaging (MRI)
BRAIN
Glioblastoma Multiforme (GBM)
Radiomics
Radiogenomics
PURPOSE: To (i) create a survival risk score using radiomic features from the tumor habitat on routine MRI to predict progression-free survival (PFS) in glioblastoma and (ii) obtain a biological basis for these prognostic radiomic features, by studying their radiogenomic associations with molecular signaling pathways. EXPERIMENTAL DESIGN: Two hundred three patients with pretreatment Gd-T1w, T2w, T2w-FLAIR MRI were obtained from 3 cohorts: The Cancer Imaging Archive (TCIA; n = 130), Ivy GAP (n = 32), and Cleveland Clinic (n = 41). Gene-expression profiles of corresponding patients were obtained for TCIA cohort. For every study, following expert segmentation of tumor subcompartments (necrotic core, enhancing tumor, peritumoral edema), 936 3D radiomic features were extracted from each subcompartment across all MRI protocols. Using Cox regression model, radiomic risk score (RRS) was developed for every protocol to predict PFS on the training cohort (n = 130) and evaluated on the holdout cohort (n = 73). Further, Gene Ontology and single-sample gene set enrichment analysis were used to identify specific molecular signaling pathway networks associated with RRS features. RESULTS: Twenty-five radiomic features from the tumor habitat yielded the RRS. A combination of RRS with clinical (age and gender) and molecular features (MGMT and IDH status) resulted in a concordance index of 0.81 (P < 0.0001) on training and 0.84 (P = 0.03) on the test set. Radiogenomic analysis revealed associations of RRS features with signaling pathways for cell differentiation, cell adhesion, and angiogenesis, which contribute to chemoresistance in GBM. CONCLUSIONS: Our findings suggest that prognostic radiomic features from routine Gd-T1w MRI may also be significantly associated with key biological processes that affect response to chemotherapy in GBM.
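A radiomic risk score of this kind is typically the linear predictor of a Cox proportional-hazards model fitted to the selected features. Below is a sketch with the lifelines package under that assumption; it does not reproduce the paper's feature selection or its specific 25-feature signature.

```python
# Sketch of deriving a radiomic risk score (RRS) from a Cox model;
# the ridge penalty is an illustrative stabilizer, not from the paper.
import pandas as pd
from lifelines import CoxPHFitter

def fit_rrs(features, pfs_months, progressed):
    """features: DataFrame of radiomic features (rows = patients)."""
    df = features.copy()
    df["PFS"], df["event"] = pfs_months, progressed
    cph = CoxPHFitter(penalizer=0.1)
    cph.fit(df, duration_col="PFS", event_col="event")
    rrs = cph.predict_log_partial_hazard(df)   # linear predictor = risk score
    return cph, rrs
```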
Radiogenomic analysis of hypoxia pathway is predictive of overall survival in Glioblastoma
Beig, N.
Patel, J.
Prasanna, P.
Hill, V.
Gupta, A.
Correa, R.
Bera, K.
Singh, S.
Partovi, S.
Varadan, V.
Ahluwalia, M.
Madabhushi, A.
Tiwari, P.
Scientific Reports2018Journal Article, cited 5 times
Website
TCGA-GBM
Radiomics
Segmentation
Texture features
Hypoxia, a characteristic trait of Glioblastoma (GBM), is known to cause resistance to chemo-radiation treatment and is linked with poor survival. There is hence an urgent need to non-invasively characterize tumor hypoxia to improve GBM management. We hypothesized that (a) radiomic texture descriptors can capture tumor heterogeneity manifested as a result of molecular variations in tumor hypoxia, on routine treatment naive MRI, and (b) these imaging based texture surrogate markers of hypoxia can discriminate GBM patients as short-term (STS), mid-term (MTS), and long-term survivors (LTS). 115 studies (33 STS, 41 MTS, 41 LTS) with gadolinium-enhanced T1-weighted MRI (Gd-T1w) and T2-weighted (T2w) and FLAIR MRI protocols and the corresponding RNA sequences were obtained. After expert segmentation of necrotic, enhancing, and edematous/nonenhancing tumor regions for every study, 30 radiomic texture descriptors were extracted from every region across every MRI protocol. Using the expression profile of 21 hypoxia-associated genes, a hypoxia enrichment score (HES) was obtained for the training cohort of 85 cases. Mutual information score was used to identify a subset of radiomic features that were most informative of HES within 3-fold cross-validation to categorize studies as STS, MTS, and LTS. When validated on an additional cohort of 30 studies (11 STS, 9 MTS, 10 LTS), our results revealed that the most discriminative features of HES were also able to distinguish STS from LTS (p = 0.003).
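The feature-selection step, ranking radiomic texture features by mutual information with the hypoxia enrichment score, can be approximated with scikit-learn as below; discretizing HES into tertiles is an illustrative choice, not necessarily the paper's exact setup.

```python
# Rank radiomic features by mutual information with hypoxia enrichment
# groups; HES is discretized into tertiles for the classification score.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def rank_features_by_mi(X, hes, top_k=10):
    """X: (n_samples, n_features) radiomic matrix; hes: continuous scores."""
    groups = np.digitize(hes, np.quantile(hes, [1 / 3, 2 / 3]))  # tertiles
    mi = mutual_info_classif(X, groups, random_state=0)
    return np.argsort(mi)[::-1][:top_k]  # indices of most informative features
```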
Radiogenomic analysis of hypoxia pathway reveals computerized MRI descriptors predictive of overall survival in Glioblastoma
Sexually dimorphic radiogenomic models identify distinct imaging and biological pathways that are prognostic of overall survival in glioblastoma
Beig, Niha
Singh, Salendra
Bera, Kaustav
Prasanna, Prateek
Singh, Gagandeep
Chen, Jonathan
Bamashmos, Anas Saeed
Barnett, Addison
Hunter, Kyle
Statsevych, Volodymyr
Hill, Virginia B
Varadan, Vinay
Madabhushi, Anant
Ahluwalia, Manmeet S
Tiwari, Pallavi
Neuro-Oncology2020Journal Article, cited 0 times
IvyGAP
Glioblastoma
MRI
BACKGROUND: Recent epidemiological studies have suggested that sexual dimorphism influences treatment response and prognostic outcome in glioblastoma (GBM). To this end, we sought to (i) identify distinct sex-specific radiomic phenotypes-from tumor subcompartments (peritumoral edema, enhancing tumor, and necrotic core) using pretreatment MRI scans-that are prognostic of overall survival (OS) in GBMs, and (ii) investigate radiogenomic associations of the MRI-based phenotypes with corresponding transcriptomic data, to identify the signaling pathways that drive sex-specific tumor biology and treatment response in GBM.
METHODS: In a retrospective setting, 313 GBM patients (male = 196, female = 117) were curated from multiple institutions for radiomic analysis, where 130 were used for training and independently validated on a cohort of 183 patients. For the radiogenomic analysis, 147 GBM patients (male = 94, female = 53) were used, with 125 patients in training and 22 cases for independent validation.
RESULTS: Cox regression models of radiomic features from gadolinium T1-weighted MRI allowed for developing more precise prognostic models, when trained separately on male and female cohorts. Our radiogenomic analysis revealed higher expression of Laws energy features that capture spots and ripple-like patterns (representative of increased heterogeneity) from the enhancing tumor region, as well as aggressive biological processes of cell adhesion and angiogenesis to be more enriched in the "high-risk" group of poor OS in the male population. In contrast, higher expressions of Laws energy features (which detect levels and edges) from the necrotic core with significant involvement of immune related signaling pathways was observed in the "low-risk" group of the female population.
CONCLUSIONS: Sexually dimorphic radiogenomic models could help risk-stratify GBM patients for personalized treatment decisions.
Longitudinal fan-beam computed tomography dataset for head-and-neck squamous cell carcinoma patients
Bejarano, Tatiana
De Ornelas-Couto, Mariluz
Mihaylov, Ivaylo B.
Medical Physics2019Journal Article, cited 0 times
Website
HNSCC-3D-CT-RT
squamous cell carcinoma
HEAD AND NECK
computed tomography
Purpose: To describe in detail a dataset consisting of longitudinal fan-beam computed tomography (CT) imaging to visualize anatomical changes in head-and-neck squamous cell carcinoma (HNSCC) patients throughout radiotherapy (RT) treatment course. Acquisition and validation methods: This dataset consists of CT images from 31 HNSCC patients who underwent volumetric modulated arc therapy (VMAT). Patients had three CT scans acquired throughout the duration of the radiation treatment course: pretreatment planning CT scans with a median of 13 days before treatment (range: 2–27), mid-treatment CT at 22 days after start of treatment (range: 13–38), and post-treatment CT 65 days after start of treatment (range: 35–192). Patients received RT treatment to a total dose of 58–70 Gy, using daily 2.0–2.20 Gy fractions for 30–35 fractions. The fan-beam CT images were acquired using a Siemens 16-slice CT scanner head protocol with 120 kV and current of 400 mAs. A helical scan with 1 rotation per second was used with a slice thickness of 2 mm and table increment of 1.2 mm. In addition to the imaging data, contours of anatomical structures for RT, demographic, and outcome measurements are provided. Data format and usage notes: The dataset with DICOM files including images, RTSTRUCT files, and RTDOSE files can be found and publicly accessed in the Cancer Imaging Archive (TCIA, http://www.cancerimagingarchive.net/) as collection Head-and-neck squamous cell carcinoma patients with CT taken during pretreatment, mid-treatment, and post-treatment (HNSCC-3DCT-RT). Discussion: This is the first dataset to date in TCIA which provides a collection of multiple CT imaging studies (pretreatment, mid-treatment, and post-treatment) throughout the treatment course. The dataset can serve a wide array of research projects including (but not limited to): quantitative imaging assessment, investigation on anatomical changes with treatment progress, dosimetry of target volumes and/or normal structures due to anatomical changes occurring during treatment, investigation of RT toxicity, and concurrent chemotherapy and RT effects on head-and-neck patients.
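Working with such a collection starts with reading the DICOM objects; the sketch below uses pydicom to sort CT slices into a Hounsfield-unit volume and to list the RTSTRUCT's ROI names. The file paths are placeholders for data downloaded from TCIA.

```python
# Read a CT series and an RTSTRUCT with pydicom; paths are placeholders.
import glob
import numpy as np
import pydicom

ct_files = [pydicom.dcmread(f) for f in glob.glob("CT_dir/*.dcm")]
ct_files.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))  # by z
volume = np.stack([ds.pixel_array for ds in ct_files]).astype(np.int16)

# Rescale raw pixel values to Hounsfield units via stored slope/intercept.
hu = volume * float(ct_files[0].RescaleSlope) \
     + float(ct_files[0].RescaleIntercept)

rtstruct = pydicom.dcmread("RTSTRUCT.dcm")
roi_names = [roi.ROIName for roi in rtstruct.StructureSetROISequence]
print(roi_names)   # e.g., target volumes and organs at risk
```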
Evaluating the Use of rCBV as a Tumor Grade and Treatment Response Classifier Across NCI Quantitative Imaging Network Sites: Part II of the DSC-MRI Digital Reference Object (DRO) Challenge
Bell, Laura C
Semmineh, Natenael
An, Hongyu
Eldeniz, Cihat
Wahl, Richard
Schmainda, Kathleen M
Prah, Melissa A
Erickson, Bradley J
Korfiatis, Panagiotis
Wu, Chengyue
Sorace, Anna G
Yankeelov, Thomas E
Rutledge, Neal
Chenevert, Thomas L
Malyarenko, Dariya
Liu, Yichu
Brenner, Andrew
Hu, Leland S
Zhou, Yuxiang
Boxerman, Jerrold L
Yen, Yi-Fen
Kalpathy-Cramer, Jayashree
Beers, Andrew L
Muzi, Mark
Madhuranthakam, Ananth J
Pinho, Marco
Johnson, Brian
Quarles, C Chad
Tomography2020Journal Article, cited 1 times
Website
QIN-BRAIN-DSC-MRI
Classification
BRAIN
We have previously characterized the reproducibility of brain tumor relative cerebral blood volume (rCBV) using a dynamic susceptibility contrast magnetic resonance imaging digital reference object across 12 sites using a range of imaging protocols and software platforms. As expected, reproducibility was highest when imaging protocols and software were consistent, but decreased when they were variable. Our goal in this study was to determine the impact of rCBV reproducibility for tumor grade and treatment response classification. We found that varying imaging protocols and software platforms produced a range of optimal thresholds for both tumor grading and treatment response, but the performance of these thresholds was similar. These findings further underscore the importance of standardizing acquisition and analysis protocols across sites and software benchmarking.
Analysis of postprocessing steps for residue function dependent dynamic susceptibility contrast (DSC)‐MRI biomarkers and their clinical impact on glioma grading for both 1.5 and 3T
Bell, Laura C
Stokes, Ashley M
Quarles, C Chad
Journal of Magnetic Resonance Imaging2020Journal Article, cited 0 times
Website
QIN-BRAIN-DSC-MRI
Brain-Tumor-Progression
Classification
Development of a 3D CNN-based AI Model for Automated Segmentation of the Prostatic Urethra
Belue, M. J.
Harmon, S. A.
Patel, K.
Daryanani, A.
Yilmaz, E. C.
Pinto, P. A.
Wood, B. J.
Citrin, D. E.
Choyke, P. L.
Turkbey, B.
Acad Radiol2022Journal Article, cited 0 times
Website
PROSTATEx
radiation therapy
PROSTATE
urethra
RATIONALE AND OBJECTIVE: The combined use of prostate cancer radiotherapy and MRI planning is increasingly being used in the treatment of clinically significant prostate cancers. The radiotherapy dose is limited by toxicity to surrounding organs, yet de novo genitourinary toxicity continues to occur. Estimation of the urethral radiation dose via anatomical contouring may improve our understanding of genitourinary toxicity and its related symptoms. Yet, urethral delineation remains an expert-dependent and time-consuming procedure. In this study, we aim to develop a fully automated segmentation tool for the prostatic urethra. MATERIALS AND METHODS: This study incorporated 939 patients' T2-weighted MRI scans (train/validation/test/excluded: 657/141/140/1 patients), including in-house and the public PROSTATEx datasets, and their corresponding ground truth urethral contours from an expert genitourinary radiologist. The AI model was developed using the MONAI framework and was based on a 3D U-Net. AI model performance was determined by the Dice score (volume-based) and the Centerline Distance (CLD) between the prediction and ground truth centers (slice-based). All predictions were compared to ground truth in a systematic failure analysis to elucidate the model's strengths and weaknesses. The Wilcoxon rank-sum test was used for pair-wise comparison of group differences. RESULTS: The overall organ-adjusted Dice score for this model was 0.61 and the overall CLD was 2.56 mm. When comparing prostates with symmetrical (n = 117) and asymmetrical (n = 23) benign prostatic hyperplasia (BPH), the AI model performed better on symmetrical than on asymmetrical prostates in both Dice score (0.64 vs. 0.51, respectively, p < 0.05) and mean CLD (2.3 mm vs. 3.8 mm, respectively, p < 0.05). When calculating location-specific performance, performance was highest at the apex and lowest at the base of the prostate for both Dice and CLD. Dice location dependence: symmetrical (Apex, Mid, Base: 0.69 vs. 0.67 vs. 0.54, p < 0.05) and asymmetrical (Apex, Mid, Base: 0.68 vs. 0.52 vs. 0.39, p < 0.05). CLD location dependence: symmetrical (Apex, Mid, Base: 1.43 mm vs. 2.15 mm vs. 3.28 mm, p < 0.05) and asymmetrical (Apex, Mid, Base: 1.83 mm vs. 3.1 mm vs. 6.24 mm, p < 0.05). CONCLUSION: We developed a fully automated prostatic urethra segmentation AI tool that yields its best performance in prostate glands with symmetric BPH features. This system can potentially be used to assist treatment planning in patients who undergo whole-gland radiation therapy or ablative focal therapy.
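Illustrative note: the two evaluation metrics named in this abstract are straightforward to compute on binary masks. The Python/NumPy sketch below assumes 3D masks indexed as (slice, row, col); the centerline distance is simplified here to the distance between per-slice mask centroids, which may differ from the paper's exact CLD definition.

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Volume-based Dice overlap between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

def centerline_distance(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0)) -> float:
    """Mean slice-wise distance (mm) between mask centroids along z.

    Simplification: the center of each axial slice is taken as the mask
    centroid, standing in for the paper's per-slice centerline centers.
    """
    dists = []
    for z in range(gt.shape[0]):
        if gt[z].any() and pred[z].any():
            cg = np.array(np.nonzero(gt[z])).mean(axis=1)
            cp = np.array(np.nonzero(pred[z])).mean(axis=1)
            dists.append(np.linalg.norm((cg - cp) * np.asarray(spacing)))
    return float(np.mean(dists)) if dists else float("nan")
```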
Towards High Performing and Reliable Deep Convolutional Neural Network Models for Typically Limited Medical Imaging Datasets
Artificial Intelligence (AI) is "the science and engineering of making intelligent machines, especially intelligent computer programs" [93]. Artificial intelligence has been applied in a wide range of fields including automobiles, space, robotics, and healthcare. According to recent reports, AI will have a huge impact on increasing the world economy by 2030, and its greatest impact is expected to be in the field of healthcare. The global market size of AI in healthcare was estimated at USD 10.4 billion in 2021 and is expected to grow at a high rate from 2022 to 2030 (CAGR of 38.4%) [124]. Applications of AI in healthcare include robot-assisted surgery, disease detection, health monitoring, and automatic medical image analysis. Healthcare organizations are becoming increasingly interested in how artificial intelligence can support better patient care while reducing costs and improving efficiencies. Deep learning is a subset of AI that is becoming transformative for healthcare. Deep learning offers fast and accurate data analysis, and is based on the concept of artificial neural networks to solve complex problems. In this dissertation, we propose deep learning-based solutions to the problems of limited medical imaging in two clinical contexts: brain tumor prognosis and COVID-19 diagnosis. For brain tumor prognosis, we suggest novel systems for overall survival prediction of glioblastoma patients from small magnetic resonance imaging (MRI) datasets based on ensembles of convolutional neural networks (CNNs). For COVID-19 diagnosis, we reveal one critical problem with CNN-based approaches for predicting COVID-19 from chest X-ray (CXR) imaging: shortcut learning. Then, we experimentally suggest methods to mitigate this problem to build fair, reliable, robust, and transparent deep learning based clinical decision support systems. We discovered this problem with CNNs and chest X-ray imaging; however, the issue and solutions generally apply to other imaging modalities and recognition problems.
Ensembles of Convolutional Neural Networks for Survival Time Estimation of High-Grade Glioma Patients from Multimodal MRI
Ben Ahmed, Kaoutar
Hall, Lawrence O.
Goldgof, Dmitry B.
Gatenby, Robert
Diagnostics2022Journal Article, cited 2 times
Website
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
Deep learning
Glioblastoma Multiforme (GBM)
Machine Learning
Magnetic Resonance Imaging (MRI)
Glioma is the most common type of primary malignant brain tumor. Accurate survival time prediction for glioma patients may positively impact treatment planning. In this paper, we develop an automatic survival time prediction tool for glioblastoma patients along with an effective solution to the limited availability of annotated medical imaging datasets. Ensembles of snapshots of three dimensional (3D) deep convolutional neural networks (CNN) are applied to Magnetic Resonance Image (MRI) data to predict survival time of high-grade glioma patients. Additionally, multi-sequence MRI images were used to enhance survival prediction performance. A novel way to leverage the potential of ensembles to overcome the limitation of labeled medical image availability is shown. This new classification method separates glioblastoma patients into long- and short-term survivors. The BraTS (Brain Tumor Image Segmentation) 2019 training dataset was used in this work. Each patient case consisted of three MRI sequences (T1CE, T2, and FLAIR). Our training set contained 163 cases while the test set included 46 cases. The best known prediction accuracy of 74% for this type of problem was achieved on the unseen test set.
Prostate Cancer Delineation in MRI Images Based on Deep Learning: Quantitative Comparison and Promising Perspective
Prostate cancer is the most common malignant male tumor. Magnetic Resonance Imaging (MRI) plays a crucial role in the detection, diagnosis, and treatment of prostate cancer. Computer-aided diagnosis systems can help doctors analyze MRI images and detect prostate cancer earlier. One of the key stages of prostate cancer CAD systems is the automatic delineation of the prostate. Deep learning has recently demonstrated promising segmentation results with medical images. The purpose of this paper is to compare the state of the art of deep learning-based approaches for prostate delineation in MRI images and to discuss their limitations and strengths. Besides, we introduce a promising perspective for prostate tumor classification in MRI images. This perspective includes the use of the best segmentation model to detect prostate tumors in MRI images. Then, we employ the segmented images to extract radiomics features that are used to discriminate benign from malignant prostate tumors.
Deep Convolutional Neural Networks for Brain Tumor Segmentation: Boosting Performance Using Deep Transfer Learning: Preliminary Results
Brain tumor segmentation through MRI image analysis is one of the most challenging issues in the medical field. Among these challenges, glioblastomas (GBM) invade the surrounding tissue rather than displacing it, causing unclear boundaries; furthermore, GBM in MRI scans can have the same appearance as gliosis, stroke, inflammation, and blood spots. Fully automatic brain tumor segmentation methods also face other issues, such as false positive and false negative regions. In this paper, we present new pipelines to boost the prediction of GBM tumoral regions. These pipelines are based on three stages: in the first stage, we developed Deep Convolutional Neural Networks (DCNNs); in the second stage, we extract multi-dimensional features from a higher-resolution representation of the DCNNs; in the third stage, we feed the extracted features into machine learning algorithms such as random forest (RF), logistic regression (LR), and principal component analysis with support vector machine (PCA-SVM). Our experimental results are reported on the BRATS-2019 dataset, where our proposed pipelines achieved state-of-the-art performance. The average Dice scores of our best proposed brain tumor segmentation pipeline are 0.85, 0.76, and 0.74 for whole tumor, tumor core, and enhancing tumor, respectively. Finally, our proposed pipeline provides accurate segmentation performance along with computational efficiency at inference time, which makes it practical for day-to-day use in clinical centers and for research.
miRNA normalization enables joint analysis of several datasets to increase sensitivity and to reveal novel miRNAs differentially expressed in breast cancer
Ben-Elazar, Shay
Aure, Miriam Ragle
Jonsdottir, Kristin
Leivonen, Suvi-Katri
Kristensen, Vessela N.
Janssen, Emiel A. M.
Sahlberg, Kristine Kleivi
Lingjærde, Ole Christian
Yakhini, Zohar
2021Journal Article, cited 0 times
TCGA-BRCA
Different miRNA profiling protocols and technologies introduce differences in the resulting quantitative expression profiles. These include differences in the presence (and measurability) of certain miRNAs. We present and examine a method based on quantile normalization, Adjusted Quantile Normalization (AQuN), to combine miRNA expression data from multiple studies in breast cancer into a single joint dataset for integrative analysis. By pooling multiple datasets, we obtain increased statistical power, surfacing patterns that do not emerge as statistically significant when separately analyzing these datasets. To merge several datasets, as we do here, one needs to overcome both technical and batch differences between these datasets. We compare several approaches for merging and jointly analyzing miRNA datasets. We investigate the statistical confidence for known results and highlight potential new findings that resulted from the joint analysis using AQuN. In particular, we detect several miRNAs to be differentially expressed in estrogen receptor (ER) positive versus ER negative samples. In addition, we identify new potential biomarkers and therapeutic targets for both clinical groups. As a specific example, using the AQuN-derived dataset we detect hsa-miR-193b-5p to have a statistically significant over-expression in the ER positive group, a phenomenon that was not previously reported. Furthermore, as demonstrated by functional assays in breast cancer cell lines, overexpression of hsa-miR-193b-5p in breast cancer cell lines resulted in decreased cell viability in addition to inducing apoptosis. Together, these observations suggest a novel functional role for this miRNA in breast cancer. Packages implementing AQuN are provided for Python and Matlab: https://github.com/YakhiniGroup/PyAQN.
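Illustrative note: AQuN builds on standard quantile normalization, which forces every sample to share a common reference distribution. The minimal NumPy/pandas sketch below shows plain quantile normalization only; AQuN's adjustments for miRNAs that are absent or unmeasurable on some platforms are not reproduced here (the authors' own implementation is at the GitHub link above).

```python
import numpy as np
import pandas as pd

def quantile_normalize(df: pd.DataFrame) -> pd.DataFrame:
    """Plain quantile normalization: map each column's ranks onto the
    mean sorted profile so all samples share one distribution."""
    mean_profile = np.sort(df.values, axis=0).mean(axis=1)  # average sorted profile
    ranks = df.rank(method="first").astype(int) - 1         # 0-based ranks per column
    out = pd.DataFrame(index=df.index, columns=df.columns, dtype=float)
    for c in df.columns:
        out[c] = mean_profile[ranks[c].to_numpy()]
    return out
```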
Automatic Design of Window Operators for the Segmentation of the Prostate Gland in Magnetic Resonance Images
Comprehensive Analysis of Radiomic Datasets by RadAR
Benelli, Matteo
Barucci, Andrea
Zoppetti, Nicola
Calusi, Silvia
Redapi, Laura
Della Gala, Giuseppe
Piffer, Stefano
Bernardi, Luca
Fusi, Franco
Pallotta, Stefania
Cancer Research2020Journal Article, cited 0 times
HEAD-NECK-RADIOMICS-HN1
NSCLC-Radiomics
OPC-Radiomics
Quantitative analysis of biomedical images, referred to as radiomics, is emerging as a promising approach to facilitate clinical decisions and improve patient stratification. The typical radiomic workflow includes image acquisition, segmentation, feature extraction, and analysis of high-dimensional datasets. While procedures for primary radiomic analyses have been established in recent years, processing the resulting radiomic datasets remains a challenge due to the lack of specific tools for doing so. Here we present RadAR (Radiomics Analysis with R), a new software to perform comprehensive analysis of radiomic features. RadAR allows users to process radiomic datasets in their entirety, from data import to feature processing and visualization, and implements multiple statistical methods for analysis of these data. We used RadAR to analyze the radiomic profiles of more than 850 patients with cancer from publicly available datasets and showed that it was able to recapitulate expected results. These results demonstrate RadAR as a reliable and valuable tool for the radiomics community. SIGNIFICANCE: A new computational tool performs comprehensive analysis of high-dimensional radiomic datasets, recapitulating expected results in the analysis of radiomic profiles of >850 patients with cancer from independent datasets.
Segmentation of three-dimensional images with parametric active surfaces and topology changes
Benninghoff, Heike
Garcke, Harald
Journal of Scientific Computing2017Journal Article, cited 1 times
Website
Algorithm Development
Segmentation
In this paper, we introduce a novel parametric finite element method for segmentation of three-dimensional images. We consider a piecewise constant version of the Mumford-Shah and the Chan-Vese functionals and perform a region-based segmentation of 3D image data. An evolution law is derived from energy minimization problems which push the surfaces to the boundaries of 3D objects in the image. We propose a parametric scheme which describes the evolution of parametric surfaces. An efficient finite element scheme is proposed for a numerical approximation of the evolution equations. Since standard parametric methods cannot handle topology changes automatically, an efficient method is presented to detect, identify and perform changes in the topology of the surfaces. One main focus of this paper are the algorithmic details to handle topology changes like splitting and merging of surfaces and change of the genus of a surface. Different artificial images are studied to demonstrate the ability to detect the different types of topology changes. Finally, the parametric method is applied to segmentation of medical 3D images.
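For reference, the piecewise-constant two-phase energy the abstract refers to has the standard Chan-Vese form, with $\Gamma$ the evolving surface, $I$ the image, and $c_1, c_2$ the mean intensities inside and outside:

$$E(\Gamma, c_1, c_2) \;=\; \sigma\,|\Gamma| \;+\; \lambda_1 \int_{\mathrm{inside}(\Gamma)} \big(I(x) - c_1\big)^2 \, dx \;+\; \lambda_2 \int_{\mathrm{outside}(\Gamma)} \big(I(x) - c_2\big)^2 \, dx,$$

where $|\Gamma|$ is the surface area. Minimization alternates between updating $c_1, c_2$ as the region means and evolving $\Gamma$ by the resulting gradient flow; the paper's contribution lies in the parametric finite element discretization of that flow and in handling topology changes.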
Safer Motion Planning of Steerable Needles via a Shaft-to-Tissue Force Model
Bentley, Michael
Rucker, Caleb
Reddy, Chakravarthy
Salzman, Oren
Kuntz, Alan
2023Journal Article, cited 0 times
LCTSC
Steerable needles are capable of accurately targeting difficult-to-reach clinical sites in the body. By bending around sensitive anatomical structures, steerable needles have the potential to reduce the invasiveness of many medical procedures. However, inserting these needles with curved trajectories increases the risk of tissue damage due to perpendicular forces exerted on the surrounding tissue by the needle’s shaft, potentially resulting in lateral shearing through tissue. Such forces can cause significant tissue damage, negatively affecting patient outcomes. In this work, we derive a tissue and needle force model based on a Cosserat string formulation, which describes the normal forces and frictional forces along the shaft as a function of the planned needle path, friction model and parameters, and tip piercing force. We propose this new force model and associated cost function as a safer and more clinically relevant metric than those currently used in motion planning for steerable needles. We fit and validate our model through physical needle robot experiments in a gel phantom. We use this force model to define a bottleneck cost function for motion planning and evaluate it against the commonly used path-length cost function in hundreds of randomly generated three-dimensional (3D) environments. Plans generated with our force-based cost show a 62% reduction in the peak modeled tissue force with only a 0.07% increase in length on average compared to using the path-length cost in planning. Additionally, we demonstrate planning with our force-based cost function in a lung tumor biopsy scenario from a segmented computed tomography (CT) scan. By directly minimizing the modeled needle-to-tissue force, our method may reduce patient risk and improve medical outcomes from steerable needle interventions.
Adverse prognosis of glioblastoma contacting the subventricular zone: Biological correlates
Berendsen, S.
van Bodegraven, E.
Seute, T.
Spliet, W. G. M.
Geurts, M.
Hendrikse, J.
Schoysman, L.
Huiszoon, W. B.
Varkila, M.
Rouss, S.
Bell, E. H.
Kroonen, J.
Chakravarti, A.
Bours, V.
Snijders, T. J.
Robe, P. A.
PLoS One2019Journal Article, cited 2 times
Website
TCGA-GBM
Radiogenomics
Magnetic Resonance Imaging (MRI)
INTRODUCTION: The subventricular zone (SVZ) in the brain is associated with gliomagenesis and resistance to treatment in glioblastoma. In this study, we investigate the prognostic role and biological characteristics of SVZ involvement in glioblastoma. METHODS: We analyzed T1-weighted, gadolinium-enhanced MR images of a retrospective cohort of 647 primary glioblastoma patients diagnosed between 2005-2013, and performed a multivariable Cox regression analysis to adjust the prognostic effect of SVZ involvement for clinical patient- and tumor-related factors. Protein expression patterns of markers of neural stem cellness (CD133 and GFAP-delta) and (epithelial-) mesenchymal transition (NF-kappaB, C/EBP-beta and STAT3), among others, were determined with immunohistochemistry on tissue microarrays containing 220 of the tumors. Molecular classification and mRNA expression-based gene set enrichment analyses, miRNA expression and SNP copy number analyses were performed on fresh frozen tissue obtained from 76 tumors. Confirmatory analyses were performed on glioblastoma TCGA/TCIA data. RESULTS: Involvement of the SVZ was a significant adverse prognostic factor in glioblastoma, independent of age, KPS, surgery type and postoperative treatment. Tumor volume and postoperative complications did not explain this prognostic effect. SVZ contact was associated with increased nuclear expression of the (epithelial-) mesenchymal transition markers C/EBP-beta and phospho-STAT3. SVZ contact was not associated with molecular subtype, distinct gene expression patterns, or markers of stem cellness. Our main findings were confirmed in a cohort of 229 TCGA/TCIA glioblastomas. CONCLUSION: In conclusion, involvement of the SVZ is an independent prognostic factor in glioblastoma, and associates with increased expression of key markers of (epithelial-) mesenchymal transformation, but does not correlate with stem cellness, molecular subtype, or specific (mi)RNA expression patterns.
Pulmonary nodule detection using a cascaded SVM classifier
Automatic detection of lung nodules from chest CT has been researched intensively over the last decades, also resulting in several commercial products. However, solutions are adopted only slowly into daily clinical routine, as many current CAD systems still potentially miss true nodules while at the same time generating too many false positives (FP). While many earlier approaches had to rely on rather few cases for development, larger databases are now becoming available and can be used for algorithmic development. In this paper, we address the problem of lung nodule detection via a cascaded SVM classifier. The idea is to sequentially perform two classification tasks in order to select, from an extremely large pool of potential candidates, the few most likely ones. As the initial pool is allowed to contain thousands of candidates, very loose criteria could be applied during this pre-selection. In this way, the chances that a true nodule is falsely rejected as a candidate are reduced significantly. The final algorithm is trained and tested on the full LIDC/IDRI database. Comparison is done against two previously published CAD systems. Overall, the algorithm achieved a sensitivity of 0.859 at 2.5 FP/volume, where the other two achieved sensitivity values of 0.321 and 0.625, respectively. On low-dose data sets, only a slight increase in the number of FP/volume was observed, while the sensitivity was not affected.
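Illustrative note: the cascade idea, a loose first stage that keeps almost all true nodules followed by a stricter second stage, can be hedged into a short scikit-learn sketch. The feature matrices, thresholding strategy, and kernels below are assumptions, not the paper's exact design.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fit_cascade(X, y, stage1_recall=0.99):
    """Two-stage SVM cascade: stage 1 lowers its decision threshold so that
    ~99% of true positives survive; stage 2 classifies the survivors."""
    s1 = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True)).fit(X, y)
    p1 = s1.predict_proba(X)[:, 1]
    thr = np.quantile(p1[y == 1], 1.0 - stage1_recall)  # loose pre-selection criterion
    keep = p1 >= thr
    s2 = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X[keep], y[keep])
    return s1, thr, s2

def predict_cascade(s1, thr, s2, X):
    keep = s1.predict_proba(X)[:, 1] >= thr
    out = np.zeros(len(X), dtype=int)       # rejected candidates are non-nodules
    out[keep] = s2.predict(X[keep])
    return out
```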
Detection of Motion Artifacts in Thoracic CT Scans
The analysis of a lung CT scan can be a complicated task due to the presence of certain image artifacts such as cardiac motion, respiratory motion, beam hardening artefacts, and so on. In this project, we have built a deep learning based model for the detection of these motion artifacts in the image. Using biomedical image segmentation models, we have trained the model on lung CT scans from the LIDC dataset. The developed model is able to identify the regions in the scan which are affected by motion by segmenting the image. Further, it is also able to separate normal (or easy to analyze) CT scans from CT scans that may produce incorrect quantitative analysis, even when the examples of image artifacts or low quality scans are scarce. In addition, the model is able to compute a quality score for the scan based on the amount of artifacts detected, which could hamper its suitability for the further diagnosis of disease or disease progression. We used two main approaches during the experimentation process: 2D slice-based approaches and 2D patch-based approaches, of which the patch-based approaches yielded the final model. The final model gave an AUC of 0.814 in the ROC analysis of the evaluation study conducted. Discussions on the approaches and findings of the final model are provided and future directions are proposed.
Clinical capability of modern brain tumor segmentation models
Berkley, Adam
Saueressig, Camillo
Shukla, Utkarsh
Chowdhury, Imran
Munoz‐Gauna, Anthony
Shehu, Olalekan
Singh, Ritambhara
Munbodh, Reshma
Medical Physics2023Journal Article, cited 0 times
QIN-BRAIN-DSC-MRI
Glioma
PURPOSE: State-of-the-art automated segmentation methods achieve exceptionally high performance on the Brain Tumor Segmentation (BraTS) challenge, a dataset of uniformly processed and standardized magnetic resonance images (MRIs) of gliomas. However, a reasonable concern is that these models may not fare well on clinical MRIs that do not belong to the specially curated BraTS dataset. Research using the previous generation of deep learning models indicates significant performance loss on cross-institutional predictions. Here, we evaluate the cross-institutional applicability and generalizability of state-of-the-art deep learning models on new clinical data.
METHODS: We train a state-of-the-art 3D U-Net model on the conventional BraTS dataset comprising low- and high-grade gliomas. We then evaluate the performance of this model for automatic tumor segmentation of brain tumors on in-house clinical data. This dataset contains MRIs of different tumor types, resolutions, and standardization than those found in the BraTS dataset. Ground truth segmentations to validate the automated segmentation for in-house clinical data were obtained from expert radiation oncologists.
RESULTS: We report average Dice scores of 0.764, 0.648, and 0.61 for the whole tumor, tumor core, and enhancing tumor, respectively, in the clinical MRIs. These means are higher than numbers reported previously on same-institution and cross-institution datasets of different origin using different methods. There is no statistically significant difference when comparing the Dice scores to the inter-annotation variability between two expert clinical radiation oncologists. Although performance on the clinical data is lower than on the BraTS data, these numbers indicate that models trained on the BraTS dataset have impressive segmentation performance on previously unseen images obtained at a separate clinical institution. These images differ from the BraTS data in imaging resolution, standardization pipelines, and tumor types.
CONCLUSIONS: State-of-the-art deep learning models demonstrate promising performance on cross-institutional predictions. They considerably improve on previous models and can transfer knowledge to new types of brain tumors without additional modeling.
Optimization with Soft Dice Can Lead to a Volumetric Bias
Segmentation is a fundamental task in medical image analysis. The clinical interest is often to measure the volume of a structure. To evaluate and compare segmentation methods, the similarity between a segmentation and a predefined ground truth is measured using metrics such as the Dice score. Recent segmentation methods based on convolutional neural networks use a differentiable surrogate of the Dice score, such as soft Dice, explicitly as the loss function during the learning phase. Even though this approach leads to improved Dice scores, we find that, both theoretically and empirically on four medical tasks, it can introduce a volumetric bias for tasks with high inherent uncertainty. As such, this may limit the method’s clinical applicability.
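Illustrative note: a minimal NumPy sketch of the soft Dice surrogate discussed above, plus a toy example of the volumetric bias the paper analyzes. The numbers and setup are hypothetical, not taken from the paper.

```python
import numpy as np

def soft_dice_loss(probs: np.ndarray, target: np.ndarray, eps: float = 1e-6) -> float:
    """Soft Dice: differentiable surrogate using per-voxel foreground probabilities."""
    inter = float((probs * target).sum())
    return 1.0 - (2.0 * inter + eps) / (float(probs.sum()) + float(target.sum()) + eps)

# Toy illustration of the bias under inherent uncertainty: a 100-voxel lesion
# where the true per-voxel foreground probability is only 0.6.
target = np.zeros(1000)
target[:100] = 1.0
calibrated = np.where(target == 1, 0.6, 0.0)   # honest, calibrated output
confident = (calibrated > 0).astype(float)     # inflated, all-in prediction
print(soft_dice_loss(calibrated, target))      # ~0.25
print(soft_dice_loss(confident, target))       # ~0.0 -> the loss rewards inflation
```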
Leveraging geodesic distances and the geometrical information they convey is key for many data-oriented applications in imaging. Geodesic distance computation has long been used for image segmentation with image-based metrics. We introduce a new method that generates isotropic Riemannian metrics adapted to a problem using a CNN, and give an example application as illustration. We then apply this idea to the segmentation of brain tumours, represented as unit balls for the geodesic distance computed with the metric potential output by a CNN, thus imposing geometrical and topological constraints on the output mask. We show that geodesic distance modules work well in machine learning frameworks and can be used to achieve state-of-the-art performance while ensuring geometrical and/or topological properties.
An Optimized U-Net for Unbalanced Multi-Organ Segmentation
Berzoini, Raffaele
Colombo, Aurora A.
Bardini, Susanna
Conelli, Antonello
D'Arnese, Eleonora
Santambrogio, Marco D.
2022Conference Proceedings, cited 0 times
CT-ORG
Medical practice is shifting towards the automation and standardization of the most repetitive procedures to speed up the time-to-diagnosis. Semantic segmentation represents a critical stage in identifying a broad spectrum of regions of interest within medical images. Indeed, it identifies relevant objects by attributing to each image pixel a value representing pre-determined classes. Despite the relative ease of visually locating organs in the human body, automated multi-organ segmentation is hindered by the variety of shapes and dimensions of organs and by computational resources. Within this context, we propose BIONET, a U-Net-based Fully Convolutional Network for efficiently semantically segmenting abdominal organs. BIONET deals with the unbalanced data distribution related to the physiological conformation of the considered organs, reaching good accuracy across organs of variable dimension with low variance, a Weighted Global Dice Score of 93.74 ± 1.1%, and an inference performance of 138 frames per second. Clinical Relevance - This work establishes a starting point for developing an automatic tool for semantic segmentation of variable-sized organs within the abdomen, reaching considerable accuracy on small and large organs with low variability, with a Weighted Global Dice Score of 93.74 ± 1.1%.
On How to Push Efficient Medical Semantic Segmentation to the Edge: the SENECA approach
Berzoini, Raffaele
D'Arnese, Eleonora
Conficconi, Davide
2022 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)2022Journal Article, cited 0 times
CT-ORG
Segmentation
Graphics Processing Units (GPU)
U-Net
Semantic segmentation is the process of assigning each input image pixel a value representing a class, and it enables the clustering of pixels into object instances. It is a highly employed computer vision task in various fields such as autonomous driving and medical image analysis. In particular, in medical practice, semantic segmentation identifies different regions of interest within an image, like different organs or anomalies such as tumors. Fully Convolutional Networks (FCNs) have been employed to solve semantic segmentation in different fields and have found their way into the medical one. In this context, the low contrast among semantically different areas, the constraints related to energy consumption, and computation resource availability increase the complexity and limit their adoption in daily practice. Based on these considerations, we propose SENECA to bring medical semantic segmentation to the edge with high energy efficiency and low segmentation time while preserving accuracy. We reached a throughput of 335.4 ± 0.34 frames per second on the FPGA, 4.65× better than its GPU counterpart, with a global Dice score of 93.04% ± 0.07 and an improvement in energy efficiency with respect to the GPU of 12.7×.
NERONE: The Fast Way to Efficiently Execute Your Deep Learning Algorithm At the Edge
Berzoini, R.
D'Arnese, E.
Conficconi, D.
Santambrogio, M. D.
IEEE J Biomed Health Inform2023Journal Article, cited 0 times
Website
CT-ORG
Graphics Processing Units (GPU)
Segmentation
Classification
Semantic segmentation and classification are pivotal in many clinical applications, such as radiation dose quantification and surgery planning. While manually labeling images is highly time-consuming, the advent of Deep Learning (DL) has introduced a valuable alternative. Nowadays, DL model inference is run on Graphics Processing Units (GPUs), which are power-hungry devices and, therefore, are not the most suited solution in constrained environments, where Field Programmable Gate Arrays (FPGAs) become an appealing alternative given their remarkable performance-per-watt ratio. Unfortunately, FPGAs are hard to use for non-experts, and the creation of tools to open their employment to the computer vision community is still limited. For these reasons, we propose NERONE, which allows end users to seamlessly benefit from FPGA acceleration and energy efficiency without modifying their DL development flows. To prove the capability of NERONE to cover different network architectures, we have developed four models, one for each of the chosen datasets (three for segmentation and one for classification), and we deployed them, thanks to NERONE, on three different embedded FPGA-powered boards, achieving top average energy efficiency improvements of 3.4x and 1.9x against a mobile and a datacenter GPU device, respectively.
Accurate segmentation of lung nodule with low contrast boundaries by least weight navigation
Beula, R. Janefer
Wesley, A. Boyed
Multimedia Tools and Applications2023Journal Article, cited 0 times
Website
Lung-PET-CT-Dx
LIDC-IDRI
Computed Tomography (CT)
Otsu's thresholding method
LUNG
Segmentation
Accurate segmentation of lung nodules with low contrast boundaries in CT images is a challenging task, since the intensities of nodules and non-nodules overlap with each other. This work proposes a lung nodule segmentation scheme based on least weight navigation (LWN) that segments the lung nodule accurately despite such low contrast boundaries. The complete lung nodule segmentation is categorized into three stages, namely: (i) lung segmentation, (ii) coarse segmentation of the nodule, and (iii) fine segmentation of the nodule. Lung segmentation aims to eliminate the background other than the lung, whereas coarse segmentation eliminates the lung, leaving the nodules. Lung segmentation and coarse segmentation can be achieved using traditional algorithms, namely dilation, erosion, and Otsu's thresholding. The proposed work focuses on fine segmentation, where the boundaries are accurately detected by the LWN algorithm. The LWN algorithm estimates the edge points, and then navigation is performed based on the least weight. This navigation continues until final termination is reached, which results in accurate segmentation. Experimental validation was done on the LIDC and Cancer Imaging datasets with three different nodule types: juxta-vascular, juxta-pleural, and solitary. Evaluation was done using metrics such as the dice similarity coefficient (DSC), sensitivity (SEN), positive predictive value (PPV), Hausdorff distance (HD), and probability Rand index (PRI). The proposed approach provides a DSC, SEN, and PPV of 84.27%, 89.92%, and 80.12%, respectively. The results reveal that the proposed work outperforms traditional lung nodule segmentation algorithms.
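Illustrative note: the lung and coarse stages described above rely on classical morphology and Otsu's threshold, which a few lines of Python with scikit-image can reproduce in spirit. This sketch is a generic coarse lung mask, not the paper's exact pipeline, and the LWN fine stage is not shown.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import ball, binary_dilation, binary_erosion

def coarse_lung_mask(ct_hu: np.ndarray) -> np.ndarray:
    """Coarse stage sketch: Otsu's threshold separates air-filled lung from
    soft tissue, then erosion/dilation clean up the binary mask."""
    t = threshold_otsu(ct_hu)
    mask = ct_hu < t                        # lung parenchyma is darker than body
    mask = binary_erosion(mask, ball(2))    # detach thin leaks at the border
    mask = binary_dilation(mask, ball(2))   # restore the eroded lung boundary
    return mask
```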
Optimizing Convolutional Neural Network by Hybridized Elephant Herding Optimization Algorithm for Magnetic Resonance Image Classification of Glioma Brain Tumor Grade
Gliomas belong to the group of the most frequent types of brain tumors. In its early stages, this specific type of brain tumor is extremely difficult to diagnose exactly; even the most experienced doctors cannot do so without magnetic resonance imaging, which aids in making the diagnosis of brain tumors. Convolutional neural networks can be used to classify the images to which the glioma class belongs. To achieve high accuracy in image classification, the hyperparameters of the convolutional network must be calibrated very precisely, a task that consumes a great deal of computational time and energy. In this paper, a metaheuristic method is proposed to automatically search for near-optimal values of convolutional neural network hyperparameters, based on a hybridized version of the elephant herding optimization swarm intelligence metaheuristic. The hybridized elephant herding optimization is incorporated for convolutional neural network hyperparameter tuning to develop a system for automatic and instantaneous image classification of glioma brain tumor grades from magnetic resonance imaging. Comparative analysis was performed with other methods tested on the same problem instance, and the results proved the superiority of the approach proposed in this paper.
Fuzzy volumetric delineation of brain tumor and survival prediction
Bhadani, Saumya
Mitra, Sushmita
Banerjee, Subhashis
Soft Computing2020Journal Article, cited 0 times
Website
BRATS datasets
A novel three-dimensional detailed delineation algorithm is introduced for glioblastoma multiforme tumors in MRI. It efficiently delineates the whole tumor, enhancing core, edema and necrosis volumes using fuzzy connectivity and multi-thresholding, based on a single seed voxel. While whole tumor volume delineation uses the FLAIR and T2 MRI channels, the outlining of the enhancing core, necrosis and edema volumes employs the T1C channel. Discrete curve evolution is initially applied for multi-thresholding, to determine intervals around significant (visually critical) points, and a threshold is determined in each interval using bi-level Otsu's method or Li and Lee's entropy. This is followed by an interactive whole tumor volume delineation using FLAIR and T2 MRI sequences, requiring a single user-defined seed. An efficient and robust whole tumor extraction is executed using fuzzy connectedness and dynamic thresholding. Finally, the segmented whole tumor volume in the T1C MRI channel is again subjected to multi-level segmentation to delineate its sub-parts, encompassing the enhancing core, necrosis and edema. This is followed by survival prediction of patients using the concept of habitats. Qualitative and quantitative evaluation on FLAIR, T2 and T1C MR sequences of 29 GBM patients establishes its superiority over related methods, visually as well as in terms of Dice scores, sensitivity and Hausdorff distance.
Brain Tumor Segmentation Based on 3D Residual U-Net
We propose a deep learning based approach for automatic brain tumor segmentation utilizing a three-dimensional U-Net extended by residual connections. In this work, we did not incorporate architectural modifications to the existing 3D U-Net, but rather evaluated different training strategies for potential improvement of performance. Our model was trained on the dataset of the International Brain Tumor Segmentation (BraTS) challenge 2019, which comprises multi-parametric magnetic resonance imaging (mpMRI) scans from 335 patients diagnosed with a glial tumor. Furthermore, our model was evaluated on the BraTS 2019 independent validation data, which consisted of another 125 brain tumor mpMRI scans. The results our 3D Residual U-Net obtained on the BraTS 2019 test data are mean Dice scores of 0.697, 0.828, 0.772 and Hausdorff95 distances of 25.56, 14.64, 26.69 for enhancing tumor, whole tumor, and tumor core, respectively.
Cuckoo search based multi-objective algorithm with decomposition for detection of masses in mammogram images
Bhalerao, Pramod B.
Bonde, Sanjiv V.
International Journal of Information Technology2021Journal Article, cited 0 times
Website
CBIS-DDSM
mini-MIAS
Computer Aided Detection (CADe)
BREAST
Mammography
Machine Learning
Breast cancer is the most recurrent cancer in the United States after skin cancer. Early detection of masses in mammograms will help reduce the death rate. This paper provides a hybrid approach based on a multiobjective evolutionary algorithm (MOEA) and cuckoo search, using cuckoo search to decompose the problem into a single objective (a single nest) for each Pareto-optimal solution. The proposed method, CS-MOEA/DE, is evaluated using the MIAS and DDSM datasets. The novel hybrid approach combines nature-inspired cuckoo search with multiobjective optimization using differential evolution, which is unique, and includes detection of masses in a mammogram. The proposed work is evaluated on 110 (50 + 60) images; the overall accuracy found for the proposed hybrid method is 96.74%. The experimental outcome shows that our proposed method provides better results than other state-of-the-art methods such as the Otsu method, Kapur's entropy, and cuckoo-search-based modified BHE.
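Illustrative note: the cuckoo search component rests on Lévy flights and nest abandonment. Below is a minimal single-objective NumPy sketch using Mantegna's algorithm for the Lévy step; the paper's CS-MOEA/DE hybrid first decomposes the multiobjective problem so each nest optimizes one scalarized objective, which this sketch does not reproduce.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5, rng=None):
    """Heavy-tailed Levy flight step via Mantegna's algorithm."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, lo, hi, n_nests=15, iters=200, pa=0.25, alpha=0.01, seed=0):
    """Minimize f over the box [lo, hi] with Levy flights and abandonment."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    nests = rng.uniform(lo, hi, (n_nests, lo.size))
    fit = np.array([f(n) for n in nests])
    for _ in range(iters):
        best = nests[fit.argmin()].copy()
        for i in range(n_nests):            # Levy flight biased towards the best nest
            step = alpha * levy_step(lo.size, rng=rng) * (nests[i] - best)
            cand = np.clip(nests[i] + step, lo, hi)
            j = rng.integers(n_nests)       # replace a random nest if improved
            fc = f(cand)
            if fc < fit[j]:
                nests[j], fit[j] = cand, fc
        n_bad = max(1, int(pa * n_nests))   # abandon the worst fraction pa of nests
        worst = np.argsort(fit)[-n_bad:]
        nests[worst] = rng.uniform(lo, hi, (n_bad, lo.size))
        fit[worst] = np.array([f(n) for n in nests[worst]])
    return nests[fit.argmin()], float(fit.min())
```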
A Reversible Medical Image Watermarking for ROI Tamper Detection and Recovery
Bhalerao, Siddharth
Ansari, Irshad Ahmad
Kumar, Anil
Circuits, Systems, and Signal Processing2023Journal Article, cited 0 times
Website
LIDC-IDRI
PDMR-BL0293-F563
Security
Algorithm Development
Medical data security is an active area of research. With the increasing rate of digitalization, the telemedicine industry is experiencing rapid growth, and medical data security has become more important than ever. In this work, a region-based reversible medical image watermarking scheme is proposed. The scheme has ROI (region of interest) tamper detection and recovery capabilities. The medical image is divided into ROI and RONI (region of non-interest) regions. In the ROI region, authentication data are embedded using the prediction-error expansion technique. A compressed copy of the ROI is embedded in the RONI region. Data embedding in the RONI region is performed using the difference histogram expansion technique. Reversible techniques are used for data embedding in both ROI and RONI. The proposed scheme authenticates both ROI and RONI for tampering. The scheme is 100% reversible when there is no tampering. The scheme checks for ROI tampering and recovers the ROI in its original state when tampering is detected. The scheme performs equally well on different classes of medical images, providing an average PSNR and SSIM of 55 dB and 0.99, respectively, for different types of medical images.
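Illustrative note: the prediction-error expansion step used in the ROI can be sketched in a few lines. This is the textbook PEE mapping on a single pixel given some predictor output; overflow/underflow handling and the location map that a complete scheme needs are omitted, and the predictor itself (e.g., a neighbor average) is left abstract.

```python
def pee_embed_pixel(x: int, pred: int, bit: int, t: int = 2):
    """Prediction-error expansion: errors with |e| < t carry one payload bit,
    larger errors are shifted so embedded and shifted ranges stay disjoint."""
    e = x - pred
    if abs(e) < t:                        # expandable: e' = 2e + bit
        return pred + 2 * e + bit, True
    e2 = e + t if e >= 0 else e - t       # shift to keep the mapping invertible
    return pred + e2, False

def pee_extract_pixel(x2: int, pred: int, t: int = 2):
    """Inverse mapping: recover the original pixel and, if present, the bit."""
    e2 = x2 - pred
    if -2 * t < e2 < 2 * t:               # was expanded: e = floor(e'/2), bit = e' mod 2
        return pred + (e2 >> 1), e2 & 1
    return (x2 - t, None) if e2 >= 0 else (x2 + t, None)
```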
Deep-learning framework to detect lung abnormality – A study with chest X-Ray and lung CT scan images
Bhandary, Abhir
Prabhu, G. Ananth
Rajinikanth, V.
Thanaraj, K. Palani
Satapathy, Suresh Chandra
Robbins, David E.
Shasky, Charles
Zhang, Yu-Dong
Tavares, João Manuel R. S.
Raja, N. Sri Madhava
Pattern Recognition Letters2020Journal Article, cited 0 times
Website
LIDC-IDRI
Deep Learning
Support Vector Machine (SVM)
Lung abnormalities are highly risky conditions in humans. Early diagnosis of lung abnormalities is essential to reduce the risk by enabling quick and efficient treatment. This research work aims to propose a Deep-Learning (DL) framework to examine lung pneumonia and cancer. This work proposes two different DL techniques to assess the considered problem: (i) The first DL method, named modified AlexNet (MAN), is proposed to classify chest X-Ray images into normal and pneumonia classes. In the MAN, classification is implemented using a Support Vector Machine (SVM), and its performance is compared against Softmax. Further, its performance is validated against other pre-trained DL techniques, such as AlexNet, VGG16, VGG19 and ResNet50. (ii) The second DL work implements a fusion of handcrafted and learned features in the MAN to improve classification accuracy during lung cancer assessment. This work employs serial fusion and Principal Component Analysis (PCA) based feature selection to enhance the feature vector. The performance of this DL framework is tested using benchmark lung cancer CT images of LIDC-IDRI, and a classification accuracy of 97.27% is attained.
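Illustrative note: the serial fusion of learned and handcrafted features with PCA feeding an SVM follows a generic pattern; a scikit-learn sketch under assumed inputs is below. The names deep_feats and hand_feats are hypothetical feature matrices, and PCA here performs dimensionality reduction standing in for the paper's PCA-based feature selection.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fused_classifier(deep_feats, hand_feats, labels, n_components=50):
    """Serial fusion: concatenate learned and handcrafted features,
    compress with PCA, and classify with an SVM."""
    X = np.concatenate([deep_feats, hand_feats], axis=1)   # serial fusion
    clf = make_pipeline(StandardScaler(),
                        PCA(n_components=n_components),
                        SVC(kernel="rbf"))
    return clf.fit(X, labels)
```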
Investigation and benchmarking of U-Nets on prostate segmentation tasks
Bhandary, Shrajan
Kuhn, Dejan
Babaiee, Zahra
Fechter, Tobias
Benndorf, Matthias
Zamboglou, Constantinos
Grosu, Anca-Ligia
Grosu, Radu
Computerized Medical Imaging and Graphics2023Journal Article, cited 0 times
ISBI-MR-Prostate-2013
PROSTATEx
Prostate
In healthcare, a growing number of physicians and support staff are striving to facilitate personalized radiotherapy regimens for patients with prostate cancer. This is because individual patient biology is unique, and employing a single approach for all is inefficient. A crucial step for customizing radiotherapy planning and gaining fundamental information about the disease is the identification and delineation of targeted structures. However, accurate biomedical image segmentation is time-consuming, requires considerable experience and is prone to observer variability. In the past decade, the use of deep learning models has significantly increased in the field of medical image segmentation. At present, a vast number of anatomical structures can be demarcated at a clinician's level with deep learning models. These models would not only reduce the workload, but can also offer an unbiased characterization of the disease. The main architectures used in segmentation are the U-Net and its variants, which exhibit outstanding performance. However, reproducing results or directly comparing methods is often limited by closed sources of data and the large heterogeneity among medical images. With this in mind, our intention is to provide a reliable source for assessing deep learning models. As an example, we chose the challenging task of delineating the prostate gland in multi-modal images. First, this paper provides a comprehensive review of current state-of-the-art convolutional neural networks for 3D prostate segmentation. Second, utilizing public and in-house CT and MR datasets of varying properties, we created a framework for an objective comparison of automatic prostate segmentation algorithms. The framework was used for rigorous evaluations of the models, highlighting their strengths and weaknesses.
Tumor Segmentation from Multimodal MRI Using Random Forest with Superpixel and Tensor Based Feature Extraction
Identification and localization of brain tumor tissues plays an important role in the diagnosis and treatment planning of gliomas. A fully automated, superpixel-wise, two-stage tumor tissue segmentation algorithm using random forests is proposed in this paper. The first stage is used to identify the total tumor and the second stage to segment sub-regions. Features for the random forest classifier are extracted by constructing a tensor from multimodal MRI data and applying multi-linear singular value decomposition. The proposed method is tested on the BRATS 2017 validation and test datasets. The first stage model has a Dice score of 83% for the whole tumor on the validation dataset. The total model achieves a performance of 77%, 50% and 61% Dice scores for whole tumor, enhancing tumor and tumor core, respectively, on the test dataset.
Comparison of a Patient-Specific Computed Tomography Organ Dose Software with Commercial Phantom-Based Tools
Computed Tomography imaging is an important diagnostic tool but carries some risk due to the radiation dose used to form the image. Currently, CT scanners report a measure of radiation dose for each scan that reflects the radiation emitted by the scanner, not the radiation dose absorbed by the patient. The radiation dose absorbed by organs, known as organ dose, is a more relevant metric that is important for risk assessment and CT protocol optimization. Tools for rapid organ-dose estimation are available but are limited to using general patient models. These publicly available tools are unable to model patient-specific anatomy and positioning within the scanner. To address these limitations, the Personalized Rapid Estimator of Dose in Computed Tomography (PREDICT) dosimetry tool was recently developed. This study validated the organ doses estimated by PREDICT with ground truth values. The patient-specific PREDICT performance was also compared to two publicly available phantom-based methods: VirtualDose and NCICT. The PREDICT tool demonstrated lower organ dose errors compared to the phantom-based methods, demonstrating the benefit of patient-specific modeling. This study also developed a method to extract the walls of cavity organs, such as the bladder and the intestines, and quantified the effect of organ wall extraction on organ dose. The study found that the exogenous material within a cavity organ can affect the organ dose estimate, therefore demonstrating the importance of boundary wall extraction in dosimetry tools such as PREDICT.
Brain structural disorders detection and classification approaches: a review
Bhatele, Kirti Raj
Bhadauria, Sarita Singh
2019Journal Article, cited 0 times
TCGA-GBM
TCGA-LGG
Machine Learning
This paper is an effort to encapsulate the various developments in the domain of unsupervised, supervised and semi-supervised brain anomaly detection approaches proposed by researchers working on medical image segmentation and classification. Researchers are constantly working in the domains of image segmentation, interpretation and computer vision in order to automate the tasks of tumour segmentation, anomaly detection, classification and other structural disorder prediction at an early stage with the aid of computers. Different medical imaging modalities are used by doctors to diagnose brain tumours and other structural brain disorders, which is an integral part of the diagnosis and prognosis process. When these medical image modalities are used along with various image segmentation methods and machine learning approaches, brain structural disorder detection and classification can be performed in a semi-automated or fully automated manner with high accuracy. This paper presents all such approaches using various medical image modalities for the accurate detection and classification of brain tumours and other brain structural disorders. All the major phases of any brain tumour or brain structural disorder detection and classification approach are covered, beginning with a comparison of various medical image pre-processing techniques, then major segmentation approaches, followed by approaches based on machine learning. This paper also presents an evaluation and comparison of various popular texture- and shape-based feature extraction methods used in combination with different machine learning classifiers on the BRATS 2013 dataset. The fusion of MRI modalities used along with hybrid feature extraction methods and an ensemble model delivers the best result in terms of accuracy.
Radial Cumulative Frequency Distribution: A New Imaging Signature to Detect Chromosomal Arms 1p/19q Co-deletion Status in Glioma
Gliomas are the most common primary brain tumor and are associated with high mortality. Gene mutations are one of the hallmarks of glioma formation, determining its aggressiveness as well as patients' response to treatment. This paper presents a novel approach to detect chromosomal arms 1p/19q co-deletion status non-invasively in low-grade glioma, based on its textural characteristics in the frequency domain. For this, we derived a Radial Cumulative Frequency Distribution (RCFD) function from the Fourier power spectrum of consecutive glioma slices. Multi-parametric MRIs of 159 grade-2 and grade-3 glioma patients with biopsy-proven 1p/19q mutational status (non-deletion: n = 57; co-deletion: n = 102) were used in this study. Different RCFD textural features were extracted to quantify the MRI signature patterns of mutant and wildtype gliomas. Owing to the skewed dataset, we performed RUSBoost classification, yielding an average accuracy of 73.5% for grade-2 and 83% for grade-3 glioma subjects. The efficacy of the proposed technique is discussed further in comparison with state-of-the-art methods.
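Illustrative note: a radial profile of the Fourier power spectrum, cumulated over frequency, can be sketched in NumPy as below; the paper's exact RCFD definition (normalization, slice handling) may differ, so treat this as an assumption-laden approximation.

```python
import numpy as np

def radial_cumulative_frequency(img: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Bin the 2D Fourier power spectrum by radius from the DC component,
    then return the cumulative fraction of spectral energy per radius bin."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spec.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h // 2, xx - w // 2)
    power, _ = np.histogram(r / r.max(), bins=n_bins, weights=spec)
    cdf = np.cumsum(power)
    return cdf / cdf[-1]
```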
A review of artificial intelligence in prostate cancer detection on imaging
Bhattacharya, Indrani
Khandwala, Yash S.
Vesal, Sulaiman
Shao, Wei
Yang, Qianye
Soerensen, Simon J.C.
Fan, Richard E.
Ghanouni, Pejman
Kunder, Christian A.
Brooks, James D.
Hu, Yipeng
Rusu, Mirabela
Sonn, Geoffrey A.
2022Journal Article, cited 0 times
ISBI-MR-Prostate-2013
PROSTATEx
A multitude of studies have explored the role of artificial intelligence (AI) in providing diagnostic support to radiologists, pathologists, and urologists in prostate cancer detection, risk-stratification, and management. This review provides a comprehensive overview of relevant literature regarding the use of AI models in (1) detecting prostate cancer on radiology images (magnetic resonance and ultrasound imaging), (2) detecting prostate cancer on histopathology images of prostate biopsy tissue, and (3) assisting in supporting tasks for prostate cancer detection (prostate gland segmentation, MRI-histopathology registration, MRI-ultrasound registration). We discuss both the potential of these AI models to assist in the clinical workflow of prostate cancer diagnosis, as well as the current limitations including variability in training data sets, algorithms, and evaluation criteria. We also discuss ongoing challenges and what is needed to bridge the gap between academic research on AI for prostate cancer and commercial solutions that improve routine clinical care.
Isolation of Prostate Gland in T1-Weighted Magnetic Resonance Images using Computer Vision
G-DOC Plus–an integrative bioinformatics platform for precision medicine
Bhuvaneshwar, Krithika
Belouali, Anas
Singh, Varun
Johnson, Robert M
Song, Lei
Alaoui, Adil
Harris, Michael A
Clarke, Robert
Weiner, Louis M
Gusev, Yuriy
BMC Bioinformatics2016Journal Article, cited 14 times
Website
TCGA
REMBRANDT
Bioinformatics
Cloud computing
Precision medicine
Artificial intelligence in cancer imaging: Clinical challenges and applications
Bi, Wenya Linda
Hosny, Ahmed
Schabath, Matthew B
Giger, Maryellen L
Birkbak, Nicolai J
Mehrtash, Alireza
Allison, Tavis
Arnaout, Omar
Abbosh, Christopher
Dunn, Ian F
CA: a cancer journal for clinicians2019Journal Article, cited 0 times
Website
Radiomics
Challenge
A comparison of ground truth estimation methods
Biancardi, Alberto M
Jirapatnakul, Artit C
Reeves, Anthony P
International Journal of Computer Assisted Radiology and Surgery2010Journal Article, cited 17 times
Website
LIDC-IDRI
Algorithm Development
LUNG
PURPOSE: Knowledge of the exact shape of a lesion, or ground truth (GT), is necessary for the development of diagnostic tools by means of algorithm validation, measurement metric analysis, and accurate size estimation. Four methods that estimate GTs from multiple readers' documentations by considering the spatial location of voxels were compared: thresholded Probability-Map at 0.50 (TPM(0.50)) and at 0.75 (TPM(0.75)), simultaneous truth and performance level estimation (STAPLE), and truth estimate from self distances (TESD). METHODS: A subset of the publicly available Lung Image Database Consortium archive was used, selecting pulmonary nodules documented by all four radiologists. The pair-wise similarities between the estimated GTs were analyzed by computing the respective Jaccard coefficients. Then, with respect to the readers' marking volumes, the estimated volumes were ranked and the sign test of the differences between them was performed. RESULTS: (a) the rank variations among the four methods and the volume differences between STAPLE and TESD are not statistically significant; (b) TPM(0.50) estimates are statistically larger; (c) TPM(0.75) estimates are statistically smaller; (d) there is some spatial disagreement in the estimates, as the one-sided 90% confidence intervals between TPM(0.75) and TPM(0.50), TPM(0.75) and STAPLE, TPM(0.75) and TESD, TPM(0.50) and STAPLE, TPM(0.50) and TESD, and STAPLE and TESD, respectively, show: [0.67, 1.00], [0.67, 1.00], [0.77, 1.00], [0.93, 1.00], [0.85, 1.00], [0.85, 1.00]. CONCLUSIONS: The method used to estimate the GT is important: the differences highlighted that STAPLE and TESD, notwithstanding a few weaknesses, appear to be equally viable as GT estimators, while the increased availability of computing power is decreasing the appeal afforded to TPMs. Ultimately, the choice of which of the two GT estimation methods should be preferred depends on the specific characteristics of the marked data with respect to the two elements that differentiate the approaches: the relative reliabilities of the readers and the reliability of the region boundaries.
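Illustrative note: of the four estimators, the thresholded probability map is the simplest to reproduce; a NumPy sketch follows. STAPLE, by contrast, is an EM algorithm available, for example, as SimpleITK's STAPLE filter.

```python
import numpy as np

def thresholded_probability_map(masks, tau=0.50):
    """TPM(tau): keep voxels marked by at least a fraction tau of readers
    (tau = 0.50 and 0.75 reproduce the paper's two TPM variants)."""
    pmap = np.mean([np.asarray(m, dtype=float) for m in masks], axis=0)
    return pmap >= tau
```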
Impact of Lesion Delineation and Intensity Quantisation on the Stability of Texture Features from Lung Nodules on CT: A Reproducible Study
Bianconi, Francesco
Fravolini, Mario Luca
Palumbo, Isabella
Pascoletti, Giulia
Nuvoli, Susanna
Rondini, Maria
Spanu, Angela
Palumbo, Barbara
Diagnostics2021Journal Article, cited 0 times
LIDC-IDRI
Computer-assisted analysis of three-dimensional imaging data (radiomics) has received a lot of research attention as a possible means to improve the management of patients with lung cancer. Building robust predictive models for clinical decision making requires the imaging features to be stable enough to changes in the acquisition and extraction settings. Experimenting on 517 lung lesions from a cohort of 207 patients, we assessed the stability of 88 texture features from the following classes: first-order (13 features), Grey-level Co-Occurrence Matrix (24), Grey-level Difference Matrix (14), Grey-level Run-length Matrix (16), Grey-level Size Zone Matrix (16) and Neighbouring Grey-tone Difference Matrix (five). The analysis was based on a public dataset of lung nodules and open-access routines for feature extraction, which makes the study fully reproducible. Our results identified 30 features that had good or excellent stability relative to lesion delineation, 28 to intensity quantisation and 18 to both. We conclude that selecting the right set of imaging features is critical for building clinical predictive models, particularly when changes in lesion delineation and/or intensity quantisation are involved.
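Illustrative note: the Grey-level Co-Occurrence Matrix class above is the kind of feature whose stability was tested. A scikit-image sketch of GLCM features after a simple intensity quantisation (the perturbed step) follows, with the bin count as the free parameter such a study varies; the exact quantisation scheme is an assumption.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi: np.ndarray, levels: int = 32):
    """GLCM features for one 2D ROI slice after intensity quantisation."""
    edges = np.linspace(roi.min(), roi.max(), levels - 1)
    q = np.digitize(roi, edges).astype(np.uint8)        # quantise to `levels` bins
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return {p: float(graycoprops(glcm, p).mean())
            for p in ("contrast", "homogeneity", "energy", "correlation")}
```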
Correlation Between IBSI Morphological Features and Manually-Annotated Shape Attributes on Lung Lesions at CT
Bianconi, Francesco
Fravolini, Mario Luca
Pascoletti, Giulia
Palumbo, Isabella
Scialpi, Michele
Aristei, Cynthia
Palumbo, Barbara
2022Book Section, cited 0 times
LIDC-IDRI
Radiological examination of pulmonary nodules on CT involves the assessment of the nodules’ size and morphology, a procedure usually performed manually. In recent years computer-assisted analysis of indeterminate lung nodules has been receiving increasing research attention as a potential means to improve the diagnosis, treatment and follow-up of patients with lung cancer. Computerised analysis relies on the extraction of objective, reproducible and standardised imaging features. In this context the aim of this work was to evaluate the correlation between nine IBSI-compliant morphological features and three manually-assigned radiological attributes – lobulation, sphericity and spiculation. Experimenting on 300 lung nodules from the open-access LIDC-IDRI dataset we found that the correlation between the computer-calculated features and the manually-assigned visual scores was at best moderate (Pearson’s r between -0.61 and 0.59; Spearman’s ρ between -0.59 and 0.56). We conclude that the morphological features investigated here have moderate ability to match/explain manually-annotated lobulation, sphericity and spiculation.
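A minimal sketch of the correlation analysis described above, using SciPy's pearsonr and spearmanr on hypothetical feature values and visual scores (the arrays here are random stand-ins, not the study's data):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)
# Hypothetical data: one IBSI morphological feature (e.g. sphericity)
# and a 1-5 visual score assigned by radiologists for 300 nodules.
feature_values = rng.random(300)
visual_scores = rng.integers(1, 6, size=300)

r, p_r = pearsonr(feature_values, visual_scores)
rho, p_rho = spearmanr(feature_values, visual_scores)
print(f"Pearson r = {r:.2f} (p = {p_r:.3g}); "
      f"Spearman rho = {rho:.2f} (p = {p_rho:.3g})")
```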
Comparative evaluation of conventional and deep learning methods for semi-automated segmentation of pulmonary nodules on CT
Bianconi, Francesco
Fravolini, Mario Luca
Pizzoli, Sofia
Palumbo, Isabella
Minestrini, Matteo
Rondini, Maria
Nuvoli, Susanna
Spanu, Angela
Palumbo, Barbara
Quant Imaging Med Surg2021Journal Article, cited 2 times
Website
LIDC-IDRI
Segmentation
Algorithm Development
Computed Tomography (CT)
Deep Learning
LUNG
Background: Accurate segmentation of pulmonary nodules on computed tomography (CT) scans plays a crucial role in the evaluation and management of patients with suspicion of lung cancer (LC). When performed manually, the process not only requires highly skilled operators but is also tiresome and time-consuming. To assist the physician in this task, several automated and semi-automated methods have been proposed in the literature. In recent years, in particular, the appearance of deep learning has brought about major advances in the field. Methods: Twenty-four (12 conventional and 12 based on deep learning) semi-automated ('one-click') methods for segmenting pulmonary nodules on CT were evaluated in this study. The experiments were carried out on two datasets: a proprietary one (383 images from a cohort of 111 patients) and a public one (259 images from a cohort of 100 patients). All the patients had a positive transcript for suspect pulmonary nodules. Results: The methods based on deep learning clearly outperformed the conventional ones. The best performance [Sorensen-Dice coefficient (DSC)] in the two datasets was, respectively, 0.853 and 0.763 for the deep learning methods, and 0.761 and 0.704 for the traditional ones. Conclusions: Deep learning is a viable approach for semi-automated segmentation of pulmonary nodules on CT scans.
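The Sorensen-Dice coefficient (DSC) used for the comparison can be computed as follows; a minimal sketch assuming binary masks as NumPy arrays:

```python
import numpy as np

def dice_coefficient(pred, ref):
    """Sorensen-Dice coefficient 2|P & R| / (|P| + |R|) between two
    binary segmentation masks of the same shape."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom > 0 else 1.0
```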
Latent space arc therapy optimization
Bice, Noah
Fakhreddine, Mohamad
Li, Ruiqi
Nguyen, Dan
Kabat, Christopher
Myers, Pamela
Papanikolaou, Niko
Kirby, Neil
Physics in Medicine and Biology2021Journal Article, cited 0 times
HEAD-NECK-RADIOMICS-HN1
Volumetric modulated arc therapy planning is a challenging problem in high-dimensional, non-convex optimization. Traditionally, heuristics such as fluence-map-optimization-informed segment initialization use locally optimal solutions to begin the search of the full arc therapy plan space from a reasonable starting point. These routines facilitate arc therapy optimization such that clinically satisfactory radiation treatment plans can be created in a reasonable time frame. However, current optimization algorithms favor solutions near their initialization point and are slower than necessary due to plan overparameterization. In this work, arc therapy overparameterization is addressed by reducing the effective dimension of treatment plans with unsupervised deep learning. An optimization engine is then built based on low-dimensional arc representations which facilitates faster planning times.
Convolutional neural networks for head and neck tumor segmentation on 7-channel multiparametric MRI: a leave-one-out analysis
Bielak, Lars
Wiedenmann, Nicole
Berlin, Arnie
Nicolay, Nils Henrik
Gunashekar, Deepa Darshini
Hagele, Leonard
Lottner, Thomas
Grosu, Anca-Ligia
Bock, Michael
Radiat Oncol2020Journal Article, cited 1 times
Website
Head-Neck-Radiomics-HN1
Radiation Therapy
Magnetic Resonance Imaging (MRI)
Convolutional neural networks (CNN)
Segmentation
BACKGROUND: Automatic tumor segmentation based on Convolutional Neural Networks (CNNs) has been shown to be a valuable tool in treatment planning and clinical decision making. We investigate the influence of 7 MRI input channels of a CNN with respect to the segmentation performance of head&neck cancer. METHODS: Head&neck cancer patients underwent multi-parametric MRI including T2w, pre- and post-contrast T1w, T2*, perfusion (ktrans, ve) and diffusion (ADC) measurements at 3 time points before and during radiochemotherapy. The 7 different MRI contrasts (input channels) and manually defined gross tumor volumes (primary tumor and lymph node metastases) were used to train CNNs for lesion segmentation. A reference CNN with all input channels was compared to individually trained CNNs where one of the input channels was left out to identify which MRI contrast contributes the most to the tumor segmentation task. A statistical analysis was employed to account for random fluctuations in the segmentation performance. RESULTS: The CNN segmentation performance scored up to a Dice similarity coefficient (DSC) of 0.65. The network trained without T2* data generally yielded the worst results, with ΔDSC(GTV-T) = 5.7% for primary tumor and ΔDSC(GTV-Ln) = 5.8% for lymph node metastases compared to the network containing all input channels. Overall, the ADC input channel showed the least impact on segmentation performance, with ΔDSC(GTV-T) = 2.4% for primary tumor and ΔDSC(GTV-Ln) = 2.2% for lymph node metastases. CONCLUSIONS: We developed a method to reduce overall scan times in MRI protocols by prioritizing those sequences that add the most unique information for the task of automatic tumor segmentation. The optimized CNNs could be used to aid in the definition of the GTVs in radiotherapy planning, and the faster imaging protocols will reduce patient scan times, which can increase patient compliance. TRIAL REGISTRATION: The trial was registered retrospectively at the German Register for Clinical Studies (DRKS) under register number DRKS00003830 on August 20th, 2015.
Learning to detect anatomical landmarks of the pelvis in X-rays from arbitrary views
Bier, B.
Goldmann, F.
Zaech, J. N.
Fotouhi, J.
Hegeman, R.
Grupp, R.
Armand, M.
Osgood, G.
Navab, N.
Maier, A.
Unberath, M.
Int J Comput Assist Radiol Surg2019Journal Article, cited 0 times
Website
CT Lymph Nodes
Image registration
PURPOSE: Minimally invasive alternatives are now available for many complex surgeries. These approaches are enabled by the increasing availability of intra-operative image guidance. Yet, fluoroscopic X-rays suffer from projective transformation and thus cannot provide direct views onto anatomy. Surgeons could highly benefit from additional information, such as the anatomical landmark locations in the projections, to support intra-operative decision making. However, detecting landmarks is challenging since the viewing direction changes substantially between views, leading to varying appearance of the same landmark. Therefore, and to the best of our knowledge, view-independent anatomical landmark detection has not been investigated yet. METHODS: In this work, we propose a novel approach to detect multiple anatomical landmarks in X-ray images from arbitrary viewing directions. To this end, a sequential prediction framework based on convolutional neural networks is employed to simultaneously regress all landmark locations. For training, synthetic X-rays are generated with a physically accurate forward model that allows direct application of the trained model to real X-ray images of the pelvis. View invariance is achieved via data augmentation by sampling viewing angles on a spherical segment of 120° × 90°. RESULTS: On synthetic data, a mean prediction error of 5.6 ± 4.5 mm is achieved. Further, we demonstrate that the trained model can be directly applied to real X-rays and show that these detections define correspondences to a respective CT volume, which allows for analytic estimation of the 11-degree-of-freedom projective mapping. CONCLUSION: We present the first tool to detect anatomical landmarks in X-ray images independent of their viewing direction. Access to this information during surgery may benefit decision making and constitutes a first step toward global initialization of 2D/3D registration without the need of calibration. As such, the proposed concept has a strong prospect to facilitate and enhance applications and methods in the realm of image-guided surgery.
BC-MRI-SEG: A Breast Cancer MRI Tumor Segmentation Benchmark
Binary breast cancer tumor segmentation with Magnetic Resonance Imaging (MRI) data is typically trained and evaluated on private medical data, which makes comparing deep learning approaches difficult. We propose a benchmark (BC-MRI-SEG) for binary breast cancer tumor segmentation based on publicly available MRI datasets. The benchmark consists of four datasets in total, where two datasets are used for supervised training and evaluation, and two are used for zero-shot evaluation. Additionally, we compare state-of-the-art (SOTA) approaches on our benchmark and provide an exhaustive list of available public breast cancer MRI datasets. The source code has been made available at https://irulenot.github.io/BC_MRI_SEG_Benchmark.
The Liver Tumor Segmentation Benchmark (LiTS)
Bilic, Patrick
Christ, Patrick
Li, Hongwei Bran
Vorontsov, Eugene
Ben-Cohen, Avi
Kaissis, Georgios
Szeskin, Adi
Jacobs, Colin
Mamani, Gabriel Efrain Humpire
Chartrand, Gabriel
Lohöfer, Fabian
Holch, Julian Walter
Sommer, Wieland
Hofmann, Felix
Hostettler, Alexandre
Lev-Cohain, Naama
Drozdzal, Michal
Amitai, Michal Marianne
Vivanti, Refael
Sosna, Jacob
Ezhov, Ivan
Sekuboyina, Anjany
Navarro, Fernando
Kofler, Florian
Paetzold, Johannes C.
Shit, Suprosanna
Hu, Xiaobin
Lipková, Jana
Rempfler, Markus
Piraud, Marie
Kirschke, Jan
Wiestler, Benedikt
Zhang, Zhiheng
Hülsemeyer, Christian
Beetz, Marcel
Ettlinger, Florian
Antonelli, Michela
Bae, Woong
Bellver, Míriam
Bi, Lei
Chen, Hao
Chlebus, Grzegorz
Dam, Erik B.
Dou, Qi
Fu, Chi-Wing
Georgescu, Bogdan
Giró-i-Nieto, Xavier
Gruen, Felix
Han, Xu
Heng, Pheng-Ann
Hesser, Jürgen
Moltz, Jan Hendrik
Igel, Christian
Isensee, Fabian
Jäger, Paul
Jia, Fucang
Kaluva, Krishna Chaitanya
Khened, Mahendra
Kim, Ildoo
Kim, Jae-Hun
Kim, Sungwoong
Kohl, Simon
Konopczynski, Tomasz
Kori, Avinash
Krishnamurthi, Ganapathy
Li, Fan
Li, Hongchao
Li, Junbo
Li, Xiaomeng
Lowengrub, John
Ma, Jun
Maier-Hein, Klaus
Maninis, Kevis-Kokitsi
Meine, Hans
Merhof, Dorit
Pai, Akshay
Perslev, Mathias
Petersen, Jens
Pont-Tuset, Jordi
Qi, Jin
Qi, Xiaojuan
Rippel, Oliver
Roth, Karsten
Sarasua, Ignacio
Schenk, Andrea
Shen, Zengming
Torres, Jordi
Wachinger, Christian
Wang, Chunliang
Weninger, Leon
Wu, Jianrong
Xu, Daguang
Yang, Xiaoping
Yu, Simon Chun-Ho
Yuan, Yading
Yue, Miao
Zhang, Liping
Cardoso, Jorge
Bakas, Spyridon
Braren, Rickmer
Heinemann, Volker
Pal, Christopher
Tang, An
Kadoury, Samuel
Soler, Luc
van Ginneken, Bram
Greenspan, Hayit
Joskowicz, Leo
Menze, Bjoern
Medical Image Analysis2023Journal Article, cited 612 times
Website
TCGA-LIHC
Segmentation
Liver
Liver tumor
Deep learning
CT
In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018. The image dataset is diverse and contains primary and secondary tumors with varied sizes and appearances with various lesion-to-background levels (hyper-/hypo-dense), created in collaboration with seven hospitals and research institutions. Seventy-five submitted liver and liver tumor segmentation algorithms were trained on a set of 131 computed tomography (CT) volumes and were tested on 70 unseen test images acquired from different patients. We found that not a single algorithm performed best for both liver and liver tumors in the three events. The best liver segmentation algorithm achieved a Dice score of 0.963, whereas, for tumor segmentation, the best algorithms achieved Dice scores of 0.674 (ISBI 2017), 0.702 (MICCAI 2017), and 0.739 (MICCAI 2018). Retrospectively, we performed additional analysis on liver tumor detection and revealed that not all top-performing segmentation algorithms worked well for tumor detection. The best liver tumor detection method achieved a lesion-wise recall of 0.458 (ISBI 2017), 0.515 (MICCAI 2017), and 0.554 (MICCAI 2018), indicating the need for further research. LiTS remains an active benchmark and resource for research, e.g., contributing the liver-related segmentation tasks in http://medicaldecathlon.com/. In addition, both data and online evaluation are accessible via https://competitions.codalab.org/competitions/17094.
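A sketch of how a lesion-wise recall of the kind reported above could be computed from binary masks via connected components; the benchmark's exact lesion-matching rule may differ, and the 0.5 overlap criterion here is illustrative:

```python
import numpy as np
from scipy import ndimage

def lesionwise_recall(pred, ref, min_overlap=0.5):
    """Fraction of reference lesions (3D connected components) whose
    voxels are covered by the prediction above `min_overlap`."""
    labels, n_lesions = ndimage.label(ref)
    if n_lesions == 0:
        return 1.0
    hits = sum(
        np.logical_and(labels == i, pred).sum() / (labels == i).sum() >= min_overlap
        for i in range(1, n_lesions + 1)
    )
    return hits / n_lesions
```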
Enhanced Region Growing for Brain Tumor MR Image Segmentation
Biratu, E. S.
Schwenker, F.
Debelee, T. G.
Kebede, S. R.
Negera, W. G.
Molla, H. T.
J Imaging2021Journal Article, cited 30 times
Website
BRATS 2015
U-Net
brain MRI image
region growing
skull stripping
tumor region
A brain tumor is one of the foremost reasons for the rise in mortality among children and adults. A brain tumor is a mass of tissue that propagates out of control of the normal forces that regulate growth inside the brain. A brain tumor appears when one type of cell changes from its normal characteristics and grows and multiplies abnormally. The unusual growth of cells within the brain or inside the skull, which can be cancerous or non-cancerous, has been a cause of death for adults in developed countries and children in developing countries like Ethiopia. Studies have shown that the region growing algorithm initializes the seed point either manually or semi-manually, which affects the segmentation result. In this paper, we propose an enhanced region-growing algorithm for automatic seed point initialization. The proposed approach's performance was compared with state-of-the-art deep learning algorithms using a common dataset, BRATS2015. In the proposed approach, we applied a thresholding technique to strip the skull from each input brain image. After the skull is stripped, the brain image is divided into 8 blocks. Then, for each block, we computed the mean intensity, and the five blocks with the maximum mean intensities were selected out of the eight. Next, each of these five maxima was used as a seed point for the region growing algorithm separately, yielding five different regions of interest (ROIs) for each skull-stripped input brain image. The five ROIs generated using the proposed approach were evaluated using dice similarity score (DSS), intersection over union (IoU), and accuracy (Acc) against the ground truth (GT), and the best region of interest was selected as the final ROI. Finally, the final ROI was compared with different state-of-the-art deep learning algorithms and region-based segmentation algorithms in terms of DSS. Our proposed approach was validated in three different experimental setups. In the first experimental setup, 15 randomly selected brain images were used for testing, achieving a DSS value of 0.89. In the second and third experimental setups, the proposed approach scored DSS values of 0.90 and 0.80 for 12 randomly selected and 800 brain images, respectively. The average DSS value for the three experimental setups was 0.86.
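A minimal sketch of the automatic seed initialisation described above (a 2-by-4 block grid is assumed here, since the abstract fixes only the number of blocks, not their layout):

```python
import numpy as np

def candidate_seeds(brain_slice, grid=(2, 4), n_seeds=5):
    """Divide a skull-stripped slice into 8 blocks, rank the blocks by
    mean intensity, and return the centres of the five brightest blocks
    as seed points for region growing."""
    h, w = brain_slice.shape
    bh, bw = h // grid[0], w // grid[1]
    blocks = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = brain_slice[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            centre = (i * bh + bh // 2, j * bw + bw // 2)
            blocks.append((block.mean(), centre))
    blocks.sort(key=lambda t: t[0], reverse=True)  # brightest blocks first
    return [centre for _, centre in blocks[:n_seeds]]
```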
Tensor-RT-Based Transfer Learning Model for Lung Cancer Classification
Bishnoi, V.
Goel, N.
J Digit Imaging2023Journal Article, cited 0 times
Website
LIDC-IDRI
Computed Tomography (CT)
Computer Aided Detection (CADe)
Computer Aided Diagnosis (CADx)
DICOM
Lung cancer
Nvidia tensor-RT
Transfer learning
Cancer is a leading cause of death across the globe, in which lung cancer constitutes the maximum mortality rate. Early diagnosis through computed tomography scan imaging helps to identify the stages of lung cancer. Several deep learning-based classification methods have been employed for developing automatic systems for the diagnosis and detection of computed tomography scan lung slices. However, diagnosis based on nodule detection is a challenging task, as it requires manual annotation of nodule regions. Also, these computer-aided systems have not yet achieved the desired performance in real-time lung cancer classification. In the present paper, a high-speed real-time transfer learning-based framework is proposed for the classification of computed tomography lung cancer slices into benign and malignant. The proposed framework comprises three modules: (i) pre-processing and segmentation of lung images using K-means clustering based on cosine distance and morphological operations; (ii) tuning and regularization of the proposed model, named the weighted VGG deep network (WVDN); (iii) model inference in Nvidia tensor-RT during post-processing for deployment in real-time applications. In this study, two pre-trained CNN models were evaluated and compared with the proposed model. All the models were trained on 19,419 computed tomography scan lung slices, obtained from the publicly available Lung Image Database Consortium and Image Database Resource Initiative dataset. The proposed model achieved the best classification metrics: an accuracy of 0.932; precision, recall, and F1 score of 0.93; and a Cohen's kappa score of 0.85. A statistical evaluation of the classification parameters was also performed and achieved a p-value < 0.0001 for the proposed model. The quantitative and statistical results validate the improved performance of the proposed model as compared to state-of-the-art methods. The proposed framework is based on complete computed tomography slices rather than marked annotations and may help in improving clinical diagnosis.
Multiparametric MRI and auto-fixed volume of interest-based radiomics signature for clinically significant peripheral zone prostate cancer
Bleker, J.
Kwee, T. C.
Dierckx, Rajo
de Jong, I. J.
Huisman, H.
Yakar, D.
Eur Radiol2020Journal Article, cited 2 times
Website
Machine Learning
Magnetic Resonance Imaging (MRI)
PROSTATEx
OBJECTIVES: To create a radiomics approach based on multiparametric magnetic resonance imaging (mpMRI) features extracted from an auto-fixed volume of interest (VOI) that quantifies the phenotype of clinically significant (CS) peripheral zone (PZ) prostate cancer (PCa). METHODS: This study included 206 patients with 262 prospectively called mpMRI prostate imaging reporting and data system 3-5 PZ lesions. Gleason scores > 6 were defined as CS PCa. Features were extracted with an auto-fixed 12-mm spherical VOI placed around a pin point in each lesion. The value of dynamic contrast-enhanced imaging (DCE), multivariate feature selection and extreme gradient boosting (XGB) vs. univariate feature selection and random forest (RF), expert-based feature pre-selection, and the addition of image filters was investigated using the training (171 lesions) and test (91 lesions) datasets. RESULTS: The best model with features from T2-weighted (T2-w) + diffusion-weighted imaging (DWI) + DCE had an area under the curve (AUC) of 0.870 (95% CI 0.754-0.980). Removal of DCE features decreased the AUC to 0.816 (95% CI 0.710-0.920), although not significantly (p = 0.119). Multivariate and XGB outperformed univariate and RF (p = 0.028). Expert-based feature pre-selection and image filters had no significant contribution. CONCLUSIONS: The phenotype of CS PZ PCa lesions can be quantified using a radiomics approach based on features extracted from T2-w + DWI using an auto-fixed VOI. Although DCE features improve diagnostic performance, this is not statistically significant. Multivariate feature selection and XGB should be preferred over univariate feature selection and RF. The developed model may be a valuable addition to traditional visual assessment in diagnosing CS PZ PCa. KEY POINTS: * T2-weighted and diffusion-weighted imaging features are essential components of a radiomics model for clinically significant prostate cancer; the addition of dynamic contrast-enhanced imaging does not significantly improve diagnostic performance. * Multivariate feature selection and extreme gradient boosting outperform univariate feature selection and random forest. * The developed radiomics model that extracts multiparametric MRI features with an auto-fixed volume of interest may be a valuable addition to visual assessment in diagnosing clinically significant prostate cancer.
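A minimal sketch of building the auto-fixed 12-mm spherical VOI around a pin point, accounting for anisotropic voxel spacing (function and variable names are illustrative, not the study's code):

```python
import numpy as np

def spherical_voi_mask(shape, centre_voxel, spacing_mm, radius_mm=6.0):
    """Binary mask of a fixed spherical VOI (12 mm diameter by default)
    around a pin-point voxel, honouring anisotropic voxel spacing."""
    zz, yy, xx = np.indices(shape).astype(float)
    dz = (zz - centre_voxel[0]) * spacing_mm[0]
    dy = (yy - centre_voxel[1]) * spacing_mm[1]
    dx = (xx - centre_voxel[2]) * spacing_mm[2]
    return dz ** 2 + dy ** 2 + dx ** 2 <= radius_mm ** 2

# Example: 12-mm VOI in a volume with 3 x 1 x 1 mm voxel spacing.
mask = spherical_voi_mask((64, 128, 128), (32, 64, 64), (3.0, 1.0, 1.0))
```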
Automated nuclear segmentation in head and neck squamous cell carcinoma (HNSCC) pathology reveals relationships between cytometric features and ESTIMATE stromal and immune scores
Blocker, Stephanie J.
Cook, James
Everitt, Jeffrey I.
Austin, Wyatt M.
Watts, Tammara L.
Mowery, Yvonne M.
The American Journal of Pathology2022Journal Article, cited 0 times
Website
CPTAC-HNSCC
Algorithm Development
pathomics
Image classification
Segmentation
Imaging features
The tumor microenvironment (TME) plays an important role in the progression of head and neck squamous cell carcinoma (HNSCC). Currently, pathological assessment of TME is non-standardized and subject to observer bias. Genome-wide transcriptomic approaches to understanding the TME, while less subject to bias, are expensive and not currently part of standard of care for HNSCC. To identify pathology-based biomarkers that correlate with genomic and transcriptomic signatures of TME in HNSCC, cytometric feature maps were generated in a publicly available cohort of patients with HNSCC with available whole-slide tissue images and genomic and transcriptomic phenotyping (N=49). Cytometric feature maps were generated based on whole-slide nuclear detection, using a deep learning algorithm trained for StarDist nuclear segmentation. Cytometric features were measured for each patient and compared to transcriptomic measurements, including Estimation of STromal and Immune cells in MAlignant Tumor tissues using Expression data (ESTIMATE) scores, as well as stemness scores. When corrected for multiple comparisons, one feature (nuclear circularity) demonstrated a significant linear correlation with ESTIMATE stromal score. Two features (nuclear maximum and minimum diameter) correlated significantly with ESTIMATE immune score. Three features (nuclear solidity, nuclear minimum diameter, and nuclear circularity) correlated significantly with transcriptomic stemness score. This study provides preliminary evidence that observer-independent, automated tissue slide analysis can provide insights into the HNSCC TME which correlate with genomic and transcriptomic assessments.
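A minimal sketch of the per-nucleus cytometric features discussed above, computed with scikit-image region properties on a labelled nuclear mask (e.g. StarDist output); the study's exact feature definitions may differ:

```python
import numpy as np
from skimage import measure

def nuclear_shape_features(label_image):
    """Per-nucleus shape features from a labelled nuclear segmentation:
    circularity = 4*pi*area / perimeter^2 (1.0 for a perfect circle),
    with diameters approximated by the fitted-ellipse axis lengths."""
    rows = []
    for region in measure.regionprops(label_image):
        circ = 4.0 * np.pi * region.area / (region.perimeter ** 2 + 1e-9)
        rows.append({
            "label": region.label,
            "circularity": min(circ, 1.0),  # clamp digital-grid overshoot
            "max_diameter": region.major_axis_length,
            "min_diameter": region.minor_axis_length,
        })
    return rows
```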
Unsupervised Data Drift Detection Using Convolutional Autoencoders: A Breast Cancer Imaging Scenario
Imaging AI models are starting to reach real clinical settings, where model drift can happen due to diverse factors. That is why model monitoring must be set up in order to prevent model degradation over time. In this context, we test and propose a data drift detection solution based on unsupervised deep learning for a breast cancer imaging setting. A convolutional autoencoder is trained on a baseline set of expected images, and controlled drifts are introduced into the data in order to test whether a set of metrics extracted from the reconstructions and the latent space is able to distinguish them. We prove that this is a valid tool that manages to detect subtle differences, even within images of this complexity.
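One way such a monitoring criterion could look in code: reconstruction errors of incoming images are compared against the baseline distribution with a two-sample test. This is a hedged sketch, not the paper's implementation; the autoencoder is assumed to expose a Keras-style predict method, and the KS test is one possible drift criterion among several:

```python
import numpy as np
from scipy.stats import ks_2samp

def reconstruction_errors(autoencoder, images):
    """Per-image mean squared reconstruction error of a trained
    convolutional autoencoder (Keras-style predict assumed)."""
    recon = autoencoder.predict(images)
    return np.mean((images - recon) ** 2, axis=(1, 2, 3))

def drift_detected(baseline_errors, incoming_errors, alpha=0.01):
    """Flag drift when the incoming error distribution differs from the
    baseline (two-sample Kolmogorov-Smirnov test)."""
    return ks_2samp(baseline_errors, incoming_errors).pvalue < alpha
```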
Harnessing multimodal data integration to advance precision oncology
Boehm, Kevin M
Khosravi, Pegah
Vanguri, Rami
Gao, Jianjiong
Shah, Sohrab P
Nature Reviews Cancer2022Journal Article, cited 0 times
Website
Breast-MRI-NACT-Pilot
Multi-modal imaging
Machine Learning
Dynamic conformal arcs for lung stereotactic body radiation therapy: A comparison with volumetric-modulated arc therapy
Bokrantz, R.
Wedenberg, M.
Sandwall, P.
J Appl Clin Med Phys2020Journal Article, cited 1 times
Website
4D-Lung
Computed Tomography (CT)
This study constitutes a feasibility assessment of dynamic conformal arc (DCA) therapy as an alternative to volumetric-modulated arc therapy (VMAT) for stereotactic body radiation therapy (SBRT) of lung cancer. The rationale for DCA is lower geometric complexity and hence reduced risk for interplay errors induced by respiratory motion. Forward planned DCA and inverse planned DCA based on segment-weight optimization were compared to VMAT for single arc treatments of five lung patients. Analysis of dose-volume histograms and clinical goal fulfillment revealed that DCA can generate satisfactory and near equivalent dosimetric quality to VMAT, except for complex tumor geometries. Segment-weight optimized DCA provided spatial dose distributions qualitatively similar to those for VMAT. Our results show that DCA, and particularly segment-weight optimized DCA, may be an attractive alternative to VMAT for lung SBRT treatments if the patient anatomy is favorable.
BCDNet: A Deep Learning Model with Improved Convolutional Neural Network for Efficient Detection of Bone Cancer Using Histology Images
Bolleddu Devananda, Rao
K. Madhavi
2024Journal Article, cited 0 times
Osteosarcoma-Tumor-Assessment
Bone Cancer
Computer Aided Detection (CADe)
Deep Learning
Convolutional Neural Network (CNN)
Among the several types of cancer, bone cancer is one of the most lethal in the world. Prevention is better than cure, and early detection of bone cancer enables medical intervention to prevent the spread of malignant cells and help patients recover from the disease. Many medical imaging modalities such as histology, histopathology, radiology, X-rays, MRIs, CT scans, phototherapy, PET and ultrasounds are being used in bone cancer detection research. However, hematoxylin and eosin stained histology images have been found crucial for early diagnosis of bone cancer. Existing Convolutional Neural Network (CNN) based deep learning techniques are suitable for medical image analytics; however, such models are prone to mediocre performance unless configured properly through empirical study. In this article, we propose a deep learning framework for automatic bone cancer detection, together with a CNN variant known as the Bone Cancer Detection Network (BCDNet), which is configured and optimized for the detection of osteosarcoma, a common kind of bone cancer. We also propose an algorithm known as Learning-based Osteosarcoma Detection (LbOD), which exploits the BCDNet model for both binomial and multi-class classification. Osteosarcoma-Tumor-Assessment is the histology dataset used for our empirical study. Our experimental results showed that BCDNet outperforms baseline models with 96.29% accuracy in binary classification and 94.69% accuracy in multi-class classification.
Segmentation of gliomas is essential to aid clinical diagnosis and treatment; however, imaging artifacts and heterogeneous shape complicate this task. In the last few years, researchers have shown the effectiveness of 3D UNets on this problem. They have found success using 3D patches to predict the class label for the center voxel; however, even a single patch-based UNet may miss representations that another UNet could learn. To circumvent this issue, I developed PieceNet, a deep learning model using a novel ensemble of patch-based 3D UNets. In particular, I used uncorrected modalities to train a standard 3D UNet for all label classes as well as one 3D UNet for each individual label class. Initial results indicate this 4-network ensemble is potentially a superior technique to a traditional patch-based 3D UNet on uncorrected images; however, further work needs to be done to allow for more competitive enhancing tumor segmentation. Moreover, I developed a linear probability model using radiomic and non-imaging features that predicts post-surgery survival.
Integration of convolutional neural networks for pulmonary nodule malignancy assessment in a lung cancer classification pipeline
Bonavita, I.
Rafael-Palou, X.
Ceresa, M.
Piella, G.
Ribas, V.
Gonzalez Ballester, M. A.
Comput Methods Programs Biomed2020Journal Article, cited 3 times
Website
Machine Learning
LIDC-IDRI
LUNG
Convolutional Neural Network (CNN)
BACKGROUND AND OBJECTIVE: The early identification of malignant pulmonary nodules is critical for a better lung cancer prognosis and less invasive chemo or radio therapies. Nodule malignancy assessment done by radiologists is extremely useful for planning a preventive intervention but is, unfortunately, a complex, time-consuming and error-prone task. This explains the lack of large datasets containing radiologists' malignancy characterization of nodules. METHODS: In this article, we propose to assess nodule malignancy through 3D convolutional neural networks and to integrate it into an automated end-to-end existing pipeline of lung cancer detection. For training and testing purposes we used independent subsets of the LIDC dataset. RESULTS: Adding the probabilities of nodule malignancy to a baseline lung cancer pipeline improved its F1-weighted score by 14.7%, whereas integrating the malignancy model itself using transfer learning outperformed the baseline prediction by 11.8% in F1-weighted score. CONCLUSIONS: Despite the limited size of the lung cancer datasets, integrating predictive models of nodule malignancy improves the prediction of lung cancer.
CT Colonography: External Clinical Validation of an Algorithm for Computer-assisted Prone and Supine Registration
Boone, Darren J
Halligan, Steve
Roth, Holger R
Hampshire, Tom E
Helbren, Emma
Slabaugh, Greg G
McQuillan, Justine
McClelland, Jamie R
Hu, Mingxing
Punwani, Shonit
Radiology2013Journal Article, cited 5 times
Website
CT COLONOGRAPHY
Image registration
Computer Assisted Detection (CAD)
PURPOSE: To perform external validation of a computer-assisted registration algorithm for prone and supine computed tomographic (CT) colonography and to compare the results with those of an existing centerline method. MATERIALS AND METHODS: All contributing centers had institutional review board approval; participants provided informed consent. A validation sample of CT colonographic examinations of 51 patients with 68 polyps (6-55 mm) was selected from a publicly available, HIPAA compliant, anonymized archive. No patients were excluded because of poor preparation or inadequate distension. Corresponding prone and supine polyp coordinates were recorded, and endoluminal surfaces were registered automatically by using a computer algorithm. Two observers independently scored three-dimensional endoluminal polyp registration success. Results were compared with those obtained by using the normalized distance along the colonic centerline (NDACC) method. Pairwise Wilcoxon signed rank tests were used to compare gross registration error and McNemar tests were used to compare polyp conspicuity. RESULTS: Registration was possible in all 51 patients, and 136 paired polyp coordinates were generated (68 polyps) to test the algorithm. Overall mean three-dimensional polyp registration error (mean +/- standard deviation, 19.9 mm +/- 20.4) was significantly less than that for the NDACC method (mean, 27.4 mm +/- 15.1; P = .001). Accuracy was unaffected by colonic segment (P = .76) or luminal collapse (P = .066). During endoluminal review by two observers (272 matching tasks, 68 polyps, prone to supine and supine to prone coordinates), 223 (82%) polyp matches were visible (120 degrees field of view) compared with just 129 (47%) when the NDACC method was used (P < .001). By using multiplanar visualization, 48 (70%) polyps were visible after scrolling +/- 15 mm in any multiplanar axis compared with 16 (24%) for NDACC (P < .001). CONCLUSION: Computer-assisted registration is more accurate than the NDACC method for mapping the endoluminal surface and matching the location of polyps in corresponding prone and supine CT colonographic acquisitions.
Dataset on renal tumor diameter assessment by multiple observers in normal-dose and low-dose CT
Borgbjerg, J.
Larsen, N. E.
Salte, I. M.
Gronli, N. R.
Klaestrup, E.
Negard, A.
Data Brief2023Journal Article, cited 0 times
Website
C4KC-KiTS
KIDNEY
Computed Tomography (CT)
Inter-observer variability
Renal tumor
Tumor diameter
Low-dose CT
Computed tomography-based active surveillance is increasingly used to manage small renal tumors, regardless of patient age. However, there is an unmet need for decreasing radiation exposure while maintaining the necessary accuracy and reproducibility in radiographic measurements, allowing for detecting even minor changes in renal mass size. In this article, we present supplementary data from a multiobserver investigation. We explored the accuracy and reproducibility of low-dose CT (75% dose reduction) compared to normal-dose CT in assessing maximum axial renal tumor diameter. Open-access CT datasets from the 2019 Kidney and Kidney Tumor Segmentation Challenge were used. A web-based platform for assessing observer performance was used by six radiologist observers to obtain and provide data on tumor diameters and accompanying viewing settings, in addition to key images of each measurement and an interactive module for exploring diameter measurements. These data can serve as a baseline and inform future studies investigating and validating lower-dose CT protocols for active surveillance of small renal masses.
Radiation dose in CT-based active surveillance of small renal masses may be reduced by 75%: A retrospective exploratory multiobserver study
Borgbjerg, Jens
Larsen, Nis Elbrønd
Salte, Ivar Mjåland
Grønli, Niklas Revold
Klæstrup, Elise
Negård, Anne
2023Journal Article, cited 0 times
C4KC-KiTS
Solid Indeterminate Nodules with a Radiological Stability Suggesting Benignity: A Texture Analysis of Computed Tomography Images Based on the Kurtosis and Skewness of the Nodule Volume Density Histogram
Borguezan, Bruno Max
Lopes, Agnaldo José
Saito, Eduardo Haruo
Higa, Claudio
Silva, Aristófanes Corrêa
Nunes, Rodolfo Acatauassú
Pulmonary Medicine2019Journal Article, cited 0 times
Website
Radiomics
Lung
BACKGROUND: The number of incidental findings of pulmonary nodules using imaging methods to diagnose other thoracic or extrathoracic conditions has increased, suggesting the need for in-depth radiological image analyses to identify nodule type and avoid unnecessary invasive procedures. OBJECTIVES:The present study evaluated solid indeterminate nodules with a radiological stability suggesting benignity (SINRSBs) through a texture analysis of computed tomography (CT) images. METHODS: A total of 100 chest CT scans were evaluated, including 50 cases of SINRSBs and 50 cases of malignant nodules. SINRSB CT scans were performed using the same noncontrast enhanced CT protocol and equipment; the malignant nodule data were acquired from several databases. The kurtosis (KUR) and skewness (SKW) values of these tests were determined for the whole volume of each nodule, and the histograms were classified into two basic patterns: peaks or plateaus. RESULTS: The mean (MEN) KUR values of the SINRSBs and malignant nodules were 3.37 ± 3.88 and 5.88 ± 5.11, respectively. The receiver operating characteristic (ROC) curve showed that the sensitivity and specificity for distinguishing SINRSBs from malignant nodules were 65% and 66% for KUR values >6, respectively, with an area under the curve (AUC) of 0.709 (p< 0.0001). The MEN SKW values of the SINRSBs and malignant nodules were 1.73 ± 0.94 and 2.07 ± 1.01, respectively. The ROC curve showed that the sensitivity and specificity for distinguishing malignant nodules from SINRSBs were 65% and 66% for SKW values >3.1, respectively, with an AUC of 0.709 (p < 0.0001). An analysis of the peak and plateau histograms revealed sensitivity, specificity, and accuracy values of 84%, 74%, and 79%, respectively. CONCLUSION: KUR, SKW, and histogram shape can help to noninvasively diagnose SINRSBs but should not be used alone or without considering clinical data.
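A minimal sketch of the KUR/SKW computation and the decision rule implied by the reported ROC thresholds; note that SciPy's kurtosis returns Fisher's (excess) kurtosis by default, which may differ from the paper's definition:

```python
import numpy as np
from scipy.stats import kurtosis, skew

def nodule_histogram_stats(ct_volume, nodule_mask):
    """Kurtosis and skewness of the nodule volume density histogram,
    i.e. of the HU values inside the nodule mask."""
    densities = ct_volume[nodule_mask]
    return kurtosis(densities), skew(densities)

# Decision rule implied by the reported ROC thresholds:
# KUR > 6 or SKW > 3.1 points toward malignancy (toy data below).
rng = np.random.default_rng(0)
kur, skw = nodule_histogram_stats(rng.normal(0, 100, (32, 32, 32)),
                                  rng.random((32, 32, 32)) > 0.8)
print("suspicious:", kur > 6 or skw > 3.1)
```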
A full pipeline to analyze lung histopathology images
Borras Ferris, Lluis
Püttmann, Simon
Marini, Niccolò
Vatrano, Simona
Fragetta, Filippo
Caputo, Alessandro
Ciompi, Francesco
Atzori, Manfredo
Müller, Henning
Tomaszewski, John E.
Ward, Aaron D.
2024Conference Paper, cited 0 times
TCGA-LUSC
TCGA-LUAD
Whole Slide Imaging (WSI)
Cell segmentation
Classification
Pathomics
Self-supervised
Histopathology involves the analysis of tissue samples to diagnose several diseases, such as cancer. The analysis of tissue samples is a time-consuming procedure, manually performed by medical experts, namely pathologists. Computational pathology aims to develop automatic methods to analyze Whole Slide Images (WSI), which are digitized histopathology images, and such methods have shown accurate performance in image analysis. Although the amount of available WSIs is increasing, the capacity of medical experts to manually analyze samples is not expanding proportionally. This paper presents a fully automatic pipeline to classify lung cancer WSIs, considering four classes: Small Cell Lung Cancer (SCLC), non-small cell lung cancer divided into LUng ADenocarcinoma (LUAD) and LUng Squamous cell Carcinoma (LUSC), and normal tissue. The pipeline includes a self-supervised algorithm for pre-training the model and Multiple Instance Learning (MIL) for WSI classification. The model is trained with 2,226 WSIs and obtains an AUC of 0.8558 ± 0.0051 and a weighted f1-score of 0.6537 ± 0.0237 for the 4-class classification on the test set. The capability of the model to generalize was evaluated by testing it on the public The Cancer Genome Atlas (TCGA) dataset for LUAD vs. LUSC classification. In this task, the model obtained an AUC of 0.9433 ± 0.0198 and a weighted f1-score of 0.7726 ± 0.0438.
A SIMPLI (Single-cell Identification from MultiPLexed Images) approach for spatially-resolved tissue phenotyping at single-cell resolution
Bortolomeazzi, M.
Montorsi, L.
Temelkovski, D.
Keddar, M. R.
Acha-Sagredo, A.
Pitcher, M. J.
Basso, G.
Laghi, L.
Rodriguez-Justo, M.
Spencer, J.
Ciccarelli, F. D.
Nat Commun2022Journal Article, cited 1 times
Website
CRC_FFPE-CODEX_CellNeighs
Digital pathology
Pathomics
COLON
Antibodies
Colon/diagnostic imaging/pathology
Diagnostic Imaging/*methods
Humans
Image Processing, Computer-Assisted/*methods
Intestinal Mucosa/diagnostic imaging/pathology
Neoplasms/diagnostic imaging/pathology
Reproducibility of Results
*Single-Cell Analysis
T-Lymphocytes/pathology
Multiplexed imaging technologies enable the study of biological tissues at single-cell resolution while preserving spatial information. Currently, high-dimension imaging data analysis is technology-specific and requires multiple tools, restricting analytical scalability and result reproducibility. Here we present SIMPLI (Single-cell Identification from MultiPLexed Images), a flexible and technology-agnostic software that unifies all steps of multiplexed imaging data analysis. After raw image processing, SIMPLI performs a spatially resolved, single-cell analysis of the tissue slide as well as cell-independent quantifications of marker expression to investigate features undetectable at the cell level. SIMPLI is highly customisable and can run on desktop computers as well as high-performance computing environments, enabling workflow parallelisation for large datasets. SIMPLI produces multiple tabular and graphical outputs at each step of the analysis. Its containerised implementation and minimum configuration requirements make SIMPLI a portable and reproducible solution for multiplexed imaging data analysis. Software is available at "SIMPLI [ https://github.com/ciccalab/SIMPLI ]".
Integration of operator-validated contours in deformable image registration for dose accumulation in radiotherapy
Bosma, L. S.
Ries, M.
Denis de Senneville, B.
Raaymakers, B. W.
Zachiu, C.
Phys Imaging Radiat Oncol2023Journal Article, cited 0 times
Website
TCGA-KIRC
TCGA-KIRP
TCGA-LIHC
Semi-automatic segmentation
Adaptive radiotherapy
Constrained motion estimation
Contour guidance
Deformable dose warping
Deformable image registration
Preconditioning
BACKGROUND AND PURPOSE: Deformable image registration (DIR) is a core element of adaptive radiotherapy workflows, integrating daily contour propagation and/or dose accumulation in their design. Propagated contours are usually manually validated and may be edited, thereby locally invalidating the registration result. This means the registration cannot be used for dose accumulation. In this study we proposed and evaluated a novel multi-modal DIR algorithm that incorporated contour information to guide the registration. This integrates operator-validated contours with the estimated deformation vector field and warped dose. MATERIALS AND METHODS: The proposed algorithm consisted of both a normalized gradient field-based data-fidelity term on the images and an optical flow data-fidelity term on the contours. The Helmholtz-Hodge decomposition was incorporated to ensure anatomically plausible deformations. The algorithm was validated for same- and cross-contrast Magnetic Resonance (MR) image registrations, Computed Tomography (CT) registrations, and CT-to-MR registrations for different anatomies, all based on challenging clinical situations. The contour-correspondence, anatomical fidelity, registration error, and dose warping error were evaluated. RESULTS: The proposed contour-guided algorithm considerably and significantly increased contour overlap, decreasing the mean distance to agreement by a factor of 1.3 to 13.7, compared to the best algorithm without contour-guidance. Importantly, the registration error and dose warping error decreased significantly, by a factor of 1.2 to 2.0. CONCLUSIONS: Our contour-guided algorithm ensured that the deformation vector field and warped quantitative information were consistent with the operator-validated contours. This provides a feasible semi-automatic strategy for spatially correct warping of quantitative information even in difficult and artefacted cases.
Exploring de-anonymization risks in PET imaging: Insights from a comprehensive analysis of 853 patient scans
Bou Hanna, E.
Partarrieu, S.
Berenbaum, A.
Allassonniere, S.
Besson, F. L.
Sci Data2024Journal Article, cited 1 times
Website
FDG-PET-CT-Lesions
Due to their high resolution, anonymized CT scans can be re-identified using face recognition tools. However, little is known regarding PET de-anonymization because of its lower resolution. In this study, we analysed PET/CT scans of 853 patients from a TCIA-restricted dataset (AutoPET). First, we built denoised 2D morphological reconstructions of both PET and CT scans, and then we determined how frequently a PET reconstruction could be matched to the correct CT reconstruction with no other metadata. Using the CT morphological reconstructions as ground truth allows us to frame the problem as a face recognition problem and to quantify our performance using traditional metrics (top-k accuracies) without any use of patient pictures. Using our denoised PET 2D reconstructions, we achieved 72% top-10 accuracy after the realignment of all CTs in the same reference frame, and 71% top-10 accuracy after realignment and mixing within a larger face dataset of 10,168 pictures. This highlights the need to consider face identification issues when dealing with PET imaging data.
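A minimal sketch of the top-k matching accuracy used to quantify re-identification, assuming a precomputed PET-to-CT similarity matrix (higher means more alike; names illustrative):

```python
import numpy as np

def top_k_accuracy(similarity, k=10):
    """Fraction of PET reconstructions whose true CT counterpart (same
    patient index on both axes of `similarity`) ranks among the k most
    similar CT reconstructions."""
    best_k = np.argsort(-similarity, axis=1)[:, :k]
    return np.mean([i in best_k[i] for i in range(similarity.shape[0])])

# With random scores for 853 patients, top-10 accuracy is ~10/853 (~1.2%),
# far below the 71-72% reported above.
rng = np.random.default_rng(0)
print(top_k_accuracy(rng.random((853, 853)), k=10))
```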
High Capacity and Reversible Fragile Watermarking Method for Medical Image Authentication and Patient Data Hiding
Bouarroudj, Riadh
Bellala, Fatma Zohra
Souami, Feryel
Journal of Medical Systems2024Journal Article, cited 0 times
Website
TCGA-LUAD
Development of a multi-task learning V-Net for pulmonary lobar segmentation on CT and application to diseased lungs
AIM: To develop a multi-task learning (MTL) V-Net for pulmonary lobar segmentation on computed tomography (CT) and apply it to diseased lungs. MATERIALS AND METHODS: The described methodology utilises tracheobronchial tree information, giving the algorithm the spatial context needed to define lobar extent more accurately. The method segments lobes and auxiliary tissues in parallel by employing MTL in conjunction with V-Net-attention, a popular convolutional neural network in the imaging realm. Its performance was validated on an external dataset of patients with four distinct lung conditions: severe lung cancer, COVID-19 pneumonitis, collapsed lungs, and chronic obstructive pulmonary disease (COPD), even though the training data included none of these cases. RESULTS: The following Dice scores were achieved on a per-segment basis: normal lungs 0.97, COPD 0.94, lung cancer 0.94, COVID-19 pneumonitis 0.94, and collapsed lung 0.92, all at p < 0.05. CONCLUSION: Despite severe abnormalities, the model performed well at segmenting lobes, demonstrating the benefit of tissue learning. The proposed model is poised for adoption in the clinical setting as a robust tool for radiologists and researchers to define the lobar distribution of lung diseases and aid in disease treatment planning.
Levels Propagation Approach to Image Segmentation: Application to Breast MR Images
Bouchebbah, Fatah
Slimani, Hachem
Journal of Digital Imaging2019Journal Article, cited 0 times
RIDER Breast MRI
Breast
MRI
Accurate segmentation of a breast tumor region is fundamental for treatment. Magnetic resonance imaging (MRI) is a widely used diagnostic tool. In this paper, a new semi-automatic segmentation approach for MRI breast tumor segmentation, called the Levels Propagation Approach (LPA), is introduced. The introduced segmentation approach takes inspiration from tumor propagation and relies on a finite set of nested and non-overlapping levels. LPA has several features: it is highly suitable for parallelization and offers a simple, dynamic way to automate threshold selection. Furthermore, it allows the segmentation to be stopped at any desired limit. In particular, it avoids reaching the breast skin-line region, a known issue that reduces the precision and effectiveness of breast tumor segmentation. The proposed approach has been tested on two clinical datasets, namely the RIDER breast tumor dataset and the CMH-LIMED breast tumor dataset. The experimental evaluations have shown that LPA produces results competitive with some state-of-the-art methods and has acceptable computational complexity.
3D automatic levels propagation approach to breast MRI tumor segmentation
Bouchebbah, Fatah
Slimani, Hachem
Expert Systems with Applications2020Journal Article, cited 0 times
Website
RIDER Breast MRI
Segmentation
Magnetic Resonance Imaging (MRI) is a relevant tool for breast cancer screening, and an accurate 3D segmentation of breast tumors from MRI scans plays a key role in the analysis of the disease. In this manuscript, we propose a novel 3D automatic method for segmenting MRI breast tumors, called the 3D Automatic Levels Propagation Approach (3D-ALPA). The proposed method performs the segmentation automatically in two steps: in the first step, the entire MRI volume is segmented slice by slice, using a new automatic approach called the 2D Automatic Levels Propagation Approach (2D-ALPA), an improved version of a previous semi-automatic approach named the 2D Levels Propagation Approach (2D-LPA). In the second step, the partial segmentations obtained after the application of 2D-ALPA are recombined to rebuild the complete volume(s) of tumor(s). 3D-ALPA has several characteristics, mainly: it is an automatic method that can handle multi-tumor segmentation, and it is easily applicable in the Axial, Coronal, and Sagittal planes; it therefore offers a multi-view representation of the segmented tumor(s). To validate the new 3D-ALPA method, we first performed tests on a 2D private dataset composed of eighteen patients to estimate the accuracy of the new 2D-ALPA in comparison to the previous 2D-LPA. The obtained results favored the proposed 2D-ALPA, showing an improvement in accuracy after integrating automation into the approach. Then, we evaluated the complete 3D-ALPA method on a 3D private dataset constituted of MRI exams of twenty-two patients with real breast tumors of different types, and on the public RIDER dataset. Essentially, 3D-ALPA was evaluated with regard to two main features, segmentation accuracy and running time, considering two kinds of breast tumors: non-enhanced and enhanced tumors. The experimental studies have shown that 3D-ALPA produces better results for both kinds of tumors than a recent, concurrent method in the literature that addresses the same problem.
An Efficient Cascade of U-Net-Like Convolutional Neural Networks Devoted to Brain Tumor Segmentation
Bouchet, Philippe
Deloges, Jean-Baptiste
Canton-Bacara, Hugo
Pusel, Gaëtan
Pinot, Lucas
Elbaz, Othman
Boutry, Nicolas
2023Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Machine Learning
A glioma is a fast-growing and aggressive tumor that starts in the glial cells of the brain. They make up about 30% of all brain tumors, and 80% of all malignant brain tumors. Gliomas are considered to be rare tumors, affecting less than 10,000 people each year, with a 5-year survival rate of 6%. If intercepted at an early stage, they pose no danger; however, providing an accurate diagnosis has proven to be difficult. In this paper, we propose a cascade approach using state-of-the-art Convolutional Neural Networks, in order to maximize accuracy in tumor detection. Various U-Net-like networks have been implemented and tested in order to select the network best suited for this problem.
Enhanced breast mass mammography classification approach based on pre-processing and hybridization of transfer learning models
Boudouh, S. S.
Bouakkaz, M.
J Cancer Res Clin Oncol2023Journal Article, cited 0 times
Website
CBIS-DDSM
Humans
*Neural Networks, Computer
Mammography/methods
Breast/diagnostic imaging
*Breast Neoplasms/diagnostic imaging
Machine Learning
Tumor Microenvironment
Breast cancer
Breast mass detection
Deep learning
Mammography processing
Image denoising
BACKGROUND AND OBJECTIVE: Breast cancer has now surpassed heart disease as the second most prevalent cause of death among women. Breast masses must be accurately identified in mammography images to diagnose breast cancer early, which can significantly increase a patient's chance of survival. However, due to the diversity of breast masses and the complexity of their microenvironment, this remains a significant challenge, and establishing a reliable breast mass detection approach therefore remains an open research issue. Although several machine and deep learning-based approaches have been proposed to address it, their pre-processing strategies and network architectures have been insufficient for breast mass detection in mammogram scans, which directly influences the accuracy of the proposed models. METHODS: Aiming to resolve these issues, we propose a two-stage classification method for breast mass mammography scans. First, we introduce a pre-processing stage divided into three sub-strategies, which include several filters for Region Of Interest (ROI) extraction, noise removal, and image enhancement. Second, we propose a classification stage based on transfer learning techniques for feature extraction and global pooling for classification, instead of standard machine learning algorithms or fully connected layers. Moreover, instead of the traditional single-network fine-tuning for feature extraction, we propose a hybrid model that concatenates two recent pre-trained CNNs to assist the feature extraction phase. RESULTS: Using the CBIS-DDSM dataset, we improved accuracy, sensitivity, and specificity, reaching the highest accuracy of 98.1% with the median filter for noise removal, followed by the Gaussian filter trial with 96% accuracy, while the Wiener filter attained the lowest accuracy of 94.13%. Moreover, global average pooling proved better suited as a classifier in our case than global max pooling. CONCLUSION: The experimental findings demonstrate that the suggested strategy for breast mass detection in mammography can outperform the top-ranked methods currently in use in terms of classification performance.
Breast cancer: toward an accurate breast tumor detection model in mammography using transfer learning techniques
Boudouh, Saida Sarra
Bouakkaz, Mustapha
Multimedia Tools and Applications2023Journal Article, cited 0 times
CMMD
Female breast cancer has now surpassed lung cancer as the most common form of cancer globally. Although several methods exist for breast cancer detection and diagnosis, mammography is the most effective and widely used technique. In this study, our purpose is to propose an accurate breast tumor detection model as the first step toward cancer detection. To guarantee diversity and a larger amount of data, we collected samples from three different databases: the Mammographic Image Analysis Society MiniMammographic (MiniMIAS), the Digital Database for Screening Mammography (DDSM), and the Chinese Mammography Database (CMMD). Several filters were used in the pre-processing phase to extract the Region Of Interest (ROI), remove noise, and enhance images. Next, transfer learning, data augmentation, and Global Pooling (GAP/GMP) techniques were used to avoid overfitting and to increase accuracy. To do so, seven pre-trained Convolutional Neural Networks (CNNs) were modified in several trials with different hyper-parameters to determine which ones are the most suitable for our situation and which criteria influenced our results. The selected pre-trained CNNs were Xception, InceptionV3, ResNet101V2, ResNet50V2, AlexNet, VGG16, and VGG19. The obtained results were satisfactory, especially for ResNet50V2, followed by InceptionV3, reaching the highest accuracies of 99.9% and 99.54%, respectively. Meanwhile, the remaining models achieved great results as well, proving that our approach, from the chosen filters, databases, and pre-trained models through the fine-tuning phase and the global pooling technique, is effective for breast tumor detection. Furthermore, we also managed to determine the most suitable hyper-parameters for each model using our collected dataset.
Glioblastoma Surgery Imaging–Reporting and Data System: Validation and Performance of the Automated Segmentation Task
Simple Summary: Neurosurgical decisions for patients with glioblastoma depend on visual inspection of a preoperative MR scan to determine the tumor characteristics. To avoid subjective estimates and manual tumor delineation, automatic methods and standard reporting are necessary. We compared and extensively assessed the performances of two deep learning architectures on the task of automatic tumor segmentation. A total of 1887 patients from 14 institutions, manually delineated by a human rater, were compared to automated segmentations generated by neural networks. The automated segmentations were in excellent agreement with the manual segmentations, and external validity as well as generalizability were demonstrated. Together with automatic tumor feature computation and standardized reporting, our Glioblastoma Surgery Imaging Reporting And Data System (GSI-RADS) exhibited the potential for more accurate data-driven clinical decisions. The trained models and software are open-source and open-access, enabling comparisons among surgical cohorts, multicenter trials, and patient registries. Abstract: For patients with presumed glioblastoma, essential tumor characteristics are determined from preoperative MR images to optimize the treatment strategy. This procedure is time-consuming and subjective, if performed by crude eyeballing or manually. The standardized GSI-RADS aims to provide neurosurgeons with automatic tumor segmentations to extract tumor features rapidly and objectively. In this study, we improved automatic tumor segmentation and compared the agreement with manual raters, describe the technical details of the different components of GSI-RADS, and determined their speed. Two recent neural network architectures were considered for the segmentation task: nnU-Net and AGU-Net. Two preprocessing schemes were introduced to investigate the tradeoff between performance and processing speed. A summarized description of the tumor feature extraction and standardized reporting process is included. The trained architectures for automatic segmentation and the code for computing the standardized report are distributed as open-source and as open-access software. Validation studies were performed on a dataset of 1594 gadolinium-enhanced T1-weighted MRI volumes from 13 hospitals and 293 T1-weighted MRI volumes from the BraTS challenge. The glioblastoma tumor core segmentation reached a Dice score slightly below 90%, a patientwise F1-score close to 99%, and a 95th percentile Hausdorff distance slightly below 4.0 mm on average with either architecture and the heavy preprocessing scheme. A patient MRI volume can be segmented in less than one minute, and a standardized report can be generated in up to five minutes. The proposed GSI-RADS software showed robust performance on a large collection of MRI volumes from various hospitals and generated results within a reasonable runtime.
PET/CT-Based Radiogenomics Supports KEAP1/NFE2L2 Pathway Targeting for Non–Small Cell Lung Cancer Treated with Curative Radiotherapy
Bourbonne, Vincent
Morjani, Moncef
Pradier, Olivier
Hatt, Mathieu
Jaouen, Vincent
Querellou, Solène
Visvikis, Dimitris
Lucia, François
Schick, Ulrike
Journal of Nuclear Medicine2024Journal Article, cited 0 times
CPTAC-LSCC
CPTAC-LUAD
TCGA-LUAD
TCGA-LUSC
In lung cancer patients, radiotherapy is associated with an increased risk of local relapse (LR) when compared with surgery but with a preferable toxicity profile. The KEAP1/NFE2L2 mutational status (MutKEAP1/NFE2L2) is significantly correlated with LR in patients treated with radiotherapy but is rarely available. Prediction of MutKEAP1/NFE2L2 with noninvasive modalities could help to further personalize each therapeutic strategy. Methods: Based on a public cohort of 770 patients, model RNA (M-RNA) was first developed using continuous gene expression levels to predict MutKEAP1/NFE2L2, resulting in a binary output. The model PET/CT (M-PET/CT) was then built to predict the M-RNA binary output using PET/CT-extracted radiomics features. M-PET/CT was validated on an external cohort of 151 patients treated with curative volumetric modulated arc radiotherapy. Each model was built, internally validated, and evaluated on a separate cohort using a multilayer perceptron network approach. Results: The M-RNA resulted in a C statistic of 0.82 in the testing cohort. With a training cohort of 101 patients, the retained M-PET/CT resulted in an area under the curve of 0.90 (P < 0.001). With a probability threshold of 20% applied to the testing cohort, M-PET/CT achieved a C statistic of 0.7. The same radiomics model was validated on the volumetric modulated arc radiotherapy cohort, as patients were significantly stratified on the basis of their risk of LR with a hazard ratio of 2.61 (P = 0.02). Conclusion: Our approach enables the prediction of MutKEAP1/NFE2L2 using PET/CT-extracted radiomics features and efficiently classifies patients at risk of LR in an external cohort treated with radiotherapy.
Health Vigilance for Medical Imaging Diagnostic Optimization: Automated segmentation of COVID-19 lung infection from CT images
Bourekkadi, S.
Mohamed, Chala
Nsiri, Benayad
Abdelmajid, Soulaymani
Abdelghani, Mokhtari
Brahim, Benaji
Hami, H.
Mokhtari, A.
Slimani, K.
Soulaymani, A.
E3S Web of Conferences2021Journal Article, cited 0 times
Website
CT Images in COVID-19
Python
Computed Tomography (CT)
COVID-19
LUNG
Segmentation
Computer Aided Diagnosis (CADx)
COVID-19 has confronted the world with an unprecedented health crisis; faced with its rapid spread, health systems are called upon to increase their vigilance. It is therefore essential to set up a quick, automated diagnosis that can alleviate pressure on health systems. Many techniques are used to diagnose COVID-19, including imaging techniques such as computed tomography (CT). In this paper, we present an automatic method for COVID-19 lung infection segmentation from CT images that can be integrated into a decision support system for the diagnosis of COVID-19. To achieve this goal, we turned to new techniques based on artificial intelligence, in particular deep convolutional neural networks, and we focus on the most popular architectures used in the medical imaging community, based on encoder-decoder models. We use an open-access data collection for artificial intelligence COVID-19 CT segmentation and classification as our dataset; the proposed model is implemented in the Keras framework in Python. A short description of the model, training, validation, and predictions is given, and the results are compared with existing labeled data. Testing our trained model on new images, we obtained an area under the ROC curve of 0.884 when comparing predictions with manual expert segmentations. Finally, an overview of future work is given, including integration of the proposed model into a homogeneous medical imaging framework for clinical use.
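The reported evaluation compares predicted infection maps against expert labels via the area under the ROC curve; a minimal sketch of that comparison, assuming a voxel-wise probability map and a binary expert mask (random stand-ins here):

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    prob_map = rng.random((256, 256))                         # model output in [0, 1]
    expert_mask = (rng.random((256, 256)) > 0.7).astype(int)  # manual segmentation

    # Voxel-wise AUC of the predicted map against the expert segmentation
    auc = roc_auc_score(expert_mask.ravel(), prob_map.ravel())
    print(f"AUC vs. expert segmentation: {auc:.3f}")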
Using Separated Inputs for Multimodal Brain Tumor Segmentation with 3D U-Net-like Architectures
The work presented in this paper addresses the MICCAI BraTS 2019 challenge devoted to brain tumor segmentation using magnetic resonance images. For each task of the challenge, we proposed and submitted for evaluation an original method. For the tumor segmentation task (Task 1), our convolutional neural network is based on a variant of the U-Net architecture of Ronneberger et al. with two modifications: first, we separate the four convolution parts to decorrelate the weights corresponding to each modality, and second, we provide volumes of size 240×240×3 as inputs to these convolution parts. This way, we take advantage of the 3D aspect of the input signal, and we do not use the same weights for separate inputs. For the overall survival task (Task 2), we compute explainable features and use a kernel PCA embedding followed by a Random Forest classifier to build a predictor with very few training samples. For the uncertainty estimation task (Task 3), we introduce and compare lightweight methods based on simple principles which can be applied to any segmentation approach. The overall performance of each of our contributions is respectable given their low computational requirements for both training and testing.
A CT-based transfer learning approach to predict NSCLC recurrence: The added-value of peritumoral region
Bove, S.
Fanizzi, A.
Fadda, F.
Comes, M. C.
Catino, A.
Cirillo, A.
Cristofaro, C.
Montrone, M.
Nardone, A.
Pizzutilo, P.
Tufaro, A.
Galetta, D.
Massafra, R.
PLoS One2023Journal Article, cited 0 times
Website
NSCLC Radiogenomics
*Carcinoma
Non-Small-Cell Lung/diagnostic imaging/genetics
*Lung Neoplasms/diagnostic imaging/genetics
Tomography
X-Ray Computed/methods
Machine Learning
Non-small cell lung cancer (NSCLC) represents 85% of all new lung cancer diagnoses and presents a high recurrence rate after surgery. Thus, an accurate prediction of recurrence risk in NSCLC patients at diagnosis could be essential to assign high-risk patients to more aggressive medical treatments. In this manuscript, we apply a transfer learning approach to predict recurrence in NSCLC patients, exploiting only data acquired during the screening phase. Specifically, we used a public radiogenomic dataset of NSCLC patients having a primary tumor CT image and clinical information. Starting from the CT slice containing the tumor with maximum area, we considered three different dilation sizes to identify three Regions of Interest (ROIs): CROP (without dilation), CROP 10 and CROP 20. Then, from each ROI, we extracted radiomic features by means of different pre-trained CNNs. These features were combined with clinical information, and a Support Vector Machine classifier was trained to predict NSCLC recurrence. The classification performances of the devised models were finally evaluated on both the hold-out training and hold-out test sets, into which the original sample had previously been divided. The experimental results showed that the model obtained analyzing CROP 20 images, which are the ROIs containing more peritumoral area, achieved the best performances on both the hold-out training set, with an AUC of 0.73, an Accuracy of 0.61, a Sensitivity of 0.63, and a Specificity of 0.60, and on the hold-out test set, with an AUC value of 0.83, an Accuracy value of 0.79, a Sensitivity value of 0.80, and a Specificity value of 0.78. The proposed model represents a promising procedure for early prediction of recurrence risk in NSCLC patients.
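A minimal sketch of the transfer-learning pipeline outlined above: a frozen pre-trained CNN pools each ROI into a feature vector, which is concatenated with clinical variables and fed to an SVM. The backbone choice, array shapes, and random stand-in data are assumptions for illustration only.

    import numpy as np
    import tensorflow as tf
    from sklearn.svm import SVC

    # Frozen pre-trained backbone; pooling="avg" yields one vector per ROI
    extractor = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet", pooling="avg",
        input_shape=(224, 224, 3))

    rois = np.random.rand(40, 224, 224, 3).astype("float32") * 255.0  # stand-in CROP 20 ROIs
    deep_feats = extractor.predict(
        tf.keras.applications.resnet50.preprocess_input(rois))

    clinical = np.random.rand(40, 5)      # stand-in clinical variables
    labels = np.random.randint(0, 2, 40)  # recurrence yes/no

    # Concatenate deep and clinical features, then train the SVM classifier
    X = np.hstack([deep_feats, clinical])
    clf = SVC(probability=True).fit(X, labels)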
Radiogenomics of Clear Cell Renal Cell Carcinoma: Associations Between mRNA-Based Subtyping and CT Imaging Features
Bowen, Lan
Xiaojing, Li
Academic Radiology2018Journal Article, cited 0 times
Website
TCGA_RCC
clear cell renal cell carcinoma
PBRM1
BAP1
SETD2
JARID1C
PTEN
Singular value decomposition using block least mean square method for image denoising and compression
Image denoising is a well-documented part of image processing. It has always posed a problem for researchers, and there is no dearth of proposed solutions. Obtaining a denoised image that remains perfectly faithful to the original after processing is a goal that has long been chased. In this paper, we combine the block least mean square (BLMS) algorithm, which maximizes the peak signal-to-noise ratio (PSNR), with singular value decomposition (SVD), so as to achieve results that bring us closer to our aim of perfect reconstruction. The results showed that the combination of these methods provides easy computation coupled with efficiency, and as such is an effective way of approaching the problem.
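A toy NumPy sketch of the SVD half of this pipeline (the adaptive BLMS filtering step is omitted): denoising by truncating the singular value spectrum and scoring the result with PSNR. The synthetic image, noise level, and rank are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    clean = np.outer(np.sin(np.linspace(0, 3, 128)), np.cos(np.linspace(0, 3, 128)))
    noisy = clean + 0.1 * rng.standard_normal(clean.shape)

    # Truncated SVD: keep the k largest singular values, discarding noise-dominated ones
    U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
    k = 8
    denoised = (U[:, :k] * s[:k]) @ Vt[:k, :]

    # PSNR of the reconstruction against the clean reference
    mse = np.mean((denoised - clean) ** 2)
    psnr = 10 * np.log10(np.ptp(clean) ** 2 / mse)
    print(f"PSNR: {psnr:.2f} dB")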
Classifying the Acquisition Sequence for Brain MRIs Using Neural Networks on Single Slices
Background: Neural networks for analyzing MRIs are oftentimes trained on particular combinations of perspectives and acquisition sequences. Since real-world data are less structured and do not follow a standard denomination of acquisition sequences, this impedes the transition from deep learning research to clinical application. The purpose of this study is therefore to assess the feasibility of classifying the acquisition sequence from a single MRI slice using convolutional neural networks. Methods: A total of 113 MRI slices from 52 patients were used in a transfer learning approach to train three convolutional neural networks of different complexities to predict the acquisition sequence, while 27 slices were used for internal validation. The model then underwent external validation on 600 slices from 273 patients belonging to one of four classes (T1-weighted without contrast enhancement, T1-weighted with contrast enhancement, T2-weighted, and diffusion-weighted). Categorical accuracy was noted, and the results of the predictions for the validation set are provided with confusion matrices. Results: The neural networks achieved a categorical accuracy of 0.79, 0.81, and 0.84 on the external validation data. The implementation of Grad-CAM showed no clear pattern of focus except for T2-weighted slices, where the network focused on areas containing cerebrospinal fluid. Conclusion: Automatically classifying the acquisition sequence using neural networks seems feasible and could be used to facilitate the automatic labelling of MRI data.
Radiomics and deep learning methods for the prediction of 2-year overall survival in LUNG1 dataset
In this study, we tested and compared radiomics and deep learning-based approaches on the public LUNG1 dataset for the prediction of 2-year overall survival (OS) in non-small cell lung cancer patients. Radiomic features were extracted from the gross tumor volume using Pyradiomics, while deep features were extracted from bi-dimensional tumor slices by a convolutional autoencoder. Both radiomic and deep features were fed to 24 different pipelines formed by the combination of four feature selection/reduction methods and six classifiers. Direct classification through convolutional neural networks (CNNs) was also performed. Each approach was investigated with and without the inclusion of clinical parameters. The maximum area under the receiver operating characteristic curve on the test set improved from 0.59, obtained for the baseline clinical model, to 0.67 +/- 0.03, 0.63 +/- 0.03 and 0.67 +/- 0.02 for models based on radiomic features, deep features, and their combination, and to 0.64 +/- 0.04 for direct CNN classification. Despite the high number of pipelines and approaches tested, results were comparable and in line with previous works, hence confirming that it is challenging to extract further imaging-based information from the LUNG1 dataset for the prediction of 2-year OS.
Association of Peritumoral Radiomics With Tumor Biology and Pathologic Response to Preoperative Targeted Therapy for HER2 (ERBB2)-Positive Breast Cancer
Braman, Nathaniel
Prasanna, Prateek
Whitney, Jon
Singh, Salendra
Beig, Niha
Etesami, Maryam
Bates, David D. B.
Gallagher, Katherine
Bloch, B. Nicolas
Vulchi, Manasa
Turk, Paulette
Bera, Kaustav
Abraham, Jame
Sikov, William M.
Somlo, George
Harris, Lyndsay N.
Gilmore, Hannah
Plecha, Donna
Varadan, Vinay
Madabhushi, Anant
JAMA Netw Open2019Journal Article, cited 0 times
Website
Radiogenomics
TCGA-BRCA
Importance: There has been significant recent interest in understanding the utility of quantitative imaging to delineate breast cancer intrinsic biological factors and therapeutic response. No clinically accepted biomarkers are as yet available for estimation of response to human epidermal growth factor receptor 2 (currently known as ERBB2, but referred to as HER2 in this study)–targeted therapy in breast cancer.
Objective: To determine whether imaging signatures on clinical breast magnetic resonance imaging (MRI) could noninvasively characterize HER2-positive tumor biological factors and estimate response to HER2-targeted neoadjuvant therapy.
Design, Setting, and Participants: In a retrospective diagnostic study encompassing 209 patients with breast cancer, textural imaging features extracted within the tumor and annular peritumoral tissue regions on MRI were examined as a means to identify increasingly granular breast cancer subgroups relevant to therapeutic approach and response. First, among a cohort of 117 patients who received an MRI prior to neoadjuvant chemotherapy (NAC) at a single institution from April 27, 2012, through September 4, 2015, imaging features that distinguished HER2+ tumors from other receptor subtypes were identified. Next, among a cohort of 42 patients with HER2+ breast cancers with available MRI and RNaseq data accumulated from a multicenter, preoperative clinical trial (BrUOG 211B), a signature of the response-associated HER2-enriched (HER2-E) molecular subtype within HER2+ tumors (n = 42) was identified. The association of this signature with pathologic complete response was explored in 2 patient cohorts from different institutions, where all patients received HER2-targeted NAC (n = 28, n = 50). Finally, the association between significant peritumoral features and lymphocyte distribution was explored in patients within the BrUOG 211B trial who had corresponding biopsy hematoxylin-eosin–stained slide images. Data analysis was conducted from January 15, 2017, to February 14, 2019.
Main Outcomes and Measures: Evaluation of imaging signatures by the area under the receiver operating characteristic curve (AUC) in identifying HER2+ molecular subtypes and distinguishing pathologic complete response (ypT0/is) to NAC with HER2-targeting.
Results: In the 209 patients included (mean [SD] age, 51.1 [11.7] years), features from the peritumoral regions better discriminated HER2-E tumors (maximum AUC, 0.85; 95% CI, 0.79-0.90; 9-12 mm from the tumor) compared with intratumoral features (AUC, 0.76; 95% CI, 0.69-0.84). A classifier combining peritumoral and intratumoral features identified the HER2-E subtype (AUC, 0.89; 95% CI, 0.84-0.93) and was significantly associated with response to HER2-targeted therapy in both validation cohorts (AUC, 0.80; 95% CI, 0.61-0.98 and AUC, 0.69; 95% CI, 0.53-0.84). Features from the 0- to 3-mm peritumoral region were significantly associated with the density of tumor-infiltrating lymphocytes (R2 = 0.57; 95% CI, 0.39-0.75; P = .002).
Conclusions and Relevance: A combination of peritumoral and intratumoral characteristics appears to identify intrinsic molecular subtypes of HER2+ breast cancers from imaging, offering insights into immune response within the peritumoral environment and suggesting potential benefit for treatment guidance.
The relationship between radiomics and pathomics in Glioblastoma patients: Preliminary results from a cross-scale association study
Brancato, Valentina
Cavaliere, Carlo
Garbino, Nunzia
Isgrò, Francesco
Salvatore, Marco
Aiello, Marco
Frontiers in Oncology2022Journal Article, cited 0 times
CPTAC-GBM
Glioblastoma multiforme (GBM) typically exhibits substantial intratumoral heterogeneity at both microscopic and radiological resolution scales. Diffusion Weighted Imaging (DWI) and dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) are two functional MRI techniques that are commonly employed in the clinic for the assessment of GBM tumor characteristics. This work presents initial results aiming to determine whether radiomics features extracted from preoperative ADC maps and post-contrast T1 (T1C) images are associated with pathomic features arising from H&E digitized pathology images. 48 patients from the publicly available CPTAC-GBM database, for which both radiology and pathology images were available, were involved in the study. 91 radiomics features were extracted from ADC maps and post-contrast T1 images using PyRadiomics. 65 pathomic features were extracted from cell detection measurements from H&E images. Moreover, 91 features were extracted from cell density maps of H&E images at four different resolutions. Radiopathomic associations were evaluated by means of Spearman's correlation (ρ) and factor analysis. P-values were adjusted for multiple comparisons by using a false discovery rate adjustment. Significant cross-scale associations were identified between pathomics and ADC, both considering features (n = 186, 0.45 < ρ < 0.74 in absolute value) and factors (n = 5, 0.48 < ρ < 0.54 in absolute value). Significant but fewer ρ values were found concerning the association between pathomics and radiomics features (n = 53, 0.5 < ρ < 0.65 in absolute value) and factors (n = 2, ρ = 0.63 and ρ = 0.53 in absolute value). The results of this study suggest that cross-scale associations may exist between digital pathology and ADC and T1C imaging. This can be useful not only to improve the knowledge concerning GBM intratumoral heterogeneity, but also to strengthen the role of the radiomics approach and its validation in clinical practice as a "virtual biopsy", introducing new insights for omics integration toward a personalized medicine approach.
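A sketch of the statistical core of this radiopathomic analysis: pairwise Spearman correlations between the two feature sets with a Benjamini-Hochberg false discovery rate adjustment (BH is an assumption; the paper only specifies an FDR adjustment). The feature matrices are random stand-ins with the stated dimensions.

    import numpy as np
    from scipy.stats import spearmanr
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(0)
    radiomics = rng.random((48, 91))   # stand-in: 91 ADC radiomic features, 48 patients
    pathomics = rng.random((48, 65))   # stand-in: 65 pathomic features

    # Pairwise Spearman rho and p-value for every radiomic/pathomic feature pair
    pvals = []
    for i in range(radiomics.shape[1]):
        for j in range(pathomics.shape[1]):
            rho, p = spearmanr(radiomics[:, i], pathomics[:, j])
            pvals.append(p)

    # False discovery rate adjustment over all tested pairs
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
    print(f"{reject.sum()} significant radiopathomic associations")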
Impact of radiogenomics in esophageal cancer on clinical outcomes: A pilot study
Brancato, Valentina
Garbino, Nunzia
Mannelli, Lorenzo
Aiello, Marco
Salvatore, Marco
Franzese, Monica
Cavaliere, Carlo
2021Journal Article, cited 0 times
TCGA-ESCA
BACKGROUND: Esophageal cancer (ESCA) is the sixth most common malignancy in the world, and its incidence is rapidly increasing. Recently, several microRNAs (miRNAs) and messenger RNA (mRNA) targets were evaluated as potential biomarkers and regulators of epigenetic mechanisms involved in early diagnosis. In addition, computed tomography (CT) radiomic studies on ESCA improved the early stage identification and the prediction of response to treatment. Radiogenomics provides clinically useful prognostic predictions by linking molecular characteristics such as gene mutations and gene expression patterns of malignant tumors with medical images and could provide more opportunities in the management of patients with ESCA.
AIM: To explore the combination of CT radiomic features and molecular targets associated with clinical outcomes for characterization of ESCA patients.
METHODS: A total of 15 patients with diagnosed ESCA were included in this study; their CT imaging data were extracted from The Cancer Imaging Archive and their gene expression data from The Cancer Genome Atlas. Cancer stage, history of significant alcohol consumption and body mass index (BMI) were considered as clinical outcomes. Radiomic analysis was performed on CT images acquired after injection of contrast medium. In total, 1302 radiomics features were extracted from three-dimensional regions of interest by using PyRadiomics. Feature selection was performed using a correlation filter based on Spearman's correlation (ρ) and the Wilcoxon rank-sum test with respect to clinical outcomes. Radiogenomic analysis involved ρ analysis between radiomic features associated with clinical outcomes and transcriptomic signatures consisting of eight N6-methyladenosine RNA methylation regulators and five up-regulated miRNAs. The significance level was set at P < 0.05.
RESULTS: In total, 25, five and 29 radiomic features survived feature selection, considering stage, alcohol history and BMI as clinical outcomes, respectively. Radiogenomic analysis with stage as the clinical outcome revealed that six of the eight mRNA regulators and two of the five up-regulated miRNAs were significantly correlated with ten and three of the 25 selected radiomic features, respectively (-0.61 < ρ < -0.60 and 0.53 < ρ < 0.69, P < 0.05). Assuming alcohol history as the clinical outcome, no correlation was found between the five selected radiomic features and the mRNA regulators, while a significant correlation was found between one radiomic feature and three up-regulated miRNAs (ρ = -0.56, ρ = -0.64 and ρ = 0.61, P < 0.05). Radiogenomic analysis with BMI as the clinical outcome revealed that four mRNA regulators and one up-regulated miRNA were significantly correlated with 10 and two radiomic features, respectively (-0.67 < ρ < -0.54 and 0.53 < ρ < 0.71, P < 0.05).
CONCLUSION: Our study revealed interesting relationships between the expression of eight N6-methyladenosine RNA regulators, as well as five up-regulated miRNAs, and CT radiomic features associated with clinical outcomes of ESCA patients.
A volumetric technique for fossil body mass estimation applied to Australopithecus afarensis
Brassey, Charlotte A
O'Mahoney, Thomas G
Chamberlain, Andrew T
Sellers, William I
Journal of human evolution2018Journal Article, cited 3 times
Website
NAF-Prostate
Australopithecus Afarensis
Anthropology
Constructing 3D-Printable CAD Models of Prostates from MR Images
This paper describes the development of a procedure to generate patient-specific, three-dimensional (3D) solid models of prostates (and related anatomy) from magnetic resonance (MR) images. The 3D models are rendered in STL file format, which can be physically printed or visualized on a holographic display system. An example is presented in which a 3D model is printed following this procedure.
COVID-19 detection on Chest X-ray images: A comparison of CNN architectures and ensembles
Breve, Fabricio Aparecido
Expert Systems with Applications2022Journal Article, cited 0 times
MIDRC-RICORD-1C
COVID-19 quickly became a global pandemic after only four months of its first detection. It is crucial to detect this disease as soon as possible to decrease its spread. The use of chest X-ray (CXR) images became an effective screening strategy, complementary to the reverse transcription-polymerase chain reaction (RT-PCR). Convolutional neural networks (CNNs) are often used for automatic image classification and they can be very useful in CXR diagnostics. In this paper, 21 different CNN architectures are tested and compared in the task of identifying COVID-19 in CXR images. They were applied to the COVIDx8B dataset, a large COVID-19 dataset with 16,352 CXR images coming from patients of at least 51 countries. Ensembles of CNNs were also employed and they showed better efficacy than individual instances. The best individual CNN instance results were achieved by DenseNet169, with an accuracy of 98.15% and an F1 score of 98.12%. These were further increased to 99.25% and 99.24%, respectively, through an ensemble with five instances of DenseNet169. These results are higher than those obtained in recent works using the same dataset.
An investigation of the conformity, feasibility, and expected clinical benefits of multiparametric MRI-guided dose painting radiotherapy in glioblastoma
Brighi, Caterina
Keall, Paul J
Holloway, Lois C
Walker, Amy
Whelan, Brendan
de Witt Hamer, Philip C
Verburg, Niels
Aly, Farhannah
Chen, Cathy
Koh, Eng-Siew
Waddington, David E J
Neuro-Oncology Advances2022Journal Article, cited 0 times
QIN GBM Treatment Response
Background: New technologies developed to improve survival outcomes for glioblastoma (GBM) continue to have limited success. Recently, image-guided dose painting (DP) radiotherapy has emerged as a promising strategy to increase local control rates. In this study, we evaluate the practical application of a multiparametric MRI model of glioma infiltration for DP radiotherapy in GBM by measuring its conformity, feasibility, and expected clinical benefits against standard of care treatment.
Methods: Maps of tumor probability were generated from perfusion/diffusion MRI data from 17 GBM patients via a previously developed model of GBM infiltration. Prescriptions for DP were linearly derived from tumor probability maps and used to develop dose optimized treatment plans. Conformity of DP plans to dose prescriptions was measured via a quality factor. Feasibility of DP plans was evaluated by dose metrics to target volumes and critical brain structures. Expected clinical benefit of DP plans was assessed by tumor control probability. The DP plans were compared to standard radiotherapy plans.
Results: The conformity of the DP plans was >90%. Compared to the standard plans, DP (1) did not affect dose delivered to organs at risk; (2) increased mean and maximum dose and improved minimum dose coverage for the target volumes; (3) reduced minimum dose within the radiotherapy treatment margins; (4) improved local tumor control probability within the target volumes for all patients.
Conclusions: A multiparametric MRI model of GBM infiltration can enable conformal, feasible, and potentially beneficial dose painting radiotherapy plans.
Comparative study of preclinical mouse models of high-grade glioma for nanomedicine research: the importance of reproducing blood-brain barrier heterogeneity
Brighi, C.
Reid, L.
Genovesi, L. A.
Kojic, M.
Millar, A.
Bruce, Z.
White, A. L.
Day, B. W.
Rose, S.
Whittaker, A. K.
Puttick, S.
THERANOSTICS2020Journal Article, cited 32 times
Website
The clinical translation of new nanoparticle-based therapies for high-grade glioma (HGG) remains extremely poor. This has partly been due to the lack of suitable preclinical mouse models capable of replicating the complex characteristics of recurrent HGG (rHGG), namely the heterogeneous structural and functional characteristics of the blood-brain barrier (BBB). The goal of this study is to compare the characteristics of the tumor BBB of rHGG with two different mouse models of HGG, the ubiquitously used U87 cell line xenograft model and a patient-derived cell line WK1 xenograft model, in order to assess their suitability for nanomedicine research. Method: Structural MRI was used to assess the extent of BBB opening in mouse models with a fully developed tumor, and dynamic contrast enhanced MRI was used to obtain values of BBB permeability in contrast enhancing tumor. H&E and immunofluorescence staining were used to validate results obtained from the in vivo imaging studies. Results: The extent of BBB disruption and permeability in the contrast enhancing tumor was significantly higher in the U87 model than in rHGG. These values in the WK1 model are similar to those of rHGG. The U87 model is not infiltrative, has an entirely abnormal and leaky vasculature and it is not of glial origin. The WK1 model infiltrates into the non-neoplastic brain parenchyma, it has both regions with intact BBB and regions with leaky BBB and remains of glial origin. Conclusion: The WK1 mouse model more accurately reproduces the extent of BBB disruption, the level of BBB permeability and the histopathological characteristics found in rHGG patients than the U87 mouse model, and is therefore a more clinically relevant model for preclinical evaluations of emerging nanoparticle-based therapies for HGG.
Repeatability of radiotherapy dose-painting prescriptions derived from a multiparametric magnetic resonance imaging model of glioblastoma infiltration
Brighi, C.
Verburg, N.
Koh, E. S.
Walker, A.
Chen, C.
Pillay, S.
de Witt Hamer, P. C.
Aly, F.
Holloway, L. C.
Keall, P. J.
Waddington, D. E. J.
Phys Imaging Radiat Oncol2022Journal Article, cited 0 times
Website
QIN GBM Treatment Response
Background and purpose: Glioblastoma (GBM) patients have a dismal prognosis. Tumours typically recur within months of surgical resection and post-operative chemoradiation. Multiparametric magnetic resonance imaging (mpMRI) biomarkers promise to improve GBM outcomes by identifying likely regions of infiltrative tumour in tumour probability (TP) maps. These regions could be treated with escalated dose via dose-painting radiotherapy to achieve higher rates of tumour control. Crucial to the technical validation of dose-painting using imaging biomarkers is the repeatability of the derived dose prescriptions. Here, we quantify repeatability of dose-painting prescriptions derived from mpMRI. Materials and methods: TP maps were calculated with a clinically validated model that linearly combined apparent diffusion coefficient (ADC) and relative cerebral blood volume (rBV) or ADC and relative cerebral blood flow (rBF) data. Maps were developed for 11 GBM patients who received two mpMRI scans separated by a short interval prior to chemoradiation treatment. A linear dose mapping function was applied to obtain dose-painting prescription (DP) maps for each session. Voxel-wise and group-wise repeatability metrics were calculated for parametric, TP and DP maps within radiotherapy margins. Results: DP maps derived from mpMRI were repeatable between imaging sessions (ICC > 0.85). ADC maps showed higher repeatability than rBV and rBF maps (Wilcoxon test, p = 0.001). TP maps obtained from the combination of ADC and rBF were the most stable (median ICC: 0.89). Conclusions: Dose-painting prescriptions derived from a mpMRI model of tumour infiltration have a good level of repeatability and can be used to generate reliable dose-painting plans for GBM patients.
Fitting Segmentation Networks on Varying Image Resolutions Using Splatting
Data used in image segmentation are not always defined on the same grid. This is particularly true for medical images, where the resolution, field-of-view and orientation can differ across channels and subjects. Images and labels are therefore commonly resampled onto the same grid, as a pre-processing step. However, the resampling operation introduces partial volume effects and blurring, thereby changing the effective resolution and reducing the contrast between structures. In this paper we propose a splat layer, which automatically handles resolution mismatches in the input data. This layer pushes each image onto a mean space where the forward pass is performed. As the splat operator is the adjoint to the resampling operator, the mean-space prediction can be pulled back to the native label space, where the loss function is computed. Thus, the need for explicit resolution adjustment using interpolation is removed. We show on two publicly available datasets, with simulated and real multi-modal magnetic resonance images, that this model improves segmentation results compared to resampling as a pre-processing step.
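A toy 1-D NumPy illustration (not the paper's implementation) of the key property used here: splatting, the scatter-add push of values onto a grid, is the adjoint of resampling, the gather pull, so <pull(x), y> = <x, push(y)>.

    import numpy as np

    tgt_coords = np.linspace(0, 7, 5)        # mean-space sample locations
    idx = np.round(tgt_coords).astype(int)   # nearest source voxel per sample

    def pull(image):                         # resample: gather source values
        return image[idx]

    def push(values, n=8):                   # splat: scatter-add values back
        out = np.zeros(n)
        np.add.at(out, idx, values)
        return out

    # Adjoint identity holds up to floating-point error
    rng = np.random.default_rng(0)
    x, y = rng.random(8), rng.random(5)
    assert np.isclose(pull(x) @ y, x @ push(y))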
Cancer as a Model System for Testing Metabolic Scaling Theory
Brummer, Alexander B.
Savage, Van M.
Frontiers in Ecology and Evolution2021Journal Article, cited 0 times
Website
NSCLC Radiogenomics
LUNG
Biological allometries, such as the scaling of metabolism to mass, are hypothesized to result from natural selection to maximize how vascular networks fill space yet minimize internal transport distances and resistance to blood flow. Metabolic scaling theory argues two guiding principles—conservation of fluid flow and space-filling fractal distributions—describe a diversity of biological networks and predict how the geometry of these networks influences organismal metabolism. Yet, mostly absent from past efforts are studies that directly, and independently, measure metabolic rate from respiration and vascular architecture for the same organ, organism, or tissue. Lack of these measures may lead to inconsistent results and conclusions about metabolism, growth, and allometric scaling. We present simultaneous and consistent measurements of metabolic scaling exponents from clinical images of lung cancer, serving as a first-of-its-kind test of metabolic scaling theory, and identifying potential quantitative imaging biomarkers indicative of tumor growth. We analyze data for 535 clinical PET-CT scans of patients with non-small cell lung carcinoma to establish the presence of metabolic scaling between tumor metabolism and tumor volume. Furthermore, we use computer vision and mathematical modeling to examine predictions of metabolic scaling based on the branching geometry of the tumor-supplying blood vessel networks in a subset of 56 patients diagnosed with stage II-IV lung cancer. Examination of the scaling of maximum standard uptake value with metabolic tumor volume, and metabolic tumor volume with gross tumor volume, yield metabolic scaling exponents of 0.64 (0.20) and 0.70 (0.17), respectively. We compare these to the value of 0.85 (0.06) derived from the geometric scaling of the tumor-supplying vasculature. These results: (1) inform energetic models of growth and development for tumor forecasting; (2) identify imaging biomarkers in vascular geometry related to blood volume and flow; and (3) highlight unique opportunities to develop and test the metabolic scaling theory of ecology in tumors transitioning from avascular to vascular geometries.
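The scaling exponents quoted above come from fits of allometric power laws; a minimal sketch of how such an exponent is estimated, using synthetic data in place of the PET-CT measurements: since B = cV^α implies log B = α log V + log c, the least-squares slope in log-log space is the exponent.

    import numpy as np

    rng = np.random.default_rng(0)
    volume = 10 ** rng.uniform(0, 3, 535)    # stand-in tumor volumes
    metabolism = volume ** 0.7 * np.exp(0.1 * rng.standard_normal(535))

    # Slope of the log-log least-squares fit estimates the scaling exponent alpha
    alpha, log_c = np.polyfit(np.log(volume), np.log(metabolism), 1)
    print(f"estimated scaling exponent alpha = {alpha:.2f}")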
An ensemble learning approach for brain cancer detection exploiting radiomic features
Brunese, Luca
Mercaldo, Francesco
Reginelli, Alfonso
Santone, Antonella
Comput Methods Programs Biomed2019Journal Article, cited 1 times
Website
REMBRANDT
BraTS
Classification
Radiopaedia
Magnetic Resonance Imaging (MRI)
BACKGROUND AND OBJECTIVE: Brain cancer is one of the most aggressive tumor types: 70% of patients diagnosed with this malignant cancer will not survive. Early detection of brain tumours can be fundamental to increase survival rates. Brain cancers are classified into four different grades (i.e., I, II, III and IV) according to how normal or abnormal the brain cells look. This work aims to recognize the different brain cancer grades by analysing brain magnetic resonance images. METHODS: A method to identify the components of an ensemble learner is proposed. The ensemble learner is focused on the discrimination between different brain cancer grades using non-invasive radiomic features. The considered radiomic features belong to five different groups: First Order, Shape, Gray Level Co-occurrence Matrix, Gray Level Run Length Matrix and Gray Level Size Zone Matrix. We evaluate the features' effectiveness through hypothesis testing and through decision boundaries, performance analysis, and calibration plots, and thus we select the best candidate classifiers for the ensemble learner. RESULTS: We evaluate the proposed method with 111,205 brain magnetic resonance images belonging to two data-sets freely available for research purposes. The results are encouraging: we obtain an accuracy of 99% in detecting benign grade I and malignant grade II, III, and IV brain cancers. CONCLUSION: The experimental results confirm that the ensemble learner designed with the proposed method outperforms the current state-of-the-art approaches in brain cancer grade detection starting from magnetic resonance images.
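A minimal scikit-learn sketch of the kind of soft-voting ensemble described here; the base classifiers, their hyper-parameters, and the synthetic feature matrix are assumptions standing in for the paper's selected candidates and radiomic features.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # Stand-in feature matrix playing the role of the five radiomic feature groups
    X, y = make_classification(n_samples=300, n_features=40, random_state=0)

    # Soft-voting ensemble over heterogeneous base classifiers; in the paper the
    # members are screened via hypothesis testing, calibration plots, and
    # decision-boundary analysis before being combined
    ensemble = VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("rf", RandomForestClassifier(random_state=0)),
                    ("svc", SVC(probability=True))],
        voting="soft")
    print(cross_val_score(ensemble, X, y, cv=5).mean())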
Formal methods for prostate cancer Gleason score and treatment prediction using radiomic biomarkers
Brunese, Luca
Mercaldo, Francesco
Reginelli, Alfonso
Santone, Antonella
Magnetic resonance imaging2020Journal Article, cited 11 times
Website
PROSTATE-DIAGNOSIS
Fused Radiology-Pathology Prostate
Gleason scoring
radiomics
Biomarker
A Novel Hybridized Feature Extraction Approach for Lung Nodule Classification Based on Transfer Learning Technique
Bruntha, P. M.
Pandian, S. I. A.
Anitha, J.
Abraham, S. S.
Kumar, S. N.
J Med Phys2022Journal Article, cited 0 times
Website
LIDC-IDRI
Convolutional Neural Network (CNN)
Support Vector Machine (SVM)
Classification
residual neural network
transfer learning
Purpose: In the field of medical diagnosis, deep learning-based computer-aided detection of diseases can reduce the burden on physicians, especially in the case of lung cancer nodule classification. Materials and Methods: A hybridized model which integrates deep features from a Residual Neural Network obtained via transfer learning with handcrafted features from the histogram of oriented gradients (HOG) feature descriptor is proposed to classify lung nodules as benign or malignant. The intrinsic convolutional neural network (CNN) features are incorporated because they resolve a drawback of handcrafted features, which do not completely reflect the specific characteristics of a nodule; at the same time, they reduce the need for a large-scale annotated dataset. For classifying malignant and benign nodules, a radial basis function support vector machine is used. The proposed hybridized model is evaluated on the LIDC-IDRI dataset. Results: It has achieved an accuracy of 97.53%, sensitivity of 98.62%, specificity of 96.88%, precision of 95.04%, F1 score of 0.9679, false-positive rate of 3.117%, and false-negative rate of 1.38%, and has been compared with other state-of-the-art techniques. Conclusions: The performance of the proposed hybridized feature-based classification technique is better than that of the deep features-based classification technique in lung nodule classification.
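A minimal sketch of the hybridization idea: handcrafted HOG descriptors are concatenated with deep features and classified with an RBF-kernel SVM. The patch size, HOG parameters, and random stand-ins for the nodule patches and ResNet features are illustrative assumptions.

    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    patches = rng.random((50, 64, 64))   # stand-in nodule patches
    labels = rng.integers(0, 2, 50)      # benign vs. malignant

    # Handcrafted HOG descriptor per patch
    hog_feats = np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                              cells_per_block=(2, 2)) for p in patches])

    # Deep features would come from a pre-trained ResNet; random vectors stand in here
    deep_feats = rng.random((50, 512))

    # Hybridized feature vector classified with an RBF-kernel SVM
    X = np.hstack([hog_feats, deep_feats])
    clf = SVC(kernel="rbf").fit(X, labels)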
Two Stages CNN-Based Segmentation of Gliomas, Uncertainty Quantification and Prediction of Overall Patient Survival
This paper proposes, in the context of brain tumor study, a fast automatic method that segments tumors and predicts patient overall survival. The segmentation stage is implemented using two fully convolutional networks based on VGG-16, pre-trained on ImageNet for natural image classification and fine-tuned with the training dataset of the MICCAI 2019 BraTS Challenge. The first network yields a binary segmentation (background vs. lesion) and the second one focuses on the enhancing and non-enhancing tumor classes. The final multiclass segmentation is a fusion of the results of these two networks. The prediction stage is implemented using kernel principal component analysis and random forest classifiers. It only requires a predicted segmentation of the tumor and a homemade atlas. Its simplicity allows it to be trained with very few examples and it can be used after any segmentation process.
Quantitative variations in texture analysis features dependent on MRI scanning parameters: A phantom model
Buch, Karen
Kuno, Hirofumi
Qureshi, Muhammad M
Li, Baojun
Sakai, Osamu
Journal of applied clinical medical physics2018Journal Article, cited 0 times
Website
RIDER
TCGA
texture analysis
MRI
Quantitative Imaging Biomarker Ontology (QIBO) for Knowledge Representation of Biomedical Imaging Biomarkers
Buckler, AndrewJ
Ouellette, M.
Danagoulian, J.
Wernsing, G.
Liu, TiffanyTing
Savig, Erica
Suzek, BarisE
Rubin, DanielL
Paik, David
Journal of Digital Imaging2013Journal Article, cited 17 times
Website
Imaging biomarker
Ontology development
Quantitative imaging
Association of genomic subtypes of lower-grade gliomas with shape features automatically extracted by a deep learning algorithm
Buda, Mateusz
Saha, Ashirbani
Mazurowski, Maciej A
Computers in Biology and Medicine2019Journal Article, cited 1 times
Website
TCGA-LGG
Radiomics
Deep learning
Radiogenomics
Brain
Recent analysis identified distinct genomic subtypes of lower-grade glioma tumors which are associated with shape features. In this study, we propose a fully automatic way to quantify tumor imaging characteristics using deep learning-based segmentation and test whether these characteristics are predictive of tumor genomic subtypes.
We used preoperative imaging and genomic data of 110 patients from 5 institutions with lower-grade gliomas from The Cancer Genome Atlas. Based on automatic deep learning segmentations, we extracted three features which quantify two-dimensional and three-dimensional characteristics of the tumors. Genomic data for the analyzed cohort of patients consisted of previously identified genomic clusters based on IDH mutation and 1p/19q co-deletion, DNA methylation, gene expression, DNA copy number, and microRNA expression. To analyze the relationship between the imaging features and genomic clusters, we conducted the Fisher exact test for 10 hypotheses for each pair of imaging feature and genomic subtype. To account for multiple hypothesis testing, we applied a Bonferroni correction. P-values lower than 0.005 were considered statistically significant.
We found the strongest association between RNASeq clusters and the bounding ellipsoid volume ratio (p < 0.0002) and between RNASeq clusters and margin fluctuation (p < 0.005). In addition, we identified associations between bounding ellipsoid volume ratio and all tested molecular subtypes (p < 0.02) as well as between angular standard deviation and RNASeq cluster (p < 0.02). In terms of the automatic tumor segmentation that was used to generate the quantitative image characteristics, our deep learning algorithm achieved a mean Dice coefficient of 82%, which is comparable to human performance.
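A sketch of one of the hypothesis tests described above: a Fisher exact test on a 2x2 contingency table relating a dichotomized shape feature to a genomic cluster, with the Bonferroni threshold of 0.05/10 = 0.005. The table counts are invented for illustration.

    from scipy.stats import fisher_exact

    # Illustrative 2x2 table: rows = high/low bounding ellipsoid volume ratio,
    # columns = membership in one RNASeq cluster vs. the rest
    table = [[18, 7], [9, 21]]
    _, p = fisher_exact(table)

    # Bonferroni correction for the 10 hypotheses tested per feature/subtype pair
    n_tests = 10
    print(f"raw p = {p:.4f}, significant after Bonferroni: {p < 0.05 / n_tests}")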
Radiomics analysis of contrast-enhanced CT scans can distinguish between clear cell and non-clear cell renal cell carcinoma in different imaging protocols
Budai, Bettina Katalin
Stollmayer, Róbert
Rónaszéki, Aladár Dávid
Körmendy, Borbála
Zsombor, Zita
Palotás, Lõrinc
Fejér, Bence
Szendrõi, Attila
Székely, Eszter
Maurovich-Horvat, Pál
Kaposi, Pál Novák
2022Journal Article, cited 0 times
C4KC-KiTS
TCGA-KIRC
TCGA-KIRP
Introduction: This study aimed to construct a radiomics-based machine learning (ML) model for differentiation between non-clear cell and clear cell renal cell carcinomas (ccRCC) that is robust against institutional imaging protocols and scanners.
Materials and methods: Preoperative unenhanced (UN), corticomedullary (CM), and excretory (EX) phase CT scans from 209 patients diagnosed with RCCs were retrospectively collected. After the three-dimensional segmentation, 107 radiomics features (RFs) were extracted from the tumor volumes in each contrast phase. For the ML analysis, the cases were randomly split into training and test sets with a 3:1 ratio. Highly correlated RFs were filtered out based on Pearson's correlation coefficient (r > 0.95). Intraclass correlation coefficient analysis was used to select RFs with excellent reproducibility (ICC ≥ 0.90). The most predictive RFs were selected by the least absolute shrinkage and selection operator (LASSO). A support vector machine algorithm-based binary classifier (SVC) was constructed to predict tumor types, and its performance was evaluated based on receiver operating characteristic curve (ROC) analysis. The "Kidney Tumor Segmentation 2019" (KiTS19) publicly available dataset was used during external validation of the model. The performance of the SVC was also compared with an expert radiologist's.
Results: The training set consisted of 121 ccRCCs and 38 non-ccRCCs, while the independent internal test set contained 40 ccRCCs and 13 non-ccRCCs. For external validation, 50 ccRCCs and 23 non-ccRCCs were identified from the KiTS19 dataset with the available UN, CM, and EX phase CTs. After filtering out the highly correlated and poorly reproducible features, the LASSO algorithm selected 10 CM phase RFs that were then used for model construction. During external validation, the SVC achieved an area under the ROC curve (AUC) value, accuracy, sensitivity, and specificity of 0.83, 0.78, 0.80, and 0.74, respectively. UN and/or EX phase RFs did not further increase the model's performance. Meanwhile, in the same comparison, the expert radiologist achieved similar performance with an AUC of 0.77, an accuracy of 0.79, a sensitivity of 0.84, and a specificity of 0.69.
Conclusion: Radiomics analysis of CM phase CT scans combined with ML can achieve comparable performance with an expert radiologist in differentiating ccRCCs from non-ccRCCs.
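A scikit-learn sketch of the feature-selection-plus-classifier chain described in the methods (the ICC filtering step, which needs repeated segmentations, is omitted): a Pearson correlation filter at r > 0.95, an L1-penalized selector standing in for LASSO, and an SVC. The data are random placeholders with the stated dimensions.

    import numpy as np
    from sklearn.feature_selection import SelectFromModel
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.random((159, 107))    # stand-in: 107 CM-phase radiomic features
    y = rng.integers(0, 2, 159)   # ccRCC vs. non-ccRCC

    # Step 1: drop one feature of each highly correlated pair (r > 0.95)
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = ~np.any(np.triu(corr, k=1) > 0.95, axis=0)
    X = X[:, keep]

    # Steps 2-3: L1-based selection (LASSO-style) feeding an SVC
    model = Pipeline([
        ("scale", StandardScaler()),
        ("select", SelectFromModel(
            LogisticRegression(penalty="l1", solver="liblinear", C=0.1))),
        ("svc", SVC(probability=True)),
    ]).fit(X, y)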
Medical Physics2015Journal Article, cited 4 times
Website
Algorithm Development
Image registration
PROSTATE
Magnetic Resonance Imaging (MRI)
PURPOSE: T2-weighted magnetic resonance imaging (MRI) is commonly used for anatomical visualization in the pelvis area, such as the prostate, with high soft-tissue contrast. MRI can also provide functional information such as diffusion-weighted imaging (DWI), which depicts the molecular diffusion processes in biological tissues. The combination of anatomical and functional imaging techniques is widely used in oncology, e.g., for prostate cancer diagnosis and staging. However, acquisition-specific distortions as well as physiological motion lead to misalignments between T2 and DWI and consequently to a reduced diagnostic value. Image registration algorithms are commonly employed to correct for such misalignment. METHODS: The authors compare the performance of five state-of-the-art nonrigid image registration techniques for accurate image fusion of DWI with T2. RESULTS: Image data of 20 prostate patients with cancerous lesions or cysts were acquired. All registration algorithms were validated using intensity-based as well as landmark-based techniques. CONCLUSIONS: The authors' results show that the "fast elastic image registration" provides the most accurate results, with a target registration error of 1.07 +/- 0.41 mm at minimum execution times of 11 +/- 1 s.
E1D3 U-Net for Brain Tumor Segmentation: Submission to the RSNA-ASNR-MICCAI BraTS 2021 challenge
Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance in medical image segmentation tasks. A common feature in most top-performing CNNs is an encoder-decoder architecture inspired by the U-Net. For multi-region brain tumor segmentation, 3D U-Net architecture and its variants provide the most competitive segmentation performances. In this work, we propose an interesting extension of the standard 3D U-Net architecture, specialized for brain tumor segmentation. The proposed network, called E1D3 U-Net, is a one-encoder, three-decoder fully-convolutional neural network architecture where each decoder segments one of the hierarchical regions of interest: whole tumor, tumor core, and enhancing core. On the BraTS 2018 validation (unseen) dataset, E1D3 U-Net demonstrates single-prediction performance comparable with most state-of-the-art networks in brain tumor segmentation, with reasonable computational requirements and without ensembling. As a submission to the RSNA-ASNR-MICCAI BraTS 2021 challenge, we also evaluate our proposal on the BraTS 2021 dataset. E1D3 U-Net showcases the flexibility in the standard 3D U-Net architecture which we exploit for the task of brain tumor segmentation.
Decision region analysis to deconstruct the subgroup influence on AI/ML predictions
Burgon, Alexis
Petrick, Nicholas
Sahiner, Berkman
Pennello, Gene
Samala, Ravi K.
2023Conference Proceedings, cited 0 times
COVID-19-NY-SBU
MIDRC-RICORD-1C
Assessing the generalizability of deep learning algorithms based on the size and diversity of the training data is not trivial. This study uses the mapping of samples in the image data space to the decision regions in the prediction space to understand how different subgroups in the data impact the neural network learning process and affect model generalizability. Using vicinal distribution-based linear interpolation, a plane of the decision region space spanned by a random 'triplet' of three images can be constructed. Analyzing these decision regions for many random triplets can provide insight into the relationships between distinct subgroups. In this study, a contrastive self-supervised approach is used to develop a 'base' classification model trained on a large chest x-ray (CXR) dataset. The base model is fine-tuned on COVID-19 CXR data to predict image acquisition technology (computed radiography (CR) or digital radiography (DX)) and patient sex (male (M) or female (F)). Decision region analysis shows that the model's image acquisition technology decision space is dominated by CR, regardless of the acquisition technology for the base images. Similarly, the Female class dominates the decision space. This study shows that decision region analysis has the potential to provide insights into subgroup diversity, sources of imbalances in the data, and model generalizability.
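A sketch of the vicinal interpolation underlying the decision-region construction: barycentric mixtures of a random triplet of images span a 2-D plane in image space; each mixture would then be classified and its label mapped back onto the plane. Image sizes and step count are illustrative assumptions.

    import numpy as np

    def triplet_plane(img1, img2, img3, steps=11):
        # Grid of vicinal images spanned by a triplet via barycentric interpolation
        points = []
        for i in range(steps):
            for j in range(steps - i):
                a, b = i / (steps - 1), j / (steps - 1)
                c = 1.0 - a - b              # the three weights sum to one
                points.append(a * img1 + b * img2 + c * img3)
        return np.stack(points)

    rng = np.random.default_rng(0)
    imgs = rng.random((3, 64, 64))           # stand-in CXR triplet
    plane = triplet_plane(*imgs)
    # Each interpolated image is then classified (e.g., CR vs. DX, M vs. F) and
    # the predicted labels are plotted over the plane to reveal decision regions.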
Yet anOther Dose Algorithm (YODA) for independent computations of dose and dose changes due to anatomical changes
Burlacu, T.
Lathouwers, D.
Perko, Z.
Phys Med Biol2024Journal Article, cited 1 times
Website
Pelvic-Reference-Data
Objective: To assess the viability of a physics-based, deterministic and adjoint-capable algorithm for performing treatment-planning-system-independent dose calculations and for computing dosimetric differences caused by anatomical changes. Approach: A semi-numerical approach is employed to solve two partial differential equations for the proton phase-space density which determines the deposited dose. Lateral heterogeneities are accounted for by an optimized (Gaussian) beam splitting scheme. Adjoint theory is applied to approximate the change in the deposited dose caused by a new underlying patient anatomy. Main results: The dose engine's accuracy was benchmarked through three-dimensional gamma index comparisons against Monte Carlo simulations done in TOPAS. For a lung test case, the worst passing rate with (1 mm, 1%, 10% dose cut-off) criteria is 94.55%. The effect of delivering treatment plans on repeat CTs was also tested. For non-robustly optimized plans the adjoint component was accurate to 5.7%, while for a robustly optimized plan it was accurate to 4.8%. Significance: Yet anOther Dose Algorithm is capable of accurate dose computations in both single- and multi-spot irradiations when compared to TOPAS. Moreover, it is able to compute dosimetric differences due to anatomical changes with small to moderate errors, thereby facilitating its use for patient-specific quality assurance in online adaptive proton therapy.
Using computer‐extracted image phenotypes from tumors on breast magnetic resonance imaging to predict breast cancer pathologic stage
Selective segmentation of a feature that has two distinct intensities
Burrows, Liam
Chen, Ke
Torella, Francesco
Journal of Algorithms & Computational Technology2021Journal Article, cited 0 times
Website
H&E-stained slides
MiMM_SBILab Dataset: Microscopic Images of Multiple Myeloma
Segmentation
Pathomics
It is common for a segmentation model to compute and locate edges, or regions separated by edges, according to a certain distribution of intensity. However, such edge information is not always useful for extracting an object or feature that has two distinct intensities, e.g., segmentation of a building with signage in front, or of an organ that has diseased regions, unless some kind of manual editing or a learning idea is applied. This paper proposes an automatic and selective segmentation model that can segment a feature that has two distinct intensities from a single click. A patch-like idea is employed to design our two-stage model, given only one geometric marker to indicate the location of the inside region. The difficult case where the inside region leans towards the boundary of the feature of interest is investigated, with recommendations given and reliability tested. The model is mainly presented in 2D, but it can be easily generalised to 3D. We have implemented the model for segmenting both 2D and 3D images.
Diffusion tensor transformation for personalizing target volumes in radiation therapy
Buti, G.
Ajdari, A.
Bridge, C. P.
Sharp, G. C.
Bortfeld, T.
Med Image Anal2024Journal Article, cited 0 times
Website
GLIS-RT
Diffusion tensor imaging (DTI) is used in tumor growth models to provide information on the infiltration pathways of tumor cells into the surrounding brain tissue. When a patient-specific DTI is not available, a template image such as a DTI atlas can be transformed to the patient anatomy using image registration. This study investigates a model, the invariance under coordinate transform (ICT), that transforms diffusion tensors from a template image to the patient image, based on the principle that the tumor growth process can be mapped, at any point in time, between the images using the same transformation function that we use to map the anatomy. The ICT model allows the mapping of tumor cell densities and tumor fronts (as iso-levels of tumor cell density) from the template image to the patient image for inclusion in radiotherapy treatment planning. The proposed approach transforms the diffusion tensors to simulate tumor growth in locally deformed anatomy and outputs the tumor cell density distribution over time. The ICT model is validated in a cohort of ten brain tumor patients. Comparative analysis with the tumor cell density in the original template image shows that the ICT model accurately simulates tumor cell densities in the deformed image space. By creating radiotherapy target volumes as tumor fronts, this study provides a framework for more personalized radiotherapy treatment planning, without the use of additional imaging.
The influence of anisotropy on the clinical target volume of brain tumor patients
Buti, Gregory
Ajdari, Ali
Hochreuter, Kim
Shih, Helen
Bridge, Christopher P
Sharp, Gregory C
Bortfeld, Thomas
Physics in Medicine and Biology2024Journal Article, cited 0 times
GLIS-RT
Diffusion Tensor Imaging
Radiotherapy Planning
Glioma
Objective: Current radiotherapy guidelines for glioma target volume definition recommend a uniform margin expansion from the gross tumor volume (GTV) to the clinical target volume (CTV), assuming uniform infiltration in the invaded brain tissue. However, glioma cells migrate preferentially along white matter tracts, suggesting that white matter directionality should be considered in an anisotropic CTV expansion. We investigate two models of anisotropic CTV expansion and evaluate their clinical feasibility. Approach: To incorporate white matter directionality into the CTV, a diffusion tensor imaging (DTI) atlas is used. The DTI atlas consists of water diffusion tensors that are first spatially transformed into local tumor resistance tensors, also known as metric tensors, and secondly fed to a CTV expansion algorithm to generate anisotropic CTVs. Two models of spatial transformation are considered in the first step. The first model assumes that tumor cells experience reduced resistance parallel to the white matter fibers. The second model assumes that the anisotropy of tumor cell resistance is proportional to the anisotropy observed in DTI, with an 'anisotropy weighting parameter' controlling the proportionality. The models are evaluated in a cohort of ten brain tumor patients. Main results: To evaluate the sensitivity of the model, a library of model-generated CTVs was computed by varying the resistance and anisotropy parameters. Our results indicate that the resistance coefficient had the most significant effect on the global shape of the CTV expansion by redistributing the target volume from potentially less involved gray matter to white matter tissue. In addition, the anisotropy weighting parameter proved useful in locally increasing CTV expansion in regions characterized by strong tissue directionality, such as near the corpus callosum. Significance: By incorporating anisotropy into the CTV expansion, this study is a step toward an interactive CTV definition that can assist physicians in incorporating neuroanatomy into a clinically optimized CTV.
Four‐Dimensional Machine Learning Radiomics for the Pretreatment Assessment of Breast Cancer Pathologic Complete Response to Neoadjuvant Chemotherapy in Dynamic Contrast‐Enhanced MRI
Caballo, Marco
Sanderink, Wendelien BG
Han, Luyi
Gao, Yuan
Athanasiou, Alexandra
Mann, Ritse M
Journal of Magnetic Resonance Imaging2022Journal Article, cited 1 times
Website
Duke-Breast-Cancer-MRI
Machine Learning
Radiomic feature
breast cancer
4D radiomics in dynamic contrast-enhanced MRI: prediction of pathological complete response and systemic recurrence in triple-negative breast cancer
Caballo, Marco
Sanderink, Wendelien B. G.
Han, Luyi
Gao, Yuan
Athanasiou, Alexandra
Mann, Ritse M.
2022Conference Proceedings, cited 0 times
Duke-Breast-Cancer-MRI
We developed a four-dimensional (4D) radiomics approach for the analysis of breast cancer on dynamic contrast-enhanced (DCE) MRI scans. This approach quantifies 348 features related to kinetics, enhancement heterogeneity, and time-dependent textural variation in 4D (3D over time) from the tumors and the peritumoral regions, leveraging both spatial and temporal image information. The potential of these features was studied for two clinical applications: the prediction of pathological complete response (pCR) to neoadjuvant chemotherapy (NAC), and of systemic recurrence (SR) in triple-negative (TN) breast cancers. For this, 72 pretreatment images of TN cancers (19 achieving pCR, 14 recurrence events), retrieved from a publicly available dataset (The Cancer Imaging Archive, Duke-Breast-Cancer-MRI dataset), were used. For both clinical problems, radiomic features were extracted from each case and used to develop a machine learning logistic regression model for outcome prediction. The model was trained and validated in a supervised leave-one-out cross validation fashion, with the input feature space reduced through statistical analysis and forward selection for overfitting prevention. The model was tested using the area under the receiver operating characteristic (ROC) curve (AUC), and statistical significance was assessed using the associated 95% confidence interval estimated through bootstrapping. The model achieved an AUC of 0.80 and 0.86, respectively for pCR and SR prediction. Both AUC values were statistically significant (p<0.05, adjusted for repeated testing). In conclusion, the developed approach could quantify relevant imaging biomarkers from TN breast cancers in pretreatment DCE-MRI images. These biomarkers were promising in the prediction of pCR to NAC and SR.
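A minimal sketch of the supervised leave-one-out evaluation described here: each case is scored by a logistic regression trained on all other cases, and the held-out scores are pooled into a single AUC. The feature count and random stand-in data are assumptions; the paper's statistical feature reduction and forward selection are omitted.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import LeaveOneOut
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.random((72, 12))       # stand-in: selected 4D radiomic features, 72 TN cancers
    y = rng.integers(0, 2, 72)     # e.g., pCR achieved yes/no

    scores = np.zeros(len(y))
    for train, test in LeaveOneOut().split(X):
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        clf.fit(X[train], y[train])
        scores[test] = clf.predict_proba(X[test])[:, 1]

    print(f"LOO AUC: {roc_auc_score(y, scores):.2f}")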
Reducing CNN Textural Bias With k-Space Artifacts Improves Robustness
Cabrera, Yaniel
Fetit, Ahmed E.
IEEE Access2022Journal Article, cited 0 times
TCGA-GBM
TCGA-LGG
Convolutional neural networks (CNNs) have become the de facto algorithms of choice for semantic segmentation tasks in biomedical image processing. Yet, models based on CNNs remain susceptible to the domain shift problem, where a mismatch between source and target distributions could lead to a drop in performance. CNNs were recently shown to exhibit a textural bias when processing natural images, and recent studies suggest that this bias also extends to the context of biomedical imaging. In this paper, we focus on Magnetic Resonance Images (MRI) and investigate textural bias in the context of k-space artifacts (Gibbs, spike, and wraparound artifacts), which naturally manifest in clinical MRI scans. We show that carefully introducing such artifacts at training time can help reduce textural bias, and consequently lead to CNN models that are more robust to acquisition noise and out-of-distribution inference, including scans from hospitals not seen during training. We also present Gibbs ResUnet; a novel, end-to-end framework that automatically finds an optimal combination of Gibbs k-space stylizations and segmentation model weights. We illustrate our findings on multimodal and multi-institutional clinical MRI datasets obtained retrospectively from the Medical Segmentation Decathlon (n = 750) and The Cancer Imaging Archive (n = 243).
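As a concrete illustration of introducing k-space artifacts at training time, the following is a minimal sketch (not the authors' code) that induces Gibbs-style ringing by truncating high spatial frequencies of a 2D slice in k-space; the keep_fraction value is an illustrative assumption.

```python
# Minimal sketch: simulate a Gibbs-style k-space truncation artifact.
import numpy as np

def gibbs_truncate(slice_2d: np.ndarray, keep_fraction: float = 0.6) -> np.ndarray:
    """Zero out high spatial frequencies in k-space to induce Gibbs ringing."""
    k = np.fft.fftshift(np.fft.fft2(slice_2d))           # centered k-space
    h, w = k.shape
    kh, kw = int(h * keep_fraction / 2), int(w * keep_fraction / 2)
    mask = np.zeros(k.shape, dtype=bool)
    mask[h // 2 - kh : h // 2 + kh, w // 2 - kw : w // 2 + kw] = True
    k[~mask] = 0                                         # truncate the periphery
    return np.real(np.fft.ifft2(np.fft.ifftshift(k)))    # back to image space
```

Applied with a randomly drawn keep_fraction per training sample, such a transform acts as a texture-perturbing augmentation.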
Prognostic generalization of multi-level CT-dose fusion dosiomics from primary tumor and lymph node in nasopharyngeal carcinoma
Cai, C.
Lv, W.
Chi, F.
Zhang, B.
Zhu, L.
Yang, G.
Zhao, S.
Zhu, Y.
Han, X.
Dai, Z.
Wang, X.
Lu, L.
Med Phys2022Journal Article, cited 0 times
Website
Head-Neck-PET-CT
HNSCC
Radiomics
Computed Tomography (CT)
dosiomics
multi-level fusion
Segmentation
Algorithm Development
OBJECTIVES: To investigate the prognostic performance of multi-level CT-dose fusion dosiomics at the image, matrix, and feature levels from the gross tumor volume at the nasopharynx and the involved lymph node for nasopharyngeal carcinoma (NPC) patients. MATERIALS AND METHODS: Two hundred and nineteen NPC patients (175 vs. 44 for training vs. internal validation) were used to train the prediction model, and thirty-two NPC patients were used for external validation. We first extracted CT and dose information from intratumoral nasopharynx (GTV_nx) and lymph node (GTV_nd) regions. The corresponding peritumoral regions (RING_3mm and RING_5mm) were also considered. Thus, the individual and combined intra- and peri-tumoral regions were as follows: GTV_nx, GTV_nd, RING_3mm_nx, RING_3mm_nd, RING_5mm_nx, RING_5mm_nd, GTV_nxnd, RING_3mm_nxnd, RING_5mm_nxnd, GTV+RING_3mm_nxnd and GTV+RING_5mm_nxnd. For each region, eleven models were built by combining 5 clinical parameters and 127 features from: (1) dose images alone; (2-7) fused dose and CT images via wavelet-based fusion (WF) using CT weights of 0.2, 0.4, 0.6 and 0.8, gradient transfer fusion (GTF), and guided filtering-based fusion (GFF); (8) fused matrices (sumMat); (9-10) fused features derived via feature averaging (avgFea) and feature concatenation (conFea); and finally, (11) CT images alone. The C-index and Kaplan-Meier curves with log-rank test were used to assess model performance. RESULTS: The fusion models' performance was better than that of the single CT/dose models on both internal and external validation. Models combining the information from both GTV_nx and GTV_nd regions outperformed the single-region models. For internal validation, the GTV+RING_3mm_nxnd GFF model achieved the highest C-index in both recurrence-free survival (RFS) and metastasis-free survival (MFS) predictions (RFS: 0.822; MFS: 0.786). The highest C-index in the external validation set was achieved by the RING_3mm_nxnd model (RFS: 0.762; MFS: 0.719). The GTV+RING_3mm_nxnd GFF model is able to significantly separate patients into high-risk and low-risk groups compared to dose-only or CT-only models. CONCLUSION: The fusion dosiomics model combining the primary tumor, the involved lymph node, and 3-mm peritumoral information outperformed single-modality models for different outcome predictions, which is helpful for clinical decision-making and the development of personalized treatment.
An Online Mammography Database with Biopsy Confirmed Types
Cai, Hongmin
Wang, Jinhua
Dan, Tingting
Li, Jiao
Fan, Zhihao
Yi, Weiting
Cui, Chunyan
Jiang, Xinhua
Li, Li
Scientific Data2023Journal Article, cited 1 times
Website
CMMD
Mammography
breast cancer
Breast carcinoma is the second most common cancer among women worldwide. Early detection of breast cancer has been shown to increase the survival rate, thereby significantly extending patients' lifespans. Mammography, a noninvasive imaging tool with low cost, is widely used to diagnose breast disease at an early stage due to its high sensitivity. Although some public mammography datasets are useful, there is still a lack of open-access datasets that extend beyond white populations or that include biopsy confirmation and known molecular subtypes. To fill this gap, we built a database comprising two online breast mammography datasets. The dataset, named the Chinese Mammography Database (CMMD), contains 3712 mammograms from 1775 patients and is divided into two branches. The first branch, CMMD1, contains 1026 cases (2214 mammograms) with biopsy-confirmed benign or malignant tumors. The second branch, CMMD2, includes 1498 mammograms from 749 patients with known molecular subtypes. Our database is constructed to enrich the diversity of mammography data and promote the development of relevant fields.
Pancreas Segmentation in MRI Using Graph-Based Decision Fusion on Convolutional Neural Networks
Deep neural networks have demonstrated very promising performance on accurate segmentation of challenging organs (e.g., pancreas) in abdominal CT and MRI scans. The current deep learning approaches conduct pancreas segmentation by processing sequences of 2D image slices independently through deep, dense per-pixel masking for each image, without explicitly enforcing spatial consistency constraint on segmentation of successive slices. We propose a new convolutional/recurrent neural network architecture to address the contextual learning and segmentation consistency problem. A deep convolutional sub-network is first designed and pre-trained from scratch. The output layer of this network module is then connected to recurrent layers and can be fine-tuned for contextual learning, in an end-to-end manner. Our recurrent sub-network is a type of Long short-term memory (LSTM) network that performs segmentation on an image by integrating its neighboring slice segmentation predictions, in the form of a dependent sequence processing. Additionally, a novel segmentation-direct loss function (named Jaccard Loss) is proposed and deep networks are trained to optimize Jaccard Index (JI) directly. Extensive experiments are conducted to validate our proposed deep models, on quantitative pancreas segmentation using both CT and MRI scans. Our method outperforms the state-of-the-art work on CT [11] and MRI pancreas segmentation [1], respectively.
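The "Jaccard Loss" named above trains the network to optimize the Jaccard Index (IoU) directly; a minimal differentiable version, written here as a hedged PyTorch sketch using a soft (probabilistic) formulation rather than the paper's exact definition, could look like this:

```python
# Minimal sketch of a differentiable soft-Jaccard (IoU) loss.
import torch

def soft_jaccard_loss(probs: torch.Tensor, target: torch.Tensor,
                      eps: float = 1e-6) -> torch.Tensor:
    """probs, target: (N, H, W) foreground probabilities and binary masks."""
    inter = (probs * target).sum(dim=(1, 2))
    union = probs.sum(dim=(1, 2)) + target.sum(dim=(1, 2)) - inter
    return 1.0 - ((inter + eps) / (union + eps)).mean()  # 1 - soft IoU
```

Because the loss is computed on per-pixel probabilities, its gradient flows through every voxel, making the network optimize overlap rather than per-pixel accuracy.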
Feature Learning by Attention and Ensemble with 3D U-Net to Glioma Tumor Segmentation
Cai, Xiaohong
Lou, Shubin
Shuai, Mingrui
An, Zhulin
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
BraTS 2021 Task 1 addresses the segmentation of intrinsically heterogeneous brain glioblastoma sub-regions in mpMRI scans. Building on the solution of a top-ten team from BraTS 2020 (open_brats2020), we propose a 3D U-Net-like neural network, called TE U-Net, to differentiate glioma sub-region classes. To automatically learn to focus on sub-region structures of varying shapes and sizes, TE U-Net adopts an architecture similar to U-Net++. First, we retained the skip connections of the second and third encoder stages while removing the first-stage skip connection. Second, multi-stage features pass through an attention-gate block before the skip connections, ensembling channel and spatial information to suppress irrelevant regions. Finally, to improve model performance at the post-processing stage, we ensemble multiple similar 3D U-Nets with attention modules. On the online validation database, the best TE U-Net results were Dice scores of 83.79% for the GD-enhancing tumor (ET), 86.47% for the tumor core (TC), and 91.98% for the whole tumor (WT), with Hausdorff (95%) values of 6.39, 7.81, and 3.86, and sensitivities of 82.20%, 83.99%, and 91.92%, respectively. On the final private test dataset, our solution achieved Dice scores of 85.62%, 86.70%, and 90.64% for ET, TC, and WT, with Hausdorff (95%) values of 18.70, 21.06, and 10.88.
Medical Image Retrieval Based on Convolutional Neural Network and Supervised Hashing
Cai, Yiheng
Li, Yuanyuan
Qiu, Changyan
Ma, Jie
Gao, Xurong
IEEE Access2019Journal Article, cited 0 times
Website
NSCLC-Radiomics
Pancreas-CT
RIDER NEURO MRI
TCGA-BLCA
RIDER Lung CT
Content based image retrieval (CBIR)
Convolutional Neural Network (CNN)
Deep learning
machine learning
In recent years, convolutional neural networks (CNNs) have achieved outstanding performance in image retrieval and other tasks. In this paper, a new content-based medical image retrieval (CBMIR) framework using a CNN and hash coding is proposed. The new framework adopts a Siamese network in which pairs of images are used as inputs, and a model is learned to make images belonging to the same class have similar features by using weight sharing and a contrastive loss function. In each branch of the network, a CNN is adapted to extract features, followed by hash mapping, which is used to reduce the dimensionality of the feature vectors. In the training process, a new loss function is designed to make the feature vectors more distinguishable, and a regularization term is added to encourage the real-valued outputs to approximate the desired binary values. In the retrieval phase, the compact binary hash code of the query image is obtained from the trained network and is subsequently compared with the hash codes of the database images. We experimented on two medical image datasets: The Cancer Imaging Archive computed tomography (TCIA-CT) dataset and the vision and image analysis group/international early lung cancer action program (VIA/I-ELCAP) dataset. The results indicate that our method is superior to existing hash methods and CNN methods. Compared with traditional hashing methods, feature extraction based on a CNN has advantages. The proposed algorithm combining a Siamese network with the hash method is superior to classical CNN-based methods. The application of the new loss function can effectively improve retrieval accuracy.
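As an illustration of the training objective described above, here is a hedged PyTorch sketch of a contrastive loss on Siamese embeddings combined with a binarization regularizer; the margin and weighting values are assumptions, not the paper's settings:

```python
# Minimal sketch: contrastive loss + regularizer pushing outputs toward +/-1.
import torch
import torch.nn.functional as F

def contrastive_hash_loss(z1, z2, same_class, margin=2.0, lam=0.1):
    """z1, z2: (N, D) branch embeddings; same_class: (N,) floats in {0, 1}."""
    d = F.pairwise_distance(z1, z2)                      # Euclidean distance
    contrastive = same_class * d.pow(2) + \
                  (1 - same_class) * F.relu(margin - d).pow(2)
    # Encourage real-valued outputs to sit near -1/+1, so sign() loses little.
    binarize = (z1.abs() - 1).pow(2).mean() + (z2.abs() - 1).pow(2).mean()
    return contrastive.mean() + lam * binarize
```

At retrieval time, the compact code would simply be the sign of the embedding, compared against database codes by Hamming distance.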
The University of California San Francisco Preoperative Diffuse Glioma MRI Dataset
A real use case of semi-supervised learning for mammogram classification in a local clinic of Costa Rica
Calderon-Ramirez, S.
Murillo-Hernandez, D.
Rojas-Salazar, K.
Elizondo, D.
Yang, S.
Moemeni, A.
Molina-Cabello, M.
Med Biol Eng Comput2022Journal Article, cited 13 times
Website
CBIS-DDSM
Costa Rica
*Diagnosis
Computer-Assisted/methods
Humans
Mammography
Reproducibility of Results
*Supervised Machine Learning
Breast cancer
Data imbalance
Mammogram
Semi-supervised deep learning
Transfer learning
The implementation of deep learning-based computer-aided diagnosis systems for the classification of mammogram images can help in improving the accuracy, reliability, and cost of diagnosing patients. However, training a deep learning model requires a considerable amount of labelled images, which can be expensive to obtain as time and effort from clinical practitioners are required. To address this, a number of publicly available datasets have been built with data from different hospitals and clinics, which can be used to pre-train the model. However, using models trained on these datasets for later transfer learning and model fine-tuning with images sampled from a different hospital or clinic might result in lower performance. This is due to the distribution mismatch of the datasets, which include different patient populations and image acquisition protocols. In this work, a real-world scenario is evaluated where a novel target dataset sampled from a private Costa Rican clinic is used, with few labels and heavily imbalanced data. The use of two popular and publicly available datasets (INbreast and CBIS-DDSM) as source data, to train and test the models on the novel target dataset, is evaluated. A common approach to further improve the model's performance under such a small labelled target dataset setting is data augmentation. However, cheaper unlabelled data is often available from the target clinic. Therefore, semi-supervised deep learning, which leverages both labelled and unlabelled data, can be used in such conditions. In this work, we evaluate the semi-supervised deep learning approach known as MixMatch to take advantage of unlabelled data from the target dataset for whole-mammogram image classification. We compare the usage of semi-supervised learning on its own, and combined with transfer learning (from a source mammogram dataset) and data augmentation, against regular supervised learning with transfer learning and data augmentation from the source datasets. It is shown that the use of semi-supervised deep learning combined with transfer learning and data augmentation can provide a meaningful advantage when using scarce labelled observations. We also found a strong influence of the source dataset, which suggests that a more data-centric approach is needed to tackle the challenge of scarcely labelled data. We used several different metrics to assess the performance gain of using semi-supervised learning when dealing with very imbalanced test datasets (such as the G-mean and the F2-score), as mammogram datasets are often very imbalanced. Graphical abstract: description of the test-bed implemented in this work. Two different source data distributions were used to fine-tune the different models tested in this work. The target dataset is the in-house CR-Chavarria-2020 dataset.
A robust index for metal artifact quantification in computed tomography
Cammin, Jochen
Journal of applied clinical medical physics2024Journal Article, cited 0 times
Website
TCGA-PRAD
NaF PROSTATE
COVID-19-AR
Impact of Prior Y90 Dosimetry on Toxicity and Outcomes Following SBRT for Hepatocellular Carcinoma
Campbell, Shauna
Juloori, Aditya
Smile, Timothy
LaHurd, Danielle
Yu, Naichang
Woody, Neil
Stephans, Kevin
2020Journal Article, cited 0 times
PROSTATEx
Lung Cancer Identification via Deep Learning: A Multi-Stage Workflow
Lung cancer diagnosis involves different screening exams concluding with a biopsy. Although it is among the most frequently diagnosed cancers, lung cancer is characterized by a very high mortality rate caused by its aggressive nature. Although swift identification is essential, the current procedure requires multiple physicians to visually inspect many images, leading to lengthy analysis times. In this context, to support radiologists and automate such repetitive processes, Deep Learning (DL) techniques have found their way as helpful diagnosis support tools. With this work, we propose an end-to-end multi-step framework for lung cancer localization within routinely acquired Computed Tomography images. The framework is composed of a first step of lung segmentation, followed by a patch classification model, and ends with a mass segmentation module. Lung segmentation reaches an accuracy of 99.6% even when considerable damage is present, while the patch classifier achieves a sensitivity of 85.48% in identifying patches containing masses. Finally, we evaluate the end-to-end framework for mass segmentation, which proves to be the most challenging task, reaching a mean Dice coefficient of 68.56%.
A quantitative model based on clinically relevant MRI features differentiates lower grade gliomas and glioblastoma
Cao, H.
Erson-Omay, E. Z.
Li, X.
Gunel, M.
Moliterno, J.
Fulbright, R. K.
Eur Radiol2020Journal Article, cited 0 times
Website
TCGA-LGG
TCGA-GBM
Radiomics
Radiogenomics
Magnetic Resonance Imaging (MRI)
Machine Learning
OBJECTIVES: To establish a quantitative MR model that uses clinically relevant features of tumor location and tumor volume to differentiate lower grade glioma (LRGG, grades II and III) and glioblastoma (GBM, grade IV). METHODS: We extracted tumor location and tumor volume (enhancing tumor, non-enhancing tumor, peritumoral edema) features from 229 The Cancer Genome Atlas (TCGA)-LGG and TCGA-GBM cases. Through two sampling strategies, i.e., institution-based sampling and repeat random sampling (10 times, 70% training set vs 30% validation set), LASSO (least absolute shrinkage and selection operator) regression and nine machine-learning-based models were established and evaluated. RESULTS: Principal component analysis of the 229 TCGA-LGG and TCGA-GBM cases suggested that the LRGG and GBM cases could be differentiated by the extracted features. Among the nine machine learning methods, stack modeling and support vector machine achieved the highest performance (institution-based sampling validation set, AUC > 0.900, classifier accuracy > 0.790; repeat random sampling, average validation set AUC > 0.930, classifier accuracy > 0.850). For the LASSO method, the regression model based on tumor frontal lobe percentage and enhancing and non-enhancing tumor volume achieved the highest performance (institution-based sampling validation set, AUC 0.909, classifier accuracy 0.830). The formula for the best-performing LASSO model was established. CONCLUSIONS: Computer-generated, clinically meaningful MRI features of tumor location and component volumes resulted in models with high performance (validation set AUC > 0.900, classifier accuracy > 0.790) to differentiate lower grade glioma and glioblastoma. KEY POINTS: * Lower grade glioma and glioblastoma have significantly different location and component volume distributions. * We built machine learning prediction models that could help accurately differentiate lower grade glioma and GBM cases. * We introduced a fast evaluation model for possible clinical differentiation and further analysis.
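The abstract references a formula for the best LASSO model without reproducing it; for orientation, the generic LASSO-penalized logistic regression objective minimized by such models is the following (a standard form, not the paper's fitted equation):

```latex
\hat{\beta} = \arg\min_{\beta}\;
-\frac{1}{n}\sum_{i=1}^{n}\Big[ y_i \log p_i + (1 - y_i)\log(1 - p_i) \Big]
+ \lambda \lVert \beta \rVert_1,
\qquad p_i = \sigma\!\left(\beta_0 + x_i^{\top}\beta\right)
```

Here x_i would collect the tumor-location and component-volume features, and the L1 penalty weight lambda drives most coefficients to zero, leaving a sparse, interpretable model.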
A CNN-transformer fusion network for COVID-19 CXR image classification
Cao, K.
Deng, T.
Zhang, C.
Lu, L.
Li, L.
PLoS One2022Journal Article, cited 0 times
Website
MIDRC-RICORD-1C
COVID-19
Humans
*COVID-19/diagnostic imaging
*Deep Learning
Neural Networks
Computer
Algorithms
*Pneumonia
The global health crisis due to the fast spread of coronavirus disease (Covid-19) has caused great harm to healthcare, the economy, and many other aspects of daily life. The highly infectious and insidious nature of the new coronavirus greatly increases the difficulty of outbreak prevention and control. Early and rapid detection of Covid-19 is an effective way to reduce its spread. However, detecting Covid-19 accurately and quickly in large populations remains a major challenge worldwide. In this study, a CNN-transformer fusion framework is proposed for the automatic classification of pneumonia on chest X-rays. This framework includes two parts: data processing and image classification. The data processing stage eliminates the differences between data from different medical institutions so that they have the same storage format; in the image classification stage, we use a multi-branch network with a custom convolution module and a transformer module, including feature extraction, feature focus, and feature classification sub-networks. The feature extraction subnetworks extract shallow image features, with information interacting through the convolution and transformer modules. Both local and global features are extracted by the convolution and transformer modules of the feature-focus subnetworks, and are classified by the feature classification subnetworks. The proposed network can decide whether or not a patient has pneumonia and differentiate between Covid-19 and bacterial pneumonia. This network was implemented on the collected benchmark datasets, and the results show that accuracy, precision, recall, and F1 score are 97.09%, 97.16%, 96.93%, and 97.04%, respectively. Our network was compared with methods proposed by other researchers and achieved better results in terms of accuracy, precision, and F1 score, proving that it is superior for Covid-19 detection. With further improvements to this network, we hope that it will provide doctors with an effective tool for diagnosing Covid-19.
A 4D-CBCT correction network based on contrastive learning for dose calculation in lung cancer
Cao, N.
Wang, Z.
Ding, J.
Zhang, H.
Zhang, S.
Gao, L.
Sun, J.
Xie, K.
Ni, X.
Radiat Oncol2024Journal Article, cited 0 times
Website
4D-Lung
*Lung Neoplasms/diagnostic imaging/radiotherapy
*Carcinoma
Non-Small-Cell Lung
*Spiral Cone-Beam Computed Tomography
Cone-Beam Computed Tomography/methods
Image Processing
Computer-Assisted/methods
Four-Dimensional Computed Tomography
Radiotherapy Planning
Computer-Assisted/methods
4d-cbct
Deep learning
Image quality correction
Lung cancer
OBJECTIVE: This study aimed to present a deep-learning network called contrastive learning-based cycle generative adversarial networks (CLCGAN) to mitigate streak artifacts and correct the CT value in four-dimensional cone beam computed tomography (4D-CBCT) for dose calculation in lung cancer patients. METHODS: 4D-CBCT and 4D computed tomography (CT) scans of 20 patients with locally advanced non-small cell lung cancer were used to train the deep-learning model on paired data. The lung tumors were located in the right upper lobe, right lower lobe, left upper lobe, left lower lobe, or mediastinum. Data from an additional five patients were used to create 4D synthetic computed tomography (sCT) images for testing. Using the 4D-CT as the ground truth, the quality of the 4D-sCT images was evaluated by quantitative and qualitative assessment methods. The correction of CT values was evaluated holistically and locally. To further validate the accuracy of the dose calculations, we compared the dose distributions and calculations of 4D-CBCT and 4D-sCT with those of 4D-CT. RESULTS: The structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) of the 4D-sCT increased from 87% and 22.31 dB to 98% and 29.15 dB, respectively. Compared with cycle-consistent generative adversarial networks, CLCGAN enhanced SSIM and PSNR by 1.1% (p < 0.01) and 0.42% (p < 0.01). Furthermore, CLCGAN significantly decreased the absolute mean differences of CT value in lungs, bones, and soft tissues. The dose calculation results revealed a significant improvement in 4D-sCT compared to 4D-CBCT. CLCGAN was the most accurate in dose calculations for the left lung (V5Gy), right lung (V5Gy), right lung (V20Gy), PTV (D98%), and spinal cord (D2%), with relative dose differences reduced by 6.84%, 3.84%, 1.46%, 0.86%, and 3.32% compared to 4D-CBCT. CONCLUSIONS: Based on the satisfactory results obtained in terms of image quality and CT value measurement, CLCGAN-corrected 4D-CBCT can be utilized for dose calculation in lung cancer.
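The SSIM/PSNR evaluation described above is straightforward to reproduce with scikit-image; the sketch below uses synthetic stand-in arrays in place of real CT/sCT slices, and the data range is an assumption:

```python
# Minimal sketch: PSNR/SSIM comparison of a reference slice vs. a synthetic one.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

ct = np.random.rand(128, 128).astype(np.float32)                 # stand-in 4D-CT slice
sct = ct + 0.05 * np.random.randn(128, 128).astype(np.float32)   # stand-in 4D-sCT slice

psnr = peak_signal_noise_ratio(ct, sct, data_range=1.0)
ssim = structural_similarity(ct, sct, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```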
A Comprehensive Review of Computer-Aided Diagnosis of Pulmonary Nodules Based on Computed Tomography Scans
Cao, Wenming
Wu, Rui
Cao, Guitao
He, Zhihai
IEEE Access2020Journal Article, cited 0 times
QIN LUNG CT
Lung cancer is one of the malignant tumor diseases with the fastest increase in morbidity and mortality, which poses a great threat to human health. Low-Dose Computed Tomography (LDCT) screening has been proved as a practical technique for improving the accuracy of pulmonary nodule detection and classification at early cancer diagnosis, which helps to reduce mortality. Therefore, with the explosive growth of CT data, it is of great clinical significance to exploit an effective Computer-Aided Diagnosis (CAD) system for radiologists on automatic nodule analysis. In this article, a comprehensive review of the application and development of CAD systems is presented. The experimental benchmarks for nodule analysis are first described and summarized, covering public datasets of lung CT scans, commonly used evaluation metrics and various medical competitions. We then introduce the main structure of a CAD system and present some efficient methodologies. For the extensive use of Convolutional Neural Network (CNN) based methods in pulmonary nodule investigations recently, we summarized the advantages of CNNs over traditional image processing methods. Besides, we mainly select the CAD systems developed by state-of-the-art CNNs with excellent performance and analyze their objectives, algorithms as well as results. Finally, research trends, existing challenges, and future directions in this field are discussed.
Multi-scale features exist widely in biomedical images. For example, the scale of lesions may vary greatly according to different diseases. Effective representation of multi-scale features is essential for fully perceiving and understanding objects, which guarantees the performance of models. However, in biomedical image tasks, the insufficiency of data may prevent models from effectively capturing multi-scale features. In this paper, we propose Feature Pyramid Block (FPB), a novel structure to improve multi-scale feature representation within a single convolutional layer, which can be easily plugged into existing convolutional networks. Experiments on public biomedical image datasets prove consistent performance improvement with FPB. Furthermore, the convergence speed is faster and the computational costs are lower when using FPB, which proves high efficiency of our method.
Head and neck cancer patient images for determining auto-segmentation accuracy in T2-weighted magnetic resonance imaging through expert manual segmentations
Cardenas, C. E.
Mohamed, A. S. R.
Yang, J. Z.
Gooding, M.
Veeraraghavan, H.
Kalpathy-Cramer, J.
Ng, S. P.
Ding, Y.
Wang, J. H.
Lai, S. Y.
Fuller, C. D.
Sharp, G.
Medical Physics2020Journal Article, cited 40 times
Website
AAPM RT-MAC
automatic segmentation
grand challenge
head and neck cancer
mri
radiation therapy
radiation-therapy
ncic ctg
quality
delineation
hknpcsg
dahanca
eortc
ncri
Purpose The use of magnetic resonance imaging (MRI) in radiotherapy treatment planning has rapidly increased due to its ability to evaluate patient's anatomy without the use of ionizing radiation and due to its high soft tissue contrast. For these reasons, MRI has become the modality of choice for longitudinal and adaptive treatment studies. Automatic segmentation could offer many benefits for these studies. In this work, we describe a T2-weighted MRI dataset of head and neck cancer patients that can be used to evaluate the accuracy of head and neck normal tissue auto-segmentation systems through comparisons to available expert manual segmentations.
Acquisition and validation methods T2-weighted MRI images were acquired for 55 head and neck cancer patients. These scans were collected after radiotherapy computed tomography (CT) simulation scans using a thermoplastic mask to replicate patient treatment position. All scans were acquired on a single 1.5 T Siemens MAGNETOM Aera MRI with two large four-channel flex phased-array coils. The scans covered the region encompassing the nasopharynx region cranially and supraclavicular lymph node region caudally, when possible, in the superior-inferior direction. Manual contours were created for the left/right submandibular gland, left/right parotids, left/right lymph node level II, and left/right lymph node level III. These contours underwent quality assurance to ensure adherence to predefined guidelines, and were corrected if edits were necessary.
Data format and usage notes The T2-weighted images and RTSTRUCT files are available in DICOM format. The regions of interest are named based on AAPM's Task Group 263 nomenclature recommendations (Glnd_Submand_L, Glnd_Submand_R, LN_Neck_II_L, Parotid_L, Parotid_R, LN_Neck_II_R, LN_Neck_III_L, LN_Neck_III_R). This dataset is available on The Cancer Imaging Archive (TCIA) by the National Cancer Institute under the collection “AAPM RT-MAC Grand Challenge 2019” (https://doi.org/10.7937/tcia.2019.bcfjqfqb).
Potential applications This dataset provides head and neck patient MRI scans to evaluate auto-segmentation systems on T2-weighted images. Additional anatomies could be provided at a later time to enhance the existing library of contours.
MultiATTUNet: Brain Tumor Segmentation and Survival Multitasking
Carmo, Diedre
Rittner, Leticia
Lotufo, Roberto
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Segmentation of glioma from three-dimensional magnetic resonance imaging (MRI) is useful for diagnosis and surgical treatment of patients with brain tumors. Manual segmentation is expensive, requiring medical specialists. In recent years, the Brain Tumor Segmentation Challenge (BraTS) has been calling researchers to submit automated glioma segmentation and survival prediction methods for evaluation and discussion over their public, multimodality MRI dataset with manual annotations. This work presents an exploration of different solutions to the problem, using 3D U-Nets and self-attention for multitasking both predictions, and also training 2D EfficientDet-derived segmentations, with the best results submitted to the official challenge leaderboard. We show that end-to-end multitasking of survival and segmentation, in this case, led to better results.
AutoComBat: a generic method for harmonizing MRI-based radiomic features
The use of multicentric data is becoming essential for developing generalizable radiomic signatures. In particular, Magnetic Resonance Imaging (MRI) data used in brain oncology are often heterogeneous in terms of scanners and acquisitions, which significantly impacts quantitative radiomic features. Various methods have been proposed to decrease this dependency, including methods acting directly on MR images, i.e., based on the application of several preprocessing steps before feature extraction, or the ComBat method, which harmonizes the radiomic features themselves. The ComBat method used for radiomics may be misleading and presents some limitations, such as the need to know the labels associated with the "batch effect". In addition, a statistically representative sample is required, and a signature cannot be applied when its batch label was not present in the training set. This work aimed to compare a priori and a posteriori radiomic harmonization methods and propose a code adaptation that is machine learning compatible. Furthermore, we developed AutoComBat, which aims to automatically determine the batch labels, using either MRI metadata or quality metrics as inputs to the proposed constrained clustering. A heterogeneous dataset consisting of high and low-grade gliomas coming from eight different centers was considered. The different methods were compared based on their ability to decrease the relative standard deviation of radiomic features extracted from white matter, and on their performance on a classification task using different machine learning models. ComBat and AutoComBat using image-derived quality metrics as inputs for batch assignment, as well as preprocessing methods, presented promising results on white matter harmonization, but with no clear consensus for all MR images. Preprocessing showed the best results on the T1w-gd images for the grading task. For T2w-flair, AutoComBat, using either metadata plus quality metrics or metadata alone as inputs, performs better than the conventional ComBat, highlighting its potential for data harmonization. Our results are MRI weighting, feature class and task dependent and require further investigation on other datasets.
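For intuition about what ComBat-style harmonization does to a feature matrix, the following is a minimal sketch of the underlying location-scale adjustment per batch; it omits the empirical-Bayes shrinkage that the full ComBat method applies, and the inputs are placeholders:

```python
# Minimal sketch: per-batch location-scale harmonization (ComBat-like, no EB step).
import numpy as np

def harmonize(features: np.ndarray, batch: np.ndarray) -> np.ndarray:
    """features: (n_samples, n_features); batch: (n_samples,) batch labels."""
    grand_mean = features.mean(axis=0)
    grand_std = features.std(axis=0) + 1e-8
    out = np.empty_like(features, dtype=float)
    for b in np.unique(batch):
        idx = batch == b
        mu = features[idx].mean(axis=0)           # per-batch location
        sd = features[idx].std(axis=0) + 1e-8     # per-batch scale
        # Map each batch onto the pooled distribution.
        out[idx] = (features[idx] - mu) / sd * grand_std + grand_mean
    return out
```

AutoComBat's contribution, as described above, is to infer the batch labels themselves (from metadata or quality metrics) rather than requiring them as known inputs.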
Automatic Brain Tumor Segmentation with a Bridge-Unet Deeply Supervised Enhanced with Downsampling Pooling Combination, Atrous Spatial Pyramid Pooling, Squeeze-and-Excitation and EvoNorm
Segmentation of brain tumors is a critical task for patient disease management. Since this task is time-consuming and subject to inter-expert delineation variation, automatic methods are of significant interest. The Multimodal Brain Tumor Segmentation Challenge (BraTS) has been in place for about a decade and provides a common platform to compare different automatic segmentation algorithms based on multiparametric magnetic resonance imaging (mpMRI) of gliomas. This year, the challenge took a big step forward by approximately tripling the total amount of data. We address the image segmentation challenge by developing a network based on a Bridge-Unet and improved with a concatenation of max and average pooling for downsampling, Squeeze-and-Excitation (SE) blocks, Atrous Spatial Pyramid Pooling (ASPP), and EvoNorm-S0. Our model was trained using the 1251 training cases from the BraTS 2021 challenge and achieved an average Dice similarity coefficient (DSC) of 0.92457, 0.87811 and 0.84094, as well as a 95% Hausdorff distance (HD) of 4.19442, 7.55256 and 14.13390 mm for the whole tumor, tumor core, and enhancing tumor, respectively, on the online validation platform composed of 219 cases. Similarly, our solution achieved a DSC of 0.92548, 0.87628 and 0.87122, as well as an HD95 of 4.30711, 17.84987 and 12.23361 mm, on the test dataset composed of 530 cases. Overall, our approach yielded well-balanced performance for each tumor subregion.
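One plausible reading of the "downsampling pooling combination" named above is concatenating max- and average-pooled feature maps along the channel axis; the PyTorch sketch below is an interpretation under that assumption, not the authors' exact block:

```python
# Minimal sketch: downsampling by concatenating max and average pooling.
import torch
import torch.nn as nn

class MaxAvgDownsample(nn.Module):
    def __init__(self):
        super().__init__()
        self.maxpool = nn.MaxPool3d(kernel_size=2)
        self.avgpool = nn.AvgPool3d(kernel_size=2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Halves spatial dimensions while doubling the channel count,
        # preserving both peak responses and smoothed context.
        return torch.cat([self.maxpool(x), self.avgpool(x)], dim=1)
```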
Standardization of brain MR images across machines and protocols: bridging the gap for MRI-based radiomics
Carré, Alexandre
Klausner, Guillaume
Edjlali, Myriam
Lerousseau, Marvin
Briend-Diop, Jade
Sun, Roger
Ammari, Samy
Reuzé, Sylvain
Andres, Emilie Alvarez
Estienne, Théo
Scientific Reports2020Journal Article, cited 0 times
Website
TCGA-GBM
TCGA-LGG
radiomics
A Federated Content Distribution System to Build Health Data Synchronization Services
In organizational environments, such as in hospitals, data have to be processed, preserved, and shared with other organizations in a cost-efficient manner. Moreover, organizations have to accomplish different mandatory non-functional requirements imposed by the laws, protocols, and norms of each country. In this context, this paper presents a Federated Content Distribution System to build infrastructure-agnostic health data synchronization services. In this federation, each hospital manages local and federated services based on a pub/sub model. The local services manage users and contents (i.e., medical imagery) inside the hospital, whereas federated services allow the cooperation of different hospitals sharing resources and data. Data preparation schemes were implemented to add non-functional requirements to data. Moreover, data published in the content distribution system are automatically synchronized to all users subscribed to the catalog where the content was published.
PARaDIM - A PHITS-based Monte Carlo tool for internal dosimetry with tetrahedral mesh computational phantoms
Carter, L. M.
Crawford, T. M.
Sato, T.
Furuta, T.
Choi, C.
Kim, C. H.
Brown, J. L.
Bolch, W. E.
Zanzonico, P. B.
Lewis, J. S.
J Nucl Med2019Journal Article, cited 0 times
Website
Anti-PD-1_MELANOMA
Radiation Dosage
Radiation Therapy
Mesh-type and voxel-based computational phantoms comprise the current state of the art for internal dose assessment via Monte Carlo simulations, but excel in different aspects, with mesh-type phantoms offering advantages over their voxel counterparts in terms of flexibility and realistic representation of detailed patient- or subject-specific anatomy. We have developed PARaDIM, a freeware application for implementing tetrahedral mesh-type phantoms in absorbed dose calculations via the Particle and Heavy Ion Transport code System (PHITS). It considers all medically relevant radionuclides, including alpha, beta, gamma, positron, and Auger/conversion electron emitters, and handles calculation of mean dose to individual regions as well as 3D dose distributions for visualization and analysis in a variety of medical imaging software packages. This work describes the development of PARaDIM, documents the measures taken to test and validate its performance, and presents examples to illustrate its uses. Methods: Human, small animal, and cell-level dose calculations were performed with PARaDIM and the results compared with those of widely accepted dosimetry programs and literature data. Several tetrahedral phantoms were developed or adapted using computer-aided modeling techniques for these comparisons. Results: For human dose calculations, agreement of PARaDIM with OLINDA 2.0 was good (within 10-20% for most organs) despite geometric differences among the phantoms tested. Agreement with MIRDcell for cell-level S-value calculations was within 5% in most cases. Conclusion: PARaDIM extends the use of Monte Carlo dose calculations to the broader community in nuclear medicine by providing a user-friendly graphical user interface for calculation setup and execution. PARaDIM leverages the enhanced anatomical realism provided by advanced computational reference phantoms or bespoke image-derived phantoms to enable improved assessments of radiation doses in a variety of radiopharmaceutical use cases, research, and preclinical development.
Multimodal mixed reality visualisation for intraoperative surgical guidance
Cartucho, João
Shapira, David
Ashrafian, Hutan
Giannarou, Stamatia
International Journal of Computer Assisted Radiology and Surgery2020Journal Article, cited 0 times
Website
TCGA-LIHC
Visualization
surgical guidance
A Multimodal Ensemble Driven by Multiobjective Optimisation to Predict Overall Survival in Non-Small-Cell Lung Cancer
Caruso, C. M.
Guarrasi, V.
Cordelli, E.
Sicilia, R.
Gentile, S.
Messina, L.
Fiore, M.
Piccolo, C.
Beomonte Zobel, B.
Iannello, G.
Ramella, S.
Soda, P.
J Imaging2022Journal Article, cited 0 times
NSCLC Radiogenomics
NSCLC-Radiomics
Convolutional Neural Network (CNN)
medical imaging
multiexpert systems
multimodal deep learning
oncology
optimisation
precision medicine
tabular data
Training
Lung cancer accounts for more deaths worldwide than any other cancer. In order to provide patients with the most effective treatment for these aggressive tumours, multimodal learning is emerging as a new and promising field of research that aims to extract complementary information from the data of different modalities for prognostic and predictive purposes. This knowledge could be used to optimise current treatments and maximise their effectiveness. In this work, we investigate the use of multimodal learning to predict overall survival on the CLARO dataset, which includes CT images and clinical data collected from a cohort of non-small-cell lung cancer patients. Our method allows the identification of the optimal set of classifiers to be included in the ensemble in a late-fusion approach. Specifically, after training unimodal models on each modality, it selects the best ensemble by solving a multiobjective optimisation problem that maximises both the recognition performance and the diversity of the predictions. In the ensemble, the labels of each sample are assigned using the majority voting rule. As further validation, we show that the proposed ensemble outperforms the models learning a single modality, obtaining state-of-the-art results on the task at hand.
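The ensemble's final fusion step is plain majority voting; a minimal sketch of that step, assuming integer class predictions stacked one row per model, could look like this:

```python
# Minimal sketch: majority voting over per-model label predictions.
import numpy as np

def majority_vote(predictions: np.ndarray) -> np.ndarray:
    """predictions: (n_models, n_samples) integer class labels."""
    n_classes = predictions.max() + 1
    # For each sample (column), count the votes received by each class.
    votes = np.apply_along_axis(
        lambda col: np.bincount(col, minlength=n_classes), 0, predictions)
    return votes.argmax(axis=0)  # per-sample winning class
```

The multiobjective selection described above would then decide which unimodal models' rows enter this matrix in the first place, trading off accuracy against prediction diversity.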
Deep learning-based tumor segmentation and classification in breast MRI with 3TP method
Carvalho, Edson Damasceno
da Silva Neto, Otilio Paulo
de Carvalho Filho, Antônio Oseas
Biomedical Signal Processing and Control2024Journal Article, cited 0 times
QIN Breast DCE-MRI
Breast cancer
Magnetic Resonance Imaging (MRI)
Tumor segmentation
Classification
Automatic Segmentation
BACKGROUND AND OBJECTIVE: Timely diagnosis of early breast cancer plays a critical role in improving patient outcomes and increasing treatment effectiveness. Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a minimally invasive test widely used in the analysis of breast cancer. Manual analysis of DCE-MRI images by specialists is extremely complex and exhausting and can lead to misinterpretation. Thus, the development of automated methods for analyzing breast DCE-MRI images is increasing. In this research, we propose an automatic methodology capable of detecting tumors and classifying their malignancy in DCE-MRI breast images. METHODOLOGY: The proposed method uses two deep learning architectures, SegNet and UNet, for breast tumor segmentation, and the three-time-point (3TP) method for classifying the malignancy of segmented tumors. RESULTS: The proposed methodology was tested on the public Quantitative Imaging Network (QIN) Breast DCE-MRI image set, and the best segmentation result was a Dice of 0.9332 and an IoU of 0.9799. For the classification of tumor malignancy, the methodology achieved an accuracy of 100%. CONCLUSIONS: We demonstrate that breast tumor segmentation in DCE-MRI images can be efficiently solved using deep learning architectures, and that tumor malignancy can be classified with the three-time-point method. The method can be integrated as a support system for specialists treating patients with breast cancer.
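The three-time-point (3TP) method classifies contrast-enhancement kinetics from signal intensities at three acquisition times; the sketch below uses illustrative thresholds to assign the classic curve types, not the calibrated values a real implementation would require:

```python
# Minimal sketch: 3TP-style curve-type assignment from three signal samples.
def curve_type(s0: float, s1: float, s2: float) -> str:
    """s0, s1, s2: signal at baseline, early post-contrast, delayed time points."""
    wash_in = (s1 - s0) / max(s0, 1e-6)   # initial enhancement fraction
    late = (s2 - s1) / max(s1, 1e-6)      # delayed-phase relative change
    if wash_in < 0.5:
        return "slow enhancement"         # weak uptake, benign-leaning
    if late > 0.1:
        return "persistent"               # continued uptake
    if late < -0.1:
        return "washout"                  # suspicious for malignancy
    return "plateau"
```

Applied voxel-wise inside a segmented tumor, the distribution of curve types then drives the malignancy decision.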
The Impact of Edema on MRI Radiomics for the Prediction of Lung Metastasis in Soft Tissue Sarcoma
Casale, Roberto
De Angelis, Riccardo
Coquelet, Nicolas
Mokhtari, Ayoub
Bali, Maria Antonietta
Diagnostics2023Journal Article, cited 0 times
Soft-tissue-Sarcoma
INTRODUCTION: This study aimed to evaluate whether radiomic features extracted solely from the edema of soft tissue sarcomas (STS) could predict the occurrence of lung metastasis in comparison with features extracted solely from the tumoral mass.
MATERIALS AND METHODS: We retrospectively analyzed magnetic resonance imaging (MRI) scans of 32 STSs, including 14 with lung metastasis and 18 without. A segmentation of the tumor mass and edema was assessed for each MRI examination. A total of 107 radiomic features were extracted for each mass segmentation and 107 radiomic features for each edema segmentation. A two-step feature selection process was applied. Two predictive features for the development of lung metastasis were selected from the mass-related features, as well as two predictive features from the edema-related features. Two Random Forest models were created based on these selected features; 100 random subsampling runs were performed. Key performance metrics, including accuracy and area under the ROC curve (AUC), were calculated, and the resulting accuracies were compared.
RESULTS: The model based on mass-related features achieved a median accuracy of 0.83 and a median AUC of 0.88, while the model based on edema-related features achieved a median accuracy of 0.75 and a median AUC of 0.79. A statistical analysis comparing the accuracies of the two models revealed no significant difference.
CONCLUSION: Both models showed promise in predicting the occurrence of lung metastasis in soft tissue sarcomas. These findings suggest that radiomic analysis of edema features can provide valuable insights into the prediction of lung metastasis in soft tissue sarcomas.
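The evaluation protocol above (Random Forest on two selected features, 100 random subsampling runs, median accuracy and AUC) is easy to reproduce with scikit-learn; the sketch below substitutes synthetic stand-in data for the radiomic features:

```python
# Minimal sketch: 100 random train/test subsampling runs of a Random Forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 2))               # stand-in for 2 selected features, 32 STSs
y = rng.integers(0, 2, size=32)            # stand-in for metastasis labels

accs, aucs = [], []
for seed in range(100):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=seed)
    clf = RandomForestClassifier(n_estimators=100, random_state=seed).fit(X_tr, y_tr)
    accs.append(accuracy_score(y_te, clf.predict(X_te)))
    aucs.append(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

print(f"median accuracy = {np.median(accs):.2f}, median AUC = {np.median(aucs):.2f}")
```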
Development and external validation of a non-invasive molecular status predictor of chromosome 1p/19q co-deletion based on MRI radiomics analysis of Low Grade Glioma patients
Casale, R.
Lavrova, E.
Sanduleanu, S.
Woodruff, H. C.
Lambin, P.
Eur J Radiol2021Journal Article, cited 0 times
Website
Algorithm Development
TCGA-LGG
LGG-1p19qDeletion
Radiomics
BRAIN
PURPOSE: The 1p/19q co-deletion status has been demonstrated to be a prognostic biomarker in lower grade glioma (LGG). The objective of this study was to build a magnetic resonance (MRI)-derived radiomics model to predict the 1p/19q co-deletion status. METHOD: 209 pathology-confirmed LGG patients from 2 different datasets from The Cancer Imaging Archive were retrospectively reviewed; one dataset with 159 patients as the training and discovery dataset and the other one with 50 patients as validation dataset. Radiomics features were extracted from T2- and T1-weighted post-contrast MRI resampled data using linear and cubic interpolation methods. For each of the voxel resampling methods a three-step approach was used for feature selection and a random forest (RF) classifier was trained on the training dataset. Model performance was evaluated on training and validation datasets and clinical utility indexes (CUIs) were computed. The distributions and intercorrelation for selected features were analyzed. RESULTS: Seven radiomics features were selected from the cubic interpolated features and five from the linear interpolated features on the training dataset. The RF classifier showed similar performance for cubic and linear interpolation methods in the training dataset with accuracies of 0.81 (0.75-0.86) and 0.76 (0.71-0.82) respectively; in the validation dataset the accuracy dropped to 0.72 (0.6-0.82) using cubic interpolation and 0.72 (0.6-0.84) using linear resampling. CUIs showed the model achieved satisfactory negative values (0.605 using cubic interpolation and 0.569 for linear interpolation). CONCLUSIONS: MRI has the potential for predicting the 1p/19q status in LGGs. Both cubic and linear interpolation methods showed similar performance in external validation.
Predicting risk of metastases and recurrence in soft-tissue sarcomas via Radiomics and Formal Methods
Casale, R.
Varriano, G.
Santone, A.
Messina, C.
Casale, C.
Gitto, S.
Sconfienza, L. M.
Bali, M. A.
Brunese, L.
JAMIA Open2023Journal Article, cited 0 times
Website
Soft-tissue-Sarcoma
Formal Methods
Radiomics
magnetic resonance imaging
metastases
model checking
soft-tissue sarcoma
OBJECTIVE: Soft-tissue sarcomas (STSs) of the extremities are a group of malignancies arising from mesenchymal cells that may develop distant metastases or local recurrence. In this article, we propose a novel methodology aimed at predicting metastasis and recurrence risk in patients with these malignancies by evaluating magnetic resonance radiomic features that are formally verified through formal logic models. MATERIALS AND METHODS: This is a retrospective study based on a public dataset evaluating T2-weighted fat-saturated or short tau inversion recovery MRI scans of patients having "metastases/local recurrence" (group B) or "no metastases/no local recurrence" (group A) as clinical outcomes. Once radiomic features are extracted, they are included in formal models, on which a logic property written by a radiologist and computer scientist coworkers is automatically verified. RESULTS: Evaluating the efficacy of Formal Methods in predicting distant metastases/local recurrence in STSs (group A vs group B), our methodology showed a sensitivity and specificity of 0.81 and 0.67, respectively; this suggests that radiomics and formal verification may be useful in predicting future metastasis or local recurrence development in soft-tissue sarcoma. DISCUSSION: The authors discuss literature supporting Formal Methods as a valid alternative to other Artificial Intelligence techniques. CONCLUSIONS: An innovative, noninvasive, and rigorous methodology can be significant in predicting local recurrence and metastasis development in STSs. Future work could assess the approach in multicentric studies to extract objective disease information, enriching the connection between quantitative radiomic analysis and radiological clinical evidence.
Cascaded V-Net Using ROI Masks for Brain Tumor Segmentation
In this work, we approach the brain tumor segmentation problem with a cascade of two CNNs inspired by the V-Net architecture [13], reformulating residual connections and making use of ROI masks to constrain the networks to train only on relevant voxels. This architecture allows dense training on problems with highly skewed class distributions, such as brain tumor segmentation, by focusing training only on the vicinity of the tumor area. We report results on the BraTS2017 Training and Validation sets.
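A minimal PyTorch sketch of the ROI-masking idea described above, averaging a per-voxel loss only over voxels inside the ROI mask (the choice of binary cross-entropy here is an assumption, not necessarily the paper's loss):

```python
# Minimal sketch: restrict a per-voxel loss to voxels inside an ROI mask.
import torch
import torch.nn.functional as F

def roi_masked_bce(logits, target, roi_mask):
    """logits, target, roi_mask: (N, D, H, W); roi_mask holds 0/1 values."""
    roi_mask = roi_mask.float()
    per_voxel = F.binary_cross_entropy_with_logits(
        logits, target.float(), reduction="none")
    masked = per_voxel * roi_mask                        # zero out non-ROI voxels
    return masked.sum() / roi_mask.sum().clamp(min=1)    # mean over ROI only
```

Restricting the loss this way keeps the easy, overwhelmingly negative background voxels from dominating the gradient, which is the point of training only near the tumor.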
Explainable Machine Learning for Early Assessment of COVID-19 Risk Prediction in Emergency Departments
Casiraghi, Elena
Malchiodi, Dario
Trucco, Gabriella
Frasca, Marco
Cappelletti, Luca
Fontana, Tommaso
Esposito, Alessandro Andrea
Avola, Emanuele
Jachetti, Alessandro
Reese, Justin
Rizzi, Alessandro
Robinson, Peter N.
Valentini, Giorgio
IEEE Access2020Journal Article, cited 0 times
LCTSC
Between January and October of 2020, the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infected more than 34 million persons in a worldwide pandemic leading to over one million deaths (data from Johns Hopkins University). Since the virus began to spread, emergency departments were busy with COVID-19 patients for whom a quick decision regarding in- or outpatient care was required. The virus can cause characteristic abnormalities in chest radiographs (CXR), but, due to the low sensitivity of CXR, additional variables and criteria are needed to accurately predict risk. Here, we describe a computerized system primarily aimed at extracting the most relevant radiological, clinical, and laboratory variables for improving patient risk prediction, and secondarily at presenting an explainable machine learning system, which may provide simple decision criteria to be used by clinicians as a support for assessing patient risk. To achieve robust and reliable variable selection, Boruta and Random Forest (RF) are combined in a 10-fold cross-validation scheme to produce a variable-importance estimate not biased by the presence of surrogates. The most important variables are then selected to train an RF classifier, whose rules may be extracted, simplified, and pruned to finally build an associative tree, particularly appealing for its simplicity. Results show that the radiological score automatically computed through a neural network is highly correlated with the score computed by radiologists, and that laboratory variables, together with the number of comorbidities, aid risk prediction. The prediction performance of our approach was compared to that of generalized linear models and shown to be effective and robust. The proposed machine learning-based computational system can be easily deployed and used in emergency departments for rapid and accurate risk prediction in COVID-19 patients.
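A simplified scikit-learn sketch of the variable-selection idea described above, averaging Random Forest importances across a 10-fold scheme for a more stable ranking; the Boruta shadow-feature step the paper combines with RF is omitted here, and the data is synthetic:

```python
# Minimal sketch: fold-averaged Random Forest feature importances.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
importances = np.zeros(X.shape[1])

for train_idx, _ in StratifiedKFold(n_splits=10, shuffle=True,
                                    random_state=0).split(X, y):
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    rf.fit(X[train_idx], y[train_idx])
    importances += rf.feature_importances_ / 10    # average across folds

top = np.argsort(importances)[::-1][:5]            # five most important variables
print("top features:", top)
```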
The Impact of Normalization Approaches to Automatically Detect Radiogenomic Phenotypes Characterizing Breast Cancer Receptors Status
Castaldo, Rossana
Pane, Katia
Nicolai, Emanuele
Salvatore, Marco
Franzese, Monica
Cancers (Basel)2020Journal Article, cited 0 times
Website
TCGA-BRCA
Radiomics
Radiogenomics
In breast cancer studies, combining quantitative radiomic and genomic signatures can help identify and characterize radiogenomic phenotypes as a function of molecular receptor status. Biomedical image processing lacks standards for radiomic feature normalization, and neglecting feature normalization can strongly bias the overall analysis. This study evaluates the effect of several normalization techniques on the prediction of four clinical phenotypes: estrogen receptor (ER), progesterone receptor (PR), human epidermal growth factor receptor 2 (HER2), and triple-negative (TN) status, from quantitative features. The Cancer Imaging Archive (TCIA) radiomic features from 91 T1-weighted Dynamic Contrast Enhancement MRI scans of invasive breast cancers were investigated in association with breast invasive carcinoma miRNA expression profiling from The Cancer Genome Atlas (TCGA). Three advanced machine learning techniques (Support Vector Machine, Random Forest, and Naive Bayes) were investigated to distinguish between molecular prognostic indicators, achieving area under the ROC curve (AUC) values of 86%, 93%, 91%, and 91% for the prediction of ER+ versus ER-, PR+ versus PR-, HER2+ versus HER2-, and triple-negative status, respectively. In conclusion, radiomic features enable discrimination of major breast cancer molecular subtypes and may yield a potential imaging biomarker for advancing precision medicine.
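Comparisons like the one above reduce to evaluating the same classifier under different feature scalers; the scikit-learn sketch below illustrates the pattern with synthetic stand-in data in place of the TCIA/TCGA features:

```python
# Minimal sketch: compare normalization schemes under a fixed classifier.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, RobustScaler, StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=91, n_features=30, random_state=0)

for scaler in (StandardScaler(), MinMaxScaler(), RobustScaler()):
    # Fitting the scaler inside the pipeline avoids leaking test statistics.
    auc = cross_val_score(make_pipeline(scaler, SVC()),
                          X, y, cv=5, scoring="roc_auc").mean()
    print(f"{scaler.__class__.__name__}: mean AUC = {auc:.3f}")
```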
AI applications to medical images: From machine learning to deep learning
Castiglioni, Isabella
Rundo, Leonardo
Codari, Marina
Di Leo, Giovanni
Salvatore, Christian
Interlenghi, Matteo
Gallivanone, Francesca
Cozzi, Andrea
D'Amico, Natascha Claudia
Sardanelli, Francesco
Physica Medica2021Journal Article, cited 0 times
Crowds-Cure-2017
PURPOSE: Artificial intelligence (AI) models are playing an increasing role in biomedical research and healthcare services. This review focuses on challenges points to be clarified about how to develop AI applications as clinical decision support systems in the real-world context.
METHODS: A narrative review has been performed including a critical assessment of articles published between 1989 and 2021 that guided challenging sections.
RESULTS: We first illustrate the architectural characteristics of machine learning (ML)/radiomics and deep learning (DL) approaches. For ML/radiomics, the phases of feature selection and of training, validation, and testing are described. DL models are presented as multi-layered artificial/convolutional neural networks, allowing us to directly process images. The data curation section includes technical steps such as image labelling, image annotation (with segmentation as a crucial step in radiomics), data harmonization (enabling compensation for differences in imaging protocols that typically generate noise in non-AI imaging studies) and federated learning. Thereafter, we dedicate specific sections to: sample size calculation, considering multiple testing in AI approaches; procedures for data augmentation to work with limited and unbalanced datasets; and the interpretability of AI models (the so-called black box issue). Pros and cons for choosing ML versus DL to implement AI applications to medical imaging are finally presented in a synoptic way.
CONCLUSIONS: Biomedicine and healthcare systems are one of the most important fields for AI applications and medical imaging is probably the most suitable and promising domain. Clarification of specific challenging points facilitates the development of such systems and their translation to clinical practice.
Brain Tumor Segmentation and Parsing on MRIs Using Multiresolution Neural Networks
Brain lesion segmentation is a critical application of computer vision to biomedical image analysis. The difficulty derives from the great variance between instances and the high computational cost of processing three-dimensional data. We introduce a neural network for brain tumor semantic segmentation that parses the tumors' internal structures and is capable of processing volumetric data from multiple MRI modalities simultaneously. As a result, the method is able to learn from small training datasets. We develop an architecture that has four parallel pathways with residual connections. It receives patches from images with different spatial resolutions and analyzes them independently. The results are then combined using fully-connected layers to obtain a semantic segmentation of the brain tumor. We evaluated our method using the 2017 BraTS Challenge dataset, reaching average Dice coefficients of 89%, 88%, and 86% over the training, validation, and test images, respectively.
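The Dice coefficient used to report these results can be computed directly from binary masks; a minimal sketch:

```python
# Sketch: Dice similarity coefficient for binary segmentation masks,
# as used to score the brain-tumor segmentations above.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """2 * |A ∩ B| / (|A| + |B|) for binary masks of any shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)
```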
Classification of Clinically Significant Prostate Cancer on Multi-Parametric MRI: A Validation Study Comparing Deep Learning and Radiomics
Selección de un algoritmo para la clasificación de Nódulos Pulmonares Solitarios [Selection of an algorithm for the classification of solitary pulmonary nodules]
Castro, Arelys Rivero
Correa, Luis Manuel Cruz
Lezcano, Jeffrey Artiles
Revista Cubana de Informática Médica2016Journal Article, cited 0 times
Website
LIDC-IDRI
MRI volume changes of axillary lymph nodes as predictor of pathological complete responses to neoadjuvant chemotherapy in breast cancer
Cattell, Renee F.
Kang, James J.
Ren, Thomas
Huang, Pauline B.
Muttreja, Ashima
Dacosta, Sarah
Li, Haifang
Baer, Lea
Clouston, Sean
Palermo, Roxanne
Fisher, Paul
Bernstein, Cliff
Cohen, Jules A.
Duong, Tim Q.
Clinical Breast Cancer2019Journal Article, cited 0 times
Website
ISPY1
ACRIN 6657
Breast
Radiomics
Introduction: Longitudinal monitoring of breast tumor volume over the course of chemotherapy is informative of pathological response. This study aims to determine whether axillary lymph node (aLN) volume by MRI could augment the prediction accuracy of treatment response to neoadjuvant chemotherapy (NAC). Materials and Methods: Level-2a curated data from the I-SPY-1 TRIAL (2002-2006) were used. Patients had stage 2 or 3 breast cancer. MRI was acquired pre-, during, and post-NAC. A subset with visible aLN on MRI was identified (N = 132). Prediction of pathological complete response (PCR) was made using breast tumor volume changes, nodal volume changes, and combined breast tumor and nodal volume changes, with sub-stratification with and without large lymph nodes (3 mL, or ~1.79 cm diameter, cutoff). Receiver operating characteristic curve analysis was used to quantify prediction performance. Results: Rates of change of aLN and breast tumor volume were informative of pathological response, with prediction being most informative early in treatment (AUC: 0.63-0.82) compared to later in treatment (AUC: 0.50-0.73). Larger aLN volume was associated with hormone receptor negativity, with the largest nodal volume for triple-negative subtypes. Sub-stratification by node size improved predictive performance, with the best predictive model for large nodes having an AUC of 0.82. Conclusion: Axillary lymph node MRI offers clinically relevant information and has the potential to predict treatment response to neoadjuvant chemotherapy in breast cancer patients.
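A minimal sketch of the ROC analysis described above, assuming hypothetical arrays of early nodal-volume changes and PCR labels rather than the trial's data:

```python
# Sketch: ROC analysis of early lymph-node volume change as a predictor
# of pathological complete response (PCR). Placeholder arrays only.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
volume_change = rng.normal(size=132)   # % change in aLN volume early in NAC
pcr = rng.integers(0, 2, size=132)     # 1 = pathological complete response

# Larger shrinkage (more negative change) should predict PCR, so negate.
auc = roc_auc_score(pcr, -volume_change)
fpr, tpr, thresholds = roc_curve(pcr, -volume_change)
print(f"AUC = {auc:.2f}")
```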
Highly accurate model for prediction of lung nodule malignancy with CT scans
Causey, Jason L
Zhang, Junyu
Ma, Shiqian
Jiang, Bo
Qualls, Jake A
Politte, David G
Prior, Fred
Zhang, Shuzhong
Huang, Xiuzhen
Scientific Reports2018Journal Article, cited 5 times
Website
LIDC-IDRI
Radiomics
LUNG
Classification
Convolutional Neural Network (CNN)
Computed tomography (CT) examinations are commonly used to predict lung nodule malignancy in patients, which are shown to improve noninvasive early diagnosis of lung cancer. It remains challenging for computational approaches to achieve performance comparable to experienced radiologists. Here we present NoduleX, a systematic approach to predict lung nodule malignancy from CT data, based on deep learning convolutional neural networks (CNN). For training and validation, we analyze >1000 lung nodules in images from the LIDC/IDRI cohort. All nodules were identified and classified by four experienced thoracic radiologists who participated in the LIDC project. NoduleX achieves high accuracy for nodule malignancy classification, with an AUC of ~0.99. This is commensurate with the analysis of the dataset by experienced radiologists. Our approach, NoduleX, provides an effective framework for highly accurate nodule malignancy prediction with the model trained on a large patient population. Our results are replicable with software available at http://bioinformatics.astate.edu/NoduleX .
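A toy 3-D CNN patch classifier in PyTorch gives the flavor of a nodule-malignancy model such as NoduleX; the architecture below is illustrative only, not the published network:

```python
# Sketch: a small 3-D CNN for nodule-malignancy classification on CT
# patches. Layer counts and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class NoduleCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(64), nn.ReLU(), nn.Linear(64, 2),
        )

    def forward(self, x):  # x: (batch, 1, D, H, W) CT patch
        return self.head(self.features(x))

logits = NoduleCNN()(torch.randn(4, 1, 32, 32, 32))  # 4 fake 32^3 patches
print(logits.shape)  # torch.Size([4, 2])
```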
Main challenges on the curation of large scale datasets for pancreas segmentation using deep learning in multi-phase CT scans: Focus on cardinality, manual refinement, and annotation quality
Cavicchioli, M.
Moglia, A.
Pierelli, L.
Pugliese, G.
Cerveri, P.
Comput Med Imaging Graph2024Journal Article, cited 0 times
Website
Pancreas-CT
Medical Decathlon
Artificial intelligence surgery
Artificial intelligence surgical planning
Deep learning
PANCREAS
Medical imaging dataset acquisition
Medical imaging dataset curation
Pancreas dataset
Segmentation
Accurate segmentation of the pancreas in computed tomography (CT) holds paramount importance in diagnostics, surgical planning, and interventions. Recent studies have proposed supervised deep-learning models for segmentation, but their efficacy relies on the quality and quantity of the training data. Most of such works employed small-scale public datasets, without proving the efficacy of generalization to external datasets. This study explored the optimization of pancreas segmentation accuracy by pinpointing the ideal dataset size, understanding resource implications, examining manual refinement impact, and assessing the influence of anatomical subregions. We present the AIMS-1300 dataset encompassing 1,300 CT scans. Its manual annotation by medical experts required 938 h. A 2.5D UNet was implemented to assess the impact of training sample size on segmentation accuracy by partitioning the original AIMS-1300 dataset into 11 smaller subsets of progressively increasing numerosity. The findings revealed that training sets exceeding 440 CTs did not lead to better segmentation performance. In contrast, nnU-Net and UNet with Attention Gate reached a plateau for 585 CTs. Tests on generalization on the publicly available AMOS-CT dataset confirmed this outcome. As the size of the partition of the AIMS-1300 training set increases, the number of error slices decreases, reaching a minimum with 730 and 440 CTs, for AIMS-1300 and AMOS-CT datasets, respectively. Segmentation metrics on the AIMS-1300 and AMOS-CT datasets improved more on the head than the body and tail of the pancreas as the dataset size increased. By carefully considering the task and the characteristics of the available data, researchers can develop deep learning models without sacrificing performance even with limited data. This could accelerate developing and deploying artificial intelligence tools for pancreas surgery and other surgical data science applications.
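The dataset-size experiment reduces to a learning-curve loop; a sketch with hypothetical train_model and evaluate_dice callables standing in for the actual 2.5D U-Net training and evaluation code:

```python
# Sketch: learning-curve experiment over progressively larger training
# subsets, as in the study above. train_model and evaluate_dice are
# hypothetical stand-ins supplied by the caller.
import numpy as np

def learning_curve(cases, subset_sizes, train_model, evaluate_dice, seed=0):
    rng = np.random.default_rng(seed)
    shuffled = rng.permutation(cases)
    scores = {}
    for n in subset_sizes:                  # e.g. [130, 260, ..., 1300]
        model = train_model(shuffled[:n])   # train on the first n cases
        scores[n] = evaluate_dice(model)    # Dice on a fixed held-out set
    return scores                           # inspect where the plateau starts
```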
A survey on deep learning applied to medical images: from simple artificial neural networks to generative models
Celard, P.
Iglesias, E. L.
Sorribes-Fdez, J. M.
Romero, R.
Vieira, A. Seara
Borrajo, L.
Neural Computing and Applications2022Journal Article, cited 0 times
Breast-Cancer-Screening-DBT
Pancreas-CT
Deep learning techniques, in particular generative models, have taken on great importance in medical image analysis. This paper surveys fundamental deep learning concepts related to medical image generation. It provides concise overviews of studies that use some of the latest state-of-the-art models from recent years, applied to medical images of different injured body areas or organs with an associated disease (e.g., brain tumors and COVID-19 lung pneumonia). The motivation for this study is to offer a comprehensive overview of artificial neural networks (NNs) and deep generative models in medical imaging, so that more groups and authors who are not familiar with deep learning may consider its use in medical work. We review the use of generative models, such as generative adversarial networks and variational autoencoders, as techniques to achieve semantic segmentation, data augmentation, and better classification algorithms, among other purposes. In addition, a collection of widely used public medical datasets containing magnetic resonance (MR) images, computed tomography (CT) scans, and common pictures is presented. Finally, we feature a summary of the current state of generative models in medical imaging, including key features, current challenges, and future research paths.
Detection of Tumor Slice in Brain Magnetic Resonance Images by Feature Optimized Transfer Learning
Celik, Salih
Kasım, Ömer
Aksaray University Journal of Science and Engineering2020Journal Article, cited 0 times
Website
REMBRANDT
RIDER NEURO MRI
Brain-Tumor-Progression
radiomic features
Equivariant neural networks for inverse problems
Celledoni, Elena
Ehrhardt, Matthias J
Etmann, Christian
Owren, Brynjulf
Schönlieb, Carola-Bibiane
Sherry, Ferdia
2021Journal Article, cited 0 times
LIDC-IDRI
In recent years the use of convolutional layers to encode an inductive bias (translational equivariance) in neural networks has proven to be a very fruitful idea. The successes of this approach have motivated a line of research into incorporating other symmetries into deep learning methods, in the form of group equivariant convolutional neural networks. Much of this work has been focused on roto-translational symmetry of R^d, but other examples are the scaling symmetry of R^d and rotational symmetry of the sphere. In this work, we demonstrate that group equivariant convolutional operations can naturally be incorporated into learned reconstruction methods for inverse problems that are motivated by the variational regularisation approach. Indeed, if the regularisation functional is invariant under a group symmetry, the corresponding proximal operator will satisfy an equivariance property with respect to the same group symmetry. As a result of this observation, we design learned iterative methods in which the proximal operators are modelled as group equivariant convolutional neural networks. We use roto-translationally equivariant operations in the proposed methodology and apply it to the problems of low-dose computerised tomography reconstruction and subsampled magnetic resonance imaging reconstruction. The proposed methodology is demonstrated to improve the reconstruction quality of a learned reconstruction method with a little extra computational cost at training time but without any extra cost at test time.
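The learned iterative scheme the paper builds on can be sketched as an unrolled proximal-gradient network; here the proximal operator is a plain CNN rather than an equivariant one, so only the unrolling is shown, and forward_op/adjoint_op are user-supplied callables (e.g., a CT projector and its adjoint) assumed for illustration:

```python
# Sketch: learned unrolled proximal-gradient reconstruction. The prox
# network here is an ordinary CNN, standing in for the paper's
# group-equivariant proximal operators.
import torch
import torch.nn as nn

class LearnedProxGrad(nn.Module):
    def __init__(self, forward_op, adjoint_op, n_iters=8):
        super().__init__()
        self.A, self.At, self.n_iters = forward_op, adjoint_op, n_iters
        self.step = nn.Parameter(torch.tensor(0.1))
        self.prox = nn.Sequential(  # stand-in for an equivariant prox net
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, y, x0):
        x = x0
        for _ in range(self.n_iters):
            grad = self.At(self.A(x) - y)        # data-fidelity gradient
            x = self.prox(x - self.step * grad)  # learned proximal step
        return x
```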
Improving the classification of veterinary thoracic radiographs through inter-species and inter-pathology self-supervised pre-training of deep learning models
The analysis of veterinary radiographic imaging data is an essential step in the diagnosis of many thoracic lesions. Given the limited time that physicians can devote to a single patient, it would be valuable to implement an automated system to help clinicians make faster but still accurate diagnoses. Currently, most of such systems are based on supervised deep learning approaches. However, the problem with these solutions is that they need a large database of labeled data. Access to such data is often limited, as it requires a great investment of both time and money. Therefore, in this work we present a solution that allows higher classification scores to be obtained using knowledge transfer from inter-species and inter-pathology self-supervised learning methods. Before training the network for classification, pretraining of the model was performed using self-supervised learning approaches on publicly available unlabeled radiographic data of human and dog images, which allowed substantially increasing the number of images for this phase. The self-supervised learning approaches included the Beta Variational Autoencoder, the Soft-Introspective Variational Autoencoder, and a Simple Framework for Contrastive Learning of Visual Representations. After the initial pretraining, fine-tuning was performed for the collected veterinary dataset using 20% of the available data. Next, a latent space exploration was performed for each model after which the encoding part of the model was fine-tuned again, this time in a supervised manner for classification. Simple Framework for Contrastive Learning of Visual Representations proved to be the most beneficial pretraining method. Therefore, it was for this method that experiments with various fine-tuning methods were carried out. We achieved a mean ROC AUC score of 0.77 and 0.66, respectively, for the laterolateral and dorsoventral projection datasets. The results show significant improvement compared to using the model without any pretraining approach.
Renal cell carcinoma: predicting RUNX3 methylation level and its consequences on survival with CT features
Cen, Dongzhi
Xu, Li
Zhang, Siwei
Chen, Zhiguang
Huang, Yan
Li, Ziqi
Liang, Bo
European Radiology2019Journal Article, cited 0 times
Website
TCGA-KIRC
clear cell renal cell carcinoma (ccRCC)
Computed Tomography (CT)
Radiogenomics
Cox regression
PURPOSE: To investigate associations between CT imaging features, RUNX3 methylation level, and survival in clear cell renal cell carcinoma (ccRCC). MATERIALS AND METHODS: Patients were divided into high and low RUNX3 methylation groups according to RUNX3 methylation levels (the threshold was identified using X-tile). The CT scanning data from 106 ccRCC patients were retrospectively analyzed. The relationship between RUNX3 methylation level and overall survival was evaluated using Kaplan-Meier analysis and Cox regression analysis (univariate and multivariate). The relationship between RUNX3 methylation level and CT features was evaluated using the chi-square test and logistic regression analysis (univariate and multivariate). RESULTS: A beta value cutoff of 0.53 was used to distinguish high methylation (N = 44) from low methylation tumors (N = 62). Patients with lower levels of methylation had longer median overall survival (49.3 vs. 28.4 months; low vs. high, adjusted hazard ratio [HR] 4.933, 95% CI 2.054-11.852, p < 0.001). On univariate logistic regression analysis, four risk factors (margin, side, long diameter, and intratumoral vascularity) were associated with RUNX3 methylation level (all p < 0.05). Multivariate logistic regression analysis found that three risk factors (side: left vs. right, odds ratio [OR] 2.696, p = 0.024, 95% CI 1.138-6.386; margin: ill-defined vs. well-defined, OR 2.685, p = 0.038, 95% CI 1.057-6.820; and intratumoral vascularity: yes vs. no, OR 3.286, p = 0.008, 95% CI 1.367-7.898) were significant independent predictors of high methylation tumors. This model had an area under the receiver operating characteristic curve (AUC) of 0.725 (95% CI 0.623-0.827). CONCLUSIONS: Higher levels of RUNX3 methylation are associated with shorter survival in ccRCC patients. The presence of intratumoral vascularity, an ill-defined margin, and left-side tumor location were significant independent predictors of a high methylation level of the RUNX3 gene. KEY POINTS: * RUNX3 methylation level is negatively associated with overall survival in ccRCC patients. * The presence of intratumoral vascularity, an ill-defined margin, and left-side tumor location were significant independent predictors of a high methylation level of the RUNX3 gene.
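A minimal sketch of the survival stratification described above, using the lifelines package on made-up placeholder data with the paper's 0.53 beta-value cutoff:

```python
# Sketch: Kaplan-Meier curves and log-rank test for high- vs
# low-methylation groups. Requires the `lifelines` package; all arrays
# are placeholders, not the study's data.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
months = rng.exponential(40, size=106)  # overall survival (months)
event = rng.integers(0, 2, size=106)    # 1 = death observed, 0 = censored
beta = rng.uniform(0, 1, size=106)      # RUNX3 methylation beta value
high = beta >= 0.53                     # cutoff from the paper

km_high = KaplanMeierFitter().fit(months[high], event[high], label="high methylation")
km_low = KaplanMeierFitter().fit(months[~high], event[~high], label="low methylation")

res = logrank_test(months[high], months[~high],
                   event_observed_A=event[high], event_observed_B=event[~high])
print(f"log-rank p = {res.p_value:.4f}")
```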
Survey of Image Processing Techniques for Brain Pathology Diagnosis: Challenges and Opportunities
In recent years, a number of new products introduced to the global market combine intelligent robotics, artificial intelligence, and smart interfaces to provide powerful tools to support professional decision making. However, while brain disease diagnosis from brain scan images is supported by imaging robotics, the data analysis to form a medical diagnosis is performed solely by highly trained medical professionals. Recent advances in medical imaging techniques, artificial intelligence, machine learning, and computer vision present new opportunities to build intelligent decision support tools to aid the diagnostic process, increase disease detection accuracy, reduce error, automate the monitoring of patients' recovery, and discover new knowledge about the disease cause and its treatment. This article introduces the topic of medical diagnosis of brain diseases from MRI-based images. We describe existing multi-modal imaging techniques of the brain's soft tissue and describe in detail how the resulting images are analyzed by a radiologist to form a diagnosis. Several comparisons between the best results of classifying natural scenes and medical image analysis illustrate the challenges of applying existing image processing techniques to the medical image analysis domain. The survey of medical image processing methods also identified several knowledge gaps, the need for automation of image processing analysis, and the need to identify the brain structures in medical images that differentiate healthy tissue from a pathology. This survey is grounded in the cases of brain tumor analysis and traumatic brain injury diagnosis, as these two case studies illustrate the vastly different approaches needed to define, extract, and synthesize meaningful information from multiple MRI image sets for a diagnosis. Finally, the article summarizes artificial intelligence frameworks that are built as multi-stage, hybrid, hierarchical information processing workflows and the benefits of applying these models in medical diagnosis to build intelligent physician's aids with knowledge transparency, expert knowledge embedding, and increased analytical quality.
The Río Hortega University Hospital Glioblastoma dataset: A comprehensive collection of preoperative, early postoperative and recurrence MRI scans (RHUH-GBM)
Cepeda, Santiago
García-García, Sergio
Arrese, Ignacio
Herrero, Francisco
Escudero, Trinidad
Zamora, Tomás
Sarabia, Rosario
Data in Brief2023Journal Article, cited 0 times
RHUH-GBM
Glioblastoma, a highly aggressive primary brain tumor, is associated with poor patient outcomes. Although magnetic resonance imaging (MRI) plays a critical role in diagnosing, characterizing, and forecasting glioblastoma progression, public MRI repositories present significant drawbacks, including insufficient postoperative and follow-up studies as well as expert tumor segmentations. To address these issues, we present the "Río Hortega University Hospital Glioblastoma Dataset (RHUH-GBM)," a collection of multiparametric MRI images, volumetric assessments, molecular data, and survival details for glioblastoma patients who underwent total or near-total enhancing tumor resection. The dataset features expert-corrected segmentations of tumor subregions, offering valuable ground truth data for developing algorithms for postoperative and follow-up MRI scans.
Non-navigated 2D intraoperative ultrasound: An unsophisticated surgical tool to achieve high standards of care in glioma surgery
Cepeda, S.
Garcia-Garcia, S.
Arrese, I.
Sarabia, R.
J Neurooncol2024Journal Article, cited 0 times
Website
RHUH-GBM
Glioma
Intraoperative imaging
Intraoperative ultrasound
Surgery
PURPOSE: In an era characterized by rapid progression in neurosurgical technologies, traditional tools such as the non-navigated two-dimensional intraoperative ultrasound (nn-2D-IOUS) risk being overshadowed. Against this backdrop, this study endeavors to provide a comprehensive assessment of the clinical efficacy and surgical relevance of nn-2D-IOUS, specifically in the context of glioma resections. METHODS: This retrospective study undertaken at a single center evaluated 99 consecutive, non-selected patients diagnosed with both high-grade and low-grade gliomas. The primary objective was to assess the proficiency of nn-2D-IOUS in generating satisfactory image quality, identifying residual tumor tissue, and its influence on the extent of resection. To validate these results, early postoperative MRI data served as the reference standard. RESULTS: The nn-2D-IOUS exhibited a high level of effectiveness, successfully generating good quality images in 79% of the patients evaluated. With a sensitivity rate of 68% and a perfect specificity of 100%, nn-2D-IOUS unequivocally demonstrated its utility in intraoperative residual tumor detection. Notably, when total tumor removal was the surgical objective, a resection exceeding 95% of the initial tumor volume was achieved in 86% of patients. Additionally, patients in whom residual tumor was not detected by nn-2D-IOUS, the mean volume of undetected tumor tissue was remarkably minimal, averaging at 0.29 cm(3). CONCLUSION: Our study supports nn-2D-IOUS's invaluable role in glioma surgery. The results highlight the utility of traditional technologies for enhanced surgical outcomes, even when compared to advanced alternatives. This is particularly relevant for resource-constrained settings and emphasizes optimizing existing tools for efficient patient care. NCT05873946 - 24/05/2023 - Retrospectively registered.
Deep learning-based postoperative glioblastoma segmentation and extent of resection evaluation: Development, external validation, and model comparison
Cepeda, S.
Romero, R.
Luque, L.
Garcia-Perez, D.
Blasco, G.
Luppino, L. T.
Kuttner, S.
Esteban-Sinovas, O.
Arrese, I.
Solheim, O.
Eikenes, L.
Karlberg, A.
Perez-Nunez, A.
Zanier, O.
Serra, C.
Staartjes, V. E.
Bianconi, A.
Rossi, L. F.
Garbossa, D.
Escudero, T.
Hornero, R.
Sarabia, R.
Neurooncol Adv2024Journal Article, cited 0 times
Website
Burdenko-GBM-Progression
IvyGAP
QIN GBM Treatment Response
Background: The pursuit of automated methods to assess the extent of resection (EOR) in glioblastomas is challenging, requiring precise measurement of residual tumor volume. Many algorithms focus on preoperative scans, making them unsuitable for postoperative studies. Our objective was to develop a deep learning-based model for postoperative segmentation using magnetic resonance imaging (MRI). We also compared our model's performance with other available algorithms. Methods: To develop the segmentation model, a training cohort from three research institutions and three public databases was used. Multiparametric MRI scans with ground truth labels for contrast-enhancing tumor, edema, and surgical cavity served as training data. The models were trained using the MONAI and nnU-Net frameworks. Comparisons were made with currently available segmentation models using an external cohort from a research institution and a public database. Additionally, the model's ability to classify EOR was evaluated using the RANO-Resect classification system. To further validate our best-trained model, an additional independent cohort was used. Results: The study included 586 scans: 395 for model training, 52 for model comparison, and 139 scans for independent validation. The nnU-Net framework produced the best model, with median Dice scores of 0.81 for contrast-enhancing tumor, 0.77 for edema, and 0.81 for surgical cavity. Our best-trained model classified patients into maximal and submaximal resection categories with 96% accuracy in the model comparison dataset and 84% in the independent validation cohort. Conclusion: Our nnU-Net-based model outperformed other algorithms in both segmentation and EOR classification tasks, providing a freely accessible tool with promising clinical applicability.
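Once pre- and postoperative segmentations are available, the EOR evaluation reduces to simple volume arithmetic; a sketch, with the 95% maximal-resection threshold taken as illustrative:

```python
# Sketch: extent of resection (EOR) from pre- and postoperative binary
# segmentations of contrast-enhancing (CE) tumor. The 95% cutoff for a
# "maximal" call is an illustrative assumption.
import numpy as np

def extent_of_resection(pre_mask, post_mask, voxel_volume_ml):
    pre_vol = pre_mask.sum() * voxel_volume_ml     # preoperative CE tumor (mL)
    resid_vol = post_mask.sum() * voxel_volume_ml  # residual CE tumor (mL)
    eor = 100.0 * (pre_vol - resid_vol) / pre_vol
    label = "maximal" if eor >= 95.0 else "submaximal"
    return eor, resid_vol, label
```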
Segmentation, tracking, and kinematics of lung parenchyma and lung tumors from 4D CT with application to radiation treatment planning
This thesis is concerned with the development of techniques for efficient computerized analysis of 4-D CT data. The goal is a highly automated approach to segmentation of the lung boundary and lung nodules inside the lung. The determination of exact lung tumor location over space and time by image segmentation is an essential step in tracking thoracic malignancies. Accurate image segmentation helps clinical experts examine the anatomy and structure and determine disease progress. Since 4-D CT provides structural and anatomical information during tidal breathing, we use the same data to also measure mechanical properties related to deformation of the lung tissue, including the Jacobian and strain, at high resolution and as a function of time. Radiation treatment of patients with lung cancer can benefit from knowledge of these measures of regional ventilation. Graph-cuts techniques have been popular for image segmentation since they are able to treat highly textured data via robust global optimization, avoiding local minima in graph-based optimization. Graph-cuts methods have been used to extract globally optimal boundaries from images via the s/t cut, with energy functions based on model-specific visual cues and useful topological constraints. The method makes N-dimensional globally optimal segmentation possible with good computational efficiency. Even though the graph-cuts method can extract objects where there is a clear intensity difference, segmentation of organs or tumors poses a challenge. For organ segmentation, many segmentation methods using a shape prior have been proposed. However, in the case of lung tumors, the shape varies from patient to patient and with location. In this thesis, we use a shape prior for tumors obtained through a training step and PCA analysis based on the Active Shape Model (ASM). The method has been tested on real patient data from the Brown Cancer Center at the University of Louisville. We performed temporal B-spline deformable registration of the 4-D CT data; this yielded 3-D deformation fields between successive respiratory phases from which measures of regional lung function were determined. During the respiratory cycle, the lung volume changes, and the five lobes of the lung (two in the left and three in the right lung) deform differently, yielding different strain and Jacobian maps. In this thesis, we determine the regional lung mechanics in the Lagrangian frame of reference across different respiratory phases, for example, Phase10 to 20, Phase10 to 30, Phase10 to 40, and Phase10 to 50. Single photon emission computed tomography (SPECT) lung imaging using radioactive tracers with SPECT ventilation and SPECT perfusion imaging also provides functional information. As part of an IRB-approved study, we therefore registered the max-inhale CT volume to both VSPECT and QSPECT data sets using the demons non-rigid registration algorithm in patient subjects. Subsequently, statistical correlation of CT ventilation images (Jacobian and strain values) with both VSPECT and QSPECT was undertaken. Through statistical analysis with Spearman's rank correlation coefficient, we found that Jacobian values have the highest correlation with both VSPECT and QSPECT.
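The ventilation measures mentioned above derive from the Jacobian determinant of the estimated deformation; a NumPy sketch, assuming a displacement field given in voxel units:

```python
# Sketch: Jacobian determinant of a deformation field obtained from
# deformable registration. `disp` is a displacement field of shape
# (3, Z, Y, X); the deformation is identity + displacement.
import numpy as np

def jacobian_determinant(disp):
    grads = [np.gradient(disp[i], axis=(0, 1, 2)) for i in range(3)]  # du_i/dx_j
    J = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)  # I + grad(u)
    return np.linalg.det(J)  # >1 local expansion, <1 local contraction
```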
Segmentation and tracking of lung nodules via graph‐cuts incorporating shape prior and motion from 4D CT
Cha, Jungwon
Farhangi, Mohammad Mehdi
Dunlap, Neal
Amini, Amir A
Medical Physics2018Journal Article, cited 5 times
Website
LIDC-IDRI
Evaluation of data augmentation via synthetic images for improved breast mass detection on mammograms using deep learning
Cha, K. H.
Petrick, N.
Pezeshk, A.
Graff, C. G.
Sharma, D.
Badal, A.
Sahiner, B.
J Med Imaging (Bellingham)2020Journal Article, cited 1 times
Website
CBIS-DDSM
BREAST
Phantom
Computer Assisted Detection (CAD)
We evaluated whether using synthetic mammograms for training data augmentation may reduce the effects of overfitting and increase the performance of a deep learning algorithm for breast mass detection. Synthetic mammograms were generated using in silico procedural analytic breast and breast mass modeling algorithms followed by simulated x-ray projections of the breast models into mammographic images. In silico breast phantoms containing masses were modeled across the four BI-RADS breast density categories, and the masses were modeled with different sizes, shapes, and margins. A Monte Carlo-based x-ray transport simulation code, MC-GPU, was used to project the three-dimensional phantoms into realistic synthetic mammograms. 2000 mammograms with 2522 masses were generated to augment a real data set during training. From the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) data set, we used 1111 mammograms (1198 masses) for training, 120 mammograms (120 masses) for validation, and 361 mammograms (378 masses) for testing. We used faster R-CNN for our deep learning network with pretraining from ImageNet using the Resnet-101 architecture. We compared the detection performance when the network was trained using different percentages of the real CBIS-DDSM training set (100%, 50%, and 25%), and when these subsets of the training set were augmented with 250, 500, 1000, and 2000 synthetic mammograms. Free-response receiver operating characteristic (FROC) analysis was performed to compare performance with and without the synthetic mammograms. We generally observed an improved test FROC curve when training with the synthetic images compared to training without them, and the amount of improvement depended on the number of real and synthetic images used in training. Our study shows that enlarging the training data with synthetic samples can increase the performance of deep learning systems.
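FROC analysis, used above to compare training with and without synthetic mammograms, can be sketched from per-detection scores and hit flags:

```python
# Sketch: bare-bones FROC curve. Each candidate detection carries a
# confidence score and a flag saying whether it hit a true mass; the
# arrays are assumed inputs from a detector's output.
import numpy as np

def froc(scores, is_tp, n_true_masses, n_images):
    order = np.argsort(scores)[::-1]   # sweep threshold from high to low
    tp = np.cumsum(is_tp[order])
    fp = np.cumsum(~is_tp[order])
    sensitivity = tp / n_true_masses
    fp_per_image = fp / n_images
    return fp_per_image, sensitivity   # plot one against the other
```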
Performance of Deep Learning Model in Detecting Operable Lung Cancer With Chest Radiographs
Cha, Min Jae
Chung, Myung Jin
Lee, Jeong Hyun
Lee, Kyung Soo
Journal of thoracic imaging2019Journal Article, cited 0 times
LIDC-IDRI
Deep Learning
Lung
PURPOSE: The aim of this study was to evaluate the diagnostic performance of a trained deep convolutional neural network (DCNN) model for detecting operable lung cancer with chest radiographs (CXRs).
MATERIALS AND METHODS: The institutional review board approved this study. A deep learning model (DLM) based on DCNN was trained with 17,211 CXRs (5700 CT-confirmed lung nodules in 3500 CXRs and 13,711 normal CXRs), finally augmented to 600,000 images. For validation, a trained DLM was tested with 1483 CXRs with surgically resected lung cancer, marked and scored by 2 radiologists. Furthermore, diagnostic performances of DLM and 6 human observers were compared with 500 cases (200 visible T1 lung cancer on CXR and 300 normal CXRs) and analyzed using free-response receiver-operating characteristics curve (FROC) analysis.
RESULTS: The overall detection rate of DLM for resected lung cancers (27.2±14.6 mm) was a sensitivity of 76.8% (1139/1483) with a false positive per image (FPPI) rate of 0.3 and an area under the FROC curve (AUC) of 0.732. In the comparison with human readers, DLM demonstrated a sensitivity of 86.5% at 0.1 FPPI and a sensitivity of 92% at 0.3 FPPI, with an AUC of 0.899 over an FPPI range of 0.03 to 0.44, for detecting visible T1 lung cancers; these results were superior to the average of 6 human readers (mean sensitivity: 78% [range, 71.6% to 82.6%] at an FPPI of 0.1 and 85% [range, 80.2% to 89.2%] at an FPPI of 0.3; AUC of 0.819 [range, 0.754 to 0.862] over an FPPI range of 0.03 to 0.44).
CONCLUSIONS: A DLM has high diagnostic performance in detecting operable lung cancer with CXR, demonstrating a potential of playing a pivotal role for lung cancer screening.
Qualitative stomach cancer assessment by multi-slice computed tomography
Chacón, Gerardo
Rodríguez, Johel E.
Bermúdez, Valmore
Vera, Miguel
Hernandez, Juan Diego
Pardo, Aldo
Lameda, Carlos
Madriz, Delia
Bravo, Antonio José
Ingeniare. Revista chilena de ingeniería2020Journal Article, cited 0 times
Website
TCGA-STAD
Radiomics
Radiogenomics
STOMACH
Computed Tomography (CT)
ABSTRACT: A theoretical framework based on the Borrmann classification and the Japanese gastric cancer classification is proposed in order to qualitatively assess stomach cancer from three-dimensional (3-D) images obtained using multi-slice computerized tomography (MSCT). The main goal of this paper is to demonstrate, through visual inspection, the capacity of MSCT to effectively reflect the morphopathological characteristics of stomach adenocarcinoma types. The idea is to contrast the theoretical pathological characteristics with those that can be appreciated in MSCT images available in clinical datasets. This research corresponds to a study with a mixed (qualitative and quantitative) approach, applied to a total of 46 images available for diagnosed patients from the data collection of the Cancer Genome Atlas Stomach Adenocarcinoma (TCGA-STAD). The conclusions are established from a comparative analysis based on document review and direct observation, the product being a matrix of compliance with the specific qualities of the theoretical standards, in the visualization of images performed by the clinical specialist from the datasets. A total of 6210 slices from 46 MSCT explorations were visually inspected, and the visual characteristics were then contrasted with the theoretical characteristics obtained from the cancer classifications. These characteristics matched in about 96% of the images inspected. The effectiveness of the approach, measured using the positive predictive value, is about 96.50%. The image data also show a sensitivity of 97.83% and a specificity of 98.27%. MSCT is a precise imaging modality for the qualitative assessment of the staging of stomach cancer. Keywords: stomach cancer; adenocarcinoma; macroscopic assessment; Borrmann classification; Japanese classification; medical imaging; multi-slice computerized tomography
Computational assessment of stomach tumor volume from multi-slice computerized tomography images in presence of type 2 cancer
Chacón, Gerardo
Rodríguez, Johel E
Bermúdez, Valmore
Vera, Miguel
Hernández, Juan Diego
Vargas, Sandra
Pardo, Aldo
Lameda, Carlos
Madriz, Delia
Bravo, Antonio J
F1000Research2018Journal Article, cited 0 times
Website
TCGA-STAD
STOMACH
region growing method
Algorithm Development
Background: Multi-slice computerized tomography (MSCT) is a medical imaging modality that has been used to determine the size and location of stomach cancer. Additionally, MSCT is considered the best modality for the staging of gastric cancer. One way to assess type 2 stomach cancer is by detecting the pathological structure with an image segmentation approach. Tumor segmentation of MSCT gastric cancer images enables diagnosis of the disease condition for a given patient without using an invasive method such as surgical intervention. Methods: This approach consists of three stages. The initial stage, image enhancement, consists of a method for correcting non-homogeneities present in the background of MSCT images. Then, a segmentation stage using a clustering method allows the adenocarcinoma morphology to be obtained. In the third stage, the pathology region is reconstructed and then visualized with a three-dimensional (3-D) computer graphics procedure based on the marching cubes algorithm. In order to validate the segmentations, the Dice score is used as a metric function for comparing the segmentations obtained using the proposed method with respect to ground truth volumes traced by a clinician. Results: A total of 8 datasets available for diagnosed patients, from the cancer data collection of the Cancer Genome Atlas Stomach Adenocarcinoma (TCGA-STAD) project, are considered in this research. The volume of the type 2 stomach tumor is estimated from the 3-D shape computationally segmented from each dataset. These 3-D shapes are computationally reconstructed and then used to assess the macroscopic morphopathological features of this cancer. Conclusions: The segmentations obtained are useful for assessing type 2 stomach cancer qualitatively and quantitatively. In addition, this type of segmentation allows the development of computational models for planning virtual surgical processes related to type 2 cancer.
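A sketch of the reconstruction and volume-estimation step, assuming a recent scikit-image (measure.marching_cubes) and a binary tumor mask:

```python
# Sketch: surface reconstruction of a segmented tumor with marching
# cubes plus a simple voxel-count volume estimate. `mask` is a binary
# 3-D segmentation; `spacing` is the voxel size in mm.
import numpy as np
from skimage import measure

def tumor_surface_and_volume(mask, spacing=(1.0, 1.0, 1.0)):
    verts, faces, normals, values = measure.marching_cubes(
        mask.astype(np.float32), level=0.5, spacing=spacing)
    volume_mm3 = mask.sum() * np.prod(spacing)  # voxel-count volume
    return (verts, faces), volume_mm3
```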
Automated feature extraction in brain tumor by magnetic resonance imaging using gaussian mixture models
Chaddad, Ahmad
Journal of Biomedical Imaging2015Journal Article, cited 29 times
Website
Radiomics
Radiomic analysis of multi-contrast brain MRI for the prediction of survival in patients with glioblastoma multiforme
Chaddad, Ahmad
Desrosiers, Christian
Toews, Matthew
2016Conference Proceedings, cited 11 times
Website
Radiomics
BRAIN
Glioblastoma Multiforme (GBM)
Machine Learning
Magnetic Resonance Imaging (MRI)
Image texture features are effective at characterizing the microstructure of cancerous tissues. This paper proposes predicting the survival times of glioblastoma multiforme (GBM) patients using texture features extracted from multi-contrast brain MRI images. Texture features are derived locally from contrast enhancement, necrosis, and edema regions in T1-weighted post-contrast and fluid-attenuated inversion recovery (FLAIR) MRIs, based on the gray-level co-occurrence matrix representation. A statistical analysis based on the Kaplan-Meier method and log-rank test is used to identify the texture features related to the overall survival of GBM patients. Results are presented on a dataset of 39 GBM patients. For FLAIR images, four features (Energy, Correlation, Variance, and Inverse of Variance) from contrast enhancement regions and one feature (Homogeneity) from edema regions were shown to be associated with survival times (p-value < 0.01). Likewise, in T1-weighted images, three features (Energy, Correlation, and Variance) from contrast enhancement regions were found to be useful for predicting the overall survival of GBM patients. These preliminary results show the advantages of texture analysis in predicting the prognosis of GBM patients from multi-contrast brain MRI.
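A minimal GLCM feature extraction in the spirit of this line of work, using scikit-image (versions >= 0.19 spell the functions "gray..."; older releases use "grey..."); the patch here is a random placeholder:

```python
# Sketch: gray-level co-occurrence matrix (GLCM) texture descriptors
# from an 8-bit grayscale tumor-region patch.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(3)
roi = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # placeholder patch

glcm = graycomatrix(roi, distances=[1],
                    angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                    levels=256, symmetric=True, normed=True)
for prop in ("energy", "correlation", "homogeneity", "contrast"):
    print(prop, graycoprops(glcm, prop).mean())  # average over angles
```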
Phenotypic characterization of glioblastoma identified through shape descriptors
This paper proposes quantitatively describing the shape of glioblastoma (GBM) tissue phenotypes as a set of shape features derived from segmentations, for the purposes of discriminating between GBM phenotypes and monitoring tumor progression. GBM patients were identified from the Cancer Genome Atlas, and quantitative MR imaging data were obtained from the Cancer Imaging Archive. Three GBM tissue phenotypes are considered: necrosis, active tumor, and edema/invasion. Volumetric tissue segmentations are obtained from registered T1-weighted (T1-WI) post-contrast and fluid-attenuated inversion recovery (FLAIR) MRI modalities. Shape features are computed from the respective tissue phenotype segmentations, and a Kruskal-Wallis test was employed to select features capable of classification with a significance level of p < 0.05. Several classifier models are employed to distinguish phenotypes, where a leave-one-out cross-validation was performed. Eight features were found statistically significant for classifying GBM phenotypes with p < 0.05; orientation was uninformative. Quantitative evaluations show the SVM achieves the highest classification accuracy of 87.50%, sensitivity of 94.59%, and specificity of 92.77%. In summary, the shape descriptors proposed in this work show high performance in predicting GBM tissue phenotypes. They are thus closely linked to morphological characteristics of GBM phenotypes and could potentially be used in a computer-assisted labeling system.
GBM heterogeneity characterization by radiomic analysis of phenotype anatomical planes
Glioblastoma multiforme (GBM) is the most common malignant primary tumor of the central nervous system, characterized among other traits by rapid metastasis. Three tissue phenotypes closely associated with GBMs, namely necrosis (N), contrast enhancement (CE), and edema/invasion (E), exhibit characteristic patterns of texture heterogeneity in magnetic resonance images (MRI). In this study, we propose a novel model to characterize GBM tissue phenotypes using gray-level co-occurrence matrices (GLCM) in three anatomical planes. The GLCM encodes local image patches in terms of informative, orientation-invariant texture descriptors, which are used here to sub-classify GBM tissue phenotypes. Experiments demonstrate the model on MRI data of 41 GBM patients, obtained from the Cancer Genome Atlas (TCGA). Intensity-based automatic image registration is applied to align corresponding pairs of fixed T1-weighted (T1-WI) post-contrast and fluid-attenuated inversion recovery (FLAIR) images. GBM tissue regions are then segmented using the 3D Slicer tool. Texture features are computed from 12 quantifier functions operating on GLCM descriptors, which are generated from MRI intensities within segmented GBM tissue regions. Various classifier models are used to evaluate the effectiveness of texture features for discriminating between GBM phenotypes. Results based on T1-WI scans showed a phenotype classification accuracy of over 88.14%, a sensitivity of 85.37%, and a specificity of 96.1%, using the linear discriminant analysis (LDA) classifier. This model has the potential to provide important characteristics of tumors, which can be used for the sub-classification of GBM phenotypes.
Predicting survival time of lung cancer patients using radiomic analysis
Chaddad, Ahmad
Desrosiers, Christian
Toews, Matthew
Abdulkarim, Bassam
Oncotarget2017Journal Article, cited 4 times
Website
Radiomics
LUNG
Non Small Cell Lung Cancer (NSCLC)
Computed Tomography (CT)
Classification
Computer Assisted Diagnosis (CAD)
Objectives: This study investigates the prediction of non-small cell lung cancer (NSCLC) patient survival outcomes based on radiomic texture and shape features automatically extracted from tumor image data. Materials and Methods: Retrospective analysis involves CT scans of 315 NSCLC patients from The Cancer Imaging Archive (TCIA). A total of 24 image features are computed from labeled tumor volumes of patients within groups defined using NSCLC subtype and TNM staging information. Spearman's rank correlation, Kaplan-Meier estimation, and log-rank tests were used to identify features related to long/short NSCLC patient survival groups. Automatic random forest classification was used to predict patient survival group from multivariate feature data. Significance is assessed at P < 0.05 following Holm-Bonferroni correction for multiple comparisons. Results: Significant correlations between radiomic features and survival were observed for four clinical groups (group, [absolute correlation range]): (large cell carcinoma (LCC), [0.35, 0.43]), (tumor size T2, [0.31, 0.39]), (non lymph node metastasis N0, [0.3, 0.33]), (TNM stage I, [0.39, 0.48]). Significant log-rank relationships between features and survival time were observed for three clinical groups (group, hazard ratio): (LCC, 3.0), (LCC, 3.9), (T2, 2.5), and (stage I, 2.9). Automatic survival prediction performance (i.e., below/above median) is superior for combined radiomic features with age-TNM in comparison to standard TNM clinical staging information (clinical group, mean area under the ROC curve (AUC)): (LCC, 75.73%), (N0, 70.33%), (T2, 70.28%), and (TNM-I, 76.17%). Conclusion: Quantitative lung CT imaging features can be used as indicators of survival, in particular for patients with large cell carcinoma (LCC), primary tumor size T2, and no lymph node metastasis (N0).
Deep Survival Analysis With Clinical Variables for COVID-19
Chaddad, Ahmad
Hassan, Lama
Katib, Yousef
Bouridane, Ahmed
IEEE Journal of Translational Engineering in Health and Medicine2023Journal Article, cited 0 times
COVID-19-NY-SBU
OBJECTIVE: Millions of people have been affected by coronavirus disease 2019 (COVID-19), which has caused millions of deaths around the world. Artificial intelligence (AI) plays an increasing role in all areas of patient care, including prognostics. This paper proposes a novel predictive model based on one dimensional convolutional neural networks (1D CNN) to use clinical variables in predicting the survival outcome of COVID-19 patients.
METHODS AND PROCEDURES: We have considered two scenarios for survival analysis: 1) univariate analysis using the log-rank test and Kaplan-Meier estimator, and 2) combining all clinical variables (n = 44) to predict short-term versus long-term survival. We considered the random forest (RF) model as a baseline model, compared to our proposed 1D CNN, in predicting survival groups.
RESULTS: Our experiments using the univariate analysis show that nine clinical variables are significantly associated with the survival outcome with corrected p < 0.05. Our approach of 1D CNN shows a significant improvement in performance metrics compared to the RF and the state-of-the-art techniques (i.e., 1D CNN) in predicting the survival group of patients with COVID-19.
CONCLUSION: Our model has been tested using clinical variables, where the performance is found promising. The 1D CNN model could be a useful tool for detecting the risk of mortality and developing treatment plans in a timely manner.
CLINICAL IMPACT: The findings indicate that using both Heparin and Exnox for treatment is typically the most useful factor in predicting a patient's chances of survival from COVID-19. Moreover, our predictive model shows that the combination of AI and clinical data can be applied to point-of-care services through fast-learning healthcare systems.
Future artificial intelligence tools and perspectives in medicine
Chaddad, Ahmad
Katib, Yousef
Hassan, Lama
2021Journal Article, cited 0 times
QIN PROSTATE
PURPOSE OF REVIEW: Artificial intelligence has become popular in medical applications, specifically as a clinical support tool for computer-aided diagnosis. These tools are typically employed on medical data (i.e., images, molecular data, clinical variables, etc.) and use statistical and machine-learning methods to measure model performance. In this review, we summarize and discuss the most recent radiomic pipelines used for clinical analysis.
RECENT FINDINGS: Currently, the management of cancers benefits from artificial intelligence only in limited ways, mostly related to computer-aided diagnosis that avoids a biopsy analysis, which presents additional risks and costs. Most artificial intelligence tools are based on imaging features, known as radiomic analysis, that can be refined into predictive models using noninvasively acquired imaging data. This review explores the progress of artificial intelligence-based radiomic tools for clinical applications with a brief description of the necessary technical steps. Explaining new radiomic approaches based on deep-learning techniques will clarify how the new radiomic models (deep radiomic analysis) can benefit from deep convolutional neural networks and be applied to limited data sets.
SUMMARY: To consider the radiomic algorithms, further investigations are recommended to involve deep learning in radiomic models with additional validation steps on various cancer types.
Multimodal Radiomic Features for the Predicting Gleason Score of Prostate Cancer
Predicting Gleason Score of Prostate Cancer Patients using Radiomic Analysis
Chaddad, Ahmad
Niazi, Tamim
Probst, Stephan
Bladou, Franck
Anidjar, Moris
Bahoric, Boris
Frontiers in Oncology2018Journal Article, cited 0 times
Website
PROSTATEx
Radiomics
MRI
PROSTATE
Prediction of survival with multi-scale radiomic analysis in glioblastoma patients
Chaddad, Ahmad
Sabri, Siham
Niazi, Tamim
Abdulkarim, Bassam
Medical & Biological Engineering & Computing2018Journal Article, cited 1 times
Website
Radiomics
GBM
We propose multiscale texture features based on the Laplacian-of-Gaussian (LoG) filter to predict progression-free survival (PFS) and overall survival (OS) in patients newly diagnosed with glioblastoma (GBM). Experiments use features extracted from 40 GBM patients with T1-weighted imaging (T1-WI) and fluid-attenuated inversion recovery (FLAIR) images that were segmented manually into areas of active tumor, necrosis, and edema. Multiscale texture features were extracted locally from each of these areas of interest using a LoG filter, and the relation of the features to OS and PFS was investigated using univariate (i.e., Spearman's rank correlation coefficient, log-rank test, and Kaplan-Meier estimator) and multivariate (i.e., random forest classifier) analyses. Three and seven features were statistically correlated with PFS and OS, respectively, with absolute correlation values between 0.32 and 0.36 and p < 0.05. Three features derived from active tumor regions only were associated with OS (p < 0.05), with hazard ratios (HR) of 2.9, 3, and 3.24, respectively. Combined features showed AUC values of 85.37% and 85.54% for predicting the PFS and OS of GBM patients, respectively, using the random forest (RF) classifier. We presented multiscale texture features to characterize the GBM regions and predict PFS and OS. The achievable efficiency suggests that this technique can be developed into a GBM MR analysis system suitable for clinical use after a thorough validation involving more patients.
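The multiscale LoG filtering can be sketched with SciPy; the statistics below (per-scale mean and standard deviation) are simpler than the paper's feature set and serve only to show the mechanics:

```python
# Sketch: multiscale Laplacian-of-Gaussian (LoG) filtering followed by
# simple first-order statistics. `region` is a 2-D or 3-D intensity
# array from a segmented tumor area; sigmas are illustrative scales.
import numpy as np
from scipy import ndimage

def log_texture_features(region, sigmas=(0.5, 1.0, 1.5, 2.0, 2.5)):
    feats = {}
    for s in sigmas:
        response = ndimage.gaussian_laplace(region.astype(float), sigma=s)
        feats[f"sigma={s}"] = (response.mean(), response.std())
    return feats  # per-scale statistics; the paper derives richer features
```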
High-Throughput Quantification of Phenotype Heterogeneity Using Statistical Features
Chaddad, Ahmad
Tanougast, Camel
Advances in Bioinformatics2015Journal Article, cited 5 times
Website
Radiomics
Classification
Glioblastoma Multiforme (GBM)
Support Vector Machine (SVM)
Naïve Bayes (NB)
Machine Learning
Magnetic Resonance Imaging (MRI)
Radiomic features
Radiogenomics
TCGA-GBM
Statistical features are widely used in radiology for tumor heterogeneity assessment using the magnetic resonance (MR) imaging technique. In this paper, feature selection based on a decision tree is examined to determine the relevant subset of glioblastoma (GBM) phenotypes in the statistical domain. To discriminate between the active tumor (vAT) and edema/invasion (vE) phenotypes, we selected the significant features using analysis of variance (ANOVA) with p value < 0.01. Then, we implemented the decision tree to define the optimal subset of features for the phenotype classifier. Naive Bayes (NB), support vector machine (SVM), and decision tree (DT) classifiers were considered to evaluate the performance of the feature-based scheme in terms of its capability to discriminate vAT from vE. All nine features were statistically significant for classifying vAT from vE with p value < 0.01. Feature selection based on the decision tree showed the best performance in a comparative study using the full feature set. The selected features kurtosis and skewness achieved classifier accuracies in the range of 58.33-75.00% and AUCs of 73.88-92.50%. This study demonstrated the ability of statistical features to provide a quantitative, individualized measurement of glioblastoma patients and to assess phenotype progression.
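A sketch of the ANOVA-screening-plus-decision-tree pipeline on placeholder arrays; the choice of k=2 mirrors the paper's two retained features (kurtosis and skewness), but everything else is illustrative:

```python
# Sketch: ANOVA feature screening followed by a decision-tree classifier
# to separate active tumor (vAT) from edema (vE). Placeholder data only.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(120, 9))     # nine statistical features per sample
y = rng.integers(0, 2, size=120)  # 0 = vE, 1 = vAT

clf = make_pipeline(SelectKBest(f_classif, k=2),  # e.g. kurtosis + skewness
                    DecisionTreeClassifier(max_depth=3))
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```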
Extracted magnetic resonance texture features discriminate between phenotypes and are associated with overall survival in glioblastoma multiforme patients
Chaddad, Ahmad
Tanougast, Camel
Medical & Biological Engineering & Computing2016Journal Article, cited 16 times
Website
Algorithm Development
Radiomics
Glioblastoma Multiforme (GBM)
Image registration
3D Slicer
Classification
GBM is a markedly heterogeneous brain tumor consisting of three main volumetric phenotypes identifiable on magnetic resonance imaging: necrosis (vN), active tumor (vAT), and edema/invasion (vE). The goal of this study is to identify the three glioblastoma multiforme (GBM) phenotypes using a texture-based gray-level co-occurrence matrix (GLCM) approach and determine whether the texture features of phenotypes are related to patient survival. MR imaging data in 40 GBM patients were analyzed. Phenotypes vN, vAT, and vE were segmented in a preprocessing step using 3D Slicer for rigid registration by T1-weighted imaging and corresponding fluid attenuation inversion recovery images. The GBM phenotypes were segmented using 3D Slicer tools. Texture features were extracted from GLCM of GBM phenotypes. Thereafter, Kruskal-Wallis test was employed to select the significant features. Robust predictive GBM features were identified and underwent numerous classifier analyses to distinguish phenotypes. Kaplan-Meier analysis was also performed to determine the relationship, if any, between phenotype texture features and survival rate. The simulation results showed that the 22 texture features were significant with p value < 0.05. GBM phenotype discrimination based on texture features showed the best accuracy, sensitivity, and specificity of 79.31, 91.67, and 98.75 %, respectively. Three texture features derived from active tumor parts: difference entropy, information measure of correlation, and inverse difference were statistically significant in the prediction of survival, with log-rank p values of 0.001, 0.001, and 0.008, respectively. Among 22 features examined, three texture features have the ability to predict overall survival for GBM patients demonstrating the utility of GLCM analyses in both the diagnosis and prognosis of this patient population.
Quantitative evaluation of robust skull stripping and tumor detection applied to axial MR images
Chaddad, Ahmad
Tanougast, Camel
Brain Informatics2016Journal Article, cited 28 times
Website
TCGA-GBM
Isolating the brain from non-brain tissues using a fully automatic method may be affected by the presence of radio-frequency non-homogeneity in MR images (MRI), regional anatomy, MR sequences, and the subjects of the study. In order to automate brain tumor (glioblastoma) detection, we proposed a novel skull-stripping approach for axial slices derived from MRI. The brain tumor was then detected using multi-level threshold segmentation based on histogram analysis. The skull-stripping method was applied using an adaptive morphological operations approach, which iteratively computes an empirical threshold from the area of brain tissue. It was employed on the registration of non-contrast T1-weighted (T1-WI) images and the corresponding fluid-attenuated inversion recovery sequence. Then, we used the multi-thresholding segmentation (MTS) method proposed by Otsu. We calculated performance metrics based on similarity coefficients for patients (n = 120) with tumors. The adaptive skull-stripping algorithm and the MTS of segmented tumors achieved efficient preliminary results, with Dice similarity coefficients of 92% and 80% and false negative rates of 0.3% and 25.8%, respectively. The adaptive skull-stripping algorithm provides robust results, and the tumor area for medical diagnosis was determined by MTS.
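The MTS step can be sketched with scikit-image's multi-Otsu implementation; `brain` is assumed to be a skull-stripped 2-D slice, and the mapping from threshold classes to tumor candidates is left to the histogram analysis the paper describes:

```python
# Sketch: multi-level Otsu thresholding of a skull-stripped MR slice,
# in the spirit of the MTS step above. Requires scikit-image >= 0.16.
import numpy as np
from skimage.filters import threshold_multiotsu

def multi_threshold_segment(brain, classes=3):
    thresholds = threshold_multiotsu(brain, classes=classes)
    regions = np.digitize(brain, bins=thresholds)  # labels 0..classes-1
    return regions, thresholds  # candidate tumor class chosen downstream
```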
CNN Approach for Predicting Survival Outcome of Patients With COVID-19
Chaddad, Ahmad
Tanougast, Camel
2023Journal Article, cited 0 times
COVID-19-NY-SBU
Coronavirus disease 2019 (COVID-19) has posed particular challenges with the emergence of new variants. The number of patients seeking treatment has increased significantly, putting tremendous pressure on hospitals and healthcare systems. With the potential of artificial intelligence (AI) to leverage clinicians to improve personalized medicine for COVID-19, we propose a deep learning model based on 1-D and 3-D convolutional neural networks (CNNs) to predict the survival outcome of COVID-19 patients. Our model consists of two CNN channels that operate on CT scans and the corresponding clinical variables. Specifically, each patient data set consists of CT images and the corresponding 44 clinical variables used as the 3-D CNN and 1-D CNN input, respectively. This model aims to combine imaging and clinical features to distinguish short-term from long-term survival. Our models demonstrate higher performance metrics compared to state-of-the-art models, with an area under the receiver operating characteristic curve of 91.44%–91.60% versus 84.36%–88.10% and accuracy of 83.39%–84.47% versus 79.06%–81.94% in predicting the survival groups of patients with COVID-19. Based on the findings, the combined clinical and imaging features in the deep CNN model can be used as a prognostic tool and help to distinguish censored and uncensored cases of COVID-19.
MRI-based classification of IDH mutation and 1p/19q codeletion status of gliomas using a 2.5D hybrid multi-task convolutional neural network
Chakrabarty, Satrajit
LaMontagne, Pamela
Shimony, Joshua
Marcus, Daniel S
Sotiras, Aristeidis
Neuro-Oncology Advances2023Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Background: IDH mutation and 1p/19q codeletion status are important prognostic markers for glioma that are currently determined using invasive procedures. Our goal was to develop artificial intelligence-based methods to noninvasively determine molecular alterations from MRI.
Methods: Pre-operative MRI scans of 2648 glioma patients were collected from Washington University School of Medicine (WUSM; n = 835) and publicly available Brain Tumor Segmentation (BraTS; n = 378), LGG 1p/19q (n = 159), Ivy Glioblastoma Atlas Project (Ivy GAP; n = 41), The Cancer Genome Atlas (TCGA; n = 461), and the Erasmus Glioma Database (EGD; n = 774) datasets. A 2.5D hybrid convolutional neural network was proposed to simultaneously localize glioma and classify its molecular status by leveraging MRI imaging features and prior knowledge features from clinical records and tumor location. The models were trained on 223 and 348 cases for IDH and 1p/19q tasks, respectively, and tested on one internal (TCGA) and two external (WUSM and EGD) test sets.
Results: For IDH, the best-performing model achieved areas under the receiver operating characteristic curve (AUROC) of 0.925, 0.874, and 0.933 and areas under the precision-recall curve (AUPRC) of 0.899, 0.702, and 0.853 on the internal, WUSM, and EGD test sets, respectively. For 1p/19q, the best model achieved AUROCs of 0.782, 0.754, and 0.842 and AUPRCs of 0.588, 0.713, and 0.782 on those three data splits, respectively.
Conclusions: The high accuracy of the model on unseen data showcases its generalization capabilities and suggests its potential to perform "virtual biopsy" for tailoring treatment planning and overall clinical management of gliomas.
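The AUROC and AUPRC figures reported above are standard scikit-learn metrics; a minimal sketch with made-up labels and scores (all names and values are illustrative, not the study's data) follows.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(3)
y_true = rng.integers(0, 2, 200)                    # e.g., IDH mutant vs. wild-type
y_score = np.clip(y_true * 0.4 + rng.normal(0.3, 0.25, 200), 0, 1)

# AUROC; average precision is the usual estimator of the AUPRC
print("AUROC:", round(roc_auc_score(y_true, y_score), 3))
print("AUPRC:", round(average_precision_score(y_true, y_score), 3))
```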
Automated lung field segmentation in CT images using mean shift clustering and geometrical features
In this study, we used two neural networks, VGG16 and ResNet50, to extract features from the whole slide images. To classify the three types of brain tumors (i.e., glioblastoma, oligodendroglioma, and astrocytoma), we tried several methods, including k-means clustering and random forest classification. In the prediction stage, we compared the prediction results with and without MRI features. The results support that the classification method using image features extracted by VGG16 has the highest prediction accuracy. Moreover, we found that combining radiomic features generated from MR images slightly improved the accuracy of the classification.
Lung Tumor Classification using Hybrid Deep Learning and Segmentation by Fuzzy C Means
Chandrakantha, T. S.
Jagadale, Basavaraj N.
Alnaggar, Omar Abdullah Murshed Farhan
Indian Journal of Science and Technology2024Journal Article, cited 0 times
Website
RIDER Lung CT
Objectives: This study aims to employ a hybrid Deep Learning (DL) technique for automating tumor detection and classification in lung scans. Methods: The methodology involves three key stages: data preparation, segmentation using Fuzzy C Means (FCM), and classification using a hybrid DL model. The image dataset is sourced from the benchmark Lung Tumor (LT) data, and for segmentation, the FCM approach is applied. The hybrid DL model is created by combining a Pulse Coupled Neural Network (PCNN) and a Convolutional Neural Network (CNN). The study utilizes a dataset of 300 individuals from the NSCLC-Radiomics database. The validation process employs DICE and sensitivity for segmentation, while the hybrid model's confusion matrix elements contribute to performance validation. FCM and the hybrid model are employed for processing, segmenting, and classifying the images. Evaluation metrics such as Dice similarity and sensitivity gauge the success of the segmentation method by measuring the intersection between ground truths and predictions. After segmentation evaluation, the classification process is executed, employing accuracy and loss in the training phase and metrics like accuracy and F1-score in the testing phase for model validation. Findings: The proposed approach achieves an accuracy of 97.43% and an F1-score of 98.28%. These results demonstrate the effectiveness of the suggested approach in accurately classifying and segmenting lung tumors. Novelty: The primary contribution of the research is a hybrid DL model based on PCNN+CNN. This ultimately raises the quality of the model, and the experiments are carried out on real public medical images, demonstrating the model's originality. Keywords: Lung, Tumor, Segmentation, Classification, Hybrid model
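For readers unfamiliar with the FCM stage, a rough, self-contained sketch of standard fuzzy c-means on pixel intensities follows; it is not the authors' implementation, and the cluster count, fuzzifier m, and mock CT-like intensities are assumed values.

```python
import numpy as np

def fuzzy_cmeans(x, c=3, m=2.0, n_iter=100, seed=0):
    """Standard FCM on a 1-D feature vector x: memberships u (N, c) and centers."""
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=x.size)      # random initial memberships
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)       # membership-weighted centers
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))              # inverse-distance memberships
        u /= u.sum(axis=1, keepdims=True)
    return u, centers

# Illustrative lung-CT-like intensities: air, soft tissue, tumor
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-800, 40, 3000),
                    rng.normal(40, 30, 1500),
                    rng.normal(60, 25, 300)]).astype(float)
u, centers = fuzzy_cmeans(x, c=3)
labels = u.argmax(axis=1)                           # hard segmentation labels
print("centers:", np.sort(centers).round(1))
```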
A deep learning pipeline to simulate fluorodeoxyglucose (FDG) uptake in head and neck cancers using non-contrast CT images without the administration of radioactive tracer
Chandrashekar, A.
Handa, A.
Ward, J.
Grau, V.
Lee, R.
Insights Imaging2022Journal Article, cited 0 times
Website
Head-Neck-PET-CT
Deep learning
Generative adversarial network
Head and neck cancer
Positron Emission Tomography (PET)
Tomography (X-ray computed)
Computed Tomography (CT)
OBJECTIVES: Positron emission tomography (PET) imaging is a costly tracer-based imaging modality used to visualise abnormal metabolic activity for the management of malignancies. The objective of this study is to demonstrate that non-contrast CTs alone can be used to differentiate regions with different Fluorodeoxyglucose (FDG) uptake and simulate PET images to guide clinical management. METHODS: Paired FDG-PET and CT images (n = 298 patients) with diagnosed head and neck squamous cell carcinoma (HNSCC) were obtained from The Cancer Imaging Archive. Random forest (RF) classification of CT-derived radiomic features was used to differentiate metabolically active (tumour) and inactive tissues (e.g., thyroid tissue). Subsequently, a deep learning generative adversarial network (GAN) was trained for this CT to PET transformation task without tracer injection. The simulated PET images were evaluated for technical accuracy (PERCIST v.1 criteria) and their ability to predict clinical outcome [(1) locoregional recurrence, (2) distant metastasis and (3) patient survival]. RESULTS: From 298 patients, 683 hot spots of elevated FDG uptake (elevated SUV, 6.03 +/- 1.71) were identified. RF models of intensity-based CT-derived radiomic features were able to differentiate regions of negligible, low and elevated FDG uptake within and surrounding the tumour. Using the GAN-simulated PET image alone, we were able to predict clinical outcome to the same accuracy as that achieved using FDG-PET images. CONCLUSION: This pipeline demonstrates a deep learning methodology to simulate PET images from CT images in HNSCC without the use of radioactive tracer. The same pipeline can be applied to other pathologies that require PET imaging.
Residual Convolutional Neural Network for the Determination of IDH Status in Low- and High-Grade Gliomas from MR Imaging
Chang, Ken
Bai, Harrison X
Zhou, Hao
Su, Chang
Bi, Wenya Linda
Agbodza, Ena
Kavouridis, Vasileios K
Senders, Joeky T
Boaro, Alessandro
Beers, Andrew
Clinical Cancer Research2018Journal Article, cited 26 times
Website
TCGA-LGG
Convolutional Neural Network (CNN)
Automatic assessment of glioma burden: A deep learning algorithm for fully automated volumetric and bi-dimensional measurement
Chang, Ken
Beers, Andrew L
Bai, Harrison X
Brown, James M
Ly, K Ina
Li, Xuejun
Senders, Joeky T
Kavouridis, Vasileios K
Boaro, Alessandro
Su, Chang
Bi, Wenya Linda
Rapalino, Otto
Liao, Weihua
Shen, Qin
Zhou, Hao
Xiao, Bo
Wang, Yinyan
Zhang, Paul J
Pinho, Marco C
Wen, Patrick Y
Batchelor, Tracy T
Boxerman, Jerrold L
Arnaout, Omar
Rosen, Bruce R
Gerstner, Elizabeth R
Yang, Li
Huang, Raymond Y
Kalpathy-Cramer, Jayashree
Neuro Oncol2019Journal Article, cited 0 times
Website
TCGA-GBM
TCGA-LGG
Ivy GAP
Deep Learning
Glioma
Segmentation
BACKGROUND: Longitudinal measurement of glioma burden with MRI is the basis for treatment response assessment. In this study, we developed a deep learning algorithm that automatically segments abnormal FLAIR hyperintensity and contrast-enhancing tumor, quantitating tumor volumes as well as the product of maximum bi-dimensional diameters according to the Response Assessment in Neuro-Oncology (RANO) criteria (AutoRANO). METHODS: Two cohorts of patients were used for this study. One consisted of 843 pre-operative MRIs from 843 patients with low- or high-grade gliomas from four institutions, and the second consisted of 713 longitudinal, post-operative MRI visits from 54 patients with newly diagnosed glioblastomas (each with two pre-treatment "baseline" MRIs) from one institution. RESULTS: The automatically generated FLAIR hyperintensity volume, contrast-enhancing tumor volume, and AutoRANO were highly repeatable for the double-baseline visits, with intraclass correlation coefficients (ICC) of 0.986, 0.991, and 0.977, respectively, on the cohort of post-operative GBM patients. Furthermore, there was high agreement between manually and automatically measured tumor volumes, with ICC values of 0.915, 0.924, and 0.965 for pre-operative FLAIR hyperintensity, post-operative FLAIR hyperintensity, and post-operative contrast-enhancing tumor volumes, respectively. Lastly, the ICCs for comparing manually and automatically derived longitudinal changes in tumor burden were 0.917, 0.966, and 0.850 for FLAIR hyperintensity volume, contrast-enhancing tumor volume, and RANO measures, respectively. CONCLUSIONS: Our automated algorithm demonstrates potential utility for evaluating tumor burden in complex post-treatment settings, although further validation in multi-center clinical trials will be needed prior to widespread implementation.
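ICC values like those reported for the double-baseline visits can be computed with the pingouin package's intraclass_corr; the sketch below builds a mock two-visit table and reports the ICC variants. Column names, cohort size, and noise levels are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(4)
true_vol = rng.uniform(5, 60, 30)                 # 30 mock patients, volumes in cc
v1 = true_vol + rng.normal(0, 1.0, 30)            # baseline visit 1 measurement
v2 = true_vol + rng.normal(0, 1.0, 30)            # baseline visit 2 measurement

df = pd.DataFrame({
    "patient": np.repeat(np.arange(30), 2),
    "visit":   np.tile(["baseline1", "baseline2"], 30),
    "volume":  np.column_stack([v1, v2]).ravel(), # interleave to match patient/visit
})

icc = pg.intraclass_corr(data=df, targets="patient", raters="visit",
                         ratings="volume")
print(icc[["Type", "ICC"]])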
Deep-Learning Convolutional Neural Networks Accurately Classify Genetic Mutations in Gliomas
Chang, P
Grinband, J
Weinberg, BD
Bardis, M
Khy, M
Cadena, G
Su, M-Y
Cha, S
Filippi, CG
Bota, D
American Journal of Neuroradiology2018Journal Article, cited 5 times
Website
TCGA-GBM
TCGA-LGG
Joint denoising and interpolating network for low-dose cone-beam CT reconstruction under hybrid dose-reduction strategy
Chao, L.
Wang, Y.
Zhang, T.
Shan, W.
Zhang, H.
Wang, Z.
Li, Q.
Comput Biol Med2023Journal Article, cited 0 times
Website
LDCT-and-Projection-data
Denoising
Image reconstruction
Interpolation
Low-dose CT
Cone-beam computed tomography (CBCT) is generally reconstructed with hundreds of two-dimensional X-Ray projections through the FDK algorithm, and its excessive ionizing radiation of X-Ray may impair patients' health. Two common dose-reduction strategies are to either lower the intensity of X-Ray, i.e., low-intensity CBCT, or reduce the number of projections, i.e., sparse-view CBCT. Existing efforts improve the low-dose CBCT images only under a single dose-reduction strategy. In this paper, we argue that applying the two strategies simultaneously can reduce dose in a gentle manner and avoid the extreme degradation of the projection data in a single dose-reduction strategy, especially under ultra-low-dose situations. Therefore, we develop a Joint Denoising and Interpolating Network (JDINet) in projection domain to improve the CBCT quality with the hybrid low-intensity and sparse-view projections. Specifically, JDINet mainly includes two important components, i.e., denoising module and interpolating module, to respectively suppress the noise caused by the low-intensity strategy and interpolate the missing projections caused by the sparse-view strategy. Because FDK actually utilizes the projection information after ramp-filtering, we develop a filtered structural similarity constraint to help JDINet focus on the reconstruction-required information. Afterward, we employ a Postprocessing Network (PostNet) in the reconstruction domain to refine the CBCT images that are reconstructed with denoised and interpolated projections. In general, a complete CBCT reconstruction framework is built with JDINet, FDK, and PostNet. Experiments demonstrate that our framework decreases RMSE by approximately 8 %, 15 %, and 17 %, respectively, on the 1/8, 1/16, and 1/32 dose data, compared to the latest methods. In conclusion, our learning-based framework can be deeply imbedded into the CBCT systems to promote the development of CBCT. Source code is available at https://github.com/LianyingChao/FusionLowDoseCBCT.
A New General Maximum Intensity Projection Technology via the Hybrid of U-Net and Radial Basis Function Neural Network
Chao, Zhen
Xu, Wenting
Journal of Digital Imaging2021Journal Article, cited 0 times
Website
LIDC-IDRI
U-Net
An Automatic Overall Survival Time Prediction System for Glioma Brain Tumor Patients Based on Volumetric and Shape Features
An automatic overall survival time prediction system for glioma brain tumor patients is proposed and developed based on volumetric, location, and shape features. The proposed automatic prediction system consists of three stages: segmentation of brain tumor sub-regions; feature extraction; and overall survival time prediction. A deep learning structure based on a modified 3-Dimensional (3D) U-Net is proposed to develop an accurate segmentation model to identify and localize the three glioma brain tumor sub-regions: gadolinium (GD)-enhancing tumor, peritumoral edema, and necrotic and non-enhancing tumor core (NCR/NET). The best performance of a segmentation model is achieved by the modified 3D U-Net based on an Accumulated Encoder (U-Net AE) with a Generalized Dice-Loss (GDL) function trained by the ADAM optimization algorithm. This model achieves Average Dice-Similarity (ADS) scores of 0.8898, 0.8819, and 0.8524 for Whole Tumor (WT), Tumor Core (TC), and Enhancing Tumor (ET), respectively, on the training dataset of the Multimodal Brain Tumor Segmentation challenge (BraTS) 2020. Various combinations of volumetric (based on brain functionality regions), shape, and location features are extracted to train an overall survival time classification model using a Neural Network (NN). The model classifies the data into three classes: short-survivors, mid-survivors, and long-survivors. An information fusion strategy based on features-level fusion and decision-level fusion is used to produce the best prediction model. The best performance, an accuracy of 55.2%, is achieved by the ensemble model and the shape features model on the BraTS 2020 validation dataset. The ensemble model achieves a competitive accuracy (55.1%) on the BraTS 2020 test dataset.
Investigating the impact of the CT Hounsfield unit range on radiomic feature stability using dual energy CT data
Chatterjee, A.
Vallieres, M.
Forghani, R.
Seuntjens, J.
Phys Med2021Journal Article, cited 0 times
Website
Head-Neck-PET-CT
Algorithm Development
Computed Tomography (CT)
Feature stability
Radiomics
Replicability
PURPOSE: Radiomic texture calculation requires discretizing image intensities within the region-of-interest. FBN (fixed-bin-number), FBS (fixed-bin-size) and FBN and FBS with intensity equalization (FBNequal, FBSequal) are four discretization approaches. A crucial choice is the voxel intensity (Hounsfield units, or HU) binning range. We assessed the effect of this choice on radiomic features. METHODS: The dataset comprised 95 patients with head-and-neck squamous-cell-carcinoma. Dual energy CT data was reconstructed at 21 electron energies (40, 45,... 140 keV). Each of 94 texture features were calculated with 64 extraction parameters. All features were calculated five times: original choice, left shift (-10/-20 HU), right shift (+10/+20 HU). For each feature, Spearman correlation between nominal and four variants were calculated to determine feature stability. This was done for six texture feature types (GLCM, GLRLM, GLSZM, GLDZM, NGTDM, and NGLDM) separately. This analysis was repeated for the four binning algorithms. Effect of feature instability on predictive ability was studied for lymphadenopathy as endpoint. RESULTS: FBN and FBNequal algorithms showed good stability (correlation values consistently >0.9). For FBS and FBSequal algorithms, while median values exceeded 0.9, the 95% lower bound decreased as a function of energy, with poor performance over the entire spectrum. FBNequal was the most stable algorithm, and FBS the least. CONCLUSIONS: We believe this is the first multi-energy systematic study of the impact of CT HU range used during intensity discretization for radiomic feature extraction. Future analyses should account for this source of uncertainty when evaluating the robustness of their radiomic signature.
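The two discretization schemes at the heart of this study are easy to state in code. The sketch below implements plain FBN and FBS binning and measures feature stability under a -10 HU shift of the binning range via Spearman correlation, mirroring the study's shift experiment in spirit. The bin count, bin width, HU range, and the toy entropy "feature" are illustrative assumptions, not the study's extraction parameters.

```python
import numpy as np
from scipy.stats import spearmanr

def discretize_fbn(x, n_bins=64):
    """Fixed bin number: n_bins equal bins between min and max of the ROI."""
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    return np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)

def discretize_fbs(x, bin_size=10.0, lo=-150.0):
    """Fixed bin size: bins of bin_size HU starting at the chosen range minimum."""
    return np.floor((x - lo) / bin_size).astype(int)

def toy_feature(codes):
    """A stand-in texture feature: entropy of the discretized histogram."""
    p = np.bincount(codes - codes.min()) / codes.size
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(5)
rois = [rng.normal(40, 25, 400) for _ in range(50)]           # 50 mock ROIs (HU)

# FBS depends on the fixed HU range, so shifting it perturbs the feature;
# FBN bins between ROI min/max and ignores the fixed range entirely.
f_nominal = [toy_feature(discretize_fbs(r, lo=-150.0)) for r in rois]
f_shifted = [toy_feature(discretize_fbs(r, lo=-160.0)) for r in rois]  # -10 HU
rho, _ = spearmanr(f_nominal, f_shifted)
print(f"Spearman correlation under -10 HU range shift (FBS): {rho:.3f}")
print("FBN bins used on one ROI:", len(np.unique(discretize_fbn(rois[0]))))
```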
Optimizations for Deep Learning-Based CT Image Enhancement
Computed tomography (CT) combined with deep learning (DL) has recently shown great potential in biomedical imaging. Complex DL models with varying architectures inspired by the human brain are improving imaging software and aiding diagnosis. However, the accuracy of these DL models heavily relies on the datasets used for training, which often contain low-quality CT images from low-dose CT (LDCT) scans. Moreover, in contrast to the neural architecture of the human brain, DL models today are dense and complex, resulting in a significant computational footprint. Therefore, in this work, we propose sparse optimizations to minimize the complexity of the DL models and leverage architecture-aware optimization to reduce the total training time of these DL models. To that end, we leverage a DL model called DenseNet and Deconvolution Network (DDNet). The model enhances LDCT chest images into high-quality (HQ) ones but requires many hours to train. To further improve the quality of final HQ images, we first modified DDNet's architecture with a more robust multi-level VGG (ML-VGG) loss function to achieve state-of-the-art CT image enhancement. However, improving the loss function results in increased computational cost. Hence, we introduce sparse optimizations to reduce the complexity of the improved DL model and then propose architecture-aware optimizations to efficiently utilize the underlying computing hardware to reduce the overall training time. Finally, we evaluate our techniques for performance and accuracy using state-of-the-art hardware resources.
MRI prostate cancer radiomics: Assessment of effectiveness and perspectives
Brain malignancy labeling and delineation via magnetic resonance imaging (MRI) remain difficult but critical activities for a variety of clinical diagnostic purposes. Several contemporary studies employed three methodologies: FLAIR, T1c, and T2, considering every brain visualization paradigm providing distinct and essential facts relevant to specific areas of the lesion. In this work, a pre-processing strategy for the operation of just on a limited section of the data instead of the entire image is suggested for the production of a versatile yet robust brain tumor segmentation method. This strategy helps in reduction in processing duration and fixes the concern of overfitting in a cascade deep learning network. In the subsequent stage, an effortless and coherent cascade convolutional neural network (CCNN) is suggested considering the employment of typically coping with a reduced portion of brain data in every layer. The CCNN model harvests all regional and universal characteristics in dual independent ways. In addition, unique extensive evaluations are carried out on the BRATS 2018 dataset, demonstrating that the suggested model gains fair performance: The suggested methodology yields average total tumor score of 0.934, 0.925, and 0.908 for the intensification of the lesions and central scale score of the lesions, respectively.
Radiomic phenotypes of the background lung parenchyma from [18]F-FDG PET/CT images can augment tumor radiomics and clinical factors in predicting response after surgical resection of tumors in patients with non-small cell lung cancer
In this study we investigate the novel approach of using radiomic phenotypes from the lung parenchyma and tumor region of PET/CT images in non-small cell lung cancer (NSCLC) patients to predict overall survival (OS) and progression-free survival (PFS) after tumor resection. We used 144 publicly available fluorodeoxyglucose ([18]F-FDG) PET/CT images from The Cancer Imaging Archive (TCIA) NSCLC Radiogenomics dataset. We used the Cancer Imaging Phenomics Toolkit (CaPTk) to extract radiomic features from each of four image source regions: PET imaging of the tumor, PET imaging of the non-tumor lung parenchyma, CT imaging of the tumor, and CT imaging of the parenchyma. Using each of the four sets of features, we independently clustered patients into phenotypes using unsupervised hierarchical clustering. The four phenotyping schemes, individually, together, and in combination with clinical variables, were assessed for association with time to OS and PFS via Cox proportional-hazards modeling, assessing covariate association via the log-rank p-value and model predictive performance via the C statistic. The clinical variables divided high from low hazard groups with p ≤ 0.05 for OS (p = 0.002, C = 0.62) but not for PFS (p = 0.098, C = 0.58). For PFS, the radiomic phenotype derived from PET of the lung parenchyma performed better than the clinical variables both alone (p = 0.014, C = 0.59) and in conjunction with the clinical variables (p = 0.014, C = 0.62). Radiomic phenotypes from the lung parenchyma of PET/CT images can improve outcome prediction for PFS after tumor resection in patients with NSCLC. Radiomic phenotypes from the non-cancerous parenchyma may derive prognostic value by detecting differences in the tissue linked to the biology of recurrence.
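A compact, hedged sketch of the phenotype-to-outcome analysis described above: hierarchical clustering of mock radiomic features into phenotypes, then a Cox proportional-hazards fit with lifelines, reading off the C statistic and a log-rank p-value. The feature matrix, two-cluster choice, and survival data are all illustrative, not the study's.

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(6)
X = rng.normal(size=(144, 20))               # mock radiomic features, 144 patients

# Unsupervised hierarchical clustering into two phenotypes
Z = linkage(X, method="ward")
phenotype = fcluster(Z, t=2, criterion="maxclust") - 1      # labels 0 or 1

# Mock survival data, with phenotype 1 at higher hazard for illustration
time = rng.exponential(scale=np.where(phenotype == 1, 20, 35))
event = rng.integers(0, 2, size=144).astype(bool)

df = pd.DataFrame({"phenotype": phenotype, "time": time, "event": event})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print("C statistic:", round(cph.concordance_index_, 3))

lr = logrank_test(time[phenotype == 0], time[phenotype == 1],
                  event_observed_A=event[phenotype == 0],
                  event_observed_B=event[phenotype == 1])
print("log-rank p:", round(lr.p_value, 4))
```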
A Fast Semi-Automatic Segmentation Tool for Processing Brain Tumor Images
Adaptive cascaded transformer U-Net for MRI brain tumor segmentation
Chen, B.
Sun, Q.
Han, Y.
Liu, B.
Zhang, J.
Zhang, Q.
Phys Med Biol2024Journal Article, cited 3 times
Website
BraTS-TCGA-GBM
BraTS-TCGA-LGG
RSNA-ASNR-MICCAI BraTS 2021
Objective. Brain tumor segmentation on magnetic resonance imaging (MRI) plays an important role in assisting the diagnosis and treatment of cancer patients. Recently, cascaded U-Net models have achieved excellent performance via conducting coarse-to-fine segmentation of MRI brain tumors. However, they are still restricted by obvious global and local differences among various brain tumors, which are difficult to solve with conventional convolutions. Approach. To address the issue, this study proposes a novel Adaptive Cascaded Transformer U-Net (ACTransU-Net) for MRI brain tumor segmentation, which simultaneously integrates Transformer and dynamic convolution into a single cascaded U-Net architecture to adaptively capture global information and local details of brain tumors. ACTransU-Net first cascades two 3D U-Nets into a two-stage network to segment brain tumors from coarse to fine. Subsequently, it integrates omni-dimensional dynamic convolution modules into the second-stage shallow encoder and decoder, thereby enhancing the local detail representation of various brain tumors through dynamically adjusting convolution kernel parameters. Moreover, 3D Swin-Transformer modules are introduced into the second-stage deep encoder and decoder to capture image long-range dependencies, which helps adapt the global representation of brain tumors. Main results. Extensive experiment results evaluated on the public BraTS 2020 and BraTS 2021 brain tumor data sets demonstrate the effectiveness of ACTransU-Net, with average DSC of 84.96% and 91.37%, and HD95 of 10.81 and 7.31 mm, proving competitiveness with the state-of-the-art methods. Significance. The proposed method focuses on adaptively capturing both global information and local details of brain tumors, aiding physicians in their accurate diagnosis. In addition, it has the potential to extend ACTransU-Net for segmenting other types of lesions. The source code is available at: https://github.com/chenbn266/ACTransUnet.
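The two headline metrics here, DSC and HD95, can be computed from binary masks with scipy distance transforms. Below is a minimal sketch assuming isotropic unit spacing and mock spherical masks; anisotropic voxel sizes would need the `sampling` argument of the distance transform.

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Dice similarity coefficient between boolean masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a, b):
    """95th-percentile symmetric surface distance between boolean masks a, b."""
    surf_a = a ^ ndimage.binary_erosion(a)           # boundary shell of a
    surf_b = b ^ ndimage.binary_erosion(b)
    dt_a = ndimage.distance_transform_edt(~surf_a)   # distance to a's surface
    dt_b = ndimage.distance_transform_edt(~surf_b)   # distance to b's surface
    d_ab = dt_b[surf_a]                              # a-surface -> b-surface
    d_ba = dt_a[surf_b]                              # b-surface -> a-surface
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)

# Mock 3D tumor masks: two slightly offset spheres
zz, yy, xx = np.mgrid[:48, :48, :48]
gt   = (zz - 24)**2 + (yy - 24)**2 + (xx - 24)**2 < 10**2
pred = (zz - 25)**2 + (yy - 24)**2 + (xx - 26)**2 < 9**2

print(f"DSC  = {dice(pred, gt):.3f}")
print(f"HD95 = {hd95(pred, gt):.2f} voxels")
```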
An Integrated Machine Learning Framework Identifies Prognostic Gene Pair Biomarkers Associated with Programmed Cell Death Modalities in Clear Cell Renal Cell Carcinoma
Chen, B.
Zhou, M.
Guo, L.
Huang, H.
Sun, X.
Peng, Z.
Wu, D.
Chen, W.
Front Biosci (Landmark Ed)2024Journal Article, cited 0 times
TCGA-KIRC
Humans
*Carcinoma
Renal Cell/genetics
Prognosis
Apoptosis
Machine Learning
*Kidney Neoplasms/genetics
Biomarkers
Prss23
Clear cell renal cell carcinoma (ccRCC)
programmed cell death
Radiomics
single-cell RNA-seq
BACKGROUND: Clear cell renal cell carcinoma (ccRCC) is a common and lethal urological malignancy for which there are no effective personalized therapeutic strategies. Programmed cell death (PCD) patterns have emerged as critical determinants of clinical prognosis and immunotherapy responses. However, the actual clinical relevance of PCD processes in ccRCC is still poorly understood. METHODS: We screened for PCD-related gene pairs through single-sample gene set enrichment analysis (ssGSEA), consensus cluster analysis, and univariate Cox regression analysis. A novel machine learning framework incorporating 12 algorithms and 113 unique combinations were used to develop the cell death-related gene pair score (CDRGPS). Additionally, a radiomic score (Rad_Score) derived from computed tomography (CT) image features was used to classify the CDRGPS status as high or low. Finally, we conclusively verified the function of PRSS23 in ccRCC. RESULTS: The CDRGPS was developed through an integrated machine learning approach that leveraged 113 algorithm combinations. CDRGPS represents an independent prognostic biomarker for overall survival and demonstrated consistent performance between training and external validation cohorts. Moreover, CDRGPS showed better prognostic accuracy compared to seven previously published cell death-related signatures. In addition, patients classified as high-risk by CDRGPS exhibited increased responsiveness to tyrosine kinase inhibitors (TKIs), mammalian Target of Rapamycin (mTOR) inhibitors, and immunotherapy. The Rad_Score demonstrated excellent discrimination for predicting high versus low CDRGPS status, with an area under the curve (AUC) value of 0.813 in the Cancer Imaging Archive (TCIA) database. PRSS23 was identified as a significant factor in the metastasis and immune response of ccRCC, thereby validating experimental in vitro results. CONCLUSIONS: CDRGPS is a robust and non-invasive tool that has the potential to improve clinical outcomes and enable personalized medicine in ccRCC patients.
Simple and Fast Convolutional Neural Network Applied to Median Cross Sections for Predicting the Presence of MGMT Promoter Methylation in FLAIR MRI Scans
Chen, Daniel Tianming
Chen, Allen Tianle
Wang, Haiyan
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Radiogenomics
Challenge
BraTS 2021
BRAIN
Algorithm Development
Convolutional Neural Network (CNN)
In this paper we present a small and fast Convolutional Neural Network (CNN) used to predict the presence of MGMT promoter methylation in Magnetic Resonance Imaging (MRI) scans. Our data set is “The RSNA-ASNR-MICCAI BraTS 2021 Benchmark on Brain Tumor Segmentation and Radiogenomic Classification” by U. Baid, et al. We focus on using the median (“middle-most”) cross section of a FLAIR scan and use this as the input to the neural net for training. This cross section therefore presents the most or nearly the most surface area compared to any other cross section. We are thus able to reduce the computational complexity and time of the training step while preserving the high performance extrapolation capabilities of the model on unseen data.
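A minimal sketch of the idea, assuming a FLAIR volume held as a numpy array: take the median (middle) axial cross section and feed it to a small binary CNN in PyTorch. The architecture below is a stand-in for illustration, not the authors' network, and the volume shape is only BraTS-like.

```python
import numpy as np
import torch
import torch.nn as nn

def median_cross_section(volume):
    """Middle axial slice of a (D, H, W) volume, min-max normalized to [0, 1]."""
    s = volume[volume.shape[0] // 2].astype(np.float32)
    return (s - s.min()) / (s.max() - s.min() + 1e-8)

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)        # logit for MGMT methylated vs. not

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

volume = np.random.rand(155, 240, 240)      # mock FLAIR volume (BraTS-like shape)
x = torch.from_numpy(median_cross_section(volume))[None, None]   # (1, 1, H, W)
logit = TinyCNN()(x)
print("P(methylated) =", torch.sigmoid(logit).item())
```

Training on a single 2D slice per patient, as the paper's idea suggests, cuts per-example cost from a 3D volume to one image, which is where the claimed speedup comes from.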
A hybrid feature selection‐based approach for brain tumor detection and automatic segmentation on multiparametric magnetic resonance images
Chen, Hao
Ban, Duo
Qi, X. Sharon
Pan, Xiaoying
Qiang, Yongqian
Yang, Qing
Medical Physics2021Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
PURPOSE: To develop a novel method based on feature selection, combining convolutional neural network (CNN) and ensemble learning (EL), to achieve high accuracy and efficiency of glioma detection and segmentation using multiparametric MRIs.
METHODS: We proposed an evolutionary feature selection-based hybrid approach for glioma detection and segmentation on 4 MR sequences (T2-FLAIR, T1, T1Gd, and T2). First, we trained a lightweight CNN to detect glioma and mask the suspected region to process large batches of MRI images. Second, we employed a differential evolution algorithm to search a feature space, composed of 416-dimension radiomic features extracted from the four MRI sequences and 128-dimension high-order features extracted by the CNN, to generate an optimal feature combination for pixel classification. Finally, we trained an EL classifier using the optimal feature combination to segment whole tumor (WT) and its subregions including nonenhancing tumor (NET), peritumoral edema (ED), and enhancing tumor (ET) in the suspected region. Experiments were carried out on 300 glioma patients from the BraTS2019 dataset using fivefold cross validation; the model was independently validated using the remaining 35 patients from the same database.
RESULTS: The approach achieved a detection accuracy of 98.8% using four MRIs. The Dice coefficients (and standard deviations) were 0.852 ± 0.057, 0.844 ± 0.046, and 0.799 ± 0.053 for segmentation of WT (NET+ET+ED), tumor core (NET+ET), and ET, respectively. The sensitivities were 0.873 ± 0.074, 0.863 ± 0.072, and 0.852 ± 0.082, and the specificities were 0.994 ± 0.005, 0.994 ± 0.005, and 0.995 ± 0.004 for the WT, tumor core, and ET, respectively. The performance and calculation times were compared with the state-of-the-art approaches; our approach yielded a better overall performance with an average processing time of 139.5 s per set of four sequence MRIs.
CONCLUSIONS: We demonstrated a robust and computational cost-effective hybrid segmentation approach for glioma and its subregions on multi-sequence MR images. The proposed approach can be used for automated target delineation for glioma patients.
HLFSRNN-MIL: A Hybrid Multi-Instance Learning Model for 3D CT Image Classification
Chen, Huilong
Zhang, Xiaoxia
Applied Sciences2024Journal Article, cited 0 times
Website
Lung-PET-CT-Dx
Low-dose CT via convolutional neural network
Chen, Hu
Zhang, Yi
Zhang, Weihua
Liao, Peixi
Li, Ke
Zhou, Jiliu
Wang, Ge
Biomedical Optics Express2017Journal Article, cited 342 times
Website
Algorithm Development
low-dose CT
Convolutional Neural Network (CNN)
Image denoising
MATLAB
In order to reduce the potential radiation risk, low-dose CT has attracted increasing attention. However, simply lowering the radiation dose will significantly degrade the image quality. In this paper, we propose a new noise reduction method for low-dose CT via deep learning without accessing the original projection data. A deep convolutional neural network is used to map low-dose CT images towards their corresponding normal-dose counterparts in a patch-by-patch fashion. Qualitative results demonstrate a great potential of the proposed method on artifact reduction and structure preservation. In terms of the quantitative metrics, the proposed method has shown a substantial improvement in PSNR, RMSE, and SSIM over the competing state-of-the-art methods. Furthermore, the speed of our method is one order of magnitude faster than the iterative reconstruction and patch-based image denoising methods.
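The quantitative metrics named above (PSNR, RMSE, SSIM) are available in scikit-image; a short sketch with a mock normal-dose/denoised image pair follows (image contents and noise level are illustrative).

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(7)
ndct = rng.uniform(0, 1, (256, 256))                   # stand-in normal-dose image
denoised = np.clip(ndct + rng.normal(0, 0.02, ndct.shape), 0, 1)

psnr = peak_signal_noise_ratio(ndct, denoised, data_range=1.0)
ssim = structural_similarity(ndct, denoised, data_range=1.0)
rmse = np.sqrt(np.mean((ndct - denoised) ** 2))
print(f"PSNR={psnr:.2f} dB  SSIM={ssim:.4f}  RMSE={rmse:.4f}")
```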
Generative models improve radiomics: reproducibility and performance in low dose CTs
English summary: Along with the increasing demand for low dose CT in clinical practice, low dose CT radiomics has shown its potential to provide clinical decision support in oncology. As a trade-off for low radiation exposure in low dose CT imaging, higher noise is present in these images. Noise in low dose CT decreases the texture information of the image, and the reproducibility and performance of CT radiomics. One potential solution worth exploring for improving the reproducibility and performance of radiomics based on low dose CT is denoising the images before extracting radiomic features. As the state-of-the-art method for low dose CT denoising, generative models have been widely used in denoising practice. This thesis investigated the possibility of using generative models to enhance the image quality of low dose CTs and improve radiomics reproducibility and performance.

In the first research chapter (Chapter 2) of this thesis, we investigate the benefits of shortcuts in encoder-decoder networks for CT denoising. An encoder-decoder network (EDN) is an important architecture for the generator in generative models, and this chapter provides some guidelines to help design generative models. Results showed that over half of the shortcuts are necessary for CT denoising. However, the network should keep sparse connections between the encoder and decoder. Moreover, deeper shortcuts have a higher priority to be removed in favor of keeping sparse connections.

Paired training datasets are needed for training most generative models. However, collecting such datasets is difficult and time-consuming. To investigate the effect of generative models in improving low dose CT radiomics reproducibility (Chapter 3), two generative models, a Conditional Generative Adversarial Network (CGAN) and an EDN, were trained on paired simulated low-high dose CT images. The trained models were applied to simulated noisy CT images and real low dose CT images. Results showed that denoising using EDNs and CGANs can improve the reproducibility of radiomic features from noisy CTs (including simulated data and real low dose CTs).

To test the improvement of enhanced low dose CT radiomics in real applications more comprehensively, low dose CT radiomics was applied to a new application (Chapter 4). The objective of this application is to develop a lung cancer classification model at the subject (patient) level from multiple examined nodules, without the need for specific expert findings reported at the level of each individual nodule. Lung cancer classification was regarded as a multiple instance learning (MIL) problem: CT radiomics were used as biomarkers to extract information from each nodule, and deep attention-based MIL was used as the classification algorithm at the patient level. Results showed that the proposed method achieves the best performance in lung cancer classification compared with other MIL methods and that the introduced attention mechanism increases the interpretability of results.

To comprehensively investigate the improvements of generative models for CT radiomics performance in real applications, pre-trained generative models were applied to multiple real low dose CT datasets without fine-tuning (Chapter 5). The improved radiomic features were applied to multiple radiomics-related applications: tumor pre-treatment survival prediction and deep attention-based MIL lung cancer diagnosis. The results showed that generative models can improve low dose CT radiomics performance.

To investigate the possibility of using unpaired real low-high dose CT images to establish a denoiser and using the trained denoiser to enhance radiomics reproducibility and performance, a Cycle GAN was adopted as the testing model (Chapter 6). The Cycle GAN was trained on paired simulated datasets (for a comparison study with EDN and CGAN) and on unpaired real datasets. The trained models were applied to simulated noisy CT images and real low dose CT images to test the improvement in radiomics reproducibility and performance. The results showed that Cycle GANs trained on both simulated and real data can improve radiomics reproducibility and performance in low dose CT and achieve results similar to CGANs and EDNs.

Finally, the discussion section of this thesis (Chapter 7) summarizes the barriers that prevent generative models from being applied to real low dose CT radiomics and proposes possible solutions to these barriers. Moreover, the discussion mentions other possible methods to improve low dose CT radiomics performance.
Human papillomavirus (HPV) prediction for oropharyngeal cancer based on CT by using off-the-shelf features: A dual-dataset study
Chen, J.
Cheng, Y.
Chen, L.
Yang, B.
J Appl Clin Med Phys2025Journal Article, cited 0 times
Website
RADCURE
HEAD-NECK-RADIOMICS-HN1
3D deep features
Siamese neural network
image-based HPV prediction for oropharyngeal cancer
transfer learning
BACKGROUND: This study aims to develop a novel predictive model for determining human papillomavirus (HPV) presence in oropharyngeal cancer using computed tomography (CT). Current image-based HPV prediction methods are hindered by high computational demands or suboptimal performance. METHODS: To address these issues, we propose a methodology that employs a Siamese Neural Network architecture, integrating multi-modality off-the-shelf features-handcrafted features and 3D deep features-to enhance the representation of information. We assessed the incremental benefit of combining 3D deep features from various networks and introduced manufacturer normalization. Our method was also designed for computational efficiency, utilizing transfer learning and allowing for model execution on a single-CPU platform. A substantial dataset comprising 1453 valid samples was used as internal validation, a separate independent dataset for external validation. RESULTS: Our proposed model achieved superior performance compared to other methods, with an average area under the receiver operating characteristic curve (AUC) of 0.791 [95% (confidence interval, CI), 0.781-0.809], an average recall of 0.827 [95% CI, 0.798-0.858], and an average accuracy of 0.741 [95% CI, 0.730-0.752], indicating promise for clinical application. In the external validation, proposed method attained an AUC of 0.581 [95% CI, 0.560-0.603] and same network architecture with pure deep features achieved an AUC of 0.700 [95% CI, 0.682-0.717]. An ablation study confirmed the effectiveness of incorporating manufacturer normalization and the synergistic effect of combining different feature sets. CONCLUSION: Overall, our proposed model not only outperforms existing counterparts for HPV status prediction but is also computationally accessible for use on a single-CPU platform, which reduces resource requirements and enhances clinical usability.
Generating anthropomorphic phantoms using fully unsupervised deformable image registration with convolutional neural networks
Chen, Junyu
Li, Ye
Du, Yong
Frey, Eric C
Med Phys2020Journal Article, cited 0 times
Website
NaF-Prostate
PHANTOM
Image Registration
Medical Image Simulation
Deep convolutional neural network (DCNN)
PURPOSE: Computerized phantoms have been widely used in nuclear medicine imaging for imaging system optimization and validation. Although the existing computerized phantoms can model anatomical variations through organ and phantom scaling, they do not provide a way to fully reproduce the anatomical variations and details seen in humans. In this work, we present a novel registration-based method for creating highly anatomically detailed computerized phantoms. We experimentally show substantially improved image similarity of the generated phantom to a patient image. METHODS: We propose a deep-learning-based unsupervised registration method to generate a highly anatomically detailed computerized phantom by warping an XCAT phantom to a patient computed tomography (CT) scan. We implemented and evaluated the proposed method using the NURBS-based XCAT phantom and a publicly available low-dose CT dataset from TCIA. A rigorous tradeoff analysis between image similarity and deformation regularization was conducted to select the loss function and regularization term for the proposed method. A novel SSIM-based unsupervised objective function was proposed. Finally, ablation studies were conducted to evaluate the performance of the proposed method (using the optimal regularization and loss function) and the current state-of-the-art unsupervised registration methods. RESULTS: The proposed method outperformed the state-of-the-art registration methods, such as SyN and VoxelMorph, by more than 8% as measured by the SSIM and by less than 30% as measured by the MSE. The phantom generated by the proposed method was highly detailed and was almost identical in appearance to a patient image. CONCLUSIONS: A deep-learning-based unsupervised registration method was developed to create anthropomorphic phantoms with anatomy labels that can be used as the basis for modeling organ properties. Experimental results demonstrate the effectiveness of the proposed method. The resulting anthropomorphic phantom is highly realistic. Combined with realistic simulations of the image formation process, the generated phantoms could serve in many applications of medical imaging research.
Empathy structure in multi-agent system with the mechanism of self-other separation: Design and analysis from a random walk view
Chen, Jize
Liu, Bo
Qu, Zhenshen
Wang, Changhong
Cognitive Systems Research2023Journal Article, cited 0 times
Website
LIDC-IDRI
Random Forest
Semi-supervised learning
In a socialized multi-agent system, the preferences of individuals will be inevitably influenced by others. This paper introduces an extended empathy structure to characterize the coupling process of preferences under specific relations and make it cover scenarios including human society, human–machine system, and even abiotic engineering applications. In this model, empathy is abstracted as a stochastic experience process in the form of Markov chain, and the coupled empathy utility is defined as the expectation of obtaining preferences under the corresponding probability distribution. The self-other separation is the core concept with which our structure can exhibit social attributes, including attraction of implicit states, inhibition of excessive empathy, attention of empathetic targets, and anisotropy of the utility distribution. Compared with the previous empirical models, our model has a better performance on the data set and can provide a new perspective for designing and analyzing the cognitive layer of the human–machine network, as well as the information fusion and semi-supervised clustering methods in engineering.
Improving reproducibility and performance of radiomics in low‐dose CT using cycle GANs
Chen, Junhua
Wee, Leonard
Dekker, Andre
Bermejo, Inigo
Journal of applied clinical medical physics2022Journal Article, cited 0 times
LDCT-and-Projection-data
NSCLC-Radiomics
TCGA-LUAD
BACKGROUND: As a means to extract biomarkers from medical imaging, radiomics has attracted increased attention from researchers. However, reproducibility and performance of radiomics in low-dose CT scans are still poor, mostly due to noise. Deep learning generative models can be used to denoise these images and in turn improve radiomics' reproducibility and performance. However, most generative models are trained on paired data, which can be difficult or impossible to collect.
PURPOSE: In this article, we investigate the possibility of denoising low-dose CTs using cycle generative adversarial networks (GANs) to improve radiomics reproducibility and performance based on unpaired datasets.
METHODS AND MATERIALS: Two cycle GANs were trained: (1) from paired data, by simulating low-dose CTs (i.e., introducing noise) from high-dose CTs and (2) from unpaired real low-dose CTs. To accelerate convergence, a slice-paired training strategy was introduced during GAN training. The trained GANs were applied to three scenarios: (1) improving radiomics reproducibility in simulated low-dose CT images, (2) improving radiomics reproducibility in same-day repeat low-dose CTs (RIDER dataset), and (3) improving radiomics performance in survival prediction. Cycle GAN results were compared with a conditional GAN (CGAN) and an encoder-decoder network (EDN) trained on simulated paired data.
RESULTS: The cycle GAN trained on simulated data improved concordance correlation coefficients (CCC) of radiomic features from 0.87 (95%CI, [0.833,0.901]) to 0.93 (95%CI, [0.916,0.949]) on simulated noise CT and from 0.89 (95%CI, [0.881,0.914]) to 0.92 (95%CI, [0.908,0.937]) on the RIDER dataset, as well improving the area under the receiver operating characteristic curve (AUC) of survival prediction from 0.52 (95%CI, [0.511,0.538]) to 0.59 (95%CI, [0.578,0.602]). The cycle GAN trained on real data increased the CCCs of features in RIDER to 0.95 (95%CI, [0.933,0.961]) and the AUC of survival prediction to 0.58 (95%CI, [0.576,0.596]).
CONCLUSION: The results show that cycle GANs trained on both simulated and real data can improve radiomics' reproducibility and performance in low-dose CT and achieve similar results compared to CGANs and EDNs.
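The concordance correlation coefficient (CCC) reported throughout these results is Lin's CCC; a direct numpy implementation, shown as a sketch with mock test/retest feature values, is below.

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return 2 * cov / (vx + vy + (mx - my) ** 2)

rng = np.random.default_rng(8)
test = rng.normal(0, 1, 100)                  # a radiomic feature on scan 1
retest = test + rng.normal(0, 0.3, 100)       # same feature on the repeat scan
print(f"CCC = {lins_ccc(test, retest):.3f}")
```

Unlike Pearson correlation, the CCC penalizes both location and scale shifts between the two measurements, which is why it is the preferred agreement metric in test/retest radiomics studies.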
Using 3D deep features from CT scans for cancer prognosis based on a video classification model: A multi-dataset feasibility study
Chen, J.
Wee, L.
Dekker, A.
Bermejo, I.
Med Phys2023Journal Article, cited 0 times
Website
NSCLC-Radiomics
OPC-Radiomics
Head-Neck-Radiomics-HN1
RIDER LUNG CT
3D deep neural network
cancer prognosis
deep features
Radiomics
Transfer learning
BACKGROUND: Cancer prognosis before and after treatment is key for patient management and decision making. Handcrafted imaging biomarkers-radiomics-have shown potential in predicting prognosis. PURPOSE: However, given the recent progress in deep learning, it is timely and relevant to pose the question: could deep learning based 3D imaging features be used as imaging biomarkers and outperform radiomics? METHODS: Effectiveness, reproducibility in test/retest, across modalities, and correlation of deep features with clinical features such as tumor volume and TNM staging were tested in this study. Radiomics was introduced as the reference image biomarker. For deep feature extraction, we transformed the CT scans into videos, and we adopted the pre-trained Inflated 3D ConvNet (I3D) video classification network as the architecture. We used four datasets-LUNG 1 (n = 422), LUNG 4 (n = 106), OPC (n = 605), and H&N 1 (n = 89)-with 1270 samples from different centers and cancer types-lung and head and neck cancer-to test deep features' predictiveness and two additional datasets to assess the reproducibility of deep features. RESULTS: Support Vector Machine-Recursive Feature Elimination (SVM-RFE) selected top 100 deep features achieved a concordance index (CI) of 0.67 in survival prediction in LUNG 1, 0.87 in LUNG 4, 0.76 in OPC, and 0.87 in H&N 1, while SVM-RFE selected top 100 radiomics achieved CIs of 0.64, 0.77, 0.73, and 0.74, respectively, all statistically significant differences (p < 0.01, Wilcoxon's test). Most selected deep features are not correlated with tumor volume and TNM staging. However, full radiomics features show higher reproducibility than full deep features in a test/retest setting (0.89 vs. 0.62, concordance correlation coefficient). CONCLUSION: The results show that deep features can outperform radiomics while providing different views for tumor prognosis compared to tumor volume and TNM staging. However, deep features suffer from lower reproducibility than radiomic features and lack the interpretability of the latter.
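The SVM-RFE selection used here is available directly in scikit-learn: recursive feature elimination wrapped around a linear SVM. A hedged sketch selecting a "top 100" subset of mock deep features follows; cohort size, feature dimension, and labels are illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

rng = np.random.default_rng(9)
X = rng.normal(size=(422, 1024))     # mock I3D deep features, LUNG1-sized cohort
y = rng.integers(0, 2, 422)          # mock binary outcome labels

# Recursive feature elimination around a linear SVM, dropping 10% per round
rfe = RFE(SVC(kernel="linear"), n_features_to_select=100, step=0.1)
rfe.fit(X, y)
X_top100 = X[:, rfe.support_]        # reduced feature matrix for the final model
print("selected:", rfe.support_.sum(), "features")
```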
Generative models improve radiomics reproducibility in low dose CTs: a simulation study
Chen, Junhua
Zhang, Chong
Traverso, Alberto
Zhovannik, Ivan
Dekker, Andre
Wee, Leonard
Bermejo, Inigo
Physics in Medicine and Biology2021Journal Article, cited 0 times
NSCLC-Radiomics
Radiomics is an active area of research in medical image analysis; however, poor reproducibility of radiomics has hampered its application in clinical practice. This issue is especially prominent when radiomic features are calculated from noisy images, such as low dose computed tomography (CT) scans. In this article, we investigate the possibility of improving the reproducibility of radiomic features calculated on noisy CTs by using generative models for denoising. Our work concerns two types of generative models: the encoder-decoder network (EDN) and the conditional generative adversarial network (CGAN). We then compared their performance against a more traditional 'non-local means' denoising algorithm. We added noise to sinograms of full dose CTs to mimic low dose CTs with two levels of noise: low-noise CT and high-noise CT. Models were trained on high-noise CTs and used to denoise low-noise CTs without re-training. We tested the performance of our model on real data, using a dataset of same-day repeated low dose CTs in order to assess the reproducibility of radiomic features in denoised images. The EDN and the CGAN achieved similar improvements in the concordance correlation coefficient (CCC) of radiomic features, for low-noise images from 0.87 [95%CI, (0.833, 0.901)] to 0.92 [95%CI, (0.909, 0.935)] and for high-noise images from 0.68 [95%CI, (0.617, 0.745)] to 0.92 [95%CI, (0.909, 0.936)], respectively. The EDN and the CGAN improved the test-retest reliability of radiomic features (mean CCC increased from 0.89 [95%CI, (0.881, 0.914)] to 0.94 [95%CI, (0.927, 0.951)]) based on real low dose CTs. These results show that denoising using EDNs and CGANs could be used to improve the reproducibility of radiomic features calculated from noisy CTs. Moreover, images at different noise levels can be denoised to improve the reproducibility using the above models without the need for re-training, provided the noise intensity is not excessively greater than that of the high-noise CTs. To the authors' knowledge, this is the first effort to improve the reproducibility of radiomic features calculated on low dose CT scans by applying generative models.
Are all shortcuts in encoder–decoder networks beneficial for CT denoising?
Chen, Junhua
Zhang, Chong
Wee, Leonard
Dekker, Andre
Bermejo, Inigo
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization2022Journal Article, cited 0 times
Website
NSCLC-Radiomics
Image denoising
Computed Tomography (CT)
Deep Learning
Denoising of CT scans has attracted the attention of many researchers in the medical image analysis domain. Encoder–decoder networks are deep learning neural networks that have become common for image denoising in recent years. Shortcuts between the encoder and decoder layers are crucial for some image-to-image translation tasks. However, are all shortcuts necessary for CT denoising? To answer this question, we set up two encoder–decoder networks representing two popular architectures and then progressively removed shortcuts from the networks from shallow to deep (forward removal) and from deep to shallow (backward removal). We used two unrelated datasets with different noise levels to test the denoising performance of these networks using two metrics, namely root mean square error and content loss. The results show that while more than half of the shortcuts are still indispensable for CT scan denoising, removing certain shortcuts leads to performance improvement for denoising. Both shallow and deep shortcuts might be removed, thus retaining sparse connections, especially when the noise level is high. Backward removal seems to have a better performance than forward removal, which means deep shortcuts have priority to be removed. Finally, we propose a hypothesis to explain this phenomenon and validate it in the experiments.
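A minimal sketch of the experimental knob studied here: an encoder-decoder denoiser in PyTorch whose encoder-decoder shortcuts can be switched off individually, so shallow-to-deep and deep-to-shallow removal schedules can be compared. Depths, channel widths, and the additive-shortcut choice are assumptions, not the paper's exact networks.

```python
import torch
import torch.nn as nn

class EDN(nn.Module):
    """3-level encoder-decoder with individually removable shortcuts."""
    def __init__(self, shortcuts=(True, True, True)):   # index 0 = shallowest
        super().__init__()
        self.shortcuts = shortcuts
        ch = [1, 32, 64, 128]
        self.enc = nn.ModuleList([
            nn.Sequential(nn.Conv2d(ch[i], ch[i+1], 3, stride=2, padding=1),
                          nn.ReLU())
            for i in range(3)])
        self.dec = nn.ModuleList([
            nn.Sequential(nn.ConvTranspose2d(ch[i+1], ch[i], 4, stride=2,
                                             padding=1), nn.ReLU())
            for i in reversed(range(3))])
        self.out = nn.Conv2d(1, 1, 3, padding=1)

    def forward(self, x):
        skips = []
        for enc in self.enc:
            skips.append(x)                  # feature map entering each level
            x = enc(x)
        for j, dec in enumerate(self.dec):
            x = dec(x)
            level = 2 - j                    # decoder step back to encoder level
            if self.shortcuts[level]:
                x = x + skips[level]         # additive shortcut at this level
        return self.out(x)

x = torch.randn(1, 1, 64, 64)
full = EDN((True, True, True))(x)            # all shortcuts kept
sparse = EDN((True, False, False))(x)        # deep shortcuts removed first
print(full.shape, sparse.shape)
```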
Glioma Image Segmentation Method on Fully Convolutional Neural Network
To address the differences in segmentation performance across the three target regions in fully convolutional neural network-based glioma image segmentation, we propose a comprehensive evaluation method of neural network performance based on four evaluation indices. In addition, we analyze the performance and characteristics of neural networks in the glioma segmentation task, study the segmentation performance in the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) regions, and propose a deep learning algorithm based on multiple networks in parallel. In this paper, the input image of the two-dimensional neural network is sliced, the input of the three-dimensional neural network is processed in two ways (overlapping and non-overlapping), and in the image post-processing part the three-dimensional image is reconstructed before the evaluation indices are calculated. We apply four evaluation indices, Dice, Sensitivity, PPV, and Hausdorff, to the three segmentation target regions, perform RSR* weight calculation, and finally carry out a comprehensive evaluation. Experimental results show that Vnet has the best comprehensive segmentation performance, FCN-8s has the best segmentation performance in the TC region, Unet++ has the best segmentation performance in the ET region, and Vnet has the best segmentation performance in the WT region. Based on this, we propose a FUV multi-network parallel algorithm combined with a reverse attention mechanism to improve the segmentation accuracy of the three segmentation target regions.
Machine vision-assisted identification of the lung adenocarcinoma category and high-risk tumor area based on CT images
Chen, L.
Qi, H.
Lu, D.
Zhai, J.
Cai, K.
Wang, L.
Liang, G.
Zhang, Z.
Patterns (N Y)2022Journal Article, cited 1 times
Website
Lung Fused-CT-Pathology
NSCLC-Radiomics-Genomics
CPTAC-LUAD
NSCLC-Radiomics
APOLLO-5-LUAD
Computed Tomography (CT)
Convolutional Neural Network (CNN)
Deep learning
LUNG
Computer Aided Diagnosis (CADx)
Computed tomography (CT) is a widely used medical imaging technique. It is important to determine the relationship between CT images and pathological examination results of lung adenocarcinoma to better support its diagnosis. In this study, a bilateral-branch network with a knowledge distillation procedure (KDBBN) was developed for the auxiliary diagnosis of lung adenocarcinoma. KDBBN can automatically identify adenocarcinoma categories and detect the lesion area that most likely contributes to the identification of specific types of adenocarcinoma based on lung CT images. In addition, a knowledge distillation process was established for the proposed framework to ensure that the developed models can be applied to different datasets. The results of our comprehensive computational study confirmed that our method provides a reliable basis for adenocarcinoma diagnosis supplementary to the pathological examination. Meanwhile, the high-risk area labeled by KDBBN highly coincides with the related lesion area labeled by doctors in clinical diagnosis.
Aggregating Multi-scale Prediction Based on 3D U-Net in Brain Tumor Segmentation
Magnetic resonance imaging (MRI) is the dominant modality used in the initial evaluation of patients with primary brain tumors due to its superior image resolution and high safety profile. Automated segmentation of brain tumors from MRI is critical in the determination of response to therapy. In this paper, we propose a novel method which aggregates multi-scale predictions from a 3D U-Net to segment enhancing tumor (ET), whole tumor (WT), and tumor core (TC) from multimodal MRI. The multi-scale predictions are derived from the decoder part of the 3D U-Net at different resolutions. The final prediction takes the minimum value of the corresponding pixel from the upsampled multi-scale predictions. Aggregating multi-scale predictions adds constraints to the network, which is beneficial for limited data. Additionally, we employ a model ensembling strategy to further improve the performance of the proposed network. Finally, we achieve Dice scores of 0.7745, 0.8640, and 0.7914, and Hausdorff distances (95th percentile) of 4.2365, 6.9381, and 6.6026 for ET, WT, and TC, respectively, on the test set in BraTS 2019.
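The aggregation rule described above (per-pixel minimum over the upsampled multi-scale predictions) is essentially one line in PyTorch; a hedged sketch with mock decoder probability maps follows, where the three resolutions and volume size are assumed values.

```python
import torch
import torch.nn.functional as F

# Mock sigmoid probability maps from three decoder depths of a 3D U-Net
full  = torch.rand(1, 1, 64, 64, 64)      # full-resolution prediction
half  = torch.rand(1, 1, 32, 32, 32)      # 1/2-resolution prediction
quart = torch.rand(1, 1, 16, 16, 16)      # 1/4-resolution prediction

def up(p):
    """Upsample a coarse probability map to the full output grid."""
    return F.interpolate(p, size=(64, 64, 64), mode="trilinear",
                         align_corners=False)

# Final prediction: per-voxel minimum across the upsampled scales
final = torch.minimum(full, torch.minimum(up(half), up(quart)))
mask = final > 0.5
print(mask.float().mean().item())
```

Taking the minimum makes a voxel positive only when every scale agrees, which is the conservative constraint the abstract credits for helping with limited data.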
Radiomic Features at CT Can Distinguish Pancreatic Cancer from Noncancerous Pancreas
Chen, Po-Ting
Chang, Dawei
Yen, Huihsuan
Liu, Kao-Lang
Huang, Su-Yun
Roth, Holger
Wu, Ming-Shiang
Liao, Wei-Chih
Wang, Weichung
Radiol Imaging Cancer2021Journal Article, cited 0 times
Website
Pancreas-CT
PANCREAS
Computer Aided Diagnosis (CADx)
Computed Tomography (CT)
Purpose To identify distinguishing CT radiomic features of pancreatic ductal adenocarcinoma (PDAC) and to investigate whether radiomic analysis with machine learning can distinguish between patients who have PDAC and those who do not. Materials and Methods This retrospective study included contrast material-enhanced CT images in 436 patients with PDAC and 479 healthy controls from 2012 to 2018 from Taiwan that were randomly divided for training and testing. Another 100 patients with PDAC (enriched for small PDACs) and 100 controls from Taiwan were identified for testing (from 2004 to 2011). An additional 182 patients with PDAC and 82 healthy controls from the United States were randomly divided for training and testing. Images were processed into patches. An XGBoost (https://xgboost.ai/) model was trained to classify patches as cancerous or noncancerous. Patients were classified as either having or not having PDAC on the basis of the proportion of patches classified as cancerous. For both patch-based and patient-based classification, the models were characterized as either a local model (trained on Taiwanese data only) or a generalized model (trained on both Taiwanese and U.S. data). Sensitivity, specificity, and accuracy were calculated for patch- and patient-based analysis for the models. Results The median tumor size was 2.8 cm (interquartile range, 2.0-4.0 cm) in the 536 Taiwanese patients with PDAC (mean age, 65 years +/- 12 [standard deviation]; 289 men). Compared with normal pancreas, PDACs had lower values for radiomic features reflecting intensity and higher values for radiomic features reflecting heterogeneity. The performance metrics for the developed generalized model when tested on the Taiwanese and U.S. test data sets, respectively, were as follows: sensitivity, 94.7% (177 of 187) and 80.6% (29 of 36); specificity, 95.4% (187 of 196) and 100% (16 of 16); accuracy, 95.0% (364 of 383) and 86.5% (45 of 52); and area under the curve, 0.98 and 0.91. Conclusion Radiomic analysis with machine learning enabled accurate detection of PDAC at CT and could identify patients with PDAC. Keywords: CT, Computer Aided Diagnosis (CAD), Pancreas, Computer Applications-Detection/Diagnosis Supplemental material is available for this article. (c) RSNA, 2021.
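A hedged sketch of the two-stage rule this abstract describes (a patch-level XGBoost classifier, then a patient-level decision from the proportion of patches called cancerous). The feature dimensions, threshold, and data below are illustrative assumptions, not the study's values.

import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
X_patches = rng.normal(size=(1000, 20))    # radiomic features per patch
y_patches = rng.integers(0, 2, size=1000)  # 1 = cancerous patch

clf = XGBClassifier(n_estimators=100, max_depth=4, eval_metric="logloss")
clf.fit(X_patches, y_patches)

def classify_patient(patient_patches: np.ndarray, threshold: float = 0.5) -> bool:
    """Call the patient PDAC-positive if the fraction of patches predicted
    cancerous exceeds `threshold` (the cut-off is an assumed parameter)."""
    patch_labels = clf.predict(patient_patches)
    return patch_labels.mean() > threshold

print(classify_patient(rng.normal(size=(50, 20))))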
Pancreatic Cancer Detection on CT Scans with Deep Learning: A Nationwide Population-based Study
Chen, P. T.
Wu, T.
Wang, P.
Chang, D.
Liu, K. L.
Wu, M. S.
Roth, H. R.
Lee, P. C.
Liao, W. C.
Wang, W.
Radiology2022Journal Article, cited 5 times
Website
Pancreas-CT
Segmentation
Classification
Computer Aided Detection (CADe)
Deep Learning
Background Approximately 40% of pancreatic tumors smaller than 2 cm are missed at abdominal CT. Purpose To develop and to validate a deep learning (DL)-based tool able to detect pancreatic cancer at CT. Materials and Methods Retrospectively collected contrast-enhanced CT studies in patients diagnosed with pancreatic cancer between January 2006 and July 2018 were compared with CT studies of individuals with a normal pancreas (control group) obtained between January 2004 and December 2019. An end-to-end tool comprising a segmentation convolutional neural network (CNN) and a classifier ensembling five CNNs was developed and validated in the internal test set and a nationwide real-world validation set. The sensitivities of the computer-aided detection (CAD) tool and radiologist interpretation were compared using the McNemar test. Results A total of 546 patients with pancreatic cancer (mean age, 65 years +/- 12 [SD], 297 men) and 733 control subjects were randomly divided into training, validation, and test sets. In the internal test set, the DL tool achieved 89.9% (98 of 109; 95% CI: 82.7, 94.9) sensitivity and 95.9% (141 of 147; 95% CI: 91.3, 98.5) specificity (area under the receiver operating characteristic curve [AUC], 0.96; 95% CI: 0.94, 0.99), without a significant difference (P = .11) in sensitivity compared with the original radiologist report (96.1% [98 of 102]; 95% CI: 90.3, 98.9). In a test set of 1473 real-world CT studies (669 malignant, 804 control) from institutions throughout Taiwan, the DL tool distinguished between CT malignant and control studies with 89.7% (600 of 669; 95% CI: 87.1, 91.9) sensitivity and 92.8% specificity (746 of 804; 95% CI: 90.8, 94.5) (AUC, 0.95; 95% CI: 0.94, 0.96), with 74.7% (68 of 91; 95% CI: 64.5, 83.3) sensitivity for malignancies smaller than 2 cm. Conclusion The deep learning-based tool enabled accurate detection of pancreatic cancer on CT scans, with reasonable sensitivity for tumors smaller than 2 cm. (c) RSNA, 2022 Online supplemental material is available for this article. See also the editorial by Aisen and Rodrigues in this issue.
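The paired sensitivity comparison mentioned in this abstract uses the McNemar test; a small sketch with statsmodels follows. The 2x2 counts are made-up numbers, not the study's data.

from statsmodels.stats.contingency_tables import mcnemar

# Rows: DL tool correct / incorrect; columns: radiologist correct / incorrect,
# counted over the same set of cancer cases.
table = [[95, 3],
         [4, 7]]
result = mcnemar(table, exact=True)  # exact binomial test on the discordant pairs
print(f"statistic={result.statistic}, p-value={result.pvalue:.3f}")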
Towards a general-purpose foundation model for computational pathology
Chen, R. J.
Ding, T.
Lu, M. Y.
Williamson, D. F. K.
Jaume, G.
Song, A. H.
Chen, B.
Zhang, A.
Shao, D.
Shaban, M.
Williams, M.
Oldenburg, L.
Weishaupt, L. L.
Wang, J. J.
Vaidya, A.
Le, L. P.
Gerber, G.
Sahai, S.
Williams, W.
Mahmood, F.
Nat Med2024Journal Article, cited 0 times
TCGA-LUAD
TCGA-LUSC
CPTAC-LUAD
CPTAC-LUSC
CPTAC-CCRCC
TCGA-GBM
TCGA-ESCA
Hungarian-Colorectal-Screening
TIL-WSI-TCGA
Large-scale data
Pathomics
Pathogenomics
*Artificial Intelligence
Workflow
Self-supervised
Cell segmentation
CLAM
ABMIL
Scikit-Learn
Cancer Metastases in Lymph Nodes Challenge 2016 (CAMELYON16) Challenge
Quantitative evaluation of tissue images is crucial for computational pathology (CPath) tasks, requiring the objective characterization of histopathological entities from whole-slide images (WSIs). The high resolution of WSIs and the variability of morphological features present significant challenges, complicating the large-scale annotation of data for high-performance applications. To address this challenge, current efforts have proposed the use of pretrained image encoders through transfer learning from natural image datasets or self-supervised learning on publicly available histopathology datasets, but have not been extensively developed and evaluated across diverse tissue types at scale. We introduce UNI, a general-purpose self-supervised model for pathology, pretrained using more than 100 million images from over 100,000 diagnostic H&E-stained WSIs (>77 TB of data) across 20 major tissue types. The model was evaluated on 34 representative CPath tasks of varying diagnostic difficulty. In addition to outperforming previous state-of-the-art models, we demonstrate new modeling capabilities in CPath such as resolution-agnostic tissue classification, slide classification using few-shot class prototypes, and disease subtyping generalization in classifying up to 108 cancer types in the OncoTree classification system. UNI advances unsupervised representation learning at scale in CPath in terms of both pretraining data and downstream evaluation, enabling data-efficient artificial intelligence models that can generalize and transfer to a wide range of diagnostically challenging tasks and clinical workflows in anatomic pathology.
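One capability named in this abstract is slide classification with few-shot class prototypes. A minimal sketch of that idea (random embeddings stand in for features from a pretrained encoder such as UNI; the nearest-prototype rule is a common formulation, assumed here rather than taken from the paper):

import numpy as np

def prototype_classify(support: dict, queries: np.ndarray) -> np.ndarray:
    """support: class label -> (k, d) array of few-shot embeddings."""
    labels = sorted(support)
    prototypes = np.stack([support[c].mean(axis=0) for c in labels])  # (C, d)
    # L2-normalise so nearest prototype == highest cosine similarity
    prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    return np.array(labels)[np.argmax(q @ prototypes.T, axis=1)]

rng = np.random.default_rng(0)
support = {0: rng.normal(size=(5, 128)), 1: rng.normal(size=(5, 128))}
print(prototype_classify(support, rng.normal(size=(3, 128))))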
Deep learning-based multimodel prediction for disease-free survival status of patients with clear cell renal cell carcinoma after surgery: a multicenter cohort study
Chen, S.
Gao, F.
Guo, T.
Jiang, L.
Zhang, N.
Wang, X.
Zheng, J.
Int J Surg2024Journal Article, cited 0 times
TCGA-KIRC
CPTAC-CCRCC
Multimodal Imaging
Pathomics
Whole Slide Imaging (WSI)
Cell segmentation
Computed Tomography (CT)
Predictive model
BACKGROUND: Although separate analysis of individual factors can somewhat improve prognostic performance, integration of multimodal information into a single signature is necessary to stratify patients with clear cell renal cell carcinoma (ccRCC) for adjuvant therapy after surgery. METHODS: A total of 414 patients with whole slide images, computed tomography images, and clinical data from three patient cohorts were retrospectively analyzed. The authors applied deep learning and machine learning algorithms to construct three single-modality prediction models for disease-free survival of ccRCC based on whole slide images, cell segmentation, and computed tomography images, respectively. A multimodel prediction signature (MMPS) for disease-free survival was further developed by combining the three single-modality prediction models and the tumor stage/grade system. Prognostic performance of the model was also verified in two independent validation cohorts. RESULTS: The single-modality prediction models performed well in predicting the disease-free survival status of ccRCC. The MMPS achieved higher area under the curve values of 0.742, 0.917, and 0.900 in three independent patient cohorts, respectively. MMPS could distinguish patients with worse disease-free survival, with hazard ratios of 12.90 (95% CI: 2.443-68.120, P<0.0001), 11.10 (95% CI: 5.467-22.520, P<0.0001), and 8.27 (95% CI: 1.482-46.130, P<0.0001) in the three patient cohorts. In addition, MMPS outperformed the single-modality prediction models and current clinical prognostic factors, and could provide complements to current risk stratification for adjuvant therapy of ccRCC. CONCLUSION: Our novel multimodel prediction analysis for disease-free survival exhibited significant improvements in prognostic prediction for patients with ccRCC. After further validation in multiple centers and regions, the multimodal system could be a practical tool for clinicians in the treatment of ccRCC patients.
Machine learning-based pathomics signature could act as a novel prognostic marker for patients with clear cell renal cell carcinoma
Chen, S.
Jiang, L.
Gao, F.
Zhang, E.
Wang, T.
Zhang, N.
Wang, X.
Zheng, J.
Br J Cancer2022Journal Article, cited 0 times
CPTAC-CCRCC
TCGA-KIRC
Pathomics
Whole Slide Imaging (WSI)
Carcinoma
Renal Cell/mortality/*pathology
Female
Humans
Image Interpretation
Computer-Assisted/*methods
Kidney Neoplasms/mortality/*pathology
Machine Learning
Male
Neoplasm Grading
Neoplasm Staging
Nomograms
Prognosis
Prospective Studies
Regression Analysis
Retrospective Studies
Survival Analysis
BACKGROUND: Traditional histopathology performed by pathologists with the naked eye is insufficient for accurate survival prediction of clear cell renal cell carcinoma (ccRCC). METHODS: A total of 483 whole slide images (WSIs) from three patient cohorts were retrospectively analyzed. We applied machine learning algorithms to identify optimal digital pathological features and constructed a machine learning-based pathomics signature (MLPS) for ccRCC patients. Prognostic performance of the model was also verified in two independent validation cohorts. RESULTS: MLPS could significantly distinguish ccRCC patients with high survival risk, with hazard ratios of 15.05, 4.49, and 1.65 in the three independent cohorts, respectively. Cox regression analysis revealed that the MLPS could act as an independent prognostic factor for ccRCC patients. An integrated nomogram based on MLPS, the tumour stage system, and the tumour grade system improved current survival prediction accuracy for ccRCC patients, with area under the curve values of 89.5%, 90.0%, 88.5%, and 85.9% for 1-, 3-, 5- and 10-year disease-free survival prediction. DISCUSSION: The machine learning-based pathomics signature could act as a novel prognostic marker for patients with ccRCC. Nevertheless, prospective studies with multicentric patient cohorts are still needed for further verification.
Deep learning-based diagnosis and survival prediction of patients with renal cell carcinoma from primary whole slide images
Chen, Siteng
Wang, Xiyue
Zhang, Jun
Jiang, Liren
Gao, Feng
Xiang, Jinxi
Yang, Sen
Yang, Wei
Zheng, Junhua
Han, Xiao
2024Journal Article, cited 0 times
CPTAC-CCRCC
TCGA-KIRC
Renal cell carcinoma
Artificial Intelligence
Deep Learning
Diagnosis
Survival
It remains an urgent clinical demand to explore novel diagnostic and prognostic biomarkers for renal cell carcinoma (RCC). We proposed deep learning-based artificial intelligence strategies to address this need. The study included 1752 whole slide images from multiple centres.
Based on pixel-level RCC segmentation, the diagnostic model achieved an area under the receiver operating characteristic curve (AUC) of 0.977 (95% CI 0.969–0.984) in the external validation cohort. In addition, our diagnostic model exhibited excellent performance in the differential diagnosis of RCC from renal oncocytoma, achieving an AUC of 0.951 (0.922–0.972). The graderisk model for the recognition of high-grade tumour achieved AUCs of 0.840 (0.805–0.871) in the Cancer Genome Atlas (TCGA) cohort, 0.857 (0.813–0.894) in the Shanghai General Hospital (General) cohort, and 0.894 (0.842–0.933) in the Clinical Proteomic Tumor Analysis Consortium (CPTAC) cohort. The OSrisk model for predicting 5-year survival status achieved an AUC of 0.784 (0.746–0.819) in the TCGA cohort, which was further verified in the independent General cohort and the CPTAC cohort, with AUCs of 0.774 (0.723–0.820) and 0.702 (0.632–0.765), respectively. Moreover, the competing-risk nomogram (CRN) showed its potential as a prognostic indicator, with a hazard ratio (HR) of 5.664 (95% CI 3.893–8.239, p<0.0001), outperforming traditional clinical prognostic indicators. Kaplan–Meier survival analysis further illustrated that the CRN could significantly distinguish patients with high survival risk.
Deep learning-based artificial intelligence could be a useful tool for clinicians to diagnose and predict the prognosis of RCC patients, thus improving the process of individualised treatment.
Deep learning-based pathology signature could reveal lymph node status and act as a novel prognostic marker across multiple cancer types
Chen, S.
Xiang, J.
Wang, X.
Zhang, J.
Yang, S.
Yang, W.
Zheng, J.
Han, X.
Br J Cancer2023Journal Article, cited 0 times
Website
TCGA-COAD
TCGA-ESCA
TCGA-KIRC
TCGA-KIRP
TCGA-LIHC
TCGA-BRCA
TCGA-LUAD
TCGA-READ
TCGA-STAD
TCGA-TGCT
TCGA-THCA
CPTAC-COAD
CPTAC-KIRC
CPTAC-BRCA
CPTAC-LUAD
Whole Slide Imaging (WSI)
Pathomics
Classification
H&E-stained slides
Lymphatic Metastasis/pathology
Prognosis
*Deep Learning
Retrospective Studies
Lymph Nodes/pathology
BACKGROUND: Identifying lymph node metastasis (LNM) relies mainly on indirect radiology. Current studies have omitted quantified associations with traits beyond individual cancer types and thus fail to provide generalisable performance across various tumour types. METHODS: 4400 whole slide images across 11 cancer types were collected for training, cross-verification, and external validation of the pan-cancer lymph node metastasis (PC-LNM) model. We proposed an attention-based weakly supervised neural network built on self-supervised cancer-invariant features for the prediction task. RESULTS: PC-LNM achieved a test area under the curve (AUC) of 0.732 (95% confidence interval: 0.717-0.746, P < 0.0001) in fivefold cross-validation of multiple cancer types, and also demonstrated good generalisation in the external validation cohort with an AUC of 0.699 (95% confidence interval: 0.658-0.737, P < 0.0001). The interpretability results derived from PC-LNM revealed that the regions with the highest attention scores identified by the model generally correspond to tumours with poorly differentiated morphologies. PC-LNM achieved superior performance over previously reported methods and could also act as an independent prognostic factor for patients across multiple tumour types. DISCUSSION: We present an automated pan-cancer model for predicting LNM status from primary tumour histology, which could act as a novel prognostic marker across multiple cancer types.
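A hedged PyTorch sketch of attention-based weakly supervised pooling in the spirit of the model described above (layer sizes and the single-branch attention form are assumptions, not the PC-LNM architecture):

import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim: int = 512, hidden_dim: int = 128):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, 1)
        )
        self.classifier = nn.Linear(feat_dim, 1)

    def forward(self, patch_feats: torch.Tensor):
        """patch_feats: (num_patches, feat_dim) features of one whole slide image."""
        scores = self.attention(patch_feats)             # (num_patches, 1)
        weights = torch.softmax(scores, dim=0)           # attention over patches
        slide_feat = (weights * patch_feats).sum(dim=0)  # weighted slide embedding
        return self.classifier(slide_feat), weights      # LNM logit + attention map

model = AttentionMIL()
logit, attn = model(torch.randn(1000, 512))
print(logit.shape, attn.shape)

The attention weights are what enables the interpretability analysis mentioned in the abstract: patches with the highest weights can be mapped back onto the slide.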
Integrating Radiomics with Genomics for Non-Small Cell Lung Cancer Survival Analysis
Chen, W.
Qiao, X.
Yin, S.
Zhang, X.
Xu, X.
J Oncol2022Journal Article, cited 0 times
NSCLC Radiogenomics
Radiomics
Radiogenomics
Non-Small Cell Lung Cancer (NSCLC)
PURPOSE: The objectives of our study were to assess the association of radiological imaging and gene expression with patient outcomes in non-small cell lung cancer (NSCLC) and construct a nomogram by combining selected radiomic, genomic, and clinical risk factors to improve the performance of the risk model. METHODS: A total of 116 cases of NSCLC with CT images, gene expression, and clinical factors were studied, wherein 87 patients were used as the training cohort, and 29 patients were used as an independent testing cohort. Handcrafted radiomic features and deep-learning genomic features were extracted and selected from CT images and gene expression analysis, respectively. Two risk scores were calculated through Cox regression models for each patient based on radiomic features and genomic features to predict overall survival (OS). Finally, a fusion survival model was constructed by incorporating these two risk scores and clinical factors. RESULTS: The fusion model that combined CT images, gene expression data, and clinical factors effectively stratified patients into low- and high-risk groups. The C-indexes for OS prediction were 0.85 and 0.736 in the training and testing cohorts, respectively, which was better than that based on unimodal data. CONCLUSIONS: Combining radiomics and genomics can effectively improve OS prediction for NSCLC patients.
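A sketch of the fusion step this abstract describes: two per-patient risk scores (radiomic and genomic) plus a clinical factor enter a Cox model for OS, and the C-index is computed. The data are synthetic and lifelines is an assumed library choice, not necessarily the authors'.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(1)
n = 116
df = pd.DataFrame({
    "radiomic_risk": rng.normal(size=n),       # score from handcrafted CT features
    "genomic_risk": rng.normal(size=n),        # score from gene-expression features
    "stage": rng.integers(1, 4, size=n),       # illustrative clinical factor
    "os_months": rng.exponential(40, size=n),
    "event": rng.integers(0, 2, size=n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="event")
risk = cph.predict_partial_hazard(df)
# Higher hazard means shorter expected survival, hence the sign flip.
print("C-index:", concordance_index(df["os_months"], -risk, df["event"]))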
Machine learning with multimodal data for COVID-19
Chen, Weijie
Sá, Rui C.
Bai, Yuntong
Napel, Sandy
Gevaert, Olivier
Lauderdale, Diane S.
Giger, Maryellen L.
Heliyon2023Journal Article, cited 0 times
COVID-19-AR
COVID-19-NY-SBU
In response to the unprecedented global healthcare crisis of the COVID-19 pandemic, the scientific community has joined forces to tackle the challenges and prepare for future pandemics. Multiple modalities of data have been investigated to understand the nature of COVID-19. In this paper, MIDRC investigators present an overview of the state-of-the-art development of multimodal machine learning for COVID-19 and model assessment considerations for future studies. We begin with a discussion of the lessons learned from radiogenomic studies for cancer diagnosis. We then summarize the multi-modality COVID-19 data investigated in the literature including symptoms and other clinical data, laboratory tests, imaging, pathology, physiology, and other omics data. Publicly available multimodal COVID-19 data provided by MIDRC and other sources are summarized. After an overview of machine learning developments using multimodal data for COVID-19, we present our perspectives on the future development of multimodal machine learning models for COVID-19.
Combined Spiral Transformation and Model-Driven Multi-Modal Deep Learning Scheme for Automatic Prediction of TP53 Mutation in Pancreatic Cancer
Chen, Xiahan
Lin, Xiaozhu
Shen, Qing
Qian, Xiaohua
IEEE Transactions on Medical Imaging2021Journal Article, cited 0 times
HEAD-NECK-RADIOMICS-HN1
Pancreatic cancer is a malignant form of cancer with one of the worst prognoses. The poor prognosis and resistance to therapeutic modalities have been linked to TP53 mutation. Pathological examinations, such as biopsies, cannot be frequently performed in clinical practice; therefore, noninvasive and reproducible methods are desired. However, automatic prediction methods based on imaging have drawbacks such as poor 3D information utilization, small sample size, and ineffective multi-modal fusion. In this study, we proposed a model-driven multi-modal deep learning scheme to overcome these challenges. A spiral transformation algorithm was developed to obtain 2D images from 3D data, with the transformed image inheriting and retaining the spatial correlation of the original texture and edge information. The spiral transformation could be used to effectively exploit the 3D information with less computational resources and conveniently augment the data size with high quality. Moreover, model-driven items were designed to introduce prior knowledge into the deep learning framework for multi-modal fusion. The model-driven strategy and spiral transformation-based data augmentation can improve performance on small sample sizes. A bilinear pooling module was introduced to improve the performance of fine-grained prediction. The experimental results show that the proposed model gives the desired performance in predicting TP53 mutation in pancreatic cancer, providing a new approach for noninvasive gene prediction. The proposed methodologies of spiral transformation and model-driven deep learning can also be used by the artificial intelligence community dealing with oncological applications. Our source code with a demo will be released at https://github.com/SJTUBME-QianLab/SpiralTransform.
Reliable gene mutation prediction in clear cell renal cell carcinoma through multi-classifier multi-objective radiogenomics model
Chen, Xi
Zhou, Zhiguo
Hannan, Raquibul
Thomas, Kimberly
Pedrosa, Ivan
Kapur, Payal
Brugarolas, James
Mou, Xuanqin
Wang, Jing
Physics in Medicine and Biology2018Journal Article, cited 45 times
Website
TCGA-KIRP
Renal Cell
CT
radiogenomics
Genetic studies have identified associations between gene mutations and clear cell renal cell carcinoma (ccRCC). Since the complete gene mutational landscape cannot be characterized through biopsy and sequencing assays for each patient, non-invasive tools are needed to determine the mutation status of tumors. Radiogenomics may be an attractive alternative tool to identify disease genomics by analyzing large numbers of features extracted from medical images. Most current radiogenomics predictive models are built on a single classifier and trained with a single objective. However, since many classifiers are available, selecting an optimal model is challenging. On the other hand, a single objective may not be a good measure to guide model training. We proposed a new multi-classifier multi-objective (MCMO) radiogenomics predictive model. To obtain more reliable prediction results, similarity-based sensitivity and specificity were defined and considered as the two objective functions simultaneously during training. To take advantage of different classifiers, the evidential reasoning (ER) approach was used for fusing the output of each classifier. Additionally, a new similarity-based multi-objective optimization algorithm (SMO) was developed for training the MCMO to predict ccRCC-related gene mutations (VHL, PBRM1 and BAP1) using quantitative CT features. Using the proposed MCMO model, we achieved a predictive area under the receiver operating characteristic curve (AUC) over 0.85 for the VHL, PBRM1 and BAP1 genes with balanced sensitivity and specificity. Furthermore, MCMO outperformed all the individual classifiers, and yielded more reliable results than other optimization algorithms and commonly used fusion strategies.
BGSNet: A cascaded framework of boundary guided semantic for COVID-19 infection segmentation
Chen, Ying
Feng, Longfeng
Lin, Hongping
Zhang, Wei
Chen, Wang
Zhou, Zonglai
Xu, Guohui
Biomedical Signal Processing and Control2024Journal Article, cited 0 times
CT Images in COVID-19
Coronavirus disease 2019 (COVID-19) has spread globally in early 2020, leading to a new health crisis. Automatic segmentation of lung infections from computed tomography (CT) images provides an important basis for rapid diagnosis of COVID-19. This paper proposes a cascaded architecture of boundary guided semantic network (BGSNet) based on boundary supervision, multi-scale atrous convolution and dual attention mechanism. The BGSNet cascaded architecture includes a boundary supervision module (BSM), a multi-scale atrous convolution module (MACM), and a dual attention guidance module (DAGM). BSM provides boundary supervised features through explicit modeling to guide precise localization of target regions. MACM introduces atrous convolution with different dilation rates to obtain multi-scale receptive field, thus enhancing the segmentation ability of targets with different scales. DAGM combines channel and spatial attention to filter irrelevant information and enhance feature learning ability. The experimental results based on publicly available CO-Seg and CLSC datasets show that the BGSNet cascaded architecture achieves DSC of 0.806 and 0.677, respectively, which is superior to the advanced COVID-19 infection segmentation model. The effectiveness of the main components of BGSNet has been demonstrated through ablation experiments.
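A minimal PyTorch sketch of a multi-scale atrous convolution module of the kind MACM describes: parallel 3x3 convolutions with different dilation rates, concatenated and fused. The channel sizes and dilation rates are assumptions, not BGSNet's configuration.

import torch
import torch.nn as nn

class MultiScaleAtrous(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates  # same output size, growing receptive field
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

print(MultiScaleAtrous(64, 64)(torch.randn(1, 64, 128, 128)).shape)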
Multi-view Local Co-occurrence and Global Consistency Learning Improve Mammogram Classification Generalisation.
Chen, Yuanhong
Wang, Hu
Wang, Chong
Tian, Yu
Liu, Fengbei
Liu, Yuyuan
Elliott, Michael
McCarthy, Davis J.
Frazer, Helen
Carneiro, Gustavo
2022Book Section, cited 0 times
CBIS-DDSM
CMMD
Supervised training
Mammography
BREAST
Classification
Deep learning
When analysing screening mammograms, radiologists can naturally process information across two ipsilateral views of each breast, namely the cranio-caudal (CC) and mediolateral-oblique (MLO) views. These multiple related images provide complementary diagnostic information and can improve the radiologist’s classification accuracy. Unfortunately, most existing deep learning systems, trained with globally-labelled images, lack the ability to jointly analyse and integrate global and local information from these multiple views. By ignoring the potentially valuable information present in multiple images of a screening episode, one limits the potential accuracy of these systems. Here, we propose a new multi-view global-local analysis method that mimics the radiologist’s reading procedure, based on a global consistency learning and local co-occurrence learning of ipsilateral views in mammograms. Extensive experiments show that our model outperforms competing methods, in terms of classification accuracy and generalisation, on a large-scale private dataset and two publicly available datasets, where models are exclusively trained and tested with global labels.
Integrating of Radiomics and Single-cell Transcriptomics Uncovers the Crucial Role of γδ T Cells in Lung Adenocarcinoma
Gamma-delta (γδ) T cells are a crucial component of the tumor immune microenvironment and are considered a promising potential therapeutic strategy and target. Increasing evidence suggests that these unique immune cells play significant roles across various cancers. However, γδ T cells are often regarded as having dual roles in tumors, and their influence on lung adenocarcinoma (LUAD) remains controversial. In this research, we employed a wide-ranging approach using multi-omics data to investigate the function of γδ T cells in LUAD. The abundance of γδ T cell infiltration is linked to a positive prognosis in lung adenocarcinoma. γδ T cells exerted their tumor-inhibiting role through intrinsic lineage evolution, acquiring cytotoxic functions and engaging in signal transduction with antigen-presenting cells. Furthermore, patients with higher γδ T cell infiltration abundance might benefit more from immunotherapy. Lastly, we established a predictive model using CT images based on radiomics, providing a non-invasive strategy to assess γδ T cell infiltration in LUAD patients. These findings offer new insights and perspectives on personalized γδ T cell therapies.
Prediction of Glioma Grade Using Intratumoral and Peritumoral Radiomic Features From Multiparametric MRI Images
Cheng, J.
Liu, J.
Yue, H.
Bai, H.
Pan, Y.
Wang, J.
IEEE/ACM Trans Comput Biol Bioinform2022Journal Article, cited 26 times
Website
BraTS 2019
Algorithms
*Glioma/diagnostic imaging
Humans
Magnetic Resonance Imaging/methods
*Multiparametric Magnetic Resonance Imaging
Retrospective Studies
The accurate prediction of glioma grade before surgery is essential for treatment planning and prognosis. Since the gold standard (i.e., biopsy) for grading gliomas is both highly invasive and expensive, there is a need for a noninvasive and accurate method. In this study, we proposed a novel radiomics-based pipeline incorporating the intratumoral and peritumoral features extracted from preoperative mpMRI scans to accurately and noninvasively predict glioma grade. To address the unclear peritumoral boundary, we designed an algorithm to capture the peritumoral region with a specified radius. The mpMRI scans of 285 patients derived from a multi-institutional study were adopted. A total of 2153 radiomic features were calculated separately from intratumoral volumes (ITVs) and peritumoral volumes (PTVs) on mpMRI scans, and then refined using LASSO and mRMR feature ranking methods. The top-ranking radiomic features were entered into the classifiers to build radiomic signatures for predicting glioma grade. The prediction performance was evaluated with five-fold cross-validation on a patient-level split. The radiomic signatures utilizing the features of ITV and PTV both show high accuracy in predicting glioma grade, with AUCs reaching 0.968. By incorporating the features of ITV and PTV, the AUC of the IPTV radiomic signature can be increased to 0.975, which outperforms the state-of-the-art methods. Additionally, our proposed method was further demonstrated to have strong generalization performance in an external validation dataset with 65 patients. The source code of our implementation is made publicly available at https://github.com/chengjianhong/glioma_grading.git.
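A hedged sketch of one step in the pipeline above: L1-penalised (LASSO) selection over concatenated intratumoral (ITV) and peritumoral (PTV) radiomic features. The feature matrices are synthetic placeholders, and the exact selection protocol (LASSO followed by mRMR ranking) is only partially reproduced here.

import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
itv = rng.normal(size=(285, 2153))       # intratumoral radiomic features
ptv = rng.normal(size=(285, 2153))       # peritumoral radiomic features
X = StandardScaler().fit_transform(np.hstack([itv, ptv]))
y = rng.integers(0, 2, size=285)         # 1 = high-grade glioma (illustrative)

lasso = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(lasso.coef_)   # features surviving the L1 penalty
print(f"{selected.size} of {X.shape[1]} IPTV features retained")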
Glioma Sub-region Segmentation on Multi-parameter MRI with Label Dropout
Gliomas are the most common primary brain tumor, and the accurate segmentation of clinical sub-regions, including enhancing tumor (ET), tumor core (TC), and whole tumor (WT), has great clinical importance throughout diagnosis, treatment planning, delivery, and prognosis. Machine learning algorithms, particularly neural network-based methods, have been successful in many medical image segmentation applications. In this paper, we trained a patch-based 3D UNet model with a hybrid loss combining soft dice loss, generalized dice loss, and multi-class cross-entropy loss. We also proposed a label dropout process that randomly discards inner segment labels and their corresponding network output during training to overcome the heavy class imbalance issue. On the BraTS 2020 final test data, we achieved Dice scores of 0.823, 0.886 and 0.843 for ET, WT and TC respectively.
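A sketch of a label dropout step along the lines described above; the exact rule is the paper's, and the merge-into-parent variant below is an assumed interpretation: with some probability, an inner label (e.g., enhancing tumor inside the core) is relabeled as its parent so the loss ignores the scarce class for that sample.

import numpy as np

def label_dropout(seg: np.ndarray, inner: int, parent: int, p: float = 0.3,
                  rng=np.random.default_rng()) -> np.ndarray:
    """seg: integer label volume; relabel `inner` voxels as `parent` with prob p."""
    if rng.random() < p:
        seg = seg.copy()
        seg[seg == inner] = parent
    return seg

seg = np.random.default_rng(0).integers(0, 4, size=(8, 8, 8))
print(np.unique(label_dropout(seg, inner=3, parent=1, p=1.0)))  # label 3 merged away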
Applications of Deep Neural Networks with Fractal Structure and Attention Blocks for 2D and 3D Brain Tumor Segmentation
In this paper, we propose a novel deep neural network (DNN) architecture with fractal structure and attention blocks. The new method is tested to identify and segment 2D and 3D brain tumor masks in normal and pathological neuroimaging data. To circumvent the problem of limited 3D volumetric datasets with raw and ground truth tumor masks, we utilized data augmentation using affine transformations to significantly expand the training data prior to estimating the network model parameters. The proposed Attention-based Fractal Unet (AFUnet) technique combines the benefits of fractal convolutional networks, attention blocks, and the encoder-decoder structure of Unet. The AFUnet models are fit on training data and their performance is assessed on independent validation and testing datasets. The Dice score is used to measure and contrast the performance of AFUnet against alternative methods, such as Unet, attention Unet, and several other DNN models with a comparable number of parameters. In addition, we explore the effects of network depth on AFUnet prediction accuracy. The results suggest that with a few network structure iterations, the attention-based fractal Unet achieves good performance. Although a deeper nested network structure certainly improves prediction accuracy, this comes at a very substantial computational cost, so the benefits of fitting deeper AFUnet models must be weighed against the extra time and computational demands. Some of the AFUnet networks outperform current state-of-the-art models and achieve highly accurate and realistic brain-tumor boundary segmentation (contours in 2D and surfaces in 3D). In our experiments, the sensitivity of the Dice score to capture significant inter-model differences is marginal. However, there is improved validation loss during long periods of AFUnet training. The lower binary cross-entropy loss suggests that AFUnet is superior in finding true negative voxels (i.e., identifying normal tissue), which suggests the new method is more conservative. This approach may be generalized to higher dimensional data, e.g., 4D fMRI hypervolumes, and applied to a wide range of signal, image, volume, and hypervolume segmentation tasks.
Memory-Efficient Cascade 3D U-Net for Brain Tumor Segmentation
Segmentation is a routine and crucial procedure for the treatment of brain tumors. Deep learning based brain tumor segmentation methods have achieved promising performance in recent years. However, to pursue high segmentation accuracy, most of them require too much memory and computation resources. Motivated by a recently proposed partially reversible U-Net architecture that pays more attention to memory footprint, we further present a novel Memory-Efficient Cascade 3D U-Net (MECU-Net) for brain tumor segmentation in this work, which can achieve comparable segmentation accuracy with less memory and computation consumption. More specifically, MECU-Net utilizes fewer down-sampling channels to reduce the utilization of memory and computation resources. To make up the accuracy loss, MECU-Net employs multi-scale feature fusion module to enhance the feature representation capability. Additionally, a light-weight cascade model, which resolves the problem of small target segmentation accuracy caused by model compression to some extent, is further introduced into the segmentation network. Finally, edge loss and weighted dice loss are combined to refine the brain tumor segmentation results. Experiment results on BraTS 2019 validation set illuminate that MECU-Net can achieve average Dice coefficients of 0.902, 0.824 and 0.777 on the whole tumor, tumor core and enhancing tumor, respectively.
Invariant Content Representation for Generalizable Medical Image Segmentation
Cheng, Z.
Wang, S.
Gao, Y.
Zhu, Z.
Yan, C.
J Imaging Inform Med2024Journal Article, cited 0 times
Website
ISBI-MR-Prostate-2013
Algorithm Development
Data augmentation
Domain generalization
Invariant content mining
Medical image segmentation
Domain generalization (DG) for medical image segmentation, motivated by privacy preservation, prefers learning from a single source domain and expects good robustness on unseen target domains. To achieve this goal, previous methods mainly use data augmentation to expand the distribution of samples and learn invariant content from them. However, most of these methods commonly perform global augmentation, leading to limited diversity of augmented samples. In addition, the style of the augmented images is more scattered than that of the source domain, which may cause the model to overfit the source-domain style. To address the above issues, we propose an invariant content representation network (ICRN) to enhance the learning of invariant content and suppress the learning of variable styles. Specifically, we first design a gamma correction-based local style augmentation (LSA) to expand the distribution of samples by augmenting foreground and background styles, respectively. Then, based on the augmented samples, we introduce invariant content learning (ICL) to learn generalizable invariant content from both augmented and source-domain samples. Finally, we design domain-specific batch normalization (DSBN) based style adversarial learning (SAL) to suppress the learning of preferences for source-domain styles. Experimental results show that our proposed method improves the overall dice coefficient (Dice) by 8.74% and 11.33% and reduces the overall average surface distance (ASD) by 15.88 mm and 3.87 mm on two publicly available cross-domain datasets, Fundus and Prostate, compared to the state-of-the-art DG methods. The code is available at https://github.com/ZMC-IIIM/ICRN-DG .
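A minimal sketch of gamma correction-based local style augmentation as described above: foreground and background receive independently sampled gamma curves. The gamma range is an assumed hyperparameter, not taken from the paper.

import numpy as np

def local_style_augment(img: np.ndarray, fg_mask: np.ndarray,
                        gamma_range=(0.5, 2.0),
                        rng=np.random.default_rng()) -> np.ndarray:
    """img: float image scaled to [0, 1]; fg_mask: boolean foreground mask."""
    g_fg, g_bg = rng.uniform(*gamma_range, size=2)
    # Separate gamma curves per region diversify local styles
    return np.where(fg_mask, img ** g_fg, img ** g_bg)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
mask = rng.random((64, 64)) > 0.7
print(local_style_augment(img, mask).shape)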
A Joint Detection and Recognition Approach to Lung Cancer Diagnosis From CT Images With Label Uncertainty
Chenyang, L.
Chan, S. C.
IEEE Access2020Journal Article, cited 0 times
Website
LIDC-IDRI
LUNA16 Challenge
lung
radiomic features
deep learning
Automatic lung cancer diagnosis from computer tomography (CT) images requires the detection of nodule location as well as nodule malignancy prediction. This article proposes a joint lung nodule detection and classification network for simultaneous lung nodule detection, segmentation and classification subject to possible label uncertainty in the training set. It operates in an end-to-end manner and provides detection and classification of nodules simultaneously together with a segmentation of the detected nodules. Both the nodule detection and classification subnetworks of the proposed joint network adopt a 3-D encoder-decoder architecture for better exploration of the 3-D data. Moreover, the classification subnetwork utilizes the features extracted from the detection subnetwork and multiscale nodule-specific features for boosting the classification performance. The former serves as valuable prior information for optimizing the more complicated 3D classification network directly to better distinguish suspicious nodules from other tissues compared with direct backpropagation from the decoder. Experimental results show that this co-training yields better performance on both tasks. The framework is validated on the LUNA16 and LIDC-IDRI datasets and a pseudo-label approach is proposed for addressing the label uncertainty problem due to inconsistent annotations/labels. Experimental results show that the proposed nodule detector outperforms the state-of-the-art algorithms and yields comparable performance as state-of-the-art nodule classification algorithms when classification alone is considered. Since our joint detection/recognition approach can directly detect nodules and classify its malignancy instead of performing the tasks separately, our approach is more practical for automatic cancer and nodules detection.
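A hedged sketch of a pseudo-label rule for inconsistent annotations (LIDC-IDRI nodules carry up to four radiologist malignancy ratings on a 1-5 scale); the averaging rule and thresholds below are illustrative assumptions, not the paper's exact procedure.

import numpy as np

def pseudo_label(ratings, benign_max: float = 2.0, malignant_min: float = 4.0):
    """ratings: malignancy scores from multiple readers.
    Returns 0 (benign), 1 (malignant), or None (ambiguous, excluded)."""
    mean = np.mean(ratings)
    if mean <= benign_max:
        return 0
    if mean >= malignant_min:
        return 1
    return None  # uncertain label: drop or treat as a soft label

print(pseudo_label([1, 2, 2]), pseudo_label([4, 5, 4, 5]), pseudo_label([3, 3, 4]))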
Revealing Tumor Habitats from Texture Heterogeneity Analysis for Classification of Lung Cancer Malignancy and Aggressiveness
Cherezov, Dmitry
Goldgof, Dmitry
Hall, Lawrence
Gillies, Robert
Schabath, Matthew
Müller, Henning
Depeursinge, Adrien
Scientific RepoRtS2019Journal Article, cited 0 times
Website
NLST
lung
LDCT
We propose an approach for characterizing the structural heterogeneity of lung cancer nodules using Computed Tomography Texture Analysis (CTTA). Measures of heterogeneity were used to test the hypothesis that heterogeneity can serve as a predictor of nodule malignancy and patient survival. To do this, we use the National Lung Screening Trial (NLST) dataset to determine whether heterogeneity can represent differences between nodules in lung cancer and in non-lung cancer patients. 253 participants are in the training set and 207 participants in the test set. To discriminate cancerous from non-cancerous nodules at the time of diagnosis, a combination of heterogeneity and radiomic features was evaluated, producing a best area under the receiver operating characteristic curve (AUROC) of 0.85 and an accuracy of 81.64%. Second, we tested the hypothesis that heterogeneity can predict patient survival. We analyzed 40 patients diagnosed with lung adenocarcinoma (20 short-term and 20 long-term survival patients) using a leave-one-out cross-validation approach for performance evaluation. A combination of heterogeneity features and radiomic features produced an AUROC of 0.9 and an accuracy of 85% in discriminating long- and short-term survivors.
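A sketch of one common texture-heterogeneity measurement underlying CTTA: grey-level co-occurrence matrix (GLCM) statistics over a nodule patch via scikit-image. Which exact features the paper uses is not specified here; the patch and quantisation are made-up values.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
patch = rng.integers(0, 64, size=(32, 32), dtype=np.uint8)  # quantised CT patch

glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=64, symmetric=True, normed=True)
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop).mean())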
Pancreatic Carcinoma Detection with Publicly available Radiological Images: A Systematic Analysis
Chhikara, Jasmine
Goel, Nidhi
Rathee, Neeru
2022Conference Paper, cited 0 times
Pancreas-CT
CPTAC-PDA
Medical Segmentation Decathlon 2021
Deep Learning
Computer Aided Diagnosis (CADx)
Pancreatic carcinoma is the fifth deadliest malignancy worldwide, accounting for a substantial share of total cancer-related mortality every year. The main cause of the high mortality and minimal survival rate is the delayed detection of abnormal cell growth in the pancreatic regions of patients diagnosed with this ailment. In recent years, researchers have been putting effort into the early detection of pancreatic carcinoma in radiological imaging scans of the whole abdomen. In this paper, the authors systematically review the data reported and various works done on publicly available imaging datasets of pancreatic cancer. The analyzed datasets are Pancreas-Computed Tomography, Clinical Proteomic Tumor Analysis Consortium Pancreatic Ductal Adenocarcinoma from The Cancer Imaging Archive, and Pancreas Tumor from the Medical Segmentation Decathlon online repository. The review is supported by reporting incidences depending on age group, clinical history, physical conditions, pathological findings, tumor nature, region, stage, and tumor size of examined patients. The outcomes of categorized subjects will aid academicians, research scholars, and industrialists in understanding the propagation of pancreatic cancer for early detection in computer-aided systems.
Integrating expert guidance with gradual moment approximation (GMAp)-enhanced transfer learning for improved pancreatic cancer classification
Chhikara, Jasmine
Goel, Nidhi
Rathee, Neeru
Neural Computing and Applications2024Journal Article, cited 0 times
Website
CPTAC-PDA
Pancreas-CT
Despite significant research efforts, pancreatic cancer remains a formidable foe. To address the critical need for improved diagnostics, this study presents a novel approach that integrates expert guidance with computer-aided imaging for fine needle aspiration (FNA). A meticulously curated computed tomography (CT) dataset of ground truth images, focusing on key subregions of the pancreas, was established in collaboration with medical professionals. The images provided the training ground for a novel diagnostic model equipped with the gradual moment approximation (GMAp) optimization algorithm, designed to enhance the precision of cancer detection. By efficiently transferring knowledge from pre-trained models, the proposed model achieved remarkable accuracy (98.16%) in classifying CT images across distinct cancerous pancreatic subregions (head, body, and tail) and the healthy pancreas. Extensive evaluations against diverse pre-trained models and benchmark medical databases (Medical Segmentation Decathlon, Clinical Proteomic Tumor Analysis Consortium Pancreatic Ductal Adenocarcinoma, and Pancreas-CT) proved the model's robustness and superior F1-scores compared to existing approaches. The experiments demonstrate that the deep learning-based 4-class classification outperforms the state-of-the-art machine learning-based method by 3.66% in terms of accuracy. This efficiency, coupled with rigorous testing, paves the way for seamless integration into clinical workflows, potentially enabling earlier and more accurate pancreatic cancer diagnoses.
Low-Dose CT Image Super-resolution Network with Noise Inhibition Based on Feedback Feature Distillation Mechanism
Chi, J.
Wei, X.
Sun, Z.
Yang, Y.
Yang, B.
J Imaging Inform Med2024Journal Article, cited 0 times
Pancreas-CT
Attention mechanism
Deep learning
Image super-resolution
Low-dose computed tomography
Low-dose computed tomography (LDCT) has been widely used in medical diagnosis. In practice, doctors often zoom in on LDCT slices for clearer lesions and issues, while a simple zooming operation fails to suppress low-dose artifacts, leading to distorted details. Therefore, numerous LDCT super-resolution (SR) methods have been proposed to promote the quality of zooming without increasing the dose in CT scanning. However, there are still some drawbacks that need to be addressed in existing methods. First, the region of interest (ROI) is not emphasized due to the lack of guidance in the reconstruction process. Second, the convolutional blocks extracting fixed-resolution features fail to concentrate on the essential multi-scale features. Third, a single SR head cannot suppress the residual artifacts. To address these issues, we propose a joint LDCT SR and denoising reconstruction network. Our proposed network consists of global dual-guidance attention fusion modules (GDAFMs) and multi-scale anastomosis blocks (MABs). The GDAFM directs the network to focus on the ROI by fusing extra mask guidance and average CT image guidance, while the MAB introduces hierarchical features through anastomosis connections to leverage multi-scale features and promote feature representation ability. To suppress radial residual artifacts, we optimize our network using the feedback feature distillation mechanism (FFDM), which shares the backbone to learn features corresponding to the denoising task. We apply the proposed method to the 3D-IRCADB and PANCREAS datasets to evaluate its ability on LDCT image SR reconstruction. The experimental results compared with state-of-the-art methods illustrate the superiority of our approach with respect to peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and qualitative observations. Our proposed LDCT joint SR and denoising reconstruction network has been extensively evaluated through ablation, quantitative, and qualitative experiments. The results demonstrate that our method can recover noise-free and detail-sharp images, resulting in better reconstruction results. Code is available at https://github.com/neu-szy/ldct_sr_dn_w_ffdm .
Computed Tomography (CT) Image Quality Enhancement via a Uniform Framework Integrating Noise Estimation and Super-Resolution Networks
Chi, Jianning
Zhang, Yifei
Yu, Xiaosheng
Wang, Ying
Wu, Chengdong
Sensors (Basel)2019Journal Article, cited 2 times
Website
APOLLO-1-VA
Deep convolutional neural network (DCNN)
Machine Learning
Computed tomography (CT) imaging technology has been widely used to assist medical diagnosis in recent years. However, noise during the imaging process and data compression during storage and transmission often degrade image quality, resulting in unreliable performance of the post-processing steps in computer-assisted diagnosis systems (CADs), such as medical image segmentation, feature extraction, and medical image classification. Since the degradation of medical images typically appears as noise and low-resolution blurring, in this paper we propose a uniform deep convolutional neural network (DCNN) framework to handle the de-noising and super-resolution of the CT image at the same time. The framework consists of two steps. Firstly, a dense-inception network integrating an inception structure and dense skip connections is proposed to estimate the noise level. The inception structure is used to extract the noise and blurring features with respect to multiple receptive fields, while the dense skip connections can reuse those extracted features and transfer them across the network. Secondly, a modified residual-dense network combined with a joint loss is proposed to reconstruct the high-resolution image with low noise. The inception block is applied on each skip connection of the dense-residual network so that the structure features of the image are transferred through the network more than the noise and blurring features. Moreover, both the perceptual loss and the mean square error (MSE) loss are used to restrain the network, leading to better performance in the reconstruction of image edges and details. Our proposed network integrates degradation estimation, noise removal, and image super-resolution in one uniform framework to enhance medical image quality. We apply our method to The Cancer Imaging Archive (TCIA) public dataset to evaluate its ability in medical image quality enhancement. The experimental results demonstrate that the proposed method outperforms state-of-the-art methods on de-noising and super-resolution by providing higher peak signal to noise ratio (PSNR) and structure similarity index (SSIM) values.
SVM-PUK Kernel Based MRI-brain Tumor Identification Using Texture and Gabor Wavelets
Chinnam, Siva
Sistla, Venkatramaphanikumar
Kolli, Venkata
Traitement du Signal2019Journal Article, cited 0 times
Website
Algorithm Development
Support Vector Machine (SVM)
BraTS
Segmentation
Brain
A Model to Improve the Quality of Low-dose CT Scan Images
Computed Tomography (CT) scans are used during medical imaging diagnosis as they provide detailed cross-sectional images of the human body by making use of X-rays. X-ray radiation as part of medical diagnosis poses health risks to patients, leading experts to opt for low doses of radiation when possible. In accordance with European Directives, ionising radiation doses for medical purposes are to be kept as low as reasonably achievable (ALARA). While reduced radiation is beneficial from a health perspective, it impacts the quality of the images as the noise in the images increases, reducing the radiologist's confidence in diagnosis. Various low-dose CT (LDCT) image denoising strategies available in the literature attempt to solve this conflict. However, current models face problems such as over-smoothed results and loss of detailed information. Consequently, the quality of LDCT images after denoising is still an important problem. The models presented in this work use deep learning techniques that are modified and trained for this problem. The results show that the best model in terms of image quality achieved a peak signal to noise ratio (PSNR) of 19.5 dB, a structural similarity index measure (SSIM) of 0.7153 and a root mean square error (RMSE) of 43.34. It performed the required operations in an average time of 4843.80 s. Furthermore, tests at different dose levels were done to test the robustness of the best performing models.
Radiomic tumor phenotypes augment molecular profiling in predicting recurrence free survival after breast neoadjuvant chemotherapy
Chitalia, R.
Miliotis, M.
Jahani, N.
Tastsoglou, S.
McDonald, E. S.
Belenky, V.
Cohen, E. A.
Newitt, D.
Van't Veer, L. J.
Esserman, L.
Hylton, N.
DeMichele, A.
Hatzigeorgiou, A.
Kontos, D.
Commun Med (Lond)2023Journal Article, cited 2 times
Website
ACRIN 6657
ISPY1
Breast-MRI-NACT-Pilot
Radiomics
Radiogenomics
Algorithm Development
DCE-MRI
Phenotype
BACKGROUND: Early changes in breast intratumor heterogeneity during neoadjuvant chemotherapy may reflect the tumor's ability to adapt and evade treatment. We investigated the combination of precision medicine predictors of genomic and MRI data towards improved prediction of recurrence free survival (RFS). METHODS: A total of 100 women from the ACRIN 6657/I-SPY 1 trial were retrospectively analyzed. We estimated MammaPrint, PAM50 ROR-S, and p53 mutation scores from publicly available gene expression data and generated four, voxel-wise 3-D radiomic kinetic maps from DCE-MR images at both pre- and early-treatment time points. Within the primary lesion from each kinetic map, features of change in radiomic heterogeneity were summarized into 6 principal components. RESULTS: We identify two imaging phenotypes of change in intratumor heterogeneity (p < 0.01) demonstrating significant Kaplan-Meier curve separation (p < 0.001). Adding phenotypes to established prognostic factors, functional tumor volume (FTV), MammaPrint, PAM50, and p53 scores in a Cox regression model improves the concordance statistic for predicting RFS from 0.73 to 0.79 (p = 0.002). CONCLUSIONS: These results demonstrate an important step in combining personalized molecular signatures and longitudinal imaging data towards improved prognosis. Early changes in tumor properties during treatment may tell us whether or not a patient's tumor is responding to treatment. Such changes may be seen on imaging. Here, changes in breast cancer properties are identified on imaging and are used in combination with gene markers to investigate whether response to treatment can be predicted using mathematical models. We demonstrate that tumor properties seen on imaging early on in treatment can help to predict patient outcomes. Our approach may allow clinicians to better inform patients about their prognosis and choose appropriate and effective therapies.
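A sketch of the summarisation step described above: per-tumour features of change in radiomic heterogeneity reduced to six principal components with scikit-learn, which then feed a Cox model. The feature matrix is a synthetic stand-in for the voxel-wise kinetic-map summaries.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
delta_features = rng.normal(size=(100, 40))  # per-tumour change-in-heterogeneity summaries

pca = PCA(n_components=6)
components = pca.fit_transform(delta_features)  # (100, 6) inputs to the Cox model
print("explained variance:", pca.explained_variance_ratio_.round(3))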
Expert tumor annotations and radiomics for locally advanced breast cancer in DCE-MRI for ACRIN 6657/I-SPY1
Chitalia, R.
Pati, S.
Bhalerao, M.
Thakur, S. P.
Jahani, N.
Belenky, V.
McDonald, E. S.
Gibbs, J.
Newitt, D. C.
Hylton, N. M.
Kontos, D.
Bakas, S.
Sci Data2022Journal Article, cited 0 times
Website
ISPY1-Tumor-SEG-Radiomics
ISPY1
Algorithm Development
Radiogenomics
BREAST
IBSI
CaPTk
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI)
Breast cancer is one of the most pervasive forms of cancer and its inherent intra- and inter-tumor heterogeneity contributes towards its poor prognosis. Multiple studies have reported results from either private institutional data or publicly available datasets. However, current public datasets are limited in terms of having consistency in: a) data quality, b) quality of expert annotation of pathology, and c) availability of baseline results from computational algorithms. To address these limitations, here we propose the enhancement of the I-SPY1 data collection, with uniformly curated data, tumor annotations, and quantitative imaging features. Specifically, the proposed dataset includes a) uniformly processed scans that are harmonized to match intensity and spatial characteristics, facilitating immediate use in computational studies, b) computationally-generated and manually-revised expert annotations of tumor regions, as well as c) a comprehensive set of quantitative imaging (also known as radiomic) features corresponding to the tumor regions. This collection describes our contribution towards repeatable, reproducible, and comparative quantitative studies leading to new predictive, prognostic, and diagnostic assessments.
Imaging phenotypes of breast cancer heterogeneity in pre-operative breast Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) scans predict 10-year recurrence
Chitalia, Rhea
Rowland, Jennifer
McDonald, Elizabeth S
Pantalone, Lauren
Cohen, Eric A
Gastounioti, Aimilia
Feldman, Michael
Schnall, Mitchell
Conant, Emily
Kontos, Despina
Clinical Cancer Research2019Journal Article, cited 0 times
Website
DCE-MRI
Breast
Radiomic feature
Volume-based inter difference XOR pattern: a new pattern for pulmonary nodule detection in CT images
Chitradevi, A.
Singh, N. Nirmal
Jayapriya, K.
International Journal of Biomedical Engineering and Technology2021Journal Article, cited 0 times
Website
LIDC-IDRI
Computer Aided Diagnosis (CADx)
Computer Aided Detection (CADe)
LUNG
Classification
Pulmonary nodule identification, which paves the path to lung cancer diagnosis, is a challenging task today. The proposed method, the volume-based inter-difference XOR pattern (VIDXP), provides an efficient lung nodule detection system using a 3D texture-based pattern formed, for every segmented nodule, by computing XOR patterns of inter-frame grey-value differences between the centre frame and its neighbouring frames in a rotationally clockwise direction. Different classifiers, such as random forest (RF), decision tree (DT) and AdaBoost, are used with ten trials of five-fold cross-validation for classification. The experimental analysis on the public Lung Image Database Consortium-Image Database Resource Initiative (LIDC-IDRI) database shows that the proposed scheme gives better accuracy than existing approaches. Further, the proposed scheme is enhanced by incorporating shape information using the histogram of oriented gradients (HOG), which improves the classification accuracy.
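A hedged sketch of an inter-frame XOR texture pattern in the spirit of VIDXP; the paper's exact encoding (clockwise neighbourhood ordering and histogram construction) is not reproduced here. Grey-value differences between the centre frame and its neighbours are thresholded into bit planes and combined by XOR.

import numpy as np

def inter_difference_xor(volume: np.ndarray, center: int) -> np.ndarray:
    """volume: (frames, H, W) nodule stack; returns a binary XOR map."""
    ref = volume[center].astype(np.int16)
    bits = [(volume[k].astype(np.int16) - ref) > 0  # sign-of-difference bit plane
            for k in range(volume.shape[0]) if k != center]
    code = bits[0]
    for b in bits[1:]:
        code = np.logical_xor(code, b)              # accumulate XOR across frames
    return code

vol = np.random.default_rng(0).integers(0, 255, size=(5, 32, 32))
print(inter_difference_xor(vol, center=2).sum())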
Efficient Radiomics-Based Classification of Multi-Parametric MR Images to Identify Volumetric Habitats and Signatures in Glioblastoma: A Machine Learning Approach
Chiu, F. Y.
Yen, Y.
Cancers (Basel)2022Journal Article, cited 0 times
Website
TCGA-GBM
annotation
Glioblastoma
Machine learning
multi-parametric
non-invasive
precision medicine
quantitative imaging
Radiomics
Imaging feature
Glioblastoma (GBM) is a fast-growing and aggressive brain tumor of the central nervous system. It encroaches on brain tissue with heterogeneous regions of a necrotic core, solid part, peritumoral tissue, and edema. This study provided qualitative image interpretation in GBM subregions and radiomics features in quantitative usage of image analysis, as well as ratios of these tumor components. The aim of this study was to assess the potential of multi-parametric MR fingerprinting with volumetric tumor phenotype and radiomic features to underlie biological process and prognostic status of patients with cerebral gliomas. Based on efficiently classified and retrieved cerebral multi-parametric MRI, all data were analyzed to derive volume-based data of the entire tumor from local cohorts and The Cancer Imaging Archive (TCIA) cohorts with GBM. Edema was mainly enriched for homeostasis whereas necrosis was associated with texture features. The proportional volume size of the edema was about 1.5 times larger than the size of the solid part tumor. The volume size of the solid part was approximately 0.7 times in the necrosis area. Therefore, the multi-parametric MRI-based radiomics model reveals efficiently classified tumor subregions of GBM and suggests that prognostic radiomic features from routine MRI examination may also be significantly associated with key biological processes as a practical imaging biomarker.
PyDicer: An open-source python library for conversion and analysis of radiotherapy DICOM data
The organisation, conversion, cleaning and processing of DICOM data is an ongoing challenge across medical image analysis research projects. PyDicer (PYthon Dicom Image ConvertER) was created as a generalisable tool for use across a variety of radiotherapy research projects. This includes the conversion of DICOM objects into a standardised form as well as functionality to visualise, clean and analyse the converted data. The generalisability of PyDicer has been demonstrated by its use across a range of projects including the analysis of radiotherapy dose metrics and radiomics features as well as auto-segmentation training, inference and validation.
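PyDicer's own API is not reproduced here; the sketch below only illustrates the kind of DICOM-series-to-volume conversion such tools standardise, using SimpleITK (the directory path is a placeholder):

```python
# Illustrative DICOM series conversion with SimpleITK; not PyDicer's API.
import SimpleITK as sitk

def convert_series_to_nifti(dicom_dir, out_path):
    """Read the first DICOM series in a directory and write a NIfTI volume."""
    reader = sitk.ImageSeriesReader()
    series_ids = reader.GetGDCMSeriesIDs(dicom_dir)
    file_names = reader.GetGDCMSeriesFileNames(dicom_dir, series_ids[0])
    reader.SetFileNames(file_names)
    image = reader.Execute()            # ordered, geometry-aware 3D volume
    sitk.WriteImage(image, out_path)    # standardised output, e.g. "ct.nii.gz"
    return image

# image = convert_series_to_nifti("/data/patient001/CT", "ct.nii.gz")
```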
A review of medical image data augmentation techniques for deep learning applications
Chlap, Phillip
Min, Hang
Vandenberg, Nym
Dowling, Jason
Holloway, Lois
Haworth, Annette
2021Journal Article, cited 0 times
LIDC-IDRI
Research in artificial intelligence for radiology and radiotherapy has recently become increasingly reliant on the use of deep learning-based algorithms. While the models these algorithms produce can significantly outperform more traditional machine learning methods, they do rely on larger datasets being available for training. To address this issue, data augmentation has become a popular method for increasing the size of a training dataset, particularly in fields where large datasets aren't typically available, which is often the case when working with medical images. Data augmentation aims to generate additional data which is used to train the model and has been shown to improve performance when validated on a separate unseen dataset. This approach has become commonplace, so to help understand the types of data augmentation techniques used in state-of-the-art deep learning models, we conducted a systematic review of the literature where data augmentation was utilised on medical images (limited to CT and MRI) to train a deep learning model. Articles were categorised into basic, deformable, deep learning or other data augmentation techniques. As artificial intelligence models trained using augmented data make their way into the clinic, this review aims to give insight into these techniques and confidence in the validity of the models produced.
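As a minimal sketch of the "basic" augmentation category the review describes (random flips and small rotations; the parameters here are illustrative):

```python
# Basic spatial augmentation of a 2D CT/MRI slice; toy parameter choices.
import numpy as np
from scipy.ndimage import rotate

def basic_augment(image, rng=None):
    """Random horizontal flip plus a small random rotation.
    Any segmentation label would be transformed identically."""
    rng = rng or np.random.default_rng()
    if rng.random() < 0.5:
        image = np.fliplr(image)                    # horizontal flip
    angle = rng.uniform(-15, 15)                    # degrees
    return rotate(image, angle, reshape=False, mode="nearest")

augmented = basic_augment(np.random.rand(128, 128))
```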
Radiomics-guided deep neural networks stratify lung adenocarcinoma prognosis from CT scans
Cho, Hwan-ho
Lee, Ho Yun
Kim, Eunjin
Lee, Geewon
Kim, Jonghoon
Kwon, Junmo
Park, Hyunjin
Communications Biology2021Journal Article, cited 7 times
Website
NSCLC Radiogenomics
Radiomics
Lung Cancer
Classification of the glioma grading using radiomics analysis
Segmentation of brain tumors from multiple MRI modalities is necessary for successful disease diagnosis and clinical treatment. In recent years, Transformer-based networks with the self-attention mechanism have been proposed, but they have not shown performance beyond the U-shaped fully convolutional network. In this paper, we apply the HFTrans network to the brain tumor segmentation task of the BraTS 2022 challenge, focusing on the multiple MRI modalities with the benefits of the Transformer. By applying BraTS-specific modifications to preprocessing, aggressive data augmentation, and postprocessing, our method shows superior results in comparison with previous best performers. We show that the final result on the BraTS 2022 validation dataset achieves dice scores of 82.94%, 85.48%, and 92.44% and Hausdorff distances of 14.55 mm, 12.96 mm, and 3.77 mm for enhancing tumor, tumor core, and whole tumor, respectively.
Quantification of T2-FLAIR Mismatch in Nonenhancing Diffuse Gliomas Using Digital Subtraction
Cho, N. S.
Sanvito, F.
Le, V. L.
Oshima, S.
Teraishi, A.
Yao, J.
Telesca, D.
Raymond, C.
Pope, W. B.
Nghiemphu, P. L.
Lai, A.
Cloughesy, T. F.
Salamon, N.
Ellingson, B. M.
AJNR Am J Neuroradiol2024Journal Article, cited 0 times
Website
UCSF-PDGM
Radiomics
Radiogenomics
Isocitrate dehydrogenase (IDH) mutation
T2-weighted
FLAIR
Magnetic Resonance Imaging (MRI)
Astrocytoma
BRAIN
Image subtraction
BACKGROUND AND PURPOSE: The T2-FLAIR mismatch sign on MR imaging is a highly specific imaging biomarker of isocitrate dehydrogenase (IDH)-mutant astrocytomas, which lack 1p/19q codeletion. However, most studies using the T2-FLAIR mismatch sign have used visual assessment. This study quantified the degree of T2-FLAIR mismatch using digital subtraction of fluid-nulled T2-weighted FLAIR images from non-fluid-nulled T2-weighted images in human nonenhancing diffuse gliomas and then used this information to assess improvements in diagnostic performance and investigate subregion characteristics within these lesions. MATERIALS AND METHODS: Two cohorts of treatment-naive, nonenhancing gliomas with known IDH and 1p/19q status were studied (n = 71 from The Cancer Imaging Archive (TCIA) and n = 34 in the institutional cohort). 3D volumes of interest corresponding to the tumor were segmented, and digital subtraction maps of T2-weighted MR imaging minus T2-weighted FLAIR MR imaging were used to partition each volume of interest into a T2-FLAIR mismatched subregion (T2-FLAIR mismatch, corresponding to voxels with positive values on the subtraction maps) and a nonmismatched subregion (T2-FLAIR nonmismatch, corresponding to voxels with negative values on the subtraction maps). Tumor subregion volumes, percentage of T2-FLAIR mismatch volume, and T2-FLAIR nonmismatch subregion thickness were calculated, and 2 radiologists assessed the T2-FLAIR mismatch sign with and without the aid of T2-FLAIR subtraction maps. RESULTS: Thresholds of ≥42% T2-FLAIR mismatch volume classified IDH-mutant astrocytoma with a specificity/sensitivity of 100%/19.6% (TCIA) and 100%/31.6% (institutional); ≥25% T2-FLAIR mismatch volume showed 92.0%/32.6% and 100%/63.2% specificity/sensitivity, and ≥15% T2-FLAIR mismatch volume showed 88.0%/39.1% and 93.3%/79.0% specificity/sensitivity. In IDH-mutant astrocytomas with ≥15% T2-FLAIR mismatch volume, T2-FLAIR nonmismatch subregion thickness was negatively correlated with the percentage T2-FLAIR mismatch volume (P < .0001) across both cohorts. The percentage T2-FLAIR mismatch volume was higher in grades 3-4 compared with grade 2 IDH-mutant astrocytomas (P < .05), and ≥15% T2-FLAIR mismatch volume IDH-mutant astrocytomas were significantly larger than <15% T2-FLAIR mismatch volume IDH-mutant astrocytomas (P < .05) across both cohorts. When evaluated by 2 radiologists, the additional use of T2-FLAIR subtraction maps did not show a significant difference in interreader agreement, sensitivity, or specificity compared with a separate evaluation of T2-FLAIR and T2-weighted MR imaging alone. CONCLUSIONS: T2-FLAIR digital subtraction maps may be a useful, automated tool to obtain objective segmentations of tumor subregions based on quantitative thresholds for classifying IDH-mutant astrocytomas using the percentage T2-FLAIR mismatch volume with 100% specificity and exploring T2-FLAIR mismatch/T2-FLAIR nonmismatch subregion characteristics. Conversely, the addition of T2-FLAIR subtraction maps did not enhance the sensitivity or specificity of the visual T2-FLAIR mismatch sign assessment by experienced radiologists.
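A minimal sketch of the digital-subtraction quantification described above, assuming co-registered and intensity-normalised T2 and FLAIR volumes (the normalisation details are not reproduced here):

```python
# Percentage T2-FLAIR mismatch volume from a digital subtraction map.
import numpy as np

def percent_mismatch_volume(t2, flair, tumor_mask):
    """Voxels with positive T2 minus FLAIR values inside the tumour mask
    form the mismatch subregion; return its share of the tumour volume."""
    subtraction = t2 - flair                        # digital subtraction map
    mismatch = (subtraction > 0) & tumor_mask
    return 100.0 * mismatch.sum() / tumor_mask.sum()

t2 = np.random.rand(64, 64, 24)                     # toy co-registered volumes
flair = np.random.rand(64, 64, 24)
mask = np.zeros((64, 64, 24), dtype=bool)
mask[20:40, 20:40, 8:16] = True                     # toy tumour VOI
pct = percent_mismatch_volume(t2, flair, mask)      # compare against e.g. 42%
```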
Integrative analysis of imaging and transcriptomic data of the immune landscape associated with tumor metabolism in lung adenocarcinoma: Clinical and prognostic implications
Choi, Hongyoon
Na, Kwon Joong
THERANOSTICS2018Journal Article, cited 0 times
Website
TCGA-LUAD
A Cascaded Neural Network for Staging in Non-Small Cell Lung Cancer Using Pre-Treatment CT
Choi, J.
Cho, H. H.
Kwon, J.
Lee, H. Y.
Park, H.
Diagnostics (Basel)2021Journal Article, cited 0 times
Website
NSCLC-Radiomics-Genomics
NSCLC Radiogenomics
CPTAC-LUAD
CPTAC-LSCC
TCGA-LUAD
TCGA-LUSC
Computed Tomography (CT)
Convolutional neural networks (CNN)
Deep Learning
LUNG
Classification
BACKGROUND AND AIM: Tumor staging in non-small cell lung cancer (NSCLC) is important for treatment and prognosis. Staging involves expert interpretation of imaging, which we aim to automate with deep learning (DL). We proposed a cascaded DL method comprising two steps to classify between early- and advanced-stage NSCLC using pretreatment computed tomography. METHODS: We developed and tested a DL model to classify between early- and advanced-stage disease using training (n = 90), validation (n = 8), and two test (n = 37, n = 26) cohorts obtained from the public domain. The first step adopted an autoencoder network to compress the imaging data into latent variables and the second step used the latent variables to classify the stages using a convolutional neural network (CNN). Other DL and machine learning-based approaches were compared. RESULTS: Our model was tested in two test cohorts of CPTAC and TCGA. In CPTAC, our model achieved accuracy of 0.8649, sensitivity of 0.8000, specificity of 0.9412, and area under the curve (AUC) of 0.8206, compared to other approaches (AUC 0.6824-0.7206), for classifying between early- and advanced-stages. In TCGA, our model achieved accuracy of 0.8077, sensitivity of 0.7692, specificity of 0.8462, and AUC of 0.8343. CONCLUSION: Our cascaded DL model for classifying NSCLC patients into early-stage and advanced-stage showed promising results and could help future NSCLC research.
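An illustrative two-step cascade in PyTorch (a sketch under assumptions, not the authors' architecture; a small linear head stands in for their CNN classifier over the latent variables):

```python
# Step 1: an autoencoder compresses imaging data to latent variables.
# Step 2: a classifier predicts early vs advanced stage from the latents.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, in_dim=4096, latent=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(),
                                 nn.Linear(512, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 512), nn.ReLU(),
                                 nn.Linear(512, in_dim))

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z          # reconstruction and latent code

classifier = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))

x = torch.randn(8, 4096)               # toy flattened CT patches
recon, z = AutoEncoder()(x)            # train with a reconstruction loss first
logits = classifier(z.detach())        # then train the stage classifier
```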
CIRDataset: A Large-Scale Dataset for Clinically-Interpretable Lung Nodule Radiomics and Malignancy Prediction
Choi, Wookjin
Dahiya, Navdeep
Nadeem, Saad
2022Book Section, cited 0 times
LIDC-IDRI
Spiculations/lobulations, sharp/curved spikes on the surface of lung nodules, are good predictors of lung cancer malignancy and hence, are routinely assessed and reported by radiologists as part of the standardized Lung-RADS clinical scoring criteria. Given the 3D geometry of the nodule and 2D slice-by-slice assessment by radiologists, manual spiculation/lobulation annotation is a tedious task and thus no public datasets exist to date for probing the importance of these clinically-reported features in the SOTA malignancy prediction algorithms. As part of this paper, we release a large-scale Clinically-Interpretable Radiomics Dataset, CIRDataset, containing 956 radiologist QA/QC’ed spiculation/lobulation annotations on segmented lung nodules from two public datasets, LIDC-IDRI (N = 883) and LUNGx (N = 73). We also present an end-to-end deep learning model based on multi-class Voxel2Mesh extension to segment nodules (while preserving spikes), classify spikes (sharp/spiculation and curved/lobulation), and perform malignancy prediction. Previous methods have performed malignancy prediction for LIDC and LUNGx datasets but without robust attribution to any clinically reported/actionable features (due to known hyperparameter sensitivity issues with general attribution schemes). With the release of this comprehensively-annotated CIRDataset and end-to-end deep learning baseline, we hope that malignancy prediction methods can validate their explanations, benchmark against our baseline, and provide clinically-actionable insights. Dataset, code, pretrained models, and docker containers are available at https://github.com/nadeemlab/CIR.
Reproducible and Interpretable Spiculation Quantification for Lung Cancer Screening
Choi, Wookjin
Nadeem, Saad
Alam, Sadegh R.
Deasy, Joseph O.
Tannenbaum, Allen
Lu, Wei
Computer Methods and Programs in Biomedicine2020Journal Article, cited 0 times
Website
Spiculations, spikes on the surface of pulmonary nodules, are important predictors of lung cancer malignancy. In this study, we proposed an interpretable and parameter-free technique to quantify spiculation using an area distortion metric obtained by conformal (angle-preserving) spherical parameterization. We exploit the insight that for an angle-preserved spherical mapping of a given nodule, the corresponding negative area distortion precisely characterizes the spiculations on that nodule. We introduced novel spiculation scores based on the area distortion metric and spiculation measures. We also semi-automatically segment the lung nodule (for reproducibility) as well as vessel and wall attachment to differentiate real spiculations from lobulation and attachment. A simple pathological malignancy prediction model is also introduced. We used the publicly available LIDC-IDRI dataset's pathologist (strong-label) and radiologist (weak-label) ratings to train and test radiomics models containing this feature, and then externally validated the models. We achieved AUC = 0.80 and 0.76, respectively, with the models trained on the 811 weakly-labeled LIDC datasets and tested on the 72 strongly-labeled LIDC and 73 LUNGx datasets; the previous best model for LUNGx had AUC = 0.68. The number-of-spiculations feature was found to be highly correlated (Spearman's rank correlation coefficient) with the radiologists' spiculation score. We developed a reproducible and interpretable, parameter-free technique for quantifying spiculations on nodules. The spiculation quantification measures were then applied to the radiomics framework for pathological malignancy prediction with reproducible semi-automatic segmentation of the nodule. Using our interpretable features (size, attachment, spiculation, lobulation), we were able to achieve higher performance than previous models. In the future, we will exhaustively test our model for lung cancer screening in the clinic.
3D CMM-Net with Deeper Encoder for Semantic Segmentation of Brain Tumors in BraTS2021 Challenge
Choi, Yoonseok
Al-masni, Mohammed A.
Kim, Dong-Hyun
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2021
BRAIN
Segmentation
Algorithm Development
We propose a 3D version of the Contextual Multi-scale Multi-level Network (3D CMM-Net) with deeper encoder depth for automated semantic segmentation of different brain tumors in the BraTS2021 challenge. The proposed network has the capability to extract and learn deeper features for the task of multi-class segmentation directly from 3D MRI data. The overall performance of the proposed network gave Dice scores of 0.7557, 0.8060, and 0.8351 for enhancing tumor, tumor core, and whole tumor, respectively on the local-test dataset.
Prediction of Human Papillomavirus Status and Overall Survival in Patients with Untreated Oropharyngeal Squamous Cell Carcinoma: Development and Validation of CT-Based Radiomics
Choi, Y.
Nam, Y.
Jang, J.
Shin, N. Y.
Ahn, K. J.
Kim, B. S.
Lee, Y. S.
Kim, M. S.
AJNR Am J Neuroradiol2020Journal Article, cited 0 times
Website
Head-Neck-Radiomics-HN1
Algorithm Development
Radiomics
Computed Tomography (CT)
Retrospective Studies
BACKGROUND AND PURPOSE: Human papillomavirus is a prognostic marker for oropharyngeal squamous cell carcinoma. We aimed to determine the value of CT-based radiomics for predicting the human papillomavirus status and overall survival in patients with oropharyngeal squamous cell carcinoma. MATERIALS AND METHODS: Eighty-six patients with oropharyngeal squamous cell carcinoma were retrospectively collected and grouped into training (n = 61) and test (n = 25) sets. For human papillomavirus status and overall survival prediction, radiomics features were selected via a random forest-based algorithm and Cox regression analysis, respectively. Relevant features were used to build multivariate Cox regression models and calculate the radiomics score. Human papillomavirus status and overall survival prediction were assessed via the area under the curve and concordance index, respectively. The models were validated in the test and The Cancer Imaging Archive cohorts (n = 78). RESULTS: For prediction of human papillomavirus status, radiomics features yielded areas under the curve of 0.865, 0.747, and 0.834 in the training, test, and validation sets, respectively. In the univariate Cox regression, the human papillomavirus status (positive: hazard ratio, 0.257; 95% CI, 0.09-0.7; P = .008), T-stage (≥III: hazard ratio, 3.66; 95% CI, 1.34-9.99; P = .011), and radiomics score (high-risk: hazard ratio, 3.72; 95% CI, 1.21-11.46; P = .022) were associated with overall survival. The addition of the radiomics score to the clinical Cox model increased the concordance index from 0.702 to 0.733 (P = .01). Validation yielded concordance indices of 0.866 and 0.720. CONCLUSIONS: CT-based radiomics may be useful in predicting human papillomavirus status and overall survival in patients with oropharyngeal squamous cell carcinoma.
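A hedged sketch of the survival-modelling step (lifelines API; the toy DataFrame columns stand in for the selected radiomic features):

```python
# Multivariate Cox model over selected radiomic features; the linear
# predictor serves as a radiomics score for risk grouping.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "time":   rng.uniform(5, 60, 80),   # toy follow-up times (months)
    "event":  rng.integers(0, 2, 80),   # 1 = event observed
    "feat_a": rng.normal(size=80),      # stand-ins for radiomic features
    "feat_b": rng.normal(size=80),
})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
risk_score = cph.predict_partial_hazard(df)      # radiomics score per patient
high_risk = risk_score > risk_score.median()     # dichotomise into risk groups
```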
Machine learning and radiomic phenotyping of lower grade gliomas: improving survival prediction
Choi, Yoon Seong
Ahn, Sung Soo
Chang, Jong Hee
Kang, Seok-Gu
Kim, Eui Hyun
Kim, Se Hoon
Jain, Rajan
Lee, Seung-Koo
Eur Radiol2020Journal Article, cited 0 times
Website
TCGA-LGG
Radiomics
Radiogenomics
Glioma
Machine learning
BACKGROUND AND PURPOSE: Recent studies have highlighted the importance of isocitrate dehydrogenase (IDH) mutational status in stratifying biologically distinct subgroups of gliomas. This study aimed to evaluate whether MRI-based radiomic features could improve the accuracy of survival predictions for lower grade gliomas over clinical and IDH status. MATERIALS AND METHODS: Radiomic features (n = 250) were extracted from preoperative MRI data of 296 lower grade glioma patients from databases at our institution (n = 205) and The Cancer Genome Atlas (TCGA)/The Cancer Imaging Archive (TCIA) (n = 91). For predicting overall survival, random survival forest (RSF) models were trained with radiomic features and non-imaging prognostic factors, including age, resection extent, WHO grade, and IDH status, on the institutional dataset, and validated on the TCGA/TCIA dataset. The performance of the RSF model and the incremental value of radiomic features were assessed by time-dependent receiver operating characteristics. RESULTS: The radiomics RSF model identified 71 radiomic features to predict overall survival, which were successfully validated on the TCGA/TCIA dataset (iAUC, 0.620; 95% CI, 0.501-0.756). Relative to the RSF model built from the non-imaging prognostic parameters, the addition of radiomic features significantly improved the overall survival prediction accuracy (iAUC, 0.627 vs. 0.709; difference, 0.097; 95% CI, 0.003-0.209). CONCLUSION: Radiomic phenotyping with machine learning can improve survival prediction over clinical profile and genomic data for lower grade gliomas. KEY POINTS: * Radiomics analysis with machine learning can improve survival prediction over non-imaging factors (clinical and molecular profiles) for lower grade gliomas, across different institutions.
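A minimal sketch of a random survival forest over radiomic features, using scikit-survival (toy data; not the study's 250-feature pipeline):

```python
# Random survival forest for overall-survival prediction (scikit-survival).
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv

rng = np.random.default_rng(0)
X = rng.random((100, 20))                         # toy radiomic features
y = Surv.from_arrays(event=rng.random(100) > 0.5,
                     time=rng.uniform(1, 60, 100))
rsf = RandomSurvivalForest(n_estimators=100, random_state=0).fit(X, y)
risk = rsf.predict(X)                             # higher = worse prognosis
```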
Incremental Prognostic Value of ADC Histogram Analysis over MGMT Promoter Methylation Status in Patients with Glioblastoma
Choi, Yoon Seong
Ahn, Sung Soo
Kim, Dong Wook
Chang, Jong Hee
Kang, Seok-Gu
Kim, Eui Hyun
Kim, Se Hoon
Rim, Tyler Hyungtaek
Lee, Seung-Koo
Radiology2016Journal Article, cited 18 times
Website
Radiogenomics
Glioblastoma Multiforme (GBM)
Purpose To investigate the incremental prognostic value of apparent diffusion coefficient (ADC) histogram analysis over O(6)-methylguanine-DNA methyltransferase (MGMT) promoter methylation status in patients with glioblastoma and the correlation between ADC parameters and MGMT status. Materials and Methods This retrospective study was approved by the institutional review board, and informed consent was waived. A total of 112 patients with glioblastoma were divided into training (74 patients) and test (38 patients) sets. Overall survival (OS) and progression-free survival (PFS) were analyzed with ADC parameters, MGMT status, and other clinical factors. Multivariate Cox regression models with and without ADC parameters were constructed. Model performance was assessed with c index and receiver operating characteristic curve analyses for 12- and 16-month OS and 12-month PFS in the training set and validated in the test set. ADC parameters were compared according to MGMT status for the entire cohort. Results By using ADC parameters, the c indices and diagnostic accuracies for 12- and 16-month OS and 12-month PFS in the models showed significant improvement, with the exception of c indices in the models for PFS (P < .05 for all) in the training set. In the test set, the diagnostic accuracy was improved by using ADC parameters and was significant, with the 25th and 50th percentiles of ADC for 16-month OS (P = .040 and P = .047) and the 25th percentile of ADC for 12-month PFS (P = .026). No significant correlation was found between ADC parameters and MGMT status. Conclusion ADC histogram analysis had incremental prognostic value over MGMT promoter methylation status in patients with glioblastoma. (c) RSNA, 2016. Online supplemental material is available for this article.
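The histogram features in question reduce to percentiles of ADC values inside the tumour mask; a minimal sketch:

```python
# Percentile features from an ADC map restricted to a tumour mask.
import numpy as np

def adc_percentiles(adc_map, mask, qs=(25, 50, 75)):
    """Return the requested ADC percentiles over tumour voxels."""
    values = adc_map[mask > 0]
    return {q: float(np.percentile(values, q)) for q in qs}

features = adc_percentiles(np.random.rand(64, 64, 32),
                           np.ones((64, 64, 32)))   # toy map and mask
```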
ST3GAL1-associated transcriptomic program in glioblastoma tumor growth, invasion, and prognosis
Chong, Yuk Kien
Sandanaraj, Edwin
Koh, Lynnette WH
Thangaveloo, Moogaambikai
Tan, Melanie SY
Koh, Geraldene RH
Toh, Tan Boon
Lim, Grace GY
Holbrook, Joanna D
Kon, Oi Lian
Nadarajah, M.
Ng, I.
Ng, W. H.
Tan, N. S.
Lim, K. L.
Tang, C.
Ang, B. T.
Journal of the National Cancer Institute2016Journal Article, cited 16 times
Website
REMBRANDT
Radiogenomics
BRAIN
Glioblastoma Multiforme (GBM)
BACKGROUND: Cell surface sialylation is associated with tumor cell invasiveness in many cancers. Glioblastoma is the most malignant primary brain tumor and is highly infiltrative. ST3GAL1 sialyltransferase gene is amplified in a subclass of glioblastomas, and its role in tumor cell self-renewal remains unexplored. METHODS: Self-renewal of patient glioma cells was evaluated using clonogenic, viability, and invasiveness assays. ST3GAL1 was identified from differentially expressed genes in Peanut Agglutinin-stained cells and validated in REMBRANDT (n = 390) and Gravendeel (n = 276) clinical databases. Gene set enrichment analysis revealed upstream processes. TGFbeta signaling on ST3GAL1 transcription was assessed using chromatin immunoprecipitation. Transcriptome analysis of ST3GAL1 knockdown cells was done to identify downstream pathways. A constitutively active FoxM1 mutant lacking critical anaphase-promoting complex/cyclosome ([APC/C]-Cdh1) binding sites was used to evaluate ST3Gal1-mediated regulation of FoxM1 protein. Finally, the prognostic role of ST3Gal1 was determined using an orthotopic xenograft model (3 mice groups comprising nontargeting and 2 clones of ST3GAL1 knockdown in NNI-11 [8 per group] and NNI-21 [6 per group]), and the correlation with patient clinical information. All statistical tests on patients' data were two-sided; other P values below are one-sided. RESULTS: High ST3GAL1 expression defines an invasive subfraction with self-renewal capacity; its loss of function prolongs survival in a mouse model established from mesenchymal NNI-11 (P < .001; groups of 8 in 3 arms: nontargeting, C1, and C2 clones of ST3GAL1 knockdown). ST3GAL1 transcriptomic program stratifies patient survival (hazard ratio [HR] = 2.47, 95% confidence interval [CI] = 1.72 to 3.55, REMBRANDT P = 1.92 x 10^-8; HR = 2.89, 95% CI = 1.94 to 4.30, Gravendeel P = 1.05 x 10^-11), independent of age and histology, and associates with higher tumor grade and T2 volume (P = 1.46 x 10^-4). TGFbeta signaling, elevated in mesenchymal patients, correlates with high ST3GAL1 (REMBRANDT glioma cor = 0.31, P = 2.29 x 10^-10; Gravendeel glioma cor = 0.50, P = 3.63 x 10^-20). The transcriptomic program upon ST3GAL1 knockdown enriches for mitotic cell cycle processes. FoxM1 was identified as a statistically significantly modulated gene (P = 2.25 x 10^-5) and mediates ST3Gal1 signaling via the (APC/C)-Cdh1 complex. CONCLUSIONS: The ST3GAL1-associated transcriptomic program portends poor prognosis in glioma patients and enriches for higher tumor grades of the mesenchymal molecular classification. We show that ST3Gal1-regulated self-renewal traits are crucial to the sustenance of glioblastoma multiforme growth.
Proteogenomic analysis of chemo-refractory high-grade serous ovarian cancer
Chowdhury, Shrabanti
Kennedy, Jacob J
Ivey, Richard G
Murillo, Oscar D
Hosseini, Noshad
Song, Xiaoyu
Petralia, Francesca
Calinawan, Anna
Savage, Sara R
Berry, Anna B
Reva, Boris
Ozbek, Umut
Krek, Azra
Ma, Weiping
da Veiga Leprevost, Felipe
Ji, Jiayi
Yoo, Seungyeul
Lin, Chenwei
Voytovich, Uliana J
Huang, Yajue
Lee, Sun-Hee
Bergan, Lindsay
Lorentzen, Travis D
Mesri, Mehdi
Rodriguez, Henry
Hoofnagle, Andrew N
Herbert, Zachary T
Nesvizhskii, Alexey I
Zhang, Bing
Whiteaker, Jeffrey R
Fenyo, David
McKerrow, Wilson
Wang, Joshua
Schürer, Stephan C
Stathias, Vasileios
Chen, X Steven
Barcellos-Hoff, Mary Helen
Starr, Timothy K
Winterhoff, Boris J
Nelson, Andrew C
Mok, Samuel C
Kaufmann, Scott H
Drescher, Charles
Cieslik, Marcin
Wang, Pei
Birrer, Michael J
Paulovich, Amanda G
2023Journal Article, cited 0 times
PTRC-HGSOC
To improve the understanding of chemo-refractory high-grade serous ovarian cancers (HGSOCs), we characterized the proteogenomic landscape of 242 (refractory and sensitive) HGSOCs, representing one discovery and two validation cohorts across two biospecimen types (formalin-fixed paraffin-embedded and frozen). We identified a 64-protein signature that predicts with high specificity a subset of HGSOCs refractory to initial platinum-based therapy and is validated in two independent patient cohorts. We detected significant association between lack of Ch17 loss of heterozygosity (LOH) and chemo-refractoriness. Based on pathway protein expression, we identified 5 clusters of HGSOC, which validated across two independent patient cohorts and patient-derived xenograft (PDX) models. These clusters may represent different mechanisms of refractoriness and implicate putative therapeutic vulnerabilities.
Predicting recurrence risks in lung cancer patients using multimodal radiomics and random survival forests
Christie, J. R.
Daher, O.
Abdelrazek, M.
Romine, P. E.
Malthaner, R. A.
Qiabi, M.
Nayak, R.
Napel, S.
Nair, V. S.
Mattonen, S. A.
J Med Imaging (Bellingham)2022Journal Article, cited 0 times
NSCLC Radiogenomics
Computed Tomography (CT)
lung cancer
machine learning
Positron Emission Tomography (PET)
Radiomics
PURPOSE: We developed a model integrating multimodal quantitative imaging features from tumor and nontumor regions, qualitative features, and clinical data to improve the risk stratification of patients with resectable non-small cell lung cancer (NSCLC). APPROACH: We retrospectively analyzed 135 patients [mean age, 69 years (range, 43 to 87); 100 male patients and 35 female patients] with NSCLC who underwent upfront surgical resection between 2008 and 2012. The tumor and peritumoral regions on both preoperative CT and FDG PET-CT and the vertebral bodies L3 to L5 on FDG PET were segmented to assess the tumor and bone marrow uptake, respectively. Radiomic features were extracted and combined with clinical and CT qualitative features. A random survival forest model was developed using the top-performing features to predict the time to recurrence/progression in the training cohort (n = 101), validated in the testing cohort (n = 34) using the concordance, and compared with a stage-only model. Patients were stratified into high- and low-risks of recurrence/progression using Kaplan-Meier analysis. RESULTS: The model, consisting of stage, three wavelet texture features, and three wavelet first-order features, achieved a concordance of 0.78 and 0.76 in the training and testing cohorts, respectively, significantly outperforming the baseline stage-only model results of 0.67 (p < 0.005) and 0.60 (p = 0.008), respectively. Patients at high- and low-risks of recurrence/progression were significantly stratified in both the training (p < 0.005) and the testing (p = 0.03) cohorts. CONCLUSIONS: Our radiomic model, consisting of stage and tumor, peritumoral, and bone marrow features from CT and FDG PET-CT, significantly stratified patients into low- and high-risk of recurrence/progression.
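A hedged sketch of the final risk-stratification step (lifelines; the risk scores here are random stand-ins for the model's predictions):

```python
# Split patients at the median predicted risk and compare survival curves.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
time = rng.uniform(1, 60, 135)            # toy time-to-recurrence (months)
event = rng.random(135) > 0.4             # recurrence/progression observed
risk = rng.random(135)                    # stand-in for model predictions
high = risk > np.median(risk)

km_high = KaplanMeierFitter().fit(time[high], event[high], label="high risk")
km_low = KaplanMeierFitter().fit(time[~high], event[~high], label="low risk")
result = logrank_test(time[high], time[~high], event[high], event[~high])
print("log-rank p =", result.p_value)
```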
A Comprehensive Survey on Deep Learning-Based Pulmonary Nodule Identification on CT Images
Christina Sweetline, B.
Vijayakumaran, C.
2023Book Section, cited 0 times
QIN LUNG CT
Lung cancer is among the most rapidly increasing malignant tumor illnesses in terms of morbidity and death, posing a significant risk to human health. CT screening has been shown to be beneficial in detecting lung cancer in its early stages, when it manifests as pulmonary nodules. Low-Dose Computed Tomography (LDCT) scanning has proven to improve the accuracy of detecting and categorizing lung nodules during early stages, lowering the death rate. Radiologists can discover lung nodules by looking at images of the lungs. However, because the number of specialists is small and they are overloaded, proper assessment of image data is a difficult process. Nevertheless, with the rapid flooding of CT data, it is critical for radiologists to use an efficient Computer-Assisted Detection (CAD) system for analyzing lung nodules automatically. CNNs are found to have a significant impact on early lung cancer detection and management. This research examines the current approaches for detecting lung nodules automatically. The experimental standards for nodule analysis are described with publicly available datasets of lung CT images. Finally, this field's research trends, current issues, and future directions are discussed. It is concluded that CNNs have significantly changed early lung cancer diagnosis and treatment, and this review will give medical research groups the knowledge they need to understand the notion of CNNs and use them to enhance the overall healthcare system.
Application of Artificial Neural Networks for Prognostic Modeling in Lung Cancer after Combining Radiomic and Clinical Features
Chufal, Kundan S.
Ahmad, Irfan
Pahuja, Anjali K.
Miller, Alexis A.
Singh, Rajpal
Chowdhary, Rahul L.
Asian Journal of Oncology2019Journal Article, cited 0 times
Website
NSCLC-Radiomics
LUNG
Machine Learning
Artificial Neural Network (ANN)
Classification
Objective: This study aimed to investigate machine learning (ML) and artificial neural networks (ANNs) in the prognostic modeling of lung cancer, utilizing high-dimensional data. Materials and Methods: A computed tomography (CT) dataset of inoperable non-small cell lung carcinoma (NSCLC) patients with embedded tumor segmentation and survival status, comprising 422 patients, was selected. Radiomic data extraction was performed with the Computational Environment for Radiotherapy Research (CERR). The survival probability was first determined based on clinical features only and then using unsupervised ML methods. Supervised ANN modeling was performed by direct and hybrid modeling, which were subsequently compared. Statistical significance was set at p < 0.05. Results: Survival analyses based on clinical features alone were not significant, except for gender. ML clustering performed on unselected radiomic and clinical data demonstrated a significant difference in survival (two-step cluster, median overall survival [mOS]: 30.3 vs. 17.2 m; p = 0.03; K-means cluster, mOS: 21.1 vs. 7.3 m; p < 0.001). Direct ANN modeling yielded a better overall model accuracy utilizing the multilayer perceptron (MLP) than the radial basis function (RBF; 79.2 vs. 61.4%, respectively). Hybrid modeling with MLP (after feature selection with ML) resulted in an overall model accuracy of 80%. There was no difference in model accuracy between direct and hybrid modeling (p = 0.164). Conclusion: Our preliminary study supports the application of ANNs in predicting outcomes based on radiomic and clinical data.
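A minimal sketch of the direct MLP modelling step (scikit-learn stands in for the software actually used; dimensions and hyperparameters are illustrative):

```python
# MLP classifier over combined radiomic and clinical features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((422, 50))                  # toy radiomic + clinical features
y = rng.integers(0, 2, 422)                # survival status
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)
print("accuracy:", mlp.score(X_te, y_te))
```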
Facilitating innovation and knowledge transfer between homogeneous and heterogeneous datasets: Generic incremental transfer learning approach and multidisciplinary studies
Chui, Kwok Tai
Arya, Varsha
Band, Shahab S.
Alhalabi, Mobeen
Liu, Ryan Wen
Chi, Hao Ran
Journal of Innovation & Knowledge2023Journal Article, cited 0 times
Website
NSCLC-Radiomics-Genomics
SPIE-AAPM Lung CT Challenge
LungCT-Diagnosis
QIN Breast
QIN Breast DCE-MRI
Breast-MRI-NACT-Pilot
Transfer learning
Deep Learning
Open datasets serve as facilitators for researchers to conduct research with ground truth data. Generally, datasets contain innovation and knowledge in their domains that could be transferred between homogeneous datasets; this has become feasible for machine learning models with the advent of transfer learning algorithms. Research initiatives are drawn to heterogeneous datasets when useful innovation and knowledge can be extracted across datasets of different domains. A breakthrough can be achieved without the restriction requiring similarities between datasets. A multiple incremental transfer learning approach is proposed to yield optimal results in the target model: a multiple-rounds multiple incremental transfer learning scheme with a negative transfer avoidance algorithm, serving as a generic approach to transfer innovation and knowledge from the source domain to the target domain. Incremental learning plays an important role in lowering the risk of transferring unrelated information, which reduces the performance of machine learning models. To evaluate the effectiveness of the proposed algorithm, multidisciplinary studies are carried out in 5 disciplines with 15 benchmark datasets. Each discipline comprises 3 datasets as studies with homogeneous datasets, whereas heterogeneous datasets are formed between disciplines. The results reveal that the proposed algorithm enhances the average accuracy by 4.35% compared with existing works. Ablation studies are also conducted to analyse the contributions of the individual techniques of the proposed algorithm, namely the multiple-rounds strategy, incremental learning, and the negative transfer avoidance algorithm. These techniques enhance the average accuracy of the machine learning model by 3.44%, 0.849%, and 4.26%, respectively.
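The paper's multiple-rounds scheme is not reproduced here; the sketch below shows only the elementary transfer step it builds on, i.e. fine-tuning source-trained weights on a target dataset with a small learning rate (PyTorch; all shapes and rates are illustrative):

```python
# One incremental transfer step: adapt a source-trained network to a
# target task; a conservative learning rate limits negative transfer.
import torch
import torch.nn as nn
import torchvision.models as models

model = models.resnet18(weights=None)          # stand-in for source weights
model.fc = nn.Linear(model.fc.in_features, 2)  # new head for the target task
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)                # toy target-domain batch
y = torch.randint(0, 2, (4,))
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```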
Results of initial low-dose computed tomographic screening for lung cancer
Church, T. R.
Black, W. C.
Aberle, D. R.
Berg, C. D.
Clingan, K. L.
Duan, F.
Fagerstrom, R. M.
Gareen, I. F.
Gierada, D. S.
Jones, G. C.
Mahon, I.
Marcus, P. M.
Sicks, J. D.
Jain, A.
Baum, S.
N Engl J Med2013Journal Article, cited 529 times
Website
NLST
lung
LDCT
BACKGROUND: Lung cancer is the largest contributor to mortality from cancer. The National Lung Screening Trial (NLST) showed that screening with low-dose helical computed tomography (CT) rather than with chest radiography reduced mortality from lung cancer. We describe the screening, diagnosis, and limited treatment results from the initial round of screening in the NLST to inform and improve lung-cancer-screening programs. METHODS: At 33 U.S. centers, from August 2002 through April 2004, we enrolled asymptomatic participants, 55 to 74 years of age, with a history of at least 30 pack-years of smoking. The participants were randomly assigned to undergo annual screening, with the use of either low-dose CT or chest radiography, for 3 years. Nodules or other suspicious findings were classified as positive results. This article reports findings from the initial screening examination. RESULTS: A total of 53,439 eligible participants were randomly assigned to a study group (26,715 to low-dose CT and 26,724 to chest radiography); 26,309 participants (98.5%) and 26,035 (97.4%), respectively, underwent screening. A total of 7191 participants (27.3%) in the low-dose CT group and 2387 (9.2%) in the radiography group had a positive screening result; in the respective groups, 6369 participants (90.4%) and 2176 (92.7%) had at least one follow-up diagnostic procedure, including imaging in 5717 (81.1%) and 2010 (85.6%) and surgery in 297 (4.2%) and 121 (5.2%). Lung cancer was diagnosed in 292 participants (1.1%) in the low-dose CT group versus 190 (0.7%) in the radiography group (stage 1 in 158 vs. 70 participants and stage IIB to IV in 120 vs. 112). Sensitivity and specificity were 93.8% and 73.4% for low-dose CT and 73.5% and 91.3% for chest radiography, respectively. CONCLUSIONS: The NLST initial screening results are consistent with the existing literature on screening by means of low-dose CT and chest radiography, suggesting that a reduction in mortality from lung cancer is achievable at U.S. screening centers that have staff experienced in chest CT. (Funded by the National Cancer Institute; NLST ClinicalTrials.gov number, NCT00047385.).
Self supervised contrastive learning for digital histopathology
Ciga, Ozan
Xu, Tony
Martel, Anne Louise
Machine Learning with Applications2022Journal Article, cited 28 times
Website
Prostate-MRI
C-NMC 2019
SN-AM
Post-NAT-BRCA
AML-Cytomorphology_LMU
CPTAC
TCGA
Algorithm Development
Pathomics
Unsupervised learning has been a long-standing goal of machine learning and is especially important for medical image analysis, where the learning can compensate for the scarcity of labeled datasets. A promising subclass of unsupervised learning is self-supervised learning, which aims to learn salient features using the raw input as the learning signal. In this work, we tackle the issue of learning domain-specific features without any supervision to improve multiple task performances that are of interest to the digital histopathology community. We apply a contrastive self-supervised learning method to digital histopathology by collecting and pretraining on 57 histopathology datasets without any labels. We find that combining multiple multi-organ datasets with different types of staining and resolution properties improves the quality of the learned features. Furthermore, we find using more images for pretraining leads to a better performance in multiple downstream tasks, albeit there are diminishing returns as more unlabeled images are incorporated into the pretraining. Linear classifiers trained on top of the learned features show that networks pretrained on digital histopathology datasets perform better than ImageNet pretrained networks, boosting task performances by more than 28% in scores on average. Interestingly, we did not observe a consistent correlation between the pretraining dataset site or organ and the downstream task (e.g., pretraining with only breast images does not necessarily lead to a superior downstream task performance for breast-related tasks). These findings may also be useful when applying newer contrastive techniques to histopathology data. Pretrained PyTorch models are made publicly available at https://github.com/ozanciga/self-supervised-histopathology.
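A minimal NT-Xent (SimCLR-style) contrastive loss of the kind used for such pretraining (a sketch, not the authors' code):

```python
# Contrastive loss over two augmented views of the same patches.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """z1, z2: embeddings of two views, shape (N, D); row i in z1 and
    row i in z2 form a positive pair, all other rows are negatives."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)      # (2N, D)
    sim = z @ z.T / tau                              # scaled cosine similarity
    sim.fill_diagonal_(float("-inf"))                # exclude self-pairs
    n = z1.shape[0]
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

loss = nt_xent(torch.randn(8, 128), torch.randn(8, 128))
```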
Prediction of Glioma Grades Using Deep Learning with Wavelet Radiomic Features
Çinarer, Gökalp
Emiroğlu, Bülent Gürsel
Yurttakal, Ahmet Haşim
Applied Sciences2020Journal Article, cited 0 times
LGG-1p19qDeletion
Gliomas are the most common primary brain tumors. They are classified into four grades (Grades I-IV) according to the guidelines of the World Health Organization (WHO). The accurate grading of gliomas has clinical significance for planning prognostic treatments, pre-diagnosis, monitoring, and administration of chemotherapy. The purpose of this study is to develop a deep learning-based classification method using radiomic features of brain tumor glioma grades with a deep neural network (DNN). The classifier was combined with the discrete wavelet transform (DWT), a powerful feature extraction tool. This study primarily focuses on the four main aspects of the radiomic workflow, namely tumor segmentation, feature extraction, analysis, and classification. We evaluated data from 121 patients with brain tumors (Grade II, n = 77; Grade III, n = 44) from The Cancer Imaging Archive, and 744 radiomic features were obtained by applying low sub-band and high sub-band 3D wavelet transform filters to the 3D tumor images. Quantitative values were statistically analyzed with Mann-Whitney U tests, and 126 radiomic features with significant statistical properties were selected across eight different wavelet filters. Classification performances of the 3D wavelet transform filter groups were measured using accuracy, sensitivity, F1 score, and specificity with the deep learning classifier model. The proposed model was highly effective in grading gliomas, with 96.15% accuracy, 94.12% precision, 100% recall, 96.97% F1 score, and 98.75% area under the ROC curve. As a result, deep learning and feature selection techniques with wavelet transform filters can be accurately applied to glioma grade classification using the proposed method.
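A minimal sketch of the sub-band decomposition step with PyWavelets (the wavelet choice and the first-order statistics are illustrative):

```python
# 3D discrete wavelet decomposition of a tumour ROI into 8 sub-bands,
# followed by simple first-order statistics per sub-band.
import numpy as np
import pywt

volume = np.random.rand(32, 32, 32)            # toy 3D tumour ROI
coeffs = pywt.dwtn(volume, wavelet="coif1")    # keys 'aaa' (low) ... 'ddd' (high)
features = {band: (c.mean(), c.std()) for band, c in coeffs.items()}
```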
Automatic detection of spiculation of pulmonary nodules in computed tomography images
Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histological sub-regions, i.e., peritumoral edema, necrotic core, enhancing and non-enhancing tumour core. Although brain tumours can easily be detected using multi-modal MRI, accurate tumor segmentation is a challenging task. Hence, using the data provided by the BraTS Challenge 2020, we propose a 3D volume-to-volume Generative Adversarial Network for segmentation of brain tumours. The model, called Vox2Vox, generates realistic segmentation outputs from multi-channel 3D MR images, segmenting the whole, core and enhancing tumor with mean dice scores of 87.20%, 81.14%, and 78.67% and 95th percentile Hausdorff distances of 6.44 mm, 24.36 mm, and 18.95 mm on the BraTS testing set, after ensembling 10 Vox2Vox models obtained with 10-fold cross-validation. The code is available at https://github.com/mdciri/Vox2Vox.
Brain Tumor Classification Using Deep Neural Network
Çınarer, Gökalp
Emiroğlu, Bülent Gürsel
Arslan, Recep Sinan
Yurttakal, Ahmet Haşim
2020Journal Article, cited 0 times
REMBRANDT
Transfer learning for auto-segmentation of 17 organs-at-risk in the head and neck: Bridging the gap between institutional and public datasets
Clark, B.
Hardcastle, N.
Johnston, L. A.
Korte, J.
Med Phys2024Journal Article, cited 0 times
Website
HEAD-NECK-RADIOMICS-HN1
Head-Neck-PET-CT
Head-Neck-CT-Atlas
OPC-Radiomics
Algorithm Development
Deep Learning
Image Segmentation
Transfer learning
BACKGROUND: Auto-segmentation of organs-at-risk (OARs) in the head and neck (HN) on computed tomography (CT) images is a time-consuming component of the radiation therapy pipeline that suffers from inter-observer variability. Deep learning (DL) has shown state-of-the-art results in CT auto-segmentation, with larger and more diverse datasets showing better segmentation performance. Institutional CT auto-segmentation datasets have been small historically (n < 50) due to the time required for manual curation of images and anatomical labels. Recently, large public CT auto-segmentation datasets (n > 1000 aggregated) have become available through online repositories such as The Cancer Imaging Archive. Transfer learning is a technique applied when training samples are scarce, but a large dataset from a closely related domain is available. PURPOSE: The purpose of this study was to investigate whether a large public dataset could be used in place of an institutional dataset (n > 500), or to augment performance via transfer learning, when building HN OAR auto-segmentation models for institutional use. METHODS: Auto-segmentation models were trained on a large public dataset (public models) and a smaller institutional dataset (institutional models). The public models were fine-tuned on the institutional dataset using transfer learning (transfer models). We assessed both public model generalizability and transfer model performance by comparison with institutional models. Additionally, the effect of institutional dataset size on both transfer and institutional models was investigated. All DL models used a high-resolution, two-stage architecture based on the popular 3D U-Net. Model performance was evaluated using five geometric measures: the dice similarity coefficient (DSC), surface DSC, 95(th) percentile Hausdorff distance, mean surface distance (MSD), and added path length. RESULTS: For a small subset of OARs (left/right optic nerve, spinal cord, left submandibular), the public models performed significantly better (p < 0.05) than, or showed no significant difference to, the institutional models under most of the metrics examined. For the remaining OARs, the public models were inferior to the institutional models, although performance differences were small (DSC ≤ 0.03, MSD < 0.5 mm) for seven OARs (brainstem, left/right lens, left/right parotid, mandible, right submandibular). The transfer models performed significantly better than the institutional models for seven OARs (brainstem, right lens, left/right optic nerve, left/right parotid, spinal cord) with a small margin of improvement (DSC ≤ 0.02, MSD < 0.4 mm). When numbers of institutional training samples were limited, public and transfer models outperformed the institutional models for most OARs (brainstem, left/right lens, left/right optic nerve, left/right parotid, spinal cord, and left/right submandibular). CONCLUSION: Training auto-segmentation models with public data alone was suitable for a small number of OARs. Using only public data incurred a small performance deficit for most other OARs, when compared with institutional data alone, but may be preferable over time-consuming curation of a large institutional dataset. When a large institutional dataset was available, transfer learning with models pretrained on a large public dataset provided a modest performance improvement for several OARs. When numbers of institutional samples were limited, using the public dataset alone, or as a pretrained model, was beneficial for most OARs.
Reproducing 2D breast mammography images with 3D printed phantoms
Lung cancer (LC) was the predicted leading cause of Australian cancer fatalities in 2018 (around 9,200 deaths). Non-Small Cell Lung Cancer (NSCLC) tumours with larger amounts of heterogeneity have been linked to a worse outcome. Medical imaging is widely used in oncology and non-invasively collects data about the whole tumour. The field of radiomics uses these medical images to extract quantitative image features and promises further understanding of the disease at the time of diagnosis, during treatment and in follow up. It is well known that manual and semi-automatic tumour segmentation methods are subject to inter-observer variability, which reduces confidence in the treatment region and extent of disease. This leads to tumour under- and over-estimation which can impact on treatment outcome and treatment-induced morbidity. This research aims to use radiomic features centred at each pixel to segment the location of the lung tumour on Computed Tomography (CT) scans. To achieve this objective, a Decision Tree (DT) model was trained using sampled CT data from eight patients. The data consisted of 25 pixel-based texture features calculated from four Gray Level Matrices (GLMs) describing the region around each pixel. The model was assessed using an unseen patient through both a confusion matrix and interpretation of the segment. The findings showed that the model accurately (AUROC = 83.9%) predicts tumour location within the test data, concluding that pixel-based textural features likely contribute to segmenting the lung tumour. The prediction displayed a strong representation of the manually segmented Region of Interest (ROI), which is considered the ground truth for the purpose of this research.
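A minimal sketch of pixel-centred grey-level co-occurrence features with scikit-image (the patch size and property list are assumptions):

```python
# GLCM texture features for the patch around one pixel; a decision tree
# would be trained on such vectors to label tumour vs background.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def patch_texture(patch):
    """Contrast/homogeneity/energy/correlation for an 8-bit patch."""
    glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return [float(graycoprops(glcm, prop)[0, 0])
            for prop in ("contrast", "homogeneity", "energy", "correlation")]

features = patch_texture(np.random.randint(0, 256, (15, 15), dtype=np.uint8))
```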
Acute Lymphoblastic Leukemia Detection Using Depthwise Separable Convolutional Neural Networks
Clinton Jr, Laurence P
Somes, Karen M
Chu, Yongjun
Javed, Faizan
SMU Data Science Review2020Journal Article, cited 0 times
C-NMC 2019
Machine Learning
Automated Medical Image Modality Recognition by Fusion of Visual and Text Information
The Distance-Regularized Level Set Evolution (DRLSE) algorithm solves many problems that plague the class of Level Set algorithms, but has a significant computational cost and is sensitive to its many parameters. Configuring these parameters is a time-intensive trial-and-error task that limits the usability of the algorithm. This is especially true in the field of Medical Imaging, where it would otherwise be highly suitable. The aim of this work is to develop a parallel implementation of the algorithm using the Compute-Unified Device Architecture (CUDA) for Graphics Processing Units (GPU), which would reduce the computational cost of the algorithm, bringing it to the interactive regime. This would lessen the burden of configuring its parameters and broaden its application. Using consumer-grade hardware, we observed performance gains between roughly 800% and 1700% when comparing against a purely serial C++ implementation we developed, and gains between roughly 180% and 500% when comparing against the MATLAB reference implementation of DRLSE, both depending on input image resolution.
NCI Workshop Report: Clinical and Computational Requirements for Correlating Imaging Phenotypes with Genomics Signatures
Colen, Rivka
Foster, Ian
Gatenby, Robert
Giger, Mary Ellen
Gillies, Robert
Gutman, David
Heller, Matthew
Jain, Rajan
Madabhushi, Anant
Madhavan, Subha
Napel, Sandy
Rao, Arvind
Saltz, Joel
Tatum, James
Verhaak, Roeland
Whitman, Gary
Translational Oncology2014Journal Article, cited 39 times
Website
Multi-modal imaging
Radiogenomics
Radiomics
TCGA-GBM
TCGA-BRCA
Pathomics
The National Cancer Institute (NCI) Cancer Imaging Program organized two related workshops on June 26-27, 2013, entitled "Correlating Imaging Phenotypes with Genomics Signatures Research" and "Scalable Computational Resources as Required for Imaging-Genomics Decision Support Systems." The first workshop focused on clinical and scientific requirements, exploring our knowledge of phenotypic characteristics of cancer biological properties to determine whether the field is sufficiently advanced to correlate with imaging phenotypes that underpin genomics and clinical outcomes, and exploring new scientific methods to extract phenotypic features from medical images and relate them to genomics analyses. The second workshop focused on computational methods that explore informatics and computational requirements to extract phenotypic features from medical images and relate them to genomics analyses and improve the accessibility and speed of dissemination of existing NIH resources. These workshops linked clinical and scientific requirements of currently known phenotypic and genotypic cancer biology characteristics with imaging phenotypes that underpin genomics and clinical outcomes. The group generated a set of recommendations to NCI leadership and the research community that encourage and support development of the emerging radiogenomics research field to address short- and longer-term goals in cancer research.
Imaging genomic mapping of an invasive MRI phenotype predicts patient outcome and metabolic dysfunction: a TCGA glioma phenotype research group project
Colen, Rivka R
Vangel, Mark
Wang, Jixin
Gutman, David A
Hwang, Scott N
Wintermark, Max
Jain, Rajan
Jilwan-Nicolas, Manal
Chen, James Y
Raghavan, Prashant
Holder, C. A.
Rubin, D.
Huang, E.
Kirby, J.
Freymann, J.
Jaffe, C. C.
Flanders, A.
TCGA Glioma Phenotype Research Group
Zinn, P. O.
BMC Medical Genomics2014Journal Article, cited 47 times
Website
TCGA-GBM
Radiomics
Radiogenomics
Computer Aided Detection (CADe)
Magnetic Resonance Imaging (MRI)
BACKGROUND: Invasion of tumor cells into adjacent brain parenchyma is a major cause of treatment failure in glioblastoma. Furthermore, invasive tumors are shown to have a different genomic composition and metabolic abnormalities that allow for a more aggressive GBM phenotype and resistance to therapy. We thus seek to identify those genomic abnormalities associated with a highly aggressive and invasive GBM imaging-phenotype. METHODS: We retrospectively identified 104 treatment-naive glioblastoma patients from The Cancer Genome Atlas (TCGA) who had gene expression profiles and corresponding MR imaging available in The Cancer Imaging Archive (TCIA). The standardized VASARI feature-set criteria were used for the qualitative visual assessments of invasion. Patients were assigned to classes based on the presence (Class A) or absence (Class B) of statistically significant invasion parameters to create an invasive imaging signature; imaging genomic analysis was subsequently performed using the GenePattern Comparative Marker Selection module (Broad Institute). RESULTS: Our results show that patients with a combination of deep white matter tract and ependymal invasion (Class A) on imaging had a significant decrease in overall survival as compared to patients with absence of such invasive imaging features (Class B) (8.7 versus 18.6 months, p < 0.001). Mitochondrial dysfunction was the top canonical pathway associated with the Class A gene expression signature. The MYC oncogene was predicted to be the top activation regulator in Class A. CONCLUSION: We demonstrate that MRI biomarker signatures can identify distinct GBM phenotypes associated with highly significant survival differences and specific molecular pathways. This study identifies mitochondrial dysfunction as the top canonical pathway in a very aggressive GBM phenotype. Thus, imaging-genomic analyses may prove invaluable in detecting novel targetable genomic pathways.
Glioblastoma: Imaging Genomic Mapping Reveals Sex-specific Oncogenic Associations of Cell Death
Colen, Rivka R
Wang, Jixin
Singh, Sanjay K
Gutman, David A
Zinn, Pascal O
Radiology2014Journal Article, cited 36 times
Website
TCGA-GBM
Radiogenomics
PURPOSE: To identify the molecular profiles of cell death as defined by necrosis volumes at magnetic resonance (MR) imaging and uncover sex-specific molecular signatures potentially driving oncogenesis and cell death in glioblastoma (GBM). MATERIALS AND METHODS: This retrospective study was HIPAA compliant and had institutional review board approval, with waiver of the need to obtain informed consent. The molecular profiles for 99 patients (30 female patients, 69 male patients) were identified from the Cancer Genome Atlas, and quantitative MR imaging data were obtained from the Cancer Imaging Archive. Volumes of necrosis at MR imaging were extracted. Differential gene expression profiles were obtained in those patients (including male and female patients separately) with high versus low MR imaging volumes of tumor necrosis. Ingenuity Pathway Analysis was used for messenger RNA-microRNA interaction analysis. A histopathologic data set (n = 368; 144 female patients, 224 male patients) was used to validate the MR imaging findings by assessing the amount of cell death. A connectivity map was used to identify therapeutic agents potentially targeting sex-specific cell death in GBM. RESULTS: Female patients showed significantly lower volumes of necrosis at MR imaging than male patients (6821 vs 11,050 mm^3, P = .03). Female patients, unlike male patients, with high volumes of necrosis at imaging had significantly shorter survival (6.5 vs 14.5 months, P = .01). Transcription factor analysis suggested that cell death in female patients with GBM is associated with MYC, while that in male patients is associated with TP53 activity. Additionally, a group of therapeutic agents that can potentially be tested to target cell death in a sex-specific manner was identified. CONCLUSION: The results of this study suggest that cell death in GBM may be driven by sex-specific molecular pathways.
DR-Unet104 for Multimodal MRI Brain Tumor Segmentation
In this paper we propose a 2D deep residual Unet with 104 convolutional layers (DR-Unet104) for lesion segmentation in brain MRIs. We make multiple additions to the Unet architecture, including adding the 'bottleneck' residual block to the Unet encoder and adding dropout after each convolution block stack. We verified the effect of including dropout regularization with a small rate (e.g. 0.2) on the architecture, and found that a dropout of 0.2 improved the overall performance compared to no dropout or a dropout of 0.5. We evaluated the proposed architecture as part of the Multimodal Brain Tumor Segmentation (BraTS) 2020 Challenge and compared our method to DeepLabV3+ with a ResNet-V2-152 backbone. We found that DR-Unet104 achieved mean dice score coefficients of 0.8862, 0.6756 and 0.6721 on the validation data for whole tumor, enhancing tumor and tumor core respectively, an overall improvement over the 0.8770, 0.65242 and 0.68134 achieved by DeepLabV3+. Our method produced a final mean DSC of 0.8673, 0.7514 and 0.7983 on whole tumor, enhancing tumor and tumor core on the challenge's testing data. We produce a competitive lesion segmentation architecture, despite only using 2D convolutions, with the added benefit that it can be used on lower-power computers than a 3D architecture. The source code and trained model for this work are openly available at https://github.com/jordan-colman/DR-Unet104.
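A sketch of the two architectural ideas named above, a 'bottleneck' residual block with dropout after the convolution stack (PyTorch; channel counts are illustrative, not the full 104-layer network):

```python
# Bottleneck residual block with small-rate dropout, as in the encoder.
import torch
import torch.nn as nn

class BottleneckBlock(nn.Module):
    def __init__(self, ch, p_drop=0.2):
        super().__init__()
        mid = ch // 4                          # bottleneck width
        self.body = nn.Sequential(
            nn.Conv2d(ch, mid, 1), nn.BatchNorm2d(mid), nn.ReLU(),
            nn.Conv2d(mid, mid, 3, padding=1), nn.BatchNorm2d(mid), nn.ReLU(),
            nn.Conv2d(mid, ch, 1), nn.BatchNorm2d(ch),
        )
        self.drop = nn.Dropout2d(p_drop)       # dropout after the stack

    def forward(self, x):
        return self.drop(torch.relu(x + self.body(x)))   # residual add

out = BottleneckBlock(64)(torch.randn(1, 64, 48, 48))
```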
Early prediction of neoadjuvant chemotherapy response by exploiting a transfer learning approach on breast DCE-MRIs
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI)
Radiomics
Radiogenomics
BREAST
Machine Learning
Magnetic Resonance Imaging (MRI)
Radiography
Support Vector Machine (SVM)
Dynamic contrast-enhanced MR imaging plays a crucial role in evaluating the effectiveness of neoadjuvant chemotherapy (NAC), even at an early stage, through the prediction of the final pathological complete response (pCR). In this study, we proposed a transfer learning approach to predict whether a patient achieved pCR or did not (non-pCR) by exploiting, separately or in combination, pre-treatment and early-treatment exams from the I-SPY1 TRIAL public database. First, low-level features, i.e., those related to the local structure of the image, were automatically extracted by a pre-trained convolutional neural network (CNN), avoiding manual feature extraction. Next, an optimal set of the most stable features was selected and then used to design an SVM classifier. A first subset of patients, called the fine-tuning dataset (30 pCR; 78 non-pCR), was used to perform the optimal choice of features. A second subset not involved in the feature selection process was employed as an independent test (7 pCR; 19 non-pCR) to validate the model. By combining the optimal features extracted from both pre-treatment and early-treatment exams with some clinical features, i.e., ER, PgR, HER2 and molecular subtype, an accuracy of 91.4% and 92.3%, and an AUC value of 0.93 and 0.90, were obtained on the fine-tuning dataset and the independent test, respectively. Overall, the low-level CNN features have an important role in the early evaluation of NAC efficacy by predicting pCR. The proposed model represents a first effort towards the development of a clinical support tool for early prediction of pCR to NAC.
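The transfer-learning pipeline described above (pre-trained CNN features feeding an SVM) can be sketched as follows; the ResNet-18 backbone, pooling choice, and hyperparameters are assumptions for illustration, not the authors' exact setup.

```python
# Hedged sketch: extract features from a pre-trained CNN, then fit an SVM
# to classify pCR vs non-pCR. Backbone and parameters are assumptions.
import torch
import torchvision.models as models
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1])  # drop fc
feature_extractor.eval()

def extract_features(batch):  # batch: (N, 3, H, W) tensor of MRI slices
    with torch.no_grad():
        return feature_extractor(batch).flatten(1).numpy()

# Placeholders standing in for I-SPY1 exams: X = CNN features, y = 1 (pCR) / 0 (non-pCR).
X_train = extract_features(torch.randn(8, 3, 224, 224))
y_train = [0, 1, 0, 0, 1, 0, 1, 0]
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
clf.fit(X_train, y_train)
```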
A Framework for Customizable FPGA-based Image Registration Accelerators
Image Registration is a highly compute-intensive optimization procedure that determines the geometric transformation to align a floating image to a reference one. Generally, the registration targets are images taken from different time instances, acquisition angles, and/or sensor types. Several methodologies are employed in the literature to address the limiting factors of this class of algorithms, among which hardware accelerators seem the most promising solution to boost performance. However, most hardware implementations are either closed-source or tailored to a specific context, limiting their application to different fields. For these reasons, we propose an open-source hardware-software framework to generate a configurable architecture for the most compute-intensive part of registration algorithms, namely the similarity metric computation. This metric is Mutual Information, a well-known quantity from information theory used in several optimization procedures. Through different design-parameter configurations, we explore several design choices of our highly-customizable architecture and validate it on multiple FPGAs. We evaluated various architectures against an optimized Matlab implementation on an Intel Xeon Gold, reaching a speedup of up to 2.86x, and remarkable performance and power efficiency against other state-of-the-art approaches.
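The accelerated similarity metric, Mutual Information, is MI(A, B) = Σ p(a,b) log[p(a,b)/(p(a)p(b))] over a joint intensity histogram. A reference NumPy implementation makes the computation concrete; the paper's FPGA architecture computes this same quantity in hardware.

```python
# Reference (software) computation of Mutual Information from a joint histogram.
import numpy as np

def mutual_information(fixed, moving, bins=64):
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = joint / joint.sum()            # joint probability p(a, b)
    px = pxy.sum(axis=1, keepdims=True)  # marginal p(a)
    py = pxy.sum(axis=0, keepdims=True)  # marginal p(b)
    nz = pxy > 0                         # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

a = np.random.rand(128, 128)
print(mutual_information(a, a))          # MI of an image with itself is maximal
```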
Colour adaptive generative networks for stain normalisation of histopathology images
Cong, C.
Liu, S.
Di Ieva, A.
Pagnucco, M.
Berkovsky, S.
Song, Y.
Med Image Anal2022Journal Article, cited 0 times
Website
TCGA-GBM
TCGA-LGG
Pathomics
Image color analysis
BRAIN
BREAST
Humans
Hematoxylin
Eosine Yellowish-(YS)
*Coloring Agents
Contrast enhancement
Stain normalisation
*Image Processing
Computer-Assisted/methods
Digital pathology
Generative Adversarial Network (GAN)
Semi-supervised learning
Deep learning has shown its effectiveness in histopathology image analysis, such as pathology detection and classification. However, stain colour variation in Hematoxylin and Eosin (H&E) stained histopathology images poses challenges in effectively training deep learning-based algorithms. To alleviate this problem, stain normalisation methods have been proposed, with most of the recent methods utilising generative adversarial networks (GAN). However, these methods are either trained fully with paired images from the target domain (supervised) or with unpaired images (unsupervised), suffering from either large discrepancy between domains or risks of undertrained/overfitted models when only the target domain images are used for training. In this paper, we introduce a colour adaptive generative network (CAGAN) for stain normalisation which combines both supervised learning from target domain and unsupervised learning from source domain. Specifically, we propose a dual-decoder generator and force consistency between their outputs thus introducing extra supervision which benefits from extra training with source domain images. Moreover, our model is immutable to stain colour variations due to the use of stain colour augmentation. We further implement histogram loss to ensure the processed images are coloured with the target domain colours regardless of their content differences. Extensive experiments on four public histopathology image datasets including TCGA-IDH, CAMELYON16, CAMELYON17 and BreakHis demonstrate that our proposed method produces high quality stain normalised images which improve the performance of benchmark algorithms by 5% to 10% compared to baselines not using normalisation.
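The stain colour augmentation that makes the model robust to stain variation can be sketched as follows; perturbing the channels of a Haematoxylin-Eosin-DAB colour deconvolution is a common recipe and is an assumption here, not necessarily the authors' exact implementation.

```python
# Hedged sketch of stain colour augmentation via HED colour deconvolution.
import numpy as np
from skimage.color import rgb2hed, hed2rgb

def stain_colour_augment(rgb, sigma=0.03, bias=0.02, rng=None):
    rng = rng or np.random.default_rng()
    hed = rgb2hed(rgb)                                 # separate stain channels
    alpha = rng.uniform(1 - sigma, 1 + sigma, size=3)  # per-stain scale
    beta = rng.uniform(-bias, bias, size=3)            # per-stain shift
    return np.clip(hed2rgb(hed * alpha + beta), 0, 1)

tile = np.random.rand(256, 256, 3)                     # stand-in for an H&E tile
augmented = stain_colour_augment(tile)
```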
The exceptional responders initiative: feasibility of a National Cancer Institute pilot study
Conley, Barbara A
Staudt, Lou
Takebe, Naoko
Wheeler, David A
Wang, Linghua
Cardenas, Maria F
Korchina, Viktoriya
Zenklusen, Jean Claude
McShane, Lisa M
Tricoli, James V
JNCI: Journal of the National Cancer Institute2021Journal Article, cited 5 times
Website
Exceptional Responders
Evaluation of Semiautomatic and Deep Learning-Based Fully Automatic Segmentation Methods on [(18)F]FDG PET/CT Images from Patients with Lymphoma: Influence on Tumor Characterization
Constantino, C. S.
Leocadio, S.
Oliveira, F. P. M.
Silva, M.
Oliveira, C.
Castanheira, J. C.
Silva, A.
Vaz, S.
Teixeira, R.
Neves, M.
Lucio, P.
Joao, C.
Costa, D. C.
J Digit Imaging2023Journal Article, cited 0 times
Website
FDG-PET-CT-Lesions
AutoPET
Artificial intelligence
Computer-assisted image analysis
Lymphoma
Reproducibility of results
[18f]fdg pet/ct
Semi-automatic segmentation
Automatic segmentation
The objective is to assess the performance of seven semiautomatic and two fully automatic segmentation methods on [(18)F]FDG PET/CT lymphoma images and evaluate their influence on tumor quantification. All lymphoma lesions identified in 65 whole-body [(18)F]FDG PET/CT staging images were segmented by two experienced observers using manual and semiautomatic methods. Semiautomatic segmentation using absolute and relative thresholds, k-means and Bayesian clustering, and a self-adaptive configuration (SAC) of k-means and Bayesian was applied. Three state-of-the-art deep learning-based segmentation methods using a 3D U-Net architecture were also applied. One was semiautomatic and two were fully automatic, of which one is publicly available. Dice coefficient (DC) measured segmentation overlap, considering manual segmentation as the ground truth. Lymphoma lesions were characterized by 31 features. Intraclass correlation coefficient (ICC) assessed feature agreement between different segmentation methods. Nine hundred twenty [(18)F]FDG-avid lesions were identified. The SAC Bayesian method achieved the highest median intra-observer DC (0.87). Inter-observers' DC was higher for SAC Bayesian than manual segmentation (0.94 vs 0.84, p < 0.001). Semiautomatic deep learning-based median DC was promising (0.83 (Obs1), 0.79 (Obs2)). Threshold-based methods and the publicly available 3D U-Net gave poorer results (0.56 ≤ DC ≤ 0.68). Maximum, mean, and peak standardized uptake values, metabolic tumor volume, and total lesion glycolysis showed excellent agreement (ICC ≥ 0.92) between manual and SAC Bayesian segmentation methods. The SAC Bayesian classifier is more reproducible and produces similar lesion features compared to manual segmentation, giving the best concordant results of all other methods. Deep learning-based segmentation can achieve overall good segmentation results but failed in a few patients, impacting their clinical evaluation.
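The overlap metric used throughout this study is the Dice coefficient, DC = 2|A ∩ B| / (|A| + |B|). A minimal NumPy version for binary lesion masks:

```python
# Dice coefficient between two binary segmentation masks.
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

manual = np.zeros((64, 64), bool); manual[20:40, 20:40] = True  # ground truth
auto = np.zeros((64, 64), bool);  auto[22:42, 22:42] = True     # e.g. SAC Bayesian output
print(round(dice(manual, auto), 3))
```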
Background: Matrix factorization is a well established pattern discovery tool that has seen numerous applications in biomedical data analytics, such as gene expression co-clustering, patient stratification, and gene-disease association mining. Matrix factorization learns a latent data model that takes a data matrix and transforms it into a latent feature space enabling generalization, noise removal and feature discovery. However, factorization algorithms are numerically intensive, and hence there is a pressing challenge to scale current algorithms to work with large datasets. Our focus in this paper is matrix tri-factorization, a popular method that is not limited by the assumption of standard matrix factorization about data residing in one latent space. Matrix tri-factorization solves this by inferring a separate latent space for each dimension in a data matrix, and a latent mapping of interactions between the inferred spaces, making the approach particularly suitable for biomedical data mining. Results: We developed a block-wise approach for latent factor learning in matrix tri-factorization. The approach partitions a data matrix into disjoint submatrices that are treated independently and fed into a parallel factorization system. An appealing property of the proposed approach is its mathematical equivalence with serial matrix tri-factorization. In a study on large biomedical datasets we show that our approach scales well on multi-processor and multi-GPU architectures. On a four-GPU system we demonstrate that our approach can be more than 100 times faster than its single-processor counterpart. Conclusions: A general approach for scaling non-negative matrix tri-factorization is proposed. The approach is especially useful for parallel matrix factorization implemented in a multi-GPU environment. We expect the new approach will be useful in emerging procedures for latent factor analysis, notably for data integration, where many large data matrices need to be collectively factorized.
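For reference, non-negative matrix tri-factorization decomposes X ≈ U S Vᵀ. A serial NumPy sketch with standard multiplicative updates (in the style of Ding et al.) is shown below; the paper's contribution is a block-wise, parallel variant that is mathematically equivalent to this serial form.

```python
# Serial non-negative matrix tri-factorization, X ≈ U S Vᵀ, via multiplicative
# updates. Rank choices and iteration count are illustrative.
import numpy as np

def nmtf(X, k1, k2, iters=200, eps=1e-9, rng=None):
    rng = rng or np.random.default_rng(0)
    n, m = X.shape
    U = rng.random((n, k1)); S = rng.random((k1, k2)); V = rng.random((m, k2))
    for _ in range(iters):
        U *= (X @ V @ S.T) / (U @ (S @ V.T @ V @ S.T) + eps)
        V *= (X.T @ U @ S) / (V @ (S.T @ U.T @ U @ S) + eps)
        S *= (U.T @ X @ V) / (U.T @ U @ S @ V.T @ V + eps)
    return U, S, V

X = np.abs(np.random.default_rng(1).random((100, 80)))
U, S, V = nmtf(X, k1=5, k2=4)
print(np.linalg.norm(X - U @ S @ V.T) / np.linalg.norm(X))  # relative error
```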
Combined Megavoltage and Contrast-Enhanced Radiotherapy as an Intrafraction Motion Management Strategy in Lung SBRT
Coronado-Delgado, Daniel A
Garnica-Garza, Hector M
Technol Cancer Res Treat2019Journal Article, cited 0 times
Website
4D-Lung
Using Monte Carlo simulation and a realistic patient model, it is shown that the volume of healthy tissue irradiated at therapeutic doses can be drastically reduced using a combination of standard megavoltage and kilovoltage X-ray beams with a contrast agent previously loaded into the tumor, without the need to reduce standard treatment margins. Four-dimensional computed tomography images of 2 patients with a centrally located and a peripherally located tumor were obtained from a public database and subsequently used to plan robotic stereotactic body radiotherapy treatments. Two modalities are assumed: conventional high-energy stereotactic body radiotherapy and a treatment with contrast agent loaded in the tumor and a kilovoltage X-ray beam replacing the megavoltage beam (contrast-enhanced radiotherapy). For each patient model, 2 planning target volumes were designed: one following the recommendations from either Radiation Therapy Oncology Group (RTOG) 0813 or RTOG 0915 task group depending on the patient model and another with a 2-mm uniform margin determined solely on beam penumbra considerations. The optimized treatments with RTOG margins were imparted to the moving phantom to model the dose distribution that would be obtained as a result of intrafraction motion. Treatment plans are then compared to the plan with the 2-mm uniform margin considered to be the ideal plan. It is shown that even for treatments in which only one-fifth of the total dose is imparted via the contrast-enhanced radiotherapy modality and with the use of standard treatment margins, the resultant absorbed dose distributions are such that the volume of healthy tissue irradiated to high doses is close to what is obtained under ideal conditions.
Digital twin driven electrode optimization for wearable bladder monitoring via bioimpedance
Monitoring fluid intake and output for congestive heart failure (CHF) patients is an essential tool to prevent fluid overload, a principal cause of hospital admissions. Addressing this, bladder volume measurement systems utilizing bioimpedance and electrical impedance tomography have been proposed, with limited exploration of continuous monitoring within a wearable design. Advancing this format, we developed a conductivity digital twin from radiological data, where we performed exhaustive simulations to optimize electrode sensitivity on an individual basis. Our optimized placement demonstrated an efficient proof-of-concept volume estimation that required as few as seven measurement frames while maintaining low errors (CI 95% −1.11% to 1.00%) for volumes ≥100 mL. Additionally, we quantify the impact of ascites, a common confounding condition in CHF, on the bioimpedance signal. By improving monitoring technology, we aim to reduce CHF mortality by empowering patients and clinicians with a more thorough understanding of fluid status.
Bayesian Kernel Models for Statistical Genetics and Cancer Genomics
Crawford, Lorin
2017Thesis, cited 0 times
Thesis
Segmentation
Radiogenomics
Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries; 5th International Workshop, BrainLes 2019, Held in Conjunction with MICCAI 2019, Shenzhen, China, October 17, 2019, Revised Selected Papers, Part I
The two-volume set LNCS 11992 and 11993 constitutes the thoroughly refereed proceedings of the 5th International MICCAI Brainlesion Workshop, BrainLes 2019, the International Multimodal Brain Tumor Segmentation (BraTS) challenge, the Computational Precision Medicine: Radiology-Pathology Challenge on Brain Tumor Classification (CPM-RadPath) challenge, as well as the tutorial session on Tools Allowing Clinical Translation of Image Computing Algorithms (TACTICAL). These were held jointly at the Medical Image Computing for Computer Assisted Intervention Conference, MICCAI, in Shenzhen, China, in October 2019. The revised selected papers presented in these volumes were organized in the following topical sections: brain lesion image analysis (12 selected papers from 32 submissions); brain tumor image segmentation (57 selected papers from 102 submissions); combined MRI and pathology brain tumor classification (4 selected papers from 5 submissions); tools allowing clinical translation of image computing algorithms (2 selected papers from 3 submissions).
Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries; 6th International Workshop, BrainLes 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 4, 2020, Revised Selected Papers, Part II
A comprehensive lung CT landmark pair dataset for evaluating deformable image registration algorithms
Criscuolo, E. R.
Fu, Y.
Hao, Y.
Zhang, Z.
Yang, D.
Med Phys2024Journal Article, cited 0 times
TCGA-LUAD
TCGA-LUSC
Image Registration
Computed Tomography (CT)
deformable image registration
Motion correction
Algorithm Development
ground truth dataset
lung motion
PURPOSE: Deformable image registration (DIR) is a key enabling technology in many diagnostic and therapeutic tasks, but often does not meet the required robustness and accuracy for supporting clinical tasks. This is in large part due to a lack of high-quality benchmark datasets by which new DIR algorithms can be evaluated. Our team was supported by the National Institute of Biomedical Imaging and Bioengineering to develop DIR benchmark dataset libraries for multiple anatomical sites, comprising large numbers of highly accurate landmark pairs on matching blood vessel bifurcations. Here we introduce our lung CT DIR benchmark dataset library, which was developed to improve upon the number and distribution of landmark pairs in current public lung CT benchmark datasets. ACQUISITION AND VALIDATION METHODS: Thirty CT image pairs were acquired from several publicly available repositories as well as the authors' institution with IRB approval. The data processing workflow included multiple steps: (1) The images were denoised. (2) Lungs, airways, and blood vessels were automatically segmented. (3) Bifurcations were directly detected on the skeleton of the segmented vessel tree. (4) Falsely identified bifurcations were filtered out using manually defined rules. (5) A DIR was used to project landmarks detected on the first image onto the second image of the image pair to form landmark pairs. (6) Landmark pairs were manually verified. This workflow resulted in an average of 1262 landmark pairs per image pair. Estimates of the landmark pair target registration error (TRE) using digital phantoms were 0.4 ± 0.3 mm. DATA FORMAT AND USAGE NOTES: The data are published on Zenodo at https://doi.org/10.5281/zenodo.8200423. Instructions for use can be found at https://github.com/deshanyang/Lung-DIR-QA. POTENTIAL APPLICATIONS: The dataset library generated in this work is the largest of its kind to date and will provide researchers with a new and improved set of ground truth benchmarks for quantitatively validating DIR algorithms within the lung.
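The benchmark's accuracy measure, target registration error (TRE), is simply the distance between a landmark propagated by the DIR under test and its verified pair on the second image. A minimal NumPy sketch (voxel spacing values are illustrative):

```python
# Target registration error (TRE) between warped and reference landmark sets.
import numpy as np

def tre_mm(warped_landmarks, reference_landmarks, voxel_spacing=(1.0, 1.0, 1.0)):
    # landmarks: (N, 3) arrays in voxel coordinates; spacing converts to mm
    diff = (warped_landmarks - reference_landmarks) * np.asarray(voxel_spacing)
    return np.linalg.norm(diff, axis=1)

warped = np.random.rand(1262, 3) * 100                 # landmarks mapped by a DIR
reference = warped + np.random.normal(0, 0.5, warped.shape)
errors = tre_mm(warped, reference, voxel_spacing=(0.97, 0.97, 2.5))
print(f"TRE: {errors.mean():.2f} ± {errors.std():.2f} mm")
```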
Reconstruction of Mammography Projections using Image-to-Image Translation Techniques
Cristo Santos, Joana
Seoane Santos, Miriam
Henriques Abreu, Pedro
2024Conference Paper, cited 0 times
CBIS-DDSM
Mammography imaging is the gold standard for breast cancer detection and involves capturing two projections: the mediolateral oblique and craniocaudal projections. An approach that allows the acquisition of only one projection and reconstructs the other could mitigate patient burden, minimize radiation exposure, and reduce costs. Image-to-image translation has showcased the ability to generate realistic synthetic images in different medical imaging modalities, which makes these techniques strong candidates for this novel application in mammography. This study compares five image-to-image translation approaches to assess the feasibility of reconstructing a mammography projection from its counterpart. The results indicate that ResViT shows the best overall performance in translating between the two projections.
StaticCodeCT: single coded aperture tensorial X-ray CT
Cuadros, A. P.
Ma, X.
Restrepo, C. M.
Arce, G. R.
Opt Express2021Journal Article, cited 0 times
LDCT-and-Projection-data
Image resampling
Algorithm Development
Coded aperture X-ray CT (CAXCT) is a new low-dose imaging technology that promises far-reaching benefits in industrial and clinical applications. It places various coded apertures (CA) at a time in front of the X-ray source to partially block the radiation. The ill-posed inverse reconstruction problem is then solved using l1-norm-based iterative reconstruction methods. Unfortunately, to attain high-quality reconstructions, the CA patterns must change in concert with the view-angles making the implementation impractical. This paper proposes a simple yet radically different approach to CAXCT, which is coined StaticCodeCT, that uses a single-static CA in the CT gantry, thus making the imaging system amenable for practical implementations. Rather than using conventional compressed sensing algorithms for recovery, we introduce a new reconstruction framework for StaticCodeCT. Namely, we synthesize the missing measurements using low-rank tensor completion principles that exploit the multi-dimensional data correlation and low-rank nature of a 3-way tensor formed by stacking the 2D coded CT projections. Then, we use the FDK algorithm to recover the 3D object. Computational experiments using experimental projection measurements exhibit up to 10% gains in the normalized root mean square distance of the reconstruction using the proposed method compared with those attained by alternative low-dose systems.
Survival Prediction of Brain Cancer with Incomplete Radiology, Pathology, Genomic, and Demographic Data
Cui, Can
Liu, Han
Liu, Quan
Deng, Ruining
Asad, Zuhayr
Wang, Yaohong
Zhao, Shilin
Yang, Haichun
Landman, Bennett A.
Huo, Yuankai
2022Book Section, cited 0 times
TCGA-GBM
TCGA-LGG
Integrating cross-department multi-modal data (e.g., radiology, pathology, genomic, and demographic data) is ubiquitous in brain cancer diagnosis and survival prediction. To date, such an integration is typically conducted by human physicians (and panels of experts), which can be subjective and semi-quantitative. Recent advances in multi-modal deep learning, however, have opened a door to leverage such a process in a more objective and quantitative manner. Unfortunately, the prior arts of using four modalities on brain cancer survival prediction are limited by a “complete modalities” setting (i.e., with all modalities available). Thus, there are still open questions on how to effectively predict brain cancer survival from incomplete radiology, pathology, genomic, and demographic data (e.g., one or more modalities might not be collected for a patient). For instance, should we use both complete and incomplete data, and more importantly, how do we use such data? To answer the preceding questions, we generalize the multi-modal learning on cross-department multi-modal data to a missing data setting. Our contribution is three-fold: 1) We introduce a multi-modal learning with missing data (MMD) pipeline with competitive performance and less hardware consumption; 2) We extend multi-modal learning on radiology, pathology, genomic, and demographic data into missing data scenarios; 3) A large-scale public dataset (with 962 patients) is collected to systematically evaluate glioma tumor survival prediction using four modalities. The proposed method improved the C-index of survival prediction from 0.7624 to 0.8053.
Predicting the ISUP grade of clear cell renal cell carcinoma with multiparametric MR and multiphase CT radiomics
Cui, Enming
Li, Zhuoyong
Ma, Changyi
Li, Qing
Lei, Yi
Lan, Yong
Yu, Juan
Zhou, Zhipeng
Li, Ronggang
Long, Wansheng
Lin, Fan
Eur Radiol2020Journal Article, cited 0 times
Website
Clear cell renal cell carcinoma (ccRCC)
Machine Learning
Radiomics
TCGA-KIRC
OBJECTIVE: To investigate externally validated magnetic resonance (MR)-based and computed tomography (CT)-based machine learning (ML) models for grading clear cell renal cell carcinoma (ccRCC). MATERIALS AND METHODS: Patients with pathologically proven ccRCC in 2009-2018 were retrospectively included for model development and internal validation; patients from another independent institution and The Cancer Imaging Archive dataset were included for external validation. Features were extracted from T1-weighted, T2-weighted, corticomedullary-phase (CMP), and nephrographic-phase (NP) MR as well as precontrast-phase (PCP), CMP, and NP CT. CatBoost was used for ML-model investigation. The reproducibility of texture features was assessed using intraclass correlation coefficient (ICC). Accuracy (ACC) was used for ML-model performance evaluation. RESULTS: Twenty external and 440 internal cases were included. Among 368 and 276 texture features from MR and CT, 322 and 250 features with good to excellent reproducibility (ICC ≥ 0.75) were included for ML-model development. The best MR- and CT-based ML models satisfactorily distinguished high- from low-grade ccRCCs in internal (MR-ACC = 73% and CT-ACC = 79%) and external (MR-ACC = 74% and CT-ACC = 69%) validation. Compared to single-sequence or single-phase images, the classifiers based on all-sequence MR (71% to 73% in internal and 64% to 74% in external validation) and all-phase CT (77% to 79% in internal and 61% to 69% in external validation) images had significant increases in ACC. CONCLUSIONS: MR- and CT-based ML models are valuable noninvasive techniques for discriminating high- from low-grade ccRCCs, and multiparameter MR- and multiphase CT-based classifiers are potentially superior to those based on single-sequence or single-phase imaging. KEY POINTS: * Both the MR- and CT-based machine learning models are reliable predictors for differentiating high- from low-grade ccRCCs. * ML models based on multiparameter MR sequences and multiphase CT images potentially outperform those based on single-sequence or single-phase images in ccRCC grading.
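The modelling recipe described above (keep only features with ICC ≥ 0.75, then fit a CatBoost classifier) can be sketched as follows; the ICC values would come from repeated segmentations, and all data and hyperparameters here are placeholders, not the authors' settings.

```python
# Sketch: reproducibility filter (ICC >= 0.75) followed by a CatBoost classifier
# for high- vs low-grade ccRCC. Data and parameters are placeholders.
import numpy as np
from catboost import CatBoostClassifier

rng = np.random.default_rng(0)
X = rng.random((440, 368))                 # radiomic features (internal set size)
y = rng.integers(0, 2, 440)                # 1 = high grade, 0 = low grade
icc = rng.random(368)                      # per-feature ICC (placeholder values)

X_stable = X[:, icc >= 0.75]               # keep good-to-excellent features only
model = CatBoostClassifier(iterations=300, depth=4, verbose=False)
model.fit(X_stable, y)
print(f"training ACC: {model.score(X_stable, y):.2f}")
```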
Primary lung tumor segmentation from PET–CT volumes with spatial–topological constraint
Cui, Hui
Wang, Xiuying
Lin, Weiran
Zhou, Jianlong
Eberl, Stefan
Feng, Dagan
Fulham, Michael
International Journal of Computer Assisted Radiology and Surgery2016Journal Article, cited 14 times
Website
RIDER Phantom PET–CT
LUNG
Performance of a deep learning-based lung nodule detection system as an alternative reader in a Chinese lung cancer screening program
Cui, X.
Zheng, S.
Heuvelmans, M. A.
Du, Y.
Sidorenkov, G.
Fan, S.
Li, Y.
Xie, Y.
Zhu, Z.
Dorrius, M. D.
Zhao, Y.
Veldhuis, R. N. J.
de Bock, G. H.
Oudkerk, M.
van Ooijen, P. M. A.
Vliegenthart, R.
Ye, Z.
Eur J Radiol2022Journal Article, cited 0 times
Website
LIDC-IDRI
National Lung Screening Trial (NLST)
*Deep Learning
Early Detection of Cancer
Humans
Radiomics
Reproducibility of Results
Sensitivity and Specificity
*Solitary Pulmonary Nodule/diagnostic imaging
Tomography
X-Ray Computed
Artificial intelligence
Computed Tomography (CT)
Computer Aided Diagnosis (CADx)
Pulmonary nodules
OBJECTIVE: To evaluate the performance of a deep learning-based computer-aided detection (DL-CAD) system in a Chinese low-dose CT (LDCT) lung cancer screening program. MATERIALS AND METHODS: One-hundred-and-eighty individuals with a lung nodule on their baseline LDCT lung cancer screening scan were randomly mixed with screenees without nodules in a 1:1 ratio (total: 360 individuals). All scans were assessed by double reading and subsequently processed by an academic DL-CAD system. The findings of double reading and the DL-CAD system were then evaluated by two senior radiologists to derive the reference standard. The detection performance was evaluated by the Free Response Operating Characteristic curve, sensitivity and false-positive (FP) rate. The senior radiologists categorized nodules according to nodule diameter, type (solid, part-solid, non-solid) and Lung-RADS. RESULTS: The reference standard consisted of 262 nodules ≥ 4 mm in 196 individuals; 359 findings were considered false positives. The DL-CAD system achieved a sensitivity of 90.1% with 1.0 FP/scan for detection of lung nodules regardless of size or type, whereas double reading had a sensitivity of 76.0% with 0.04 FP/scan (P = 0.001). The sensitivity for detection of nodules ≥ 4 to ≤ 6 mm was significantly higher with DL-CAD than with double reading (86.3% vs. 58.9% respectively; P = 0.001). Sixty-three nodules were only identified by the DL-CAD system, and 27 nodules only found by double reading. The DL-CAD system reached similar performance compared to double reading in Lung-RADS 3 (94.3% vs. 90.0%, P = 0.549) and Lung-RADS 4 nodules (100.0% vs. 97.0%, P = 1.000), but showed a higher sensitivity in Lung-RADS 2 (86.2% vs. 65.4%, P < 0.001). CONCLUSIONS: The DL-CAD system can accurately detect pulmonary nodules on LDCT, with an acceptable false-positive rate of 1 nodule per scan and has higher detection performance than double reading. This DL-CAD system may assist radiologists in nodule detection in LDCT lung cancer screening.
Volume of high-risk intratumoral subregions at multi-parametric MR imaging predicts overall survival and complements molecular analysis of glioblastoma
Cui, Yi
Ren, Shangjie
Tha, Khin Khin
Wu, Jia
Shirato, Hiroki
Li, Ruijiang
European Radiology2017Journal Article, cited 10 times
Website
GBM
Prognostic Imaging Biomarkers in Glioblastoma: Development and Independent Validation on the Basis of Multiregion and Quantitative Analysis of MR Images
Cui, Yi
Tha, Khin Khin
Terasaka, Shunsuke
Yamaguchi, Shigeru
Wang, Jeff
Kudo, Kohsuke
Xing, Lei
Shirato, Hiroki
Li, Ruijiang
Radiology2015Journal Article, cited 45 times
Website
TCGA-GBM
Computer Aided Diagnosis (CADx)
Segmentation
PURPOSE: To develop and independently validate prognostic imaging biomarkers for predicting survival in patients with glioblastoma on the basis of multiregion quantitative image analysis. MATERIALS AND METHODS: This retrospective study was approved by the local institutional review board, and informed consent was waived. A total of 79 patients from two independent cohorts were included. The discovery and validation cohorts consisted of 46 and 33 patients with glioblastoma from the Cancer Imaging Archive (TCIA) and the local institution, respectively. Preoperative T1-weighted contrast material-enhanced and T2-weighted fluid-attenuation inversion recovery magnetic resonance (MR) images were analyzed. For each patient, we semiautomatically delineated the tumor and performed automated intratumor segmentation, dividing the tumor into spatially distinct subregions that demonstrate coherent intensity patterns across multiparametric MR imaging. Within each subregion and for the entire tumor, we extracted quantitative imaging features, including those that fully capture the differential contrast of multimodality MR imaging. A multivariate sparse Cox regression model was trained by using TCIA data and tested on the validation cohort. RESULTS: The optimal prognostic model identified five imaging biomarkers that quantified tumor surface area and intensity distributions of the tumor and its subregions. In the validation cohort, our prognostic model achieved a concordance index of 0.67 and significant stratification of overall survival by using the log-rank test (P = .018), which outperformed conventional prognostic factors, such as age (concordance index, 0.57; P = .389) and tumor volume (concordance index, 0.59; P = .409). CONCLUSION: The multiregion analysis presented here establishes a general strategy to effectively characterize intratumor heterogeneity manifested at multimodality imaging and has the potential to reveal useful prognostic imaging biomarkers in glioblastoma.
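The prognostic model above is a multivariate sparse Cox regression over the extracted imaging features. A hedged sketch with the lifelines library is shown below; an L1-penalised Cox fit stands in for the paper's exact solver, and the feature names and data are invented for illustration.

```python
# Sketch of a sparse (L1-penalised) Cox model over imaging features.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.random((46, 5)),
                  columns=["surface_area", "sub1_mean", "sub2_std",
                           "tumor_mean", "tumor_skew"])     # invented names
df["survival_months"] = rng.exponential(14, 46)
df["event"] = rng.integers(0, 2, 46)                        # 1 = death observed

cph = CoxPHFitter(penalizer=0.5, l1_ratio=1.0)              # lasso-style sparsity
cph.fit(df, duration_col="survival_months", event_col="event")
print(cph.concordance_index_)                               # cf. the reported C-index of 0.67
```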
Deep Learning Whole-Gland and Zonal Prostate Segmentation on a Public MRI Dataset
Cuocolo, Renato
Comelli, Albert
Stefano, Alessandro
Benfante, Viviana
Dahiya, Navdeep
Stanzione, Arnaldo
Castaldo, Anna
De Lucia, Davide Raffaele
Yezzi, Anthony
Imbriaco, Massimo
Journal of Magnetic Resonance Imaging2021Journal Article, cited 0 times
Website
ProstateX
Deep learning
segmentation
Background: Prostate volume, as determined by magnetic resonance imaging (MRI), is a useful biomarker for distinguishing between benign and malignant pathology and can be used either alone or combined with other parameters such as prostate-specific antigen. Purpose: This study compared different deep learning methods for whole-gland and zonal prostate segmentation. Study Type: Retrospective. Population: A total of 204 patients (train/test = 99/105) from the PROSTATEx public dataset. Field Strength/Sequence: 3 T, TSE T2-weighted. Assessment: Four operators performed manual segmentation of the whole gland, central zone + anterior stroma + transition zone (TZ), and peripheral zone (PZ). U-net, efficient neural network (ENet), and efficient residual factorized ConvNet (ERFNet) were trained and tuned on the training data through 5-fold cross-validation to segment the whole gland and TZ separately, while PZ automated masks were obtained by the subtraction of the first two. Statistical Tests: Networks were evaluated on the test set using various accuracy metrics, including the Dice similarity coefficient (DSC). Model DSC was compared in both the training and test sets using the analysis of variance test (ANOVA) and post hoc tests. Parameter number, disk size, training, and inference times determined network computational complexity and were also used to assess the model performance differences. A P < 0.05 was selected to indicate statistical significance. Results: The best DSC (P < 0.05) in the test set was achieved by ENet: 91% ± 4% for the whole gland, 87% ± 5% for the TZ, and 71% ± 8% for the PZ. U-net and ERFNet obtained, respectively, 88% ± 6% and 87% ± 6% for the whole gland, 86% ± 7% and 84% ± 7% for the TZ, and 70% ± 8% and 65% ± 8% for the PZ. Training and inference time were lowest for ENet. Data Conclusion: Deep learning networks can accurately segment the prostate using T2-weighted images. Evidence Level: 4. Technical Efficacy: Stage 2.
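Note that the peripheral-zone masks in this study are not predicted directly: they are derived by subtracting the transition-zone prediction from the whole-gland prediction. With binary NumPy masks that derivation is a one-liner (mask shapes here are illustrative):

```python
# PZ mask = whole-gland mask minus TZ mask, as described in the Assessment section.
import numpy as np

whole_gland = np.zeros((24, 128, 128), bool); whole_gland[:, 30:100, 30:100] = True
tz = np.zeros_like(whole_gland);              tz[:, 45:85, 45:85] = True

pz = np.logical_and(whole_gland, np.logical_not(tz))   # PZ = gland AND NOT TZ
print(pz.sum(), whole_gland.sum() - tz.sum())          # equal whenever TZ lies inside the gland
```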
Quality control and whole-gland, zonal and lesion annotations for the PROSTATEx challenge public dataset
Cuocolo, R.
Stanzione, A.
Castaldo, A.
De Lucia, D. R.
Imbriaco, M.
Eur J Radiol2021Journal Article, cited 0 times
Website
PROSTATEx
Segmentation
Image classification
Machine Learning
PURPOSE: Radiomic features are promising quantitative parameters that can be extracted from medical images and employed to build machine learning predictive models. However, generalizability is a key concern, encouraging the use of public image datasets. We performed a quality assessment of the PROSTATEx training dataset and provide publicly available lesion, whole-gland, and zonal anatomy segmentation masks. METHOD: Two radiology residents and two experienced board-certified radiologists reviewed the 204 prostate MRI scans (330 lesions) included in the training dataset. The quality of provided lesion coordinate was scored using the following scale: 0 = perfectly centered, 1 = within lesion, 2 = within the prostate without lesion, 3 = outside the prostate. All clearly detectable lesions were segmented individually slice-by-slice on T2-weighted and apparent diffusion coefficient images. With the same methodology, volumes of interest including the whole gland, transition, and peripheral zones were annotated. RESULTS: Of the 330 available lesion identifiers, 3 were duplicates (1%). From the remaining, 218 received score = 0, 74 score = 1, 31 score = 2 and 4 score = 3. Overall, 299 lesions were verified and segmented. Independently of lesion coordinate score and other issues (e.g., lesion coordinates falling outside DICOM images, artifacts etc.), the whole prostate gland and zonal anatomy were also manually annotated for all cases. CONCLUSION: While several issues were encountered evaluating the original PROSTATEx dataset, the improved quality and availability of lesion, whole-gland and zonal segmentations will increase its potential utility as a common benchmark in prostate MRI radiomics.
Tumor Transcriptome Reveals High Expression of IL-8 in Non-Small Cell Lung Cancer Patients with Low Pectoralis Muscle Area and Reduced Survival
Cury, Sarah Santiloni
de Moraes, Diogo
Freire, Paula Paccielli
de Oliveira, Grasieli
Marques, Douglas Venancio Pereira
Fernandez, Geysson Javier
Dal-Pai-Silva, Maeli
Hasimoto, Erica Nishida
Dos Reis, Patricia Pintor
Rogatto, Silvia Regina
Carvalho, Robson Francisco
Cancers (Basel)2019Journal Article, cited 1 times
Website
NSCLC-Radiomics-Genomics
Radiogenomics
Cachexia is a syndrome characterized by an ongoing loss of skeletal muscle mass associated with poor patient prognosis in non-small cell lung cancer (NSCLC). However, prognostic cachexia biomarkers in NSCLC are unknown. Here, we analyzed computed tomography (CT) images and tumor transcriptome data to identify potentially secreted cachexia biomarkers (PSCB) in NSCLC patients with low-muscularity. We integrated radiomics features (pectoralis muscle, sternum, and tenth thoracic (T10) vertebra) from CT of 89 NSCLC patients, which allowed us to identify an index for screening muscularity. Next, a tumor transcriptomic-based secretome analysis from these patients (discovery set) was evaluated to identify potential cachexia biomarkers in patients with low-muscularity. The prognostic value of these biomarkers for predicting recurrence and survival outcome was confirmed using expression data from eight lung cancer datasets (validation set). Finally, C2C12 myoblasts differentiated into myotubes were used to evaluate the ability of the selected biomarker, interleukin (IL)-8, in inducing muscle cell atrophy. We identified 75 over-expressed transcripts in patients with low-muscularity, which included IL-6, CSF3, and IL-8. Also, we identified NCAM1, CNTN1, SCG2, CADM1, IL-8, NPTX1, and APOD as PSCB in the tumor secretome. These PSCB were capable of distinguishing worse and better prognosis (recurrence and survival) in NSCLC patients. IL-8 was confirmed as a predictor of worse prognosis in all validation sets. In vitro assays revealed that IL-8 promoted C2C12 myotube atrophy. Tumors from low-muscularity patients presented a set of upregulated genes encoding for secreted proteins, including pro-inflammatory cytokines that predict worse overall survival in NSCLC. Among these upregulated genes, IL-8 expression in NSCLC tissues was associated with worse prognosis, and the recombinant IL-8 was capable of triggering atrophy in C2C12 myotubes.
Algorithmic three-dimensional analysis of tumor shape in MRI improves prognosis of survival in glioblastoma: a multi-institutional study
Czarnek, Nicholas
Clark, Kal
Peters, Katherine B
Mazurowski, Maciej A
Journal of Neuro-Oncology2017Journal Article, cited 15 times
Website
TCGA-GBM
Radiomics
BRAIN
Glioblastoma Multiforme (GBM)
In this retrospective, IRB-exempt study, we analyzed data from 68 patients diagnosed with glioblastoma (GBM) in two institutions and investigated the relationship between tumor shape, quantified using algorithmic analysis of magnetic resonance images, and survival. Each patient's Fluid Attenuated Inversion Recovery (FLAIR) abnormality and enhancing tumor were manually delineated, and tumor shape was analyzed by automatic computer algorithms. Five features were automatically extracted from the images to quantify the extent of irregularity in tumor shape in two and three dimensions. Univariate Cox proportional hazard regression analysis was performed to determine how prognostic each feature was of survival. Kaplan Meier analysis was performed to illustrate the prognostic value of each feature. To determine whether the proposed quantitative shape features have additional prognostic value compared with standard clinical features, we controlled for tumor volume, patient age, and Karnofsky Performance Score (KPS). The FLAIR-based bounding ellipsoid volume ratio (BEVR), a 3D complexity measure, was strongly prognostic of survival, with a hazard ratio of 0.36 (95% CI 0.20-0.65), and remained significant in regression analysis after controlling for other clinical factors (P = 0.0061). Three enhancing-tumor based shape features were prognostic of survival independently of clinical factors: BEVR (P = 0.0008), margin fluctuation (P = 0.0013), and angular standard deviation (P = 0.0078). Algorithmically assessed tumor shape is statistically significantly prognostic of survival for patients with GBM independently of patient age, KPS, and tumor volume. This shows promise for extending the utility of MR imaging in treatment of GBM patients.
Radiogenomics of glioblastoma: a pilot multi-institutional study to investigate a relationship between tumor shape features and tumor molecular subtype
Faber: A Hardware/SoftWare Toolchain for Image Registration
D'Arnese, Eleonora
Conficconi, Davide
Sozzo, Emanuele Del
Fusco, Luigi
Sciuto, Donatella
Santambrogio, Marco Domenico
IEEE Transactions on Parallel and Distributed Systems2023Journal Article, cited 0 times
Website
CPTAC-LUAD
Algorithm Development
Image Registration
Graphics Processing Units (GPU)
Image registration is a well-defined computation paradigm widely applied to align one or more images to a target image. This paradigm, which builds upon three main components, is particularly compute-intensive and represents many image processing pipelines’ bottlenecks. State-of-the-art solutions leverage hardware acceleration to speed up image registration, but they are usually limited to implementing a single component. We present Faber, an open-source HW/SW CAD toolchain tailored to image registration. The Faber toolchain comprises HW/SW highly-tunable registration components, supports users with different expertise in building custom pipelines, and automates the design process. In this direction, Faber provides both default settings for entry-level users and latency and resource models to guide HW experts in customizing the different components. Finally, Faber achieves from 1.5× to 54× in speedup and from 2× to 177× in energy efficiency against state-of-the-art tools on a Xeon Gold.
Superpixel-based deep convolutional neural networks and active contour model for automatic prostate segmentation on 3D MRI scans
da Silva, Giovanni L F
Diniz, Petterson S
Ferreira, Jonnison L
Franca, Joao V F
Silva, Aristofanes C
de Paiva, Anselmo C
de Cavalcanti, Elton A A
Med Biol Eng Comput2020Journal Article, cited 0 times
Website
Prostate-3T
Deep convolutional neural network (DCNN)
Segmentation
PROSTATE
Automatic and reliable prostate segmentation is an essential prerequisite for assisting the diagnosis and treatment, such as guiding biopsy procedure and radiation therapy. Nonetheless, automatic segmentation is challenging due to the lack of clear prostate boundaries owing to the similar appearance of prostate and surrounding tissues and the wide variation in size and shape among different patients ascribed to pathological changes or different resolutions of images. In this regard, the state-of-the-art includes methods based on a probabilistic atlas, active contour models, and deep learning techniques. However, these techniques have limitations that need to be addressed, such as MRI scans with the same spatial resolution, initialization of the prostate region with well-defined contours and a set of hyperparameters of deep learning techniques determined manually, respectively. Therefore, this paper proposes an automatic and novel coarse-to-fine segmentation method for prostate 3D MRI scans. The coarse segmentation step combines local texture and spatial information using the Intrinsic Manifold Simple Linear Iterative Clustering algorithm and probabilistic atlas in a deep convolutional neural networks model jointly with the particle swarm optimization algorithm to classify prostate and non-prostate tissues. Then, the fine segmentation uses the 3D Chan-Vese active contour model to obtain the final prostate surface. The proposed method has been evaluated on the Prostate 3T and PROMISE12 databases presenting a dice similarity coefficient of 84.86%, relative volume difference of 14.53%, sensitivity of 90.73%, specificity of 99.46%, and accuracy of 99.11%. Experimental results demonstrate the high performance potential of the proposed method compared to those previously published.
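The coarse stage builds on superpixels (the authors use the Intrinsic Manifold SLIC variant). A sketch with plain SLIC from scikit-image conveys the underlying idea of grouping MRI pixels into locally coherent regions before classification; the segment count and compactness are illustrative assumptions.

```python
# Plain SLIC superpixels on a (stand-in) grayscale MRI slice; requires
# scikit-image >= 0.19 for the channel_axis argument.
import numpy as np
from skimage.segmentation import slic

mri_slice = np.random.rand(256, 256)                   # stand-in for a T2w slice
superpixels = slic(mri_slice, n_segments=400, compactness=0.1,
                   channel_axis=None)                  # grayscale input
print(len(np.unique(superpixels)), "superpixels")      # label image of regions
```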
Pathophysiological mapping of tumor habitats in the breast in DCE-MRI using molecular texture descriptor
da Silva Neto, Otilio Paulo
Araújo, José Denes Lima
Caldas Oliveira, Ana Gabriela
Cutrim, Mara
Silva, Aristófanes Corrêa
Paiva, Anselmo Cardoso
Gattass, Marcelo
Computers in Biology and Medicine2019Journal Article, cited 0 times
QIN Breast DCE-MRI
Breast
MRI
BACKGROUND: We propose a computational methodology capable of detecting and analyzing breast tumor habitats in images acquired by magnetic resonance imaging with dynamic contrast enhancement (DCE-MRI), based on the pathophysiological behavior of the contrast agent (CA).
METHODS: The proposed methodology comprises three steps. In summary, the first step is the acquisition of images from the Quantitative Imaging Network Breast. In the second step, the segmentation of the breasts is performed to remove the background, noise, and other unwanted objects from the image. In the third step, the generation of habitats is performed by applying two techniques: the molecular texture descriptor (MTD) that highlights the CA regions in the breast, and pathophysiological texture mapping (MPT), which generates tumor habitats based on the behavior of the CA. The combined use of these two techniques allows the automatic detection of tumors in the breast and analysis of each separate habitat with respect to their malignancy type.
RESULTS: The results found in this study were promising, with 100% of breast tumors being identified. The segmentation results exhibited an accuracy of 99.95%, sensitivity of 71.07%, specificity of 99.98%, and volumetric similarity of 77.75%. Moreover, we were able to classify the malignancy of the tumors, with 6 classified as malignant type III (WashOut) and 14 as malignant type II (Plateau), for a total of 20 cases.
CONCLUSION: We proposed a method for the automatic detection of tumors in the breast in DCE-MRI and performed the pathophysiological mapping of tumor habitats by analyzing the behavior of the CA, combining MTD and MPT, which allowed the mapping of internal tumor habitats.
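A hedged sketch of the kinetic-curve typing on which such habitat maps rest: each voxel's DCE time-intensity curve is labelled Persistent (type I), Plateau (type II) or WashOut (type III) from its late-phase change relative to the early-phase enhancement. The 10% threshold and the early-phase index are common conventions assumed here, not the authors' exact parameters.

```python
# Classify a DCE time-intensity curve as Persistent / Plateau / WashOut.
import numpy as np

def curve_type(signal, early_end=3):
    peak = signal[early_end]                       # end of initial uptake (assumed index)
    late_change = (signal[-1] - peak) / (peak + 1e-9)
    if late_change > 0.10:
        return "I (Persistent)"
    if late_change < -0.10:
        return "III (WashOut)"
    return "II (Plateau)"

print(curve_type(1 - np.exp(-5 * np.linspace(0, 1, 10))))               # -> I (Persistent)
print(curve_type(np.array([0, .5, .9, 1, .95, .9, .85, .8, .75, .7])))  # -> III (WashOut)
```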
Self-training for Brain Tumour Segmentation with Uncertainty Estimation and Biophysics-Guided Survival Prediction
Dai, Chengliang
Wang, Shuo
Raynaud, Hadrien
Mo, Yuanhan
Angelini, Elsa
Guo, Yike
Bai, Wenjia
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Radiomics
Gliomas are among the most common types of malignant brain tumours in adults. Given the intrinsic heterogeneity of gliomas, the multi-parametric magnetic resonance imaging (mpMRI) is the most effective technique for characterising gliomas and their sub-regions. Accurate segmentation of the tumour sub-regions on mpMRI is of clinical significance, which provides valuable information for treatment planning and survival prediction. Thanks to the recent developments on deep learning, the accuracy of automated medical image segmentation has improved significantly. In this paper, we leverage the widely used attention and self-training techniques to conduct reliable brain tumour segmentation and uncertainty estimation. Based on the segmentation result, we present a biophysics-guided prognostic model for the prediction of overall survival. Our method of uncertainty estimation has won the second place of the MICCAI 2020 BraTS Challenge.
Segmentation of the Prostatic Gland and the Intraprostatic Lesions on Multiparametric Magnetic Resonance Imaging Using Mask Region-Based Convolutional Neural Networks
Dai, Zhenzhen
Carver, Eric
Liu, Chang
Lee, Joon
Feldman, Aharon
Zong, Weiwei
Pantelic, Milan
Elshaikh, Mohamed
Wen, Ning
2020Journal Article, cited 0 times
PROSTATEx
PURPOSE: Accurate delineation of the prostate gland and intraprostatic lesions (ILs) is essential for prostate cancer dose-escalated radiation therapy. The aim of this study was to develop a sophisticated deep neural network approach to magnetic resonance image analysis that will help clinicians detect and delineate ILs.
METHODS AND MATERIALS: We trained and evaluated mask region-based convolutional neural networks to perform the prostate gland and IL segmentation. There were 2 cohorts in this study: 78 public patients (cohort 1) and 42 private patients from our institution (cohort 2). Prostate gland segmentation was performed using T2-weighted images (T2WIs), although IL segmentation was performed using T2WIs and coregistered apparent diffusion coefficient maps with prostate patches cropped out. The IL segmentation model was extended to select 5 highly suspicious volumetric lesions within the entire prostate.
RESULTS: The mask region-based convolutional neural networks model was able to segment the prostate with dice similarity coefficient (DSC) of 0.88 ± 0.04, 0.86 ± 0.04, and 0.82 ± 0.05; sensitivity (Sens.) of 0.93, 0.95, and 0.95; and specificity (Spec.) of 0.98, 0.85, and 0.90. However, ILs were segmented with DSC of 0.62 ± 0.17, 0.59 ± 0.14, and 0.38 ± 0.19; Sens. of 0.55 ± 0.30, 0.63 ± 0.28, and 0.22 ± 0.24; and Spec. of 0.974 ± 0.010, 0.964 ± 0.015, and 0.972 ± 0.015 in public validation/public testing/private testing patients when trained with patients from cohort 1 only. When trained with patients from both cohorts, the values were as follows: DSC of 0.64 ± 0.11, 0.56 ± 0.15, and 0.46 ± 0.15; Sens. of 0.57 ± 0.23, 0.50 ± 0.28, and 0.33 ± 0.17; and Spec. of 0.980 ± 0.009, 0.969 ± 0.016, and 0.977 ± 0.013.
CONCLUSIONS: Our research framework is able to perform as an end-to-end system that automatically segmented the prostate gland and identified and delineated highly suspicious ILs within the entire prostate. Therefore, this system demonstrated the potential for assisting the clinicians in tumor delineation.
Brain Tumor Segmentation Using Non-local Mask R-CNN and Single Model Ensemble
Gliomas are the most common primary malignant brain tumors. Accurate segmentation and quantitative analysis of brain tumors are critical for diagnosis and treatment planning. Automatically segmenting tumors and their subregions is a challenging task, as demonstrated by the annual Multimodal Brain Tumor Segmentation Challenge (BraTS). In order to tackle this challenging task, we trained a 2D non-local Mask R-CNN with 814 patients from the BraTS 2021 training dataset. Our performance on another 417 patients from the BraTS 2021 training dataset was as follows: DSC of 0.784, 0.851 and 0.817; sensitivity of 0.775, 0.844 and 0.825 for the enhancing tumor, whole tumor and tumor core, respectively. By applying the focal loss function, our method achieved a DSC of 0.775, 0.885 and 0.829, as well as sensitivity of 0.757, 0.877 and 0.801. We also experimented with data distillation to ensemble a single model’s predictions. Our refined results were DSC of 0.797, 0.884 and 0.833; sensitivity of 0.820, 0.855 and 0.820.
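The abstract credits the focal loss with part of the improvement. A compact PyTorch version for binary voxel classification is sketched below; the gamma and alpha values are the common defaults, and the simplified uniform alpha weighting is an assumption rather than the paper's exact formulation.

```python
# Simplified binary focal loss: down-weights easy examples via (1 - p_t)^gamma.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                      # probability of the true class
    weight = alpha * (1 - p_t) ** gamma        # down-weight easy examples
    return (weight * bce).mean()

logits = torch.randn(2, 1, 64, 64)             # raw mask logits
targets = (torch.rand(2, 1, 64, 64) > 0.9).float()
print(focal_loss(logits, targets))
```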
The Role of Transient Vibration of the Skull on Concussion
Concussion is a traumatic brain injury usually caused by a direct or indirect blow to the head that affects brain function. The maximum mechanical impedance of the brain tissue occurs at 450±50 Hz and may be affected by the skull resonant frequencies. After an impact to the head, vibration resonance of the skull damages the underlying cortex. The skull deforms and vibrates, like a bell, for 3 to 5 milliseconds, bruising the cortex. Furthermore, the deceleration forces the frontal and temporal cortex against the skull, eliminating a layer of cerebrospinal fluid. When the skull vibrates, the force spreads directly to the cortex, with no layer of cerebrospinal fluid to reflect the wave or cushion its force. To date, little research has investigated the effect of transient vibration of the skull. Therefore, the overall goal of the proposed research is to gain a better understanding of the role of transient vibration of the skull on concussion. This goal will be achieved by addressing three research objectives. First, an automatic MRI skull and brain segmentation technique is developed. Due to bones’ weak magnetic resonance signal, MRI scans struggle with differentiating bone tissue from other structures. One of the most important components for a successful segmentation is high-quality ground truth labels. Therefore, we introduce a deep learning framework for skull segmentation in which the ground truth labels are created from CT imaging using standard tessellation language (STL) models. Furthermore, the brain region will be important for future work; thus, we explore a new initialization concept of the convolutional neural network (CNN) by orthogonal moments to improve brain segmentation in MRI. Second, a novel 2D and 3D automatic method to align the facial skeleton is introduced. An important aspect for further impact analysis is the ability to precisely simulate the same point of impact on multiple bone models. To perform this task, the skull must be precisely aligned in all anatomical planes. Therefore, we introduce a 2D/3D technique to align the facial skeleton that was initially developed for automatically calculating the craniofacial symmetry midline. In the 2D version, the entire concept of using cephalometric landmarks and manual image grid alignment to construct the training dataset was introduced. Then, this concept was extended to a 3D version where coronal and transverse planes are aligned using a CNN approach. As the alignment in the sagittal plane is still undefined, a new alignment based on these techniques will be created to align the sagittal plane using the Frankfort plane as a framework. Finally, the resonant frequencies of multiple skulls are assessed to determine how the skull resonant frequency vibrations propagate into the brain tissue. After applying material properties and mesh to the skull, modal analysis is performed to assess the skull natural frequencies. Finally, theories will be proposed regarding the relation between skull geometry, such as shape and thickness, vibration, and brain tissue injury, which may result in concussion. Summary for Lay Audience: A concussion is a traumatic brain injury usually caused by a direct or indirect blow to the head that affects brain function. As the maximum mechanical impedance of the brain tissue occurs at 450±50 Hz, skull resonant frequencies may play an important role in the propagation of this vibration into the brain tissue.
The overall goal of the proposed research is to gain a better understanding of the role of transient vibration of the skull on concussion. This goal will be achieved by addressing three research objectives: I) develop an automatic method to segment/extract skull and brain from magnetic resonance imaging (MRI), II) create a novel 2D and 3D automatic method to align the facial skeleton, and III) identify the skull resonant frequencies and develop a theory of how these vibrations may propagate into brain tissue. For objective 1, 58 MRI scans and their respective computed tomography (CT) scans were used to create a convolutional neural network framework for skull and brain segmentation in MRI. Moreover, an invariant moment kernel was introduced to improve the brain segmentation accuracy in MRI. For objective 2, a 2D and 3D technique for automatically calculating the craniofacial symmetry midline from head CT scans using deep learning techniques was used to precisely align the facial skeleton for future impact analysis. In objective 3, several of the segmented skulls were tested to identify their natural resonant frequencies. Those with a resonant frequency of 450±50 Hz were selected to improve understanding of how their shape and thickness may help the vibration propagate deep into the brain tissue. The results from this study will improve our understanding of the role of transient vibration of the skull on concussion.
Development of a Convolutional Neural Network Based Skull Segmentation in MRI Using Standard Tesselation Language Models
Dalvit Carvalho da Silva, R.
Jenkyn, T. R.
Carranza, V. A.
J Pers Med2021Journal Article, cited 0 times
Website
CPTAC-GBM
HNSCC
TCGA-HNSC
ACRIN-FMISO-Brain
ACRIN 6684
Computed Tomography (CT)
Magnetic Resonance Imaging (MRI)
Convolutional Neural Network (CNN)
Segmentation
Image Registration
Segmentation is crucial in medical imaging analysis to help extract regions of interest (ROI) from different imaging modalities. The aim of this study is to develop and train a 3D convolutional neural network (CNN) for skull segmentation in magnetic resonance imaging (MRI). 58 gold standard volumetric labels were created from computed tomography (CT) scans in standard tessellation language (STL) models. These STL models were converted into matrices and overlapped on the 58 corresponding MR images to create the MRI gold standard labels. The CNN was trained with these 58 MR images and a mean ± standard deviation (SD) Dice similarity coefficient (DSC) of 0.7300 ± 0.04 was achieved. A further investigation was carried out where the brain region was removed from the image with the help of a 3D CNN and manual corrections by using only MR images. This new dataset, without the brain, was presented to the previous CNN, which reached a new mean ± SD DSC of 0.7826 ± 0.03. This paper aims to provide a framework for segmenting the skull using CNN and STL models, as the 3D CNN was able to segment the skull with a certain precision.
Immunotherapy in Metastatic Colorectal Cancer: Could the Latest Developments Hold the Key to Improving Patient Survival?
Damilakis, E.
Mavroudis, D.
Sfakianaki, M.
Souglakos, J.
Cancers (Basel)2020Journal Article, cited 0 times
Website
NSCLC-Radiomics
Radiomics
Radiogenomics
Immunotherapy has considerably increased the number of anticancer agents in many tumor types including metastatic colorectal cancer (mCRC). Anti-PD-1 (programmed death 1) and cytotoxic T-lymphocyte-associated antigen 4 (CTLA-4) immune checkpoint inhibitors (ICI) have been shown to benefit the mCRC patients with mismatch repair deficiency (dMMR) or high microsatellite instability (MSI-H). However, ICI is not effective in mismatch repair proficient (pMMR) colorectal tumors, which constitute a large population of patients. Several clinical trials evaluating the efficacy of immunotherapy combined with chemotherapy, radiation therapy, or other agents are currently ongoing to extend the benefit of immunotherapy to pMMR mCRC cases. In dMMR patients, MSI testing through immunohistochemistry and/or polymerase chain reaction can be used to identify patients that will benefit from immunotherapy. Next-generation sequencing has the ability to detect MSI-H using a low amount of nucleic acids and its application in clinical practice is currently being explored. Preliminary data suggest that radiomics is capable of discriminating MSI from microsatellite stable mCRC and may play a role as an imaging biomarker in the future. Tumor mutational burden, neoantigen burden, tumor-infiltrating lymphocytes, immunoscore, and gastrointestinal microbiome are promising biomarkers that require further investigation and validation.
A deep learning framework integrating MRI image preprocessing methods for brain tumor segmentation and classification
Dang, Khiet
Vo, Toi
Ngo, Lua
Ha, Huong
IBRO Neuroscience Reports2022Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Glioma grading is critical in treatment planning and prognosis. This study aims to address this issue through MRI-based classification to develop an accurate model for glioma diagnosis. Here, we employed a deep learning pipeline with three essential steps: (1) MRI images were segmented using preprocessing approaches and a UNet architecture, (2) brain tumor regions were extracted from the segmentations, and (3) high-grade gliomas and low-grade gliomas were classified using VGG and GoogleNet implementations. Among the additional preprocessing techniques used in conjunction with the segmentation task, the combination of data augmentation and Window Setting Optimization was found to be the most effective, resulting in Dice coefficients of 0.82, 0.91, and 0.72 for enhancing tumor, whole tumor, and tumor core, respectively. While most of the proposed models achieve comparable accuracies of about 93% on the testing dataset, the pipeline of VGG combined with UNet segmentation obtains the highest accuracy of 97.44%. In conclusion, the presented architecture illustrates a realistic model for detecting gliomas; moreover, it emphasizes the significance of data augmentation and segmentation in improving model performance.
MFMF: Multiple Foundation Model Fusion Networks for Whole Slide Image Classification
Dang, Thao M.
Guo, Yuzhi
Ma, Hehuan
Zhou, Qifeng
Na, Saiyang
Gao, Jean
Huang, Junzhou
2024Conference Paper, cited 0 times
TCGA-LUAD
TCGA-LUSC
Tumor detection and subtyping remain a significant challenge in histopathology image analysis. As digital pathology progresses, the applications of deep learning become essential. Whole Slide Image (WSI) classification has emerged as a crucial task in digital pathology, vital for accurate cancer diagnosis and treatment. In this paper, we introduce an innovative abnormal-guided Multiple Foundation Model Fusion (MFMF) framework, aimed at enhancing WSI classification by integrating multi-level information from pathology images with Multiple Instance Learning (MIL). Traditional methods often focus on patch-level features while neglecting the rich contextual and morphological details at the cell and text levels, thus failing to fully exploit the multidimensional nature of WSIs. Our method enhances traditional models by efficiently integrating patch-level, cell-level, and text-level features using three foundation models. These are then fused through a novel three-step cross-attention module that effectively leverages cell and text information with patch-level features. Furthermore, unlike most studies that use attention scores to select instances based on the assumption that high scores indicate the presence of a tumor, we design an abnormality-aware module to naturally identify and detect abnormal features (i.e., tumors) as the criteria for selecting important instances, thereby reducing computational costs and boosting overall performance. We validate our approach against leading benchmarks on the CAMELYON16 and TCGA-Lung datasets, achieving superior classification performance. Our study not only tackles the challenges of sparsity and noise in multi-level features but also enhances the efficiency and accuracy of WSI classification by exploiting abnormal features.
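The cross-attention fusion described above can be sketched with standard attention primitives; the condensed PyTorch sketch below shows two of the hops (the paper describes a three-step module), with embedding sizes and token counts that are hypothetical:

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Illustrative fusion: patch tokens query cell tokens, then text tokens."""
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.patch_cell = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.patch_text = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, patch, cell, text):
        # patch/cell/text: (batch, tokens, dim) embeddings from three encoders
        x, _ = self.patch_cell(patch, cell, cell)   # enrich patches with cell context
        patch = self.norm1(patch + x)
        x, _ = self.patch_text(patch, text, text)   # enrich with text context
        return self.norm2(patch + x)

fusion = CrossAttentionFusion()
patch = torch.randn(1, 100, 256)   # 100 patch instances per slide (hypothetical)
cell = torch.randn(1, 500, 256)    # cell-level tokens
text = torch.randn(1, 20, 256)     # text-level tokens
print(fusion(patch, cell, text).shape)   # torch.Size([1, 100, 256])
```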
Feature Extraction In Medical Images by Using Deep Learning Approach
Dara, S
Tumma, P
Eluri, NR
Kancharla, GR
International Journal of Pure and Applied Mathematics2018Journal Article, cited 0 times
TCGA-LUAD
Machine Learning
Deep Learning
Feature Extraction
Prognostic Value of Preoperative MRI Metrics for Diffuse Lower-Grade Glioma Molecular Subtypes
Darvishi, P.
Batchala, P. P.
Patrie, J. T.
Poisson, L. M.
Lopes, M. B.
Jain, R.
Fadul, C. E.
Schiff, D.
Patel, S. H.
AJNR Am J Neuroradiol2020Journal Article, cited 0 times
BACKGROUND AND PURPOSE: Despite the improved prognostic relevance of the 2016 WHO molecular-based classification of lower-grade gliomas, variability in clinical outcome persists within existing molecular subtypes. Our aim was to determine prognostically significant metrics on preoperative MR imaging for lower-grade gliomas within currently defined molecular categories. MATERIALS AND METHODS: We undertook a retrospective analysis of 306 patients with lower-grade gliomas accrued from an institutional data base and The Cancer Genome Atlas. Two neuroradiologists in consensus analyzed preoperative MRIs of each lower-grade glioma to determine the following: tumor size, tumor location, number of involved lobes, corpus callosum involvement, hydrocephalus, midline shift, eloquent cortex involvement, ependymal extension, margins, contrast enhancement, and necrosis. Adjusted hazard ratios determined the association between MR imaging metrics and overall survival per molecular subtype, after adjustment for patient age, patient sex, World Health Organization grade, and surgical resection status. RESULTS: For isocitrate dehydrogenase (IDH) wild-type lower-grade gliomas, tumor size (hazard ratio, 3.82; 95% CI, 1.94-7.75; P < .001), number of involved lobes (hazard ratio, 1.70; 95% CI, 1.28-2.27; P < .001), hydrocephalus (hazard ratio, 4.43; 95% CI, 1.12-17.54; P = .034), midline shift (hazard ratio, 1.16; 95% CI, 1.03-1.30; P = .013), margins (P = .031), and contrast enhancement (hazard ratio, 0.34; 95% CI, 0.13-0.90; P = .030) were associated with overall survival. For IDH-mutant 1p/19q-codeleted lower-grade gliomas, tumor size (hazard ratio, 2.85; 95% CI, 1.06-7.70; P = .039) and ependymal extension (hazard ratio, 6.34; 95% CI, 1.07-37.59; P = .042) were associated with overall survival. CONCLUSIONS: MR imaging metrics offers prognostic information for patients with lower-grade gliomas within molecularly defined classes, with the greatest prognostic value for IDH wild-type lower-grade gliomas.
An Efficient Detection and Classification of Acute Leukemia using Transfer Learning and Orthogonal Softmax Layer-based Model
Das, P. K.
Sahoo, B.
Meher, S.
IEEE/ACM Trans Comput Biol Bioinform2022Journal Article, cited 0 times
C_NMC_2019
Blood cancer
Pathomics
Support Vector Machine (SVM)
Image color analysis
Transfer learning
Acute lymphoblastic leukemia
acute myelogenous leukemia
Classification
Orthogonal softMax layer (OSL)
For the early diagnosis of hematological disorders like blood cancer, microscopic analysis of blood cells is very important. Traditional deep CNNs lead to overfitting when they receive small medical image datasets such as ALLIDB1, ALLIDB2, and ASH. This paper proposes a new and effective model for classifying and detecting Acute Lymphoblastic Leukemia (ALL) or Acute Myelogenous Leukemia (AML) that delivers excellent performance on small medical datasets. Here, we have proposed a novel Orthogonal SoftMax Layer (OSL)-based acute leukemia detection model that consists of ResNet18-based deep feature extraction followed by efficient OSL-based classification. OSL is integrated with ResNet18 to improve the classification performance by making the weight vectors orthogonal to each other. Hence, it integrates the benefits of ResNet (residual learning and identity mapping) with the benefits of OSL-based classification (improved feature discrimination capability and computational efficiency). Furthermore, we have introduced extra dropout and ReLU layers in the architecture to achieve a faster network with enhanced performance. Performance verification is carried out on the standard ALLIDB1, ALLIDB2, and C_NMC_2019 datasets for efficient ALL detection and on the ASH dataset for effective AML detection. The experimental performance demonstrates the superiority of the proposed model over competing models.
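One way to realize the orthogonality idea behind an orthogonal softmax layer is a soft penalty that pushes normalized class weight vectors toward mutual orthogonality; the PyTorch sketch below illustrates that idea under this assumption and is not the paper's exact OSL formulation:

```python
import torch
import torch.nn as nn

class OrthogonalHead(nn.Module):
    """Classification head whose class weight vectors are pushed toward
    mutual orthogonality via a soft penalty (a sketch of the OSL idea)."""
    def __init__(self, in_features: int, num_classes: int):
        super().__init__()
        self.fc = nn.Linear(in_features, num_classes, bias=False)

    def forward(self, x):
        return self.fc(x)   # logits; softmax is applied inside the loss

    def orthogonality_penalty(self):
        w = nn.functional.normalize(self.fc.weight, dim=1)  # (classes, features)
        gram = w @ w.t()
        eye = torch.eye(gram.size(0), device=gram.device)
        return ((gram - eye) ** 2).sum()

head = OrthogonalHead(512, 2)    # e.g. ResNet-18 features -> ALL vs normal (assumed sizes)
feats, labels = torch.randn(8, 512), torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(head(feats), labels) + 0.1 * head.orthogonality_penalty()
loss.backward()
```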
Detection and Classification of Immature Leukocytes for Diagnosis of Acute Myeloid Leukemia Using Random Forest Algorithm
Dasariraju, Satvik
Huo, Marc
McCalla, Serena
Bioengineering2020Journal Article, cited 0 times
AML-Cytomorphology_LMU
Acute myeloid leukemia (AML) is a fatal blood cancer that progresses rapidly and hinders the function of blood cells and the immune system. The current AML diagnostic method, a manual examination of the peripheral blood smear, is time consuming, labor intensive, and suffers from considerable inter-observer variation. Herein, a machine learning model to detect and classify immature leukocytes for efficient diagnosis of AML is presented. Images of leukocytes in AML patients and healthy controls were obtained from a publicly available dataset in The Cancer Imaging Archive. Image format conversion, multi-Otsu thresholding, and morphological operations were used for segmentation of the nucleus and cytoplasm. From each image, 16 features were extracted, two of which are new nucleus color features proposed in this study. A random forest algorithm was trained for the detection and classification of immature leukocytes. The model achieved 92.99% accuracy for detection and 93.45% accuracy for classification of immature leukocytes into four types. Precision values for each class were above 65%, which is an improvement on the current state of the art. Based on Gini importance, the nucleus-to-cytoplasm area ratio was a discriminative feature for both detection and classification, while the two proposed features were shown to be significant for classification. The proposed model can be used as a support tool for the diagnosis of AML, and the features calculated to be most important serve as a baseline for future research.
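The segmentation and classification steps described above map onto standard library calls; a minimal sketch with scikit-image's multi-Otsu thresholding and scikit-learn's random forest, using a hypothetical 16-feature matrix in place of the paper's extracted features:

```python
import numpy as np
from skimage.filters import threshold_multiotsu
from sklearn.ensemble import RandomForestClassifier

def segment_cell(gray: np.ndarray) -> np.ndarray:
    """Split a grayscale cell image into background / cytoplasm / nucleus
    classes with multi-Otsu thresholding (illustrative, 3 classes)."""
    thresholds = threshold_multiotsu(gray, classes=3)
    return np.digitize(gray, bins=thresholds)   # region labels 0, 1, 2

rng = np.random.default_rng(0)
labels = segment_cell(rng.random((64, 64)))
print(np.unique(labels))                        # [0 1 2]

# Hypothetical 16-feature matrix (e.g. nucleus/cytoplasm area ratio and
# nucleus color statistics) with four immature-leukocyte type labels
X, y = rng.random((200, 16)), rng.integers(0, 4, 200)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```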
Brain tumor image pixel segmentation and detection using an aggregation of GAN models with vision transformer
Datta, Priyanka
Rohilla, Rajesh
International Journal of Imaging Systems and Technology2023Journal Article, cited 0 times
BraTS 2020
Magnetic Resonance Imaging (MRI)
Imaging features
Image Enhancement/methods
Classification
Algorithm Development
Generative Adversarial Network (GAN)
A number of applications in the field of medical analysis require the difficult and crucial tasks of brain tumor detection and segmentation from magnetic resonance imaging (MRI). Given that each type of brain imaging provides distinctive information about each tumor component, we first suggest a normalization preprocessing method along with pixel segmentation in order to create a flexible and successful brain tumor segmentation system. Generative adversarial networks (GANs) make the creation of synthetic images advantageous in many fields. However, combining different GANs may capture distributed features but can make the model very complex and confusing, while a standalone GAN may only retrieve localized features in the latent version of an image. To achieve global and local feature extraction in a single model, we have used a vision transformer (ViT) along with a standalone GAN, which further improves the similarity of the images and can increase the performance of the model for tumor detection. By effectively overcoming the constraints of data scarcity, high computational time, and lower discrimination capability, our suggested model can achieve better accuracy and lower computational time and also provide an understanding of the information variance in various representations of the original images. The proposed model was evaluated on the BraTS 2020 dataset and the Masoud2021 dataset, that is, a combination of the three datasets SARTAJ, Figshare, and BR35H. The obtained results demonstrate that the suggested model is capable of producing fine-quality images with accuracy and sensitivity scores of 0.9765 and 0.977 on the BraTS 2020 dataset as well as 0.9899 and 0.9683 on the Masoud2021 dataset.
AI-based Prognostic Imaging Biomarkers for Precision Neurooncology: the ReSPOND Consortium
Davatzikos, C.
Barnholtz-Sloan, J. S.
Bakas, S.
Colen, R.
Mahajan, A.
Quintero, C. B.
Font, J. C.
Puig, J.
Jain, R.
Sloan, A. E.
Badve, C.
Marcus, D. S.
Choi, Y. S.
Lee, S. K.
Chang, J. H.
Poisson, L. M.
Griffith, B.
Dicker, A. P.
Flanders, A. E.
Booth, T. C.
Rathore, S.
Akbari, H.
Sako, C.
Bilello, M.
Shukla, G.
Kazerooni, A. F.
Brem, S.
Lustig, R.
Mohan, S.
Bagley, S.
Nasrallah, M.
O'Rourke, D. M.
Neuro-Oncology2020Journal Article, cited 0 times
Machine Learning
Glioblastoma Multiforme (GBM)
Computer Aided Diagnosis (CADx)
Magnetic Resonance Imaging (MRI)
Radiomics
Radiomic features
BRAIN
Cross-linking breast tumor transcriptomic states and tissue histology
Dawood, M.
Eastwood, M.
Jahanifar, M.
Young, L.
Ben-Hur, A.
Branson, K.
Jones, L.
Rajpoot, N.
Minhas, F. U. A. A.
Cell Rep Med2023Journal Article, cited 1 times
CPTAC-BRCA
Whole Slide Imaging (WSI)
TCGA-BRCA
Humans
Female
*Gene Expression Profiling
Transcriptome/genetics
Neural Networks, Computer
Phenotype
*Breast Neoplasms/genetics
breast cancer
computational pathology
gene groups
genotype to phenotype mapping
graph neural networks
receptor status prediction
spatial transcriptomics
topic modelling
transcriptomics
Identification of the gene expression state of a cancer patient from routine pathology imaging and characterization of its phenotypic effects have significant clinical and therapeutic implications. However, prediction of expression of individual genes from whole slide images (WSIs) is challenging due to co-dependent or correlated expression of multiple genes. Here, we use a purely data-driven approach to first identify groups of genes with co-dependent expression and then predict their status from WSIs using a bespoke graph neural network. These gene groups allow us to capture the gene expression state of a patient with a small number of binary variables that are biologically meaningful and carry histopathological insights for clinical and therapeutic use cases. Prediction of gene expression state based on these gene groups allows associating histological phenotypes (cellular composition, mitotic counts, grading, etc.) with underlying gene expression patterns and opens avenues for gaining biological insights from routine pathology imaging directly.
Cerberus: A Multi-headed Network for Brain Tumor Segmentation
The automated analysis of medical images requires robust and accurate algorithms that address the inherent challenges of identifying heterogeneous anatomical and pathological structures, such as brain tumors, in large volumetric images. In this paper, we present Cerberus, a single lightweight convolutional neural network model for the segmentation of fine-grained brain tumor regions in multichannel MRIs. Cerberus has an encoder-decoder architecture that takes advantage of a shared encoding phase to learn common representations for these regions and then uses specialized decoders to produce detailed segmentations. Cerberus learns to combine the weights learned for each category to produce a final multi-label segmentation. We evaluate our approach on the official test set of the Brain Tumor Segmentation Challenge 2020 and obtain Dice scores of 0.807 for enhancing tumor, 0.867 for whole tumor, and 0.826 for tumor core.
A blockchain-based protocol for tracking user access to shared medical imaging
de Aguiar, Erikson J.
dos Santos, Alyson J.
Meneguette, Rodolfo I.
De Grande, Robson E.
Ueyama, Jó
Future Generation Computer Systems2022Journal Article, cited 0 times
CPTAC-CM
CPTAC-LSCC
Modern healthcare systems are complex and regularly share sensitive data among multiple stakeholders, such as doctors, patients, and pharmacists. The volume of patients' data has increased and requires safe methods of management. Research works related to blockchain, such as MIT's MedRec, have strived to draft trustworthy and immutable systems to share data. However, blockchain may be challenging in healthcare scenarios due to issues with privacy and control of data sharing destinations. This paper presents a protocol for tracking shared medical data, including images, and controlling medical data access by multiple conflicting stakeholders. Several efforts rely on blockchain for healthcare, but just a few are concerned with malicious data leakage in blockchain-based healthcare systems. We implement a token mechanism stored in DICOM files and managed by the Hyperledger Fabric blockchain. Our findings and evaluations revealed low chances of a hash collision, even under a birthday attack on collision resistance. Although our solution was devised for healthcare, it can inspire and be easily ported to other blockchain-based application scenarios, such as Ethereum or Hyperledger Besu for business networks.
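A token of the kind described, derived from a DICOM file's contents, can be sketched with pydicom and SHA-256; the key derivation below (hashing pixel data together with a user id) is an assumption for illustration, and the on-chain Hyperledger Fabric logic is omitted:

```python
import hashlib
import pydicom

def dicom_access_token(path: str, user_id: str) -> str:
    """Derive a tracking token by hashing the DICOM pixel data together
    with the requesting user's id (illustrative; the paper's token and
    ledger management live in Hyperledger Fabric chaincode)."""
    ds = pydicom.dcmread(path)
    digest = hashlib.sha256()
    digest.update(ds.PixelData)      # raw pixel bytes of the image
    digest.update(user_id.encode())  # binds the token to one requester
    return digest.hexdigest()

# Hypothetical usage: the token would be recorded on-chain and embedded
# in the file's metadata before the image is shared.
# token = dicom_access_token("study/slice001.dcm", user_id="doctor-42")
```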
Impact of GAN-based Lesion-Focused Medical Image Super-Resolution on Radiomic Feature Robustness
Robust machine learning models based on radiomic features might allow for accurate diagnosis, prognosis, and medical decision-making. Unfortunately, the lack of standardized radiomic feature extraction has hampered their clinical use. Since the radiomic features tend to be affected by low voxel statistics in regions of interest, increasing the sample size would improve their robustness in clinical studies. Therefore, we propose a Generative Adversarial Network (GAN)-based lesion-focused framework for Computed Tomography (CT) image Super-Resolution (SR); for the lesion (i.e., cancer) patch-focused training, we incorporate Spatial Pyramid Pooling (SPP) into GAN-Constrained by the Identical, Residual, and Cycle Learning Ensemble (GAN-CIRCLE). At 2× SR, the proposed model achieved better perceptual quality with less blurring than the other considered state-of-the-art SR methods, while producing comparable results at 4× SR. We also evaluated the robustness of our model's radiomic features in terms of quantization on a different lung cancer CT dataset using Principal Component Analysis (PCA). Intriguingly, the most important radiomic features in our PCA-based analysis were the most robust features extracted on the GAN-super-resolved images. These achievements pave the way for the application of GAN-based image Super-Resolution techniques for studies of radiomics for robust biomarker discovery.
Impact of GAN-based lesion-focused medical image super-resolution on the robustness of radiomic features
de Farias, E. C.
di Noia, C.
Han, C.
Sala, E.
Castelli, M.
Rundo, L.
Sci Rep2021Journal Article, cited 12 times
NSCLC-Radiomics
Algorithms
Humans
Image Processing, Computer-Assisted/methods
Lung/*diagnostic imaging/pathology
Lung Neoplasms/*diagnostic imaging/pathology
Machine Learning
Tomography, X-Ray Computed/methods
Robust machine learning models based on radiomic features might allow for accurate diagnosis, prognosis, and medical decision-making. Unfortunately, the lack of standardized radiomic feature extraction has hampered their clinical use. Since the radiomic features tend to be affected by low voxel statistics in regions of interest, increasing the sample size would improve their robustness in clinical studies. Therefore, we propose a Generative Adversarial Network (GAN)-based lesion-focused framework for Computed Tomography (CT) image Super-Resolution (SR); for the lesion (i.e., cancer) patch-focused training, we incorporate Spatial Pyramid Pooling (SPP) into GAN-Constrained by the Identical, Residual, and Cycle Learning Ensemble (GAN-CIRCLE). At [Formula: see text] SR, the proposed model achieved better perceptual quality with less blurring than the other considered state-of-the-art SR methods, while producing comparable results at [Formula: see text] SR. We also evaluated the robustness of our model's radiomic feature in terms of quantization on a different lung cancer CT dataset using Principal Component Analysis (PCA). Intriguingly, the most important radiomic features in our PCA-based analysis were the most robust features extracted on the GAN-super-resolved images. These achievements pave the way for the application of GAN-based image Super-Resolution techniques for studies of radiomics for robust biomarker discovery.
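The PCA-based robustness analysis can be sketched on a hypothetical lesion-by-feature matrix as follows; the paper's actual pipeline compares such feature rankings between native and GAN-super-resolved images:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical matrix: rows = lesions, columns = radiomic features
rng = np.random.default_rng(0)
features = rng.random((60, 40))

pca = PCA(n_components=5)
scores = pca.fit_transform(StandardScaler().fit_transform(features))

# Features with the largest absolute loading on PC1 drive most variance;
# the study compares this ranking against per-feature robustness.
loadings = np.abs(pca.components_[0])
print("top features on PC1:", np.argsort(loadings)[::-1][:5])
print("explained variance:", pca.explained_variance_ratio_.round(3))
```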
Machine learning: a useful radiological adjunct in determination of a newly diagnosed glioma’s grade and IDH status
De Looze, Céline
Beausang, Alan
Cryan, Jane
Loftus, Teresa
Buckley, Patrick G
Farrell, Michael
Looby, Seamus
Reilly, Richard
Brett, Francesca
Kearney, Hugh
Journal of Neuro-Oncology2018Journal Article, cited 0 times
REMBRANDT
glioma
machine learning
Combined use of radiomics and artificial neural networks for the three‐dimensional automatic segmentation of glioblastoma multiforme
de los Reyes, Alexander Mulet
Lord, Victoria Hyde
Buemi, Maria Elena
Gandía, Daniel
Déniz, Luis Gómez
Alemán, Maikel Noriega
Suárez, Cecilia
Expert Systems2024Journal Article, cited 0 times
TCGA-GBM
BraTS 2020
Artificial Neural Network (ANN)
Automatic Segmentation
Glioblastoma Multiforme (GBM)
Radiomics
Glioblastoma multiforme (GBM) is the most prevalent and aggressive primary brain tumour and has the worst prognosis in adults. Currently, the automatic segmentation of this kind of tumour is being intensively studied. Here, the automatic three-dimensional segmentation of the GBM is achieved together with its related subzones (active tumour, inner necrosis, and peripheral oedema). Preliminary segmentations were first defined based on the four basic magnetic resonance imaging modalities and classic image processing methods (multithreshold Otsu, Chan–Vese active contours, and morphological erosion). After an automatic gap-filling post-processing step, these preliminary segmentations were combined and corrected by a supervised artificial neural network of multilayer perceptron type with a hidden layer of 80 neurons, fed by 30 selected radiomic features of grey intensity and texture. Network classification has an overall accuracy of 83.9%, while the complete combined algorithm achieves average Dice similarity coefficients of 89.3%, 80.7%, 79.7%, and 66.4% for the entire region of interest, active tumour, oedema, and necrosis segmentations, respectively. These values are in the range of the best reported in the present bibliography, but with better Hausdorff distances and lower computational costs. The results presented here evidence that it is possible to achieve the automatic segmentation of this kind of tumour with traditional radiomics. This has relevant clinical potential at the time of diagnosis, precision radiotherapy planning, or post-treatment response evaluation.
Segmentação Automática de Candidatos a Nódulos Pulmonares em Imagens de Tomografia Computadorizada
This paper presents an algorithm for automatic segmentation of pulmonary nodule candidates in chest computed tomography images. The methodology includes image acquisition, noise elimination, segmentation of the pulmonary parenchyma, and segmentation of pulmonary nodule candidates. The use of the Wiener filter and the application of an ideal threshold give the algorithm a significant improvement in results, allowing it to detect a greater number of nodules in the images. The tests were conducted using a set of images from the LIDC-IDRI database containing 708 nodules. The test results showed that the algorithm located 93.08% of the nodules considered.
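The denoise-threshold-label sequence of this pipeline can be sketched with SciPy's Wiener filter and scikit-image's connected-component tools; the threshold value and the synthetic test slice below are illustrative stand-ins for the paper's ideal-threshold step and parenchyma masks:

```python
import numpy as np
from scipy.signal import wiener
from skimage import measure

def nodule_candidates(ct_slice: np.ndarray, threshold: float):
    """Denoise a CT slice with a Wiener filter, binarize at a chosen
    threshold, and return connected components as candidate nodules
    (a sketch of the pipeline; parenchyma extraction is omitted)."""
    denoised = wiener(ct_slice.astype(float), mysize=5)
    binary = denoised > threshold
    labels = measure.label(binary)
    return measure.regionprops(labels)

rng = np.random.default_rng(0)
fake_slice = rng.normal(0.2, 0.05, (128, 128))
fake_slice[60:68, 60:68] += 0.6          # synthetic bright blob as a "nodule"
cands = nodule_candidates(fake_slice, threshold=0.5)
print(len(cands), "candidate region(s)")
```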
Probabilistic Tissue Mapping for Tumor Segmentation and Infiltration Detection of Glioma
Segmentation of glioma structures is vital for therapy planning. Although state of the art algorithms achieve impressive results when compared to ground-truth manual delineations, one could argue that the binary nature of these labels does not properly reflect the underlying biology, nor does it account for uncertainties in the predicted segmentations. Moreover, the tumor infiltration beyond the contrast-enhanced lesion – visually imperceptible on imaging – is often ignored despite its potential role in tumor recurrence. We propose an intensity-based probabilistic model for brain tissue mapping based on conventional MRI sequences. We evaluated its value in the binary segmentation of the tumor and its subregions, and in the visualisation of possible infiltration. The model achieves a median Dice of 0.82 in the detection of the whole tumor, but suffers from confusion between different subregions. Preliminary results for the tumor probability maps encourage further investigation of the model regarding infiltration detection.
Ensembling Voxel-Based and Box-Based Model Predictions for Robust Lesion Detection
This paper presents a novel generic method to improve lesion detection by ensembling semantic segmentation and object detection models. The proposed approach allows us to benefit from both voxel-based and box-based predictions, thus improving the ability to accurately detect lesions. The method consists of three main steps: (i) semantic segmentation and object detection models are trained separately; (ii) voxel-based and box-based predictions are matched spatially; (iii) corresponding lesion presence probabilities are combined into summary detection maps. We illustrate and validate the robustness of the proposed approach on three different oncology applications: liver and pancreas neoplasm detection in single-phase CT, and significant prostate cancer detection in multi-modal MRI. Performance is evaluated on publicly available databases and compared to two state-of-the-art baseline methods. The proposed ensembling approach improves the average precision metric in all considered applications, with an 8% gain for prostate cancer.
Automatic Measurement of the Total Visceral Adipose Tissue From Computed Tomography Images by Using a Multi-Atlas Segmentation Method
BACKGROUND: The visceral adipose tissue (VAT) volume is a predictive and/or prognostic factor for many cancers. The objective of our study was to develop an automatic measurement of the whole VAT volume from computed tomography images using a multi-atlas segmentation (MAS) method.
METHODS: A total of 31 sets of whole-body computed tomography volume data were used. The reference VAT volume was defined on the basis of manual segmentation (VATMANUAL). We developed an algorithm, called VATMAS_SIMPLE, which automatically measured the VAT volumes using a MAS based on a nonrigid volume registration algorithm coupled with a selective and iterative method for performance level estimation (SIMPLE). The results were evaluated using intraclass correlation coefficients and Dice similarity coefficients.
RESULTS: The intraclass correlation coefficient of VATMAS_SIMPLE was excellent, at 0.976 (confidence interval, 0.943-0.989) (P < 0.001). The Dice similarity coefficient of VATMAS_SIMPLE was also good, at 0.905 (SD, 0.076).
CONCLUSIONS: This newly developed algorithm based on a MAS can accurately measure the whole abdominopelvic VAT.
Automated MRI based pipeline for segmentation and prediction of grade, IDH mutation and 1p19q co-deletion in glioma
Decuyper, M.
Bonte, S.
Deblaere, K.
Van Holen, R.
Comput Med Imaging Graph2021Journal Article, cited 42 times
TCGA-GBM
TCGA-LGG
LGG-1p19qDeletion
BraTS 2019
*Brain Neoplasms/diagnostic imaging/genetics
*Glioma/diagnostic imaging/genetics
Humans
Isocitrate dehydrogenase (IDH) mutation
Isocitrate Dehydrogenase/genetics
Magnetic Resonance Imaging (MRI)
Mutation
Retrospective Studies
Algorithm Development
Deep learning
Glioma
PyTorch
Molecular markers
Segmentation
ReLU
In the WHO glioma classification guidelines grade (glioblastoma versus lower-grade glioma), IDH mutation and 1p/19q co-deletion status play a central role as they are important markers for prognosis and optimal therapy planning. Currently, diagnosis requires invasive surgical procedures. Therefore, we propose an automatic segmentation and classification pipeline based on routinely acquired pre-operative MRI (T1, T1 postcontrast, T2 and/or FLAIR). A 3D U-Net was designed for segmentation and trained on the BraTS 2019 training dataset. After segmentation, the 3D tumor region of interest is extracted from the MRI and fed into a CNN to simultaneously predict grade, IDH mutation and 1p19q co-deletion. Multi-task learning allowed to handle missing labels and train one network on a large dataset of 628 patients, collected from The Cancer Imaging Archive and BraTS databases. Additionally, the network was validated on an independent dataset of 110 patients retrospectively acquired at the Ghent University Hospital (GUH). Segmentation performance calculated on the BraTS validation set shows an average whole tumor dice score of 90% and increased robustness to missing image modalities by randomly excluding input MRI during training. Classification area under the curve scores are 93%, 94% and 82% on the TCIA test data and 94%, 86% and 87% on the GUH data for grade, IDH and 1p19q status respectively. We developed a fast, automatic pipeline to segment glioma and accurately predict important (molecular) markers based on pre-therapy MRI.
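The multi-task trick for handling missing labels amounts to masking absent targets out of the loss; a minimal PyTorch sketch with three hypothetical binary heads (grade, IDH, 1p/19q), assuming missing labels are coded as -1, and not the authors' exact network:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskHead(nn.Module):
    """Shared features feeding one binary head per marker (illustrative)."""
    def __init__(self, in_features: int = 128):
        super().__init__()
        self.heads = nn.ModuleDict({
            name: nn.Linear(in_features, 1)
            for name in ("grade", "idh", "codeletion")
        })

    def forward(self, feats):
        return {name: head(feats).squeeze(1) for name, head in self.heads.items()}

def masked_multitask_loss(logits, labels):
    """Labels coded -1 are missing and contribute no gradient."""
    total = 0.0
    for name, logit in logits.items():
        mask = labels[name] >= 0
        if mask.any():
            total = total + F.binary_cross_entropy_with_logits(
                logit[mask], labels[name][mask].float())
    return total

model = MultiTaskHead()
feats = torch.randn(4, 128)                       # stand-in for CNN features
labels = {"grade": torch.tensor([1, 0, 1, -1]),   # -1 = label unavailable
          "idh": torch.tensor([-1, -1, 1, 0]),
          "codeletion": torch.tensor([0, -1, -1, -1])}
masked_multitask_loss(model(feats), labels).backward()
```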
Waiting for Big Changes in Limited-Stage Small-Cell Lung Cancer: For Now, More of the Same
Deek, Matthew P.
Haigentz, Missak
Jabbour, Salma K.
Journal of Clinical Oncology2023Journal Article, cited 0 times
ACRIN-NSCLC-FDG-PET
The Oncology Grand Rounds series is designed to place original reports published in the Journal into clinical context. A case presentation is followed by a description of diagnostic and management challenges, a review of the relevant literature, and a summary of the authors' suggested management approaches. The goal of this series is to help readers better understand how to apply the results of key studies, including those published in Journal of Clinical Oncology, to patients seen in their own clinical practice.Concurrent chemoradiotherapy remains central to the treatment of limited-stage small-cell lung cancer (SCLC). SCLC is one of the few tumors treated with twice-daily radiotherapy (RT) in the primary definitive setting, a regimen that was established when Intergroup 0096 demonstrated its superiority over once-daily RT. However, questions remained about the optimal chemoradiotherapy regimen given the low RT dose used in the once-daily RT arm of Intergroup 0096. CALGB 30610/RTOG 0538 and CONVERT attempted to establish whether dose-escalated once-daily RT was superior to twice-daily RT in limited-stage SCLC. Although both studies showed similar survival between treatment regimens, once-daily RT was not found to be superior to twice-daily RT, and trial design limited the ability to conclude dose-escalated once-daily RT as noninferior to twice-daily RT. Thus, twice-daily RT with concurrent chemotherapy remains a standard of care in limited-stage SCLC.
Directional local ternary quantized extrema pattern: A new descriptor for biomedical image indexing and retrieval
Deep, G
Kaur, L
Gupta, S
Engineering Science and Technology, an International Journal2016Journal Article, cited 9 times
LIDC-IDRI
Algorithm Development
Computed Tomography (CT)
Magnetic resonance imaging (MRI)
Texture features
Local mesh ternary patterns: a new descriptor for MRI and CT biomedical image indexing and retrieval
Deep, G.
Kaur, L.
Gupta, S.
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization2016Journal Article, cited 3 times
Algorithm Development
LIDC-IDRI
This paper proposes a new pattern-based feature called the local mesh ternary pattern for biomedical image indexing and retrieval. The standard local binary patterns (LBP) and local ternary patterns (LTP) encode the greyscale relationship between the centre pixel and its surrounding neighbours in a two-dimensional (2D) local region of an image, whereas the proposed method encodes the greyscale relationship among the neighbours for a given centre pixel with three selected directions of mesh patterns generated from the 2D image. The novelty of the proposed method is that it uses ternary patterns from mesh patterns of an image to encode more spatial structure information, which leads to better retrieval. Experiments have been carried out to prove the worth of the proposed algorithm on three benchmarked biomedical databases: (i) computed tomography (CT) scanned lung image databases, LIDC-IDRI-CT and VIA/I-ELCAP-CT, and (ii) a brain magnetic resonance imaging (MRI) database, OASIS-MRI. The results demonstrate that the proposed method yields better performance in terms of average retrieval precision and average retrieval rate over state-of-the-art feature extraction techniques like LBP, LTP, local mesh pattern, etc.
Predicting response before initiation of neoadjuvant chemotherapy in breast cancer using new methods for the analysis of dynamic contrast enhanced MRI (DCE MRI) data
The pharmacokinetic parameters derived from dynamic contrast enhanced (DCE) MRI have shown promise as biomarkers for tumor response to therapy. However, standard methods of analyzing DCE MRI data (the Tofts model) require high temporal resolution, high signal-to-noise ratio (SNR), and the arterial input function (AIF). Such models produce reliable biomarkers of response only when a therapy has a large effect on the parameters. We recently reported a method that addresses these limitations, the Linear Reference Region Model (LRRM). Like other reference region models, the LRRM needs no AIF. Additionally, the LRRM is more accurate and precise than standard methods at low SNR and slow temporal resolution, suggesting LRRM-derived biomarkers could be better predictors. Here, the LRRM, Non-linear Reference Region Model (NRRM), Linear Tofts Model (LTM), and Non-linear Tofts Model (NLTM) were used to estimate the RKtrans between muscle and tumor (or the Ktrans for Tofts) and the tumor kep,TOI for 39 breast cancer patients who received neoadjuvant chemotherapy (NAC). These parameters and the receptor status of each patient were used to construct cross-validated predictive models to classify patients as complete pathological responders (pCR) or non-complete pathological responders (non-pCR) to NAC. Model performance was evaluated using the area under the ROC curve (AUC). The AUC for receptor status alone was 0.62, while the best performances using predictors from the LRRM, NRRM, LTM, and NLTM were AUCs of 0.79, 0.55, 0.60, and 0.59, respectively. This suggests that the LRRM can be used to predict response to NAC in breast cancer.
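The LRRM reduces the reference region model to a linear least-squares problem; a sketch of that fit, assuming the published linearized form C_toi(t) = RKtrans*C_rr(t) + RKtrans*kep_rr*∫C_rr dt' - kep_toi*∫C_toi dt' with muscle as the reference region, and with synthetic curves standing in for real DCE-MRI data:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def fit_lrrm(t, c_toi, c_rr):
    """Least-squares fit of the linearized reference region model,
    C_toi(t) = b1*C_rr(t) + b2*int(C_rr) - b3*int(C_toi),
    where b1 = RKtrans, b3 = kep_toi, and b2 = RKtrans*kep_rr."""
    int_rr = cumulative_trapezoid(c_rr, t, initial=0.0)
    int_toi = cumulative_trapezoid(c_toi, t, initial=0.0)
    A = np.column_stack([c_rr, int_rr, -int_toi])
    b1, b2, b3 = np.linalg.lstsq(A, c_toi, rcond=None)[0]
    return {"RKtrans": b1, "kep_toi": b3, "kep_rr": b2 / b1}

# Synthetic concentration curves (arbitrary units) just to exercise the fit
t = np.linspace(0, 5, 60)
c_rr = 1 - np.exp(-t)
c_toi = 0.8 * (1 - np.exp(-0.7 * t))
print(fit_lrrm(t, c_toi, c_rr))
```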
Evaluating ChatGPT-4V in chest CT diagnostics: a critical image interpretation assessment
To assess the diagnostic accuracy of ChatGPT-4V in interpreting a set of four chest CT slices per case for COVID-19, non-small cell lung cancer (NSCLC), and control cases, thereby evaluating its potential as an AI tool in radiological diagnostics.
Risk Factors of Recurrence and Metastasis of Breast Cancer Sub-types Based on Magnetic Resonance Imaging Techniques
This study presents an analysis of breast cancer risk factors, focusing on metastasis and recurrence. Using MRI, we identified variables that could differentiate cancer sub-types, detect recurring cancers, and identify metastasizing cancers. Contrary to some studies, we found no higher incidence of metastasis or recurrence for the Triple Negative sub-type. However, the HER2 type showed a higher likelihood of metastasis. In addition, from 529 features obtained from DCE-MRI images, we identified 21 MRI-derived variables as sub-type indicators, 9 as recurrence indicators, and 10 as metastasis indicators. Our findings aim to refine these imaging targets and highlight important information, assisting those who use MRI to diagnose breast cancer.
Mesoscopic imaging of glioblastomas: Are diffusion, perfusion and spectroscopic measures influenced by the radiogenetic phenotype?
Demerath, Theo
Simon-Gabriel, Carl Philipp
Kellner, Elias
Schwarzwald, Ralf
Lange, Thomas
Heiland, Dieter Henrik
Reinacher, Peter
Staszewski, Ori
Mast, Hansjorg
Kiselev, Valerij G
Egger, Karl
Urbach, Horst
Weyerbrock, Astrid
Mader, Irina
Neuroradiology Journal2017Journal Article, cited 5 times
Radiogenomics
RIDER NEURO MRI
Magnetic resonance imaging (MRI)
Glioblastoma Multiforme (GBM)
The purpose of this study was to identify markers from perfusion, diffusion, and chemical shift imaging in glioblastomas (GBMs) and to correlate them with genetically determined and previously published patterns of structural magnetic resonance (MR) imaging. Twenty-six patients (mean age 60 years, 13 female) with GBM were investigated. Imaging consisted of native and contrast-enhanced 3D data, perfusion, diffusion, and spectroscopic imaging. In the presence of minor necrosis, cerebral blood volume (CBV) was higher (median +/- SD, 2.23% +/- 0.93) than in pronounced necrosis (1.02% +/- 0.71), pcorr = 0.0003. CBV adjacent to peritumoral fluid-attenuated inversion recovery (FLAIR) hyperintensity was lower in edema (1.72% +/- 0.31) than in infiltration (1.91% +/- 0.35), pcorr = 0.039. Axial diffusivity adjacent to peritumoral FLAIR hyperintensity was lower in severe mass effect (1.08 x 10^-3 mm^2/s +/- 0.08) than in mild mass effect (1.14 x 10^-3 mm^2/s +/- 0.06), pcorr = 0.048. Myo-inositol was positively correlated with a marker for mitosis (Ki-67) in contrast-enhancing tumor, r = 0.5, pcorr = 0.0002. Altered CBV and axial diffusivity in adjacent normal-appearing matter, even outside the FLAIR hyperintensity, may be related to angiogenesis pathways and to activated proliferation genes. The correlation between myo-inositol and Ki-67 might be attributed to its binding to cell surface receptors regulating tumorous proliferation of astrocytic cells.
MultiGradICON: A Foundation Model for Multimodal Medical Image Registration
Demir, Başar
Tian, Lin
Greer, Hastings
Kwitt, Roland
Vialard, François-Xavier
Estépar, Raúl San José
Bouix, Sylvain
Rushmore, Richard
Ebrahim, Ebrahim
Niethammer, Marc
2024Book Section, cited 0 times
Pancreatic-CT-CBCT-SEG
TCGA-KIRC
4D-Lung
TCGA-KIRP
Modern medical image registration approaches predict deformations using deep networks. These approaches achieve state-of-the-art (SOTA) registration accuracy and are generally fast. However, deep learning (DL) approaches are, in contrast to conventional non-deep-learning-based approaches, anatomy-specific. Recently, a universal deep registration approach, uniGradICON, has been proposed. However, uniGradICON focuses on monomodal image registration. In this work, we therefore develop multiGradICON as a first step towards universal multimodal medical image registration. Specifically, we show that 1) we can train a DL registration model that is suitable for monomodal and multimodal registration; 2) loss function randomization can increase multimodal registration accuracy; and 3) training a model with multimodal data helps multimodal generalization. Our code and the multiGradICON model are available at https://github.com/uncbiag/uniGradICON.
Computer-aided detection of lung nodules using outer surface features
Demir, Önder
Yılmaz Çamurcu, Ali
Bio-Medical Materials and Engineering2015Journal Article, cited 28 times
LIDC-IDRI
Computed Tomography (CT)
Computer Aided Detection (CADe)
LUNG
Classification
In this study, a computer-aided detection (CAD) system was developed for the detection of lung nodules in computed tomography images. The CAD system consists of four phases, including two-dimensional and three-dimensional preprocessing phases. In the feature extraction phase, four different groups of features are extracted from volumes of interest: morphological features, statistical and histogram features, statistical and histogram features of the outer surface, and texture features of the outer surface. The support vector machine algorithm is optimized using particle swarm optimization for classification. The CAD system provides 97.37% sensitivity, 86.38% selectivity, 88.97% accuracy, and 2.7 false positives per scan using three groups of classification features. After the inclusion of outer surface texture features, the classification results of the CAD system reach 98.03% sensitivity, 87.71% selectivity, 90.12% accuracy, and 2.45 false positives per scan. Experimental results demonstrate that outer surface texture features of nodule candidates are useful for increasing sensitivity and decreasing the number of false positives in the detection of lung nodules in computed tomography images.
Uncertainty-Based Dynamic Graph Neighborhoods for Medical Segmentation
In recent years, deep learning based methods have shown success in essential medical image analysis tasks such as segmentation. Post-processing and refining the results of segmentation is a common practice to decrease the misclassifications originating from the segmentation network. In addition to widely used methods like Conditional Random Fields (CRFs), which focus on the structure of the segmented volume/area, a recent graph-based approach makes use of certain and uncertain points in a graph and refines the segmentation with a small graph convolutional network (GCN). However, this approach has two drawbacks: most of the edges in the graph are assigned randomly, and the GCN is trained independently from the segmentation network. To address these issues, we define a new neighbor-selection mechanism according to feature distances and combine the two networks in the training procedure. According to the experimental results on pancreas segmentation from Computed Tomography (CT) images, we demonstrate improvement in the quantitative measures. Also, examining the dynamic neighbors created by our method, edges between semantically similar image parts are observed. The proposed method also shows qualitative enhancements in the segmentation maps, as demonstrated in the visual results.
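The feature-distance neighbor selection can be sketched as a k-nearest-neighbor query in feature space; a minimal scikit-learn illustration with hypothetical per-node feature vectors, not the paper's implementation:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def dynamic_neighbors(feats: np.ndarray, k: int = 5) -> np.ndarray:
    """Connect each graph node to its k nearest neighbours in feature
    space instead of assigning edges randomly (the idea sketched here)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(feats)
    _, idx = nn.kneighbors(feats)
    return idx[:, 1:]                  # drop the self-neighbour column

# Hypothetical feature vectors for certain/uncertain points
rng = np.random.default_rng(0)
node_feats = rng.random((50, 32))
edges = dynamic_neighbors(node_feats)
print(edges.shape)                     # (50, 5): 5 neighbours per node
```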
radMLBench: A dataset collection for benchmarking in radiomics
Demircioglu, A.
Comput Biol Med2024Journal Article, cited 0 times
C4KC-KiTS
Colorectal-Liver-Metastases
HCC-TACE-Seg
HNSCC
Head-Neck-PET-CT
HEAD-NECK-RADIOMICS-HN1
ISPY1
LGG-1p19qDeletion
Meningioma-SEG-CLASS
NSCLC Radiogenomics
PI-CAI
Prostate-MRI-US-Biopsy
QIN-HEADNECK
UCSF-PDGM
UPENN-GBM
RSNA-ASNR-MICCAI-BraTS-2021
BraTS 2021
Benchmarking
High-dimensional datasets
Machine learning
Methodology
Radiomics
BACKGROUND: New machine learning methods and techniques are frequently introduced in radiomics, but they are often tested on a single dataset, which makes it challenging to assess their true benefit. Currently, there is a lack of a larger, publicly accessible dataset collection on which such assessments could be performed. In this study, a collection of radiomics datasets with binary outcomes in tabular form was curated to allow benchmarking of machine learning methods and techniques. METHODS: A variety of journals and online sources were searched to identify tabular radiomics data with binary outcomes, which were then compiled into a homogeneous data collection that is easily accessible via Python. To illustrate the utility of the dataset collection, it was applied to investigate whether feature decorrelation prior to feature selection could improve predictive performance in a radiomics pipeline. RESULTS: A total of 50 radiomic datasets were collected, with sample sizes ranging from 51 to 969 and feature counts from 101 to 11165. Using these data, it was observed that decorrelating features did not yield any significant improvement on average. CONCLUSIONS: A large collection of datasets, easily accessible via Python and suitable for benchmarking and evaluating new machine learning techniques and methods, was curated. Its utility was exemplified by demonstrating that feature decorrelation prior to feature selection does not, on average, lead to significant performance gains and could be omitted, thereby increasing the robustness and reliability of the radiomics pipeline.
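The feature-decorrelation step whose benefit the study evaluates can be sketched as dropping one member of every highly correlated feature pair before selection; the 0.95 threshold below is an assumed value:

```python
import numpy as np
import pandas as pd

def decorrelate(features: pd.DataFrame, threshold: float = 0.95) -> pd.DataFrame:
    """Drop one feature from every pair whose absolute Pearson
    correlation exceeds the threshold (illustrative preprocessing)."""
    corr = features.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return features.drop(columns=to_drop)

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.random((100, 5)), columns=list("abcde"))
X["f"] = X["a"] * 0.999 + 0.001 * rng.random(100)   # nearly duplicates "a"
print(decorrelate(X).columns.tolist())              # "f" is removed
```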
CT-based radiomics stratification of tumor grade and TNM stage of clear cell renal cell carcinoma
Demirjian, Natalie L
Varghese, Bino A
Cen, Steven Y
Hwang, Darryl H
Aron, Manju
Siddiqui, Imran
Fields, Brandon K K
Lei, Xiaomeng
Yap, Felix Y
Rivas, Marielena
Reddy, Sharath S
Zahoor, Haris
Liu, Derek H
Desai, Mihir
Rhie, Suhn K
Gill, Inderbir S
Duddalwar, Vinay
Eur Radiol2021Journal Article, cited 0 times
TCGA-KIRC
Radiomics
Machine learning
Manual segmentation
KIDNEY
OBJECTIVES: To evaluate the utility of CT-based radiomics signatures in discriminating low-grade (grades 1-2) clear cell renal cell carcinomas (ccRCC) from high-grade (grades 3-4) and low TNM stage (stages I-II) ccRCC from high TNM stage (stages III-IV). METHODS: A total of 587 subjects (mean age 60.2 years +/- 12.2; range 22-88.7 years) with ccRCC were included. A total of 255 tumors were high grade and 153 were high stage. For each subject, one dominant tumor was delineated as the region of interest (ROI). Our institutional radiomics pipeline was then used to extract 2824 radiomics features across 12 texture families from the manually segmented volumes of interest. Separate iterations of the machine learning models using all extracted features (full model) as well as only a subset of previously identified robust metrics (robust model) were developed. Variable of importance (VOI) analysis was performed using the out-of-bag Gini index to identify the top 10 radiomics metrics driving each classifier. Model performance was reported using area under the receiver operating curve (AUC). RESULTS: The highest AUC to distinguish between low- and high-grade ccRCC was 0.70 (95% CI 0.62-0.78) and the highest AUC to distinguish between low- and high-stage ccRCC was 0.80 (95% CI 0.74-0.86). Comparable AUCs of 0.73 (95% CI 0.65-0.8) and 0.77 (95% CI 0.7-0.84) were reported using the robust model for grade and stage classification, respectively. VOI analysis revealed the importance of neighborhood operation-based methods, including GLCM, GLDM, and GLRLM, in driving the performance of the robust models for both grade and stage classification. CONCLUSION: Post-validation, CT-based radiomics signatures may prove to be useful tools to assess ccRCC grade and stage and could potentially add to current prognostic models. Multiphase CT-based radiomics signatures have potential to serve as a non-invasive stratification schema for distinguishing between low- and high-grade as well as low- and high-stage ccRCC. KEY POINTS: * Radiomics signatures derived from clinical multiphase CT images were able to stratify low- from high-grade ccRCC, with an AUC of 0.70 (95% CI 0.62-0.78). * Radiomics signatures derived from multiphase CT images yielded discriminative power to stratify low from high TNM stage in ccRCC, with an AUC of 0.80 (95% CI 0.74-0.86). * Models created using only robust radiomics features achieved comparable AUCs of 0.73 (95% CI 0.65-0.80) and 0.77 (95% CI 0.70-0.84) to the model with all radiomics features in classifying ccRCC grade and stage, respectively.
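The variable-of-importance analysis described above rests on the out-of-bag Gini importance of a random forest; a sketch on a hypothetical radiomics matrix (scikit-learn exposes Gini importances as feature_importances_):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical radiomics matrix: rows = tumors, columns = texture metrics
rng = np.random.default_rng(0)
X = rng.random((300, 50))
y = (X[:, 3] + 0.3 * rng.random(300) > 0.8).astype(int)   # e.g. high vs low grade

forest = RandomForestClassifier(n_estimators=500, oob_score=True,
                                random_state=0).fit(X, y)
print("OOB accuracy:", round(forest.oob_score_, 3))

# Variable-of-importance analysis: top 10 features by Gini importance
top10 = np.argsort(forest.feature_importances_)[::-1][:10]
print("top-10 feature indices:", top10)
```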
Residual 3D U-Net with Localization for Brain Tumor Segmentation
Gliomas are brain tumors originating from the neuronal support tissue called glia, which can be benign or malignant. They are considered rare tumors, whose highly variable prognosis is primarily related to several factors, including localization, size, degree of extension, and certain immune factors. We propose an approach using a Residual 3D U-Net to segment these tumors with localization, a technique for centering and reducing the size of input images to make predictions more accurate and faster. We incorporated different training and post-processing techniques such as cross-validation and a minimum pixel threshold.
Investigation of inter-fraction target motion variations in the context of pencil beam scanned proton therapy in non-small cell lung cancer patients
den Otter, L. A.
Anakotta, R. M.
Weessies, M.
Roos, C. T. G.
Sijtsema, N. M.
Muijs, C. T.
Dieters, M.
Wijsman, R.
Troost, E. G. C.
Richter, C.
Meijers, A.
Langendijk, J. A.
Both, S.
Knopf, A. C.
Med Phys2020Journal Article, cited 0 times
4D-Lung
PURPOSE: For locally advanced-stage non-small cell lung cancer (NSCLC), inter-fraction target motion variations during the whole time span of a fractionated treatment course are assessed in a large and representative patient cohort. The primary objective is to develop a suitable motion monitoring strategy for pencil beam scanning proton therapy (PBS-PT) treatments of NSCLC patients during free breathing. METHODS: Weekly 4D computed tomography (4DCT; 41 patients) and daily 4D cone beam computed tomography (4DCBCT; 10 of 41 patients) scans were analyzed for a fully fractionated treatment course. Gross tumor volumes (GTVs) were contoured and the 3D displacement vectors of the centroid positions were compared for all scans. Furthermore, motion amplitude variations in different lung segments were statistically analyzed. The dosimetric impact of target motion variations and target motion assessment was investigated in exemplary patient cases. RESULTS: The median observed centroid motion was 3.4 mm (range: 0.2-12.4 mm) with an average variation of 2.2 mm (range: 0.1-8.8 mm). Ten of 32 patients (31.3%) with an initial motion <5 mm increased beyond a 5-mm motion amplitude during the treatment course. Motion observed in the 4DCBCT scans deviated on average 1.5 mm (range: 0.0-6.0 mm) from the motion observed in the 4DCTs. Larger motion variations for one example patient compromised treatment plan robustness while no dosimetric influence was seen due to motion assessment biases in another example case. CONCLUSIONS: Target motion variations were investigated during the course of radiotherapy for NSCLC patients. Patients with initial GTV motion amplitudes of < 2 mm can be assumed to be stable in motion during the treatment course. For treatments of NSCLC patients who exhibit motion amplitudes of > 2 mm, 4DCBCT should be considered for motion monitoring due to substantial motion variations observed.
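The centroid-displacement comparison underlying this motion analysis can be sketched from binary GTV masks with SciPy; the voxel spacing below is an assumed example:

```python
import numpy as np
from scipy import ndimage as ndi

def centroid_displacement(mask_a: np.ndarray, mask_b: np.ndarray,
                          voxel_mm=(1.0, 1.0, 1.0)) -> float:
    """3D displacement (mm) between the centroids of two binary GTV
    masks, e.g. planning 4DCT vs a weekly repeat scan (illustrative)."""
    ca = np.array(ndi.center_of_mass(mask_a)) * voxel_mm
    cb = np.array(ndi.center_of_mass(mask_b)) * voxel_mm
    return float(np.linalg.norm(ca - cb))

# Toy masks: the second "tumor" is shifted 2 voxels along the first axis
a = np.zeros((20, 20, 20)); a[5:9, 5:9, 5:9] = 1
b = np.zeros((20, 20, 20)); b[7:11, 5:9, 5:9] = 1
print(centroid_displacement(a, b, voxel_mm=(2.0, 1.0, 1.0)))   # 4.0 mm
```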
Chest imaging representing a COVID-19 positive rural U.S. population
Desai, Shivang
Baghal, Ahmad
Wongsurawat, Thidathip
Jenjaroenpun, Piroon
Powell, Thomas
Al-Shukri, Shaymaa
Gates, Kim
Farmer, Phillip
Rutherford, Michael
Blake, Geri
Nolan, Tracy
Sexton, Kevin
Bennett, William
Smith, Kirk
Syed, Shorabuddin
Prior, Fred
Scientific Data2020Journal Article, cited 0 times
COVID-19-AR
As the COVID-19 pandemic unfolds, radiology imaging is playing an increasingly vital role in determining therapeutic options, patient management, and research directions. Publicly available data are essential to drive new research into disease etiology, early detection, and response to therapy. In response to the COVID-19 crisis, the National Cancer Institute (NCI) has extended the Cancer Imaging Archive (TCIA) to include COVID-19 related images. Rural populations are one population at risk for underrepresentation in such public repositories. We have published in TCIA a collection of radiographic and CT imaging studies for patients who tested positive for COVID-19 in the state of Arkansas. A set of clinical data describes each patient including demographics, comorbidities, selected lab data and key radiology findings. These data are cross-linked to SARS-COV-2 cDNA sequence data extracted from clinical isolates from the same population, uploaded to the GenBank repository. We believe this collection will help to address population imbalance in COVID-19 data by providing samples from this normally underrepresented population.
Computer-Aided Detection for Early Detection of Lung Cancer Using CT Images
Doctors face difficulty in the diagnosis of lung cancer due to the complex nature and clinical interrelations of computed tomography scan images. The visual inspection and subjective evaluation methods are time consuming and tedious, which leads to inter- and intra-observer inconsistency or imprecise classification. Computer-Aided Detection (CAD) can help clinicians with objective decision-making, early diagnosis, and classification of cancerous abnormalities. In this work, CAD has been employed to enhance the accuracy, sensitivity, and specificity of automated detection, in which the phases of lung cancer are discriminated using image processing tools. Cancer is the second leading cause of death among non-communicable diseases worldwide, and lung cancer is the most dangerous form of cancer affecting both genders. Either or both sides of the lung begin to expand during the uncontrolled growth of abnormal cells. The most widely used imaging technique for lung cancer diagnosis is Computerised Tomography (CT) scanning. In this work, the CAD method is used to differentiate between the phases of lung cancer images. Abnormality detection consists of four steps: pre-processing, segmentation, feature extraction, and classification of input CT images. For the segmentation process, marker-controlled watershed segmentation and the K-means algorithm are used. From CT images, normal and abnormal information is extracted and its characteristics are determined. Stages 1-4 of cancerous imaging were discriminated and graded with approximately 80% efficiency using a feedforward backpropagation neural network. Input data were collected from the Lung Image Database Consortium (LIDC), from which 100 of the 1018 dataset cases were used. For the output display, a graphical user interface (GUI) is developed. This automated and robust CAD system is necessary for accurate and quick screening of the mass population.
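The marker-controlled watershed step can be sketched with scikit-image, seeding markers from peaks of the distance transform inside an Otsu foreground mask; this is a generic illustration rather than the paper's tuned pipeline:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

def marker_watershed(ct_slice: np.ndarray) -> np.ndarray:
    """Marker-controlled watershed: markers come from distance-map peaks
    inside an Otsu foreground mask (illustrative parameters)."""
    mask = ct_slice > threshold_otsu(ct_slice)
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, labels=mask, min_distance=5)
    markers = np.zeros_like(ct_slice, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=mask)

# Synthetic slice with two bright blobs standing in for lesions
rng = np.random.default_rng(0)
img = rng.normal(0.1, 0.02, (96, 96))
img[20:40, 20:40] += 0.8
img[60:80, 55:75] += 0.8
print(np.unique(marker_watershed(img)))   # background 0 plus blob labels
```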
Microscopic Analysis of Blood Cells for Disease Detection: A Review
Deshpande, Nilkanth Mukund
Gite, Shilpa Shailesh
Aluvalu, Rajanikanth
2021Book Section, cited 0 times
C-NMC 2019
Any infection in the human body can prompt changes in blood cell morphology and various cell parameters. Microscopic images of blood cells are examined to recognize infection in the body and to predict diseases and abnormalities. Appropriate segmentation of these cells makes disease detection more accurate and robust. Microscopic blood cell analysis is a critical activity in pathological analysis. It supports the investigation of the appropriate disease after accurate detection, followed by classification of abnormalities, which plays an essential role in the diagnosis of various disorders, treatment planning, and evaluation of treatment outcomes. A survey of different areas where microscopic imaging of blood cells is used for disease detection is presented in this paper. A brief note on blood composition is given, followed by a generalized methodology for microscopic blood image analysis for certain medical imaging applications. A comparison of existing methodologies proposed by researchers for disease detection using microscopic blood cell image analysis is also discussed.
Development of a nomogram combining clinical staging with 18F-FDG PET/CT image features in non-small-cell lung cancer stage I–III
Desseroit, Marie-Charlotte
Visvikis, Dimitris
Tixier, Florent
Majdoub, Mohamed
Perdrisot, Rémy
Guillevin, Rémy
Le Rest, Catherine Cheze
Hatt, Mathieu
European Journal of Nuclear Medicine and Molecular Imaging2016Journal Article, cited 34 times
RIDER Lung PET-CT
3D Slicer
Texture features
18F-FDG PET/CT
Non-Small Cell Lung Cancer (NSCLC)
MedCalc
Kaplan-Meier curve
U-Net based Pancreas Segmentation from Computed Tomography Images
Delineation of the pancreas from computed tomography (CT) scan images is burdensome owing to its anatomic variation in size and shape and its position with respect to adjacent organs. This work explores the U-Net architecture to delineate the pancreas from CT volumes of the abdomen. U-Net was trained for a variable number of epochs to identify the optimal learning environment for better segmentation performance. When trained for pancreas segmentation using a CT dataset taken from The Cancer Imaging Archive (TCIA), U-Net achieved a Dice similarity coefficient of 0.8138, an intersection over union of 0.7962, and a boundary F1 score of 0.8036.
Survey of Leukemia Cancer Cell Detection Using Image Processing
Devi, Tulasi Gayatri
Patil, Nagamma
Rai, Sharada
Philipose, Cheryl Sarah
2022Book Section, cited 0 times
SN-AM
Cancer is the development of abnormal cells that divide at an abnormal pace, uncontrollably. Cancerous cells have the ability to destroy other normal tissues and can spread throughout the body. Cancer cells can develop in various parts of the body. This paper focuses on leukemia, a type of blood cancer. Blood cancer usually starts in the bone marrow, where the blood is produced. The types of blood cancer are leukemia, non-Hodgkin lymphoma, Hodgkin lymphoma, and multiple myeloma. Leukemia is a type of blood cancer that originates in the bone marrow; it is seen when the body produces an abnormal amount of white blood cells that hinder the bone marrow from creating red blood cells and platelets. Several detection methods to identify cancerous cells have been proposed. Identification of cancer cells through cell image processing is very complex. The use of computer-aided image processing allows the images to be viewed in 2D and 3D, making it easier to identify cancerous cells. The cells have to undergo segmentation and classification in order to identify cancerous tumours. Several papers propose segmentation methods, classification methods, or both. The purpose of this survey is to review various papers that use either conventional methods or machine learning methods to classify cells as cancerous or non-cancerous.
Spatial habitats from multiparametric MR imaging are associated with signaling pathway activities and survival in glioblastoma
Dextraze, Katherine
Saha, Abhijoy
Kim, Donnie
Narang, Shivali
Lehrer, Michael
Rao, Anita
Narang, Saphal
Rao, Dinesh
Ahmed, Salmaan
Madhugiri, Venkatesh
Fuller, Clifton David
Kim, Michelle M
Krishnan, Sunil
Rao, Ganesh
Rao, Arvind
Oncotarget2017Journal Article, cited 0 times
Website
Radiomics
Glioblastoma Multiforme (GBM)
TCGA-GBM
Glioblastomas (GBM) show significant inter- and intra-tumoral heterogeneity, impacting response to treatment and limiting overall survival time to 12-15 months. To study glioblastoma phenotypic heterogeneity, multi-parametric magnetic resonance images (MRI) of 85 glioblastoma patients from The Cancer Genome Atlas were analyzed to characterize tumor-derived spatial habitats for their relationship with outcome (overall survival) and to identify their molecular correlates (i.e., to determine the associated tumor signaling pathways correlated with imaging-derived habitat measurements). Tumor sub-regions based on four sequences (fluid attenuated inversion recovery, T1-weighted, post-contrast T1-weighted, and T2-weighted) were defined by automated segmentation. From the relative intensity of pixels in the 3-dimensional tumor region, "imaging habitats" were identified and analyzed for their association with clinical and genetic data using survival modeling and Dirichlet regression, respectively. Sixteen distinct tumor sub-regions ("spatial imaging habitats") were derived, and those associated with overall survival (denoted "relevant" habitats) in glioblastoma patients were identified. Dirichlet regression implicated each relevant habitat with unique pathway alterations. Relevant habitats also had some pathways and cellular processes in common, including phosphorylation of STAT-1 and natural killer cell activity, consistent with cancer hallmarks. This work revealed the clinical relevance of MRI-derived spatial habitats and their relationship with oncogenic molecular mechanisms in patients with GBM. Characterizing the associations between imaging-derived phenotypic measurements and the genomic and molecular characteristics of tumors can enable insights into tumor biology, further enabling the practice of personalized cancer treatment. The analytical framework and workflow demonstrated in this study are inherently scalable to multiple MR sequences.
Social group optimization–assisted Kapur’s entropy and morphological segmentation for automated detection of COVID-19 infection from computed tomography images
Dey, Nilanjan
Rajinikanth, V
Fong, Simon James
Kaiser, M Shamim
Mahmud, Mufti
Cognitive Computation2020Journal Article, cited 0 times
LIDC-IDRI
RIDER
COVID-19
Segmentation
Machine Learning
A Fast Domain-Inspired Unsupervised Method to Compute COVID-19 Severity Scores from Lung CT
Dey, Samiran
Kundu, Bijon
Basuchowdhuri, Partha
Saha, Sanjoy Kumar
Chakraborti, Tapabrata
Pattern Recognition2025Book Section, cited 0 times
Website
MIDRC-RICORD-1A
There has been a deluge of data-driven deep learning approaches to detect COVID-19 from computed tomography (CT) images over the pandemic, most of which use ad-hoc deep learning black boxes of little to no relevance to the actual process clinicians use, and which hence have not seen translation to real-life practical settings. Radiologists use a clinically established process of estimating the percentage of the affected area of the lung to grade the severity of infection out of a score of 0-25 from lung CT scans. Hence, any computer-automated process that aspires to be adopted in the clinic to alleviate the workload of radiologists, while being trustworthy and safe, needs to follow this clearly defined clinical process religiously. Keeping this in mind, we propose a simple yet effective methodology that uses explainable mechanistic modelling based on classical image processing and pattern recognition techniques. The proposed pipeline has no learning element and hence is fast. It mimics the clinical process and hence is transparent. We collaborate with an experienced radiologist to enhance an existing benchmark COVID-19 lung CT dataset by adding grading labels, which is another contribution of this paper, along with the methodology, which has a higher potential of becoming a clinical decision support system (CDSS) due to its rapid and explainable nature. The radiologist's gradings and the code are available at https://github.com/Samiran-Dey/explainable_seg.
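For orientation, the grading logic this entry describes can be sketched as follows: each of the five lung lobes receives a 0-5 sub-score from its affected fraction, and the sub-scores sum to a 0-25 severity score. The thresholds below follow one common convention and are an assumption, not necessarily the authors' exact values.

```python
# Hypothetical sketch of a 0-25 CT severity score from lobe involvement.
import numpy as np

def lobe_score(affected_fraction: float) -> int:
    """Map the affected fraction of a lobe to a 0-5 sub-score (assumed cut-offs)."""
    thresholds = [0.0, 0.05, 0.25, 0.50, 0.75]
    return sum(affected_fraction > t for t in thresholds)

def severity_score(lesion_mask: np.ndarray, lobe_masks: list[np.ndarray]) -> int:
    """Total 0-25 severity score from a lesion mask and five lobe masks."""
    score = 0
    for lobe in lobe_masks:
        fraction = np.logical_and(lesion_mask, lobe).sum() / max(lobe.sum(), 1)
        score += lobe_score(fraction)
    return score
```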
LeukoCapsNet: a resource-efficient modified CapsNet model to identify leukemia from blood smear images
Dhalla, Sabrina
Mittal, Ajay
Gupta, Savita
Neural Computing and Applications2023Journal Article, cited 0 times
Website
C-NMC 2019
Algorithm Development
Deep convolutional neural network (DCNN)
Computer Aided Detection (CADe)
Leukemia
Leukemia is one of the deadliest cancers; it spreads at an exponential rate and has a detrimental impact on leukocytes in human blood. To automate the process of leukemia detection, researchers have utilized deep learning networks to analyze blood smear images. In our research, we propose networks that mimic the actual working of the human brain. These models are fed features from numerous convolution layers, each with its own set of additional skip connections. The features are then stored and processed as vectors, making the model rotationally invariant as well, a characteristic not found in other deep learning networks, specifically convolutional neural networks (CNNs). The network is then pruned by 20% to make it more deployable in resource-constrained environments. This research also compares the model's performance across four ablation experiments and concludes that the proposed model is optimal. It has also been tested on three different datasets to highlight its robustness. The average values across all three datasets are: specificity 96.97%, sensitivity 96.81%, precision 96.79%, and accuracy 97.44%. In a nutshell, these outcomes make the proposed model, PrunedResCapsNet, more dynamic and effective compared with other existing methods.
Measurement of spiculation index in 3D for solitary pulmonary nodules in volumetric lung CT images
An Artificial Intelligence Based Approach Toward Predicting Mortality in Head and Neck Cancer Patients With Relation to Smoking and Clinical Data
Dhariwal, Naman
Hariprasad, Rithvik
Sundari, L. Mohana
IEEE Access2023Journal Article, cited 0 times
RADCURE
Head and neck cancers are among the most common cancers in the world, affecting the mouth, throat, and tongue. Lifestyle factors such as smoking and tobacco use have long been associated with the generation of cancerous cells in the body. This paper presents a novel approach to extracting the correlation between these lifestyle factors and head and neck cancers, supported by crucial cancer attributes such as tumor-node-metastasis staging and human papillomavirus status. Mortality prediction algorithms for head and neck cancers will help doctors pre-determine the most crucial factors and deliver specialized, targeted treatments. The paper used eight machine learning and four deep learning hyperparameter-tuned models to predict the mortality rate associated with head and neck cancer. The maximum accuracy of 98.8% was achieved by the gradient boosting algorithm. The feature importance of smoking and human papillomavirus positivity using the same classifier was approximately 4% and 2.5%, respectively. The most influential factor in mortality prediction was the duration of follow-up from diagnosis to the last contact date, with 40.8% importance. Quantitative results from the area under the receiver operating characteristic curve substantiate the classifiers' performance, with a maximum value of 0.99 for gradient boosting. This work can help medical professionals predict the mortality of cancer patients and choose appropriate treatments.
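As a minimal sketch of the kind of pipeline this entry describes, a gradient-boosting classifier with impurity-based feature importances can be set up as follows; the data and column names are illustrative placeholders, not the actual RADCURE fields.

```python
# Sketch: gradient boosting for mortality prediction with feature importances.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = np.random.rand(500, 5), np.random.randint(0, 2, 500)  # placeholder data
features = ["followup_duration", "smoking", "hpv_status", "t_stage", "age"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
for name, imp in zip(features, model.feature_importances_):
    print(f"{name}: {imp:.3f}")  # relative importance of each predictor
```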
Employing U-NET and RBCNN to Build an Automatic Lung Cancer Segmentation and Classification System
Dhivya, P.
Yamini, P.
2024Conference Paper, cited 0 times
LIDC-IDRI
Lung cancer is the most common cancer-related cause of death globally. Early diagnosis is key to effective lung cancer treatment and higher survival rates. Converting a radiologist's diagnostic procedure to a computer-assisted one yields more accurate results and earlier diagnosis. The difficulty lies in building an effective model for segmentation and classification. In this paper, we propose a lung cancer detection system that makes use of a number of methods for precise and efficient diagnosis. Our method pre-processes CT scan images using a Gaussian filter and contrast stretching to enhance image quality. The U-Net architecture with the Adam optimizer is used to determine the borders of lung nodules with high precision. Then, a Gaussian mixture model (GMM) with EM optimization and pixel padding is used to extract features. Using these shape variables as inputs, the rotational-based CNN (RBCNN) classifier categorizes the nodules as benign or malignant.
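The pre-processing step this entry names (Gaussian filtering plus contrast stretching) can be sketched in a few lines; the sigma and percentile values below are illustrative assumptions.

```python
# Sketch: CT slice pre-processing with Gaussian smoothing and contrast stretching.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import exposure

def preprocess_ct_slice(slice_hu: np.ndarray) -> np.ndarray:
    smoothed = gaussian_filter(slice_hu, sigma=1.0)      # noise suppression
    p2, p98 = np.percentile(smoothed, (2, 98))           # robust intensity range
    return exposure.rescale_intensity(smoothed, in_range=(p2, p98))
```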
Ensemble Methods with [18F]FDG-PET/CT Radiomics in Breast Cancer Response Prediction
Pathological complete response (pCR) after neoadjuvant chemotherapy (NAC) in patients with breast cancer was found to improve survival, and it has great prognostic value in the aggressive tumor subtype. This study aims to predict pCR before NAC treatment with a radiomic feature-based ensemble learning model using both positron emission tomography (PET) and computed tomography (CT) images taken from the online QIN-Breast dataset. It studies the problem of constructing an end-to-end classification pipeline that includes large-scale radiomic feature extraction, hybrid iterative feature selection, and heterogeneous weighted ensemble classification. The proposed hybrid feature selection procedure can identify significant radiomic predictors out of 2153 features extracted from delineated tumour regions. The proposed weighted ensemble approach aggregates the outcomes of four weak classifiers (decision tree, naive Bayes, K-nearest neighbour, and logistic regression) based on their importance. The empirical study demonstrates that the proposed feature selection-cum-ensemble classification method achieved 92% and 88.4% balanced accuracy in PET and CT, respectively. The PET/CT aggregated model performed better, achieving 98% balanced accuracy and a 94.74% F1-score. Furthermore, this study is the first classification work on the online QIN-Breast dataset.
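A weighted heterogeneous ensemble of the kind described here can be sketched with scikit-learn's soft voting; the weights below are illustrative, whereas the paper derives them from classifier importance.

```python
# Sketch: weighted soft-voting ensemble of four weak classifiers.
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

ensemble = VotingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(max_depth=3)),
        ("nb", GaussianNB()),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",                    # average predicted probabilities
    weights=[1.0, 0.8, 0.9, 1.2],     # assumed importance weights
)
# Usage: ensemble.fit(X_train, y_train); ensemble.predict(X_test)
```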
Deep learning in head & neck cancer outcome prediction
Diamant, André
Chatterjee, Avishek
Vallières, Martin
Shenouda, George
Seuntjens, Jan
Sci Rep2019Journal Article, cited 0 times
Head-Neck-PET-CT
Convolutional Neural Network (CNN)
Radiomics
Deep learning
HEAD AND NECK
head and neck squamous cell carcinoma (HNSCC)
Segmentation
Traditional radiomics involves the extraction of quantitative texture features from medical images in an attempt to determine correlations with clinical endpoints. We hypothesize that convolutional neural networks (CNNs) could enhance the performance of traditional radiomics by detecting image patterns that may not be covered by a traditional radiomic framework. We test this hypothesis by training a CNN to predict treatment outcomes of patients with head and neck squamous cell carcinoma, based solely on their pre-treatment computed tomography image. The training (194 patients) and validation sets (106 patients), which are mutually independent and include 4 institutions, come from The Cancer Imaging Archive. When compared to a traditional radiomic framework applied to the same patient cohort, our method results in an AUC of 0.88 in predicting distant metastasis. When combining our model with the previous model, the AUC improves to 0.92. Our framework yields models that are shown to explicitly recognize traditional radiomic features, can be directly visualized, and perform accurate outcome prediction.
Human-interpretable image features derived from densely mapped cancer pathology slides predict diverse molecular phenotypes
Diao, J. A.
Wang, J. K.
Chui, W. F.
Mountain, V.
Gullapally, S. C.
Srinivasan, R.
Mitchell, R. N.
Glass, B.
Hoffman, S.
Rao, S. K.
Maheshwari, C.
Lahiri, A.
Prakash, A.
McLoughlin, R.
Kerner, J. K.
Resnick, M. B.
Montalto, M. C.
Khosla, A.
Wapinski, I. N.
Beck, A. H.
Elliott, H. L.
Taylor-Weiner, A.
Nat Commun2021Journal Article, cited 0 times
Website
Post-NAT-BRCA
H&E-stained slides
BREAST
Deep Learning
Computational methods have made substantial progress in improving the accuracy and throughput of pathology workflows for diagnostic, prognostic, and genomic prediction. Still, lack of interpretability remains a significant barrier to clinical integration. We present an approach for predicting clinically-relevant molecular phenotypes from whole-slide histopathology images using human-interpretable image features (HIFs). Our method leverages >1.6 million annotations from board-certified pathologists across >5700 samples to train deep learning models for cell and tissue classification that can exhaustively map whole-slide images at two- and four-micron resolution. Cell- and tissue-type model outputs are combined into 607 HIFs that quantify specific and biologically-relevant characteristics across five cancer types. We demonstrate that these HIFs correlate with well-known markers of the tumor microenvironment and can predict diverse molecular signatures (AUROC 0.601-0.864), including expression of four immune checkpoint proteins and homologous recombination deficiency, with performance comparable to 'black-box' methods. Our HIF-based approach provides a comprehensive, quantitative, and interpretable window into the composition and spatial architecture of the tumor microenvironment.
Radiogenomics: Lung Cancer-Related Genes Mutation Status Prediction
Dias, Catarina
Pinheiro, Gil
Cunha, António
Oliveira, Hélder P.
2019Book Section, cited 0 times
NSCLC Radiogenomics
Advances in genomics have led to the recognition that tumours are populated by different minor subclones of malignant cells that control the way the tumour progresses. However, the spatial and temporal genomic heterogeneity of tumours has been a hurdle in clinical oncology, mainly because the standard methodology for genomic analysis is the biopsy, which, besides being an invasive technique, does not capture the entire spatial state of the tumour in a single exam. Radiographic medical imaging opens new opportunities for genomic analysis by providing full visualisation of a tumour at a macroscopic level, in a non-invasive way. Given that mutational testing of EGFR and KRAS is routine in lung cancer treatment, we studied whether clinical and imaging data are valuable for predicting EGFR and KRAS mutations in a cohort of NSCLC patients. A reliable predictive model was found for EGFR (AUC = 0.96), using both a multi-layer perceptron model and a random forest model, but not for KRAS (AUC = 0.56). A feature importance analysis using random forest reported that the presence of emphysema and lung parenchymal features have the highest correlation with EGFR mutation status. This study opens new opportunities for radiogenomics in predicting molecular properties in a more readily available and non-invasive way.
Theoretical tumor edge detection technique using multiple Bragg peak decomposition in carbon ion therapy
Dias, Marta Filipa Ferraz
Collins-Fekete, Charles-Antoine
Baroni, Guido
Riboldi, Marco
Seco, Joao
Biomedical Physics & Engineering Express2019Journal Article, cited 0 times
Website
LUNG
CT
Radiation Therapy
The Next Frontier in Health Disparities-A Closer Look at Exploring Sex Differences in Glioma Data and Omics Analysis, from Bench to Bedside and Back
Diaz Rosario, M.
Kaur, H.
Tasci, E.
Shankavaram, U.
Sproull, M.
Zhuge, Y.
Camphausen, K.
Krauze, A.
Biomolecules2022Journal Article, cited 0 times
TCGA-GBM
TCGA-LGG
Animals
Female
*Glioma/genetics/therapy
Humans
Male
Prospective Studies
Publications
Retrospective Studies
*Sex Characteristics
genomics
glioma
health disparities
large-scale data
proteomics
sex differences
Sex differences are increasingly being explored and reported in oncology, and glioma is no exception. As potentially meaningful sex differences are uncovered, existing gender-derived disparities mirror data generated in retrospective and prospective trials, real-world large-scale data sets, and bench work involving animals and cell lines. The resulting disparities at the data level are wide-ranging, potentially resulting in both adverse outcomes and failure to identify and exploit therapeutic benefits. We set out to analyze the literature on women's data disparities in glioma by exploring the origins of data in this area to understand the representation of women in study samples and omics analyses. Given the current emphasis on inclusive study design and research, we wanted to explore if sex bias continues to exist in present-day data sets and how sex differences in data may impact conclusions derived from large-scale data sets, omics, biospecimen analysis, novel interventions, and standard of care management.
Radiation source personalization for nanoparticle-enhanced radiotherapy using dynamic contrast-enhanced MRI in the treatment planning process
Díaz-Galindo, C.A.
Garnica-Garza, H.M.
2024Journal Article, cited 0 times
GLIS-RT
Radiotherapy
MRI
Nanoparticle-enhanced radiotherapy offers the potential to selectively increase the radiation dose imparted to the tumor while at the same time sparing the healthy structures around it. Among the recommendations of an interdisciplinary group for the clinical translation of this treatment modality is the development of methods to quantify the effects that the nanoparticle concentration has on the radiation dosimetry and to incorporate these effects into the treatment planning process. In this work, using Monte Carlo simulations and dynamic contrast-enhanced MRI images, treatment plans for nanoparticle-enhanced radiotherapy are calculated in order to evaluate the effects that realistic distributions of the nanoparticles have on the resultant plans and to devise treatment strategies to account for these effects, including the selection of the proper x-ray source configuration in terms of energy and collimation. Dynamic contrast-enhanced MRI studies were obtained for two treatment sites, namely brain and breast. A model to convert the MRI signal to contrast agent concentration was applied to each set of images. Two treatment modalities, 3D conformal radiotherapy and stereotactic body radiation therapy, were evaluated at three different x-ray spectra, namely 6 MV from a linear accelerator, and 110 kVp and 220 kVp from a tungsten target. For the breast patient, as the nanoparticle distribution varies markedly with time, the treatment plans were obtained at two different times after administration. It was determined that maximum doses to the healthy structures around the tumor are mostly determined by the minimum nanoparticle concentration in the tumor. The presence of highly hypoxic or necrotic tissue, which fails to accumulate the nanoparticles, or leakage of the contrast agent into the surrounding healthy tissue, makes irradiation with conventional conformal radiotherapy unfeasible for kilovoltage beam energies, as the uniform beam apertures lack the ability to compensate for the non-uniform distribution of the nanoparticles. Therefore, proper quantification of the nanoparticle distribution not only in the target volume but also in the surrounding tissues and structures is crucial for the proper planning of nanoparticle-enhanced radiotherapy, and a treatment delivery with a high degree of freedom, such as small-field stereotactic body radiotherapy, should be the method of choice for this treatment modality.
Automated segmentation refinement of small lung nodules in CT scans by local shape analysis
Diciotti, Stefano
Lombardo, Simone
Falchini, Massimo
Picozzi, Giulia
Mascalchi, Mario
IEEE Trans Biomed Eng2011Journal Article, cited 68 times
Website
Radiomics
Segmentation
Computer Aided Diagnosis (CADx)
LUNG
LIDC-IDRI
One of the most important problems in the segmentation of lung nodules in CT imaging arises from possible attachments occurring between nodules and other lung structures, such as vessels or pleura. In this report, we address the problem of vessel attachments by proposing an automated correction method applied to an initial rough segmentation of the lung nodule. The method is based on a local shape analysis of the initial segmentation making use of 3-D geodesic distance map representations. The correction method has the advantage that it locally refines the nodule segmentation along recognized vessel attachments only, without modifying the nodule boundary elsewhere. The method was tested using a simple initial rough segmentation, obtained by fixed image thresholding. The validation of the complete segmentation algorithm was carried out on small lung nodules identified in the ITALUNG screening trial and on small nodules of the Lung Image Database Consortium (LIDC) dataset. In fully automated mode, 217/256 (84.8%) lung nodules of ITALUNG and 139/157 (88.5%) individual marks of lung nodules of LIDC were correctly outlined, and excellent reproducibility was also observed. By using an additional interactive mode, based on controlled manual interaction, 233/256 (91.0%) lung nodules of ITALUNG and 144/157 (91.7%) individual marks of lung nodules of LIDC were overall correctly segmented. The proposed correction method could also be usefully applied to any existing nodule segmentation algorithm for improving the segmentation quality of juxta-vascular nodules.
Breast cancer detection using deep learning: Datasets, methods, and challenges ahead
Din, Nusrat Mohi Ud
Dar, Rayees Ahmad
Rasool, Muzafar
Assad, Assif
Computers in Biology and Medicine2022Journal Article, cited 0 times
RIDER Breast MRI
Breast Cancer (BC) is the most commonly diagnosed cancer and the second leading cause of mortality among women. About 1 in 8 US women (about 13%) will develop invasive BC during their lifetime. Early detection of this life-threatening disease not only increases the survival rate but also reduces the treatment cost. Fortunately, advancements in radiographic imaging like "Mammograms", "Computed Tomography (CT)", "Magnetic Resonance Imaging (MRI)", "3D Mammography", and "Histopathological Imaging (HI)" have made it feasible to diagnose this disease at an early stage. However, the analysis of radiographic images and histopathological images is done by experienced radiologists and pathologists, respectively. The process is not only costly but also error-prone. Over the last ten years, computer vision and Machine Learning (ML) have transformed the world in every way possible. Deep learning (DL), a subfield of ML, has shown outstanding results in a variety of fields, particularly in the biomedical industry, because of its ability to handle large amounts of data. DL techniques automatically extract features by analyzing high-dimensional and correlated data efficiently. The potential of DL models has also been utilized and evaluated in the identification and prognosis of BC, using radiographic and histopathological images, and they have performed admirably. However, AI has shown promising results mainly in retrospective studies; external validation is needed to translate these cutting-edge AI tools into clinical decision-making. The main aim of this work is to present a critical analysis of the research and findings on detecting and classifying BC using various imaging modalities including "Mammography", "Histopathology", "Ultrasound", "PET/CT", "MRI", and "Thermography". First, a detailed review of past research papers using Machine Learning, Deep Learning and Deep Reinforcement Learning for BC classification and detection is carried out. We also review the publicly available datasets for the above-mentioned imaging modalities to make future research more accessible. Finally, a critical discussion section elaborates open research difficulties and prospects for future study in this emerging area, demonstrating the limitations of Deep Learning approaches.
Developing and validating a deep learning and radiomic model for glioma grading using multiplanar reconstructed magnetic resonance contrast-enhanced T1-weighted imaging: a robust, multi-institutional study
Ding, J.
Zhao, R.
Qiu, Q.
Chen, J.
Duan, J.
Cao, X.
Yin, Y.
Quant Imaging Med Surg2022Journal Article, cited 2 times
Website
TCGA-LGG
DICOM-Glioma-SEG
Multiplanar reconstruction (MPR)
Deep Learning
BRAIN
Radiomics
Background: Although surgical pathology or biopsy are considered the gold standard for glioma grading, these procedures have limitations. This study set out to evaluate and validate the predictive performance of a deep learning radiomics model based on contrast-enhanced T1-weighted multiplanar reconstruction images for grading gliomas. Methods: Patients from three institutions who were diagnosed with gliomas from surgical specimens and had multiplanar reconstructed (MPR) images were enrolled in this study. The training cohort included 101 patients from institution 1, comprising 43 high-grade glioma (HGG) patients and 58 low-grade glioma (LGG) patients, while the test cohorts consisted of 50 patients from institutions 2 and 3 (25 HGG patients, 25 LGG patients). We then extracted radiomics features and deep learning features using six pretrained models from the MPR images. The Spearman correlation test and the recursive feature elimination method were used to reduce redundancy and select the most predictive features. Subsequently, three classifiers were used to construct classification models. The performance of the grading models was evaluated using the area under the receiver operating characteristic curve, sensitivity, specificity, accuracy, precision, and negative predictive value. Finally, the prediction performances on the test cohort were compared to determine the optimal classification model. Results: For the training cohort, 62% (13 out of 21) of the classification models constructed with MPR images from multiple planes outperformed those constructed with single-plane MPR images, and 61% (11 out of 18) of classification models constructed with both radiomics features and deep learning features had higher area under the curve (AUC) values than those constructed with only radiomics or deep learning features. The optimal model was a random forest model that combined radiomic features and VGG16 deep learning features derived from MPR images, which achieved an AUC of 0.847 in the training cohort and 0.898 in the test cohort. In the test cohort, the sensitivity, specificity, and accuracy of the optimal model were 0.840, 0.760, and 0.800, respectively. Conclusions: Multiplanar CE-T1W MPR imaging features are more effective than features from single planes when differentiating HGG and LGG. The combination of deep learning features and radiomics features can effectively grade glioma and assist clinical decision-making.
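The two-stage feature selection this entry describes (a Spearman redundancy filter followed by recursive feature elimination) can be sketched as follows; the correlation threshold, forest size, and number of retained features are illustrative assumptions.

```python
# Sketch: Spearman redundancy filter + recursive feature elimination (RFE).
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

def select_features(X: np.ndarray, y: np.ndarray, corr_thresh=0.9, n_keep=20):
    rho, _ = spearmanr(X)                  # feature-feature rank correlations
    rho = np.abs(rho)
    keep = []
    for j in range(X.shape[1]):            # drop features too correlated
        if all(rho[j, k] < corr_thresh for k in keep):
            keep.append(j)
    rfe = RFE(RandomForestClassifier(n_estimators=200, random_state=0),
              n_features_to_select=min(n_keep, len(keep)))
    rfe.fit(X[:, keep], y)
    return [keep[i] for i in np.flatnonzero(rfe.support_)]
```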
Feature-Enhanced Graph Networks for Genetic Mutational Prediction Using Histopathological Images in Colon Cancer
Mining histopathological and genetic data provides a unique avenue to deepen our understanding of cancer biology. However, extensive cancer heterogeneity across image and molecular scales poses technical challenges for feature extraction and outcome prediction. In this study, we propose a feature-enhanced graph network (FENet) for genetic mutation prediction using histopathological images in colon cancer. Unlike conventional approaches that analyze patch-based features alone without considering their spatial connectivity, we seek to link and explore non-isomorphic topological structures in histopathological images. Our FENet incorporates feature enhancement in convolutional graph neural networks to aggregate discriminative features for capturing gene mutation status. Specifically, our approach can identify both local patch feature information and global topological structure in histopathological images simultaneously. Furthermore, we introduce an ensemble strategy that constructs multiple subgraphs to boost prediction performance. Extensive experiments on the TCGA-COAD and TCGA-READ cohorts, including both histopathological images and the mutation profiles of three key genes (APC, KRAS, and TP53), demonstrate the superiority of FENet for key mutational outcome prediction in colon cancer.
Spatially aware graph neural networks and cross-level molecular profile prediction in colon cancer histopathology: a retrospective multi-cohort study
Ding, Kexin
Zhou, Mu
Wang, He
Zhang, Shaoting
Metaxas, Dimitri N.
The Lancet Digital Health2022Journal Article, cited 1 times
Website
CPTAC-COAD
Pathomics
Digital pathology
Histopathology imaging features
Neural Networks
Computer
Background: Digital whole-slide images are a unique way to assess the spatial context of the cancer microenvironment. Exploring these spatial characteristics will enable us to better identify cross-level molecular markers that could deepen our understanding of cancer biology and related patient outcomes. Methods: We proposed a graph neural network approach that emphasises spatialisation of tumour tiles towards a comprehensive evaluation of predicting cross-level molecular profiles of genetic mutations, copy number alterations, and functional protein expressions from whole-slide images. We introduced a transformation strategy that converts whole-slide image scans into graph-structured data to address the spatial heterogeneity of colon cancer. We developed and assessed the performance of the model on The Cancer Genome Atlas colon adenocarcinoma (TCGA-COAD) and validated it on two external datasets (ie, The Cancer Genome Atlas rectum adenocarcinoma [TCGA-READ] and Clinical Proteomic Tumor Analysis Consortium colon adenocarcinoma [CPTAC-COAD]). We also predicted microsatellite instability and result interpretability. Findings: The model was developed on 459 colon tumour whole-slide images from TCGA-COAD, and externally validated on 165 rectum tumour whole-slide images from TCGA-READ and 161 colon tumour whole-slide images from CPTAC-COAD. For TCGA cohorts, our method accurately predicted the molecular classes of the gene mutations (area under the curve [AUCs] from 82·54 [95% CI 77·41–87·14] to 87·08 [83·28–90·82] on TCGA-COAD, and AUCs from 70·46 [61·37–79·61] to 81·80 [72·20–89·70] on TCGA-READ), along with genes with copy number alterations (AUCs from 81·98 [73·34–89·68] to 90·55 [86·02–94·89] on TCGA-COAD, and AUCs from 62·05 [48·94–73·46] to 76·48 [64·78–86·71] on TCGA-READ), microsatellite instability (MSI) status classification (AUC 83·92 [77·41–87·59] on TCGA-COAD, and AUC 61·28 [53·28–67·93] on TCGA-READ), and protein expressions (AUCs from 85·57 [81·16–89·44] to 89·64 [86·29–93·19] on TCGA-COAD, and AUCs from 51·77 [42·53–61·83] to 59·79 [50·79–68·57] on TCGA-READ). For the CPTAC-COAD cohort, our model predicted a panel of gene mutations with AUC values from 63·74 (95% CI 52·92–75·37) to 82·90 (73·69–90·71), genes with copy number alterations with AUC values from 62·39 (51·37–73·76) to 86·08 (79·67–91·74), and MSI status prediction with an AUC value of 73·15 (63·21–83·13). Interpretation: We showed that spatially connected graph models enable molecular profile predictions in colon cancer and are generalised to rectum cancer. After further validation, our method could be used to infer the prognostic value of multiscale molecular biomarkers and identify targeted therapies for patients with colon cancer. Funding: This research has been partially funded by ARO MURI 805491, NSF IIS-1793883, NSF CNS-1747778, NSF IIS 1763523, DOD-ARO ACC-W911NF, and NSF OIA-2040638 to Dimitri N Metaxas.
PACTran: PAC-Bayesian Metrics for Estimating the Transferability of Pretrained Models to Classification Tasks
Ding, Nan
Chen, Xi
Levinboim, Tomer
Changpinyo, Soravit
Soricut, Radu
2022Book Section, cited 0 times
CBIS-DDSM
With the increasing abundance of pretrained models in recent years, the problem of selecting the best pretrained checkpoint for a particular downstream classification task has been gaining increased attention. Although several methods have recently been proposed to tackle the selection problem (e.g. LEEP, H-score), these methods resort to applying heuristics that are not well motivated by learning theory. In this paper we present PACTran, a theoretically grounded family of metrics for pretrained model selection and transferability measurement. We first show how to derive PACTran metrics from the optimal PAC-Bayesian bound under the transfer learning setting. We then empirically evaluate three metric instantiations of PACTran on a number of vision tasks (VTAB) as well as a language-and-vision (OKVQA) task. An analysis of the results shows PACTran is a more consistent and effective transferability measure compared to existing selection methods.
COMPUTATIONAL IMAGING AND MULTIOMIC BIOMARKERS FOR PRECISION MEDICINE: CHARACTERIZING HETEROGENEITY IN LUNG CANCER
Lung cancer is the leading cause of cancer deaths and is the third most diagnosed cancer in both men and women in the United States. Non-small cell lung cancer (NSCLC) accounts for 84% of all lung cancer cases. The inherent intra-tumor and inter-tumor heterogeneity in lung tumors has been linked with adverse clinical outcomes. A well-rounded characterization of tumor heterogeneity by personalized biomarkers is needed to develop precision medicine treatment strategies for cancer. Large-scale genome-based characterization poses the disadvantages of high cost and technical complexity. Further, a histopathological sample from a tumor biopsy may not be able to fully represent the structural and functional properties of the entire tumor. Medical imaging is now emerging as a key player in the field of personalized medicine, due to its ability to non-invasively characterize the anatomical and physiological properties of tumor regions. The studies included in this thesis introduce analytical tools developed through machine learning and bioinformatics and use information from diagnostic images and other "omic" sources to develop computational imaging and multiomic biomarkers that characterize intratumor heterogeneity. A novel radiomic biomarker integrates with PDL1 expression, ECOG status, BMI, and smoking status to enhance the ability to predict progression-free survival in a preliminary cohort of patients with stage 4 NSCLC treated with the first-line anti-PD1/PDL1 checkpoint inhibitor pembrolizumab. This study also showed that mitigation of the heterogeneity introduced by voxel spacing and image acquisition parameters improves the prognostic performance of the radiomic phenotypes. We further performed a detailed investigation of the effects of heterogeneity in image parameters on the reproducibility of the prognostic performance of models built using radiomic biomarkers. The results of this second study indicated that accounting for heterogeneity in image parameters is important to obtain more reproducible prognostic scores, irrespective of image site or modality. In the third study, we developed novel multiomic phenotypes in a larger cohort of patients with stage 4 NSCLC treated with pembrolizumab. These multiomic phenotypes, formed by integration of radiomics and radiological and pathological information of the patients, enhanced precision in progression-free survival prediction upon combination with prognostic clinical variables. To our knowledge, our study is the first to construct a "multiomic signature" for prognosis of NSCLC patient response to immunotherapy, in contrast to prior radiogenomic approaches leveraging a radiomics signature to identify patient categories based on a genomic biomarker-based classification. In the exploratory fourth study, we evaluated the performance of radiomics analyses of part-solid lung nodules to detect nodule invasiveness using several approaches: radiomics analysis in the presurgical CT scan, delta radiomics over three time-points leading up to surgical resection, and nodule volumetry. The best performing model for the prediction of nodule invasiveness was the model built using a combination of immediate pre-surgical radiomics, delta radiomics, delta volumes, and clinical assessment. The study showed that the combined utilization of clinical, volumetric, and radiomic features may facilitate complex decision making in the management of subsolid lung nodules.
To summarize, the studies included in this thesis demonstrate the value of computational radiomic and multiomic biomarkers in the characterization of lung tumor heterogeneity and have the potential to be utilized in the advancement of precision medicine in oncology.
Spinal cord detection in planning CT for radiotherapy through adaptive template matching, IMSLIC and convolutional neural networks
Diniz, J. O. B.
Diniz, P. H. B.
Valente, T. L. A.
Silva, A. C.
Paiva, A. C.
Comput Methods Programs Biomed2019Journal Article, cited 23 times
Website
LCTSC
Computer Aided Detection (CADe)
Convolutional Neural Network (CNN)
Planning CT
Spinal cord
Radiation Therapy
BACKGROUND AND OBJECTIVE: The spinal cord is a very important organ that must be protected in radiotherapy (RT) treatments, and it is considered an organ at risk (OAR). Excess radiation dose to the spinal cord can cause irreversible disease in patients undergoing radiotherapy. For RT treatment planning, computed tomography (CT) scans are commonly used to delimit the OARs and to analyze the impact of radiation on these organs. Delimiting these OARs takes a lot of time from medical specialists and involves a large team of professionals. Moreover, this task, performed slice by slice, becomes exhausting and consequently subject to errors, especially in organs such as the spinal cord, which extends through several slices of the CT scan and requires precise segmentation. Thus, we propose in this work a computational methodology capable of detecting the spinal cord in planning CT images. METHODS: The techniques highlighted in this methodology are adaptive template matching for initial segmentation, intrinsic manifold simple linear iterative clustering (IMSLIC) for candidate segmentation, and convolutional neural networks (CNN) for candidate classification. It consists of four steps: (1) image acquisition, (2) initial segmentation, (3) candidate segmentation, and (4) candidate classification. RESULTS: The methodology was applied to 36 planning CT images provided by The Cancer Imaging Archive and achieved an accuracy of 92.55%, specificity of 92.87% and sensitivity of 89.23%, with 0.065 false positives per image, without any false-positive reduction technique, in detection of the spinal cord. CONCLUSIONS: We demonstrate the feasibility of analyzing planning CT images using IMSLIC and convolutional neural network techniques to successfully detect spinal cord regions.
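The template-matching stage named in this entry can be sketched with OpenCV's normalized cross-correlation; the file names below are placeholders, and the adaptive part of the authors' method (template updating across slices) is omitted.

```python
# Sketch: locate the best spinal-cord template match in a CT slice.
import cv2

slice_img = cv2.imread("ct_slice.png", cv2.IMREAD_GRAYSCALE)       # placeholder
template = cv2.imread("cord_template.png", cv2.IMREAD_GRAYSCALE)   # placeholder

scores = cv2.matchTemplate(slice_img, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(scores)

h, w = template.shape
top_left = max_loc                                # best-matching position
bottom_right = (max_loc[0] + w, max_loc[1] + h)   # candidate region box
print(f"match score {max_val:.2f} at {top_left}")
```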
Breast Cancer Mass Detection in Mammograms Using Gray Difference Weight and MSER Detector
Divyashree, B. V.
Kumar, G. Hemantha
SN Computer Science2021Journal Article, cited 0 times
Website
CBIS-DDSM
BREAST
Mammography
fast marching method
Computer Aided Detection (CADe)
Algorithm Development
Breast cancer is a deadly disease and one of the most prevalent cancers in women across the globe. Mammography is a widely used imaging modality for diagnosis and screening of breast cancer. Segmentation of the breast region and mass detection are crucial steps in automatic breast cancer detection. Due to the non-uniform distribution of various tissues, it is a challenging task to analyze mammographic images with high accuracy. In this paper, background suppression and pectoral muscle removal are performed using a gradient weight map followed by gray difference weight and the fast marching method. Enhancement of the breast region is performed using contrast limited adaptive histogram equalization (CLAHE) and de-correlation stretch. Detection of breast masses is accomplished by gray difference weight and a maximally stable extremal regions (MSER) detector. Experimentation on the Mammographic Image Analysis Society (MIAS) and curated breast imaging subset of digital database for screening mammography (CBIS-DDSM) datasets shows that the proposed method performs breast boundary segmentation and mass detection with high accuracy. Mass detection achieved accuracies of about 97.64% and 94.66% for the MIAS and CBIS-DDSM datasets, respectively. The method is simple, robust, and less affected by noise, density, shape and size, which could provide reasonable support for mammographic analysis.
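The enhancement and region-proposal stages named here (CLAHE, then MSER) can be sketched with OpenCV; the clip limit, tile size, and file path are illustrative assumptions.

```python
# Sketch: CLAHE enhancement followed by MSER candidate-region detection.
import cv2

mammo = cv2.imread("mammogram.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(mammo)                    # local contrast enhancement

mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(enhanced)   # stable candidate regions
print(f"{len(regions)} candidate regions detected")
```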
An efficient reversible data hiding using SVD over a novel weighted iterative anisotropic total variation based denoised medical images
Diwakar, Manoj
Kumar, Pardeep
Singh, Prabhishek
Tripathi, Amrendra
Singh, Laxman
Biomedical Signal Processing and Control2023Journal Article, cited 0 times
Website
LDCT-and-Projection-data
LIDC-IDRI
Algorithm Development
Watermarking
Image denoising
Computed Tomography (CT)
The advancement and extensive use of computed tomography (CT) have raised public concern about the radiation dose delivered to patients. Reducing the radiation dose may introduce more noise and artifacts, which can impair diagnosis. The instability of low-dose CT reconstruction necessitates better image reconstruction, which increases diagnostic performance, and recent low-dose CT methods have demonstrated outstanding results. Such denoised low-dose medical images, together with associated medical information, must often be transmitted over a network. Hence, in this article, a novel denoising method is first proposed to improve the quality of low-dose CT images; it is based on the total variation method and utilizes the whale optimization algorithm (WHA) to obtain the best possible weighting function. Noise is reduced by comparing a given output to the ground truth, as total variation tends to statistically shift the noise distribution of the data from strong to weak. Following denoising, a reversible watermarking approach based on SVD and multi-local extrema (MLE) techniques is presented. According to a comparative experimental investigation, the individual results of denoising and watermarking are excellent in terms of visual quality and performance metrics, and the watermarking results remain impressive when the watermark is embedded in the denoised CT images. The resulting images thus reduce noise while keeping vital information secure.
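For orientation, a simplified non-blind SVD watermarking scheme (embedding the watermark into the host image's singular values) can be sketched as below; this omits the reversibility and MLE machinery of the paper, and the scaling factor alpha is an assumption.

```python
# Sketch: simplified SVD-based watermark embedding and extraction.
import numpy as np

def embed_svd_watermark(host: np.ndarray, mark: np.ndarray, alpha=0.05):
    """mark must supply at least as many values as host has singular values."""
    U, S, Vt = np.linalg.svd(host.astype(float), full_matrices=False)
    S_marked = S + alpha * mark[: S.size]          # perturb singular values
    return U @ np.diag(S_marked) @ Vt, S           # keep S for extraction

def extract_svd_watermark(marked: np.ndarray, S_orig: np.ndarray, alpha=0.05):
    S_marked = np.linalg.svd(marked.astype(float), compute_uv=False)
    return (S_marked - S_orig) / alpha             # recover watermark values
```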
A Diagnostic Study of Content-Based Image Retrieval Technique for Studying the CT Images of Lung Nodules and Prediction of Lung Cancer as a Biometric Tool
Dixit, Rajeev
Kumar, Pankaj
Ojha, Shashank
International Journal of Electrical and Electronics Research2023Journal Article, cited 0 times
Website
LIDC-IDRI
Content based image retrieval (CBIR)
LUNG
Automatic detection
Content Based Medical Image Retrieval (CBMIR) can be defined as a digital image search using the contents of the images. CBMIR plays a very important part in medical applications such as retrieving CT images and more accurately diagnosing aberrant lung tissues in CT images. The CBMIR method might aid radiotherapists in examining a patient's CT image in order to retrieve comparable pulmonary nodules more precisely by utilizing query nodules. Given a particular query nodule, the CBMIR system searches a large chest CT image database for comparable nodules. The prime aim of this research is to evaluate an end-to-end method for developing a CBIR system for lung cancer diagnosis.
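The core retrieval step of any CBIR system of this kind is a nearest-neighbour ranking over feature vectors; a minimal cosine-similarity sketch follows, with feature extraction (texture, shape, or CNN embeddings) assumed to be given.

```python
# Sketch: rank database entries by cosine similarity to a query feature vector.
import numpy as np

def retrieve(query_feat: np.ndarray, db_feats: np.ndarray, top_k=5):
    """Return indices of the top_k most similar database entries."""
    q = query_feat / np.linalg.norm(query_feat)
    db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    sims = db @ q                      # cosine similarities to every entry
    return np.argsort(-sims)[:top_k]   # highest similarity first
```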
An introduction to Topological Object Data Analysis
Summary and analysis are important foundations in Statistics, but typical methods may prove ineffective at providing thorough summaries of complex object data. Topological data analysis (TDA) (also called topological object data analysis (TODA) when applied to object data) provides additional topological summaries, such as the persistence diagram and persistence landscape, that can be useful in distinguishing distributions based on data sets. The main tool is persistent homology, which tracks the births and deaths of various homology classes as one steps through a filtered simplicial complex that covers the sample. The persistence diagrams and landscapes can also be used to provide confidence sets for “significant” features and two-sample tests between groups. An example of application is provided via analyzing mammogram images for patients with benign and malignant masses.
Complete fully automatic detection, segmentation and 3D reconstruction of tumor volume for non-small cell lung cancer using YOLOv4 and region-based active contour model
Dlamini, S.
Chen, Y. H.
Kuo, C. F. J.
Expert Systems with Applications2023Journal Article, cited 0 times
QIN LUNG CT
LIDC-IDRI
LUNA16 Challenge
lung tumor
detection
segmentation
active contour
YOLO
k-means
treatment response
classification
networks
accurate
nodules
level
We aim to develop a fully automatic system that will detect, segment and accurately reconstruct non-small cell lung cancer tumors in space using YOLOv4 and a region-based active contour model. The system consists of two main sections: detection and volumetric rendering. The detection section is composed of image enhancement, augmentation, labeling and localization, while volumetric rendering consists mainly of image filtering, tumor extraction, region-based active contouring and 3D reconstruction. In this method the images are enhanced to eliminate noise before augmentation, which is intended to multiply and diversify the image data. Labeling was then carried out in order to create a solid learning foundation for the localization model. Images with localized tumors were passed through smoothing filters and then clustered to extract tumor masks. Lastly, contour information was obtained to render the volumetric tumor. The designed system displays strong detection performance with a precision of 96.57%, and sensitivity and F1 score of 97.02% and 96.79%, respectively, at a detection speed of 34 fps and a prediction time per image of 21.38 ms. The system's segmentation validation achieved a Dice similarity coefficient of 92.19% on tumor extraction. A 99.74% accuracy was obtained during the verification of the method's volumetric rendering using a 3D printed image of the rendered tumor. The rendering of the volumetric tumor was obtained in an average time of 11 s. This system shows strong performance and reliability due to its ability to detect, segment and reconstruct a volumetric tumor in space with high confidence.
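The region-based active contour stage named here is in the Chan-Vese family; a minimal sketch using scikit-image's morphological Chan-Vese implementation on a localized tumor patch follows, with the iteration count and smoothing weight as illustrative assumptions.

```python
# Sketch: region-based active contour (morphological Chan-Vese) on a patch.
import numpy as np
from skimage.segmentation import morphological_chan_vese

def segment_tumor_patch(patch: np.ndarray) -> np.ndarray:
    """Return a binary mask for the dominant region in a localized patch."""
    patch = (patch - patch.min()) / (np.ptp(patch) + 1e-8)  # normalize to [0, 1]
    return morphological_chan_vese(patch, 100,              # 100 iterations
                                   init_level_set="checkerboard",
                                   smoothing=3)
```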
Learning Multi-Class Segmentations From Single-Class Datasets
Multi-class segmentation has recently achieved significant performance in natural images and videos. This achievement is due primarily to the public availability of large multi-class datasets. However, there are certain domains, such as biomedical images, where obtaining sufficient multi-class annotations is a laborious and often impossible task and only single-class datasets are available. While existing segmentation research in such domains use private multi-class datasets or focus on single-class segmentations, we propose a unified highly efficient framework for robust simultaneous learning of multi-class segmentations by combining single-class datasets and utilizing a novel way of conditioning a convolutional network for the purpose of segmentation. We demonstrate various ways of incorporating the conditional information, perform an extensive evaluation, and show compelling multi-class segmentation performance on biomedical images, which outperforms current state-of-the-art solutions (up to 2.7%). Unlike current solutions, which are meticulously tailored for particular single-class datasets, we utilize datasets from a variety of sources. Furthermore, we show the applicability of our method also to natural images and evaluate it on the Cityscapes dataset. We further discuss other possible applications of our proposed framework.
Deep Learning Based Ensemble Approach for 3D MRI Brain Tumor Segmentation
Brain tumor segmentation has wide applications and important potential value for glioblastoma research. Because of the complex structure of tumor sub-regions and the different visual appearance of multiple modalities such as T1, T1ce, T2, and FLAIR, most methods fail to segment brain tumors with high accuracy, and the sizes and shapes of tumors are very diverse in practice. Another problem is that most recent algorithms ignore the multi-scale information of brain tumor features. To handle these problems, we propose an ensemble method that utilizes the strength of dilated convolution in capturing larger receptive fields, providing more contextual information about the brain image, and that also gains the ability to segment small tumors by using multi-task learning. In addition, we apply the generalized Wasserstein Dice loss function when training the model to address the class imbalance in multi-class segmentation. The experimental results demonstrate that the proposed ensemble method improves the accuracy of brain tumor segmentation, showing superiority to other recent segmentation methods.
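For orientation, a plain multi-class soft Dice loss in PyTorch is sketched below; note the paper uses the generalized Wasserstein Dice loss, which additionally weights confusions between classes by a distance matrix, so this simpler variant only illustrates the Dice-style overlap term.

```python
# Sketch: plain multi-class soft Dice loss (not the Wasserstein variant).
import torch

def soft_dice_loss(logits: torch.Tensor, target: torch.Tensor, eps=1e-6):
    """logits: (N, C, D, H, W); target: (N, D, H, W) integer class labels."""
    probs = torch.softmax(logits, dim=1)
    onehot = torch.nn.functional.one_hot(target, probs.shape[1])
    onehot = onehot.permute(0, 4, 1, 2, 3).float()   # -> (N, C, D, H, W)
    dims = (0, 2, 3, 4)                              # sum over batch and voxels
    intersection = (probs * onehot).sum(dims)
    cardinality = probs.sum(dims) + onehot.sum(dims)
    dice = (2 * intersection + eps) / (cardinality + eps)
    return 1 - dice.mean()                           # average over classes
```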
Combining CNNs with Transformer for Multimodal 3D MRI Brain Tumor Segmentation
We apply an ensemble of modified TransBTS, nnU-Net, and a combination of both for the segmentation task of the BraTS 2021 challenge. We change the original architecture of the TransBTS model by adding Squeeze-and-Excitation blocks, increasing the number of CNN layers, and replacing positional encoding in the Transformer block with learnable Multilayer Perceptron (MLP) embeddings, which makes the Transformer adjustable to any input size during inference. With these modifications, we can substantially improve TransBTS performance. Inspired by the nnU-Net framework, we decided to combine it with our modified TransBTS by changing the architecture inside nnU-Net to our custom model. On the validation set of BraTS 2021, the ensemble of these approaches achieves 0.8496, 0.8698, 0.9256 Dice score and 15.72, 11.057, 3.374 HD95 for enhancing tumor, tumor core, and whole tumor, correspondingly. On the test set we get Dice scores of 0.8789, 0.8759, 0.9279, and HD95 of 10.426, 17.203, 4.93. Our code is publicly available. (Implementation is available at https://github.com/ucuapps/BraTS2021_Challenge).
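The Squeeze-and-Excitation block mentioned here can be sketched in PyTorch for 3D feature maps as below; the reduction ratio is an illustrative choice, not necessarily the authors' value.

```python
# Sketch: 3D Squeeze-and-Excitation block (channel-wise recalibration).
import torch
import torch.nn as nn

class SEBlock3D(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c = x.shape[:2]
        squeeze = x.mean(dim=(2, 3, 4))              # global average pool
        weights = self.fc(squeeze).view(n, c, 1, 1, 1)
        return x * weights                           # gate each channel
```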
A deep unsupervised learning framework for the 4D CBCT artifact correction
Dong, Guoya
Zhang, Chenglong
Deng, Lei
Zhu, Yulin
Dai, Jingjing
Song, Liming
Meng, Ruoyan
Niu, Tianye
Liang, Xiaokun
Xie, Yaoqin
Physics in Medicine and Biology2022Journal Article, cited 0 times
4D-Lung
Objective: Four-dimensional cone-beam computed tomography (4D CBCT) has unique advantages in moving-target localization, tracking and therapeutic dose accumulation in adaptive radiotherapy. However, the severe fringe artifacts and noise degradation caused by 4D CBCT reconstruction restrict its clinical application. We propose a novel deep unsupervised learning model to generate high-quality 4D CBCT from poor-quality 4D CBCT. Approach: The proposed model uses a contrastive loss function to preserve the anatomical structure in the corrected image. To preserve the relationship between the input and output images, we use a multilayer, patch-based method rather than operating on entire images. Furthermore, we draw negatives from within the input 4D CBCT rather than from the rest of the dataset. Main results: The results showed that the streak and motion artifacts were significantly suppressed, and the spatial resolution of the pulmonary vessels and microstructure was also improved. To demonstrate the results in different directions, an animation showing different views of the predicted corrected image is provided as supplementary material. Significance: The proposed method can be integrated into any 4D CBCT reconstruction method and may be a practical way to enhance the image quality of 4D CBCT.
A Separate 3D Convolutional Neural Network Architecture for 3D Medical Image Semantic Segmentation
Dong, Shidu
Liu, Zhi
Wang, Huaqiu
Zhang, Yihao
Cui, Shaoguo
Journal of Medical Imaging and Health Informatics2019Journal Article, cited 0 times
BraTS-TCGA-LGG
Machine Learning
To exploit three-dimensional (3D) context information and improve 3D medical image semantic segmentation, we propose a separate 3D (S3D) convolution neural network (CNN) architecture. First, a two-dimensional (2D) CNN is used to extract the 2D features of each slice in the xy-plane of 3D medical images. Second, one-dimensional (1D) features reassembled from the 2D features along the z-axis are input into a 1D-CNN and are then classified feature-wise. Analysis shows that S3D-CNN has lower time complexity, fewer parameters and lower memory requirements than other 3D-CNNs with a similar structure. As an example, we extend the deep convolutional encoder-decoder architecture (SegNet) to S3D-SegNet for brain tumor image segmentation. We also propose a method based on priority queues and the Dice loss function to address the class imbalance in medical image segmentation. The experimental results show the following: (1) S3D-SegNet extended from SegNet can improve brain tumor image segmentation. (2) The proposed imbalance accommodation method can increase the speed of training convergence and reduce the negative impact of the imbalance. (3) S3D-SegNet with the proposed imbalance accommodation method offers performance comparable to that of some state-of-the-art 3D-CNNs and experts in brain tumor image segmentation.
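The separate-3D idea this entry describes can be sketched in PyTorch: a 2D convolution processes each axial slice, and a 1D convolution then mixes the resulting features along the z-axis. Layer sizes below are illustrative.

```python
# Sketch: separable 2D-then-1D processing of a 3D volume (S3D idea).
import torch
import torch.nn as nn

class S3DSketch(nn.Module):
    def __init__(self, in_ch=1, feat=16, n_classes=2):
        super().__init__()
        self.conv2d = nn.Conv2d(in_ch, feat, kernel_size=3, padding=1)
        self.conv1d = nn.Conv1d(feat, n_classes, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, Z, H, W) -> fold slices into the batch for the 2D stage
        n, c, z, h, w = x.shape
        f = self.conv2d(x.permute(0, 2, 1, 3, 4).reshape(n * z, c, h, w))
        f = f.reshape(n, z, -1, h, w)                            # (N, Z, F, H, W)
        # 1D stage: classify each (h, w) position along the z-axis
        f = f.permute(0, 3, 4, 2, 1).reshape(n * h * w, -1, z)   # (N*H*W, F, Z)
        out = self.conv1d(f)                                     # (N*H*W, cls, Z)
        return out.reshape(n, h, w, -1, z).permute(0, 3, 4, 1, 2)
```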
Collaborative learning of joint medical image segmentation tasks from heterogeneous and weakly-annotated data
Convolutional Neural Networks (CNNs) have become the state-of-the-art for most image segmentation tasks and therefore one would expect them to be able to learn joint tasks, such as brain structures and pathology segmentation. However, annotated databases required to train CNNs are usually dedicated to a single task, leading to partial annotations (e.g. brain structure or pathology delineation but not both for joint tasks). Moreover, the information required for these tasks may come from distinct magnetic resonance (MR) sequences to emphasise different types of tissue contrast, leading to datasets with different sets of image modalities. Similarly, the scans may have been acquired at different centres, with different MR parameters, leading to differences in resolution and visual appearance among databases (domain shift). Given the large amount of resources, time and expertise required to carefully annotate medical images, it is unlikely that large and fully-annotated databases will become readily available for every joint problem. For this reason, there is a need to develop collaborative approaches that exploit existing heterogeneous and task-specific datasets, as well as weak annotations instead of time-consuming pixel-wise annotations. In this thesis, I present methods to learn joint medical segmentation tasks from task-specific, domain-shifted, hetero-modal and weakly-annotated datasets. The problem lies at the intersection of several branches of Machine Learning: Multi-Task Learning, Hetero-Modal Learning, Domain Adaptation and Weakly Supervised Learning. First, I introduce a mathematical formulation of a joint segmentation problem under the constraint of missing modalities and partial annotations, in which Domain Adaptation techniques can be directly integrated, and a procedure to optimise it. Secondly, I propose a principled approach to handle missing modalities based on Hetero-Modal Variational Auto-Encoders. Thirdly, in this thesis, I focus on Weakly Supervised Learning techniques and present a novel approach to train deep image segmentation networks using particularly weak train-time annotations: only 4 (2D) or 6 (3D) extreme clicks at the boundary of the objects of interest. The proposed framework connects the extreme points using a new formulation of geodesics that integrates the network outputs and uses the generated paths for supervision. Fourthly, I introduce a new weakly-supervised Domain Adaptation technique using scribbles on the target domain, formulated as a cross-domain CRF optimisation problem. Finally, I led the organisation of the first medical segmentation challenge for unsupervised cross-modality domain adaptation (crossMoDA). The benchmark reported in this thesis provides a comprehensive characterisation of cross-modality domain adaptation techniques. Experiments are performed on brain MR images from patients with different types of brain diseases: gliomas, white matter lesions and vestibular schwannoma. The results demonstrate the broad applicability of the presented frameworks to learn joint segmentation tasks with the potential to improve brain disease diagnosis and patient management in clinical practice.
Inter Extreme Points Geodesics for End-to-End Weakly Supervised Image Segmentation
We introduce InExtremIS, a weakly supervised 3D approach to train a deep image segmentation network using particularly weak train-time annotations: only 6 extreme clicks at the boundary of the objects of interest. Our fully-automatic method is trained end-to-end and does not require any test-time annotations. From the extreme points, 3D bounding boxes are extracted around objects of interest. Then, deep geodesics connecting extreme points are generated to increase the amount of “annotated” voxels within the bounding boxes. Finally, a weakly supervised regularised loss derived from a Conditional Random Field formulation is used to encourage prediction consistency over homogeneous regions. Extensive experiments are performed on a large open dataset for Vestibular Schwannoma segmentation. InExtremIS obtained competitive performance, approaching full supervision and outperforming significantly other weakly supervised techniques based on bounding boxes. Moreover, given a fixed annotation time budget, InExtremIS outperformed full supervision. Our code and data are available online.
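A minimal NumPy sketch of the first step described above, deriving a 3D bounding box from the six extreme clicks; the function name, margin, and click coordinates are illustrative assumptions, not the authors' code:

```python
import numpy as np

def bbox_from_extreme_points(points, margin=2, shape=None):
    """points: (6, 3) array of (z, y, x) extreme clicks on the object boundary."""
    points = np.asarray(points)
    lo = points.min(axis=0) - margin          # loosen the box slightly
    hi = points.max(axis=0) + margin + 1      # +1 for exclusive upper index
    if shape is not None:                     # clip to the image volume
        lo = np.clip(lo, 0, np.array(shape) - 1)
        hi = np.clip(hi, 1, np.array(shape))
    return tuple(slice(int(l), int(h)) for l, h in zip(lo, hi))

# Example: crop a volume around the annotated object.
volume = np.zeros((64, 128, 128))
clicks = [(10, 40, 50), (30, 40, 50), (20, 20, 50),
          (20, 70, 50), (20, 40, 30), (20, 40, 80)]
roi = volume[bbox_from_extreme_points(clicks, shape=volume.shape)]
```

The deep geodesics and the CRF-derived regularised loss then operate inside such boxes.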
Assessment of Renal Cell Carcinoma by Texture Analysis in Clinical Practice: A Six-Site, Six-Platform Analysis of Reliability
Doshi, A. M.
Tong, A.
Davenport, M. S.
Khalaf, A.
Mresh, R.
Rusinek, H.
Schieda, N.
Shinagare, A.
Smith, A. D.
Thornhill, R.
Vikram, R.
Chandarana, H.
AJR Am J Roentgenol2021Journal Article, cited 0 times
Website
TCGA-KIRC
Background: Multiple commercial and open-source software applications are available for texture analysis. Nonstandard techniques can cause undesirable variability that impedes result reproducibility and limits clinical utility. Objective: The purpose of this study is to measure agreement of texture metrics extracted by 6 software packages. Methods: This retrospective study included 40 renal cell carcinomas with contrast-enhanced CT from The Cancer Genome Atlas and The Cancer Imaging Archive. Images were analyzed by 7 readers at 6 sites. Each reader used 1 of 6 software packages to extract commonly studied texture features. Inter- and intra-reader agreement for segmentation was assessed with intra-class correlation coefficients. First-order (available in 6 packages) and second-order (available in 3 packages) texture features were compared between software pairs using Pearson correlation. Results: Inter- and intra-reader agreement was excellent (ICC 0.93-1). First-order feature correlations were strong (r>0.8, p<0.001) between 75% (21/28) of software pairs for mean and standard deviation, 48% (10/21) for entropy, 29% (8/28) for skewness, and 25% (7/28) for kurtosis. Of 15 second-order features, only co-occurrence matrix correlation, grey-level non-uniformity, and run-length non-uniformity showed strong correlation between software packages (0.90-1, p<0.001). Conclusion: Variability in first- and second-order texture features was common across software configurations and produced inconsistent results. Standardized algorithms and reporting methods are needed before texture data can be reliably used for clinical applications. Clinical Impact: It is important to be aware of variability related to texture software processing and configuration when reporting and comparing outputs.
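As a rough illustration of the comparison performed in this study, the sketch below computes the five first-order features named above over an ROI and correlates one feature across two hypothetical software outputs; all names and data are assumptions, not any of the six packages:

```python
import numpy as np
from scipy import stats

def first_order_features(roi):
    """Common first-order texture features over a region of interest."""
    roi = np.asarray(roi, dtype=float).ravel()
    counts, _ = np.histogram(roi, bins=64)
    p = counts[counts > 0] / roi.size          # bin probabilities for entropy
    return {
        "mean": roi.mean(),
        "std": roi.std(ddof=1),
        "skewness": stats.skew(roi),
        "kurtosis": stats.kurtosis(roi),
        "entropy": -np.sum(p * np.log2(p)),
    }

# Correlate one feature across 40 cases between two hypothetical packages:
rng = np.random.default_rng(0)
pkg_a = np.array([first_order_features(rng.normal(size=500))["entropy"]
                  for _ in range(40)])
pkg_b = pkg_a + rng.normal(scale=0.05, size=40)   # near-identical output
r, p_value = stats.pearsonr(pkg_a, pkg_b)         # "strong" if r > 0.8, p < 0.001
```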
Comparative analysis of modifications of U-Net neuronal network architectures in medical image segmentation
Dostovalova, Anastasia M.
Gorshenin, Andrey K.
Starichkova, Julia V.
Arzamasov, Kirill M.
2024Journal Article, cited 0 times
Lung-PET-CT-Dx
Pancreas-CT
Data processing methods using neural networks are gaining increasing popularity in a variety of medical diagnostic problems. Most often, such methods are used to study medical images of human organs acquired with computed tomography (CT), magnetic resonance imaging, ultrasound, and other non-invasive techniques. Diagnosing pathology in this case becomes a medical image segmentation problem, that is, a search for groups (regions) of pixels that characterize certain objects in the image. One of the most successful methods for solving this problem is the U-Net neural network architecture developed in 2015. This review examines various modifications of the classic U-Net architecture. The reviewed papers are divided into several key areas: modifications of the encoder and decoder, the use of attention blocks, combination with elements of other architectures, methods for introducing additional features, transfer learning and approaches for processing small sets of real data. Various training sets are considered, for which the best values of various metrics achieved in the literature are given (Dice similarity coefficient, intersection over union (IoU), overall accuracy, and others). A summary table is provided indicating the types of images analyzed and the pathologies detected on them. Promising directions for further modifications to improve the quality of solving segmentation problems are outlined. This review can be useful for determining a set of tools for identifying various diseases, primarily cancers. The presented algorithms can form the basis of professional intelligent medical assistants.
Proteogenomic insights suggest druggable pathways in endometrial carcinoma
Yongchao Dou
Lizabeth Katsnelson
Marina A. Gritsenko
Yingwei Hu
Boris Reva
Runyu Hong
Yi-Ting Wang
Iga Kolodziejczak
Rita Jui-Hsien Lu
Chia-Feng Tsai
Wen Bu
Wenke Liu
Xiaofang Guo
Eunkyung An
Rebecca C. Arend
Jasmin Bavarva
Lijun Chen
Rosalie K. Chu
Andrzej Czekański
Teresa Davoli
Elizabeth G. Demicco
Deborah DeLair
Kelly Devereaux
Saravana M. Dhanasekaran
Peter Dottino
Bailee Dover
Thomas L. Fillmore
McKenzie Foxall
Catherine E. Hermann
Tara Hiltke
Galen Hostetter
Marcin Jędryka
Scott D. Jewell
Isabelle Johnson
Andrea G. Kahn
Amy T. Ku
Chandan Kumar-Sinha
Paweł Kurzawa
Alexander J. Lazar
Rossana Lazcano
Jonathan T. Lei
Yi Li
Yuxing Liao
Tung-Shing M. Lih
Tai-Tu Lin
John A. Martignetti
Ramya P. Masand
Rafał Matkowski
Wilson McKerrow
Mehdi Mesri
Matthew E. Monroe
Jamie Moon
Ronald J. Moore
Michael D. Nestor
Chelsea Newton
Tatiana Omelchenko
Gilbert S. Omenn
Samuel H. Payne
Vladislav A. Petyuk
Ana I. Robles
Henry Rodriguez
Kelly V. Ruggles
Dmitry Rykunov
Sara R. Savage
Athena A. Schepmoes
Tujin Shi
Zhiao Shi
Jimin Tan
Mason Taylor
Mathangi Thiagarajan
Joshua M. Wang
Karl K. Weitz
Bo Wen
C.M. Williams
Yige Wu
Matthew A. Wyczalkowski
Xinpei Yi
Xu Zhang
Rui Zhao
David Mutch
Arul M. Chinnaiyan
Richard D. Smith
Alexey I. Nesvizhskii
Pei Wang
Maciej Wiznerowicz
Li Ding
D.R. Mani
Hui Zhang
Matthew L. Anderson
Karin D. Rodland
Bing Zhang
Tao Liu
David Fenyö
Clinical Proteomic Tumor Analysis Consortium
Andrzej Antczak
Meenakshi Anurag
Thomas Bauer
Chet Birger
Michael J. Birrer
Melissa Borucki
Shuang Cai
Anna Calinawan
Steven A. Carr
Patricia Castro
Sandra Cerda
Daniel W. Chan
David Chesla
Marcin P. Cieslik
Sandra Cottingham
Rajiv Dhir
Marcin J. Domagalski
Brian J. Druker
Elizabeth Duffy
Nathan J. Edwards
Robert Edwards
Matthew J. Ellis
Jennifer Eschbacher
Mina Fam
Brenda Fevrier-Sullivan
Jesse Francis
John Freymann
Stacey Gabriel
Gad Getz
Michael A. Gillette
Andrew K. Godwin
Charles A. Goldthwaite
Pamela Grady
Jason Hafron
Pushpa Hariharan
Barbara Hindenach
Katherine A. Hoadley
Jasmine Huang
Michael M. Ittmann
Ashlie Johnson
Corbin D. Jones
Karen A. Ketchum
Justin Kirby
Toan Le
Avi Ma'ayan
Rashna Madan
Sailaja Mareedu
Peter B. McGarvey
Francesmary Modugno
Rebecca Montgomery
Kristen Nyce
Amanda G. Paulovich
Barbara L. Pruetz
Liqun Qi
Shannon Richey
Eric E. Schadt
Yvonne Shutack
Shilpi Singh
Michael Smith
Darlene Tansil
Ratna R. Thangudu
Matt Tobin
Ki Sung Um
Negin Vatanian
Alex Webster
George D. Wilson
Jason Wright
Kakhaber Zaalishvili
Zhen Zhang
Grace Zhao
Cancer Cell2023Journal Article, cited 0 times
CPTAC-UCEC
TCGA-UCEC
We characterized a prospective endometrial carcinoma (EC) cohort containing 138 tumors and 20 enriched normal tissues using 10 different omics platforms. Targeted quantitation of two peptides can predict antigen processing and presentation machinery activity, and may inform patient selection for immunotherapy. Association analysis between MYC activity and metformin treatment in both patients and cell lines suggests a potential role for metformin treatment in non-diabetic patients with elevated MYC activity. PIK3R1 in-frame indels are associated with elevated AKT phosphorylation and increased sensitivity to AKT inhibitors. CTNNB1 hotspot mutations are concentrated near phosphorylation sites mediating pS45-induced degradation of β-catenin, which may render Wnt-FZD antagonists ineffective. Deep learning accurately predicts EC subtypes and mutations from histopathology images, which may be useful for rapid diagnosis. Overall, this study identified molecular and imaging markers that can be further investigated to guide patient stratification for more precise treatment of EC.
Sensorless End-to-End Freehand Ultrasound with Physics Inspired Network
Dou, Yimeng
Mu, Fangzhou
Li, Yin
Varghese, Tomy
2023Conference Paper, cited 0 times
Prostate-MRI-US-Biopsy
Three-dimensional ultrasound (3D US) imaging finds broad applications in clinical practice. Compared to 'wobbler' (motor-embedded) or 'matrix array' transducers, which suffer from a restricted field of view, freehand US offers more flexibility in mapping the 3D volume along the scanning path. Nevertheless, current methods for reconstructing 3D US volumes using freehand scanning are challenged by significant elevational shifts along the scanning trajectory. Previous research explored the integration of motion sensors to resolve transducer drift, yet frequent motion artifacts often compromise the benefits brought by these sensors. Recent work turned to deep neural networks (DNNs) for estimating the relative pose of imaging planes between frames in the scanning trajectory, allowing for sensorless freehand US. However, these data-driven models fall short in accuracy as they lack physical constraints, hence limiting their use in clinical practice. Inspired by the physical properties of US, we design a physics inspired DNN architecture that can better leverage the contextual cues between elevational frames for sensorless freehand 3D US reconstruction, and demonstrate substantially improved reconstruction accuracy.
A segmentation-based method improving the performance of N4 bias field correction on T2-weighted MR imaging data of the prostate
Dovrou, A.
Nikiforaki, K.
Zaridis, D.
Manikis, G. C.
Mylona, E.
Tachos, N.
Tsiknakis, M.
Fotiadis, D. I.
Marias, K.
Magn Reson Imaging2023Journal Article, cited 2 times
Website
PROSTATEx
PROSTATE-MRI
PROSTATE-DIAGNOSIS
PI-CAI
Male
Humans
*Prostate/pathology
*Image Processing, Computer-Assisted/methods
Magnetic Resonance Imaging/methods
Bias
Phantoms, Imaging
Full width at half maximum
N4 bias field correction
Periprostatic fat segmentation
Prostate imaging
Magnetic Resonance (MR) images suffer from spatial inhomogeneity, known as bias field corruption. The N4ITK filter is a state-of-the-art method used for correcting the bias field to optimize MR-based quantification. In this study, a novel approach is presented to quantitatively evaluate the performance of N4 bias field correction for pelvic prostate imaging. An exploratory analysis, regarding the different values of convergence threshold, shrink factor, fitting level, number of iterations and use of mask, is performed to quantify the performance of the N4 filter in pelvic MR images. The performance of a total of 240 different N4 configurations is examined using the Full Width at Half Maximum (FWHM) of the segmented periprostatic fat distribution as the evaluation metric. Phantom T2-weighted images were used to assess the performance of N4 for a uniform test tissue mimicking material, excluding factors such as patient-related susceptibility and anatomy heterogeneity. Moreover, 89 and 204 T2-weighted patient images from two public datasets acquired by scanners with a combined surface and endorectal coil at 1.5 T and a surface coil at 3 T, respectively, were utilized and corrected with a variable set of N4 parameters. Furthermore, two external public datasets were used to validate the performance of the N4 filter in T2-weighted patient images acquired under various scanning conditions with different magnetic field strengths and coils. The results show that the set of N4 parameters, converging to optimal representations of fat in the image, were: convergence threshold 0.001, shrink factor 2, fitting level 6, number of iterations 100 and the use of default mask for prostate images acquired by a combined surface and endorectal coil at both 1.5 T and 3 T. The corresponding optimal N4 configuration for MR prostate images acquired by a surface coil at 1.5 T or 3 T was: convergence threshold 0.001, shrink factor 2, fitting level 5, number of iterations 25 and the use of default mask. Hence, periprostatic fat segmentation can be used to define the optimal settings for achieving T2-weighted prostate images free from bias field corruption to provide robust input for further analysis.
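The reported optimal configuration maps naturally onto SimpleITK's N4 filter. Below is a minimal sketch (assuming SimpleITK >= 2.0 and a hypothetical input file) of the surface-plus-endorectal-coil setting: convergence threshold 0.001, shrink factor 2, 6 fitting levels, 100 iterations, and the default (Otsu) mask:

```python
import SimpleITK as sitk

image = sitk.Cast(sitk.ReadImage("t2w_prostate.nii.gz"), sitk.sitkFloat32)
mask = sitk.OtsuThreshold(image, 0, 1, 200)          # N4's usual default mask

shrunk_img = sitk.Shrink(image, [2] * image.GetDimension())   # shrink factor 2
shrunk_mask = sitk.Shrink(mask, [2] * mask.GetDimension())

n4 = sitk.N4BiasFieldCorrectionImageFilter()
n4.SetConvergenceThreshold(0.001)
n4.SetMaximumNumberOfIterations([100] * 6)           # 100 iterations, 6 fitting levels
n4.Execute(shrunk_img, shrunk_mask)

# Apply the estimated field at full resolution:
log_bias = n4.GetLogBiasFieldAsImage(image)
corrected = image / sitk.Exp(log_bias)
sitk.WriteImage(corrected, "t2w_prostate_n4.nii.gz")
```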
Magnetic resonance imaging of in vitro urine flow in single and tandem stented ureters subject to extrinsic ureteral obstruction
Dror, I.
Harris, T.
Kalchenko, V.
Shilo, Y.
Berkowitz, B.
Int J Urol2022Journal Article, cited 1 times
Website
OBJECTIVE: To quantify the relative volumetric flows in stent and ureter lumina, as a function of stent size and configuration, in both unobstructed and externally obstructed stented ureters. METHODS: Magnetic resonance imaging was used to measure flow in stented ureters using a phantom kidney model. Volumetric flow in the stent and ureter lumina were determined along the stented ureters, for each of four single stent sizes (4.8F, 6F, 7F, and 8F), and for tandem (6F and 7F) configurations. Measurements were made in the presence of a fully encircling extrinsic ureteral obstruction as well as in benchmark cases with no extrinsic ureteral obstruction. RESULTS: Under no obstruction, the relative contribution of urine flow in single stents is 1-10%, while the relative contributions to flow are ~6 and ~28% for tandem 6F and 7F, respectively. In the presence of an extrinsic ureteral obstruction and single stents, all urine passes within the stent lumen near the extrinsic ureteral obstruction. For tandem 6F and 7F stents under extrinsic ureteral obstruction, relative volumetric flows in the two stent lumina are ~73% and ~81%, respectively, with the remainder passing through the ureter lumen. CONCLUSIONS: Magnetic resonance imaging demonstrates that with no extrinsic ureteral obstruction, minimal urine flow occurs within a stent. Stent lumen flow is significant in the presence of extrinsic ureteral obstruction, in the vicinity of the extrinsic ureteral obstruction. For tandem stents subjected to extrinsic ureteral obstruction, urine flow also occurs in the ureter lumen between the stents, which can reduce the likelihood of kidney failure even in the case of both stent lumina being occluded.
Breast MRI radiomics for the pretreatment prediction of response to neoadjuvant chemotherapy in node-positive breast cancer patients
Drukker, Karen
Edwards, Alexandra
Doyle, Christopher
Papaioannou, John
Kulkarni, Kirti
Giger, Maryellen L.
Journal of Medical Imaging2019Journal Article, cited 0 times
ISPY1
The purpose of this study was to evaluate breast MRI radiomics in predicting, prior to any treatment, the response to neoadjuvant chemotherapy (NAC) in patients with invasive lymph node (LN)-positive breast cancer for two tasks: (1) prediction of pathologic complete response and (2) prediction of post-NAC LN status. Our study included 158 patients, with 19 showing post-NAC complete pathologic response (pathologic TNM stage T0,N0,MX) and 139 showing incomplete response. Forty-two patients were post-NAC LN-negative, and 116 were post-NAC LN-positive. We further analyzed prediction of response by hormone receptor subtype of the primary cancer (77 hormone receptor-positive, 39 HER2-enriched, 38 triple negative, and 4 cancers with unknown receptor status). Only pre-NAC MRIs underwent computer analysis, initialized by an expert breast radiologist indicating index cancers and metastatic axillary sentinel LNs on DCE-MRI images. Forty-nine computer-extracted radiomics features were obtained, both for the primary cancers and for the metastatic sentinel LNs. Since the dataset contained MRIs acquired at 1.5 T and at 3.0 T, we eliminated features affected by magnet strength using the Mann-Whitney U-test with the null-hypothesis that 1.5 T and 3.0 T samples were selected from populations having the same distribution. Bootstrapping and ROC analysis were used to assess performance of individual features in the two classification tasks. Eighteen features appeared unaffected by magnet strength. Pre-NAC tumor features generally appeared uninformative in predicting response to therapy. In contrast, some pre-NAC LN features were able to predict response: two pre-NAC LN features were able to predict pathologic complete response (area under the ROC curve (AUC) up to 0.82 [0.70; 0.88]), and another two were able to predict post-NAC LN-status (AUC up to 0.72 [0.62; 0.77]), respectively. In the analysis by a hormone receptor subtype, several potentially useful features were identified for predicting response to therapy in the hormone receptor-positive and HER2-enriched cancers.
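The two statistical steps described above translate into a short scripting pattern; the sketch below, with synthetic data standing in for the radiomic features, is an illustration rather than the study's code:

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
feat = rng.normal(size=158)              # one radiomic feature, 158 patients
is_3t = rng.random(158) < 0.5            # scanner field strength per patient
label = rng.integers(0, 2, 158)          # e.g. pathologic complete response

# Step 1: drop features whose 1.5 T and 3.0 T distributions differ.
_, p = mannwhitneyu(feat[is_3t], feat[~is_3t])
if p > 0.05:
    # Step 2: bootstrap the AUC of the surviving feature.
    aucs = [roc_auc_score(label[idx], feat[idx])
            for idx in (rng.integers(0, 158, 158) for _ in range(1000))]
    ci_low, ci_high = np.percentile(aucs, [2.5, 97.5])
```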
Long short-term memory networks predict breast cancer recurrence in analysis of consecutive MRIs acquired during the course of neoadjuvant chemotherapy
The purpose of this study was to assess long short-term memory networks in the prediction of recurrence-free survival in breast cancer patients using features extracted from MRIs acquired during the course of neoadjuvant chemotherapy. In the I-SPY1 dataset, up to 4 MRI exams were available per patient acquired at pre-treatment, early-treatment, interregimen, and pre-surgery time points. Breast cancers were automatically segmented and 8 features describing kinetic curve characteristics were extracted. We assessed performance of long short-term memory networks in the prediction of recurrence-free survival status at 2 years and at 5 years post-surgery. For these predictions, we analyzed MRIs from women who had at least 2 (or 5) years of recurrence-free follow-up or experienced recurrence or death within that timeframe: 157 women and 73 women, respectively. One approach used features extracted from all available exams and the other approach used features extracted from only exams prior to the second cycle of neoadjuvant chemotherapy. The areas under the ROC curve in the prediction of recurrence-free survival status at 2 years post-surgery were 0.80, 95% confidence interval [0.68; 0.88] and 0.75 [0.62; 0.83] for networks trained with all 4 available exams and only the 'early' exams, respectively. Hazard ratios at the lowest, median, and highest quartile cut-points were 6.29 [2.91; 13.62], 3.27 [1.77; 6.03], 1.65 [0.83; 3.27] and 2.56 [1.20; 5.48], 3.01 [1.61; 5.66], 2.30 [1.14; 4.67], respectively. Long short-term memory networks were able to predict recurrence-free survival in breast cancer patients, also when analyzing only MRIs acquired 'early on' during neoadjuvant treatment.
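A hedged PyTorch sketch of the setup described above: sequences of up to four exams with eight kinetic features each, summarised by an LSTM into a recurrence probability. Layer sizes and names are illustrative assumptions:

```python
import torch
import torch.nn as nn

class RecurrenceLSTM(nn.Module):
    def __init__(self, n_features=8, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, n_exams, n_features)
        _, (h, _) = self.lstm(x)           # h: (1, batch, hidden), final state
        return torch.sigmoid(self.head(h[-1]))

model = RecurrenceLSTM()
exams = torch.randn(157, 4, 8)             # 157 patients, 4 exams, 8 features
risk = model(exams)                        # (157, 1) recurrence probabilities
```

Restricting the input to the first two exams corresponds to the 'early' variant evaluated in the study.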
Most-enhancing tumor volume by MRI radiomics predicts recurrence-free survival “early on” in neoadjuvant treatment of breast cancer
Drukker, Karen
Li, Hui
Antropova, Natalia
Edwards, Alexandra
Papaioannou, John
Giger, Maryellen L
Cancer Imaging2018Journal Article, cited 0 times
ACRIN-FLT-Breast
Radiomics
BREAST
BACKGROUND: The hypothesis of this study was that MRI-based radiomics has the ability to predict recurrence-free survival "early on" in breast cancer neoadjuvant chemotherapy. METHODS: A subset, based on availability, of the ACRIN 6657 dynamic contrast-enhanced MR images was used in which we analyzed images of all women imaged at pre-treatment baseline (141 women: 40 with a recurrence, 101 without) and all those imaged after completion of the first cycle of chemotherapy, i.e., at early treatment (143 women: 37 with a recurrence vs. 105 without). Our method was completely automated apart from manual localization of the approximate tumor center. The most enhancing tumor volume (METV) was automatically calculated for the pre-treatment and early treatment exams. Performance of METV in the task of predicting a recurrence was evaluated using ROC analysis. The association of recurrence-free survival with METV was assessed using a Cox regression model controlling for patient age, race, and hormone receptor status and evaluated by C-statistics. Kaplan-Meier analysis was used to estimate survival functions. RESULTS: The C-statistics for the association of METV with recurrence-free survival were 0.69 with 95% confidence interval of [0.58; 0.80] at pre-treatment and 0.72 [0.60; 0.84] at early treatment. The hazard ratios calculated from Kaplan-Meier curves were 2.28 [1.08; 4.61], 3.43 [1.83; 6.75], and 4.81 [2.16; 10.72] for the lowest quartile, median quartile, and upper quartile cut-points for METV at early treatment, respectively. CONCLUSION: The performance of the automatically-calculated METV rivaled that of a semi-manual model described for the ACRIN 6657 study (published C-statistic 0.72 [0.60; 0.84]), which involved the same dataset but required semi-manual delineation of the functional tumor volume (FTV) and knowledge of the pre-surgical residual cancer burden.
BRATS2021: Exploring Each Sequence in Multi-modal Input for Baseline U-net Performance
Druzhinina, Polina
Kondrateva, Ekaterina
Bozhenko, Arseny
Yarkin, Vyacheslav
Sharaev, Maxim
Kurmukov, Anvar
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BraTS 2020
BraTS 2021
BRAIN
Segmentation
Algorithm Development
Since 2012 the BraTS competition has become a benchmark for brain MRI segmentation. The top-ranked solutions from the competition leaderboard of past years are primarily heavy and sophisticated ensembles of deep neural networks. The complexity of the proposed solutions can restrict their clinical use due to the long execution time and complicate the model transfer to the other datasets, especially with the lack of some MRI sequences in multimodal input. The current paper provides a baseline segmentation accuracy for each separate MRI modality and all four sequences (T1, T1c, T2, and FLAIR) on conventional 3D U-net architecture. We explore the predictive ability of each modality to segment enhancing core, tumor core, and whole tumor. We then compare the baseline performance with BraTS 2019–2020 state-of-the-art solutions. Finally, we share the code and trained weights to facilitate further research on model transfer to different domains and use in other applications.
3D-RADNet: Extracting labels from DICOM metadata for training general medical domain deep 3D convolution neural networks
Training a deep convolutional neural network requires a large amount of data to obtain good performance and generalisable results. Transfer learning approaches from datasets such as ImageNet have become important in increasing accuracy and lowering the number of training samples required. However, as of now, there has not been a popular dataset for training on 3D volumetric medical images. This is mainly due to the time and expert knowledge required to accurately annotate medical images. In this study, we present a method for extracting labels from DICOM metadata that contain information on the appearance of the scans, in order to train a medical-domain 3D convolutional neural network. The labels include imaging modalities and sequences, patient orientation and view, presence of contrast agent, scan target and coverage, and slice spacing. We applied our method and extracted labels from a large cancer imaging dataset from TCIA to train a medical-domain 3D deep convolutional neural network. We evaluated the effectiveness of using our proposed network in transfer learning for a liver segmentation task and found that our network achieved superior segmentation performance (DICE=90.0) compared to training from scratch (DICE=41.8). Our proposed network shows promising results for use as a backbone network for transfer learning to other tasks. Our approach, along with the network itself, can potentially be used to extract features from large-scale unlabelled DICOM datasets.
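A minimal pydicom sketch of the general idea, reading appearance-related labels straight from DICOM headers; the exact tag set and heuristics here are assumptions, not the paper's pipeline:

```python
import pydicom

def labels_from_dicom(path):
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    orient = ds.get("ImageOrientationPatient")       # direction cosines, 6 values
    return {
        "modality": ds.get("Modality", ""),              # e.g. "MR", "CT"
        "sequence": ds.get("SeriesDescription", ""),     # e.g. "AX T1 POST"
        "contrast": bool(ds.get("ContrastBolusAgent")),  # contrast agent given?
        "slice_spacing": float(ds.get("SpacingBetweenSlices",
                                      ds.get("SliceThickness", 0) or 0)),
        # crude axial-view check: slice normal roughly along the z axis
        "axial": orient is not None
                 and abs(orient[2]) < 0.5 and abs(orient[5]) < 0.5,
    }
```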
Prediction of Pathological Complete Response to Neoadjuvant Chemotherapy in Breast Cancer Using Deep Learning with Integrative Imaging, Molecular and Demographic Data
Neoadjuvant chemotherapy is widely used to reduce tumor size to make surgical excision manageable and to minimize distant metastasis. Assessing and accurately predicting pathological complete response is important in treatment planning for breast cancer patients. In this study, we propose a novel approach integrating 3D MRI imaging data, molecular data and demographic data using a convolutional neural network to predict the likelihood of pathological complete response to neoadjuvant chemotherapy in breast cancer. We take post-contrast T1-weighted 3D MRI images without the need for tumor segmentation, and incorporate molecular subtypes and demographic data. In our predictive model, MRI data and non-imaging data are convolved to inform each other through interactions, instead of a concatenation of multiple data type channels. This is achieved by channel-wise multiplication of the intermediate results of imaging and non-imaging data. We use a subset of curated data from the I-SPY-1 TRIAL of 112 patients with stage 2 or 3 breast cancer who underwent standard neoadjuvant chemotherapy. Our method yielded an accuracy of 0.83, AUC of 0.80, sensitivity of 0.68 and specificity of 0.88. Our model significantly outperforms models using imaging data only or traditional concatenation models. Our approach has the potential to aid physicians to identify patients who are likely to respond to neoadjuvant chemotherapy at diagnosis or early treatment, thus facilitating treatment planning, treatment execution, or mid-treatment adjustment.
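The channel-wise multiplication described above can be sketched in a few lines of PyTorch; sizes, names, and the surrounding layers are illustrative assumptions rather than the paper's architecture:

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, img_channels=32, clin_dim=8):
        super().__init__()
        self.conv = nn.Conv3d(1, img_channels, 3, padding=1)
        self.gate = nn.Sequential(nn.Linear(clin_dim, img_channels), nn.Sigmoid())
        self.head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                                  nn.Linear(img_channels, 1))

    def forward(self, mri, clinical):
        f = self.conv(mri)                               # (B, C, D, H, W)
        g = self.gate(clinical)[:, :, None, None, None]  # (B, C, 1, 1, 1)
        return torch.sigmoid(self.head(f * g))           # channel-wise product

model = GatedFusion()
out = model(torch.randn(2, 1, 16, 64, 64), torch.randn(2, 8))  # pCR probability
```

The multiplication lets the molecular and demographic vector reweight image channels, rather than simply appending extra input channels.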
Local Wavelet Pattern: A New Feature Descriptor for Image Retrieval in Medical CT Databases
Dubey, Shiv Ram
Singh, Satish Kumar
Singh, Rajat Kumar
IEEE Trans Image Process2015Journal Article, cited 52 times
Website
wavelet
Computed Tomography (CT)
A new image feature description based on the local wavelet pattern (LWP) is proposed in this paper to characterize medical computed tomography (CT) images for content-based CT image retrieval. In the proposed work, the LWP is derived for each pixel of the CT image by utilizing the relationship of the center pixel with the local neighboring information. In contrast to the local binary pattern that only considers the relationship between a center pixel and its neighboring pixels, the presented approach first utilizes the relationship among the neighboring pixels using local wavelet decomposition, and finally considers its relationship with the center pixel. A center pixel transformation scheme is introduced to match the range of the center value with the range of local wavelet decomposed values. Moreover, the introduced local wavelet decomposition scheme is centrally symmetric and suitable for CT images. The novelty of this paper lies in the following two ways: 1) encoding local neighboring information with local wavelet decomposition and 2) computing LWP using local wavelet decomposed values and transformed center pixel values. We tested the performance of our method over three CT image databases in terms of precision and recall. We also compared the proposed LWP descriptor with the other state-of-the-art local image descriptors, and the experimental results suggest that the proposed method outperforms other methods for CT image retrieval.
Tissue Differentiation Based on Classification of Morphometric Features of Nuclei
Dudzińska, Dominika
Piórkowski, Adam
2020Book Section, cited 0 times
Pan-Cancer-Nuclei-Seg
The aim of the article is to analyze the shape of nuclei of various tissues and to assess tumor differentiation based on morphometric measurements. For this purpose, an experiment was conducted, the results of which indicate whether it is possible to determine a tissue's type based on the mentioned features. The measurements were performed on a publicly available data set containing 1,356 hematoxylin- and eosin-stained images with nucleus segmentations for 14 different human tissues. Morphometric analysis of cell nuclei using ImageJ software took 17 parameters into account. Classification of the obtained results was performed in Matlab R2018b software using the SVM and t-SNE algorithms, which showed that some cancers can be distinguished with an accuracy close to 90% (lung squamous cell cancer vs others; breast cancer vs cervical cancer).
An Ad Hoc Random Initialization Deep Neural Network Architecture for Discriminating Malignant Breast Cancer Lesions in Mammographic Images
Duggento, Andrea
Aiello, Marco
Cavaliere, Carlo
Cascella, Giuseppe L
Cascella, Davide
Conte, Giovanni
Guerrisi, Maria
Toschi, Nicola
Contrast Media Mol Imaging2019Journal Article, cited 1 times
Website
CBIS-DDSM
Breast
Convolutional Neural Network (CNN)
Radiomics
Classification
Breast cancer is one of the most common cancers in women, with more than 1,300,000 cases and 450,000 deaths each year worldwide. In this context, recent studies showed that early breast cancer detection, along with suitable treatment, could significantly reduce breast cancer death rates in the long term. X-ray mammography is still the instrument of choice in breast cancer screening. In this context, the false-positive and false-negative rates commonly achieved by radiologists are extremely arduous to estimate and control, although some authors have estimated figures of up to 20% of total diagnoses or more. The introduction of novel artificial intelligence (AI) technologies applied to the diagnosis and, possibly, prognosis of breast cancer could revolutionize the current status of the management of the breast cancer patient by assisting the radiologist in clinical image interpretation. Lately, a breakthrough in the AI field has been brought about by the introduction of deep learning techniques in general and of convolutional neural networks in particular. Such techniques require no a priori feature space definition from the operator and are able to achieve classification performances which can even surpass human experts. In this paper, we design and validate an ad hoc CNN architecture specialized in breast lesion classification from imaging data only. We explore a total of 260 model architectures in a train-validation-test split in order to propose a model selection criterion which places emphasis on reducing false negatives while still retaining acceptable accuracy. We achieve an area under the receiver operating characteristic curve of 0.785 (accuracy 71.19%) on the test set, demonstrating how an ad hoc random initialization architecture can and should be fine tuned to a specific problem, especially in biomedical applications.
Disparities in the Demographic Composition of The Cancer Imaging Archive
Dulaney, A.
Virostko, J.
Radiol Imaging Cancer2024Journal Article, cited 1 times
Website
ACRIN-Contralateral-Breast-MR
ACRIN-DSC-MR-Brain
ACRIN-FLT-Breast
ACRIN-NSCLC-FDG-PET
ISPY1/ACRIN 6657
ACRIN-FMISO-Brain (ACRIN 6684)
ACRIN 6698
ACRIN 6657
Brain-TR-GammaKnife
Breast-Cancer-Screening-DBT
Breast-MRI-NACT-Pilot
Burdenko-GBM-Progression
CBIS-DDSM
CDD-CESM
CMMD
CPTAC-BRCA
CPTAC-GBM
CPTAC-LSCC
CPTAC-LUAD
Duke-Breast-Cancer-MRI
Lung-Fused-CT-Pathology
Lung-PET-CT-Dx
LungCT-Diagnosis
Meningioma-SEG-CLASS
NLST
NSCLC Radiogenomics
NSCLC-Radiomics
NSCLC-Radiomics-Genomics
NSCLC-Radiomics-Interobserver1
Post-NAT-BRCA
QIN-BREAST-02
REMBRANDT
TCGA-BRCA
TCGA-GBM
TCGA-LGG
TCGA-LUAD
TCGA-LUSC
UCSF-PDGM
UPENN-GBM
Female
Humans
Male
Artificial Intelligence
Ethnicity
*Neoplasms/diagnostic imaging/epidemiology
Retrospective Studies
Racial Groups
Datasets as Topic
Age
Bias
Cancer Health Disparities
Ethics
Health Disparities
Machine Learning
Meta-Analysis
Race
Sex
Purpose To characterize the demographic distribution of The Cancer Imaging Archive (TCIA) studies and compare them with those of the U.S. cancer population. Materials and Methods In this retrospective study, data from TCIA studies were examined for the inclusion of demographic information. Of 189 studies in TCIA up until April 2023, a total of 83 human cancer studies were found to contain supporting demographic data. The median patient age and the sex, race, and ethnicity proportions of each study were calculated and compared with those of the U.S. cancer population, provided by the Surveillance, Epidemiology, and End Results Program and the Centers for Disease Control and Prevention U.S. Cancer Statistics Data Visualizations Tool. Results The median age of TCIA patients was found to be 6.84 years lower than that of the U.S. cancer population (P = .047) and contained more female than male patients (53% vs 47%). American Indian and Alaska Native, Black or African American, and Hispanic patients were underrepresented in TCIA studies by 47.7%, 35.8%, and 14.7%, respectively, compared with the U.S. cancer population. Conclusion The results demonstrate that the patient demographics of TCIA data sets do not reflect those of the U.S. cancer population, which may decrease the generalizability of artificial intelligence radiology tools developed using these imaging data sets. Keywords: Ethics, Meta-Analysis, Health Disparities, Cancer Health Disparities, Machine Learning, Artificial Intelligence, Race, Ethnicity, Sex, Age, Bias Published under a CC BY 4.0 license.
This manuscript outlines the design of methods and initial progress on automatic detection of glioma from MRI images using deep neural networks, applied and evaluated for the 2020 Brain Tumor Segmentation (BraTS) Challenge. Our approach builds on existing work using U-net architectures and evaluates a variety of deep learning techniques, including model averaging and adaptive learning rates.
Automated Classification of Lung Cancer Subtypes Using Deep Learning and CT-Scan Based Radiomic Analysis
Dunn, Bryce
Pierobon, Mariaelena
Wei, Qi
Bioengineering2023Journal Article, cited 0 times
Lung-PET-CT-Dx
Artificial intelligence and emerging data science techniques are being leveraged to interpret medical image scans. Traditional image analysis relies on visual interpretation by a trained radiologist, which is time-consuming and can, to some degree, be subjective. The development of reliable, automated diagnostic tools is a key goal of radiomics, a fast-growing research field which combines medical imaging with personalized medicine. Radiomic studies have demonstrated potential for accurate lung cancer diagnoses and prognostications. The practice of delineating the tumor region of interest, known as segmentation, is a key bottleneck in the development of generalized classification models. In this study, the incremental multiple resolution residual network (iMRRN), a publicly available and trained deep learning segmentation model, was applied to automatically segment CT images collected from 355 lung cancer patients included in the dataset "Lung-PET-CT-Dx", obtained from The Cancer Imaging Archive (TCIA), an open-access source for radiological images. We report a failure rate of 4.35% when using the iMRRN to segment tumor lesions within plain CT images in the lung cancer CT dataset. Seven classification algorithms were trained on the extracted radiomic features and tested for their ability to classify different lung cancer subtypes. Over-sampling was used to handle unbalanced data. Chi-square tests revealed the higher order texture features to be the most predictive when classifying lung cancers by subtype. The support vector machine showed the highest accuracy, 92.7% (0.97 AUC), when classifying three histological subtypes of lung cancer: adenocarcinoma, small cell carcinoma, and squamous cell carcinoma. The results demonstrate the potential of AI-based computer-aided diagnostic tools to automatically diagnose subtypes of lung cancer by coupling deep learning image segmentation with supervised classification. Our study demonstrated the integrated application of existing AI techniques in the non-invasive and effective diagnosis of lung cancer subtypes, and also shed light on several practical issues concerning the application of AI in biomedicine.
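A compact scikit-learn/imbalanced-learn sketch of the classification stage described above (oversampling, chi-square feature ranking, SVM), using synthetic features in place of the extracted radiomics; names and sizes are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np
from imblearn.over_sampling import RandomOverSampler
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(355, 100))            # radiomic features per patient
y = rng.choice(["ADC", "SCLC", "SCC"], size=355, p=[0.6, 0.15, 0.25])

# Balance the three subtypes, then rank features by chi-square and train an SVM.
# chi2 needs non-negative inputs, hence the min-max scaling inside the pipeline.
X_res, y_res = RandomOverSampler(random_state=0).fit_resample(X, y)
clf = make_pipeline(MinMaxScaler(), SelectKBest(chi2, k=20),
                    SVC(kernel="rbf", probability=True))
acc = cross_val_score(clf, X_res, y_res, cv=5).mean()
```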
Assessing the Effects of Software Platforms on Volumetric Segmentation of Glioblastoma
Dunn, William D Jr
Aerts, Hugo J W L
Cooper, Lee A
Holder, Chad A
Hwang, Scott N
Jaffe, Carle C
Brat, Daniel J
Jain, Rajan
Flanders, Adam E
Zinn, Pascal O
Colen, Rivka R
Gutman, David A
J Neuroimaging Psychiatry Neurol2016Journal Article, cited 0 times
Website
Radiogenomics
Magnetic resonance imaging (MRI)
Segmentation
TCGA
3D Slicer
BRAIN
Background: Radiological assessments of biologically relevant regions in glioblastoma have been associated with genotypic characteristics, implying a potential role in personalized medicine. Here, we assess the reproducibility and association with survival of two volumetric segmentation platforms and explore how methodology could impact subsequent interpretation and analysis. Methods: Post-contrast T1- and T2-weighted FLAIR MR images of 67 TCGA patients were segmented into five distinct compartments (necrosis, contrast-enhancement, FLAIR, post contrast abnormal, and total abnormal tumor volumes) by two quantitative image segmentation platforms - 3D Slicer and a method based on Velocity AI and FSL. We investigated the internal consistency of each platform by correlation statistics, association with survival, and concordance with consensus neuroradiologist ratings using ordinal logistic regression. Results: We found high correlations between the two platforms for FLAIR, post contrast abnormal, and total abnormal tumor volumes (Spearman's r(67) = 0.952, 0.959, and 0.969, respectively). Only modest agreement was observed for necrosis and contrast-enhancement volumes (r(67) = 0.693 and 0.773, respectively), likely arising from differences in manual and automated segmentation methods of these regions by 3D Slicer and Velocity AI/FSL, respectively. Survival analysis based on AUC revealed significant predictive power of both platforms for the following volumes: contrast-enhancement, post contrast abnormal, and total abnormal tumor volumes. Finally, ordinal logistic regression demonstrated correspondence to manual ratings for several features. Conclusion: Tumor volume measurements from both volumetric platforms produced highly concordant and reproducible estimates across platforms for general features. As automated or semi-automated volumetric measurements replace manual linear or area measurements, it will become increasingly important to keep in mind that measurement differences between segmentation platforms for more detailed features could influence downstream survival or radiogenomic analyses.
ProstAttention-Net: A deep attention model for prostate cancer segmentation by aggressiveness in MRI scans
Duran, A.
Dussert, G.
Rouviere, O.
Jaouen, T.
Jodoin, P. M.
Lartizien, C.
Med Image Anal2022Journal Article, cited 7 times
Website
Prostate Fused-MRI-Pathology
Computer Aided Detection (CADe)
Magnetic Resonance Imaging (MRI)
Humans
Male
Multiparametric Magnetic Resonance Imaging
Neoplasm Grading
PROSTATE
*Attention models
*Deep learning
*Prostate cancer
*Semantic segmentation
Multiparametric magnetic resonance imaging (mp-MRI) has shown excellent results in the detection of prostate cancer (PCa). However, characterizing prostate lesions aggressiveness in mp-MRI sequences is impossible in clinical practice, and biopsy remains the reference to determine the Gleason score (GS). In this work, we propose a novel end-to-end multi-class network that jointly segments the prostate gland and cancer lesions with GS group grading. After encoding the information on a latent space, the network is separated in two branches: 1) the first branch performs prostate segmentation 2) the second branch uses this zonal prior as an attention gate for the detection and grading of prostate lesions. The model was trained and validated with a 5-fold cross-validation on a heterogeneous series of 219 MRI exams acquired on three different scanners prior prostatectomy. In the free-response receiver operating characteristics (FROC) analysis for clinically significant lesions (defined as GS >6) detection, our model achieves 69.0%+/-14.5% sensitivity at 2.9 false positive per patient on the whole prostate and 70.8%+/-14.4% sensitivity at 1.5 false positive when considering the peripheral zone (PZ) only. Regarding the automatic GS group grading, Cohen's quadratic weighted kappa coefficient (kappa) is 0.418+/-0.138, which is the best reported lesion-wise kappa for GS segmentation to our knowledge. The model has encouraging generalization capacities with kappa=0.120+/-0.092 on the PROSTATEx-2 public dataset and achieves state-of-the-art performance for the segmentation of the whole prostate gland with a Dice of 0.875+/-0.013. Finally, we show that ProstAttention-Net improves performance in comparison to reference segmentation models, including U-Net, DeepLabv3+ and E-Net. The proposed attention mechanism is also shown to outperform Attention U-Net.
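A hedged PyTorch sketch of the zonal attention idea described above, with the predicted prostate probability gating the lesion branch; shapes, names, and the 1x1 mixing convolution are illustrative assumptions, not the published architecture:

```python
import torch
import torch.nn as nn

class ZonalAttentionGate(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.mix = nn.Conv2d(channels + 1, channels, kernel_size=1)

    def forward(self, lesion_feats, prostate_prob):
        # lesion_feats: (B, C, H, W); prostate_prob: (B, 1, H, W) in [0, 1]
        gated = lesion_feats * prostate_prob           # suppress extra-prostatic
        return self.mix(torch.cat([gated, prostate_prob], dim=1))

gate = ZonalAttentionGate()
out = gate(torch.randn(2, 32, 96, 96), torch.rand(2, 1, 96, 96))
```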
Saliency Based Deep Neural Network for Automatic Detection of Gadolinium-Enhancing Multiple Sclerosis Lesions in Brain MRI
The appearance of contrast-enhanced pathologies (e.g., lesions, cancer) is an important marker of disease activity, stage and treatment efficacy in clinical trials. The automatic detection and segmentation of these enhanced pathologies remain a difficult challenge, as they can be very small and visibly similar to other non-pathological enhancements (e.g. blood vessels). In this paper, we propose a deep neural network classifier for the detection and segmentation of Gadolinium-enhancing lesions in brain MRI of patients with Multiple Sclerosis (MS). To avoid false positive and false negative assertions, the proposed end-to-end network uses an enhancement-based attention mechanism which assigns saliency based on the differences between the T1-weighted images before and after injection of Gadolinium, and works to first identify candidate lesions and then to remove the false positives. The effect of the saliency map is evaluated on 2293 patient multi-channel MRI scans acquired during two proprietary, multi-center clinical trials for MS treatments. Inclusion of the attention mechanism results in a decrease in false positive lesion voxels over a basic U-Net [2] and DeepMedic [6]. In terms of lesion-level detection, the framework achieves a sensitivity of 82% at a false discovery rate of 0.2, significantly outperforming the other two methods when detecting small lesions. Experiments aimed at predicting the presence of Gad lesion activity in patient scans (i.e. the presence of more than 1 lesion) result in high accuracy, showing: (a) significantly improved accuracy over DeepMedic, and (b) a reduction in the errors in predicting the degree of lesion activity (in terms of per scan lesion counts) over a standard U-Net and DeepMedic.
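A minimal NumPy sketch of an enhancement-based saliency prior of the kind described above, from co-registered pre- and post-Gadolinium T1 volumes; the normalisation and clipping choices are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def enhancement_saliency(t1_pre, t1_post, eps=1e-6):
    def z(x):                                # per-volume intensity normalisation
        return (x - x.mean()) / (x.std() + eps)
    return np.clip(z(t1_post) - z(t1_pre), 0, None)   # keep positive uptake only

pre, post = np.random.rand(64, 64, 64), np.random.rand(64, 64, 64)
saliency = enhancement_saliency(pre, post)   # feeds the attention mechanism
```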
Deep learning‐based aggregate analysis to identify cut‐off points for decision‐making in pancreatic cancer detection
Dzemyda, Gintautas
Kurasova, Olga
Medvedev, Viktor
Šubonienė, Aušra
Gulla, Aistė
Samuilis, Artūras
Jagminas, Džiugas
Strupas, Kęstutis
Expert Systems2024Journal Article, cited 0 times
Pancreas-CT
Computed tomography
Deep Learning
This study addresses the problem of detecting pancreatic cancer by classifying computed tomography (CT) images into cancerous and non-cancerous classes using the proposed deep learning-based aggregate analysis framework. The application of deep learning, as a branch of machine learning and artificial intelligence, to specific medical challenges can lead to the early detection of diseases, thus accelerating the process towards timely and effective intervention. The essence of the classification step is the reasonable selection of an optimal cut-off point, which is used as a threshold for evaluating the model results. The choice of this point is key to ensure efficient evaluation of the classification results, which directly affects the diagnostic accuracy. A significant aspect of this research is the incorporation of private CT images from Vilnius University Hospital Santaros Klinikos, combined with publicly available data sets. To investigate the capabilities of the deep learning-based framework and to maximize pancreatic cancer diagnostic performance, experimental studies were carried out combining data from different sources. Classification accuracy metrics such as the Youden index, (0, 1)-criterion, Matthews correlation coefficient, the F1 score, LR+, LR-, balanced accuracy, and g-mean were used to find the optimal cut-off point in order to balance sensitivity and specificity. By carefully analyzing and comparing the obtained results, we aim to develop a reliable system that will not only improve the accuracy of pancreatic cancer detection but also have wider application in the early diagnosis of other malignancies.
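Two of the cut-off criteria named above have simple closed forms over the ROC curve; the sketch below, on synthetic scores, shows the Youden index and the closest-to-(0,1) criterion (illustrative data, not the study's):

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)                      # cancerous / non-cancerous
score = y * 0.3 + rng.random(500) * 0.7          # model output in [0, 1]

fpr, tpr, thr = roc_curve(y, score)
youden_cut = thr[np.argmax(tpr - fpr)]           # maximise sens + spec - 1
closest01_cut = thr[np.argmin(fpr**2 + (1 - tpr)**2)]   # nearest to (0, 1)
```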
Optimal Cut-Off Points for Pancreatic Cancer Detection Using Deep Learning Techniques
Dzemyda, Gintautas
Kurasova, Olga
Medvedev, Viktor
Šubonienė, Aušra
Gulla, Aistė
Samuilis, Artūras
Jagminas, Džiugas
Strupas, Kęstutis
2024Book Section, cited 0 times
Pancreas-CT
Machine Learning
Deep learning-based approaches are attracting increasing attention in medicine. Applying deep learning models to specific tasks in the medical field is very useful for early disease detection. In this study, the problem of detecting pancreatic cancer by classifying CT images was solved using the provided deep learning-based framework. The choice of the optimal cut-off point is particularly important for an effective assessment of the results of the classification. In order to investigate the capabilities of the deep learning-based framework and to maximise pancreatic cancer diagnostic performance through the selection of optimal cut-off points, experimental studies were carried out using open-access data. Four classification accuracy metrics (Youden index, closest-to-(0,1) criterion, balanced accuracy, g-mean) were used to find the optimal cut-off point in order to balance sensitivity and specificity. This study compares different approaches for finding the optimal cut-off points and selects those that are most clinically relevant.
Automatic Multi-organ Segmentation on Abdominal CT with Dense V-networks
Gibson E
Giganti F
Hu Y
Bonmati E
Bandula S
Gurusamy K
Davidson B
Pereira S
Clarkson M
Barratt D
IEEE Transactions on Medical Imaging2018Journal Article, cited 14 times
Website
Computed tomography (CT)
Image segmentation
KIDNEY
LIVER
PANCREAS
Three-dimensional displays
Abdominal CT
Deep learning
Duodenum
ESOPHAGUS
Gallbladder
Gastrointestinal tract
Segmentation
Spleen
STOMACH
Algorithm Development
Prospective Evaluation of Repeatability and Robustness of Radiomic Descriptors in Healthy Brain Tissue Regions In Vivo Across Systematic Variations in T2‐Weighted Magnetic Resonance Imaging Acquisition Parameters
Eck, Brendan
Chirra, Prathyush V.
Muchhala, Avani
Hall, Sophia
Bera, Kaustav
Tiwari, Pallavi
Madabhushi, Anant
Seiberlich, Nicole
Viswanath, Satish E.
Journal of Magnetic Resonance Imaging2021Journal Article, cited 0 times
TCGA-GBM
BACKGROUND: Radiomic descriptors from magnetic resonance imaging (MRI) are promising for disease diagnosis and characterization but may be sensitive to differences in imaging parameters.
OBJECTIVE: To evaluate the repeatability and robustness of radiomic descriptors within healthy brain tissue regions on prospectively acquired MRI scans; in a test-retest setting, under controlled systematic variations of MRI acquisition parameters, and after postprocessing.
STUDY TYPE: Prospective.
SUBJECTS: Fifteen healthy participants.
FIELD STRENGTH/SEQUENCE: A 3.0 T, axial T2-weighted 2D turbo spin-echo pulse sequence, 181 scans acquired (2 test/retest reference scans and 12 with systematic variations in contrast weighting, resolution, and acceleration per participant; removing scans with artifacts).
ASSESSMENT: One hundred and forty-six radiomic descriptors were extracted from a contiguous 2D region of white matter in each scan, before and after postprocessing.
STATISTICAL TESTS: Repeatability was assessed in a test/retest setting and between manual and automated annotations for the reference scan. Robustness was evaluated between the reference scan and each group of variant scans (contrast weighting, resolution, and acceleration). Both repeatability and robustness were quantified as the proportion of radiomic descriptors that fell into distinct ranges of the concordance correlation coefficient (CCC): excellent (CCC > 0.85), good (0.7 ≤ CCC ≤ 0.85), moderate (0.5 ≤ CCC < 0.7), and poor (CCC < 0.5); for unprocessed and postprocessed scans separately.
RESULTS: Good to excellent repeatability was observed for 52% of radiomic descriptors between test/retest scans and 48% of descriptors between automated vs. manual annotations, respectively. Contrast weighting (TR/TE) changes were associated with the largest proportion of highly robust radiomic descriptors (21%, after processing). Image resolution changes resulted in the largest proportion of poorly robust radiomic descriptors (97%, before postprocessing). Postprocessing of images with only resolution/acceleration differences resulted in 73% of radiomic descriptors showing poor robustness.
DATA CONCLUSIONS: Many radiomic descriptors appear to be nonrobust across variations in MR contrast weighting, resolution, and acceleration, as well in test-retest settings, depending on feature formulation and postprocessing.
EVIDENCE LEVEL: 2 TECHNICAL EFFICACY: Stage 2.
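The CCC bands used above follow Lin's concordance correlation coefficient, which is short enough to sketch directly (synthetic test/retest values shown; not the study's data):

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

test = np.random.default_rng(0).normal(size=15)          # feature on test scans
retest = test + np.random.default_rng(1).normal(scale=0.2, size=15)
grade = "excellent" if ccc(test, retest) > 0.85 else "lower"   # study's top band
```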
Improving Brain Tumor Diagnosis Using MRI Segmentation Based on Collaboration of Beta Mixture Model and Learning Automata
Edalati-rad, Akram
Mosleh, Mohammad
Arabian Journal for Science and Engineering2018Journal Article, cited 0 times
Website
brain cancer
segmentation
beta mixture
learning automata (LA)
dice similarity score (DSS)
Jaccard similarity Index (JSI)
GBM
Interpretable Machine Learning with Brain Image and Survival Data
Eder, Matthias
Moser, Emanuel
Holzinger, Andreas
Jean-Quartier, Claire
Jeanquartier, Fleur
BioMedInformatics2022Journal Article, cited 1 times
Website
BraTS 2020
Radiomics
Glioma
Image Interpretation
Computer-Assisted/*methods
Deep learning
Convolutional Neural Network (CNN)
Recent developments in research on artificial intelligence (AI) in medicine deal with the analysis of image data such as Magnetic Resonance Imaging (MRI) scans to support the decision-making of medical personnel. For this purpose, machine learning (ML) algorithms are often used, which do not explain the internal decision-making process at all. Thus, it is often difficult to validate or interpret the results of the applied AI methods. This manuscript aims to overcome this problem by using methods of explainable AI (XAI) to interpret the decision-making of an ML algorithm in the use case of predicting the survival rate of patients with brain tumors based on MRI scans. Therefore, we explore the analysis of brain images together with survival data to predict survival in gliomas with a focus on improving the interpretability of the results. Using the Brain Tumor Segmentation dataset BraTS 2020, we used a well-validated dataset for evaluation and relied on a convolutional neural network structure to improve the explainability of important features by adding Shapley overlays. The trained network models were used to evaluate SHapley Additive exPlanations (SHAP) directly and were not optimized for accuracy. The resulting overfitting of some network structures is therefore seen as a use case of the presented interpretation method. It is shown that the network structure can be validated by experts using visualizations, thus making the decision-making of the method interpretable. Our study highlights the feasibility of combining explainers with 3D voxels and also the fact that the interpretation of prediction results significantly supports the evaluation of results. The implementation in python is available on gitlab as "XAIforBrainImgSurv".
Automated 3-D Tissue Segmentation Via Clustering
Edwards, Samuel
Brown, Scott
Lee, Michael
Journal of Biomedical Engineering and Medical Imaging2018Journal Article, cited 0 times
Head-Neck Cetuximab
Segmentation
clustering
Performance Analysis of Prediction Methods for Lossless Image Compression
Performance analysis of several state-of-the-art prediction approaches is performed for lossless image compression. To support this analysis, special edge models are presented: bound-oriented and gradient-oriented approaches. Several heuristic assumptions are proposed for the considered intra- and inter-component predictors using the defined edge models. Numerical evaluation on image test sets with various statistical features confirms the heuristic assumptions.
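For orientation, the sketch below implements a classic gradient-oriented predictor from this family, the median edge detector (MED) used in JPEG-LS; the paper's own edge models are not reproduced here:

```python
import numpy as np

def med_predict(img):
    """Median edge detector (MED/LOCO-I) prediction; borders left at zero."""
    img = img.astype(int)
    pred = np.zeros_like(img)
    for y in range(1, img.shape[0]):
        for x in range(1, img.shape[1]):
            a, b, c = img[y, x - 1], img[y - 1, x], img[y - 1, x - 1]
            if c >= max(a, b):
                pred[y, x] = min(a, b)      # horizontal or vertical edge
            elif c <= min(a, b):
                pred[y, x] = max(a, b)
            else:
                pred[y, x] = a + b - c      # smooth (planar) region
    return pred

img = np.random.randint(0, 256, (32, 32))
residual = img - med_predict(img)           # entropy-coded in a real codec
```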
Decision forests for learning prostate cancer probability maps from multiparametric MRI
MRI-based prostate and dominant lesion segmentation using cascaded scoring convolutional neural network
Eidex, Z. A.
Wang, T.
Lei, Y.
Axente, M.
Akin-Akintayo, O. O.
Ojo, O. A. A.
Akintayo, A. A.
Roper, J.
Bradley, J. D.
Liu, T.
Schuster, D. M.
Yang, X.
Med Phys2022Journal Article, cited 0 times
PROSTATEx
Humans
Image Processing, Computer-Assisted/methods
Magnetic Resonance Imaging (MRI)
Male
Neural Networks, Computer
PET/CT
*Positron Emission Tomography Computed Tomography
*Prostate/diagnostic imaging
Retrospective Studies
Deep learning
prostate and dominant lesion segmentation
PURPOSE: Dose escalation to dominant intraprostatic lesions (DILs) is a novel treatment strategy to improve the treatment outcome of prostate radiation therapy. Treatment planning requires accurate and fast delineation of the prostate and DILs. In this study, a 3D cascaded scoring convolutional neural network is proposed to automatically segment the prostate and DILs from MRI. METHODS AND MATERIALS: The proposed cascaded scoring convolutional neural network performs end-to-end segmentation by locating a region-of-interest (ROI), identifying the object within the ROI, and defining the target. A scoring strategy, which is learned to judge the segmentation quality of the DIL, is integrated into the cascaded convolutional neural network to solve the challenge of segmenting the irregular shapes of the DIL. To evaluate the proposed method, 77 patients who underwent MRI and PET/CT were retrospectively investigated. The prostate and DIL ground truth contours were delineated by experienced radiologists. The proposed method was evaluated with fivefold cross-validation and holdout testing. RESULTS: The average centroid distance, volume difference, and Dice similarity coefficient (DSC) value for prostate/DIL are 4.3 +/- 7.5/3.73 +/- 3.78 mm, 4.5 +/- 7.9/0.41 +/- 0.59 cc, and 89.6 +/- 8.9/84.3 +/- 11.9%, respectively. Comparable results were obtained in the holdout test. Similar or superior segmentation outcomes were seen when comparing the results of the proposed method to those of competing segmentation approaches. CONCLUSIONS: The proposed automatic segmentation method can accurately and simultaneously segment both the prostate and DILs. The intended future use for this algorithm is focal boost prostate radiation therapy.
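The reported metrics are straightforward to compute from binary masks on a common voxel grid; the sketch below (illustrative helper functions, not the authors' code) shows the DSC and centroid distance:

```python
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def centroid_distance_mm(a, b, spacing):
    ca = np.array(np.nonzero(a)).mean(axis=1)       # voxel-space centroids
    cb = np.array(np.nonzero(b)).mean(axis=1)
    return np.linalg.norm((ca - cb) * np.asarray(spacing))

pred = np.zeros((64, 128, 128), bool); pred[20:40, 50:80, 50:80] = True
gt = np.zeros_like(pred); gt[22:42, 52:82, 48:78] = True
dsc = dice(pred, gt)
dist = centroid_distance_mm(pred, gt, (3.0, 0.5, 0.5))  # (z, y, x) spacing in mm
```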
Computer-Aided Classification of Cell Lung Cancer Via PET/CT Images Using Convolutional Neural Network
El Hamdi, Dhekra
Elouedi, Ines
Slim, Ihsen
International Journal of Image and Graphics2023Journal Article, cited 0 times
Website
Lung-PET-CT-Dx
LUNG
Computed Tomography (CT)
Positron Emission Tomography (PET)
Convolutional Neural Network (CNN)
Classification
Imaging features
Lung cancer is the leading cause of cancer-related death worldwide. Therefore, early diagnosis remains essential to allow access to appropriate curative treatment strategies. This paper presents a novel approach to assess the ability of Positron Emission Tomography/Computed Tomography (PET/CT) images for the classification of lung cancer in association with artificial intelligence techniques. We have built, in this work, a multi-output Convolutional Neural Network (CNN) as a tool to assist the staging of patients with lung cancer. The TNM staging system as well as histologic subtype classification were adopted as a reference. The VGG-16 network is applied to the PET/CT images to extract the most relevant features from images. The obtained features are then transmitted to a three-branch classifier for Nodal (N) staging, Tumor (T) staging, and histologic subtype classification. Experimental results demonstrated that our CNN model achieves good results in TN staging and histology classification. The proposed architecture classified the tumor size with a high accuracy of 0.94 and an area under the curve (AUC) of 0.97 when tested on the Lung-PET-CT-Dx dataset. It also yielded high performance for N staging with an accuracy of 0.98. Besides, our approach has achieved better accuracy than state-of-the-art methods in histologic classification.
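A hedged Keras sketch of a three-branch design of the kind described above: a VGG16 backbone feeding separate heads for T stage, N stage, and histology. Class counts, input size, and head structure are illustrative assumptions, not the published model:

```python
import tensorflow as tf

backbone = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                       input_shape=(224, 224, 3))
x = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)
t_out = tf.keras.layers.Dense(4, activation="softmax", name="t_stage")(x)
n_out = tf.keras.layers.Dense(4, activation="softmax", name="n_stage")(x)
h_out = tf.keras.layers.Dense(3, activation="softmax", name="histology")(x)

model = tf.keras.Model(backbone.input, [t_out, n_out, h_out])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```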
A Content-Based-Image-Retrieval Approach for Medical Image Repositories
A deep learning approach for ovarian cancer detection and classification based on fuzzy deep learning
El-Latif, Eman I Abd
El-Dosuky, Mohamed
Darwish, Ashraf
Hassanien, Aboul Ella
Scientific Reports2024Journal Article, cited 0 times
Website
Ovarian Bevacizumab Response
Intelligent Computer-Aided Model for Efficient Diagnosis of Digital Breast Tomosynthesis 3D Imaging Using Deep Learning
El-Shazli, Alaa M. Adel
Youssef, Sherin M.
Soliman, Abdel Hamid
Applied Sciences2022Journal Article, cited 0 times
Breast-Cancer-Screening-DBT
Digital breast tomosynthesis (DBT) is a highly promising 3D imaging modality for breast diagnosis. Tissue overlapping is a challenge with traditional 2D mammograms; however, since digital breast tomosynthesis can obtain three-dimensional images, tissue overlapping is reduced, making it easier for radiologists to detect abnormalities and resulting in improved and more accurate diagnosis. In this study, a new computer-aided multi-class diagnosis system is proposed that integrates DBT augmentation and colour feature mapping with a modified deep learning architecture (Mod_AlexNet). An optimization layer with multiple high-performing optimizers is incorporated into Mod_AlexNet so that it can be evaluated and optimised using various optimization techniques. Two experimental scenarios are applied. The first scenario proposed a computer-aided diagnosis (CAD) model that integrated DBT augmentation, image enhancement techniques and colour feature mapping with six deep learning models for feature extraction, including ResNet-18, AlexNet, GoogleNet, MobileNetV2, VGG-16 and DenseNet-201, to efficiently classify DBT slices. The second scenario compared the performance of the newly proposed Mod_AlexNet architecture and traditional AlexNet using several optimization techniques, and different evaluation performance metrics were computed. The optimization techniques included adaptive moment estimation (Adam), root mean squared propagation (RMSProp), and stochastic gradient descent with momentum (SGDM), for different batch sizes, including 32, 64 and 512. Experiments have been conducted on a large benchmark dataset of breast tomography scans. The performance of the first scenario was compared in terms of accuracy, precision, sensitivity, specificity, runtime, and f1-score, while in the second scenario, performance was compared in terms of training accuracy, training loss, and test accuracy. In the first scenario, results demonstrated that AlexNet reported improvement rates of 1.69%, 5.13%, 6.13%, 4.79% and 1.6%, compared to ResNet-18, MobileNetV2, GoogleNet, DenseNet-201 and VGG16, respectively. Experimental analysis with different optimization techniques and batch sizes demonstrated that the proposed Mod_AlexNet architecture outperformed AlexNet in terms of test accuracy with improvement rates of 3.23%, 1.79% and 1.34% when compared using SGDM, Adam, and RMSProp optimizers, respectively.
Extraction of Cancer Section from 2D Breast MRI Slice Using Brain Storm Optimization
Imaging genomics of glioblastoma: state of the art bridge between genomics and neuroradiology
ElBanan, Mohamed G
Amer, Ahmed M
Zinn, Pascal O
Colen, Rivka R
Neuroimaging Clinics of North America2015Journal Article, cited 29 times
Website
Radiogenomics
IDH mutation
BRAIN
Glioblastoma Multiforme (GBM)
Computer Aided Diagnosis (CADx)
Glioblastoma (GBM) is the most common and most aggressive primary malignant tumor of the central nervous system. Recently, researchers concluded that the "one-size-fits-all" approach for treatment of GBM is no longer valid and research should be directed toward more personalized and patient-tailored treatment protocols. Identification of the molecular and genomic pathways underlying GBM is essential for achieving this personalized and targeted therapeutic approach. Imaging genomics represents a new era as a noninvasive surrogate for genomic and molecular profile identification. This article discusses the basics of imaging genomics of GBM, its role in treatment decision-making, and its future potential in noninvasive genomic identification.
The Veterans Affairs Precision Oncology Data Repository, a Clinical, Genomic, and Imaging Research Database
Elbers, Danne C.
Fillmore, Nathanael R.
Sung, Feng-Chi
Ganas, Spyridon S.
Prokhorenkov, Andrew
Meyer, Christopher
Hall, Robert B.
Ajjarapu, Samuel J.
Chen, Daniel C.
Meng, Frank
Grossman, Robert L.
Brophy, Mary T.
Do, Nhan V.
Patterns2020Journal Article, cited 0 times
Website
APOLLO-1-VA
The Veterans Affairs Precision Oncology Data Repository (VA-PODR) is a large, nationwide repository of de-identified data on patients diagnosed with cancer at the Department of Veterans Affairs (VA). Data include longitudinal clinical data from the VA's nationwide electronic health record system and the VA Central Cancer Registry, targeted tumor sequencing data, and medical imaging data including computed tomography (CT) scans and pathology slides. A subset of the repository is available at the Genomic Data Commons (GDC) and The Cancer Imaging Archive (TCIA), and the full repository is available through the Veterans Precision Oncology Data Commons (VPODC). By releasing this de-identified dataset, we aim to advance Veterans' health care through enabling translational research on the Veteran population by a wide variety of researchers.
Lung Cancer Detection and Severity Analysis with a 3D Deep Learning CNN Model Using CT-DICOM Clinical Dataset
Eldho, K. J.
Nithyanandh, S.
Indian Journal of Science and Technology2024Journal Article, cited 0 times
Lung-PET-CT-Dx
Objectives: To propose a new AI-based CAD model for early detection and severity analysis of pulmonary (lung) cancer disease. A deep learning artificial intelligence-based approach is employed to maximize the discrimination power in CT images and minimize the dimensionality in order to boost detection accuracy. Methods: The AI-based 3D Convolutional Neural Network (3D-DLCNN) method is employed to learn complex patterns and features in a robust way for efficient detection and classification. The pulmonary nodules are identified by 3D Mask-R-CNN at the initial level, and classification is done by 3D-DLCNN. Kernel Density Estimation (KDE) is used to discover the error data points in the extracted features for early removal before candidate screening. The study uses the CT-DICOM dataset, which includes 355 instances and 251135 CT-DICOM images with target attributes of cancer, healthy, and severity condition (if cancer is positive). Statistical outlier detection is utilized to measure the z-score of each feature to reduce the data point deviation. The intensity and pixel masking of the CT-DICOM images is measured using the ER-NCN method to identify the severity of the disease. The performance of the 3D-DLCNN model is evaluated using the MATLAB R2020a tool, and comparative analysis is done with prevailing detection and classification approaches such as GA-PSO, SVM, KNN, and BPNN. Findings: The suggested pulmonary detection 3D-DLCNN model outperforms the prevailing models with promising results of a 93% accuracy rate, 92.7% sensitivity, 93.4% specificity, 0.8 AUC-ROC, 6.6% FPR, and 0.87 C-Index, which helps pulmonologists detect pulmonary cancer and identify its severity for early diagnosis. Novelty: The novel hybrid 3D-DLCNN approach has the ability to detect pulmonary disease and analyze the severity score of the patient at an early stage during the candidate screening process. It overcomes the limitations of the prevailing machine learning models GA-PSO, SVM, KNN, and BPNN. Keywords: Artificial Intelligence, Disease Prediction, Lung Cancer, Deep Learning, Cancer Detection, Computational Model, 3D-DLCNN
Hiding privacy and clinical information in medical images using QR code
This study aims to hide patients' private and clinical details in DICOM files by embedding QR code images of the same size using a steganographic technique. The proposed method is based on the properties of the discrete cosine transform (DCT) of the DICOM images to embed a QR code image. The proposed approach includes two main parts: the embedding of data and the extraction procedure. Moreover, the embedded QR code can be reconstructed blindly from the stego DICOM without the presence of the original DICOM file. The performance of the proposed approach was tested using the TCIA COVID-19 dataset and evaluated in terms of Peak Signal to Noise Ratio (PSNR), Structural Similarity Index (SSIM) and Bit Error Rate (BER) values. The simulation results achieved high PSNR values, ranging between 63.47 dB and 81.97 dB, after embedding a QR code image within a DICOM image of the same size.
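A minimal sketch of the general idea, embedding a binary QR bit stream into mid-frequency DCT coefficients of a grayscale slice; the band choice, embedding strength, and function names are illustrative assumptions, not the paper's exact scheme:

```python
# Illustrative DCT-domain embedding sketch; NOT the paper's exact method.
import numpy as np
from scipy.fft import dctn, idctn

def embed_qr(cover, qr_bits, strength=4.0):
    """Embed a flat array of {0,1} bits into a mid-frequency DCT band.
    Assumes cover dimensions are divisible by 4."""
    coeffs = dctn(cover.astype(float), norm="ortho")
    h, w = coeffs.shape
    band = coeffs[h // 4 : h // 2, w // 4 : w // 2].ravel()  # copy of the band
    band[: qr_bits.size] = np.where(qr_bits > 0, strength, -strength)
    coeffs[h // 4 : h // 2, w // 4 : w // 2] = band.reshape(h // 4, w // 4)
    return idctn(coeffs, norm="ortho")

def extract_qr(stego, n_bits):
    """Blind extraction: only the stego image is needed, as in the paper."""
    coeffs = dctn(stego.astype(float), norm="ortho")
    h, w = coeffs.shape
    band = coeffs[h // 4 : h // 2, w // 4 : w // 2].ravel()
    return (band[:n_bits] > 0).astype(np.uint8)

cover = np.full((64, 64), 128.0)                  # toy stand-in for a DICOM slice
bits = np.random.default_rng(1).integers(0, 2, size=100).astype(np.uint8)
stego = embed_qr(cover, bits)
assert np.array_equal(extract_qr(stego, bits.size), bits)
```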
CAE-ResVGG FusionNet: A Feature Extraction Framework Integrating Convolutional Autoencoders and Transfer Learning for Immature White Blood Cells in Acute Myeloid Leukemia
Acute myeloid leukemia (AML) is a highly aggressive cancer form that affects myeloid cells, leading to the excessive growth of immature white blood cells (WBCs) in both bone marrow and peripheral blood. Timely AML detection is crucial for effective treatment and patient well-being. Currently, AML diagnosis relies on the manual recognition of immature WBCs through peripheral blood smear analysis, which is time-consuming, prone to errors, and subject to inter-observer variation. This study aimed to develop a computer-aided diagnostic framework for AML, called "CAE-ResVGG FusionNet", that precisely identifies and classifies immature WBCs into their respective subtypes. The proposed framework leverages an integrated approach, by combining a convolutional autoencoder (CAE) with finely tuned adaptations of the VGG19 and ResNet50 architectures to extract features from CAE-derived embeddings. The process begins with a binary classification model distinguishing between mature and immature WBCs followed by a multiclassifier further classifying immature cells into four subtypes: myeloblasts, monoblasts, erythroblasts, and promyelocytes. The CAE-ResVGG FusionNet workflow comprises four primary stages, including data preprocessing, feature extraction, classification, and validation. The preprocessing phase involves applying data augmentation methods using geometric transformations and synthetic image generation using the CAE to address imbalance in the WBC distribution. Feature extraction involves image embedding and transfer learning, where CAE-derived image representations are used by a custom integrated model of the VGG19 and ResNet50 pretrained models. The classification phase employs a weighted ensemble approach that leverages VGG19 and ResNet50, where the optimal weighting parameters are selected using a grid search. The model performance was assessed during the validation phase using the overall accuracy, precision, and sensitivity, while the area under the receiver operating characteristic curve (AUC) was used to evaluate the model's discriminatory capability. The proposed framework exhibited notable results, achieving an average accuracy of 99.9%, sensitivity of 91.7%, and precision of 98.8%. The model demonstrated exceptional discriminatory ability, as evidenced by an AUC of 99.6%. Significantly, the proposed system outperformed previous methods, indicating its superior diagnostic ability.
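The weighted-ensemble step lends itself to a compact illustration. A minimal sketch, assuming `p_vgg` and `p_resnet` are the two branches' validation class-probability matrices (placeholder data below), of grid-searching the blending weight:

```python
# Grid-search a blend weight w for two models' class probabilities.
import numpy as np

def best_blend_weight(p_vgg, p_resnet, y_val, grid=np.linspace(0, 1, 21)):
    best_w, best_acc = 0.0, -1.0
    for w in grid:
        pred = np.argmax(w * p_vgg + (1 - w) * p_resnet, axis=1)
        acc = (pred == y_val).mean()
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc

rng = np.random.default_rng(0)
p_vgg = rng.dirichlet(np.ones(4), size=50)        # placeholder probabilities
p_resnet = rng.dirichlet(np.ones(4), size=50)
y_val = rng.integers(0, 4, size=50)               # placeholder labels
w, acc = best_blend_weight(p_vgg, p_resnet, y_val)
```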
Feature Extraction of White Blood Cells Using CMYK-Moment Localization and Deep Learning in Acute Myeloid Leukemia Blood Smear Microscopic Images
Elhassan, Tusneem Ahmed M.
Rahim, Mohd Shafry Mohd
Swee, Tan Tian
Hashim, Siti Zaiton Mohd
Aljurf, Mahmoud
IEEE Access2022Journal Article, cited 0 times
AML-Cytomorphology_LMU
Artificial intelligence has revolutionized medical diagnosis, particularly for cancers. Acute myeloid leukemia (AML) diagnosis is a tedious protocol that is prone to human and machine errors. In several instances, it is difficult to make an accurate final decision even after careful examination by an experienced pathologist. However, computer-aided diagnosis (CAD) can help reduce the errors and time associated with AML diagnosis. White Blood Cells (WBC) detection is a critical step in AML diagnosis, and deep learning is considered a state-of-the-art approach for WBC detection. However, the accuracy of WBC detection is strongly associated with the quality of the extracted features used in training the pixel-wise classification models. CAD depends on studying the different patterns of changes associated with WBC counts and features. In this study, a new hybrid feature extraction method was developed using image processing and deep learning methods. The proposed method consists of two steps: 1) a region of interest (ROI) is extracted using the CMYK-moment localization method and 2) deep learning-based features are extracted using a CNN-based feature fusion method. Several classification algorithms are used to evaluate the significance of the extracted features. The proposed feature extraction method was evaluated using an external dataset and benchmarked against other feature extraction methods. The proposed method achieved excellent performance, generalization, and stability using all the classifiers, with overall classification accuracies of 97.57% and 96.41% using the primary and secondary datasets, respectively. This method has opened a new alternative to improve the detection of WBCs, which could lead to a better diagnosis of AML.
Artificial intelligence in oncology: From bench to clinic
Elkhader, Jamal
Elemento, Olivier
2021Journal Article, cited 0 times
PROSTATE-DIAGNOSIS
PROSTATE-MRI
PROSTATEx
In the past few years, Artificial Intelligence (AI) techniques have been applied to almost every facet of oncology, from basic research to drug development and clinical care. In the clinical arena where AI has perhaps received the most attention, AI is showing promise in enhancing and automating image-based diagnostic approaches in fields such as radiology and pathology. Robust AI applications, which retain high performance and reproducibility over multiple datasets, extend from predicting indications for drug development to improving clinical decision support using electronic health record data. In this article, we review some of these advances. We also introduce common concepts and fundamentals of AI and its various uses, along with its caveats, to provide an overview of the opportunities and challenges in the field of oncology. Leveraging AI techniques productively to provide better care throughout a patient's medical journey can fuel the predictive promise of precision medicine.
An Integrative Approach to Drug Development Using Machine Learning
Despite recent advances in life sciences and technology, the amount of time and money spent in the drug development process remains drastically inflated. Thus, there is a need to rapidly recognize characteristics that will help identify novel therapies. First, we address the increased need for drug repurposing, the approach of identifying new indications for approved or investigational drugs. We present a novel drug repurposing method called Creating A Translational Network for Indication Prediction (CATNIP), which relies solely on biological and chemical drug characteristics to identify disease areas for specific drugs and drug classes. This drug-focused approach allows our method to be used for both FDA-approved drugs and investigational drugs. Our method, trained with 2,576 diverse small molecules, is built using easily interpretable features, such as chemical structure and targets, allowing probable drug-disease mechanisms to be discovered from the predictions made. The strength of this approach is demonstrated through a repurposing network that can be utilized to identify drug class candidate opportunities. To treat many of these conditions, a drug compound is orally ingested by a patient. One of the major absorption sites for drugs is the small intestine, and drug properties such as permeability are proven important to maximize treatment efforts. Poor absorption of drug candidates is likely to lead to failure in the drug development process, so we propose an innovative approach to predict the permeability of a drug. The Caco-2 cell model is a standard surrogate for predicting in vitro intestinal permeability. We collected one of the largest experimentally based datasets of Caco-2 values to create a computational model. Using an approach called graph convolutional networks, which treats molecules as graphs, we are able to take in a line-notation molecular structure and successfully make predictions about a drug compound's permeability. Altogether, this work demonstrates how the integration of diverse datasets can aid in addressing the multitude of challenging problems in the field of drug discovery. Computational approaches such as these, which prioritize applicability and interpretability, have the strong potential to transform and improve upon the drug development pipeline.
A genome-wide gain-of-function screen identifies CDKN2C as a HBV host factor
Eller, Carla
Heydmann, Laura
Colpitts, Che C.
El Saghire, Houssein
Piccioni, Federica
Jühling, Frank
Majzoub, Karim
Pons, Caroline
Bach, Charlotte
Lucifora, Julie
Lupberger, Joachim
Nassal, Michael
Cowley, Glenn S.
Fujiwara, Naoto
Hsieh, Sen-Yung
Hoshida, Yujin
Felli, Emanuele
Pessaux, Patrick
Sureau, Camille
Schuster, Catherine
Root, David E.
Verrier, Eloi R.
Baumert, Thomas F.
Nature Communications2020Journal Article, cited 0 times
Website
TCGA-LIHC
Chronic HBV infection is a major cause of liver disease and cancer worldwide. Approaches for cure are lacking, and the knowledge of virus-host interactions is still limited. Here, we perform a genome-wide gain-of-function screen using a poorly permissive hepatoma cell line to uncover host factors enhancing HBV infection. Validation studies in primary human hepatocytes identified CDKN2C as an important host factor for HBV replication. CDKN2C is overexpressed in highly permissive cells and HBV-infected patients. Mechanistic studies show a role for CDKN2C in inducing cell cycle G1 arrest through inhibition of CDK4/6 associated with the upregulation of HBV transcription enhancers. A correlation between CDKN2C expression and disease progression in HBV-infected patients suggests a role in HBV-induced liver disease. Taken together, we identify a previously undiscovered clinically relevant HBV host factor, allowing the development of improved infectious model systems for drug discovery and the study of the HBV life cycle.
Diffusion MRI quality control and functional diffusion map results in ACRIN 6677/RTOG 0625: a multicenter, randomized, phase II trial of bevacizumab and chemotherapy in recurrent glioblastoma
Ellingson, Benjamin M
Kim, Eunhee
Woodworth, Davis C
Marques, Helga
Boxerman, Jerrold L
Safriel, Yair
McKinstry, Robert C
Bokstein, Felix
Jain, Rajan
Chi, T Linda
Sorensen, A Gregory
Gilbert, Mark R
Barboriak, Daniel P
Int J Oncol2015Journal Article, cited 27 times
Website
ACRIN-DSC-MR-Brain
ACRIN 6677
BRAIN
Glioblastoma Multiforme (GBM)
Magnetic Resonance Imaging (MRI)
Functional diffusion mapping (fDM) is a cancer imaging technique that quantifies voxelwise changes in apparent diffusion coefficient (ADC). Previous studies have shown value of fDMs in bevacizumab therapy for recurrent glioblastoma multiforme (GBM). The aim of the present study was to implement explicit criteria for diffusion MRI quality control and independently evaluate fDM performance in a multicenter clinical trial (RTOG 0625/ACRIN 6677). A total of 123 patients were enrolled in the current multicenter trial and signed institutional review board-approved informed consent at their respective institutions. MRI was acquired prior to and 8 weeks following therapy. A 5-point QC scoring system was used to evaluate DWI quality. fDM performance was evaluated according to the correlation of these metrics with PFS and OS at the first follow-up time-point. Results showed ADC variability of 7.3% in NAWM and 10.5% in CSF. A total of 68% of patients had usable DWI data and 47% of patients had high quality DWI data when also excluding patients that progressed before the first follow-up. fDM performance was improved by using only the highest quality DWI. High pre-treatment contrast enhancing tumor volume was associated with shorter PFS and OS. A high volume fraction of increasing ADC after therapy was associated with shorter PFS, while a high volume fraction of decreasing ADC was associated with shorter OS. In summary, DWI in multicenter trials are currently of limited value due to image quality. Improvements in consistency of image quality in multicenter trials are necessary for further advancement of DWI biomarkers.
Trialing U-Net Training Modifications for Segmenting Gliomas Using Open Source Deep Learning Framework
Ellis, David G.
Aizenberg, Michele R.
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Automatic brain segmentation has the potential to save time and resources for researchers and clinicians. We aimed to improve upon previously proposed methods by implementing the U-Net model and trialing various modifications to the training and inference strategies. The trials were performed and tested on the Multimodal Brain Tumor Segmentation dataset that provides MR images of brain tumors along with manual segmentations for hundreds of subjects. The U-Net models were trained on a training set of MR images from 369 subjects and then tested against a validation set of images from 125 subjects. The proposed modifications included predicting the labeled region contours, permutations of the input data via rotation and reflection, grouping labels together, as well as creating an ensemble of models. The ensemble of models provided the best results compared to any of the other methods, but the other modifications did not demonstrate improvement. Future work will look at reducing the level of the training augmentation so that the models are better able to generalize to the validation set. Overall, our open source deep learning framework allowed us to quickly implement and test multiple U-Net training modifications. The code for this project is available at https://github.com/ellisdg/3DUnetCNN.
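The rotation/reflection permutation idea trialed above can be sketched as test-time averaging over the eight axial symmetries of a square slice; `predict` is a hypothetical callable standing in for a trained U-Net, not the authors' code:

```python
# Minimal test-time augmentation sketch over the 8 axial symmetries
# of a square 2D slice.
import numpy as np

def tta_predict(predict, image):
    """Average soft predictions over rotations/reflections, undoing each
    transform so the predictions align with the original input."""
    acc = np.zeros(image.shape, dtype=float)
    for flip in (False, True):
        view = np.fliplr(image) if flip else image
        for k in range(4):                        # 0, 90, 180, 270 degrees
            pred = predict(np.rot90(view, k))
            pred = np.rot90(pred, -k)             # undo rotation
            acc += np.fliplr(pred) if flip else pred
    return acc / 8.0

# with an identity "model", the average reproduces the input exactly
demo = tta_predict(lambda x: x, np.random.default_rng(0).random((64, 64)))
```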
Brain Tumor Segmentation Using Deep Capsule Network and Latent-Dynamic Conditional Random Fields
Elmezain, M.
Mahmoud, A.
Mosa, D. T.
Said, W.
J Imaging2022Journal Article, cited 4 times
Website
BraTS 2015
BraTS 2021
Algorithm Development
Segmentation
Because of the large variability in brain tumors, automating segmentation remains a difficult task. We propose an automated method to segment brain tumors by integrating the deep capsule network (CapsNet) and the latent-dynamic conditional random field (LDCRF). The method consists of three main processes to segment the brain tumor: pre-processing, segmentation, and post-processing. In pre-processing, the N4ITK process involves correcting each MR image's bias field before normalizing the intensity. After that, image patches are used to train CapsNet during the segmentation process. Then, with the CapsNet parameters determined, we employ image slices from an axial view to learn the LDCRF-CapsNet. Finally, we use a simple thresholding method to correct the labels of some pixels and remove small 3D-connected regions from the segmentation outcomes. On the BRATS 2015 and BRATS 2021 datasets, we trained and evaluated our method and discovered that it can compete with, and under comparable conditions outperform, state-of-the-art methods.
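The post-processing step described above (thresholding plus removal of small 3D-connected regions) is straightforward to sketch; the threshold and minimum component size below are illustrative guesses, not the paper's values:

```python
# Binarize a soft segmentation and drop small 3D connected components.
import numpy as np
from scipy import ndimage

def postprocess(prob_volume, threshold=0.5, min_voxels=200):
    mask = prob_volume > threshold
    labels, n = ndimage.label(mask)               # label 3D connected components
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep_ids = 1 + np.flatnonzero(sizes >= min_voxels)
    return np.isin(labels, keep_ids)              # mask with small regions removed

cleaned = postprocess(np.random.default_rng(0).random((32, 32, 32)))
```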
A Novel Hybrid Perceptron Neural Network Algorithm for Classifying Breast MRI Tumors
Breast cancer today is the leading cause of death among cancer patients, afflicting women around the world. Breast cancer is the most common cancer in women worldwide. It is also the principal cause of death from cancer among women globally. Early detection of this disease can greatly enhance the chances of long-term survival of breast cancer victims. Classification of cancer data helps widely in detection of the disease, and it can be achieved using many techniques, such as the Perceptron, which is an Artificial Neural Network (ANN) classification technique. In this paper, we propose a new hybrid algorithm that combines the perceptron algorithm and a feature extraction algorithm, after applying the Scale Invariant Feature Transform (SIFT) algorithm, in order to classify magnetic resonance imaging (MRI) breast cancer images. The proposed algorithm is called the breast MRI cancer classifier (BMRICC) and it has been tested on 281 MRI breast images (138 abnormal and 143 normal). The numerical results of the general performance of the BMRICC algorithm, and the comparison results between it and 5 other benchmark classifiers, show that the BMRICC algorithm is promising and its performance is better than that of the other algorithms.
Multi-stage Association Analysis of Glioblastoma Gene Expressions with Texture and Spatial Patterns
Elsheikh, Samar S. M.
Bakas, Spyridon
Mulder, Nicola J.
Chimusa, Emile R.
Davatzikos, Christos
Crimi, Alessandro
2019Book Section, cited 0 times
BraTS-TCGA-GBM
Glioblastoma is the most aggressive malignant primary brain tumor with a poor prognosis. Glioblastoma heterogeneous neuroimaging, pathologic, and molecular features provide opportunities for subclassification, prognostication, and the development of targeted therapies. Magnetic resonance imaging has the capability of quantifying specific phenotypic imaging features of these tumors. Additional insight into disease mechanisms can be gained by exploring genetic foundations. Here, we use gene expression data to evaluate associations with various quantitative imaging phenomic features extracted from magnetic resonance imaging. We highlight a novel correlation by carrying out multi-stage genome-wide association tests at the gene level through a non-parametric correlation framework that allows testing multiple hypotheses about the integrated relationship of imaging phenotype-genotype more efficiently and at lower computational expense. Our results showed several genes previously associated with glioblastoma and other types of cancer, such as LRRC46 (chromosome 17), EPGN (chromosome 4) and TUBA1C (chromosome 12), all associated with our radiographic tumor features.
Accurately identifying vertebral levels in large datasets
Elton, Daniel C.
Sandfort, Veit
Pickhardt, Perry J.
Summers, Ronald M.
2020Conference Proceedings, cited 0 times
CT Lymph Nodes
The vertebral levels of the spine provide a useful coordinate system when making measurements of plaque, muscle, fat, and bone mineral density. Correctly classifying vertebral levels with high accuracy is challenging due to the similar appearance of each vertebra, the curvature of the spine, and the possibility of anomalies such as fractured vertebrae, implants, lumbarization of the sacrum, and sacralization of L5. The goal of this work is to develop a system that can accurately and robustly identify the L1 level in large heterogeneous datasets. The first approach we study is using a 3D U-Net to segment the L1 vertebra directly using the entire scan volume to provide context. We also tested models for two class segmentation of L1 and T12 and a three class segmentation of L1, T12 and the rib attached to T12. By increasing the number of training examples to 249 scans using pseudo-segmentations from an in-house segmentation tool we were able to achieve 98% accuracy with respect to identifying the L1 vertebra, with an average error of 4.5 mm in the craniocaudal level. We next developed an algorithm which performs iterative instance segmentation and classification of the entire spine with a 3D U-Net. We found the instance based approach was able to yield better segmentations of nearly the entire spine, but had lower classification accuracy for L1.
A Deep Learning Approach to Glioblastoma Radiogenomic Classification Using Brain MRI
A malignant brain tumor known as a glioblastoma is an extremely life-threatening condition. It has been proven that the existence of a specific genetic sequence in the tumor known as MGMT promoter methylation is a favourable prognostic factor and a sign of how well a patient will respond to chemotherapy. Currently, the only way to identify the presence of the MGMT promoter is to perform a genetic analysis that requires surgical intervention. The development of an accurate method for determining the presence of the MGMT promoter using only MRI would help to reduce the number of surgeries. In this work, we developed a method for glioblastoma classification using just MRI by choosing an appropriate loss function, neural network architecture and ensembling trained models. This problem was successfully solved as part of the “RSNA-MICCAI Brain Tumor Radiogenomic Classification” competition, and the proposed algorithm was included in the top 5% of best solutions.
A COMPUTER AIDED DIAGNOSIS SYSTEM FOR LUNG CANCER DETECTION USING SVM
Computer aided diagnosis is starting to be implemented broadly in the diagnosis and detection of many varieties of abnormalities acquired during various imaging procedures. The main aim of CAD systems is to increase the accuracy and decrease the time of diagnosis, while the general goals of CAD systems are to locate nodules and to determine their characteristic features. As lung cancer is one of the most fatal and leading cancer types, there have been plenty of studies on the usage of CAD systems to detect lung cancer. Yet, CAD systems need considerable development in order to identify the different shapes of nodules, perform lung segmentation, and achieve higher levels of sensitivity, specificity and accuracy. This challenge is the motivation of this study in implementing a CAD system for lung cancer detection. In the study, the LIDC database is used, which comprises an image set of lung cancer thoracic documented CT scans. The presented CAD system consists of CT image reading, image pre-processing, segmentation, feature extraction and classification steps. To avoid losing important features, the CT images were read in raw DICOM file format. Then, filtration and enhancement techniques were used for image processing. Otsu's algorithm, edge detection and morphological operations were applied for segmentation, followed by the feature extraction step. Finally, a support vector machine with a Gaussian RBF kernel, widely used as a supervised classifier, was utilized for the classification step.
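A minimal sketch of two of the stages named above, Otsu thresholding for segmentation and a Gaussian-RBF SVM for classification; the feature vectors and labels are placeholders, since the thesis's actual features are not listed here:

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening
from sklearn.svm import SVC

def segment_lung(ct_slice):
    """Otsu split of a CT slice followed by morphological clean-up."""
    mask = ct_slice < threshold_otsu(ct_slice)    # air (lung) is the darker class
    return binary_opening(mask)

rng = np.random.default_rng(0)
features = rng.normal(size=(40, 6))               # placeholder nodule features
labels = (features[:, 0] > 0).astype(int)         # placeholder benign/malignant
clf = SVC(kernel="rbf", gamma="scale", C=1.0).fit(features, labels)
```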
4D robust optimization including uncertainties in time structures can reduce the interplay effect in proton pencil beam scanning radiation therapy
Engwall, Erik
Fredriksson, Albin
Glimelius, Lars
Medical Physics2018Journal Article, cited 2 times
Website
non-small-cell lung cancer
4D-Lung
Effectiveness of different rescanning techniques for scanned proton radiotherapy in lung cancer patients
Engwall, E
Glimelius, L
Hynning, E
Physics in Medicine and Biology2018Journal Article, cited 54 times
Website
4D-Lung
Non-Small-Cell Lung cancer
4D CT
radiotherapy
Non-small cell lung cancer (NSCLC) is a tumour type thought to be well-suited for proton radiotherapy. However, the lung region poses many problems related to organ motion and can for actively scanned beams induce severe interplay effects. In this study we investigate four mitigating rescanning techniques: (1) volumetric rescanning, (2) layered rescanning, (3) breath-sampled (BS) layered rescanning, and (4) continuous breath-sampled (CBS) layered rescanning. The breath-sampled methods will spread the layer rescans over a full breathing cycle, resulting in an improved averaging effect at the expense of longer treatment times. In CBS, we aim at further improving the averaging by delivering as many rescans as possible within one breathing cycle. The interplay effect was evaluated for 4D robustly optimized treatment plans (with and without rescanning) for seven NSCLC patients in the treatment planning system RayStation. The optimization and final dose calculation used a Monte Carlo dose engine to account for the density heterogeneities in the lung region. A realistic treatment delivery time structure given from the IBA ScanAlgo simulation tool served as basis for the interplay evaluation. Both slow (2.0 s) and fast (0.1 s) energy switching times were simulated. For all seven studied patients, rescanning improves the dose conformity to the target. The general trend is that the breath-sampled techniques are superior to layered and volumetric rescanning with respect to both target coverage and variability in dose to OARs. The spacing between rescans in our breath-sampled techniques is set at planning, based on the average breathing cycle length obtained in conjunction with CT acquisition. For moderately varied breathing cycle lengths between planning and delivery (up to 15%), the breath-sampled techniques still mitigate the interplay effect well. This shows the potential for smooth implementation at the clinic without additional motion monitoring equipment.
Attention P-Net for Segmentation of Post-operative Glioblastoma in MRI
Segmentation of post-operative glioblastoma is important for follow-up treatment. In this thesis, Fully Convolutional Networks (FCN) are utilised together with attention modules for segmentation of post-operative glioblastoma in MR images. Attention-based modules help the FCN to focus on relevant features to improve segmentation results. Channel and spatial attention combine both the spatial context and the semantic information in MR images. P-Net is used as a backbone for creating an architecture with existing bottleneck attention modules, and the resulting network was named attention P-Net. The proposed network and competing techniques were evaluated on an Uppsala University database containing T1-weighted MR images of the brain from 12 subjects. The proposed framework shows substantial improvement over the existing techniques.
Radiology and Enterprise Medical Imaging Extensions (REMIX)
Erdal, Barbaros S
Prevedello, Luciano M
Qian, Songyue
Demirer, Mutlu
Little, Kevin
Ryu, John
O’Donnell, Thomas
White, Richard D
Journal of Digital Imaging2017Journal Article, cited 1 times
Website
Algorithm Development
QIN
enterprise medical imaging
Image reconstruction
quantitative imaging
business intelligence
artificial intelligence
Multisite Image Data Collection and Management Using the RSNA Image Sharing Network
Erickson, Bradley J
Fajnwaks, Patricio
Langer, Steve G
Perry, John
Translational Oncology2014Journal Article, cited 3 times
Website
Algorithm Development
Image de-identification
The execution of a multisite trial frequently includes image collection. The Clinical Trials Processor (CTP) makes removal of protected health information highly reliable. It also provides reliable transfer of images to a central review site. Trials using central review of imaging should consider using CTP for handling image data when a multisite trial is being designed.
Analysis of Computed Tomography Images of Lung Cancer Patients with The Marker Controlled Based Method
Erkoc, Merve
Icer, Semra
2022Conference Paper, cited 0 times
RIDER Lung PET-CT
LIDC-IDRI
NSCLC-Radiomics
NSCLC-Radiomics-Genomics
Radiomics
Segmentation
In this study, we aimed to obtain the tumor region from computed tomography images, after a number of pre-processing steps, using marker-controlled watershed segmentation. For this purpose, tumor segmentation was performed using four different datasets. Segmentation success was analyzed with the Jaccard index in terms of similarity to the reference images. The index was calculated as 0.8231 on average for the RIDER lung CT dataset, 0.8365 for the lung 1 dataset, 0.8578 for the lung 3 dataset and 0.8641 for the LIDC-IDRI dataset. Our current work on the practical and successful segmentation of lung tumors is promising for the next steps.
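A minimal sketch of marker-controlled watershed plus the Jaccard overlap used for evaluation; the intensity thresholds that seed the markers are illustrative, not the study's values:

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def segment_tumor(ct_slice, low=0.2, high=0.7):
    """Flood a gradient image from background/foreground intensity markers."""
    markers = np.zeros(ct_slice.shape, dtype=int)
    markers[ct_slice < low] = 1                   # background marker
    markers[ct_slice > high] = 2                  # tumor marker
    return watershed(sobel(ct_slice), markers) == 2

def jaccard(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    return np.logical_and(pred, truth).sum() / np.logical_or(pred, truth).sum()

rng = np.random.default_rng(0)
slice_ = rng.random((64, 64)) * 0.5
slice_[20:40, 20:40] += 0.5                       # bright synthetic "tumor"
truth = np.zeros((64, 64), dtype=bool)
truth[20:40, 20:40] = True
print(f"Jaccard: {jaccard(segment_tumor(slice_), truth):.3f}")
```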
Sinogram upsampling using Primal-Dual UNet for undersampled CT and radial MRI reconstruction
Ernst, P.
Chatterjee, S.
Rose, G.
Speck, O.
Nurnberger, A.
Neural Netw2023Journal Article, cited 6 times
Website
CT Lymph Nodes
Algorithm Development
U-Net
Artifacts
Brain/diagnostic imaging
Computed Tomography (CT)
Deep learning
Magnetic Resonance Imaging (MRI)
Radial MRI reconstruction
Sparse CT reconstruction
Undersampled MR reconstruction
Computed tomography (CT) and magnetic resonance imaging (MRI) are two widely used clinical imaging modalities for non-invasive diagnosis. However, both of these modalities come with certain problems. CT uses harmful ionising radiation, and MRI suffers from slow acquisition speed. Both problems can be tackled by undersampling, such as sparse sampling. However, such undersampled data leads to lower resolution and introduces artefacts. Several techniques, including deep learning based methods, have been proposed to reconstruct such data. However, the undersampled reconstruction problem for these two modalities was always considered as two different problems and tackled separately by different research works. This paper proposes a unified solution for both sparse CT and undersampled radial MRI reconstruction, achieved by applying Fourier transform-based pre-processing on the radial MRI and then finally reconstructing both modalities using sinogram upsampling combined with filtered back-projection. The Primal-Dual network is a deep learning based method for reconstructing sparsely-sampled CT data. This paper introduces Primal-Dual UNet, which improves the Primal-Dual network in terms of accuracy and reconstruction speed. The proposed method resulted in an average SSIM of 0.932+/-0.021 while performing sparse CT reconstruction for fan-beam geometry with a sparsity level of 16, achieving a statistically significant improvement over the previous model, which resulted in 0.919+/-0.016. Furthermore, the proposed model resulted in 0.903+/-0.019 and 0.957+/-0.023 average SSIM while reconstructing undersampled brain and abdominal MRI data with an acceleration factor of 16, respectively - statistically significant improvements over the original model, which resulted in 0.867+/-0.025 and 0.949+/-0.025. Finally, this paper shows that the proposed network not only improves the overall image quality, but also improves the image quality for the regions-of-interest: liver, kidneys, and spleen; it also generalises better than the baselines in the presence of a needle.
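The sparse-view setting is easy to reproduce; a minimal sketch (scikit-image's `radon`/`iradon`, not the paper's Primal-Dual pipeline) that generates the streaky filtered back-projection baseline such networks aim to improve on:

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import iradon, radon, resize

image = resize(shepp_logan_phantom(), (128, 128))
sparse_angles = np.linspace(0.0, 180.0, 16, endpoint=False)  # 16 views only
sinogram = radon(image, theta=sparse_angles)
fbp = iradon(sinogram, theta=sparse_angles)       # exhibits streak artefacts
```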
Sparse View Deep Differentiated Backprojection for Circular Trajectories in CBCT
In this paper, we present a method for removing streak artifacts from reconstructions of sparse cone beam CT (CBCT) projections along circular trajectories. The differentiated backprojection on 2-D planes is combined with convolutional neural networks for both artifact reduction and the ill-posed inversion of the Hilbert transform. Undersampling errors occur at different stages of the algorithm, so the influence of applying the neural networks at these stages is investigated. Spectral blending is used to combine coronal and sagittal planes to a full 3-D reconstruction. Experimental results show that using a neural network to reconstruct a plane-of-interest from the differentiated backprojection of few projections works best by additionally providing FDK reconstructed planes to the network. This approach reduces streaking and cone beam artifacts compared to the direct FDK reconstruction and is also superior to post-processing CNNs.
Fusing clinical and image data for detecting the severity level of hospitalized symptomatic COVID-19 patients using hierarchical model
Ershadi, Mohammad Mahdi
Rise, Zeinab Rahimi
Research on Biomedical Engineering2023Journal Article, cited 0 times
Website
COVID-19-AR
Radiomic features
Deep Learning
Clustering
MATLAB
Purpose: Based on medical reports, it is hard to determine the levels of different hospitalized symptomatic COVID-19 patients according to their features in a short time. Besides, there are common and special features for COVID-19 patients at different levels, based on physicians' knowledge, that make diagnosis difficult. For this purpose, a hierarchical model is proposed in this paper based on experts' knowledge, fuzzy C-means (FCM) clustering, and an adaptive neuro-fuzzy inference system (ANFIS) classifier.
Methods: Experts considered a special set of features for different groups of COVID-19 patients to find their treatment plans. Accordingly, the structure of the proposed hierarchical model is designed based on experts' knowledge. In the proposed model, we applied clustering methods to patients' data to determine some clusters. Then, we learn classifiers for each cluster in a hierarchical model. Regarding the different common and special features of patients, FCM is chosen as the clustering method. Besides, ANFIS had better performance than other classification methods. Therefore, FCM and ANFIS were considered to design the proposed hierarchical model. FCM finds the membership degree of each patient's data based on the common and special features of different clusters to reinforce the ANFIS classifier. Next, ANFIS identifies the need of hospitalized symptomatic COVID-19 patients for ICU admission and whether or not they are in the end-stage (mortality target class). Two real datasets about COVID-19 patients are analyzed in this paper using the proposed model. One of these datasets had only clinical features and the other had both clinical and image features; therefore, appropriate features are extracted using image processing and deep learning methods.
Results: According to the results and statistical test, the proposed model has the best performance among the other utilized classifiers. Its accuracies based on the clinical features of the first and second datasets are 92% and 90% for finding the ICU target class. Extracted features of image data increase the accuracy to 94%.
Conclusion: The accuracy of this model is even better for detecting the mortality target class among the different classifiers in this paper and the literature review. Besides, this model is compatible with the utilized datasets about COVID-19 patients based on clinical data as well as both clinical and image data.
Highlights:
• A new hierarchical model is proposed using ANFIS classifiers and the FCM clustering method. Its structure is designed based on experts' knowledge and the real medical process. FCM reinforces the ANFIS classification learning phase based on the features of COVID-19 patients.
• Two real datasets about COVID-19 patients are studied. One of these datasets has both clinical and image data; therefore, appropriate features are extracted from its image data and considered alongside the available meaningful clinical data. Different levels of hospitalized symptomatic COVID-19 patients are considered, including the need of patients for ICU admission and whether or not they are in the end-stage.
• Well-known classification methods including case-based reasoning (CBR), decision tree, convolutional neural networks (CNN), K-nearest neighbors (KNN), learning vector quantization (LVQ), multi-layer perceptron (MLP), Naive Bayes (NB), radial basis function network (RBF), support vector machine (SVM), recurrent neural networks (RNN), fuzzy type-I inference system (FIS), and adaptive neuro-fuzzy inference system (ANFIS) are designed for these datasets, and their results are analyzed for different random groups of the train and test data.
• Given the unbalanced utilized datasets, different classifier performance measures including accuracy, sensitivity, specificity, precision, F-score, and G-mean are compared to find the best classifier. ANFIS classifiers have the best results for both datasets.
• To reduce the computational time, the effects of the Principal Component Analysis (PCA) feature reduction method on the performance of the proposed model and classifiers are studied. According to the results and statistical test, the proposed hierarchical model has the best performance among the other utilized classifiers.
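Since FCM is the pivot of this hierarchy, a minimal plain-NumPy sketch of fuzzy C-means may help; it returns the membership matrix that the ANFIS stage would consume (the ANFIS itself is omitted, and all parameters are illustrative):

```python
# Plain-NumPy fuzzy C-means: soft memberships instead of hard assignments.
import numpy as np

def fcm(X, n_clusters, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)             # rows are fuzzy memberships
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))          # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 4)), rng.normal(5, 1, (30, 4))])
centers, U = fcm(X, n_clusters=2)
hard = U.argmax(axis=1)                           # hard assignment per patient
```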
A hierarchical machine learning model based on Glioblastoma patients' clinical, biomedical, and image data to analyze their treatment plans
Ershadi, Mohammad Mahdi
Rise, Zeinab Rahimi
Niaki, Seyed Taghi Akhavan
Computers in Biology and Medicine2022Journal Article, cited 0 times
TCGA-GBM
Glioblastoma
Machine Learning
Deep Learning
AIM OF STUDY: Glioblastoma Multiforme (GBM) is an aggressive brain cancer in adults that kills most patients in the first year due to ineffective treatment. Different clinical, biomedical, and image data features are needed to analyze GBM, increasing complexities. Besides, they lead to weak performances for machine learning models due to ignoring physicians' knowledge. Therefore, this paper proposes a hierarchical model based on Fuzzy C-mean (FCM) clustering, Wrapper feature selection, and twelve classifiers to analyze treatment plans.
METHODOLOGY/APPROACH: The proposed method finds the effectiveness of previous and current treatment plans, hierarchically determining the best decision for future treatment plans for GBM patients using clinical data, biomedical data, and different image data. A case study is presented based on the Cancer Genome Atlas Glioblastoma Multiforme dataset to prove the effectiveness of the proposed model. This dataset is analyzed using data preprocessing, experts' knowledge, and a feature reduction method based on the Principal Component Analysis. Then, the FCM clustering method is utilized to reinforce classifier learning.
OUTCOMES OF STUDY: The proposed model finds the best combination of Wrapper feature selection and classifier for each cluster based on different measures, including accuracy, sensitivity, specificity, precision, F-score, and G-mean according to a hierarchical structure. It has the best performance among other reinforced classifiers. Besides, this model is compatible with real-world medical processes for GBM patients based on clinical, biomedical, and image data.
New prognostic factor telomerase reverse transcriptase promotor mutation presents without MR imaging biomarkers in primary glioblastoma
Ersoy, Tunc F
Keil, Vera C
Hadizadeh, Dariusch R
Gielen, Gerrit H
Fimmers, Rolf
Waha, Andreas
Heidenreich, Barbara
Kumar, Rajiv
Schild, Hans H
Simon, Matthias
Neuroradiology2017Journal Article, cited 1 times
Website
Radiomics
Radiogenomics
Glioblastoma Multiforme (GBM)
REMBRANDT
TERT mutation
VASARI
Magnetic Resonance Imaging (MRI)
PURPOSE: Magnetic resonance (MR) imaging biomarkers can assist in the non-invasive assessment of the genetic status in glioblastomas (GBMs). Telomerase reverse transcriptase (TERT) promoter mutations are associated with a negative prognosis. This study was performed to identify MR imaging biomarkers to forecast the TERT mutation status. METHODS: Pre-operative MRIs of 64/67 genetically confirmed primary GBM patients (51/67 TERT-mutated with rs2853669 polymorphism) were analyzed according to Visually AcceSAble Rembrandt Images (VASARI) ( https://wiki.cancerimagingarchive.net/display/Public/VASARI+Research+Project ) imaging criteria by three radiological raters. TERT mutation and O(6)-methylguanine-DNA methyltransferase (MGMT) hypermethylation data were obtained through direct and pyrosequencing as described in a previous study. Clinical data were derived from a prospectively maintained electronic database. Associations of potential imaging biomarkers and genetic status were assessed by Fisher and Mann-Whitney U tests and stepwise linear regression. RESULTS: No imaging biomarkers could be identified to predict TERT mutational status (alone or in conjunction with TERT promoter polymorphism rs2853669 AA-allele). TERT promoter mutations were more common in patients with tumor-associated seizures as first symptom (26/30 vs. 25/37, p = 0.07); these showed significantly smaller tumors [13.1 (9.0-19.0) vs. 24.0 (16.6-37.5) all cm(3); p = 0.007] and prolonged median overall survival [17.0 (11.5-28.0) vs. 9.0 (4.0-12.0) all months; p = 0.02]. TERT-mutated GBMs were underrepresented in the extended angularis region (p = 0.03), whereas MGMT-methylated GBMs were overrepresented in the corpus callosum (p = 0.03) and underrepresented temporomesially (p = 0.01). CONCLUSION: Imaging biomarkers for prediction of TERT mutation status remain weak and cannot be derived from the VASARI protocol. Tumor-associated seizures are less common in TERT mutated glioblastomas.
Computer-aided detection of Pulmonary Nodules based on SVM in thoracic CT images
Eskandarian, Parinaz
Bagherzadeh, Jamshid
2015Conference Proceedings, cited 12 times
Website
LIDC-IDRI
Computer-aided diagnosis of solitary pulmonary nodules using X-ray CT images enables the early detection of lung cancer. In this study, a computer-aided system for the detection of pulmonary nodules on CT scans, based on a support vector machine classifier, is provided for the diagnosis of solitary pulmonary nodules. In the first step, the volume of data is reduced by data mining techniques. Then, the chest area is segmented, suspicious nodules are identified, and eventually nodules are detected. In comparison with threshold-based methods, the support vector machine classifier describes areas of the lungs more accurately. In this study, the false positive rate is reduced by combining thresholding with the support vector machine classifier. Experimental results based on data from 147 patients in the LIDC lung image database show that the proposed system is able to obtain a sensitivity of 89.9% and a false positive rate of 3.9 per scan. In comparison to previous systems, the proposed system demonstrates good performance.
Generative Adversarial Networks for Anomaly Detection in Biomedical Imaging: A Study on Seven Medical Image Datasets
Esmaeili, Marzieh
Toosi, Amirhosein
Roshanpoor, Arash
Changizi, Vahid
Ghazisaeedi, Marjan
Rahmim, Arman
Sabokrou, Mohammad
IEEE Access2023Journal Article, cited 0 times
C-NMC 2019
GAN
Anomaly detection (AD) is a challenging problem in computer vision. Particularly in the field of medical imaging, AD poses even more challenges due to a number of reasons, including insufficient availability of ground truth (annotated) data. In recent years, AD models based on generative adversarial networks (GANs) have made significant progress. However, their effectiveness in biomedical imaging remains underexplored. In this paper, we present an overview of using GANs for AD, as well as an investigation of state-of-the-art GAN-based AD methods for biomedical imaging and the challenges encountered in detail. We have also specifically investigated the advantages and limitations of AD methods on medical image datasets, conducting experiments using 3 AD methods on 7 medical imaging datasets from different modalities and organs/tissues. Given the highly different findings achieved across these experiments, we further analyzed the results from both data-centric and model-centric points of view. The results showed that none of the methods had a reliable performance for detecting abnormalities in medical images. Factors such as the number of training samples, the subtlety of the anomaly, and the dispersion of the anomaly in the images are among the phenomena that highly impact the performance of the AD models. The obtained results were highly variable (AUC: 0.475-0.991; Sensitivity: 0.17-0.98; Specificity: 0.14-0.97). In addition, we provide recommendations for the deployment of AD models in medical imaging and foresee important research directions.
Explainable Artificial Intelligence for Human-Machine Interaction in Brain Tumor Localization
Esmaeili, Morteza
Vettukattil, Riyas
Banitalebi, Hasan
Krogh, Nina R
Geitung, Jonn Terje
J Pers Med2021Journal Article, cited 0 times
Website
TCGA-LGG
BraTS-TCGA-GBM
TCGA-GBM
black box CNN
Magnetic Resonance Imaging (MRI)
explainable AI
gliomas
machine learning
tumor localization
Primary malignancies in adult brains are globally fatal. Computer vision, especially recent developments in artificial intelligence (AI), have created opportunities to automatically characterize and diagnose tumor lesions in the brain. AI approaches have provided scores of unprecedented accuracy in different image analysis tasks, including differentiating tumor-containing brains from healthy brains. AI models, however, perform as a black box, concealing the rational interpretations that are an essential step towards translating AI imaging tools into clinical routine. An explainable AI approach aims to visualize the high-level features of trained models or integrate into the training process. This study aims to evaluate the performance of selected deep-learning algorithms on localizing tumor lesions and distinguishing the lesion from healthy regions in magnetic resonance imaging contrasts. Despite a significant correlation between classification and lesion localization accuracy (R = 0.46, p = 0.005), the known AI algorithms, examined in this study, classify some tumor brains based on other non-relevant features. The results suggest that explainable AI approaches can develop an intuition for model interpretability and may play an important role in the performance evaluation of deep learning models. Developing explainable AI approaches will be an essential tool to improve human-machine interactions and assist in the selection of optimal training methods.
Comparison of Accuracy of Color Spaces in Cell Features Classification in Images of Leukemia types ALL and MM
Espinoza-Del Angel, Cinthia
Femat-Diaz, Aurora
Mexican Journal of Biomedical Engineering2022Journal Article, cited 0 times
Website
SN-AM
Leukemia
Pathomics
Classification
Model
Algorithm Development
This study presents a methodology for identifying the color space that provides the best performance in an image processing application. When measurements are performed without selecting the appropriate color model, the accuracy of the results is considerably altered. This is particularly significant when a diagnosis is based on stained cell microscopy images. This work shows how the proper selection of the color model provides better characterization in two types of cancer, acute lymphoid leukemia and multiple myeloma. The methodology uses images from a public database. First, the nuclei are segmented, and then statistical moments are calculated for class identification. Next, a principal component analysis is performed to reduce the extracted features and identify the most significant ones. Finally, the predictive model is evaluated using the k-nearest neighbor algorithm and a confusion matrix. For the images used, the results showed that the CIE L*a*b color space best characterized the analyzed cancer types, with an average accuracy of 95.52%. With an accuracy of 91.81%, the RGB and CMY spaces followed. The HSI and HSV spaces had accuracies of 87.86% and 89.39%, respectively, and the worst performer was grayscale with an accuracy of 55.56%.
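A minimal sketch of the pipeline's core, conversion to CIE L*a*b*, per-channel statistical moments, and a k-NN classifier; the patches and labels below are placeholders rather than the public database's images:

```python
import numpy as np
from skimage.color import rgb2lab
from sklearn.neighbors import KNeighborsClassifier

def moment_features(rgb_patch):
    """Per-channel mean, std, and third central moment in L*a*b* space."""
    chans = rgb2lab(rgb_patch).reshape(-1, 3)
    mu = chans.mean(axis=0)
    return np.concatenate([mu, chans.std(axis=0), ((chans - mu) ** 3).mean(axis=0)])

rng = np.random.default_rng(0)
patches = rng.random((30, 16, 16, 3))             # placeholder stained-cell patches
X = np.stack([moment_features(p) for p in patches])
y = rng.integers(0, 2, size=30)                   # placeholder ALL vs MM labels
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
```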
The main challenge preventing a fully-automatic X-ray to CT registration is an initialization scheme that brings the X-ray pose within the capture range of existing intensity-based registration methods. By providing such an automatic initialization, the present study introduces the first end-to-end fully-automatic registration framework. A network is first trained once on artificial X-rays to extract 2D landmarks resulting from the projection of CT-labels. A patient-specific refinement scheme is then carried out: candidate points detected from a new set of artificial X-rays are back-projected onto the patient CT and merged into a refined meaningful set of landmarks used for network re-training. This network-landmarks combination is finally exploited for intraoperative pose-initialization with a runtime of 102 ms. Evaluated on 6 pelvis anatomies (486 images in total), the mean Target Registration Error was 15.0±7.3 mm. When used to initialize the BOBYQA optimizer with normalized cross-correlation, the average (± STD) projection distance was 3.4±2.3 mm, and the registration success rate (projection distance <2.5% of the detector width) greater than 97%.
Deep Learning-Based Concurrent Brain Registration and Tumor Segmentation
Estienne, T.
Lerousseau, M.
Vakalopoulou, M.
Alvarez Andres, E.
Battistella, E.
Carre, A.
Chandra, S.
Christodoulidis, S.
Sahasrabudhe, M.
Sun, R.
Robert, C.
Talbot, H.
Paragios, N.
Deutsch, E.
Front Comput Neurosci2020Journal Article, cited 15 times
Website
BraTS-TCGA-GBM
BraTS-TCGA-LGG
brain tumor segmentation
convolutional neural networks (CNN)
deep learning
deformable registration
multi-task networks
Algorithm Development
Image registration and segmentation are the two most studied problems in medical image analysis. Deep learning algorithms have recently gained a lot of attention due to their success and state-of-the-art results in variety of problems and communities. In this paper, we propose a novel, efficient, and multi-task algorithm that addresses the problems of image registration and brain tumor segmentation jointly. Our method exploits the dependencies between these tasks through a natural coupling of their interdependencies during inference. In particular, the similarity constraints are relaxed within the tumor regions using an efficient and relatively simple formulation. We evaluated the performance of our formulation both quantitatively and qualitatively for registration and segmentation problems on two publicly available datasets (BraTS 2018 and OASIS 3), reporting competitive results with other recent state-of-the-art methods. Moreover, our proposed framework reports significant amelioration (p < 0.005) for the registration performance inside the tumor locations, providing a generic method that does not need any predefined conditions (e.g., absence of abnormalities) about the volumes to be registered. Our implementation is publicly available online at https://github.com/TheoEst/joint_registration_tumor_segmentation.
A Comparison of Three Different Deep Learning-Based Models to Predict the MGMT Promoter Methylation Status in Glioblastoma Using Brain MRI
Faghani, S.
Khosravi, B.
Moassefi, M.
Conte, G. M.
Erickson, B. J.
J Digit Imaging2023Journal Article, cited 0 times
Website
BraTS 2021
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Brain tumor
Classification
Deep learning
MGMT methylation status
Glioblastoma (GBM) is the most common primary malignant brain tumor in adults. The standard treatment for GBM consists of surgical resection followed by concurrent chemoradiotherapy and adjuvant temozolomide. O-6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status is an important prognostic biomarker that predicts the response to temozolomide and guides treatment decisions. At present, the only reliable way to determine MGMT promoter methylation status is through the analysis of tumor tissues. Considering the complications of the tissue-based methods, an imaging-based approach is preferred. This study aimed to compare three different deep learning-based approaches for predicting MGMT promoter methylation status. We obtained 576 T2WI with their corresponding tumor masks and MGMT promoter methylation status from the Brain Tumor Segmentation (BraTS) 2021 datasets. We developed three different models: voxel-wise, slice-wise, and whole-brain. For voxel-wise classification, methylated and unmethylated MGMT tumor masks were labeled 1 and 2, respectively, with 0 as background. We converted each T2WI into 32 x 32 x 32 patches. We trained a 3D-Vnet model for tumor segmentation. After inference, we reconstructed the whole brain volume based on the patches' coordinates. The final prediction of MGMT methylation status was made by majority voting between the predicted voxel values of the biggest connected component. For slice-wise classification, we trained an object detection model for tumor detection and MGMT methylation status prediction, then used majority voting for the final prediction. For the whole-brain approach, we trained a 3D Densenet121 for prediction. Whole-brain, slice-wise, and voxel-wise accuracy was 65.42% (SD 3.97%), 61.37% (SD 1.48%), and 56.84% (SD 4.38%), respectively.
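The voxel-wise decision rule described above (majority voting over the predicted voxel values of the biggest connected component) can be sketched as follows; this is an illustrative reconstruction with hypothetical names, not the authors' code.

```python
import numpy as np
from scipy import ndimage

def mgmt_vote(pred_volume):
    """Majority vote over the largest connected tumor component.
    pred_volume: int array with 0 = background, 1 = methylated, 2 = unmethylated."""
    tumor = pred_volume > 0
    labeled, n = ndimage.label(tumor)
    if n == 0:
        return None                                   # no tumor voxels predicted
    sizes = ndimage.sum(tumor, labeled, index=range(1, n + 1))
    biggest = labeled == (int(np.argmax(sizes)) + 1)  # largest connected component
    votes = pred_volume[biggest]
    return 1 if (votes == 1).sum() >= (votes == 2).sum() else 2
```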
Detection of effective genes in colon cancer: A machine learning approach
Fahami, Mohammad Amin
Roshanzamir, Mohamad
Izadi, Navid Hoseini
Keyvani, Vahideh
Alizadehsani, Roohallah
Informatics in Medicine Unlocked2021Journal Article, cited 0 times
Website
TCGA-COAD
Machine Learning
Radiogenomics
Nowadays, a variety of cancers have become common among humans and are, unfortunately, the cause of death for many people. Early detection and diagnosis of cancers can have a significant impact on patient survival and treatment cost reduction. Among cancers, colon cancer is the third leading cause of death in women and the second in men worldwide. Hence, many researchers have been trying to provide new methods for early diagnosis of colon cancer. In this study, we apply statistical hypothesis tests such as the t-test and Mann–Whitney–Wilcoxon and machine learning methods such as Neural Networks, KNN and Decision Trees to detect the genes most predictive of the vital status of colon cancer patients. We normalize the dataset using a new two-step method. In the first step, the genes within each sample (patient) are normalized to have zero mean and unit variance. In the second step, normalization is done for each gene across the whole dataset. Analysis of the results shows that this normalization method is more efficient than the others and improves the overall performance of the study. Afterwards, we apply unsupervised learning methods to find meaningful structures in colon cancer gene expression. To this end, the dimensionality of the dataset is reduced by employing Principal Component Analysis (PCA). Next, we cluster the patients according to the PCA-extracted features. We then check the labeling results of the unsupervised learning methods using different supervised learning algorithms. Finally, we determine the genes that have a major impact on the colon cancer mortality rate in each cluster. Our study is the first to suggest that colon cancer patients can be categorized into two clusters. In each cluster, 20 effective genes were extracted which can be important for early diagnosis of colon cancer. Many of these genes have been identified for the first time.
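The two-step normalization described above is precise enough to state in a few lines; a minimal numpy sketch follows, assuming a samples-by-genes matrix and an illustrative function name.

```python
import numpy as np

def two_step_normalize(X, eps=1e-8):
    """X: samples-by-genes expression matrix (rows = patients, columns = genes)."""
    # Step 1: normalize the genes within each sample (row) to zero mean, unit variance.
    X = (X - X.mean(axis=1, keepdims=True)) / (X.std(axis=1, keepdims=True) + eps)
    # Step 2: normalize each gene (column) across the whole dataset.
    X = (X - X.mean(axis=0, keepdims=True)) / (X.std(axis=0, keepdims=True) + eps)
    return X
```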
Breast Mass Detection With Faster R-CNN: On the Feasibility of Learning From Noisy Annotations
Famouri, Sina
Morra, Lia
Mangia, Leonardo
Lamberti, Fabrizio
IEEE Access2021Journal Article, cited 0 times
CBIS-DDSM
In this work we study the impact of noise on the training of object detection networks for the medical domain, and how it can be mitigated by improving the training procedure. Annotating large medical datasets for training data-hungry deep learning models is expensive and time-consuming. Leveraging information that is already collected in clinical practice, in the form of text reports, bookmarks or lesion measurements, would substantially reduce this cost. Obtaining precise lesion bounding boxes through automatic mining procedures, however, is difficult. We provide here a quantitative evaluation of the effect of bounding box coordinate noise on the performance of Faster R-CNN object detection networks for breast mass detection. Varying degrees of noise are simulated by randomly modifying the bounding boxes: in our experiments, bounding boxes could be enlarged up to six times the original size. The noise is injected into the CBIS-DDSM collection, a well curated public mammography dataset for which accurate lesion location is available. We show how, due to an imperfect matching between the ground truth and the network bounding box proposals, the noise is propagated during training and reduces the ability of the network to correctly distinguish lesions from the background. When using the standard Intersection over Union criterion, the area under the FROC curve decreases by up to 9%. A novel matching criterion is proposed to improve tolerance to noise.
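A hedged sketch of the kind of bounding-box noise injection the study describes (random enlargement up to six times the original size, clipped to the image) might look like the following; the paper does not specify its exact sampling scheme, so the uniform scale factor here is an assumption.

```python
import random

def enlarge_box(x, y, w, h, img_w, img_h, max_scale=6.0):
    """Randomly enlarge a ground-truth box around its center, clipped to the image."""
    s = random.uniform(1.0, max_scale)
    cx, cy = x + w / 2.0, y + h / 2.0
    nw, nh = w * s, h * s
    nx, ny = max(0.0, cx - nw / 2.0), max(0.0, cy - nh / 2.0)
    nw, nh = min(nw, img_w - nx), min(nh, img_h - ny)
    return nx, ny, nw, nh
```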
Research on Feature Detection Based on Convolutional Network with Deep Instance Segmentation
Fan, Miyu
2021Conference Paper, cited 0 times
BraTS-TCGA-GBM
In this project, image enhancement, learning and analysis were carried out on a large number of multimodal brain MRI scans from patients, and a deep convolutional neural network was established to learn a generalizable mapping from images to masks of diseased areas, so as to achieve accurate classification of high-grade glioma (HGG) versus low-grade glioma (LGG); semantic segmentation of the whole tumor (WT) area was also carried out.
Radiomic analysis reveals diverse prognostic and molecular insights into the response of breast cancer to neoadjuvant chemotherapy: a multicohort study
Breast cancer patients exhibit various response patterns to neoadjuvant chemotherapy (NAC). However, it is uncertain whether diverse tumor response patterns to NAC in breast cancer patients can predict survival outcomes. We aimed to develop and validate radiomic signatures indicative of tumor shrinkage and therapeutic response for improved survival analysis.
Radiogenomic analysis of cellular tumor-stroma heterogeneity as a prognostic predictor in breast cancer
Fan, M.
Wang, K.
Zhang, Y.
Ge, Y.
Lu, Z.
Li, L.
J Transl Med2023Journal Article, cited 0 times
Website
TCGA-BRCA
Breast-MRI-NACT-Pilot
ACRIN 6657
ISPY1
DCE-MRI
Radiomics
Radiogenomics
Humans
Female
Middle Aged
Prognosis
*Breast Neoplasms/diagnostic imaging/genetics
Retrospective Studies
Gene Expression Profiling/methods
Biomarkers
Tumor/genetics/analysis
Thyrotropin/genetics
Tumor Microenvironment/genetics
Breast cancer
Cell subpopulation
Radiogenomics
BACKGROUND: The tumor microenvironment and intercellular communication between solid tumors and the surrounding stroma play crucial roles in cancer initiation, progression, and prognosis. Radiomics provides clinically relevant information from radiological images; however, its biological implications in uncovering tumor pathophysiology driven by cellular heterogeneity between the tumor and stroma are largely unknown. We aimed to identify radiogenomic signatures of cellular tumor-stroma heterogeneity (TSH) to improve breast cancer management and prognosis analysis. METHODS: This retrospective multicohort study included five datasets. Cell subpopulations were estimated using bulk gene expression data, and the relative difference in cell subpopulations between the tumor and stroma was used as a biomarker to categorize patients into good- and poor-survival groups. A radiogenomic signature-based model utilizing dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) was developed to target TSH, and its clinical significance in relation to survival outcomes was independently validated. RESULTS: The final cohorts of 1330 women were included for cellular TSH biomarker identification (n = 112, mean age, 57.3 years ± 14.6) and validation (n = 886, mean age, 58.9 years ± 13.1), radiogenomic signature of TSH identification (n = 91, mean age, 55.5 years ± 11.4), and prognostic (n = 241) assessments. The cytotoxic lymphocyte biomarker differentiated patients into good- and poor-survival groups (p < 0.0001) and was independently validated (p = 0.014). The good survival group exhibited denser cell interconnections. The radiogenomic signature of TSH was identified and showed a positive association with overall survival (p = 0.038) and recurrence-free survival (p = 3 × 10⁻⁴). CONCLUSION: Radiogenomic signatures provide insights into prognostic factors that reflect the imbalanced tumor-stroma environment, thereby presenting breast cancer-specific biological implications and prognostic significance.
Tumour heterogeneity revealed by unsupervised decomposition of dynamic contrast-enhanced magnetic resonance imaging is associated with underlying gene expression patterns and poor survival in breast cancer patients
Fan, M.
Xia, P.
Liu, B.
Zhang, L.
Wang, Y.
Gao, X.
Li, L.
Breast Cancer Res2019Journal Article, cited 3 times
Website
ISPY1
TCGA-BRCA
BREAST
Radiogenomics
BACKGROUND: Heterogeneity is a common finding within tumours. We evaluated the imaging features of tumours based on the decomposition of tumoural dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data to identify their prognostic value for breast cancer survival and to explore their biological importance. METHODS: Imaging features (n = 14), such as texture, histogram distribution and morphological features, were extracted to determine their associations with recurrence-free survival (RFS) in patients in the training cohort (n = 61) from The Cancer Imaging Archive (TCIA). The prognostic value of the features was evaluated in an independent dataset of 173 patients (i.e. the reproducibility cohort) from the TCIA I-SPY 1 TRIAL dataset. Radiogenomic analysis was performed in an additional cohort, the radiogenomic cohort (n = 87), using DCE-MRI from TCGA-BRCA and corresponding gene expression data from The Cancer Genome Atlas (TCGA). The MRI tumour area was decomposed by convex analysis of mixtures (CAM), resulting in 3 components that represent plasma input, fast-flow kinetics and slow-flow kinetics. The prognostic MRI features were associated with the gene expression module in which the pathway was analysed. Furthermore, a multigene signature for each prognostic imaging feature was built, and the prognostic value for RFS and overall survival (OS) was confirmed in an additional cohort from TCGA. RESULTS: Three image features (i.e. the maximum probability from the precontrast MR series, the median value from the second postcontrast series and the overall tumour volume) were independently correlated with RFS (p values of 0.0018, 0.0036 and 0.0032, respectively). The maximum probability feature from the fast-flow kinetics subregion was also significantly associated with RFS and OS in the reproducibility cohort. Additionally, this feature had a high correlation with the gene expression module (r = 0.59), and the pathway analysis showed that Ras signalling, a breast cancer-related pathway, was significantly enriched (corrected p value = 0.0044). Gene signatures (n = 43) associated with the maximum probability feature were assessed for associations with RFS (p = 0.035) and OS (p = 0.027) in an independent dataset containing 1010 gene expression samples. Among the 43 gene signatures, Ras signalling was also significantly enriched. CONCLUSIONS: Dynamic pattern deconvolution revealed that tumour heterogeneity was associated with poor survival and cancer-related pathways in breast cancer.
UMRFormer-net: a three-dimensional U-shaped pancreas segmentation method based on a double-layer bridged transformer network
Fang, Kun
He, Baochun
Liu, Libo
Hu, Haoyu
Fang, Chihua
Huang, Xuguang
Jia, Fucang
Quantitative Imaging in Medicine and Surgery2023Journal Article, cited 0 times
CPTAC-PDA
Background: Methods based on the combination of transformer and convolutional neural networks (CNNs) have achieved impressive results in the field of medical image segmentation. However, most of the recently proposed combination segmentation approaches simply treat transformers as auxiliary modules which help to extract long-range information and encode global context into convolutional representations, and there is a lack of investigation on how to optimally combine self-attention with convolution.
Methods: We designed a novel transformer block (MRFormer) that combines a multi-head self-attention layer and a residual depthwise convolutional block as the basic unit to deeply integrate both long-range and local spatial information. The MRFormer block was embedded between the encoder and decoder in U-Net at the last two layers. This framework (UMRFormer-Net) was applied to the segmentation of three-dimensional (3D) pancreas, and its ability to effectively capture the characteristic contextual information of the pancreas and surrounding tissues was investigated.
Results: Experimental results show that the proposed UMRFormer-Net achieved accuracy in pancreas segmentation that was comparable or superior to that of existing state-of-the-art 3D methods in both the Clinical Proteomic Tumor Analysis Consortium Pancreatic Ductal Adenocarcinoma (CPTAC-PDA) dataset and the public Medical Segmentation Decathlon dataset (self-division). UMRFormer-Net statistically significantly outperformed existing transformer-related methods and state-of-the-art 3D methods (P<0.05, P<0.01, or P<0.001), with a higher Dice coefficient (85.54% and 77.36%, respectively) or a lower 95% Hausdorff distance (4.05 and 8.34 mm, respectively).
Conclusions: UMRFormer-Net obtains better-matched and more accurate boundary and region information, thus improving the accuracy of pancreas segmentation. The code is available at https://github.com/supersunshinefk/UMRFormer-Net.
EGFR/SRC/ERK-stabilized YTHDF2 promotes cholesterol dysregulation and invasive growth of glioblastoma
Fang, Runping
Chen, Xin
Zhang, Sicong
Shi, Hui
Ye, Youqiong
Shi, Hailing
Zou, Zhongyu
Li, Peng
Guo, Qing
Ma, Li
Nature Communications2021Journal Article, cited 14 times
Website
REMBRANDT
GBM
YTHDF2
Resolution enhancement for lung 4D-CT based on transversal structures by using multiple Gaussian process regression learning
Fang, Shiting
Hu, Runyue
Yuan, Xinrui
Liu, Shangqing
Zhang, Yuan
Phys Med2020Journal Article, cited 0 times
Website
Machine Learning
4D-Lung
LUNG
Image Enhancement/methods
PURPOSE: Four-dimensional computed tomography (4D-CT) plays a useful role in many clinical situations. However, due to hardware limitations, dense sampling along the superior-inferior direction is often not practical. In this paper, we develop a novel multiple Gaussian process regression model to enhance the superior-inferior resolution of lung 4D-CT based on transversal structures. METHODS: The proposed strategy is based on the observation that high-resolution transversal images can recover missing pixels in the superior-inferior direction. Based on this observation, and motivated by the random forest algorithm, we employ a multiple Gaussian process regression model learned from transversal images to improve superior-inferior resolution. Specifically, we first randomly sample 3 x 3 patches from the original transversal images. The central pixel of each patch and the eight-neighbour pixels of its corresponding degraded version form the label and input of the training data, respectively. A multiple Gaussian process regression model is then built on the basis of multiple training subsets obtained by random sampling. Finally, the central pixel of each patch is estimated by the proposed model, with the eight-neighbour pixels of each 3 x 3 patch from the interpolated superior-inferior direction images as inputs. RESULTS: The performance of our method is extensively evaluated using simulated and publicly available datasets. Our experiments show the remarkable performance of the proposed method. CONCLUSIONS: In this paper, we propose a new approach to improve 4D-CT resolution which requires no external data or hardware support and can produce clear coronal/sagittal images for easy viewing.
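The METHODS paragraph above translates into a fairly direct patch-regression recipe. The sketch below, with assumed names and hyper-parameters, trains a bagging-style ensemble of Gaussian process regressors on randomly sampled 3 x 3 patches (inputs: the eight neighbours from the degraded image; label: the central pixel of the original) and averages their predictions, mirroring the random-forest-inspired ensembling the authors mention.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def extract_pairs(original, degraded, n_patches, rng):
    """Inputs: 8 neighbours of a degraded 3x3 patch; label: central pixel of the original."""
    H, W = original.shape
    X, y = [], []
    for _ in range(n_patches):
        i, j = rng.integers(1, H - 1), rng.integers(1, W - 1)
        patch = degraded[i - 1:i + 2, j - 1:j + 2].ravel()
        X.append(np.delete(patch, 4))          # drop the center, keep the 8 neighbours
        y.append(original[i, j])
    return np.array(X), np.array(y)

def train_ensemble(original, degraded, n_models=10, n_patches=200, seed=0):
    """Bagging-style ensemble of GP regressors, each on a random patch subset."""
    rng = np.random.default_rng(seed)
    return [GaussianProcessRegressor().fit(*extract_pairs(original, degraded, n_patches, rng))
            for _ in range(n_models)]

def predict_center(models, neighbours):
    """Average the ensemble predictions for one 8-neighbour input vector."""
    return float(np.mean([m.predict(neighbours[None, :])[0] for m in models]))
```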
The peritumoral edema index and related mechanisms influence the prognosis of GBM patients
Fang, Zhansheng
Shu, Ting
Luo, Pengxiang
Shao, Yiqing
Lin, Li
Tu, Zewei
Zhu, Xingen
Wu, Lei
Frontiers in Oncology2024Journal Article, cited 0 times
Website
TCGA-GBM
GBM
Explainable prediction model for the human papillomavirus status in patients with oropharyngeal squamous cell carcinoma using CNN on CT images
Squamous Cell Carcinoma of Head and Neck/virology/diagnostic imaging/pathology
Tumor Burden
Human Papillomavirus Viruses
Convolutional neural network
Explainable artificial intelligence
Grad-CAM
Human papillomavirus
Oropharyngeal squamous cell carcinoma
Several studies have emphasised that human papillomavirus-positive and -negative (HPV+ and HPV-, respectively) oropharyngeal squamous cell carcinoma (OPSCC) have distinct molecular profiles, tumor characteristics, and disease outcomes. Different radiomics-based prediction models have been proposed, including ones using innovative techniques such as Convolutional Neural Networks (CNNs). Although some of these models reached encouraging predictive performance, evidence explaining the role of radiomic features in achieving a specific outcome is scarce. In this paper, we present preliminary results for an explainable CNN-based model to predict HPV status in OPSCC patients. We extracted the Gross Tumor Volume (GTV) from pre-treatment CT images of 499 patients (356 HPV+ and 143 HPV-) included in the OPC-Radiomics public dataset to train an end-to-end Inception-V3 CNN architecture. We also collected a multicentric dataset consisting of 92 patients (43 HPV+, 49 HPV-), which was employed as an independent test set. Finally, we applied the Gradient-weighted Class Activation Mapping (Grad-CAM) technique to highlight the most informative areas with respect to the predicted outcome. The proposed model reached an AUC value of 73.50% on the independent test set. According to the Grad-CAM algorithm, the most informative areas for the correctly classified HPV+ patients were located in the intratumoral area; conversely, the most important areas otherwise corresponded to the tumor edges. Finally, because the proposed model supplements classification accuracy with a visualization of the areas of greatest predictive interest for each case examined, it could help increase confidence in using computer-based predictive models in actual clinical practice.
Image analysis-based tumor infiltrating lymphocytes measurement predicts breast cancer pathologic complete response in SWOG S0800 neoadjuvant chemotherapy trial
Fanucci, Kristina A.
Bai, Yalai
Pelekanou, Vasiliki
Nahleh, Zeina A.
Shafi, Saba
Burela, Sneha
Barlow, William E.
Sharma, Priyanka
Thompson, Alastair M.
Godwin, Andrew K.
Rimm, David L.
Hortobagyi, Gabriel N.
Liu, Yihan
Wang, Leona
Wei, Wei
Pusztai, Lajos
Blenman, Kim R. M.
NPJ Breast Cancer2023Journal Article, cited 0 times
Website
breast cancer
Neoadjuvant chemotherapy
lymphocytes
We assessed the predictive value of an image analysis-based tumor-infiltrating lymphocytes (TILs) score for pathologic complete response (pCR) and event-free survival in breast cancer (BC). In total, 113 pretreatment samples were analyzed from patients with stage IIB-IIIC HER-2-negative BC randomized to neoadjuvant chemotherapy ± bevacizumab. TILs quantification was performed on full sections using QuPath open-source software with a convolutional neural network cell classifier (CNN11). We used easTILs% as a digital metric of the TILs score, defined as [sum of lymphocyte area (mm²)/stromal area (mm²)] × 100. The pathologist-read stromal TILs score (sTILs%) was determined following published guidelines. Mean pretreatment easTILs% was significantly higher in cases with pCR compared to residual disease (median 36.1 vs. 14.8%, p < 0.001). We observed a strong positive correlation (r = 0.606, p < 0.0001) between easTILs% and sTILs%. The area under the prediction curve (AUC) was higher for easTILs% than for sTILs%, 0.709 and 0.627, respectively. Image analysis-based TILs quantification is predictive of pCR in BC and had better response discrimination than pathologist-read sTILs%.
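Since easTILs% is defined by a simple ratio of measured areas, the metric itself reduces to one line; a trivial sketch with a hypothetical function name and inputs in mm²:

```python
def eas_tils(lymphocyte_area_mm2: float, stromal_area_mm2: float) -> float:
    """easTILs% = (sum of lymphocyte area / stromal area) * 100, both in mm^2."""
    return 100.0 * lymphocyte_area_mm2 / stromal_area_mm2

# e.g. eas_tils(3.2, 8.9) -> ~35.96, in the range reported above for pCR cases
```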
Feature fusion for lung nodule classification
Farag, Amal A
Ali, Asem
Elshazly, Salwa
Farag, Aly A
International Journal of Computer Assisted Radiology and Surgery2017Journal Article, cited 3 times
Website
LIDC-IDRI
LUNG
Computed tomography (CT)
Features extraction
Gabor filter
Classification
K Nearest Neighbor (KNN)
support vector machine (SVM)
Hybrid intelligent approach for diagnosis of the lung nodule from CT images using spatial kernelized fuzzy c-means and ensemble learning
Farahani, Farzad Vasheghani
Ahmadi, Abbas
Zarandi, Mohammad Hossein Fazel
Mathematics and Computers in Simulation2018Journal Article, cited 1 times
Website
LIDC-IDRI
Computational Challenges and Collaborative Projects in the NCI Quantitative Imaging Network
Farahani, Keyvan
Kalpathy-Cramer, Jayashree
Chenevert, Thomas L
Rubin, Daniel L
Sunderland, John J
Nordstrom, Robert J
Buatti, John
Hylton, Nola
Tomography2016Journal Article, cited 2 times
Website
Radiomics
Quantitative Imaging Network (QIN)
The Quantitative Imaging Network (QIN) of the National Cancer Institute (NCI) conducts research in development and validation of imaging tools and methods for predicting and evaluating clinical response to cancer therapy. Members of the network are involved in examining various imaging and image assessment parameters through network-wide cooperative projects. To more effectively use the cooperative power of the network in conducting computational challenges in benchmarking of tools and methods and collaborative projects in analytical assessment of imaging technologies, the QIN Challenge Task Force has developed policies and procedures to enhance the value of these activities by developing guidelines and leveraging NCI resources to help their administration and manage dissemination of results. Challenges and Collaborative Projects (CCPs) are further divided into technical and clinical CCPs. As the first NCI network to engage in CCPs, we anticipate a variety of CCPs to be conducted by QIN teams in the coming years. These will aim to benchmark advanced software tools for clinical decision support, explore new imaging biomarkers for therapeutic assessment, and establish consensus on a range of methods and protocols in support of the use of quantitative imaging to predict and assess response to cancer therapy.
Recurrent Attention Network for False Positive Reduction in the Detection of Pulmonary Nodules in Thoracic CT Scans
M. Mehdi Farhangi
Nicholas Petrick
Berkman Sahiner
Hichem Frigui
Amir A. Amini
Aria Pezeshk
Med Phys2020Journal Article, cited 0 times
Website
LIDC-IDRI
LUNA16 Challenge
National Lung Screening Trial (NLST)
PURPOSE: Multi-view 2-D Convolutional Neural Networks (CNNs) and 3-D CNNs have been successfully used for analyzing volumetric data in many state-of-the-art medical imaging applications. We propose an alternative modular framework that analyzes volumetric data with an approach that is analogous to radiologists' interpretation, and apply the framework to reduce false positives that are generated in Computer-Aided Detection (CADe) systems for pulmonary nodules in thoracic CT scans. METHODS: In our approach, a deep network consisting of 2-D CNNs first processes slices individually. The features extracted in this stage are then passed to a Recurrent Neural Network (RNN), thereby modeling consecutive slices as a sequence of temporal data and capturing the contextual information across all three dimensions in the volume of interest. Outputs of the RNN layer are weighted before the final fully connected layer, enabling the network to scale the importance of different slices within a volume of interest in an end-to-end training framework. RESULTS: We validated the proposed architecture on the false positive reduction track of the Lung Nodule Analysis (LUNA) challenge for pulmonary nodule detection in chest CT scans, and obtained competitive results compared to 3-D CNNs. Our results show that the proposed approach can encode the 3-D information in volumetric data effectively by achieving a sensitivity > 0.8 with just 1/8 false positives per scan. CONCLUSIONS: Our experimental results demonstrate the effectiveness of temporal analysis of volumetric images for the application of false positive reduction in chest CT scans and show that state-of-the-art 2-D architectures from the literature can be directly applied to analyzing volumetric medical data. As newer and better 2-D architectures are being developed at a much faster rate compared to 3-D architectures, our approach makes it easy to obtain state-of-the-art performance on volumetric data using new 2-D architectures.
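A minimal PyTorch sketch of the design described in METHODS — a 2D CNN encoding each slice, a recurrent layer across the slice sequence, and learned per-slice weights applied before the final fully connected layer — is given below. The layer sizes, pooling choices and GRU cell are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class SliceRNNClassifier(nn.Module):
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                       # shared 2D encoder per slice
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, feat_dim), nn.ReLU())
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.slice_weight = nn.Linear(hidden, 1)        # learned per-slice importance
        self.fc = nn.Linear(hidden, 1)                  # nodule vs. false positive logit

    def forward(self, x):                               # x: (B, S, 1, H, W)
        B, S = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(B, S, -1)
        h, _ = self.rnn(feats)                          # (B, S, hidden)
        w = torch.softmax(self.slice_weight(h), dim=1)  # weigh slices before pooling
        return self.fc((w * h).sum(dim=1))
```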
A study of machine learning and deep learning models for solving medical imaging problems
The application of machine learning and deep learning methods to medical imaging aims to create systems that can help diagnose disease and automate the analysis of medical images to facilitate treatment planning. Deep learning methods do well in image recognition, but medical images present unique challenges. The lack of large amounts of data, the image size, and the high class imbalance in most datasets make training a machine learning model to recognize a particular pattern, typically present only in case images, a formidable task. Experiments are conducted to classify breast cancer images as healthy or nonhealthy, and to detect lesions in damaged brain MRI (Magnetic Resonance Imaging) scans. Random Forest, Logistic Regression and Support Vector Machine perform competitively in the classification experiments, but in general, deep neural networks beat all conventional methods. Gaussian Naïve Bayes (GNB) and the Lesion Identification with Neighborhood Data Analysis (LINDA) methods produce better lesion detection results than single-path neural networks, but a multi-modal, multi-path deep neural network beats all other methods. The importance of pre-processing training data is also highlighted and demonstrated, especially for medical images, which require extensive preparation to improve classifier and detector performance. Only a more complex and deeper neural network combined with properly pre-processed data can produce the desired accuracy levels that can rival and maybe exceed those of human experts.
Signal intensity analysis of ecological defined habitat in soft tissue sarcomas to predict metastasis development
Magnetic Resonance Imaging (MRI) is the clinical standard of care for diagnosis and follow-up of Soft Tissue Sarcomas (STS), which presents an opportunity to explore the heterogeneity inherent in these rare tumors. Tumor heterogeneity is a challenging problem to quantify and has been shown to exist at many scales, from genomic to radiomic, both within an individual tumor, between tumors from the same primary in the same patient, and across different patients. In this paper, we propose a method that focuses on spatially distinct sub-regions, or habitats, in the diagnostic MRI of patients with STS by using pixel signal intensity. Habitat characteristics likely represent areas of differing underlying biology within the tumor, and delineation of these differences could provide clinically relevant information to aid in selecting a therapeutic regimen (chemotherapy or radiation). To quantify tumor heterogeneity, we first derive intra-tumoral segmentations based on signal intensity and then build a spatial mapping scheme from various MRI modalities. Finally, we predict clinical outcomes, using in this paper the appearance of distant metastasis, the most clinically meaningful endpoint. After tumor segmentation into high and low signal intensities, a set of quantitative imaging features based on signal intensity is proposed to represent variation in habitat characteristics. This set of features is utilized to predict metastasis in a cohort of STS patients. We show that this framework, using only pre-therapy MRI, predicts the development of metastasis in STS patients with 72.41% accuracy, providing a starting point for a number of clinical hypotheses.
Radiogenomic Prediction of MGMT Using Deep Learning with Bayesian Optimized Hyperparameters
Glioblastoma (GBM) is the most aggressive primary brain tumor. The standard treatment for newly diagnosed GBM patients is radiotherapy with the chemotherapeutic agent temozolomide (TMZ). O6-methylguanine-DNA-methyltransferase (MGMT) gene methylation status is a genetic biomarker for patient response to the treatment and is associated with a longer survival time. The standard method of assessing genetic alteration is surgical resection, which is invasive and time-consuming. Recently, imaging genomics has shown the potential to associate imaging phenotype with genetic alteration. Imaging genomics provides an opportunity for noninvasive assessment of treatment response. Accordingly, we propose a convolutional neural network (CNN) framework with Bayesian optimized hyperparameters for the prediction of MGMT status from multimodal magnetic resonance imaging (mMRI). The goal of the proposed method is to predict the MGMT status noninvasively. Using the RSNA-MICCAI dataset, the proposed framework achieves an area under the curve (AUC) of 0.718 and 0.477 for the validation and testing phases, respectively.
DETECT-LC: A 3D Deep Learning and Textural Radiomics Computational Model for Lung Cancer Staging and Tumor Phenotyping Based on Computed Tomography Volumes
Fathalla, Karma M.
Youssef, Sherin M.
Mohammed, Nourhan
Applied Sciences2022Journal Article, cited 0 times
NSCLC Radiogenomics
NSCLC-Radiomics
NSCLC-Radiomics-Genomics
Lung Cancer is one of the primary causes of cancer-related deaths worldwide. Timely diagnosis and precise staging are pivotal for treatment planning and can thus lead to increased survival rates. The application of advanced machine learning techniques helps in effective diagnosis and staging. In this study, a multistage neural-network-based computational model, DETECT-LC, is proposed. DETECT-LC handles the challenge of choosing discriminative CT slices for constructing 3D volumes, using Haralick and histogram-based radiomics and unsupervised clustering. The ALT-CNN-DENSE Net architecture is introduced as part of DETECT-LC for voxel-based classification. DETECT-LC offers an automatic threshold-based segmentation approach instead of the manual procedure, helping to reduce this burden for radiologists and clinicians. DETECT-LC also presents a slice selection approach and a newly proposed, relatively lightweight 3D CNN architecture to improve on the performance of existing studies. The proposed pipeline is employed for tumor phenotyping and staging. DETECT-LC performance is assessed through a range of experiments, in which DETECT-LC attains outstanding performance, surpassing its counterparts in terms of accuracy, sensitivity, F1-score and Area under Curve (AuC). For histopathology classification, DETECT-LC's average performance achieved an improvement of 20% in overall accuracy, 0.19 in sensitivity, 0.16 in F1-score and 0.16 in AuC over the state of the art. A similar enhancement is reached for staging, where higher overall accuracy, sensitivity and F1-score are attained with differences of 8%, 0.08 and 0.14.
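As a hedged illustration of the slice-scoring ingredients named above (Haralick and histogram-based radiomics, which could then feed unsupervised clustering for slice selection), the following sketch computes GLCM texture and histogram-entropy features for one CT slice; the specific features and parameters are assumptions, not the paper's configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def slice_texture_features(ct_slice_uint8):
    """Haralick (GLCM) and histogram statistics for one CT slice (uint8, 0-255)."""
    glcm = graycomatrix(ct_slice_uint8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    contrast = graycoprops(glcm, 'contrast').mean()
    homogeneity = graycoprops(glcm, 'homogeneity').mean()
    hist, _ = np.histogram(ct_slice_uint8, bins=32, range=(0, 255), density=True)
    entropy = -np.sum(hist[hist > 0] * np.log2(hist[hist > 0]))
    return np.array([contrast, homogeneity, entropy])
```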
B2C3NetF2: Breast cancer classification using an end‐to‐end deep learning feature fusion and satin bowerbird optimization controlled Newton Raphson feature selection
Fatima, Mamuna
Khan, Muhammad Attique
Shaheen, Saima
Almujally, Nouf Abdullah
Wang, Shui‐Hua
CAAI Transactions on Intelligence Technology2023Journal Article, cited 0 times
Website
CBIS-DDSM
BREAST
Features extraction
Computer Aided Diagnosis (CADx)
ResNet101
Contrast enhancement
Transfer learning
Currently, progress in AI is mainly driven by deep learning techniques employed for the classification, identification, and quantification of patterns in clinical images. Deep learning models show markedly better performance than traditional methods on medical image processing tasks, such as skin cancer, colorectal cancer, brain tumour, cardiac disease, breast cancer (BrC), and more. Manual diagnosis of medical issues always requires an expert and is expensive, so developing computer diagnosis techniques based on deep learning is essential. Breast cancer is the most frequently diagnosed cancer in females, with a rapidly growing incidence; it is estimated that the number of patients with BrC will rise by 70% in the next 20 years. If diagnosed at a later stage, the survival rate of patients with BrC is low. Hence, early detection is essential, increasing the survival rate to 50%. A new framework for BrC classification is presented that utilises deep learning and feature optimization. The significant steps of the presented framework include (i) hybrid contrast enhancement of acquired images, (ii) data augmentation to facilitate better learning of the Convolutional Neural Network (CNN) model, (iii) a pre-trained ResNet-101 model, modified according to the selected dataset classes, (iv) deep transfer learning-based model training for feature extraction, (v) the fusion of features using the proposed highly corrected function-controlled canonical correlation analysis approach, and (vi) optimal feature selection using the modified Satin Bowerbird Optimization controlled Newton Raphson algorithm, with final classification using 10 machine learning classifiers. The experiments were carried out on the widely used, publicly available CBIS-DDSM dataset and obtained a best accuracy of 94.5% along with improved computation time. The comparison shows that the presented method surpasses current state-of-the-art approaches.
Towards accurate and efficient diagnoses in nephropathology: An AI-based approach for assessing kidney transplant rejection
The Banff classification is useful for diagnosing renal transplant rejection. However, it has limitations due to subjectivity and varying concordance in physicians' assessments. Artificial intelligence (AI) can help standardize research, increase objectivity and accurately quantify morphological characteristics, improving reproducibility in clinical practice. This study aims to develop an AI-based solution for diagnosing acute kidney transplant rejection by introducing automated evaluation of prognostic morphological patterns. The proposed approach aims to help accurately distinguish borderline changes from rejection. We trained a deep-learning model utilizing a fine-tuned Mask R-CNN architecture, which achieved a mean Average Precision value of 0.74 for the segmentation of renal tissue structures. A strong positive nonlinear correlation was found between the measured infiltration areas and fibrosis, indicating the model's potential for assessing these parameters in kidney biopsies. The ROC analysis showed a high predictive ability for distinguishing between ci and i scores based on infiltration area and fibrosis area measurements. The AI model demonstrated high precision in predicting clinical scores, which makes it a promising assistive AI tool for pathologists. The application of AI in nephropathology has potential for advancement, including automated morphometric evaluation, 3D histological models and faster processing to enhance diagnostic accuracy and efficiency.
Measuring breathing induced oesophageal motion and its dosimetric impact
Fechter, Tobias
Adebahr, Sonja
Grosu, Anca-Ligia
Baltas, Dimos
Physica Medica2021Journal Article, cited 0 times
4D-Lung
PURPOSE: Stereotactic body radiation therapy allows for a precise dose delivery. Organ motion bears the risk of undetected high dose healthy tissue exposure. An organ very susceptible to high dose is the oesophagus. Its low contrast on CT and the oblong shape render motion estimation difficult. We tackle this issue by modern algorithms to measure oesophageal motion voxel-wise and estimate motion related dosimetric impacts.
METHODS: Oesophageal motion was measured using deformable image registration and 4DCT of 11 internal and 5 public datasets. Current clinical practice of contouring the organ on 3DCT was compared to timely resolved 4DCT contours. Dosimetric impacts of the motion were estimated by analysing the trajectory of each voxel in the 4D dose distribution. Finally an organ motion model for patient-wise comparisons was built.
RESULTS: Motion analysis showed mean absolute maximal motion amplitudes of 4.55 ± 1.81 mm left-right, 5.29 ± 2.67 mm anterior-posterior and 10.78 ± 5.30 mm superior-inferior. Motion between cohorts differed significantly. In around 50% of the cases the dosimetric passing criterion was violated. Contours created on 3DCT did not cover 14% of the organ for 50% of the respiratory cycle and were around 38% smaller than the union of all 4D contours. The motion model revealed that the maximal motion is not limited to the lower part of the organ. Our results showed motion amplitudes higher than most reported values in the literature and that motion is very heterogeneous across patients.
CONCLUSIONS: Individual motion information should be considered in contouring and planning.
The use of deep learning in interventional radiotherapy (brachytherapy): A review with a focus on open source and open data
Fechter, Tobias
Sachpazidis, Ilias
Baltas, Dimos
2022Journal Article, cited 0 times
ISBI-MR-Prostate-2013
PROSTATEx
Deep learning has advanced to become one of the most important technologies in almost all medical fields, and it plays an especially large role in areas related to medical imaging. However, in interventional radiotherapy (brachytherapy) deep learning is still in an early phase. In this review, we first investigated and scrutinised the role of deep learning in all processes of interventional radiotherapy and directly related fields, and summarised the most recent developments. For better understanding, we provide explanations of key terms and of approaches to solving common deep learning problems. To reproduce the results of deep learning algorithms, both source code and training data must be available. Therefore, a second focus of this work is an analysis of the availability of open source, open data and open models. In our analysis, we were able to show that deep learning already plays a major role in some areas of interventional radiotherapy but is still hardly present in others. Nevertheless, its impact is increasing with the years, partly self-propelled but also influenced by closely related fields. Open source, data and models are growing in number but are still scarce and unevenly distributed among different research groups. The reluctance to publish code, data and models limits reproducibility and restricts evaluation to mono-institutional datasets. The conclusion of our analysis is that deep learning can positively change the workflow of interventional radiotherapy, but there is still room for improvement when it comes to reproducible results and standardised evaluation methods.
Quantitative Imaging Informatics for Cancer Research
Fedorov, Andrey
Beichel, Reinhard
Kalpathy-Cramer, Jayashree
Clunie, David
Onken, Michael
Riesmeier, Jorg
Herz, Christian
Bauer, Christian
Beers, Andrew
Fillion-Robin, Jean-Christophe
Lasso, Andras
Pinter, Csaba
Pieper, Steve
Nolden, Marco
Maier-Hein, Klaus
Herrmann, Markus D
Saltz, Joel
Prior, Fred
Fennessy, Fiona
Buatti, John
Kikinis, Ron
JCO Clin Cancer Inform2020Journal Article, cited 0 times
Website
QIICR
QIN-HEADNECK
QIN-PROSTATE-Repeatability
TCGA-GBM
TCGA-LGG
LIDC-IDRI
NSCLC-Radiomics
NSCLC-Radiomics-Interobserver1
Head-Neck-Radiomics-HN1
PURPOSE: We summarize Quantitative Imaging Informatics for Cancer Research (QIICR; U24 CA180918), one of the first projects funded by the National Cancer Institute (NCI) Informatics Technology for Cancer Research program. METHODS: QIICR was motivated by the 3 use cases from the NCI Quantitative Imaging Network. 3D Slicer was selected as the platform for implementation of open-source quantitative imaging (QI) tools. Digital Imaging and Communications in Medicine (DICOM) was chosen for standardization of QI analysis outputs. Support of improved integration with community repositories focused on The Cancer Imaging Archive (TCIA). Priorities included improved capabilities of the standard, toolkits and tools, reference datasets, collaborations, and training and outreach. RESULTS: Fourteen new tools to support head and neck cancer, glioblastoma, and prostate cancer QI research were introduced and downloaded over 100,000 times. DICOM was amended, with over 40 correction proposals addressing QI needs. Reference implementations of the standard in a popular toolkit and standalone tools were introduced. Eight datasets exemplifying the application of the standard and tools were contributed. An open demonstration/connectathon was organized, attracting the participation of academic groups and commercial vendors. Integration of tools with TCIA was improved by implementing programmatic communication interface and by refining best practices for QI analysis results curation. CONCLUSION: Tools, capabilities of the DICOM standard, and datasets we introduced found adoption and utility within the cancer imaging community. A collaborative approach is critical to addressing challenges in imaging informatics at the national and international levels. Numerous challenges remain in establishing and maintaining the infrastructure of analysis tools and standardized datasets for the imaging community. Ideas and technology developed by the QIICR project are contributing to the NCI Imaging Data Commons currently being developed.
DICOM for quantitative imaging biomarker development: a standards based approach to sharing clinical data and structured PET/CT analysis results in head and neck cancer research
Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with the clinical systems. Comparison with the established outcomes and evaluation tasks motivate integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM®) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed. The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions. Conversion and visualization tools utilizing this toolkit were developed. The encoded objects were validated for consistency and interoperability. The resulting dataset was deposited in the QIN-HEADNECK collection of The Cancer Imaging Archive (TCIA). Supporting tools for data analysis and DICOM conversion were made available as free open-source software. Discussion. We presented a detailed investigation of the development and application of the DICOM model, as well as the supporting open-source tools and toolkits, to accommodate representation of the research data in QI biomarker development. We demonstrated that the DICOM standard can be used to represent the types of data relevant in HNC QI biomarker development, and encode their complex relationships. The resulting annotated objects are amenable to data mining applications, and are interoperable with a variety of systems that support the DICOM standard.
A comparison of two methods for estimating DCE-MRI parameters via individual and cohort based AIFs in prostate cancer: A step towards practical implementation
Fedorov, Andriy
Fluckiger, Jacob
Ayers, Gregory D
Li, Xia
Gupta, Sandeep N
Tempany, Clare
Mulkern, Robert
Yankeelov, Thomas E
Fennessy, Fiona M
Magnetic resonance imaging2014Journal Article, cited 30 times
Website
Algorithm Development
PROSTATE
Dynamic Contrast-Enhanced (DCE)-MRI
Multi-parametric Magnetic Resonance Imaging, and specifically Dynamic Contrast Enhanced (DCE) MRI, plays an increasingly important role in detection and staging of prostate cancer (PCa). One of the actively investigated approaches to DCE MRI analysis involves pharmacokinetic (PK) modeling to extract quantitative parameters that may be related to microvascular properties of the tissue. It is well-known that the prescribed arterial blood plasma concentration (or Arterial Input Function, AIF) input can have significant effects on the parameters estimated by PK modeling. The purpose of our study was to investigate such effects in DCE MRI data acquired in a typical clinical PCa setting. First, we investigated how the choice of a semi-automated or fully automated image-based individualized AIF (iAIF) estimation method affects the PK parameter values; and second, we examined the use of method-specific averaged AIF (cohort-based, or cAIF) as a means to attenuate the differences between the two AIF estimation methods. Two methods for automated image-based estimation of individualized (patient-specific) AIFs, one of which was previously validated for brain and the other for breast MRI, were compared. cAIFs were constructed by averaging the iAIF curves over the individual patients for each of the two methods. Pharmacokinetic analysis using the Generalized kinetic model and each of the four AIF choices (iAIF and cAIF for each of the two image-based AIF estimation approaches) was applied to derive the volume transfer rate (K(trans)) and extravascular extracellular volume fraction (ve) in the areas of prostate tumor. Differences between the parameters obtained using iAIF and cAIF for a given method (intra-method comparison) as well as inter-method differences were quantified. The study utilized DCE MRI data collected in 17 patients with histologically confirmed PCa. Comparison at the level of the tumor region of interest (ROI) showed that the two automated methods resulted in significantly different (p<0.05) mean estimates of ve, but not of K(trans). Comparing cAIFs, different estimates for both ve and K(trans) were obtained. Intra-method comparison between the iAIF- and cAIF-driven analyses showed the lack of effect on ve, while K(trans) values were significantly different for one of the methods. Our results indicate that the choice of the algorithm used for automated image-based AIF determination can lead to significant differences in the values of the estimated PK parameters. K(trans) estimates are more sensitive to the choice between cAIF/iAIF as compared to ve, leading to potentially significant differences depending on the AIF method. These observations may have practical consequences in evaluating the PK analysis results obtained in a multi-site setting.
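For reference, the Generalized kinetic (standard Tofts) model used in the analysis above relates the tissue concentration to the AIF through K(trans) and ve; the following is the textbook form of that model, not an equation quoted from the abstract.

```latex
% Standard Tofts / generalized kinetic model: tissue concentration C_t as a
% function of the arterial input function C_p, with parameters K^trans and v_e.
C_t(t) = K^{\mathrm{trans}} \int_0^t C_p(\tau)\,
         \exp\!\left(-\frac{K^{\mathrm{trans}}\,(t-\tau)}{v_e}\right)\mathrm{d}\tau
```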
PURPOSE: The dataset contains annotations for lung nodules collected by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC) stored as standard DICOM objects. The annotations accompany a collection of computed tomography (CT) scans for over 1000 subjects annotated by multiple expert readers, and correspond to "nodules ≥ 3 mm", defined as any lesion considered to be a nodule with greatest in-plane dimension in the range 3-30 mm regardless of presumed histology. The present dataset aims to simplify reuse of the data with the readily available tools, and is targeted towards researchers interested in the analysis of lung CT images.
ACQUISITION AND VALIDATION METHODS: Open source tools were utilized to parse the project-specific XML representation of LIDC-IDRI annotations and save the result as standard DICOM objects. Validation procedures focused on establishing compliance of the resulting objects with the standard, consistency of the data between the DICOM and project-specific representation, and evaluating interoperability with the existing tools.
DATA FORMAT AND USAGE NOTES: The dataset utilizes DICOM Segmentation objects for storing annotations of the lung nodules, and DICOM Structured Reporting objects for communicating qualitative evaluations (nine attributes) and quantitative measurements (three attributes) associated with the nodules. In total, 875 subjects contain 6859 nodule annotations. Clustering of the neighboring annotations resulted in 2651 distinct nodules. The data are available in TCIA at https://doi.org/10.7937/TCIA.2018.h7umfurq.
POTENTIAL APPLICATIONS: The standardized dataset maintains the content of the original contribution of the LIDC-IDRI consortium, and should be helpful in developing automated tools for characterization of lung lesions and image phenotyping. In addition to those properties, the representation of the present dataset makes it more FAIR (Findable, Accessible, Interoperable, Reusable) for the research community, and enables its integration with other standardized data collections.
An annotated test-retest collection of prostate multiparametric MRI
Fedorov, Andriy
Schwier, Michael
Clunie, David
Herz, Christian
Pieper, Steve
Kikinis, Ron
Tempany, Clare
Fennessy, Fiona
Scientific Data2018Journal Article, cited 0 times
Website
QIN-PROSTATE-Repeatability
Detection of malignancy in whole slide images of endometrial cancer biopsies using artificial intelligence
Fell, Christina
Mohammadi, Mahnaz
Morrison, David
Arandjelović, Ognjen
Syed, Sheeba
Konanahalli, Prakash
Bell, Sarah
Bryson, Gareth
Harrison, David J.
Harris-Birtill, David
PLoS One2023Journal Article, cited 0 times
CPTAC-UCEC
In this study we use artificial intelligence (AI) to categorise endometrial biopsy whole slide images (WSI) from digital pathology as either "malignant", "other or benign" or "insufficient". An endometrial biopsy is a key step in diagnosis of endometrial cancer, biopsies are viewed and diagnosed by pathologists. Pathology is increasingly digitised, with slides viewed as images on screens rather than through the lens of a microscope. The availability of these images is driving automation via the application of AI. A model that classifies slides in the manner proposed would allow prioritisation of these slides for pathologist review and hence reduce time to diagnosis for patients with cancer. Previous studies using AI on endometrial biopsies have examined slightly different tasks, for example using images alongside genomic data to differentiate between cancer subtypes. We took 2909 slides with "malignant" and "other or benign" areas annotated by pathologists. A fully supervised convolutional neural network (CNN) model was trained to calculate the probability of a patch from the slide being "malignant" or "other or benign". Heatmaps of all the patches on each slide were then produced to show malignant areas. These heatmaps were used to train a slide classification model to give the final slide categorisation as either "malignant", "other or benign" or "insufficient". The final model was able to accurately classify 90% of all slides correctly and 97% of slides in the malignant class; this accuracy is good enough to allow prioritisation of pathologists' workload.
Comparison of methods for sensitivity correction in Talbot-Lau computed tomography
Felsner, L.
Roser, P.
Maier, A.
Riess, C.
Int J Comput Assist Radiol Surg2021Journal Article, cited 0 times
Website
CT Lymph Nodes
Algorithms
Image Processing
Image reconstruction
Phantom
Talbot-Lau interferometer
X-ray phase contrast imaging
Computed Tomography (CT)
PURPOSE: In Talbot-Lau X-ray phase contrast imaging, the measured phase value depends on the position of the object in the measurement setup. When imaging large objects, this may lead to inhomogeneous phase contributions within the object. These inhomogeneities introduce artifacts in tomographic reconstructions of the object. METHODS: In this work, we compare recently proposed approaches to correct such reconstruction artifacts. We compare an iterative reconstruction algorithm, a known operator network and a U-net. The methods are qualitatively and quantitatively compared on the Shepp-Logan phantom and on the anatomy of a human abdomen. We also perform a dedicated experiment on the noise behavior of the methods. RESULTS: All methods were able to reduce the specific artifacts in the reconstructions for the simulated and virtual real anatomy data. The results show method-specific residual errors that are indicative for the inherently different correction approaches. While all methods were able to correct the artifacts, we report a different noise behavior. CONCLUSION: The iterative reconstruction performs very well, but at the cost of a high runtime. The known operator network shows consistently a very competitive performance. The U-net performs slightly worse, but has the benefit that it is a general-purpose network that does not require special application knowledge.
Brain Tumor Segmentation with Patch-Based 3D Attention UNet from Multi-parametric MRI
Accurate segmentation of different sub-regions of gliomas including peritumoral edema, necrotic core, enhancing and non-enhancing tumor core from multiparametric MRI scans has important clinical relevance in diagnosis, prognosis and treatment of brain tumors. However, due to the highly heterogeneous appearance and shape, segmentation of the sub-regions is very challenging. Recent development using deep learning models has proved its effectiveness in the past several brain segmentation challenges as well as other semantic and medical image segmentation problems. In this paper we developed a deep-learning-based segmentation method using a patch-based 3D UNet with the attention block. Hyper-parameters tuning and training and testing augmentations were applied to increase the model performance. Preliminary results showed effectiveness of the segmentation model and achieved mean Dice scores of 0.806 (ET), 0.863 (TC) and 0.918 (WT) in the validation dataset.
Brain Tumor Segmentation with Uncertainty Estimation and Overall Survival Prediction
Accurate segmentation of different sub-regions of gliomas, including peritumoral edema, necrotic core, and enhancing and non-enhancing tumor core, from multimodal MRI scans has important clinical relevance in diagnosis, prognosis and treatment of brain tumors. However, due to their highly heterogeneous appearance and shape, segmentation of the sub-regions is very challenging. Recent work using deep learning models has proved their effectiveness in several past brain segmentation challenges as well as other semantic and medical image segmentation problems. Most models in brain tumor segmentation use a 2D/3D patch to predict the class label for the center voxel, and various patch sizes and scales are used to improve model performance. However, this approach has low computational efficiency and a limited receptive field. U-Net is a widely used network structure for end-to-end segmentation that can be applied to the entire image or to extracted patches to provide classification labels for all input voxels, so it is more efficient and is expected to yield better performance with larger input sizes. In this paper we developed a deep-learning-based segmentation method using an ensemble of 3D U-Nets with different hyper-parameters. Furthermore, we estimated the uncertainty of the segmentation from the probabilistic outputs of each network and studied the correlation between the uncertainty and the performance. Preliminary results showed the effectiveness of the segmentation model. Finally, we developed a linear model for survival prediction using extracted imaging and non-imaging features, which, despite its simplicity, can effectively reduce overfitting and regression errors.
Brain Tumor Segmentation for Multi-Modal MRI with Missing Information
Feng, X.
Ghimire, K.
Kim, D. D.
Chandra, R. S.
Zhang, H.
Peng, J.
Han, B.
Huang, G.
Chen, Q.
Patel, S.
Bettagowda, C.
Sair, H. I.
Jones, C.
Jiao, Z.
Yang, L.
Bai, H.
J Digit Imaging2023Journal Article, cited 0 times
Website
BraTS-TCGA-GBM
BraTS 2021
3D U-Net
Brain tumor segmentation
Deep learning
Multi-contrast MRI
Sequence dropout
Deep convolutional neural networks (DCNNs) have shown promise in brain tumor segmentation from multi-modal MRI sequences, accommodating heterogeneity in tumor shape and appearance. The fusion of multiple MRI sequences allows networks to explore complementary tumor information for segmentation. However, developing a network that maintains clinical relevance in situations where certain MRI sequence(s) might be unavailable or unusable poses a significant challenge. While one solution is to train multiple models with different MRI sequence combinations, it is impractical to train a model for every possible sequence combination. In this paper, we propose a DCNN-based brain tumor segmentation framework incorporating a novel sequence dropout technique in which networks are trained to be robust to missing MRI sequences while employing all other available sequences. Experiments were performed on the RSNA-ASNR-MICCAI BraTS 2021 Challenge dataset. When all MRI sequences were available, there were no significant differences in the performance of the model with and without dropout for enhancing tumor (ET), tumor core (TC), and whole tumor (WT) (p-values 1.000, 1.000, 0.799, respectively), demonstrating that the addition of dropout improves robustness without hindering overall performance. When key sequences were unavailable, the network with sequence dropout performed significantly better. For example, when tested on only T1, T2, and FLAIR sequences together, the Dice similarity coefficient (DSC) for ET, TC, and WT increased from 0.143 to 0.486, 0.431 to 0.680, and 0.854 to 0.901, respectively. Sequence dropout represents a relatively simple yet effective approach for brain tumor segmentation with missing MRI sequences.
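The sequence dropout technique described above can be sketched compactly as a training-time transform. A minimal PyTorch sketch, assuming one input channel per MRI sequence; the drop probability and the keep-at-least-one-sequence rule are illustrative choices rather than the paper's exact settings:

import torch

def sequence_dropout(x: torch.Tensor, p: float = 0.25) -> torch.Tensor:
    # x: (B, C, D, H, W) multi-sequence volumes, one channel per MRI
    # sequence (e.g., T1, T1ce, T2, FLAIR); each sequence is dropped
    # independently with probability p (illustrative value).
    b, c = x.shape[:2]
    keep = torch.rand(b, c, device=x.device) >= p
    # guarantee at least one surviving sequence per sample
    dead = keep.sum(dim=1) == 0
    if dead.any():
        keep[dead, torch.randint(c, (int(dead.sum()),), device=x.device)] = True
    mask = keep.view(b, c, *([1] * (x.dim() - 2))).to(x.dtype)
    return x * mask

Applied once per batch before the forward pass, this trains the network to produce segmentations that degrade gracefully when sequences are missing at test time.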
Brain Tumor Segmentation Using an Ensemble of 3D U-Nets and Overall Survival Prediction Using Radiomic Features
Feng, Xue
Tustison, Nicholas
Meyer, Craig
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Machine Learning
Abstract
Accurate segmentation of different sub-regions of gliomas, including peritumoral edema, necrotic core, and enhancing and non-enhancing tumor core, from multimodal MRI scans has important clinical relevance in the diagnosis, prognosis and treatment of brain tumors. However, due to their highly heterogeneous appearance and shape, segmentation of the sub-regions is very challenging. Recent developments using deep learning models have proved effective in past brain segmentation challenges as well as in other semantic and medical image segmentation problems. Most models in brain tumor segmentation use a 2D/3D patch to predict the class label for the center voxel, with various patch sizes and scales used to improve model performance. However, this approach has low computational efficiency and a limited receptive field. U-Net is a widely used network structure for end-to-end segmentation that can be applied to the entire image or to extracted patches to provide classification labels for all input voxels, so it is more efficient and is expected to yield better performance with larger input sizes. Furthermore, instead of picking the best network structure, an ensemble of multiple models, trained on different datasets or with different hyper-parameters, can generally improve segmentation performance. In this study we propose to use an ensemble of 3D U-Nets with different hyper-parameters for brain tumor segmentation. Preliminary results showed the effectiveness of this model. In addition, we developed a linear model for survival prediction using extracted imaging and non-imaging features which, despite its simplicity, can effectively reduce overfitting and regression errors.
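As a concrete sketch of the ensembling step this abstract describes, the following averages the softmax outputs of independently trained 3D U-Nets; equal weighting is an assumption, since the abstract does not state how the members are combined:

import torch

@torch.no_grad()
def ensemble_predict(models, volume):
    # models: list of trained 3D U-Nets; volume: (B, C, D, H, W) input
    # average the per-model softmax probabilities, then take the argmax
    probs = torch.stack([torch.softmax(m(volume), dim=1) for m in models])
    return probs.mean(dim=0).argmax(dim=1)  # (B, D, H, W) label map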
Identifying BAP1 Mutations in Clear-Cell Renal Cell Carcinoma by CT Radiomics: Preliminary Findings
Feng, Zhan
Zhang, Lixia
Qi, Zhong
Shen, Qijun
Hu, Zhengyu
Chen, Feng
Frontiers in Oncology2020Journal Article, cited 0 times
Website
TCGA-KIRC
Radiogenomics
KIDNEY
Renal cancer
Clear cell renal cell carcinoma (ccRCC)
To evaluate the potential application of computed tomography (CT) radiomics in the prediction of BRCA1-associated protein 1 (BAP1) mutation status in patients with clear-cell renal cell carcinoma (ccRCC), we retrospectively retrieved clinical and CT imaging data of 54 patients from The Cancer Genome Atlas–Kidney Renal Clear Cell Carcinoma database. Among these, 45 patients had wild-type BAP1 and nine patients had BAP1 mutations. The texture features of tumor images were extracted using the Matlab-based IBEX package. To produce class-balanced data and improve the stability of prediction, we performed data augmentation for the BAP1 mutation group during cross-validation. A model to predict BAP1 mutation status was constructed using the Random Forest classification algorithm and evaluated using leave-one-out cross-validation. The Random Forest model predicted BAP1 mutation status with an accuracy of 0.83, sensitivity of 0.72, specificity of 0.87, precision of 0.65, AUC of 0.77, and F-score of 0.68. CT radiomics is a feasible method for predicting BAP1 mutation status in patients with ccRCC.
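A minimal sketch of the evaluation protocol described above (leave-one-out cross-validation with minority-class augmentation inside each fold), using scikit-learn; the simple oversampling below stands in for the paper's unspecified augmentation procedure:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut

def loocv_predict(X, y, seed=0):
    # X: (n_patients, n_texture_features); y: 1 = BAP1 mutant, 0 = wild type
    rng = np.random.default_rng(seed)
    preds = np.empty_like(y)
    for train_idx, test_idx in LeaveOneOut().split(X):
        Xtr, ytr = X[train_idx], y[train_idx]
        # oversample the minority (mutant) class to balance the fold
        minority = np.flatnonzero(ytr == 1)
        need = int((ytr == 0).sum() - minority.size)
        if need > 0 and minority.size > 0:
            extra = rng.choice(minority, size=need, replace=True)
            Xtr = np.vstack([Xtr, Xtr[extra]])
            ytr = np.concatenate([ytr, ytr[extra]])
        clf = RandomForestClassifier(n_estimators=500, random_state=seed)
        clf.fit(Xtr, ytr)
        preds[test_idx] = clf.predict(X[test_idx])
    return preds

Balancing inside each fold (rather than before the split) avoids leaking duplicated samples into the held-out case.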
Deep Learning Model for Automatic Contouring of Cardiovascular Substructures on Radiotherapy Planning CT Images: Dosimetric Validation and Reader Study based Clinical Acceptability Testing
Fernandes, Miguel Garrett
Bussink, Johan
Stam, Barbara
Wijsman, Robin
Schinagl, Dominic AX
Teuwen, Jonas
Monshouwer, René
Radiotherapy and Oncology2021Journal Article, cited 0 times
Website
HEVC/H.265 is the most interesting and cutting-edge topic in digital video compression, halving the required bandwidth compared with the previous H.264 standard. Telemedicine services and, in general, any medical video application can benefit from these video encoding advances. However, HEVC is computationally expensive to implement. In this paper a method for reducing HEVC complexity in the medical environment is proposed. The sequences typically processed in this context contain several homogeneous regions. Leveraging these regions, it is possible to simplify the HEVC flow while maintaining high quality. In comparison with the HM16.2 standard, the encoding time is reduced by up to 75%, with negligible quality loss. Moreover, the algorithm is straightforward to implement on any hardware platform.
Deep and statistical learning in biomedical imaging: State of the art in 3D MRI brain tumor segmentation
Fernando, K. Ruwani M.
Tsokos, Chris P.
Information Fusion2023Journal Article, cited 0 times
TCGA-GBM
TCGA-LGG
Clinical diagnosis and treatment decisions rely upon the integration of patient-specific data with clinical reasoning. Cancer presents a unique context that influences treatment decisions, given its diverse forms of disease evolution. Biomedical imaging allows non-invasive assessment of diseases based on visual evaluations, leading to better clinical outcome prediction and therapeutic planning. Early methods of brain cancer characterization predominantly relied upon statistical modeling of neuroimaging data. Driven by breakthroughs in computer vision, deep learning has become the de facto standard in medical imaging. Integrated statistical and deep learning methods have recently emerged as a new direction in the automation of medical practice, unifying multi-disciplinary knowledge in medicine, statistics, and artificial intelligence. In this study, we critically review major statistical, deep learning, and probabilistic deep learning models and their applications in brain imaging research, with a focus on MRI-based brain tumor segmentation. The results highlight that model-driven classical statistics and data-driven deep learning form a potent combination for developing automated systems in clinical oncology.
2D and 2.5D Pancreas and Tumor Segmentation in Heterogeneous CT Images of PDAC Patients
Characterization of Pulmonary Nodules Based on Features of Margin Sharpness and Texture
Ferreira, José Raniery
Oliveira, Marcelo Costa
de Azevedo-Marques, Paulo Mazzoncini
Journal of Digital Imaging2017Journal Article, cited 1 times
Website
LIDC-IDRI
lung cancer
pulmonary nodule
image classification
pattern recognition
On the Evaluation of the Suitability of the Materials Used to 3D Print Holographic Acoustic Lenses to Correct Transcranial Focused Ultrasound Aberrations
Ferri, Marcelino
Bravo, Jose Maria
Redondo, Javier
Jimenez-Gambin, Sergio
Jimenez, Noe
Camarena, Francisco
Sanchez-Perez, Juan Vicente
Polymers (Basel)2019Journal Article, cited 2 times
Website
HEAD
Computed Tomography (CT)
Ultrasound
The correction of transcranial focused ultrasound aberrations is a relevant topic for enhancing various non-invasive medical treatments. Presently, the most widely accepted method to improve focusing is the emission through multi-element phased arrays; however, a new disruptive technology, based on 3D printed holographic acoustic lenses, has recently been proposed, overcoming the spatial limitations of phased arrays due to the submillimetric precision of the latest generation of 3D printers. This work aims to optimize this recent solution. Particularly, the preferred acoustic properties of the polymers used for printing the lenses are systematically analyzed, paying special attention to the effect of p-wave speed and its relationship to the achievable voxel size of 3D printers. Results from simulations and experiments clearly show that, given a particular voxel size, there are optimal ranges for lens thickness and p-wave speed, fairly independent of the emitted frequency, the transducer aperture, or the transducer-target distance.
Enhanced Numerical Method for the Design of 3-D-Printed Holographic Acoustic Lenses for Aberration Correction of Single-Element Transcranial Focused Ultrasound
Marcelino Ferri
José M. Bravo
Javier Redondo
Juan V. Sánchez-Pérez
Ultrasound in Medicine & Biology2018Journal Article, cited 0 times
Website
TCIA General
Head
Computed Tomography (CT)
Ultrasound
The correction of transcranial focused ultrasound aberrations is a relevant issue for enhancing various non-invasive medical treatments. Emission through multi-element phased arrays has been the most widely accepted method to improve focusing in recent years; however, the number and size of transducers represent a bottleneck that limits the focusing accuracy of the technique. To overcome this limitation, a new disruptive technology, based on 3-D-printed acoustic lenses, has recently been proposed. As the submillimeter precision of the latest generation of 3-D printers has been proven to overcome the spatial limitations of phased arrays, a new challenge is to improve the accuracy of the numerical simulations required to design this type of ultrasound lens. In the study described here, we evaluated two improvements to the numerical model applied in previous works for the design of 3-D-printed lenses: (i) allowing the propagation of shear waves in the skull by simulating it as an isotropic solid and (ii) introducing absorption into the set of equations that describes the dynamics of the wave in both fluid and solid media. The results of the numerical simulations provide evidence that the inclusion of both s-waves and absorption significantly improves focusing.
Transferring CT image biomarkers from fibrosing idiopathic interstitial pneumonia to COVID-19 analysis
Fetita, Catalin
Rennotte, Simon
Latrasse, Marjorie
Tapu, Ruxandra
Maury, Mathilde
Mocanu, Bogdan
Nunes, Hilario
Brillet, Pierre-Yves
2021Conference Proceedings, cited 0 times
CT Images in COVID-19
Fibrosing idiopathic interstitial pneumonia (fIIP) is a subclass of interstitial lung diseases that leads to fibrosis in a continuous and irreversible process of lung function decay. Patients with fIIP require regular quantitative follow-up with CT, and several image biomarkers have already been proposed to grade pathology severity and predict its evolution; these include the spatial extent of the diseased lung parenchyma and markers of airway and vascular remodeling. COVID-19 (Cov-19) presents several similarities with fIIP and is moreover suspected to evolve to fIIP in 10-30% of severe cases. The main differences between Cov-19 and fIIP are the presence of peripheral ground glass opacities and less or no fibrosis in the lung, as well as the absence of airway remodeling. This paper proposes a preliminary study to investigate how existing image markers for fIIP may apply to Cov-19 phenotyping, namely texture classification and vascular remodeling. In addition, since for some patients the fIIP/Cov-19 follow-up protocol imposes CT acquisitions at both full inspiration and full expiration, this information could also be exploited to extract additional knowledge for each individual case. We hypothesize that taking the two respiratory phases into account to analyze breathing parameters through interpolation and registration might contribute to better phenotyping of the pathology. This preliminary study, conducted on a reduced number of patients (eight Cov-19 patients of different severity degrees, two fIIP patients and one control), shows the great potential of the selected CT image markers.
Generalized Wasserstein Dice Score, Distributionally Robust Deep Learning, and Ranger for Brain Tumor Segmentation: BraTS 2020 Challenge
Fidon, Lucas
Ourselin, Sébastien
Vercauteren, Tom
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Optimization
Convolutional Neural Network (CNN)
Training a deep neural network is an optimization problem with four main ingredients: the design of the deep neural network, the per-sample loss function, the population loss function, and the optimizer. However, methods developed to compete in recent BraTS challenges tend to focus only on the design of deep neural network architectures, while paying less attention to the three other aspects. In this paper, we experimented with adopting the opposite approach. We stuck to a generic and state-of-the-art 3D U-Net architecture and experimented with a non-standard per-sample loss function, the generalized Wasserstein Dice loss, a non-standard population loss function, corresponding to distributionally robust optimization, and a non-standard optimizer, Ranger. Those variations were selected specifically for the problem of multi-class brain tumor segmentation. The generalized Wasserstein Dice loss is a per-sample loss function that allows taking advantage of the hierarchical structure of the tumor regions labeled in BraTS. Distributionally robust optimization is a generalization of empirical risk minimization that accounts for the presence of underrepresented subdomains in the training dataset. Ranger is a generalization of the widely used Adam optimizer that is more stable with small batch size and noisy labels. We found that each of those variations of the optimization of deep neural networks for brain tumor segmentation leads to improvements in terms of Dice scores and Hausdorff distances. With an ensemble of three deep neural networks trained with various optimization procedures, we achieved promising results on the validation dataset and the testing dataset of the BraTS 2020 challenge. Our ensemble ranked fourth out of 78 for the segmentation task of the BraTS 2020 challenge with mean Dice scores of 88.9, 84.1, and 81.4, and mean Hausdorff distances at 95% of 6.4, 19.4, and 15.8 for the whole tumor, the tumor core, and the enhancing tumor.
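The key ingredient of the generalized Wasserstein Dice loss is a per-voxel Wasserstein distance under a label-to-label cost matrix, so that confusing hierarchically related tumor labels costs less than confusing tumor with background. A minimal PyTorch sketch of that distance for crisp ground-truth labels; the full loss, which plugs this term into a generalized Dice formulation, and the choice of cost matrix follow Fidon et al. and are not reproduced here:

import torch

def voxelwise_wasserstein(probs: torch.Tensor, target: torch.Tensor,
                          M: torch.Tensor) -> torch.Tensor:
    # probs: (B, C, ...) softmax probabilities; target: (B, ...) int labels
    # M: (C, C) label-to-label cost matrix with zero diagonal
    cost = M[target]                  # (B, ..., C): cost row for each voxel
    cost = cost.movedim(-1, 1)        # -> (B, C, ...) to align with probs
    return (cost * probs).sum(dim=1)  # (B, ...): expected misclassification cost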
Generalized Wasserstein Dice Loss, Test-Time Augmentation, and Transformers for the BraTS 2021 Challenge
Fidon, Lucas
Shit, Suprosanna
Ezhov, Ivan
Paetzold, Johannes C.
Ourselin, Sébastien
Vercauteren, Tom
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Brain tumor segmentation from multiple Magnetic Resonance Imaging (MRI) modalities is a challenging task in medical image computation. The main challenges lie in generalizability to a variety of scanners and imaging protocols. In this paper, we explore strategies to increase model robustness without increasing inference time. Towards this aim, we explore finding a robust ensemble from models trained using different losses, optimizers, and train-validation data splits. Importantly, we explore the inclusion of a transformer in the bottleneck of the U-Net architecture. While we find that the transformer in the bottleneck performs slightly worse than the baseline U-Net on average, the generalized Wasserstein Dice loss consistently produces superior results. Further, we adopt an efficient test-time augmentation strategy for faster and more robust inference. Our final ensemble of seven 3D U-Nets with test-time augmentation produces an average Dice score of 89.4% and an average Hausdorff 95% distance of 10.0 mm when evaluated on the BraTS 2021 testing dataset. Our code and trained models are publicly available at https://github.com/LucasFidon/TRABIT_BraTS2021.
LCD-OpenPACS: an integrated teleradiology system with computer-aided diagnosis of pulmonary nodules in computed tomography exams
Computer-aided detection (CADe) and diagnosis (CADx) system for lung cancer with likelihood of malignancy
Firmino, Macedo
Angelo, Giovani
Morais, Higor
Dantas, Marcel R
Valentim, Ricardo
BioMedical Engineering OnLine2016Journal Article, cited 63 times
Website
LIDC-IDRI
Computer Aided Detection (CADe)
Computer Aided Diagnosis (CADx)
LUNG
Computed Tomography (CT)
BACKGROUND: CADe and CADx systems for the detection and diagnosis of lung cancer have been important areas of research in recent decades. However, these areas have been worked on separately: CADe systems do not present the radiological characteristics of tumors, and CADx systems do not detect nodules and do not have good levels of automation. As a result, these systems are not yet widely used in clinical settings. METHODS: The purpose of this article is to develop a new system for the detection and diagnosis of pulmonary nodules on CT images, grouping them into a single system for the identification and characterization of nodules to improve the level of automation. Further contributions are the use of the Watershed technique for distinguishing possible nodules from other structures and of the Histogram of Oriented Gradients (HOG) for pulmonary nodule feature extraction. Diagnosis is based on the likelihood of malignancy, providing more support for decision-making by radiologists. A rule-based classifier and a Support Vector Machine (SVM) were used to eliminate false positives. RESULTS: The database used in this research consisted of 420 cases obtained randomly from LIDC-IDRI. The segmentation method achieved an accuracy of 97% and the detection system showed a sensitivity of 94.4% with 7.04 false positives per case. Different types of nodules (isolated, juxtapleural, juxtavascular and ground-glass) with diameters between 3 mm and 30 mm were detected. For the diagnosis of malignancy, our system presented ROC curves with areas of 0.91 for nodules highly unlikely to be malignant, 0.80 for nodules moderately unlikely to be malignant, 0.72 for nodules of indeterminate malignancy, 0.67 for nodules moderately suspicious of malignancy and 0.83 for nodules highly suspicious of malignancy. CONCLUSIONS: From our preliminary results, we believe that our system is promising for clinical applications, assisting radiologists in the detection and diagnosis of lung cancer.
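A minimal sketch of the false-positive reduction stage described above, pairing HOG descriptors with an SVM via scikit-image and scikit-learn; the patch size and HOG parameters are illustrative, not the paper's exact settings:

import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(patches):
    # patches: iterable of 2D candidate patches (e.g., 64x64 axial crops)
    return np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for p in patches])

# candidates: list of 2D arrays; labels: 1 = nodule, 0 = false positive
# X = hog_features(candidates)
# svm = SVC(kernel="rbf", probability=True).fit(X, labels)
# likelihood = svm.predict_proba(hog_features(new_candidates))[:, 1]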
Prompt tuning for parameter-efficient medical image segmentation
Fischer, Marc
Bartler, Alexander
Yang, Bin
Med Image Anal2023Journal Article, cited 1 times
Website
Pancreas-CT
Segmentation
Algorithm Development
Prompt tuning
Self-attention
Self-supervision
Semantic segmentation
Semi-supervised deep learning
Transformer
Neural networks pre-trained with a self-supervision scheme have become the standard when operating in data-rich environments with scarce annotations. As such, fine-tuning a model to a downstream task in a parameter-efficient but effective way, e.g. for a new set of classes in the case of semantic segmentation, is of increasing importance. In this work, we propose and investigate several contributions to achieve a parameter-efficient but effective adaptation for semantic segmentation on two medical imaging datasets. Relying on the recently popularized prompt tuning approach, we provide a prompt-able UNETR (PUNETR) architecture that is frozen after pre-training but adaptable throughout the network via class-dependent learnable prompt tokens. We pre-train this architecture with a dedicated dense self-supervision scheme based on assignments to online generated prototypes (contrastive prototype assignment, CPA) of a student-teacher combination. Concurrently, an additional segmentation loss is applied for a subset of classes during pre-training, further increasing the effectiveness of the leveraged prompts in the fine-tuning phase. We demonstrate that the resulting method is able to attenuate the gap between fully fine-tuned and parameter-efficiently adapted models on CT imaging datasets. To this end, the difference between fully fine-tuned and prompt-tuned variants amounts to 7.81 pp for the TCIA/BTCV dataset and 5.37 and 6.57 pp for subsets of the TotalSegmentator dataset in mean Dice Similarity Coefficient (DSC, in %), while only adjusting prompt tokens corresponding to 0.51% of the pre-trained backbone model with 24.4M frozen parameters. The code for this work is available on https://github.com/marcdcfischer/PUNETR.
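The prompt-tuning idea above, a frozen backbone adapted only through learnable tokens, can be sketched as follows in PyTorch; a generic batch-first transformer encoder stands in for the PUNETR backbone, and one prompt token per target class is an illustrative choice:

import torch
import torch.nn as nn

class PromptedEncoder(nn.Module):
    def __init__(self, backbone: nn.Module, dim: int, n_prompts: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False          # backbone stays frozen
        # only these tokens receive gradients during fine-tuning
        self.prompts = nn.Parameter(torch.randn(n_prompts, dim) * 0.02)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, dim) patch embeddings; prepend the prompt tokens
        prompts = self.prompts.unsqueeze(0).expand(tokens.size(0), -1, -1)
        return self.backbone(torch.cat([prompts, tokens], dim=1))

The trainable parameter count is just n_prompts * dim, consistent in spirit with the small fraction (0.51%) of trainable weights quoted above.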
A Radiogenomic Approach for Decoding Molecular Mechanisms Underlying Tumor Progression in Prostate Cancer
Fischer, Sarah
Tahoun, Mohamed
Klaan, Bastian
Thierfelder, Kolja M
Weber, Marc-Andre
Krause, Bernd J
Hakenberg, Oliver
Fuellen, Georg
Hamed, Mohamed
Cancers (Basel)2019Journal Article, cited 0 times
Website
TCGA-PRAD
Radiogenomics
Classification
PROSTATE
Prostate cancer (PCa) is a genetically heterogeneous cancer entity that causes challenges in pre-treatment clinical evaluation, such as the correct identification of the tumor stage. Conventional clinical tests based on digital rectal examination, Prostate-Specific Antigen (PSA) levels, and Gleason score still lack accuracy for stage prediction. We hypothesize that unraveling the molecular mechanisms underlying PCa staging via integrative analysis of multi-OMICs data could significantly improve the prediction accuracy for PCa pathological stages. We present a radiogenomic approach comprising clinical, imaging, and two genomic (gene and miRNA expression) datasets for 298 PCa patients. Comprehensive analysis of gene and miRNA expression profiles for two frequent PCa stages (T2c and T3b) unraveled the molecular characteristics for each stage and the corresponding gene regulatory interaction network that may drive tumor upstaging from T2c to T3b. Furthermore, four biomarkers (ANPEP, mir-217, mir-592, mir-6715b) were found to distinguish between the two PCa stages and were highly correlated (average r = +/- 0.75) with corresponding aggressiveness-related imaging features in both tumor stages. When combined with related clinical features, these biomarkers markedly improved the prediction accuracy for the pathological stage. Our prediction model exhibits high potential to yield clinically relevant results for characterizing PCa aggressiveness.
The ASNR-ACR-RSNA Common Data Elements Project: What Will It Do for the House of Neuroradiology?
Flanders, AE
Jordan, JE
American Journal of Neuroradiology2018Journal Article, cited 0 times
Website
REMBRANDT
VASARI
BRAIN
Federated Learning Approach with Pre-Trained Deep Learning Models for COVID-19 Detection from Unsegmented CT images
Florescu, L. M.
Streba, C. T.
Serbanescu, M. S.
Mamuleanu, M.
Florescu, D. N.
Teica, R. V.
Nica, R. E.
Gheonea, I. A.
Life (Basel)2022Journal Article, cited 0 times
Website
COVID-19
Lung-PET-CT-Dx
Computed Tomography (CT)
Federated learning
(1) Background: Coronavirus disease 2019 (COVID-19) is an infectious disease caused by SARS-CoV-2. Reverse transcription polymerase chain reaction (RT-PCR) remains the current gold standard for detecting SARS-CoV-2 infections in nasopharyngeal swabs. In Romania, the first patient to have contracted COVID-19 was officially declared on 26 February 2020. (2) Methods: This study proposes a federated learning approach with pre-trained deep learning models for COVID-19 detection. Three clients were locally deployed, each with its own dataset. The goal of the clients was to collaborate in order to obtain a global model without sharing dataset samples. The algorithm we developed was connected to our internal picture archiving and communication system and, after running it retrospectively, it encountered chest CT changes suggestive of COVID-19 in a patient investigated in our medical imaging department on 28 January 2020. (4) Conclusions: Based on our results, we recommend using automated AI-assisted software to detect COVID-19 based on lung imaging changes, as an adjuvant diagnostic method to the current gold standard (RT-PCR), in order to greatly enhance the management of these patients and also limit the spread of the disease, not only to the general population but also to healthcare professionals.
Computer-aided nodule assessment and risk yield risk management of adenocarcinoma: the future of imaging?
Shape matters: unsupervised exploration of IDH-wildtype glioma imaging survival predictors
Foltyn-Dumitru, M.
Mahmutoglu, M. A.
Brugnara, G.
Kessler, T.
Sahm, F.
Wick, W.
Heiland, S.
Bendszus, M.
Vollmuth, P.
Schell, M.
Eur Radiol2024Journal Article, cited 0 times
Website
UCSF-PDGM
Cluster analysis
Glioma
Radiogenomics
Magnetic Resonance Imaging (MRI)
Radiomics
OBJECTIVES: This study examines clustering based on shape radiomic features and tumor volume to identify IDH-wildtype glioma phenotypes and assess their impact on overall survival (OS). MATERIALS AND METHODS: This retrospective study included 436 consecutive patients diagnosed with IDH-wt glioma who underwent preoperative MR imaging. Alongside the total tumor volume, nine distinct shape radiomic features were extracted using the PyRadiomics framework. Different imaging phenotypes were identified using partition around medoids (PAM) clustering on the training dataset (348/436). The prognostic efficacy of these phenotypes in predicting OS was evaluated on the test dataset (88/436). External validation was performed using the public UCSF glioma dataset (n = 397). A decision-tree algorithm was employed to determine the relevance of features associated with cluster affiliation. RESULTS: PAM clustering identified two clusters in the training dataset: Cluster 1 (n = 233) had a higher proportion of patients with higher sphericity and elongation, while Cluster 2 (n = 115) had a higher proportion of patients with higher maximum 3D diameter, surface area, axis lengths, and tumor volume (p < 0.001 for each). OS differed significantly between clusters: Cluster 1 showed a median OS of 23.8 compared to 11.4 months of Cluster 2 in the holdout test dataset (p = 0.002). Multivariate Cox regression showed improved performance with cluster affiliation over clinical data alone (C index 0.67 vs 0.59, p = 0.003). Cluster-based models outperformed the models with tumor volume alone (evidence ratio: 5.16-5.37). CONCLUSION: Data-driven clustering reveals imaging phenotypes, highlighting the improved prognostic power of combining shape-radiomics with tumor volume, thereby outperforming predictions based on tumor volume alone in high-grade glioma survival outcomes. CLINICAL RELEVANCE STATEMENT: Shape-radiomics and volume-based cluster analyses of preoperative MRI scans can reveal imaging phenotypes that improve the prediction of OS in patients with IDH-wild type gliomas, outperforming currently known models based on tumor size alone or clinical parameters. KEY POINTS: Shape radiomics and tumor volume clustering in IDH-wildtype gliomas are investigated for enhanced prognostic accuracy. Two distinct phenotypic clusters were identified with different median OSs. Integrating shape radiomics and volume-based clustering enhances OS prediction in IDH-wildtype glioma patients.
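A minimal sketch of the partition-around-medoids step described above, using KMedoids from the scikit-learn-extra package; standardizing the nine shape features plus tumor volume before clustering is an assumption, as the abstract does not state the preprocessing:

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn_extra.cluster import KMedoids  # requires scikit-learn-extra

def shape_clusters(features: np.ndarray) -> np.ndarray:
    # features: (n_patients, 10) = nine shape radiomics + tumor volume
    X = StandardScaler().fit_transform(features)
    return KMedoids(n_clusters=2, method="pam", random_state=0).fit_predict(X)

The resulting cluster labels can then enter a Cox model alongside clinical covariates, mirroring the survival comparison reported above.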
Impact of signal intensity normalization of MRI on the generalizability of radiomic-based prediction of molecular glioma subtypes
Foltyn-Dumitru, Martha
Schell, Marianne
Rastogi, Aditya
Sahm, Felix
Kessler, Tobias
Wick, Wolfgang
Bendszus, Martin
Brugnara, Gianluca
Vollmuth, Philipp
European Radiology2023Journal Article, cited 0 times
UCSF-PDGM
glioma
radiomics
MRI
IDH genotype
Radiomic features have demonstrated encouraging results for non-invasive detection of molecular biomarkers, but the lack of guidelines for pre-processing MRI-data has led to poor generalizability. Here, we assessed the influence of different MRI-intensity normalization techniques on the performance of radiomics-based models for predicting molecular glioma subtypes.
3D MRI Brain Tumour Segmentation with Autoencoder Regularization and Hausdorff Distance Loss Function
Manual segmentation of glioblastoma is a challenging task for radiologists, yet essential for treatment planning. In recent years deep convolutional neural networks have been shown to perform exceptionally well; in particular, the winner of the BraTS 2019 challenge used a 3D U-Net architecture in combination with a variational autoencoder, using the Dice overlap measure as a cost function. In this work we propose a loss function that approximates the Hausdorff distance metric used to evaluate segmentations, in the hope of achieving better segmentation performance on new data.
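One common way to build such a Hausdorff-approximating loss is to weight voxel-wise errors by the distance transform of the ground truth, so mistakes far from the true surface are penalized more. A minimal one-sided sketch of this idea (the paper's exact formulation may differ):

import numpy as np
import torch
from scipy.ndimage import distance_transform_edt

def hausdorff_like_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # pred: (B, D, H, W) probabilities in [0, 1]; target: same shape, binary
    t = target.detach().cpu().numpy().astype(bool)
    # unsigned distance to the ground-truth boundary, per voxel
    dt = np.stack([distance_transform_edt(~m) + distance_transform_edt(m)
                   for m in t])
    dt = torch.from_numpy(dt).to(pred.device, pred.dtype)
    return ((pred - target) ** 2 * dt ** 2).mean()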
Breast cancer is one of the most common cancers in women. The disease is typically asymptomatic in its early stages. Breast imaging examinations allow early detection of the cancer, which is associated with increased chances of a complete cure. There are many breast imaging techniques, such as mammography (MM), ultrasound imaging (US), positron-emission tomography (PET), computed tomography (CT), and magnetic resonance imaging (MRI). These techniques differ in effectiveness, price, underlying physical phenomenon, impact on the patient, and availability. In this paper, we focus on MRI and compare three breast lesion segmentation algorithms tested on the publicly available QIN Breast DCE-MRI database. The obtained Dice and Jaccard indices characterize the quality of the segmentation using the k-means algorithm.
SABOS-Net: Self-supervised attention based network for automatic organ segmentation of head and neck CT images
Francis, S.
Pooloth, G.
Singam, S. B. S.
Puzhakkal, N.
Narayanan, P. P.
Balakrishnan, J. P.
International Journal of Imaging Systems and Technology2022Journal Article, cited 0 times
OPC-Radiomics
auto-contouring
Deep Learning
head and neck ct
organs at risk(oar)
radiation therapy
residual u-net
self supervision
auto-segmentation
framework
Algorithm Development
Atlas
Radiotherapy
The segmentation of Organs At Risk (OAR) in Computed Tomography (CT) images is an essential part of the radiation treatment planning phase, needed to avoid the adverse effects of cancer radiotherapy. Accurate segmentation is a tedious task in the head and neck region due to the large number of small and sensitive organs and the low contrast of CT images. Deep learning-based automatic contouring algorithms can ease this task even when the organs have irregular shapes and size variations. This paper proposes a fully automatic, deep learning-based, self-supervised 3D Residual UNet architecture with a Convolution Block Attention Mechanism (CBAM) for organ segmentation in head and neck CT images. The Model Genesis structure and image context restoration techniques are used for self-supervision, which can help the network learn image features from unlabeled data, addressing the scarcity of annotated medical data for deep networks. A new loss function integrating Focal loss, Tversky loss, and cross-entropy loss is applied for training. The proposed model outperforms state-of-the-art methods in terms of the Dice similarity coefficient in segmenting the organs. Our self-supervised model achieved a 4% increase in the Dice score for the chiasm, a small organ present in only a few CT slices. The proposed model exhibited better accuracy than recent state-of-the-art models for 5 out of 7 OARs and could simultaneously segment all seven organs in an average time of 0.02 s. The source code of this work is made available at .
Teacher-student approach for lung tumor segmentation from mixed-supervised datasets
Fredriksen, Vemund
Sevle, Svein Ole M.
Pedersen, André
Langø, Thomas
Kiss, Gabriel
Lindseth, Frank
PLoS One2022Journal Article, cited 0 times
Lung-PET-CT-Dx
PURPOSE: Cancer is among the leading causes of death in the developed world, and lung cancer is the most lethal type. Early detection is crucial for better prognosis, but can be resource intensive to achieve. Automating tasks such as lung tumor localization and segmentation in radiological images can free valuable time for radiologists and other clinical personnel. Convolutional neural networks may be suited for such tasks, but require substantial amounts of labeled data to train. Obtaining labeled data is a challenge, especially in the medical domain.
METHODS: This paper investigates the use of a teacher-student design to utilize datasets with different types of supervision to train an automatic model performing pulmonary tumor segmentation on computed tomography images. The framework consists of two models: the student that performs end-to-end automatic tumor segmentation and the teacher that supplies the student additional pseudo-annotated data during training.
RESULTS: Using only a small proportion of semantically labeled data and a large number of bounding box annotated data, we achieved competitive performance using a teacher-student design. Models trained on larger amounts of semantic annotations did not perform better than those trained on teacher-annotated data. Our model trained on a small number of semantically labeled data achieved a mean dice similarity coefficient of 71.0 on the MSD Lung dataset.
CONCLUSIONS: Our results demonstrate the potential of utilizing teacher-student designs to reduce the annotation load, as less supervised annotation schemes may be performed, without any real degradation in segmentation accuracy.
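A minimal sketch of how the teacher can convert box-level annotations into dense pseudo-masks for the student; keeping predictions only inside the annotated box is one straightforward choice, and the paper's exact fusion rule may differ:

import torch

@torch.no_grad()
def pseudo_label(teacher, volumes, boxes):
    # volumes: (B, 1, D, H, W) CT patches
    # boxes: per-sample bounding boxes (z0, z1, y0, y1, x0, x1)
    probs = torch.sigmoid(teacher(volumes))
    masks = torch.zeros_like(probs)
    for i, (z0, z1, y0, y1, x0, x1) in enumerate(boxes):
        crop = probs[i, :, z0:z1, y0:y1, x0:x1]
        masks[i, :, z0:z1, y0:y1, x0:x1] = (crop > 0.5).float()
    return masks  # dense pseudo-annotations for student training

The student is then trained on the union of the small semantically labeled set and this teacher-annotated set.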
Ultrasound DICOM Renamer: A MATLAB graphical user interface for workflow improvement for DICOM ultrasound renaming
Freeborn, Todd J.
Mota, Jacob A.
2024Journal Article, cited 0 times
CMB-CRC
Ultrasound DICOM Renamer is a MATLAB-based graphical user interface that facilitates workflow improvements in organizing and renaming DICOM format ultrasound image files. It provides a platform to quickly visualize exported images, generate descriptive filenames using DICOM meta-data and optical character recognition applied to the image, and save renamed files organized by subject. The goal of this program is to both speed up and reduce errors during data cleaning and organization prior to analysis activities in project workflows.
Memory Efficient Brain Tumor Segmentation Using an Autoencoder-Regularized U-Net
Early diagnosis and accurate segmentation of brain tumors are imperative for successful treatment. Unfortunately, manual segmentation is time consuming, costly and despite extensive human expertise often inaccurate. Here, we present an MRI-based tumor segmentation framework using an autoencoder-regularized 3D-convolutional neural network. We trained the model on manually segmented structural T1, T1ce, T2, and Flair MRI images of 335 patients with tumors of variable severity, size and location. We then tested the model using independent data of 125 patients and successfully segmented brain tumors into three subregions: the tumor core (TC), the enhancing tumor (ET) and the whole tumor (WT). We also explored several data augmentations and preprocessing steps to improve segmentation performance. Importantly, our model was implemented on a single NVIDIA GTX1060 graphics unit and hence optimizes tumor segmentation for widely affordable hardware. In sum, we present a memory-efficient and affordable solution to tumor segmentation to support the accurate diagnostics of oncological brain pathologies.
Classification of COVID-19 in chest radiographs: assessing the impact of imaging parameters using clinical and simulated images
As computer-aided diagnostics develop to address new challenges in medical imaging, including emerging diseases such as COVID-19, the initial development is hampered by availability of imaging data. Deep learning algorithms are particularly notorious for performance that tends to improve proportionally to the amount of available data. Simulated images, as available through advanced virtual trials, may present an alternative in data-constrained applications. We begin with our previously trained COVID-19 x-ray classification model (denoted as CVX) that leveraged additional training with existing pre-pandemic chest radiographs to improve classification performance in a set of COVID-19 chest radiographs. The CVX model achieves demonstrably better performance on clinical images compared to an equivalent model that applies standard transfer learning from ImageNet weights. The higher performing CVX model is then shown to generalize effectively to a set of simulated COVID-19 images, both quantitative comparisons of AUCs from clinical to simulated image sets, but also in a qualitative sense where saliency map patterns are consistent when compared between sets. We then stratify the classification results in simulated images to examine dependencies in imaging parameters when patient features are constant. Simulated images show promise in optimizing imaging parameters for accurate classification in data-constrained applications.
Supervised Machine-Learning Framework and Classifier Evaluation for Automated Three-dimensional Medical Image Segmentation based on Body MRI
A novel approach to 2D/3D registration of X-ray images using Grangeat's relation
Frysch, R.
Pfeiffer, T.
Rose, G.
Med Image Anal2020Journal Article, cited 0 times
CPTAC-GBM
Image registration
Fast and accurate 2D/3D registration plays an important role in many applications, ranging from scientific and engineering domains all the way to medical care. Today's predominant methods are based on computationally expensive approaches, such as virtual forward or back projections, that limit the real-time applicability of the routines. Here, we present a novel concept that makes use of Grangeat's relation to intertwine information from the 3D volume and the 2D projection space in a way that allows pre-computation of all time-intensive steps. The main effort within actual registration tasks is reduced to simple resampling of the pre-calculated values, which can be executed rapidly on modern GPU hardware. We analyze the applicability of the proposed method on simulated data under various conditions and evaluate the findings on real data from a C-arm CT scanner. Our results show high registration quality in both simulated as well as real data scenarios and demonstrate a reduction in computation time for the crucial computation step by a factor of six to eight when compared to state-of-the-art routines. With minor trade-offs in accuracy, this speed-up can even be increased up to a factor of 100 in particular settings. To our knowledge, this is the first application of Grangeat's relation to the topic of 2D/3D registration. Due to its high computational efficiency and broad range of potential applications, we believe it constitutes a highly relevant approach for various problems dealing with cone beam transmission images.
Distributed and scalable optimization for robust proton treatment planning
Fu, Anqi
Taasti, Vicki T.
Zarepisheh, Masoud
Medical Physics2022Journal Article, cited 0 times
HNSCC-3DCT-RT
BACKGROUND: The importance of robust proton treatment planning to mitigate the impact of uncertainty is well understood. However, its computational cost grows with the number of uncertainty scenarios, prolonging the treatment planning process.
PURPOSE: We developed a fast and scalable distributed optimization platform that parallelizes the robust proton treatment plan computation over the uncertainty scenarios.
METHODS: We modeled the robust proton treatment planning problem as a weighted least-squares problem. To solve it, we employed an optimization technique called the alternating direction method of multipliers with Barzilai-Borwein step size (ADMM-BB). We reformulated the problem in such a way as to split the main problem into smaller subproblems, one for each proton therapy uncertainty scenario. The subproblems can be solved in parallel, allowing the computational load to be distributed across multiple processors (e.g., CPU threads/cores). We evaluated ADMM-BB on four head-and-neck proton therapy patients, each with 13 scenarios accounting for 3 mm setup and 3.5% range uncertainties. We then compared the performance of ADMM-BB with projected gradient descent (PGD) applied to the same problem.
RESULTS: For each patient, ADMM-BB generated a robust proton treatment plan that satisfied all clinical criteria with comparable or better dosimetric quality than the plan generated by PGD, while running about six to seven times faster on average. This speedup increased with the number of scenarios.
CONCLUSIONS: ADMM-BB is a powerful distributed optimization method that leverages parallel processing platforms, such as multicore CPUs, GPUs, and cloud servers, to accelerate the computationally intensive work of robust proton treatment planning. This results in (1) a shorter treatment planning process and (2) the ability to consider more uncertainty scenarios, which improves plan quality.
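The scenario-splitting idea is standard consensus ADMM. A minimal NumPy sketch for minimizing (1/2) * sum_s ||A_s x - b_s||^2, where each scenario's x-update is independent and hence parallelizable; the Barzilai-Borwein step-size rule that the paper adds on top of plain ADMM is omitted here:

import numpy as np

def consensus_admm(As, bs, rho=1.0, iters=100):
    # As: list of (m_s, n) dose-influence matrices, one per scenario
    # bs: list of (m_s,) prescription targets
    n, S = As[0].shape[1], len(As)
    z = np.zeros(n)
    xs, us = np.zeros((S, n)), np.zeros((S, n))
    # pre-factorize each scenario's normal equations once
    Ls = [np.linalg.cholesky(A.T @ A + rho * np.eye(n)) for A in As]
    rhs0 = [A.T @ b for A, b in zip(As, bs)]
    for _ in range(iters):
        for s in range(S):  # each pass of this loop can run on its own worker
            r = rhs0[s] + rho * (z - us[s])
            xs[s] = np.linalg.solve(Ls[s].T, np.linalg.solve(Ls[s], r))
        z = (xs + us).mean(axis=0)   # consensus (averaging) step
        us += xs - z                 # dual update
    return z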
A Deep Learning Reconstruction Framework for Differential Phase-Contrast Computed Tomography With Incomplete Data
Fu, Jian
Dong, Jianbing
Zhao, Feng
2019Journal Article, cited 0 times
CT Lymph Nodes
Machine Learning
Differential phase-contrast computed tomography (DPC-CT) is a powerful analysis tool for soft-tissue and low-atomic-number samples. Limited by implementation conditions, DPC-CT with incomplete projections happens quite often. Conventional reconstruction algorithms face difficulty when given incomplete data; they usually involve complicated parameter selection operations, are sensitive to noise and are time-consuming. In this paper, we report a new deep learning reconstruction framework for incomplete-data DPC-CT. It involves the tight coupling of a deep learning neural network and the DPC-CT reconstruction algorithm in the domain of DPC projection sinograms. The estimated result is not an artifact caused by the incomplete data, but a complete phase-contrast projection sinogram. After training, the framework is fixed and can be used to reconstruct the final DPC-CT images for a given incomplete projection sinogram. Taking sparse-view, limited-view and missing-view DPC-CT as examples, this framework is validated and demonstrated with synthetic and experimental data sets. Compared with other methods, our framework achieves the best imaging quality at a faster speed and with fewer parameters. This work supports the application of state-of-the-art deep learning theory in the field of DPC-CT.
Automatic Detection of Lung Nodules Using 3D Deep Convolutional Neural Networks
Fu, Ling
Ma, Jingchen
Chen, Yizhi
Larsson, Rasmus
Zhao, Jun
Journal of Shanghai Jiaotong University (Science)2019Journal Article, cited 0 times
Website
LIDC-IDRI
Computer Aided Detection (CADe)
Convolutional Neural Network (CNN)
Lung cancer is the leading cause of cancer deaths worldwide. Accurate early diagnosis is critical to increasing the 5-year survival rate of lung cancer, so efficient and accurate detection of lung nodules, the potential precursors to lung cancer, is paramount. In this paper, a computer-aided lung nodule detection system using 3D deep convolutional neural networks (CNNs) is developed. First, a multi-scale 11-layer 3D fully convolutional network (FCN) is used to screen all lung nodule candidates. Considering the relatively small sizes of lung nodules and limited memory, the input of the FCN consists of 3D image patches rather than whole images. The candidates are then classified by a second CNN to obtain the final result. The proposed method achieves high performance in the LUNA16 challenge and demonstrates the effectiveness of using 3D deep CNNs for lung nodule detection.
Radiogenomics based survival prediction of small-sample glioblastoma patients by multi-task DFFSP model
Fu, X.
Chen, C.
Chen, Z.
Yu, J.
Wang, L.
Biomed Tech (Berl)2024Journal Article, cited 0 times
Website
TCGA-GBM
In this paper, the multi-task dense-feature-fusion survival prediction (DFFSP) model is proposed to predict three-year survival for glioblastoma (GBM) patients based on radiogenomics data. The contrast-enhanced T1-weighted (T1w) image, T2-weighted (T2w) image and copy number variation (CNV) are used as the inputs of the three branches of the DFFSP model. As its backbone, the model uses two image feature extraction modules consisting of residual blocks and one dense feature fusion module that performs multi-scale fusion of the T1w and T2w image features. A gene feature extraction module is also used to adaptively weight CNV fragments. Besides, a transfer learning module is introduced to solve the small-sample problem, and an image reconstruction module is adopted to make the model anatomy-aware under a multi-task framework. 256 sample pairs (T1w and corresponding T2w MRI slices) and 187 CNVs of 74 patients were used. The experimental results show that the proposed model can predict the three-year survival of GBM patients with an accuracy of 89.1%, an improvement of 3.2 and 4.7 percentage points over the model without genes and the model using a late fusion strategy, respectively. This model can also classify patients into high-risk and low-risk groups, which will effectively assist doctors in diagnosing GBM patients.
AIGAN: Attention-encoding Integrated Generative Adversarial Network for the reconstruction of low-dose CT and low-dose PET images
Fu, Yu
Dong, Shunjie
Niu, Meng
Xue, Le
Guo, Hanning
Huang, Yanyan
Xu, Yuanfan
Yu, Tianbai
Shi, Kuangyu
Yang, Qianqian
Shi, Yiyu
Zhang, Hong
Tian, Mei
Zhuo, Cheng
Medical Image Analysis2023Journal Article, cited 0 times
Website
LDCT-and-Projection-data
Generative Adversarial Network (GAN)
Image Fusion
Radiomics
Computed Tomography (CT)
Positron Emission Tomography (PET)
X-ray computed tomography (CT) and positron emission tomography (PET) are two of the most commonly used medical imaging technologies for the evaluation of many diseases. Full-dose imaging for CT and PET ensures image quality but usually raises concerns about the potential health risks of radiation exposure. The contradiction between reducing radiation exposure and maintaining diagnostic performance can be addressed effectively by reconstructing low-dose CT (L-CT) and low-dose PET (L-PET) images to the same high quality as full-dose images (F-CT and F-PET). In this paper, we propose an Attention-encoding Integrated Generative Adversarial Network (AIGAN) to achieve efficient and universal full-dose reconstruction for L-CT and L-PET images. AIGAN consists of three modules: the cascade generator, the dual-scale discriminator and the multi-scale spatial fusion module (MSFM). A sequence of consecutive L-CT (L-PET) slices is first fed into the cascade generator, which integrates a generation-encoding-generation pipeline. The generator plays the zero-sum game with the dual-scale discriminator in two stages: coarse and fine. In both stages, the generator produces estimated F-CT (F-PET) images as close to the original F-CT (F-PET) images as possible. After the fine stage, the estimated fine full-dose images are fed into the MSFM, which fully explores the inter- and intra-slice structural information, to output the final generated full-dose images. Experimental results show that the proposed AIGAN achieves state-of-the-art performance on commonly used metrics and satisfies the reconstruction needs of clinical standards.
Technical Note: Automatic segmentation of CT images for ventral body composition analysis
Fu, Yabo
Ippolito, Joseph E.
Ludwig, Daniel R.
Nizamuddin, Rehan
Li, Harold H.
Yang, Deshan
Medical Physics2020Journal Article, cited 0 times
TCGA-KIRC
PURPOSE: Body composition is known to be associated with many diseases including diabetes, cancers, and cardiovascular diseases. In this paper, we developed a fully automatic body tissue decomposition procedure to segment three major compartments that are related to body composition analysis - subcutaneous adipose tissue (SAT), visceral adipose tissue (VAT), and muscle. Three additional compartments - the ventral cavity, lung, and bones - were also segmented during the segmentation process to assist segmentation of the major compartments.
METHODS: A convolutional neural network (CNN) model with densely connected layers was developed to perform ventral cavity segmentation. An image processing workflow was developed to segment the ventral cavity in any patient's computed tomography (CT) scan using the CNN model, and to then further segment the body tissue into multiple compartments using hysteresis thresholding followed by morphological operations. It is important to segment the ventral cavity first to allow accurate separation of compartments with similar Hounsfield unit (HU) values inside and outside the ventral cavity.
RESULTS: The ventral cavity segmentation CNN model was trained and tested with manually labeled ventral cavities in 60 CTs. Dice scores (mean ± standard deviation) for ventral cavity segmentation were 0.966 ± 0.012. Tested on CT datasets with intravenous (IV) and oral contrast, the Dice scores were 0.96 ± 0.02, 0.94 ± 0.06, 0.96 ± 0.04, 0.95 ± 0.04, and 0.99 ± 0.01 for bone, VAT, SAT, muscle, and lung, respectively. The respective Dice scores were 0.97 ± 0.02, 0.94 ± 0.07, 0.93 ± 0.06, 0.91 ± 0.04, and 0.99 ± 0.01 for non-contrast CT datasets.
CONCLUSION: A body tissue decomposition procedure was developed to automatically segment multiple compartments of the ventral body. The proposed method enables fully automated quantification of three-dimensional (3D) ventral body composition metrics from CT images.
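A minimal sketch of the compartment-splitting step after the CNN has produced the ventral-cavity mask; a simple HU band threshold with morphological cleanup stands in for the paper's hysteresis thresholding, and the -190 to -30 HU adipose window is a common choice rather than necessarily the authors' exact setting:

import numpy as np
from scipy.ndimage import binary_opening

def fat_compartments(ct_hu: np.ndarray, cavity: np.ndarray):
    # ct_hu: CT volume in Hounsfield units; cavity: boolean ventral-cavity mask
    fat = binary_opening((ct_hu > -190) & (ct_hu < -30), iterations=2)
    vat = fat & cavity    # adipose inside the ventral cavity
    sat = fat & ~cavity   # adipose outside it
    return sat, vat

This is why the cavity must be segmented first: SAT and VAT share the same HU range and are separable only by position relative to the cavity.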
Deep model with Siamese network for viable and necrotic tumor regions assessment in osteosarcoma
Fu, Yu
Xue, Peng
Ji, Huizhong
Cui, Wentao
Dong, Enqing
Medical Physics2020Journal Article, cited 0 times
Osteosarcoma-Tumor-Assessment
PURPOSE: To achieve automatic classification of viable and necrotic tumor regions in osteosarcoma, most existing deep learning methods can only use simple models to prevent overfitting on small datasets, which weakens the extraction of image features and lowers model accuracy. To solve this problem, a deep model with a Siamese network (DS-Net) was designed in this paper.
METHODS: The DS-Net, constructed on the basis of fully convolutional networks, is composed of an auxiliary supervision network (ASN) and a classification network. The ASN, built on a Siamese network, aims to solve the problem of a small training set, the main bottleneck of deep learning in medical imaging. It uses paired data as input and updates the network through combined labels. The classification network uses the features extracted by the ASN to perform accurate classification.
RESULTS: Pathological diagnosis is the most accurate method to identify osteosarcoma. However, due to intraclass variation and interclass similarity, it is challenging for pathologists to accurately identify osteosarcoma. Through the experiments on hematoxylin and eosin (H&E)-stained osteosarcoma histology slides, the DS-Net we constructed can achieve an average accuracy of 95.1%. Compared with existing methods, the DS-Net performs best in the test dataset.
CONCLUSIONS: The DS-Net we constructed can not only effectively realize the histological classification of osteosarcoma, but is also applicable to many other medical image classification tasks affected by small datasets.
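A minimal sketch of the pair construction that feeds the auxiliary Siamese branch; the "combined label" here is simply whether the two patches share a class, which is the standard Siamese setup the ASN builds on (the paper's exact pairing strategy is not detailed in the abstract):

import random

def make_pairs(samples, labels, n_pairs, seed=0):
    # samples: list of image patches; labels: per-patch class indices
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_pairs):
        i, j = rng.sample(range(len(samples)), 2)
        pairs.append((samples[i], samples[j], int(labels[i] == labels[j])))
    return pairs

Pairing multiplies the effective number of training examples, which is the stated remedy for the small-dataset bottleneck.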
Effect of segmentation dimension on radiomics analysis for MGMT promoter methylation status in gliomas
Patient Graph Deep Learning to Predict Breast Cancer Molecular Subtype
Furtney, Isaac
Bradley, Ray
Kabuka, Mansur R.
2023Journal Article, cited 0 times
ACRIN-6698
ISPY2
TCGA-BRCA
Breast cancer is a heterogeneous disease consisting of a diverse set of genomic mutations and clinical characteristics. The molecular subtypes of breast cancer are closely tied to prognosis and therapeutic treatment options. We investigate using deep graph learning on a collection of patient factors from multiple diagnostic disciplines to better represent breast cancer patient information and predict molecular subtype. Our method models breast cancer patient data into a multi-relational directed graph with extracted feature embeddings to directly represent patient information and diagnostic test results. We develop a radiographic image feature extraction pipeline to produce vector representation of breast cancer tumors in DCE-MRI and an autoencoder-based genomic variant embedding method to map variant assay results to a low-dimensional latent space. We leverage related-domain transfer learning to train and evaluate a Relational Graph Convolutional Network to predict the probabilities of molecular subtypes for individual breast cancer patient graphs. Our work found that utilizing information from multiple multimodal diagnostic disciplines improved the model's prediction results and produced more distinct learned feature representations for breast cancer patients. This research demonstrates the capabilities of graph neural networks and deep learning feature representation to perform multimodal data fusion and representation in the breast cancer domain.
Textural radiomic features and time-intensity curve data analysis by dynamic contrast-enhanced MRI for early prediction of breast cancer therapy response: preliminary data
Fusco, Roberta
Granata, Vincenza
Maio, Francesca
Sansone, Mario
Petrillo, Antonella
Eur Radiol Exp2020Journal Article, cited 1 times
Website
BREAST
QIN Breast DCE-MRI
QIN Breast
BACKGROUND: To investigate the potential of semiquantitative time-intensity curve parameters compared to textural radiomic features on arterial phase images from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) for early prediction of breast cancer neoadjuvant therapy response. METHODS: A retrospective study of 45 patients subjected to DCE-MRI was performed using public datasets containing examinations performed prior to the start of treatment and after the first treatment cycle ('QIN Breast DCE-MRI' and 'QIN-Breast'). In total, 11 semiquantitative parameters and 50 texture features were extracted. A non-parametric test, receiver operating characteristic analysis with area under the curve (ROC-AUC), Spearman correlation coefficient, and Kruskal-Wallis test with Bonferroni correction were applied. RESULTS: Fifteen patients with pathological complete response (pCR) and 30 patients with non-pCR were analysed. Significant differences in median values between pCR and non-pCR patients were found for entropy, long-run emphasis, and busyness among the textural features, and for maximum signal difference, washout slope, washin slope, and standardised index of shape among the dynamic semiquantitative parameters. The standardised index of shape had the best results, with a ROC-AUC of 0.93 for differentiating pCR from non-pCR patients. CONCLUSIONS: The standardised index of shape could become a clinical tool to differentiate responding from non-responding patients in the early stages of treatment.
We propose a solution for the BraTS22 challenge that builds on top of our previous submission, the Optimized U-Net method. This year we focused on improving the model architecture and training schedule. The proposed method further improves scores on both our internal cross-validation and the challenge validation data. The validation mean Dice scores are: ET 0.8381, TC 0.8802, WT 0.9292; the mean Hausdorff95 distances are: ET 14.460, TC 5.840, WT 3.594.
Optimized U-Net for Brain Tumor Segmentation
Futrega, Michał
Milesi, Alexandre
Marcinkiewicz, Michał
Ribalta, Pablo
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2021
BRAIN
Segmentation
Algorithm Development
U-Net
We propose an optimized U-Net architecture for a brain tumor segmentation task in the BraTS21 challenge. To find the optimal model architecture and the learning schedule, we have run an extensive ablation study to test: deep supervision loss, Focal loss, decoder attention, drop block, and residual connections. Additionally, we have searched for the optimal depth of the U-Net encoder, number of convolutional channels and post-processing strategy. Our method won the validation phase and took third place in the test phase. We have open-sourced the code to reproduce our BraTS21 submission at the NVIDIA Deep Learning Examples GitHub Repository (https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/Segmentation/nnUNet/notebooks/BraTS21.ipynb).
RMTF-Net: Residual Mix Transformer Fusion Net for 2D Brain Tumor Segmentation
Gai, D.
Zhang, J.
Xiao, Y.
Min, W.
Zhong, Y.
Zhong, Y.
Brain Sci2022Journal Article, cited 0 times
Segmentation
Convolutional Neural Network (CNN)
BraTS 2019
BraTS 2020
Radiomics
mix transformer
overlapping patch embedding mechanism
Due to the complexity of medical imaging techniques and the high heterogeneity of glioma surfaces, image segmentation of human gliomas is one of the most challenging tasks in medical image analysis. Current methods based on convolutional neural networks concentrate on feature extraction while ignoring the correlation between local and global features. In this paper, we propose a residual mix transformer fusion net, namely RMTF-Net, for brain tumor segmentation. In the feature encoder, a residual mix transformer encoder including a mix transformer and a residual convolutional neural network (RCNN) is proposed. The mix transformer provides an overlapping patch embedding mechanism to cope with the loss of patch boundary information. Moreover, a parallel fusion strategy based on RCNN is utilized to obtain locally and globally balanced information. In the feature decoder, a global feature integration (GFI) module is applied, which can enrich the context with the global attention feature. Extensive experiments on brain tumor segmentation on the LGG, BraTS2019 and BraTS2020 datasets demonstrated that our proposed RMTF-Net is superior to existing state-of-the-art methods in subjective visual performance and objective evaluation.
CT-Scan Denoising Using a Charbonnier Loss Generative Adversarial Network
Gajera, Binit
Kapil, Siddhant Raj
Ziaei, Dorsa
Mangalagiri, Jayalakshmi
Siegel, Eliot
Chapman, David
IEEE Access2021Journal Article, cited 0 times
Phantom FDA
We propose a Generative Adversarial Network (GAN) optimized for noise reduction in CT scans. The objective of CT scan denoising is to obtain higher quality imagery using a lower radiation exposure to the patient. Recent work in computer vision has shown that the use of the Charbonnier distance as a term in the perceptual loss of a GAN can improve the performance of image reconstruction and video super-resolution. However, a Charbonnier structural loss term has not yet been applied or evaluated for the purpose of CT scan denoising. Our proposed GAN makes use of a Wasserstein adversarial loss, a pretrained VGG19 perceptual loss, as well as a Charbonnier distance structural loss. We evaluate our approach using both an applied Poisson noise distribution to simulate low-dose CT imagery, as well as an anthropomorphic thoracic phantom at different exposure levels. Our evaluation criteria are the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity (SSIM) of the denoised images, and we compare the results of our method against recent state-of-the-art deep denoising GANs. In addition, we report global noise through uniform soft tissue media. Our findings show that the incorporation of the Charbonnier loss with the VGG-19 network improves the performance of the denoising as measured with the PSNR and SSIM, and that the method greatly reduces soft tissue noise to levels comparable to the NDCT scan.
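For readers unfamiliar with the Charbonnier distance mentioned above, a minimal PyTorch sketch follows; the epsilon value and the loss-combination weights are illustrative assumptions, not the authors' settings.

```python
import torch
import torch.nn.functional as F

def charbonnier_loss(pred, target, eps=1e-3):
    # Charbonnier (smooth L1) distance: sqrt(diff^2 + eps^2), averaged over voxels.
    diff = pred - target
    return torch.mean(torch.sqrt(diff * diff + eps * eps))

def generator_loss(denoised, ndct, adv_term, vgg_pred, vgg_target,
                   w_adv=1e-3, w_vgg=1.0, w_char=1.0):
    # Hypothetical weighting of the three generator terms described in the abstract:
    # Wasserstein adversarial term, VGG19 perceptual term, Charbonnier structural term.
    perceptual = F.mse_loss(vgg_pred, vgg_target)
    structural = charbonnier_loss(denoised, ndct)
    return w_adv * adv_term + w_vgg * perceptual + w_char * structural
```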
Alternative Tool for the Diagnosis of Diseases Through Virtual Reality
Virtual reality (VR) presents objects or simulated scenes to reproduce situations in a way similar to the real thing. In medicine, the processing and 3D reconstruction of medical images is an important step in VR. We propose a methodology for processing medical images in order to segment organs, reconstruct structures in 3D and represent those structures in a VR environment, providing the specialist with an alternative tool for the analysis of medical images. We present a method of image segmentation based on area differentiation and other image processing techniques; the 3D reconstruction was performed using the 'isosurface' method. Different studies show the benefits of VR applied to clinical practice, in addition to its uses as an educational tool. A VR environment was created to be visualized with VR glasses; this can be an alternative tool in the identification and visualization of COVID-19-affected lungs through medical image processing and subsequent 3D reconstruction.
Co-Designing a 3D Transformation Accelerator for Versal-Based Image Registration
Galfano, Paolo Salvatore
Sorrentino, Giuseppe
D'Arnese, Eleonora
Conficconi, Davide
2024Conference Paper, cited 0 times
CPTAC-LUAD
A fast and scalable method for quality assurance of deformable image registration on lung CT scans using convolutional neural networks
Galib, Shaikat M
Lee, Hyoung K
Guy, Christopher L
Riblett, Matthew J
Hugo, Geoffrey D
Med Phys2020Journal Article, cited 1 times
Website
4D-Lung
Deep Learning
Image registration
PURPOSE: To develop and evaluate a method to automatically identify and quantify deformable image registration (DIR) errors between lung computed tomography (CT) scans for quality assurance (QA) purposes. METHODS: We propose a deep learning method to flag registration errors. The method involves preparation of a dataset for machine learning model training and testing, design of a three-dimensional (3D) convolutional neural network architecture that classifies registrations into good or poor classes, and evaluation of a metric called the registration error index (REI), which provides a quantitative measure of registration error. RESULTS: Our study shows that, despite the limited number of training images available (10 CT scan pairs for training and 17 CT scan pairs for testing), the method achieves 0.882 AUC-ROC on the test dataset. Furthermore, the combined standard uncertainty of the REI estimated by our model lies within +/- 0.11 (+/- 11% of the true REI value), with a confidence level of approximately 68%. CONCLUSIONS: We have developed and evaluated our method using original clinical registrations without generating any synthetic/simulated data. Moreover, test data were acquired from a different environment than that of the training data, so that the method was validated robustly. The results of this study show that our algorithm performs reasonably well in challenging scenarios.
Interpretable Medical Image Classification Using Prototype Learning and Privileged Information
Gallée, Luisa
Beer, Meinrad
Götz, Michael
2023Book Section, cited 0 times
LIDC-IDRI
Deep Learning
Computer Aided Diagnosis (CADx)
Algorithm Development
LUNG
Interpretability is often an essential requirement in medical imaging. Advanced deep learning methods are required to address this need for explainability and high performance. In this work, we investigate whether additional information available during the training process can be used to create an understandable and powerful model. We propose an innovative solution called Proto-Caps that leverages the benefits of capsule networks, prototype learning and the use of privileged information. Evaluating the proposed solution on the LIDC-IDRI dataset shows that it combines increased interpretability with above state-of-the-art prediction performance. Compared to the explainable baseline model, our method achieves more than 6% higher accuracy in predicting both malignancy (93.0%) and mean characteristic features of lung nodules. Simultaneously, the model provides case-based reasoning with prototype representations that allow visual validation of radiologist-defined attributes.
Evaluating the Explainability of Attributes and Prototypes for a Medical Classification Model
Gallée, Luisa
Lisson, Catharina Silvia
Lisson, Christoph Gerhard
Drees, Daniela
Weig, Felix
Vogele, Daniel
Beer, Meinrad
Götz, Michael
2024Book Section, cited 0 times
LIDC-IDRI
Due to the sensitive nature of medicine, it is particularly important and highly demanded that AI methods are explainable. This need has been recognised and there is great research interest in xAI solutions with medical applications. However, there is a lack of user-centred evaluation regarding the actual impact of the explanations. We evaluate attribute- and prototype-based explanations with the Proto-Caps model. This xAI model reasons about the target classification with human-defined visual features of the target object in the form of scores and attribute-specific prototypes. The model thus provides a multimodal explanation that is intuitively understandable to humans thanks to predefined attributes. A user study involving six radiologists shows that the explanations are subjectively perceived as helpful, as they reflect the radiologists' decision-making process. The results of the model are considered a second opinion that radiologists can discuss using the model's explanations. However, it was shown that the inclusion and increased magnitude of model explanations can objectively increase confidence in the model's predictions even when the model is incorrect. We can conclude that attribute scores and visual prototypes enhance confidence in the model. However, additional development and repeated user studies are needed to tailor the explanation to the respective use case.
In Silico Approach for the Definition of radiomiRNomic Signatures for Breast Cancer Differential Diagnosis
Gallivanone, F.
Cava, C.
Corsi, F.
Bertoli, G.
Castiglioni, I.
Int J Mol Sci2019Journal Article, cited 2 times
Website
TCGA-BRCA
Radiogenomics
Radiomics
Personalized medicine relies on the integration and consideration of specific characteristics of the patient, such as tumor phenotypic and genotypic profiling. BACKGROUND: Radiogenomics aims to integrate phenotypes from tumor imaging data with genomic data to discover the genetic mechanisms underlying tumor development and phenotype. METHODS: We describe a computational approach that correlates phenotypes from magnetic resonance imaging (MRI) of breast cancer (BC) lesions with microRNAs (miRNAs), mRNAs, and regulatory networks, developing a radiomiRNomic map. We validated our approach on the relationships between MRI and miRNA expression data derived from BC patients. We obtained 16 radiomic features quantifying the tumor phenotype. We integrated the features with miRNAs regulating a network of pathways specific for a distinct BC subtype. RESULTS: We found six miRNAs correlated with imaging features in the Luminal A subtype (miR-1537, -205, -335, -337, -452, and -99a), seven miRNAs (miR-142, -155, -190, -190b, -1910, -3617, and -429) in HER2+, and two miRNAs (miR-135b and -365-2) in the Basal subtype. We demonstrate that the combination of correlated miRNAs and imaging features has better classification power for Luminal A versus the other BC subtypes than miRNAs or imaging alone. CONCLUSION: Our computational approach could be used to identify new radiomiRNomic profiles of multi-omics biomarkers for BC differential diagnosis and prognosis.
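For illustration only, a minimal sketch of the imaging-feature-to-miRNA correlation step using Spearman's coefficient; both input arrays are hypothetical stand-ins.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical values: one radiomic feature per lesion and the matched
# expression of one miRNA in the same patients.
radiomic_feature = np.array([1.2, 0.7, 2.3, 1.9, 0.5])
mirna_expression = np.array([5.1, 3.2, 7.8, 6.0, 2.9])

rho, p_value = spearmanr(radiomic_feature, mirna_expression)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```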
An overview on Meta-learning approaches for Few-shot Weakly-supervised Segmentation
Gama, Pedro Henrique Targino
Oliveira, Hugo
dos Santos, Jefersson A.
Cesar, Roberto M.
Computers & Graphics2023Journal Article, cited 0 times
ISBI-MR-Prostate-2013
Semantic segmentation is a difficult task in computer vision that has applications in many scenarios, often as a preprocessing step for another tool. Current solutions are based on Deep Neural Networks, which often require a large amount of data to learn a task. Aiming to alleviate the strenuous data-collecting and annotating labor, new research fields have emerged in recent years. One of them is Meta-Learning, which tries to improve the generalizability of models that learn from a restricted amount of data. In this work, we extend a previous paper by conducting a more extensive overview of the still under-explored problem of Few-Shot Weakly-supervised Semantic Segmentation. We refined the previous taxonomy and included additional methods in the review, including Few-Shot Segmentation methods that could be adapted to weak supervision. The goal is to provide a simple organization of the literature, highlight aspects observed at the current moment, and serve as a starting point to foster research on this problem, with applications in areas like medical imaging, remote sensing, video segmentation, and others.
Video frame interpolation neural network for 3D tomography across different length scales
Gambini, L.
Gabbett, C.
Doolan, L.
Jones, L.
Coleman, J. N.
Gilligan, P.
Sanvito, S.
Nat Commun2024Journal Article, cited 0 times
Website
Pseudo-PHI-DICOM-Data
Image Enhancement
Graphene
Materials Science
Medical Research
Three-dimensional (3D) tomography is a powerful investigative tool for many scientific domains, ranging from materials science to engineering to medicine. Many factors may limit the 3D resolution, which is often spatially anisotropic, compromising the precision of the information retrievable. A neural network designed for video-frame interpolation is employed to enhance tomographic images, achieving cubic-voxel resolution. The method is applied to distinct domains: the investigation of the morphology of printed graphene nanosheet networks obtained via focused ion beam-scanning electron microscopy (FIB-SEM), magnetic resonance imaging of the human brain, and X-ray computed tomography scans of the abdomen. The accuracy of the 3D tomographic maps can be quantified through computer-vision metrics, but most importantly by the precision of the physical quantities retrievable from the reconstructions; in the case of FIB-SEM, the porosity, tortuosity, and effective diffusivity. This work showcases a versatile image-augmentation strategy for optimizing 3D tomography acquisition conditions while preserving the information content.
Extraction of pulmonary vessels and tumour from plain computed tomography sequence
RevPHiSeg: A Memory-Efficient Neural Network for Uncertainty Quantification in Medical Image Segmentation
Gantenbein, Marc
Erdil, Ertunc
Konukoglu, Ender
2020Book Section, cited 0 times
LIDC-IDRI
Quantifying segmentation uncertainty has become an important issue in medical image analysis due to the inherent ambiguity of anatomical structures and their pathologies. Recently, neural network-based uncertainty quantification methods have been successfully applied to various problems. One of the main limitations of the existing techniques is the high memory requirement during training, which limits their application to processing smaller field-of-views (FOVs) and/or using shallower architectures. In this paper, we investigate the effect of using reversible blocks for building memory-efficient neural network architectures for the quantification of segmentation uncertainty. The reversible architecture achieves memory savings by exactly computing the activations from the outputs of the subsequent layers during backpropagation instead of storing the activations for each layer. We incorporate the reversible blocks into a recently proposed architecture called PHiSeg that was developed for uncertainty quantification in medical image segmentation. The reversible architecture, RevPHiSeg, allows training neural networks for quantifying segmentation uncertainty on GPUs with limited memory and processing larger FOVs. We perform experiments on the LIDC-IDRI dataset and an in-house prostate dataset, and present comparisons with PHiSeg. The results demonstrate that RevPHiSeg consumes ∼30% less memory compared to PHiSeg while achieving very similar segmentation accuracy.
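A minimal PyTorch sketch of the additive-coupling reversible block idea the abstract refers to (RevNet-style); the module names are illustrative, and the actual memory saving additionally requires a custom backward pass that recomputes activations via inverse() instead of storing them.

```python
import torch
import torch.nn as nn

class ReversibleBlock(nn.Module):
    """Additive coupling: the outputs determine the inputs exactly, so
    activations can be recomputed during backprop rather than stored."""
    def __init__(self, f: nn.Module, g: nn.Module):
        super().__init__()
        self.f, self.g = f, g  # each maps C/2 channels to C/2 channels

    def forward(self, x):
        x1, x2 = torch.chunk(x, 2, dim=1)  # split along the channel axis
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return torch.cat([y1, y2], dim=1)

    def inverse(self, y):
        y1, y2 = torch.chunk(y, 2, dim=1)
        x2 = y2 - self.g(y1)  # exact inversion, no stored activations needed
        x1 = y1 - self.f(x2)
        return torch.cat([x1, x2], dim=1)
```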
Re-identification from histopathology images
Ganz, Jonathan
Ammeling, Jonas
Jabari, Samir
Breininger, Katharina
Aubreville, Marc
Medical Image Analysis2025Journal Article, cited 1 times
Website
CPTAC-LUAD
CPTAC-LSCC
Transformer based multiple instance learning for WSI breast cancer classification
Gao, Chengyang
Sun, Qiule
Zhu, Wen
Zhang, Lizhi
Zhang, Jianxin
Liu, Bin
Zhang, Junxing
Biomedical Signal Processing and Control2024Journal Article, cited 0 times
SLN-Breast
Pathomics
Whole Slide Imaging (WSI)
Computer Aided Diagnosis (CADx)
Classification
Algorithm Development
The computer-aided diagnosis method based on deep learning provides pathologists with preliminary diagnostic opinions and improves their work efficiency. Inspired by the widespread use of transformers in computer vision, we explore their effectiveness and potential in classifying breast cancer tissues in WSIs, and propose a hybrid multiple instance learning method called HTransMIL. Specifically, its first stage selects informative instances based on a hierarchical Swin Transformer, which can capture global and local information of pathological images and is beneficial for obtaining accurate discriminative instances. The second stage strengthens the correlation between the selected instances via another transformer encoder and produces powerful bag-level features for classification by aggregating the interacting instances. Besides, visualization analysis is utilized to better understand the weakly supervised classification model for WSIs. The extensive evaluation results on one private and two public WSI breast cancer datasets demonstrate the effectiveness and competitiveness of HTransMIL. The code and models are publicly available at https://github.com/Chengyang852/Transformer-for-WSI-classification.
Cross-dimensional Medical Self-supervised Representation Learning Based on a Pseudo-3D Transformation
Gao, Fei
Wang, Siwen
Zhang, Fandong
Zhou, Hong-Yu
Wang, Yizhou
Wang, Churan
Yu, Gang
Yu, Yizhou
2024Book Section, cited 0 times
CT Images in COVID-19
Medical image analysis suffers from a shortage of data, whether annotated or not. This becomes even more pronounced when it comes to 3D medical images. Self-Supervised Learning (SSL) can partially ease this situation by using unlabeled data. However, most existing SSL methods can only make use of data in a single dimensionality (e.g. 2D or 3D), and are incapable of enlarging the training dataset by using data with differing dimensionalities jointly. In this paper, we propose a new cross-dimensional SSL framework based on a pseudo-3D transformation (CDSSL-P3D), that can leverage both 2D and 3D data for joint pre-training. Specifically, we introduce an image transformation based on the im2col algorithm, which converts 2D images into a format consistent with 3D data. This transformation enables seamless integration of 2D and 3D data, and facilitates cross-dimensional self-supervised learning for 3D medical image analysis. We run extensive experiments on 13 downstream tasks, including 2D and 3D classification and segmentation. The results indicate that our CDSSL-P3D achieves superior performance, outperforming other advanced SSL methods.
Contour-aware network with class-wise convolutions for 3D abdominal multi-organ segmentation
Gao, H.
Lyu, M.
Zhao, X.
Yang, F.
Bai, X.
Med Image Anal2023Journal Article, cited 0 times
Pancreas-CT
Humans
*Tomography, X-Ray Computed/methods
*Image Processing, Computer-Assisted/methods
CT image
Deep learning
Image segmentation
Three-dimensional organ segmentation
PyTorch
Accurate delineation of multiple organs is a critical process for various medical procedures, which could be operator-dependent and time-consuming. Existing organ segmentation methods, which were mainly inspired by natural image analysis techniques, might not fully exploit the traits of the multi-organ segmentation task and could not accurately segment organs with various shapes and sizes simultaneously. In this work, the characteristics of multi-organ segmentation are considered: the global count, position and scale of organs are generally predictable, while their local shape and appearance are volatile. Thus, we supplement the region segmentation backbone with a contour localization task to increase the certainty along delicate boundaries. Meanwhile, each organ has exclusive anatomical traits, which motivates us to deal with class variability using class-wise convolutions to highlight organ-specific features and suppress irrelevant responses at different fields of view. To validate our method with adequate amounts of patients and organs, we constructed a multi-center dataset, which contains 110 3D CT scans with 24,528 axial slices, and provided voxel-level manual segmentations of 14 abdominal organs, adding up to 1,532 3D structures in total. Extensive ablation and visualization studies on it validate the effectiveness of the proposed method. Quantitative analysis shows that we achieve state-of-the-art performance for most abdominal organs, and obtain a 3.63 mm 95% Hausdorff Distance and an 83.32% Dice Similarity Coefficient on average.
A multi-view feature decomposition deep learning method for lung cancer histology classification
Gao, Heng
Wang, Minghui
Li, Haichun
Liu, Zhaodi
Liang, Wei
Li, Ao
2023Conference Proceedings, cited 0 times
NSCLC Radiogenomics
NSCLC-Radiomics
Accurate classification of squamous cell carcinoma (SCC) and adenocarcinoma (ADC) using computed tomography (CT) images is of great significance for guiding treatment of patients with non-small cell lung cancer (NSCLC). Although existing deep learning methods have made promising progress in this area, they do not fully exploit tumor information to learn discriminative representations. In this study, we propose a multi-view feature decomposition deep learning method for lung cancer histology classification. Different from existing multi-view methods that directly fuse features extracted from different views, we propose a feature decomposition module (FDM) to decompose the features of the axial, coronal and sagittal views into common and specific features through an attention mechanism. To constrain this feature decomposition, a feature similarity loss is introduced to make the common features obtained from different views similar to each other. Moreover, to assure the effectiveness of the feature decomposition, we design a cross-reconstruction loss which enforces each view to be reconstructed from its own specific features and the other views' common features. After this feature decomposition, comprehensive representations of tumors can be obtained by efficiently integrating the common features to improve the classification performance. Experimental results demonstrate that our method outperforms other state-of-the-art methods.
Identification of clear cell renal cell carcinoma subtypes by integrating radiomics and transcriptomics
Objective: This study aimed to delineate the clear cell renal cell carcinoma (ccRCC) intrinsic subtypes through unsupervised clustering of radiomics and transcriptomics data and to evaluate their associations with clinicopathological features, prognosis, and molecular characteristics. Methods: Using a retrospective dual-center approach, we gathered transcriptomic and clinical data from ccRCC patients registered in The Cancer Genome Atlas and contrast-enhanced computed tomography images from The Cancer Imaging Archive and local databases. Following the segmentation of images, radiomics feature extraction, and feature preprocessing, we performed unsupervised clustering based on the “CancerSubtypes” package to identify distinct radiotranscriptomic subtypes, which were then correlated with clinical-pathological, prognostic, immune, and molecular characteristics. Results: Clustering identified three subtypes, C1, C2, and C3, each of which displayed unique clinicopathological, prognostic, immune, and molecular distinctions. Notably, subtypes C1 and C3 were associated with poorer survival outcomes than subtype C2. Pathway analysis highlighted immune pathway activation in C1 and metabolic pathway prominence in C2. Gene mutation analysis identified VHL and PBRM1 as the most commonly mutated genes, with more mutated genes observed in the C3 subtype. Despite similar tumor mutation burdens, microsatellite instability, and RNA interference across subtypes, C1 and C3 demonstrated greater tumor immune dysfunction and rejection. In the validation cohort, the various subtypes showed comparable results in terms of clinicopathological features and prognosis to those observed in the training cohort, thus confirming the efficacy of our algorithm. Conclusion: Unsupervised clustering based on radiotranscriptomics can identify the intrinsic subtypes of ccRCC, and radiotranscriptomic subtypes can characterize the prognosis and molecular features of tumors, enabling noninvasive tumor risk stratification.
Improvement of Image Classification by Multiple Optical Scattering
Gao, Xinyu
Li, Yi
Qiu, Yanqing
Mao, Bangning
Chen, Miaogen
Meng, Yanlong
Zhao, Chunliu
Kang, Juan
Guo, Yong
Shen, Changyu
2021Journal Article, cited 0 times
C-NMC 2019
Multiple optical scattering occurs when light propagates in a non-uniform medium. During multiple scattering, images are distorted and the spatial information they carry becomes scrambled. However, the image information is not lost but is present in the form of speckle patterns (SPs). In this study, we built an optical random scattering system based on a liquid crystal display (LCD) and an RGB laser source. We found that image classification can be improved with the help of random scattering, which acts as a feedforward neural network that extracts features from the image. Together with ridge classification deployed on a computer, we achieved excellent classification accuracy, higher than 94%, for a variety of data sets covering medical, agricultural, environmental protection and other fields. In addition, the proposed optical scattering system has the advantages of high speed, low power consumption, and miniaturization, which make it suitable for deployment in edge computing applications.
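The "fixed random feature map plus linear read-out" scheme described above can be sketched as follows; the flattened speckle patterns X and labels y are hypothetical inputs, and only the ridge read-out is trained.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifier

def train_readout(X: np.ndarray, y: np.ndarray, alpha: float = 1.0) -> RidgeClassifier:
    # X: (n_samples, n_pixels) speckle patterns flattened to vectors, produced
    # by the optical scattering system; y: class labels. Only this linear
    # ridge classifier is trained; the scattering acts as a fixed feature map.
    clf = RidgeClassifier(alpha=alpha)
    clf.fit(X, y)
    return clf
```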
A self-interpretable deep learning network for early prediction of pathologic complete response to neoadjuvant chemotherapy based on breast pre-treatment dynamic contrast-enhanced magnetic resonance imaging
Gao, Yu
Ding, Da-Wei
Zeng, Hui
2024Journal Article, cited 0 times
Duke-Breast-Cancer-MRI
ISPY1
ISPY1-Tumor-SEG-Radiomics
Accurate prediction of pathologic complete response to neoadjuvant chemotherapy non-invasively before treatment via dynamic contrast-enhanced magnetic resonance imaging is vital for developing a personalized therapy strategy. However, the application of deep learning in this domain is characterized by its black-box nature, largely relying on post-hoc analysis to interpret final decision-making. This reliance results in a lack of self-interpretability in the operational mechanisms of feature extraction, feature fusion, and decision-making. Moreover, these models have demonstrated unsatisfactory prediction performance due to insufficient feature modeling. To address these issues, we propose a self-interpretable deep learning network that can provide the intrinsic interpretability of feature extraction, multi-scale feature fusion, and final prediction. First, the interpretable perception module is designed to extract features both effectively and interpretably. Furthermore, the interpretable adaptive multi-scale feature fusion module is proposed to fuse multi-scale features. Finally, an end-to-end self-interpretable deep learning network is presented to predict pathologic complete response with self-interpretability. Validated on a multi-center pre-treatment dynamic contrast-enhanced magnetic resonance imaging dataset, our self-interpretable deep learning network outperforms state-of-the-art methods in both prediction performance and self-interpretability, improving the area under the receiver operating characteristic curve by at least 4.81% while providing both qualitative and quantitative self-interpretability. Our study demonstrates that our proposed self-interpretable deep learning network can extract key information from pre-treatment breast dynamic contrast-enhanced magnetic resonance imaging while enhancing both the prediction performance and the transparency of the model, thereby improving its trustworthiness in clinical settings.
Improving the Subtype Classification of Non-small Cell Lung Cancer by Elastic Deformation Based Machine Learning
Gao, Yang
Song, Fan
Zhang, Peng
Liu, Jian
Cui, Jingjing
Ma, Yingying
Zhang, Guanglei
Luo, Jianwen
J Digit Imaging2021Journal Article, cited 0 times
Website
NSCLC-Radiomics
Machine learning
Non-small cell lung cancer (NSCLC)
Radiomics
Subtype classification
Non-invasive image-based machine learning models have been used to classify subtypes of non-small cell lung cancer (NSCLC). However, classification performance is limited by dataset size, because insufficient data cannot fully represent the characteristics of the tumor lesions. In this work, a data augmentation method named elastic deformation is proposed to artificially enlarge an image dataset of 3158 images from NSCLC patients with two subtypes (squamous cell carcinoma and large cell carcinoma). Elastic deformation effectively expanded the dataset by generating new images in which tumor lesions undergo an elastic shape transformation. To evaluate the proposed method, two classification models were trained on the original and augmented datasets, respectively. Using the augmented dataset for training significantly increased classification metrics, including the area under the curve (AUC) of the receiver operating characteristic (ROC) curve, accuracy, sensitivity, specificity, and F1-score, thus improving the NSCLC subtype classification performance. These results suggest that elastic deformation could be an effective data augmentation method for NSCLC tumor lesion images, and that building classification models with the help of elastic deformation has the potential to serve clinical lung cancer diagnosis and treatment design.
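A minimal sketch of elastic deformation as commonly implemented (Simard et al., 2003: smooth a random displacement field with a Gaussian, scale it, and resample); the alpha and sigma values are illustrative, not the study's parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(image, alpha=30.0, sigma=4.0, rng=None):
    # Random per-pixel displacements, smoothed so the deformation is elastic
    # rather than noisy, then scaled by alpha to control its magnitude.
    rng = rng or np.random.default_rng()
    dx = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    yy, xx = np.meshgrid(np.arange(image.shape[0]), np.arange(image.shape[1]),
                         indexing="ij")
    coords = np.array([yy + dy, xx + dx])
    return map_coordinates(image, coords, order=1, mode="reflect")
```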
Seeking multi-view commonality and peculiarity: A novel decoupling method for lung cancer subtype classification
Gao, Ziyu
Luo, Yin
Wang, Minghui
Cao, Chi
Jiang, Houzhou
Liang, Wei
Li, Ao
Expert Systems with Applications2025Journal Article, cited 0 times
Website
NSCLC-Radiomics
NSCLC Radiogenomics
Multi-view learning
Decoupling
Histologic subtype classification
Non-Small Cell Lung Cancer (NSCLC)
In the management of non-small cell lung cancer (NSCLC), accurate and non-invasive classification of histological subtypes from computed tomography (CT) images is pivotal for devising appropriate treatment strategies. Despite encouraging progress in existing multi-view deep learning approaches, severe problems persist in effectively managing this crucial but challenging task, particularly concerning inter-view discrepancy and intra-view interference. To address these issues, this study presents a novel multi-view decoupling (MVD) method dedicated to seeking commonality and peculiarity across views using a divide-and-conquer strategy. Specifically, MVD employs an attention-based decoupling mechanism that simultaneously projects all views onto distinct view-invariant and view-specific subspaces, thereby generating both common and peculiar representation for each view. Moreover, a cross-view transformation loss is designed to successfully mitigate inter-view discrepancy in the view-invariant subspace, leveraging a unique view-to-view transformation perspective. Meanwhile, a cross-subtype discrimination loss is introduced to ensure that peculiar representations in view-specific subspaces exclusively capture subtype-irrelevant information thereby effectively eradicating intra-view interference via adversarial learning. MVD achieves an area under the receiver operating characteristic curve (AUC) of 0.838 and 0.805 on public and in-house NSCLC datasets respectively, consistently outperforming state-of-the-art approaches by a significant margin. In addition, extensive ablation experiments confirm that MVD effectively addresses the challenges of inter-view discrepancy and intra-view interference, establishing it as a valuable tool for enhanced accuracy and reliability in NSCLC histological subtype classification.
Imaging Biomarker Development for Lower Back Pain Using Machine Learning: How Image Analysis Can Help Back Pain
Gaonkar, B.
Cook, K.
Yoo, B.
Salehi, B.
Macyszyn, L.
Methods Mol Biol2022Journal Article, cited 0 times
Website
LIDC-IDRI
Deep Learning
Degenerative disease
Image segmentation
Machine learning
Spine MRI
State-of-the-art diagnosis of radiculopathy relies on "highly subjective" radiologist interpretation of magnetic resonance imaging of the lower back. Currently, the treatment of lumbar radiculopathy and associated lower back pain lacks coherence due to an absence of reliable, objective diagnostic biomarkers. Using emerging machine learning techniques, the subjectivity of interpretation may be replaced by the objectivity of automated analysis. However, training computer vision methods requires a curated database of imaging data containing anatomical delineations vetted by a team of human experts. In this chapter, we outline our efforts to develop such a database of curated imaging data alongside the required delineations. We detail the processes involved in data acquisition and subsequent annotation. Then we explain how the resulting database can be utilized to develop a machine learning-based objective imaging biomarker. Finally, we present an explanation of how we validate our machine learning-based anatomy delineation algorithms. Ultimately, we hope to allow validated machine learning models to be used to generate objective biomarkers from imaging data-for clinical use to diagnose lumbar radiculopathy and guide associated treatment plans.
Patient-specific implants provide important advantages for patients and medical professionals. The state of the art in cranioplasty implant production is based on bone structure reconstruction and the use of the patient's own anatomical information for filling the bone defect. The present work proposes a two-dimensional investigation of which dataset results in the closest polynomial regression to a gold standard structure, combining points of the bone defect region and points of the healthy contralateral skull hemisphere. The similarity measures used to compare datasets are the root mean square error (RMSE) and the Hausdorff distance. The objective is to use the most successful dataset in the future development and testing of a semi-automatic methodology for cranial prosthesis modeling. The present methodology was implemented in Python scripts and uses five series of skull computed tomography images to generate phantoms with small, medium and large bone defects. Results from statistical tests and observations made from the mean RMSE and mean Hausdorff distance allow us to determine that the dataset formed by the phantom contour points twice and the mirrored contour points is the one that significantly increases the similarity measures.
Method for Improved Image Reconstruction in Computed Tomography and Positron Emission Tomography, Based on Compressive Sensing with Prefiltering in the Frequency Domain
Garcia, Y.
Franco, C.
Miosso, C. J.
2022Book Section, cited 0 times
TCGA-LUAD
Computed tomography (CT) and positron emission tomography (PET) allow many types of diagnoses and medical analyses to be performed, as well as patient monitoring in different treatment scenarios. Therefore, they are among the most important medical imaging modalities, both in clinical applications and in scientific research. However, both methods involve radiation exposure, associated with the X-rays used in the CT case, and with the chemical contrast that inserts a radioactive isotope into the patient's body in the PET case. It is possible to reduce the amount of radiation needed to attain a specified quality in these imaging techniques by using compressive sensing (CS), which reduces the number of measurements required for signal and image reconstruction, compared to standard approaches such as filtered backprojection. In this paper, we propose and evaluate a new method for the reconstruction of CT and PET images based on CS with prefiltering in the frequency domain. We start by estimating frequency-domain measurements based on the acquired sinograms. Next, we perform a prefiltering in the frequency domain to favor the sparsity required by CS and improve the reconstruction of filtered versions of the image. Based on the reconstructed filtered images, a final composition stage leads to the complete image using the spectral information from the individual filtered versions. We compared the proposed method to the standard filtered backprojection technique, commonly used in CT and PET. The results suggest that the proposed method can lead to images with significantly higher signal-to-error ratios for a specified number of measurements, both for CT (p = 8.8324e-05) and PET (p = 4.7377e-09).
Simultaneous emission and attenuation reconstruction in time-of-flight PET using a reference object
Garcia-Perez, P.
Espana, S.
EJNMMI Phys2020Journal Article, cited 0 times
Website
Head-Neck Cetuximab
Registration
Positron emission tomography (PET)
Phantom
BACKGROUND: Simultaneous reconstruction of emission and attenuation images in time-of-flight (TOF) positron emission tomography (PET) does not provide a unique solution. In this study, we propose to solve this limitation by including additional information given by a reference object with known attenuation placed outside the patient. Different configurations of the reference object were studied, including geometry, material composition, and activity, and an optimal configuration was defined. In addition, this configuration was tested for different timing resolutions and noise levels. RESULTS: The proposed strategy was tested in 2D simulations obtained by forward projection of available PET/CT data, and noise was included using Monte Carlo techniques. The obtained results suggest that the optimal configuration corresponds to a water cylinder inserted in the patient table and filled with activity. In that case, mean differences between reconstructed and true images were below 10%. However, better results can be obtained by increasing the activity of the reference object. CONCLUSION: This study shows promising results that might make it possible to obtain an accurate attenuation map from pure TOF-PET data without prior knowledge obtained from CT, MRI, or transmission scans.
An accessible deep learning tool for voxel-wise classification of brain malignancies from perfusion MRI
Garcia-Ruiz, A.
Pons-Escoda, A.
Grussu, F.
Naval-Baudin, P.
Monreal-Aguero, C.
Hermann, G.
Karunamuni, R.
Ligero, M.
Lopez-Rueda, A.
Oleaga, L.
Berbis, M. A.
Cabrera-Zubizarreta, A.
Martin-Noguerol, T.
Luna, A.
Seibert, T. M.
Majos, C.
Perez-Lopez, R.
Cell Rep Med2024Journal Article, cited 0 times
Website
IvyGAP
IvyGAP-Radiomics
Noninvasive differential diagnosis of brain tumors is currently based on the assessment of magnetic resonance imaging (MRI) coupled with dynamic susceptibility contrast (DSC). However, a definitive diagnosis often requires neurosurgical interventions that compromise patients' quality of life. We apply deep learning on DSC images from histology-confirmed patients with glioblastoma, metastasis, or lymphoma. The convolutional neural network trained on ∼50,000 voxels from 40 patients provides intratumor probability maps that yield clinical-grade diagnosis. Performance is tested in 400 additional cases and an external validation cohort of 128 patients. The tool reaches a three-way accuracy of 0.78, superior to the conventional MRI metrics cerebral blood volume (0.55) and percentage of signal recovery (0.59), showing high value as a support diagnostic tool. Our open-access software, Diagnosis In Susceptibility Contrast Enhancing Regions for Neuro-oncology (DISCERN), demonstrates its potential in aiding medical decisions for brain tumor diagnosis using standard-of-care MRI.
Data-driven rapid 4D cone-beam CT reconstruction for new generation linacs
Gardner, Mark
Dillon, Owen
Byrne, Hilary
Keall, Paul
O’Brien, Ricky
Physics in Medicine & Biology2024Journal Article, cited 1 times
Website
4D-Lung
CT
Glioma Segmentation and a Simple Accurate Model for Overall Survival Prediction
Gates, Evan
Pauloski, J. Gregory
Schellingerhout, Dawid
Fuentes, David
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Brain tumor segmentation is a challenging task necessary for quantitative tumor analysis and diagnosis. We apply a multi-scale convolutional neural network based on DeepMedic to segment glioma subvolumes provided in the 2018 MICCAI Brain Tumor Segmentation Challenge. We go on to extract intensity and shape features from the images and cross-validate machine learning models to predict overall survival. Using only the mean FLAIR intensity, nonenhancing tumor volume, and patient age, we are able to predict patient overall survival with reasonable accuracy.
An efficient magnetic resonance image data quality screening dashboard
Gates, E. D. H.
Celaya, A.
Suki, D.
Schellingerhout, D.
Fuentes, D.
J Appl Clin Med Phys2022Journal Article, cited 0 times
Website
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Magnetic Resonance Imaging (MRI)
Quality control
NIfTI
ITK
BRAIN
PURPOSE: Complex data processing and curation for artificial intelligence applications rely on high-quality data sets for training and analysis. Manually reviewing images and their associated annotations is a very laborious task and existing quality control tools for data review are generally limited to raw images only. The purpose of this work was to develop an imaging informatics dashboard for the easy and fast review of processed magnetic resonance (MR) imaging data sets; we demonstrated its ability in a large-scale data review. METHODS: We developed a custom R Shiny dashboard that displays key static snapshots of each imaging study and its annotations. A graphical interface allows the structured entry of review data and download of tabulated review results. We evaluated the dashboard using two large data sets: 1380 processed MR imaging studies from our institution and 285 studies from the 2018 MICCAI Brain Tumor Segmentation Challenge (BraTS). RESULTS: Studies were reviewed at an average rate of 100/h using the dashboard, 10 times faster than using existing data viewers. For data from our institution, 1181 of the 1380 (86%) studies were of acceptable quality. The most commonly identified failure modes were tumor segmentation (9.6% of cases) and image registration (4.6% of cases). Tumor segmentation without visible errors on the dashboard had much better agreement with reference tumor volume measurements (root-mean-square error 12.2 cm³) than did segmentations with minor errors (20.5 cm³) or failed segmentations (27.4 cm³). In the BraTS data, 242 of 285 (85%) studies were acceptable quality after processing. Among the 43 cases that failed review, 14 had unacceptable raw image quality. CONCLUSION: Our dashboard provides a fast, effective tool for reviewing complex processed MR imaging data sets. It is freely available for download at https://github.com/EGates1/MRDQED.
A whole-body FDG-PET/CT Dataset with manually annotated Tumor Lesions
Gatidis, Sergios
Hepp, Tobias
Früh, Marcel
La Fougère, Christian
Nikolaou, Konstantin
Pfannenberg, Christina
Schölkopf, Bernhard
Küstner, Thomas
Cyran, Clemens
Rubin, Daniel
Scientific Data2022Journal Article, cited 0 times
FDG-PET-CT-Lesions
Head-Neck-PET-CT
Lung-PET-CT-Dx
We describe a publicly available dataset of annotated Positron Emission Tomography/Computed Tomography (PET/CT) studies. 1014 whole body Fluorodeoxyglucose (FDG)-PET/CT datasets (501 studies of patients with malignant lymphoma, melanoma and non-small cell lung cancer (NSCLC) and 513 studies without PET-positive malignant lesions (negative controls)) acquired between 2014 and 2018 were included. All examinations were acquired on a single, state-of-the-art PET/CT scanner. The imaging protocol consisted of a whole-body FDG-PET acquisition and a corresponding diagnostic CT scan. All FDG-avid lesions identified as malignant based on the clinical PET/CT report were manually segmented on PET images in a slice-per-slice (3D) manner. We provide the anonymized original DICOM files of all studies as well as the corresponding DICOM segmentation masks. In addition, we provide scripts for image processing and conversion to different file formats (NIfTI, mha, hdf5). Primary diagnosis, age and sex are provided as non-imaging information. We demonstrate how this dataset can be used for deep learning-based automated analysis of PET/CT data and provide the trained deep learning model.
PET-Disentangler: PET Lesion Segmentation via Disentangled Healthy and Disease Feature Representations
Positron emission tomography (PET) imaging is an invaluable tool in clinical settings as it captures the functional activity of both healthy anatomy and cancerous lesions. Developing automatic lesion detection methods for PET images is crucial since manual lesion segmentation is laborious and prone to inter- and intra-observer variability. We propose a 3D disentanglement method that learns robust disease features and predicts lesion segmentations by disentangling PET images into disease and normal healthy anatomical features. The proposed method, PET-Disentangler, uses a 3D UNet-like encoder-decoder architecture for feature disentanglement followed by simultaneous segmentation and image reconstruction. A critic network encourages the healthy latent features, which are disentangled from disease samples, to match the distribution of healthy samples and thus not contain any lesion-related features. We train and evaluate PET-Disentangler on 3D PET images from The Cancer Imaging Archive (TCIA) whole-body FDG-PET/CT dataset consisting of 1014 PET/CT scans, leveraging TotalSegmentator to obtain two anatomically aligned fields of view of the whole-body scans, referred to as the upper and lower torso regions. Compared to non-disentanglement segmentation methods, our quantitative results on the upper torso region show that PET-Disentangler has similar performance while having the added advantage of visualizing, via the pseudo-healthy image, how a healthy (lesion-free) image might look. Our quantitative and qualitative results on the lower torso show enhanced performance from our method, as PET-Disentangler reduces the chances of incorrectly declaring high tracer uptake regions as cancerous lesions, since such an uptake pattern would be assigned to the disentangled normal component.
An Improved Mammogram Classification Approach Using Back Propagation Neural Network
Mammograms are generally contaminated by quantum noise, degrading their visual quality and thereby the performance of the classifier in Computer-Aided Diagnosis (CAD). Hence, enhancement of mammograms is necessary to improve the visual quality and detectability of the anomalies present in the breasts. In this paper, a sigmoid-based non-linear function has been applied for contrast enhancement of mammograms. The enhanced mammograms are used to describe the texture of the detected anomaly using Gray Level Co-occurrence Matrix (GLCM) features. Later, a Back Propagation Artificial Neural Network (BP-ANN) is used as a classification tool for categorizing the mammogram as abnormal or normal. The proposed classification approach is reported to achieve considerably better accuracy in comparison to other existing approaches.
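A minimal sketch of the GLCM feature extraction and backpropagation-trained classifier pipeline described above, using scikit-image and scikit-learn; the ROI array, feature choices and network size are illustrative assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier  # trained with backpropagation

def glcm_features(roi: np.ndarray) -> np.ndarray:
    # roi: 2D uint8 region of interest from the enhanced mammogram.
    glcm = graycomatrix(roi, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation", "dissimilarity"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)
# clf.fit(features_train, labels_train)  # labels: normal vs. abnormal
```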
A resource for the assessment of lung nodule size estimation methods: database of thoracic CT scans of an anthropomorphic phantom
Gavrielides, Marios A
Kinnard, Lisa M
Myers, Kyle J
Peregoy, Jennifer
Pritchard, William F
Zeng, Rongping
Esparza, Juan
Karanian, John
Petrick, Nicholas
Optics express2010Journal Article, cited 50 times
Website
FDA-Phantom
LUNG
A number of interrelated factors can affect the precision and accuracy of lung nodule size estimation. To quantify the effect of these factors, we have been conducting phantom CT studies using an anthropomorphic thoracic phantom containing a vasculature insert to which synthetic nodules were inserted or attached. Ten repeat scans were acquired on different multi-detector scanners, using several sets of acquisition and reconstruction protocols and various nodule characteristics (size, shape, density, location). This study design enables both bias and variance analysis for the nodule size estimation task. The resulting database is in the process of becoming publicly available as a resource to facilitate the assessment of lung nodule size estimation methodologies and to enable comparisons between different methods regarding measurement error. This resource complements public databases of clinical data and will contribute towards the development of procedures that will maximize the utility of CT imaging for lung cancer screening and tumor therapy evaluation.
Benefit of overlapping reconstruction for improving the quantitative assessment of CT lung nodule volume
Gavrielides, Marios A
Zeng, Rongping
Myers, Kyle J
Sahiner, Berkman
Petrick, Nicholas
Academic Radiology2013Journal Article, cited 23 times
Website
Phantom FDA
RATIONALE AND OBJECTIVES: The aim of this study was to quantify the effect of overlapping reconstruction on the precision and accuracy of lung nodule volume estimates in a phantom computed tomographic (CT) study. MATERIALS AND METHODS: An anthropomorphic phantom was used with a vasculature insert on which synthetic lung nodules were attached. Repeated scans of the phantom were acquired using a 64-slice CT scanner. Overlapping and contiguous reconstructions were performed for a range of CT imaging parameters (exposure, slice thickness, pitch, reconstruction kernel) and a range of nodule characteristics (size, density). Nodule volume was estimated with a previously developed matched-filter algorithm. RESULTS: Absolute percentage bias across all nodule sizes (n = 2880) was significantly lower when overlapping reconstruction was used, with an absolute percentage bias of 6.6% (95% confidence interval [CI], 6.4-6.9), compared to 13.2% (95% CI, 12.7-13.8) for contiguous reconstruction. Overlapping reconstruction also showed a precision benefit, with a lower standard percentage error of 7.1% (95% CI, 6.9-7.2) compared with 15.3% (95% CI, 14.9-15.7) for contiguous reconstructions across all nodules. Both effects were more pronounced for the smaller, subcentimeter nodules. CONCLUSIONS: These results support the use of overlapping reconstruction to improve the quantitative assessment of nodule size with CT imaging.
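As a rough illustration of the accuracy and precision metrics reported above, a short numpy sketch follows; the exact estimator definitions used in the study may differ.

```python
import numpy as np

def percent_bias_and_error(estimates: np.ndarray, true_volume: float):
    # Percentage error of each repeated volume estimate relative to truth.
    pct_err = 100.0 * (estimates - true_volume) / true_volume
    # Absolute percentage bias (accuracy) and standard percentage error (precision).
    return abs(pct_err.mean()), pct_err.std(ddof=1)

# e.g. ten repeat measurements of a 500 mm^3 nodule:
# bias, spread = percent_bias_and_error(np.array([...]), 500.0)
```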
Automatic Segmentation of Colon in 3D CT Images and Removal of Opacified Fluid Using Cascade Feed Forward Neural Network
Gayathri Devi, K
Radhakrishnan, R
Computational and Mathematical Methods in Medicine2015Journal Article, cited 5 times
Website
CT Colonography
Segmentation of colon and removal of opacified fluid for virtual colonoscopy
Gayathri, Devi K
Radhakrishnan, R
Rajamani, Kumar
Pattern Analysis and Applications2017Journal Article, cited 0 times
Website
CT Colonography
Computed Tomography (CT)
Segmentation
Image denoising
Colorectal cancer (CRC) is the third most common type of cancer. The use of techniques such as flexible sigmoidoscopy and capsule endoscopy for the screening of colorectal cancer causes physical pain and hardship to the patients. Hence, to overcome the above disadvantages, computed tomography (CT) can be employed for the identification of polyps or growths while screening for CRC. The proposed approach was implemented to improve the accuracy and reduce the computation time of segmenting the colon from abdominal CT images, which contain anatomical organs such as the lungs, small bowel, large bowel (colon), ribs, opacified fluid and bones. The segmentation is performed in two major steps. The first step segments the air-filled colon portions by placing suitable seed points and using modified 3D seeded region growing, which identifies and matches similar voxels with a 6-neighborhood connectivity technique. The segmentation of the opacified fluid portions is done using a fuzzy connectedness approach enhanced with interval thresholding. The membership classes are defined and the voxels are categorized based on the class value. Interval thresholding is performed so that the bones and opacified fluid parts may be extracted. The bones are removed by the placement of seed points, as the bone region shows greater continuity across the axial slices. The resultant image containing bones is subtracted from the threshold output to segment the opacified fluid segments in all the axial slices of a dataset. Finally, concatenation of the opacified fluid with the segmented colon is performed for the 3D rendering of the segmented colon. This method was implemented on 15 datasets downloaded from TCIA and on a real-time dataset, in both supine and prone positions, and the accuracy achieved was 98.73%.
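To make the first step concrete, here is a minimal 6-connected 3D seeded region-growing sketch; the seed location and intensity interval (meant to capture air-filled voxels in HU) are hypothetical.

```python
from collections import deque
import numpy as np

def region_grow_3d(volume, seed, low, high):
    # Accept any face-adjacent (6-connected) voxel whose intensity lies
    # in [low, high], starting from the seed voxel.
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]
                    and low <= volume[nz, ny, nx] <= high):
                mask[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return mask

# e.g. colon_air = region_grow_3d(ct_volume, seed=(40, 256, 256), low=-1024, high=-800)
```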
Machine Learning Methods for Image Analysis in Medical Applications From Alzheimer’s Disease, Brain Tumors, to Assisted Living
Chenjie Ge
2020Thesis, cited 0 times
Thesis
Dissertation
Machine learning
Supervised
Convolutional Neural Network (CNN)
BraTS
Classification
Generative Adversarial Network (GAN)
ADNI
Healthcare has progressed greatly nowadays owing to technological advances, where machine learning plays an important role in processing and analyzing a large amount of medical data. This thesis investigates four healthcare-related issues (Alzheimer's disease detection, glioma classification, human fall detection, and obstacle avoidance in prosthetic vision), where the underlying methodologies are associated with machine learning and computer vision. For Alzheimer's disease (AD) diagnosis, apart from the symptoms of patients, Magnetic Resonance Images (MRIs) also play an important role. Inspired by the success of deep learning, a new multi-stream multi-scale Convolutional Neural Network (CNN) architecture is proposed for AD detection from MRIs, where AD features are characterized at both the tissue level and the scale level for improved feature learning. Good classification performance is obtained for AD/NC (normal control) classification, with a test accuracy of 94.74%. In glioma subtype classification, biopsies are usually needed for determining different molecular-based glioma subtypes. We investigate non-invasive glioma subtype prediction from MRIs by using deep learning. A 2D multi-stream CNN architecture is used to learn the features of gliomas from multi-modal MRIs, where the training dataset is enlarged with synthetic brain MRIs generated by pairwise Generative Adversarial Networks (GANs). A test accuracy of 88.82% has been achieved for IDH mutation (a molecular-based subtype) prediction. A new deep semi-supervised learning method is also proposed to tackle the problem of missing molecular-related labels in training datasets and improve the performance of glioma classification. In the other two applications, we also address video-based human fall detection by using co-saliency-enhanced Recurrent Convolutional Networks (RCNs), as well as obstacle avoidance in prosthetic vision by characterizing obstacle-related video features using a Spiking Neural Network (SNN). These investigations can benefit future research, where artificial intelligence/deep learning may open a new way for real medical applications.
Deep semi-supervised learning for brain tumor classification
Ge, Chenjie
Gu, Irene Yu-Hua
Jakola, Asgeir Store
Yang, Jie
2020Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
BACKGROUND: This paper addresses issues of brain tumor, glioma, classification from four modalities of Magnetic Resonance Image (MRI) scans (i.e., T1 weighted MRI, contrast-enhanced T1 weighted MRI, T2 weighted MRI and FLAIR). Currently, many available glioma datasets often contain some unlabeled brain scans, and many datasets are moderate in size. METHODS: We propose to exploit deep semi-supervised learning to make full use of the unlabeled data. Deep CNN features were incorporated into a new graph-based semi-supervised learning framework for learning the labels of the unlabeled data, where a new 3D-2D consistent constraint is added to make consistent classifications for the 2D slices from the same 3D brain scan. A deep-learning classifier is then trained to classify different glioma types using both labeled and unlabeled data with estimated labels. To alleviate the overfitting caused by moderate-size datasets, synthetic MRIs generated by Generative Adversarial Networks (GANs) are added in the training of CNNs. RESULTS: The proposed scheme has been tested on two glioma datasets, the TCGA dataset for IDH-mutation prediction (molecular-based glioma subtype classification) and the MICCAI dataset for glioma grading. Our results have shown good performance (with test accuracies of 86.53% on the TCGA dataset and 90.70% on the MICCAI dataset). CONCLUSIONS: The proposed scheme is effective for glioma IDH-mutation prediction and glioma grading, and its performance is comparable to the state-of-the-art.
Enlarged Training Dataset by Pairwise GANs for Molecular-Based Brain Tumor Classification
Ge, Chenjie
Gu, Irene Yu-Hua
Jakola, Asgeir Store
Yang, Jie
IEEE Access2020Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
This paper addresses issues of brain tumor subtype classification using Magnetic Resonance Images (MRIs) from different scanner modalities such as T1-weighted, contrast-enhanced T1-weighted, T2-weighted and FLAIR images. Currently, most available glioma datasets are relatively moderate in size and often accompanied by incomplete MRIs in different modalities. To tackle the commonly encountered problems of insufficiently large brain tumor datasets and incomplete image modalities for deep learning, we propose to add augmented brain MR images to enlarge the training dataset by employing a pairwise Generative Adversarial Network (GAN) model. The pairwise GAN is able to generate synthetic MRIs across different modalities. To achieve a patient-level diagnostic result, we propose a post-processing strategy that combines the slice-level glioma subtype classification results by majority voting. A two-stage coarse-to-fine training strategy is proposed to learn the glioma features using GAN-augmented MRIs followed by real MRIs. To evaluate the effectiveness of the proposed scheme, experiments have been conducted on a brain tumor dataset for classifying glioma molecular subtypes: isocitrate dehydrogenase 1 (IDH1) mutation and IDH1 wild-type. Our results on the dataset have shown good performance (with test accuracy 88.82%). Comparisons with several state-of-the-art methods are also included.
Cross-Modality Augmentation of Brain Mr Images Using a Novel Pairwise Generative Adversarial Network for Enhanced Glioma Classification
Ge, Chenjie
Gu, Irene Yu-Hua
Store Jakola, Asgeir
Yang, Jie
2019Conference Paper, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Brain Magnetic Resonance Images (MRIs) are commonly used for tumor diagnosis. Machine learning for brain tumor characterization often uses MRIs from many modalities (e.g., T1-MRI, Enhanced-T1-MRI, T2-MRI and FLAIR). This paper tackles two issues that may impact brain tumor characterization performance from deep learning: an insufficiently large training dataset, and incomplete collection of MRIs from different modalities. We propose a novel pairwise generative adversarial network (GAN) architecture for generating synthetic brain MRIs in missing modalities by using existing MRIs in other modalities. By improving the training dataset, we aim to mitigate overfitting and improve deep learning performance. The main contributions of the paper include: (a) proposing a pairwise generative adversarial network (GAN) for brain image augmentation via cross-modality image generation; (b) proposing a training strategy to enhance glioma classification performance, where GAN-augmented images are used for pre-training, followed by refined training using real brain MRIs; (c) demonstrating the proposed method through tests and comparisons of glioma classifiers trained on mixed real and GAN-synthetic data, as well as on real data only. Experiments were conducted on an open TCGA dataset containing 167 subjects for classifying IDH genotypes (mutation or wild-type). Test results from two experimental settings both support the proposed method, with glioma classification performance consistently improved by using mixed real and augmented data (test accuracy 81.03%, a 2.57% improvement).
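As a rough illustration of cross-modality synthesis, the PyTorch sketch below shows a minimal encoder-decoder generator that maps a slice of one modality to another; the layer sizes and modality pairing are placeholder assumptions, not the paper's pairwise GAN design, which also involves discriminators and adversarial training.

import torch
import torch.nn as nn

class ModalityGenerator(nn.Module):
    # Toy generator: T1 slice in, synthetic FLAIR-like slice out.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, t1):
        return self.net(t1)

g = ModalityGenerator()
fake_flair = g(torch.randn(1, 1, 128, 128))  # one 128x128 slice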
SDCT-AuxNet(θ): DCT augmented stain deconvolutional CNN with auxiliary classifier for cancer diagnosis
Gehlot, Shiv
Gupta, Anubha
Gupta, Ritu
Medical Image Analysis2020Journal Article, cited 6 times
Website
C_NMC_2019 Dataset: ALL Challenge dataset of ISBI 2019
Convolutional Neural Network (CNN)
Deep Learning
Pathomics
Classification
Acute lymphoblastic leukemia (ALL) is a pervasive pediatric white blood cell cancer across the globe. With the popularity of convolutional neural networks (CNNs), computer-aided diagnosis of cancer has attracted considerable attention. Such tools are easily deployable and cost-effective; hence, they can enable extensive coverage of cancer diagnostic facilities. However, the development of such a tool for ALL cancer has so far been challenging due to the non-availability of a large training dataset. The visual similarity between malignant and normal cells adds to the complexity of the problem. This paper discusses the recent release of a large dataset and presents a novel deep learning architecture for the classification of cell images of ALL cancer. The proposed architecture, namely SDCT-AuxNet(θ), is a two-module framework that utilizes a compact CNN as the main classifier in one module and a kernel SVM as the auxiliary classifier in the other. While the CNN classifier uses features obtained through bilinear pooling, the auxiliary classifier uses spectral-averaged features. Further, this CNN is trained on stain-deconvolved quantity images in the optical density domain instead of the conventional RGB images. A novel test strategy is proposed that exploits both classifiers for decision making using the confidence scores of their predicted class labels. Elaborate experiments have been carried out on our recently released public dataset of 15114 images of ALL cancer and healthy cells to establish the validity of the proposed methodology, which is also robust to subject-level variability. A weighted F1 score of 94.8% is obtained, the best so far on this challenging dataset.
A CNN-based unified framework utilizing projection loss in unison with label noise handling for multiple Myeloma cancer diagnosis
Gehlot, S.
Gupta, A.
Gupta, R.
Med Image Anal2021Journal Article, cited 0 times
Website
C-NMC 2019 ALL Challenge dataset of ISBI 2019
Histopathology imaging features
Classification
Computer Aided Diagnosis (CADx)
Multiple Myeloma (MM) is a malignancy of plasma cells. Similar to other forms of cancer, it demands prompt diagnosis for reducing the risk of mortality. The conventional diagnostic tools are resource-intensive, and hence these solutions are not easily scalable for extending their reach to the masses. Advancements in deep learning have led to rapid developments in affordable, resource-optimized, easily deployable computer-assisted solutions. This work proposes a unified framework for MM diagnosis using microscopic blood cell imaging data that addresses the key challenges of inter-class visual similarity between healthy and cancer cells and of label noise in the dataset. To extract class-distinctive features, we propose a projection loss that maximizes the projection of a sample's activation on the respective class vector while imposing orthogonality constraints on the class vectors. This projection loss is used along with the cross-entropy loss to design a dual-branch architecture that helps achieve improved performance and provides scope for targeting the label noise problem. Based on this architecture, two methodologies have been proposed to correct the noisy labels. A coupling classifier has also been proposed to resolve conflicts in the dual-branch architecture's predictions. We have utilized a large dataset of 72 subjects (26 healthy and 46 MM cancer) containing a total of 74996 images (including 34555 training cell images and 40441 test cell images). This is so far the most extensive dataset on Multiple Myeloma cancer reported in the literature. An ablation study has also been carried out. The proposed architecture performs best, with a balanced accuracy of 94.17% on binary cell classification of healthy versus cancer, in a comparison with ten state-of-the-art architectures. Extensive experiments on two additional publicly available datasets of two different modalities have also been utilized for analyzing the label noise handling capability of the proposed methodology. The code will be available under https://github.com/shivgahlout/CAD-MM.
Ultra-Fast 3D GPGPU Region Extractions for Anatomy Segmentation
Region extraction is ubiquitous in anatomy segmentation. Region growing is one such method: starting from an initial seed point, it grows a region of interest until all valid voxels are checked, resulting in an object segmentation. Although widely used, it is computationally expensive because of its sequential approach. In this paper, we present a parallel, high-performance alternative to region growing using GPGPU capability. The idea is to approximate region-growing requirements within an algorithm using a parallel connected-component labeling (CCL) solution. To showcase this, we selected a typical lung segmentation problem using region growing. On the CPU, the sequential approach consists of 3D region growing inside a mask that is created after applying a threshold. On the GPU, the parallel alternative is to apply parallel CCL and select the biggest region of interest. We evaluated our approach on 45 clinical chest CT scans from the LIDC data in the TCIA repository. With respect to the CPU, our CUDA-based GPU implementation delivered an average performance improvement of approximately 240×. The speed-up is so profound that it can even be applied to 4D lung segmentation at 6 fps.
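The core CCL idea can be mimicked on the CPU in a few lines; the sketch below thresholds a CT volume, labels connected components, and keeps the largest one. The threshold value and scipy-based sequential labeling are illustrative assumptions; the paper's contribution is a parallel CUDA implementation of this logic.

import numpy as np
from scipy import ndimage

def largest_component(volume, threshold=-400):
    mask = volume < threshold                  # air-like voxels (HU)
    labeled, n = ndimage.label(mask)           # 3D connected components
    if n == 0:
        return np.zeros_like(mask)
    sizes = ndimage.sum(mask, labeled, range(1, n + 1))
    return labeled == (np.argmax(sizes) + 1)   # keep the biggest region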
Synthetic Head and Neck and Phantom Images for Determining Deformable Image Registration Accuracy in Magnetic Resonance Imaging
Ger, Rachel B
Yang, Jinzhong
Ding, Yao
Jacobsen, Megan C
Cardenas, Carlos E
Fuller, Clifton D
Howell, Rebecca M
Li, Heng
Stafford, R Jason
Zhou, Shouhao
Medical Physics2018Journal Article, cited 0 times
Website
MRI-DIR
head and neck cancer
MRI
T1-weighted
T2-weighted
porcine phantom
Radiomics features of the primary tumor fail to improve prediction of overall survival in large cohorts of CT- and PET-imaged head and neck cancer patients
Ger, Rachel B
Zhou, Shouhao
Elgohari, Baher
Elhalawani, Hesham
Mackin, Dennis M
Meier, Joseph G
Nguyen, Callistus M
Anderson, Brian M
Gay, Casey
Ning, Jing
Fuller, Clifton D
Li, Heng
Howell, Rebecca M
Layman, Rick R
Mawlawi, Osama
Stafford, R Jason
Aerts, Hugo JWL
Court, Laurence E.
PLoS One2019Journal Article, cited 0 times
Website
Head-Neck-PET-CT
Radiomics studies require many patients in order to power them; thus, patients are often combined from different institutions and imaged using different protocols. Various studies have shown that imaging protocols affect radiomics feature values. We examined whether using data from cohorts with controlled imaging protocols improved patient outcome models. We retrospectively reviewed 726 CT and 686 PET images from head and neck cancer patients, who were divided into training or independent testing cohorts. For each patient, radiomics features with different preprocessing were calculated, and two clinical variables (HPV status and tumor volume) were also included. A Cox proportional hazards model was built on the training data by using bootstrapped Lasso regression to predict overall survival. The effect of controlled imaging protocols on model performance was evaluated by subsetting the original training and independent testing cohorts to include only patients whose images were obtained using the same imaging protocol and vendor. Tumor volume, HPV status, and two radiomics covariates were selected for the CT model, resulting in an AUC of 0.72. However, volume alone produced a higher AUC, whereas adding radiomics features reduced the AUC. HPV status and one radiomics feature were selected as covariates for the PET model, resulting in an AUC of 0.59, but neither covariate was significantly associated with survival. Limiting the training and independent testing to patients with the same imaging protocol reduced the AUC for CT patients to 0.55, and no covariates were selected for PET patients. Radiomics features were not consistently associated with survival in CT or PET images of head and neck patients, even within patients with the same imaging protocol.
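A minimal sketch of the modeling step, assuming the lifelines library and made-up column names on toy data; the paper's actual pipeline bootstraps the Lasso selection before the Cox fit.

import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time": [12.0, 30.5, 8.2, 24.1, 18.9, 40.3],   # months (toy data)
    "event": [1, 0, 1, 0, 1, 0],                   # 1 = death observed
    "volume": [15.2, 4.1, 22.8, 7.7, 11.3, 3.9],
    "hpv_positive": [1, 1, 0, 0, 1, 1],
    "radiomic_f1": [0.8, 0.2, 0.9, 0.4, 0.6, 0.1],
})

cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)     # Lasso-type penalty
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()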
Semi-automatic Brain Tumor Segmentation by Drawing Long Axes on Multi-plane Reformat
Gering, David
Sun, Kay
Avery, Aaron
Chylla, Roger
Vivekanandan, Ajeet
Kohli, Lisa
Knapp, Haley
Paschke, Brad
Young-Moxon, Brett
King, Nik
Mackie, Thomas
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
A semi-automatic image segmentation method, called SAMBAS, based on a workflow familiar to clinical radiologists is described. The user initializes 3D segmentation by drawing a long axis on a multi-plane reformat (MPR). As the user draws, a 2D segmentation updates in real time for interactive feedback. When necessary, additional long axes, short axes, or other editing operations may be drawn on one or more MPR planes. The method learns probability distributions from the drawing to perform the MPR segmentation, and in turn, it learns from the MPR segmentation to perform the 3D segmentation. As a preliminary experiment, a batch simulation was performed in which long and short axes were automatically drawn on each of 285 multispectral MR brain scans of glioma patients in the 2018 BraTS Challenge training data. The average Dice coefficient for tumor core was 0.86, and the Hausdorff-95% distance was 4.4 mm. As another experiment, a convolutional neural network was trained on the same data and applied to the BraTS validation and test data. Its outputs, computed offline, were integrated into the interactive method. Ten volunteers used the interface on the BraTS validation and test data. On the 66 scans of the validation data, the average Dice coefficient for core tumor improved from 0.76 with deep learning alone to 0.82 as an interactive system.
Glioblastoma Multiforme: Exploratory Radiogenomic Analysis by Using Quantitative Image Features
Gevaert, Olivier
Mitchell, Lex A
Achrol, Achal S
Xu, Jiajing
Echegaray, Sebastian
Steinberg, Gary K
Cheshier, Samuel H
Napel, Sandy
Zaharchuk, Greg
Plevritis, Sylvia K
Radiology2014Journal Article, cited 151 times
Website
TCGA-GBM
Radiomics
Radiomic features
Radiogenomics
IDH mutation
Glioblastoma Multiforme (GBM)
VASARI
Computer Aided Detection (CADe)
Purpose: To derive quantitative image features from magnetic resonance (MR) images that characterize the radiographic phenotype of glioblastoma multiforme (GBM) lesions and to create radiogenomic maps associating these features with various molecular data. Materials and Methods: Clinical, molecular, and MR imaging data for GBMs in 55 patients were obtained from the Cancer Genome Atlas and the Cancer Imaging Archive after local ethics committee and institutional review board approval. Regions of interest (ROIs) corresponding to enhancing necrotic portions of tumor and peritumoral edema were drawn, and quantitative image features were derived from these ROIs. Robust quantitative image features were defined on the basis of an intraclass correlation coefficient of 0.6 for a digital algorithmic modification and a test-retest analysis. The robust features were visualized by using hierarchic clustering and were correlated with survival by using Cox proportional hazards modeling. Next, these robust image features were correlated with manual radiologist annotations from the Visually Accessible Rembrandt Images (VASARI) feature set and GBM molecular subgroups by using nonparametric statistical tests. A bioinformatic algorithm was used to create gene expression modules, defined as a set of coexpressed genes together with a multivariate model of cancer driver genes predictive of the module's expression pattern. Modules were correlated with robust image features by using the Spearman correlation test to create radiogenomic maps and to link robust image features with molecular pathways. Results: Eighteen image features passed the robustness analysis and were further analyzed for the three types of ROIs, for a total of 54 image features. Three enhancement features were significantly correlated with survival, 77 significant correlations were found between robust quantitative features and the VASARI feature set, and seven image features were correlated with molecular subgroups (P < .05 for all). A radiogenomics map was created to link image features with gene expression modules and allowed linkage of 56% (30 of 54) of the image features with biologic processes. Conclusion: Radiogenomic approaches in GBM have the potential to predict clinical and molecular characteristics of tumors noninvasively.
Imaging-AMARETTO: An Imaging Genomics Software Tool to Interrogate Multiomics Networks for Relevance to Radiography and Histopathology Imaging Biomarkers of Clinical Outcomes
Gevaert, O.
Nabian, M.
Bakr, S.
Everaert, C.
Shinde, J.
Manukyan, A.
Liefeld, T.
Tabor, T.
Xu, J.
Lupberger, J.
Haas, B. J.
Baumert, T. F.
Hernaez, M.
Reich, M.
Quintana, F. J.
Uhlmann, E. J.
Krichevsky, A. M.
Mesirov, J. P.
Carey, V.
Pochet, N.
JCO Clin Cancer Inform2020Journal Article, cited 1 times
Website
TCGA-GBM
TCGA-LGG
VASARI
Ivy GAP
Radiomics
Radiogenomics
PURPOSE: The availability of increasing volumes of multiomics, imaging, and clinical data in complex diseases such as cancer opens opportunities for the formulation and development of computational imaging genomics methods that can link multiomics, imaging, and clinical data. METHODS: Here, we present the Imaging-AMARETTO algorithms and software tools to systematically interrogate regulatory networks derived from multiomics data within and across related patient studies for their relevance to radiography and histopathology imaging features predicting clinical outcomes. RESULTS: To demonstrate its utility, we applied Imaging-AMARETTO to integrate three patient studies of brain tumors, specifically, multiomics with radiography imaging data from The Cancer Genome Atlas (TCGA) glioblastoma multiforme (GBM) and low-grade glioma (LGG) cohorts and transcriptomics with histopathology imaging data from the Ivy Glioblastoma Atlas Project (IvyGAP) GBM cohort. Our results show that Imaging-AMARETTO recapitulates known key drivers of tumor-associated microglia and macrophage mechanisms, mediated by STAT3, AHR, and CCR2, and neurodevelopmental and stemness mechanisms, mediated by OLIG2. Imaging-AMARETTO provides interpretation of their underlying molecular mechanisms in light of imaging biomarkers of clinical outcomes and uncovers novel master drivers, THBS1 and MAP2, that establish relationships across these distinct mechanisms. CONCLUSION: Our network-based imaging genomics tools serve as hypothesis generators that facilitate the interrogation of known and uncovering of novel hypotheses for follow-up with experimental validation studies. We anticipate that our Imaging-AMARETTO imaging genomics tools will be useful to the community of biomedical researchers for applications to similar studies of cancer and other complex diseases with available multiomics, imaging, and clinical data.
Non-small cell lung cancer: identifying prognostic imaging biomarkers by leveraging public gene expression microarray data--methods and preliminary results
Gevaert, Olivier
Xu, Jiajing
Hoang, Chuong D
Leung, Ann N
Xu, Yue
Quon, Andrew
Rubin, Daniel L
Napel, Sandy
Plevritis, Sylvia K
Radiology2012Journal Article, cited 187 times
Website
Radiogenomics
LUNG
PET/CT
Non Small Cell Lung Cancer (NSCLC)
Metagenomics/ methods
Microarray Analysis
PURPOSE: To identify prognostic imaging biomarkers in non-small cell lung cancer (NSCLC) by means of a radiogenomics strategy that integrates gene expression and medical images in patients for whom survival outcomes are not available by leveraging survival data in public gene expression data sets. MATERIALS AND METHODS: A radiogenomics strategy for associating image features with clusters of coexpressed genes (metagenes) was defined. First, a radiogenomics correlation map is created for a pairwise association between image features and metagenes. Next, predictive models of metagenes are built in terms of image features by using sparse linear regression. Similarly, predictive models of image features are built in terms of metagenes. Finally, the prognostic significance of the predicted image features are evaluated in a public gene expression data set with survival outcomes. This radiogenomics strategy was applied to a cohort of 26 patients with NSCLC for whom gene expression and 180 image features from computed tomography (CT) and positron emission tomography (PET)/CT were available. RESULTS: There were 243 statistically significant pairwise correlations between image features and metagenes of NSCLC. Metagenes were predicted in terms of image features with an accuracy of 59%-83%. One hundred fourteen of 180 CT image features and the PET standardized uptake value were predicted in terms of metagenes with an accuracy of 65%-86%. When the predicted image features were mapped to a public gene expression data set with survival outcomes, tumor size, edge shape, and sharpness ranked highest for prognostic significance. CONCLUSION: This radiogenomics strategy for identifying imaging biomarkers may enable a more rapid evaluation of novel imaging modalities, thereby accelerating their translation to personalized medicine.
Classification of COVID-19 and Nodule in CT Images using Deep Convolutional Neural Network
Distinguishing between coronavirus disease 2019 (COVID-19) and nodules as an early indicator of lung cancer in Computed Tomography (CT) images has been a challenge that radiologists have faced since COVID-19 was announced as a pandemic. The similarity between these two conditions is the main source of dilemmas for radiologists and may lead to misdiagnosis. As a result, manual classification is not as efficient as automated classification. This paper proposes an automated approach to classify COVID-19 infections from nodules in CT images. Convolutional Neural Networks (CNNs) have significantly improved automated image classification tasks, particularly for medical images. Accordingly, we propose a refined CNN-based architecture with modifications in the network layers to reduce complexity. Furthermore, to overcome the lack of training data, data augmentation approaches are utilized. In our method, a Multilayer Perceptron (MLP) categorizes the feature vectors extracted from denoised input images by the convolutional layers into two main classes: COVID-19 infections and nodules. To the best of our knowledge, other state-of-the-art methods can only classify one of the two classes listed above. Compared to these counterparts, our proposed method has a promising performance with an accuracy of 97.80%.
Automated Brain Tumour Segmentation Using Cascaded 3D Densely-Connected U-Net
Ghaffari, Mina
Sowmya, Arcot
Oliver, Ruth
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Dense network
Magnetic Resonance Imaging (MRI)
Accurate brain tumour segmentation is a crucial step towards improving disease diagnosis and proper treatment planning. In this paper, we propose a deep-learning-based method to segment a brain tumour into its subregions: whole tumour, tumour core and enhancing tumour. The proposed architecture is a 3D convolutional neural network based on a variant of the U-Net architecture of Ronneberger et al. [17] with three main modifications: (i) a heavy-encoder, light-decoder structure using residual blocks; (ii) employment of dense blocks instead of skip connections; and (iii) utilization of self-ensembling in the decoder part of the network. The network was trained and tested using two different approaches: a multitask framework to segment all tumour subregions at the same time, and a three-stage cascaded framework to segment one subregion at a time. An ensemble of the results from both frameworks was also computed. To address the class imbalance issue, appropriate patch extraction was employed in a pre-processing step. Connected component analysis was utilized in the post-processing step to reduce false positive predictions. Experimental results on the BraTS20 validation dataset demonstrate that the proposed model achieved average Dice scores of 0.90, 0.83, and 0.78 for whole tumour, tumour core and enhancing tumour, respectively.
T2-FDL: A robust sparse representation method using adaptive type-2 fuzzy dictionary learning for medical image classification
Ghasemi, Majid
Kelarestaghi, Manoochehr
Eshghi, Farshad
Sharifi, Arash
Expert Systems with Applications2020Journal Article, cited 0 times
Website
REMBRANDT
TCGA-LGG
BRAIN
Machine Learning
In this paper, a robust sparse representation for medical image classification is proposed based on the adaptive type-2 fuzzy learning (T2-FDL) system. In the proposed method, sparse coding and dictionary learning processes are executed iteratively until a near-optimal dictionary is obtained. The sparse coding step aims at finding a combination of dictionary atoms to represent the input data efficiently, and the dictionary learning step rigorously adjusts a minimum set of dictionary items. The two-step operation helps create an adaptive sparse representation algorithm by involving type-2 fuzzy sets in the design process of image classification. Since the existing image measurements are not made under the same conditions and with the same accuracy, the performance of medical diagnosis is always affected by noise and uncertainty. By introducing an adaptive type-2 fuzzy learning method, a better approximation in an environment with higher degrees of uncertainty and noise is achieved. The experiments are executed over two open-access brain tumor magnetic resonance image databases, REMBRANDT and TCGA-LGG, from The Cancer Imaging Archive (TCIA). The experimental results of a brain tumor classification task show that the proposed T2-FDL method can adequately minimize the negative effects of uncertainty in the input images. The results demonstrate that T2-FDL outperforms other important classification methods in the literature in terms of accuracy, specificity, and sensitivity.
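For orientation, the sketch below pairs dictionary learning with Orthogonal Matching Pursuit sparse coding, the two alternating steps named in the abstract; it uses plain scikit-learn on toy data and omits the type-2 fuzzy weighting that is the paper's actual contribution.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
patches = rng.normal(size=(500, 64))       # toy image patches

dico = MiniBatchDictionaryLearning(n_components=32, random_state=0)
D = dico.fit(patches).components_          # dictionary atoms, shape (32, 64)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5)
omp.fit(D.T, patches[0])                   # code one sample over the atoms
sparse_code = omp.coef_                    # at most 5 active atoms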
FDSR: A new fuzzy discriminative sparse representation method for medical image classification
Ghasemi, Majid
Kelarestaghi, Manoochehr
Eshghi, Farshad
Sharifi, Arash
Artif Intell Med2020Journal Article, cited 10 times
Website
REMBRANDT
TCGA-LGG
Algorithm Development
Databases
Factual
Humans
Magnetic Resonance Imaging (MRI)
Discriminative sparse representation
Fuzzy dictionary learning
Inter-class difference
Intra-class similarity
Medical image classification
Recent developments in medical image analysis techniques make them essential tools in medical diagnosis. Medical imaging always involves different kinds of uncertainties. Managing these uncertainties has motivated extensive research on medical image classification methods, particularly over the past decade. Despite being a powerful classification tool, sparse representation lacks the discrimination and robustness required to manage the uncertainty and noisiness in medical image classification. We try to overcome this deficiency by introducing a new fuzzy discriminative robust sparse representation classifier, which benefits from fuzzy terms in the optimization function of its dictionary learning process. In this work, we present a new medical image classification approach, fuzzy discriminative sparse representation (FDSR). The proposed fuzzy terms increase the inter-class representation difference and the intra-class representation similarity. Also, an adaptive fuzzy dictionary learning approach is used to learn dictionary atoms. FDSR is applied to Magnetic Resonance Images (MRI) from three medical image databases. The comprehensive experimental results clearly show that our approach outperforms a series of rival techniques in terms of accuracy, sensitivity, specificity, and convergence speed.
Medical Imaging Segmentation Assessment via Bayesian Approaches to Fusion, Accuracy and Variability Estimation with Application to Head and Neck Cancer
With the advancement of technology, medical imaging has become a fast-growing area of research. Some imaging questions require little physician analysis, such as diagnosing a broken bone using a 2-D X-ray image. More complicated questions using 3-D scans, such as computerized tomography (CT), can be much more difficult to answer, for example, estimating tumor growth to evaluate malignancy, which informs whether intervention is necessary. This requires careful delineation of different structures in the image, for example, what is tumor versus what is normal tissue; this is referred to as segmentation. Currently, the gold standard of segmentation is for a radiologist to manually trace structure edges in the 3-D image; however, this can be extremely time-consuming. Additionally, manual segmentation results can differ drastically between and even within radiologists. A more reproducible, less variable, and more time-efficient segmentation approach would drastically improve medical treatment. This potential, as well as the continued increase in computing power, has led to computationally intensive semiautomated segmentation algorithms. Segmentation algorithms' widespread use is limited due to difficulty in validating their performance. Fusion models, such as STAPLE, have been proposed as a way to combine multiple segmentations into a consensus ground truth; this allows for evaluation of both manual and semiautomated segmentation in relation to the consensus ground truth. Once a consensus ground truth is obtained, a multitude of approaches have been proposed for evaluating different aspects of segmentation performance: segmentation accuracy, and between- and within-reader variability. The focus of this dissertation is threefold. First, a simulation-based tool is introduced to allow for the validation of fusion models. The simulation properties closely follow a real dataset in order to ensure that they mimic reality. Second, a statistical hierarchical Bayesian fusion model is proposed in order to estimate a consensus ground truth within a robust statistical framework. The model is validated using the simulation tool and compared to other fusion models, including STAPLE. Additionally, the model is applied to real datasets, and the consensus ground truth estimates are compared across different fusion models. Third, a statistical hierarchical Bayesian performance model is proposed in order to estimate segmentation-method-specific accuracy and between- and within-reader variability. An extensive simulation study is performed to validate the model's parameter estimation and coverage properties. Additionally, the model is fit to a real data source and performance estimates are summarized.
A Novel Domain Adaptation Framework for Medical Image Segmentation
Gholami, Amir
Subramanian, Shashank
Shenoy, Varun
Himthani, Naveen
Yue, Xiangyu
Zhao, Sicheng
Jin, Peter
Biros, George
Keutzer, Kurt
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Machine Learning
We propose a segmentation framework that uses deep neural networks and introduce two innovations. First, we describe a biophysics-based domain adaptation method. Second, we propose an automatic method to segment white matter, gray matter, glial matter and cerebrospinal fluid, in addition to tumorous tissue. Regarding our first innovation, we use a domain adaptation framework that combines a novel multispecies biophysical tumor growth model with a generative adversarial model to create realistic looking synthetic multimodal MR images with known segmentation. These images are used for the purpose of training time data augmentation. Regarding our second innovation, we propose an automatic approach to enrich available segmentation data by computing the segmentation for healthy tissues. This segmentation, which is done using diffeomorphic image registration between the BraTS training data and a set of pre-labeled atlases, provides more information for training and reduces the class imbalance problem. Our overall approach is not specific to any particular neural network and can be used in conjunction with existing solutions. We demonstrate the performance improvement using a 2D U-Net for the BraTS’18 segmentation challenge. Our biophysics based domain adaptation achieves better results, as compared to the existing state-of-the-art GAN model used to create synthetic data for training.
Deep Learning for Low-Dose CT Denoising Using Perceptual Loss and Edge Detection Layer
Gholizadeh-Ansari, M.
Alirezaie, J.
Babyn, P.
J Digit Imaging2019Journal Article, cited 1 times
Website
TCGA-BRCA
Low-dose CT denoising is a challenging task that has been studied by many researchers. Some studies have used deep neural networks to improve the quality of low-dose CT images and achieved fruitful results. In this paper, we propose a deep neural network that uses dilated convolutions with different dilation rates instead of standard convolutions, helping to capture more contextual information in fewer layers. Also, we have employed residual learning by creating shortcut connections to transmit image information from the early layers to later ones. To further improve the performance of the network, we have introduced a non-trainable edge detection layer that extracts edges in horizontal, vertical, and diagonal directions. Finally, we demonstrate that optimizing the network by a combination of mean-square error loss and perceptual loss preserves many structural details in the CT image. This objective function does not suffer from the over-smoothing and blurring effects caused by per-pixel loss or the grid-like artifacts resulting from perceptual loss. The experiments show that each modification to the network improves the outcome while changing the complexity of the network minimally.
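A non-trainable edge layer of the kind described can be built from fixed convolution kernels; the PyTorch sketch below uses Sobel-style filters for vertical, horizontal, and diagonal edges as an assumption about the kernel choice, which may differ from the paper's.

import torch
import torch.nn.functional as F

sobel = torch.tensor([
    [[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],   # vertical edges
    [[-1., -2., -1.], [0., 0., 0.], [1., 2., 1.]],   # horizontal edges
    [[0., 1., 2.], [-1., 0., 1.], [-2., -1., 0.]],   # diagonal edges
]).unsqueeze(1)                                       # shape (3, 1, 3, 3)

def edge_features(ct_slice):                          # input (N, 1, H, W)
    return F.conv2d(ct_slice, sobel, padding=1)       # fixed, not learned

edges = edge_features(torch.randn(1, 1, 64, 64))      # output (1, 3, 64, 64)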
Artificial Intelligence Using Open Source BI-RADS Data Exemplifying Potential Future Use
Ghosh, A.
J Am Coll Radiol2019Journal Article, cited 0 times
CBIS-DDSM
*Algorithms
*Artificial Intelligence
BREAST
Computer Aided Diagnosis (CADx)
Predictive Value of Tests
Artificial intelligence
BI-RADS
machine learning
Supervised training
radiologist-augmented workflow
OBJECTIVES: With much hype about artificial intelligence (AI) rendering radiologists redundant, a simple radiologist-augmented AI workflow is evaluated; the premise is that inclusion of a radiologist's opinion into an AI algorithm would make the algorithm achieve better accuracy than an algorithm trained on imaging parameters alone. Open-source BI-RADS data sets were evaluated to see whether inclusion of a radiologist's opinion (in the form of BI-RADS classification) in addition to image parameters improved the accuracy of prediction of histology using three machine learning algorithms vis-a-vis algorithms using image parameters alone. MATERIALS AND METHODS: BI-RADS data sets were obtained from the University of California, Irvine Machine Learning Repository (data set 1) and the Digital Database for Screening Mammography repository (data set 2); three machine learning algorithms were trained using 10-fold cross-validation. Two sets of models were trained: M1, using lesion shape, margin, density, and patient age for data set 1 and image texture parameters for data set 2, and M2, using the previous image parameters and the BI-RADS classification provided by radiologists. The area under the curve and the Gini coefficient for M1 and M2 were compared for the validation data set. RESULTS: The models using the radiologist-provided BI-RADS classification performed significantly better than the models not using them (P < .0001). CONCLUSION: AI and radiologist working together can achieve better results, helping in case-based decision making. Further evaluation of the metrics involved in predictor handling by AI algorithms will provide newer insights into imaging.
EMD-Based Binary Classification of Mammograms
Ghosh, Anirban
Ramakant, Pooja
Ranjan, Priya
Deshpande, Anuj
Janardhanan, Rajiv
2022Book Section, cited 0 times
CMMD
Mammography is an inexpensive and noninvasive imaging tool that is commonly used in the detection of breast lesions. However, manual analysis of a mammographic image can be both time-intensive and prone to unwanted error. In recent times, there has been a lot of interest in using computer-aided techniques to classify medical images. The current study explores the efficacy of an Earth Mover's Distance (EMD)-based mammographic image classification technique to identify the benign and the malignant lumps in the images. We further present a novel leader recognition (LR) technique which aids the classification process by identifying the most benign and most malignant images from their respective cohorts in the training set. The effect of image diversity in training sets on classification efficacy is also studied by considering training sets of different sizes. The proposed classification technique is found to identify malignant images with up to 80% sensitivity and also provides a maximum F1 score of 72.73%.
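As a simple stand-in for this approach, the sketch below computes a 1-D EMD between intensity histograms of two mammograms and classifies a query by its distance to a benign and a malignant "leader" image; the histogramming choices and leader selection are illustrative assumptions.

import numpy as np
from scipy.stats import wasserstein_distance

def histogram_emd(img_a, img_b, bins=64):
    ha, edges = np.histogram(img_a, bins=bins, range=(0, 255), density=True)
    hb, _ = np.histogram(img_b, bins=bins, range=(0, 255), density=True)
    centers = (edges[:-1] + edges[1:]) / 2
    return wasserstein_distance(centers, centers, ha, hb)

def classify(query, benign_leader, malignant_leader):
    d_b = histogram_emd(query, benign_leader)
    d_m = histogram_emd(query, malignant_leader)
    return "malignant" if d_m < d_b else "benign"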
Binary Classification of Mammograms Using Horizontal Visibility Graph
Ghosh, Anirban
Ranjan, Priya
Chilamkurthy, Naga Srinivasarao
Gulati, Richa
Janardhanan, Rajiv
Ramakant, Pooja
2023Book Section, cited 0 times
CMMD
Mammography
BREAST
Algorithm Development
Horizontal visibility graph (HVG)
Hamming-Ipsen-Mikhailov (HIM) network similarity
Classification
Breast carcinoma, the most common cancer in women across the world, now accounts for almost 30% of new malignant tumor cases. Despite the high incidence rate, breast cancer mortality has been kept under control thanks to recent advances in molecular biology technology and an enhanced level of complete diagnosis and standard therapy. Our method strives to overcome the clinical dilemma of undetected and misdiagnosed breast cancer, which results in a poor clinical prognosis. Early computer-aided detection by mammography is an important aspect of this plan. In most of the diagnostic strategies currently in vogue, undue importance has been given to one of the performance metrics instead of a more balanced result. In our present study, we aim to resolve this problem by first converting the mammograms into their equivalent graphical representation and then finding the network similarity between two such generated graphs. Subsequently, we elaborate on the use of the horizontal visibility graph (HVG) representation to classify images and use the Hamming-Ipsen-Mikhailov (HIM) network similarity (distance) metric to triage mammograms according to the severity of the disease. Our HVG-HIM metric-based classification of mammograms had an accuracy of 88.37%, specificity of 92%, and sensitivity of 83.33%. We also clearly highlight the trade-off between performance and processing time.
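The HVG construction itself is compact: nodes i and j of a 1-D series are linked when every sample strictly between them lies below min(x[i], x[j]). A brute-force Python sketch (treating, say, a row of pixel intensities as the series; the paper's image-to-series mapping and the HIM metric are not reproduced here):

import numpy as np

def horizontal_visibility_graph(x):
    n = len(x)
    edges = []
    for i in range(n - 1):
        for j in range(i + 1, n):
            if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
                edges.append((i, j))
    return edges

signal = np.array([3, 1, 2, 5, 2, 4])
print(horizontal_visibility_graph(signal))
# [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]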
Brain tumor detection from MRI image: An approach
Ghosh, Debjyoti
Bandyopadhyay, Samir Kumar
International Journal of Applied Research2017Journal Article, cited 0 times
Website
Algorithm Development
REMBRANDT
BRAIN
Magnetic Resonance Imaging (MRI)
Segmentation
Computer Aided Detection (CADe)
A brain tumor is an abnormal growth of cells within the brain, which can be cancerous or noncancerous (benign). This paper detects different types of tumors and cancerous growths within the brain and associated areas by using computerized methods on MRI images of a patient. It is also possible to track the growth patterns of such tumors.
A Deep Learning Framework Integrating the Spectral and Spatial Features for Image-Assisted Medical Diagnostics
Ghosh, S.
Das, S.
Mallipeddi, R.
IEEE Access2021Journal Article, cited 0 times
CBIS-DDSM
Computer Aided Detection (CADe)
Radiomics
Image projection
Spectral analysis
COVID-19 detection
Medical imaging
class imbalance
deep learning
diagnostic solution
discrete cosine transform
discrete wavelet transform
saliency map
BREAST
Diabetic Retinopathy Detection
CHEST
The development of a computer-aided disease detection system to ease the long and arduous manual diagnostic process is an emerging research interest. Living through the recent outbreak of the COVID-19 virus, we propose an automatic diagnostic solution based on machine learning and computer vision algorithms for detecting COVID-19 infection. Our proposed method applies to chest radiographs and uses readily available infrastructure. No studies in this direction have considered the spectral aspect of the medical images. This motivates us to investigate the role of spectral-domain information of medical images along with the spatial content towards improved disease detection ability. Successful integration of spatial and spectral features is demonstrated on the COVID-19 infection detection task. Our proposed method comprises three stages: feature extraction, dimensionality reduction via projection, and prediction. At first, images are transformed into spectral and spatio-spectral domains by using the Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT), two powerful image processing algorithms. Next, features from the spatial, spectral, and spatio-spectral domains are projected into a lower dimension through a Convolutional Neural Network (CNN), and those three types of projected features are then fed to a Multilayer Perceptron (MLP) for final prediction. The combination of the three types of features yielded superior performance over any of the features used individually. This indicates the presence of complementary information in the spectral domain of the chest radiograph to characterize the considered medical condition. Moreover, saliency maps corresponding to classes representing different medical conditions demonstrate the reliability of the proposed method. The study is further extended to identify different medical conditions using diverse medical image datasets and shows the efficiency of leveraging the combined features. Altogether, the proposed method exhibits potential as a generalized and robust medical image-assisted diagnostic solution.
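To make the three-branch input concrete, the sketch below derives the spectral (DCT) and spatio-spectral (DWT) views of a radiograph; array sizes and the wavelet choice are placeholder assumptions, and each view would feed its own CNN branch before the MLP head.

import numpy as np
from scipy.fft import dctn
import pywt

image = np.random.rand(256, 256)             # placeholder radiograph

dct_coeffs = dctn(image, norm="ortho")       # spectral-domain view
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")  # spatio-spectral subbands

# `image`, `dct_coeffs`, and the DWT subbands each go to a CNN branch;
# the projected features are then concatenated for the MLP classifier.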
Tumor Segmentation in Brain MRI: U-Nets versus Feature Pyramid Network
Manifestations of brain tumors can trigger various psychiatric symptoms. Brain tumor detection can efficiently solve or reduce chances of occurrences of diseases, such as Alzheimer's disease, dementia-based disorders, multiple sclerosis and bipolar disorder. In this paper, we propose a segmentation-based approach to detect brain tumors in MRI. We provide a comparative study between two different U-Net architectures (U-Net: baseline and U-Net: ResNeXt50 backbone) and a Feature Pyramid Network (FPN) that are trained/validated on the TCGA-LGG dataset of 3,929 images. The U-Net architecture with ResNeXt50 backbone achieves the best Dice coefficient of 0.932, while the baseline U-Net and FPN separately achieve Dice coefficients of 0.846 and 0.899, respectively. The results obtained from U-Net with ResNeXt50 backbone outperform previous works.
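Since the comparison above hinges on the Dice coefficient, here is the standard computation on binary masks (a generic sketch, not code from the paper):

import numpy as np

def dice(pred, target, eps=1e-7):
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((4, 4)); a[1:3, 1:3] = 1
b = np.zeros((4, 4)); b[1:4, 1:4] = 1
print(dice(a, b))   # 2*4 / (4 + 9) ≈ 0.615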
When the machine does not know measuring uncertainty in deep learning models of medical images
Recently, deep learning (DL), which involves powerful black-box predictors, has outperformed human experts in several medical diagnostic problems. However, these methods focus exclusively on improving the accuracy of point predictions without assessing their outputs' quality, and ignore the asymmetric cost involved in different types of misclassification errors. Neural networks also do not deliver confidence in predictions and suffer from over- and under-confidence, i.e. they are not well calibrated. Knowing how much confidence there is in a prediction is essential for gaining clinicians' trust in the technology. Calibrated uncertainty quantification is a challenging problem as no ground truth is available. To address this, we make two observations: (i) cost-sensitive deep neural networks with DropWeights better quantify calibrated predictive uncertainty, and (ii) estimated uncertainty with point predictions in Deep Ensembles of Bayesian Neural Networks with DropWeights can lead to a more informed decision and improve prediction quality. This dissertation focuses on quantifying uncertainty using concepts from cost-sensitive neural networks, calibration of confidence, and the DropWeights ensemble method. First, we show how to improve predictive uncertainty with deep ensembles of neural networks with DropWeights learning an approximate distribution over their weights in medical image segmentation, and its application in active learning. Second, we use the Jackknife resampling technique to correct bias in quantified uncertainty in image classification and propose metrics to measure uncertainty performance. The third part of the thesis is motivated by the discrepancy between the model predictive error and the objective in quantified uncertainty when costs for misclassification errors or unbalanced datasets are asymmetric. We develop cost-sensitive modifications of the neural networks in disease detection and propose metrics to measure the quality of quantified uncertainty. Finally, we leverage an adaptive binning strategy to measure uncertainty calibration error that directly corresponds to estimated uncertainty performance and address problematic evaluation methods. We evaluate the effectiveness of the tools on nuclei image segmentation, multi-class brain MRI image classification, multi-level cell-type-specific protein expression prediction in ImmunoHistoChemistry (IHC) images, and cost-sensitive classification for Covid-19 detection from X-ray and CT image datasets. Our approach is thoroughly validated by measuring the quality of uncertainty. It produces equally good or better results and paves the way for a future that addresses the practical problems at the intersection of deep learning and Bayesian decision theory. In conclusion, our study highlights the opportunities and challenges of applying estimated uncertainty in deep learning models of medical images, representing the confidence of the model's prediction, and the uncertainty quality metrics show a significant improvement when using Deep Ensembles of Bayesian Neural Networks with DropWeights.
Role of Imaging in the Era of Precision Medicine
Giardino, Angela
Gupta, Supriya
Olson, Emmi
Sepulveda, Karla
Lenchik, Leon
Ivanidze, Jana
Rakow-Penner, Rebecca
Patel, Midhir J
Subramaniam, Rathan M
Ganeshan, Dhakshinamoorthy
Academic Radiology2017Journal Article, cited 12 times
Website
Radiomics
TCGA-BRCA
TCGA-RCC
Precision medicine is an emerging approach for treating medical disorders, which takes into account individual variability in genetic and environmental factors. Preventive or therapeutic interventions can then be directed to those who will benefit most from targeted interventions, thereby maximizing benefits and minimizing costs and complications. Precision medicine is gaining increasing recognition by clinicians, healthcare systems, pharmaceutical companies, patients, and the government. Imaging plays a critical role in precision medicine including screening, early diagnosis, guiding treatment, evaluating response to therapy, and assessing likelihood of disease recurrence. The Association of University Radiologists Radiology Research Alliance Precision Imaging Task Force convened to explore the current and future role of imaging in the era of precision medicine and summarized its finding in this article. We review the increasingly important role of imaging in various oncological and non-oncological disorders. We also highlight the challenges for radiology in the era of precision medicine.
Towards Image-Guided Pancreas and Biliary Endoscopy: Automatic Multi-organ Segmentation on Abdominal CT with Dense Dilated Networks
Gibson, Eli
Giganti, Francesco
Hu, Yipeng
Bonmati, Ester
Bandula, Steve
Gurusamy, Kurinchi
Davidson, Brian R
Pereira, Stephen P
Clarkson, Matthew J
Barratt, Dean C
2017Conference Proceedings, cited 14 times
Website
Pancreas-CT
Algorithm Development
Segmentation
Deep learning
Computer Aided Detection (CADe)
NiftyNet: a deep-learning platform for medical imaging
Gibson, Eli
Li, Wenqi
Sudre, Carole
Fidon, Lucas
Shakir, Dzhoshkun I.
Wang, Guotai
Eaton-Rosen, Zach
Gray, Robert
Doel, Tom
Hu, Yipeng
Whyntie, Tom
Nachev, Parashkev
Modat, Marc
Barratt, Dean C.
Ourselin, Sébastien
Cardoso, M. Jorge
Vercauteren, Tom
Computer Methods and Programs in Biomedicine2018Journal Article, cited 678 times
Website
Pancreas-CT
BACKGROUND AND OBJECTIVES: Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis and adapting them for this domain of application requires substantial implementation effort. Consequently, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon.
METHODS: The NiftyNet infrastructure provides a modular deep-learning pipeline for a range of medical imaging applications, including segmentation, regression, image generation and representation learning. Components of the NiftyNet pipeline, including data loading, data augmentation, network architectures, loss functions and evaluation metrics, are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on the TensorFlow framework and supports features such as TensorBoard visualization of 2D and 3D images and computational graphs by default.
RESULTS: We present three illustrative medical image analysis applications built using NiftyNet infrastructure: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses.
CONCLUSIONS: The NiftyNet infrastructure enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or extend the platform to new applications.
Quantitative CT assessment of emphysema and airways in relation to lung cancer risk
Gierada, David S
Guniganti, Preethi
Newman, Blake J
Dransfield, Mark T
Kvale, Paul A
Lynch, David A
Pilgram, Thomas K
Radiology2011Journal Article, cited 41 times
Website
NLST
Projected outcomes using different nodule sizes to define a positive CT lung cancer screening examination
Gierada, David S
Pinsky, Paul
Nath, Hrudaya
Chiles, Caroline
Duan, Fenghai
Aberle, Denise R
Journal of the National Cancer Institute2014Journal Article, cited 74 times
Website
National Lung Screening Trial (NLST)
LUNG
Computer Aided Detection (CADe)
Computed Tomography (CT)
Background: Computed tomography (CT) screening for lung cancer has been associated with a high frequency of false positive results because of the high prevalence of indeterminate but usually benign small pulmonary nodules. The acceptability of reducing false-positive rates and diagnostic evaluations by increasing the nodule size threshold for a positive screen depends on the projected balance between benefits and risks. Methods: We examined data from the National Lung Screening Trial (NLST) to estimate screening CT performance and outcomes for scans with nodules above the 4 mm NLST threshold used to classify a CT screen as positive. Outcomes assessed included screening results, subsequent diagnostic tests performed, lung cancer histology and stage distribution, and lung cancer mortality. Sensitivity, specificity, positive predictive value, and negative predictive value were calculated for the different nodule size thresholds. All statistical tests were two-sided. Results: In 64% of positive screens (11,598/18,141), the largest nodule was 7 mm or less in greatest transverse diameter. By increasing the threshold, the percentages of lung cancer diagnoses that would have been missed or delayed and false positives that would have been avoided progressively increased, for example from 1.0% and 15.8% at a 5 mm threshold to 10.5% and 65.8% at an 8 mm threshold, respectively. The projected reductions in postscreening follow-up CT scans and invasive procedures also increased as the threshold was raised. Differences across nodule sizes in lung cancer histology and stage distribution were small but statistically significant. There were no differences across nodule sizes in survival or mortality. Conclusion: Raising the nodule size threshold for a positive screen would substantially reduce false-positive CT screenings and medical resource utilization with a variable impact on screening outcomes.
Vessel extraction from breast MR
Gierlinger, Marco
Brandner, Dinah
Zagar, Bernhard G.
2021Conference Proceedings, cited 0 times
ISPY1/ACRIN 6657
BREAST
We present an extension of our previous work, in which a multi-seed region growing (MSRG) algorithm that extracts segments from breast MRI was introduced. The algorithm in this extended work filters elongated segments from the segments derived by the MSRG algorithm to obtain vessel-like structures. This filter is a skeletonization-like algorithm that collects useful information about the segments' thickness, length, etc. A model is shown that scans through the solution space of the MSRG algorithm by adjusting its parameters and by providing shape information for the filter. We further elaborate on the usefulness of the algorithm in assisting medical experts in their diagnosis of diseases relevant to angiography.
Segmentation of elongated structures processed on breast MRI for the detection of vessels
Gierlinger, Marco
Brandner, Dinah M.
Zagar, Bernhard G.
2021Journal Article, cited 0 times
ISPY1
The multi-seed region growing (MSRG) algorithm from previous work is extended to extract elongated segments from breast Magnetic Resonance Imaging (MRI) stacks. A model is created to adjust the MSRG parameters such that the elongated segments may reveal vessels that can support clinicians in their diagnosis of diseases or provide them with useful information before surgery during, e.g., neoadjuvant therapy. The model is a pipeline of tasks and contains user-defined parameters that influence the segmentation result. A crucial task of the model relies on a skeletonization-like algorithm that collects useful information about the segments' thickness, length, etc. Length, thickness, and gradient information of the pixel intensity along the segment helps to determine whether the extracted segments have a tubular structure, which is assumed to be the case for vessels. In this work, we show how the results are derived and that the MSRG algorithm is capable of extracting vessel-like segments even from noisy MR images.
Deep Learning Architecture to Improve Edge Accuracy of Auto-Contouring for Head and Neck Radiotherapy
The manual delineation of the gross tumor volume (GTV) for Head and Neck Cancer (HNC) patients is an essential step in the radiotherapy treatment process. Methods to automate this process have the potential to decrease the amount of time it takes for a clinician to complete a plan, while also decreasing the inter-observer variability between clinicians. Deep learning (DL) methods have shown great promise in auto-segmentation problems. For HNC, we show that DL methods systematically fail at the axial edges of GTV where the segmentation is dependent on both information from the center of the tumor and nearby slices. These failures may decrease trust and usage of proposed Auto-Contouring Systems if not accounted for. In this paper we propose a modified version of the U-Net, a fully convolutional network for image segmentation, which can more accurately process dependence between slices to create a more robust GTV contour. We also show that it can outperform the current proposed methods that capture slice dependencies by leveraging 3D convolutions. Our method uses Convolutional Recurrent Neural Networks throughout the decoder section of the U-Net to capture both spatial and adjacent-slice information when considering a contour. To account for shifts in anatomical structures through adjacent CT slices, we allow an affine transformation to the adjacent feature space using Spatial Transformer Networks. Our proposed model increases accuracy at the edges by 12% inferiorly and 26% superiorly over a baseline 2D U-Net, which has no inherent way to capture information between adjacent slices.
Targeted Design Choices in Machine Learning Architectures Can Both Improve Model Performance and Support Joint Activity
Opaque models do not support Joint Activity and create brittle systems that fail rapidly when the model reaches the edges of its operating conditions. Instead, we should use models which are observable, directable, and predictable – qualities which are better suited by transparent or ‘explainable’ models. However, using explainable models has traditionally been seen as a trade-off in machine performance, ignoring the potential benefits to the performance of the human machine teams. While the cost to model performance is negligible when considering the cost to the human machine team, there is a benefit to machine learning that has increased accuracy or capabilities when designed appropriately to deal with failure. Increased accuracy can indicate better alignment with the world and the increased capability to generalize across a broader variety of cases. Increased capability does not always have to come at the cost of explainability, and this dissertation will discuss approaches to make traditionally opaque models more usable in human machine teaming architectures.
Liver-ultrasound based motion modelling to estimate 4D dose distributions for lung tumours in scanned proton therapy
Giger, Alina
Krieger, Miriam
Jud, Christoph
Duetschler, Alisha
Salomir, Rares
Bieri, Oliver
Bauman, Grzegorz
Nguyen, Damien
Weber, Damien C
Lomax, Antony J
Zhang, Ye
Cattin, Philippe C
Physics in Medicine and Biology2020Journal Article, cited 0 times
4D-Lung
Motion mitigation strategies are crucial for scanned particle therapy of mobile tumours in order to prevent geometrical target miss and interplay effects. We developed a patient-specific respiratory motion model based on simultaneously acquired time-resolved volumetric MRI and 2D abdominal ultrasound images. We present its effects on 4D pencil beam scanned treatment planning and simulated dose distributions. Given an ultrasound image of the liver and the diaphragm, principal component analysis and Gaussian process regression were applied to infer dense motion information of the lungs. 4D dose calculations for scanned proton therapy were performed using the estimated and the corresponding ground truth respiratory motion; the differences were compared by dose difference volume metrics. We performed this simulation study on 10 combined CT and 4DMRI data sets where the motion characteristics were extracted from 5 healthy volunteers and fused with the anatomical CT data of two lung cancer patients. Median geometrical estimation errors below 2 mm for all data sets and maximum dose differences of [Formula: see text] = 43.2% and [Formula: see text] = 16.3% were found. Moreover, it was shown that abdominal ultrasound imaging allows organ drift to be monitored. This study demonstrated the feasibility of the proposed ultrasound-based motion modelling approach for its application in scanned proton therapy of lung tumours.
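The surrogate-to-dense-motion step described above can be prototyped with off-the-shelf tools. The sketch below is a minimal stand-in rather than the authors' pipeline: PCA compresses motion fields, and Gaussian process regression maps ultrasound features to PCA scores. All array shapes, the random toy data, and the RBF kernel are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy shapes: surrogate = ultrasound features per time point,
# target = dense lung motion vectors per time point (from 4D MRI).
rng = np.random.default_rng(0)
us_feats = rng.normal(size=(200, 64))    # 200 frames x 64 US features
motion = rng.normal(size=(200, 3000))    # 200 frames x (1000 voxels * 3)

pca = PCA(n_components=5).fit(motion)    # low-dimensional motion space
scores = pca.transform(motion)

gpr = GaussianProcessRegressor(kernel=RBF()).fit(us_feats, scores)

# At treatment time: estimate dense motion from a new ultrasound frame.
new_us = rng.normal(size=(1, 64))
est_motion = pca.inverse_transform(gpr.predict(new_us))
```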
Machine Learning in Medical Imaging
Giger, M. L.
J Am Coll Radiol2018Journal Article, cited 157 times
Website
Radiomics
Machine learning
computer aided diagnosis (CADx)
computer-assisted decision support
Deep learning
Advances in both imaging and computers have synergistically led to a rapid rise in the potential use of artificial intelligence in various radiological imaging tasks, such as risk assessment, detection, diagnosis, prognosis, and therapy response, as well as in multi-omics disease discovery. A brief overview of the field is given here, allowing the reader to recognize the terminology, the various subfields, and components of machine learning, as well as the clinical potential. Radiomics, an expansion of computer-aided diagnosis, has been defined as the conversion of images to minable data. The ultimate benefit of quantitative radiomics is to (1) yield predictive image-based phenotypes of disease for precision medicine or (2) yield quantitative image-based phenotypes for data mining with other -omics for discovery (ie, imaging genomics). For deep learning in radiology to succeed, note that well-annotated large data sets are needed since deep networks are complex, computer software and hardware are evolving constantly, and subtle differences in disease states are more difficult to perceive than differences in everyday objects. In the future, machine learning in radiology is expected to have a substantial clinical impact with imaging examinations being routinely obtained in clinical practice, providing an opportunity to improve decision support in medical image interpretation. The term of note is decision support, indicating that computers will augment human decision making, making it more effective and efficient. The clinical impact of having computers in the routine clinical practice may allow radiologists to further integrate their knowledge with their clinical colleagues in other medical specialties and allow for precision medicine.
Radiomics: Images are more than pictures, they are data
Gillies, Robert J
Kinahan, Paul E
Hricak, Hedvig
Radiology2015Journal Article, cited 694 times
Website
Radiomics
Imaging features
BRAIN
LUNG
PROSTATE
BLADDER
BREAST
In the past decade, the field of medical image analysis has grown exponentially, with an increased number of pattern recognition tools and an increase in data set sizes. These advances have facilitated the development of processes for high-throughput extraction of quantitative features that result in the conversion of images into mineable data and the subsequent analysis of these data for decision support; this practice is termed radiomics. This is in contrast to the traditional practice of treating medical images as pictures intended solely for visual interpretation. Radiomic data contain first-, second-, and higher-order statistics. These data are combined with other patient data and are mined with sophisticated bioinformatics tools to develop models that may potentially improve diagnostic, prognostic, and predictive accuracy. Because radiomics analyses are intended to be conducted with standard of care images, it is conceivable that conversion of digital images to mineable data will eventually become routine practice. This report describes the process of radiomics, its challenges, and its potential power to facilitate better clinical decision making, particularly in the care of patients with cancer.
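Since radiomics starts by converting a segmented region into mineable numbers, a minimal example helps fix ideas. The sketch below computes a handful of first-order statistics from ROI voxel intensities; the 64-bin histogram used for entropy is an arbitrary illustrative choice, and real studies typically rely on standardized packages such as PyRadiomics.

```python
import numpy as np

def first_order_features(roi):
    """A few first-order radiomic statistics from the voxel intensities
    of a segmented region of interest. `roi` is a 1D array of values."""
    counts, _ = np.histogram(roi, bins=64)
    p = counts[counts > 0] / counts.sum()
    mu, sd = roi.mean(), roi.std()
    return {
        "mean": float(mu),
        "variance": float(roi.var()),
        "skewness": float((((roi - mu) / sd) ** 3).mean()),
        "kurtosis": float((((roi - mu) / sd) ** 4).mean()),
        "entropy": float(-(p * np.log2(p)).sum()),
    }
```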
Intuitive Error Space Exploration of Medical Image Data in Clinical Daily Routine
An Uncertainty-aware Workflow for Keyhole Surgery Planning using Hierarchical Image Semantics
Gillmann, Christina
Maack, Robin G. C.
Post, Tobias
Wischgoll, Thomas
Hagen, Hans
Visual Informatics2018Journal Article, cited 1 times
Website
TCGA-GBM
Surgical guidance
BRAIN
KNEE
Segmentation
Keyhole surgeries become increasingly important in clinical daily routine as they help minimize damage to a patient's healthy tissue. The planning of keyhole surgeries is based on medical imaging and is an important factor influencing the surgery's success. Due to the image reconstruction process, medical image data contain uncertainty that complicates the planning of a keyhole surgery. In this paper we present a visual workflow that helps clinicians examine and compare different surgery paths as well as visualize the patient's affected tissue. The analysis is based on the concept of hierarchical image semantics, which segment the underlying image data with respect to the input images' uncertainty and the user's understanding of tissue composition. Users can define arbitrary surgery paths that they need to investigate further. The defined paths can be queried by a rating function to identify paths that fulfill user-defined properties. The workflow allows a visual inspection of the affected tissues and their substructures. Therefore, the workflow includes a linked-view system indicating the three-dimensional location of selected surgery paths as well as how these paths affect the patient's tissue. To show the effectiveness of the presented approach, we applied it to the planning of a keyhole surgery for brain tumor removal and to a kneecap surgery.
Graph Perceiver Network for Lung Tumor and Bronchial Premalignant Lesion Stratification from Histopathology
Gindra, R. H.
Zheng, Y.
Green, E. J.
Reid, M. E.
Mazzilli, S. A.
Merrick, D. T.
Burks, E. J.
Kolachalama, V. B.
Beane, J. E.
Am J Pathol2024Journal Article, cited 0 times
Website
CPTAC-LSCC
CPTAC-LUAD
H&E-stained slides
Whole Slide Imaging (WSI)
Algorithm Development
Multilayer Perceptron (MLP)
Pathomics
Imaging features
Bronchial premalignant lesions (PMLs) precede the development of invasive lung squamous cell carcinoma (LUSC), posing a significant challenge in distinguishing those likely to advance to LUSC from those that might regress without intervention. In this context, we present a novel computational approach, the Graph Perceiver Network, leveraging hematoxylin and eosin-stained whole slide images to stratify endobronchial biopsies of PMLs across a spectrum from normal to tumor lung tissues. The Graph Perceiver Network outperforms existing frameworks in classification accuracy predicting LUSC, lung adenocarcinoma, and nontumor (normal) lung tissue on The Cancer Genome Atlas and Clinical Proteomic Tumor Analysis Consortium datasets containing lung resection tissues while efficiently generating pathologist-aligned, class-specific heat maps. The network was further tested using endobronchial biopsies from two data cohorts, containing normal to carcinoma in situ histology, and it demonstrated a unique capability to differentiate carcinoma in situ lung squamous PMLs based on their progression status to invasive carcinoma. The network may have utility in stratifying PMLs for chemoprevention trials or more aggressive follow-up.
Interpretable Machine Learning Model for Locoregional Relapse Prediction in Oropharyngeal Cancers
Computed tomography has been widely used in medical diagnosis to generate accurate images of the body's internal organs. However, cancer risk is associated with high X-ray dose CT scans, limiting its applicability in medical diagnosis and telemedicine applications. CT scans acquired at low X-ray dose generate low-quality images with noise and streaking artifacts. Therefore, we develop a deep learning-based CT image enhancement algorithm for improving the quality of low-dose CT images. Our algorithm uses a convolutional neural network called DenseNet and Deconvolution network (DDnet) to remove noise and artifacts from the input image. To evaluate its advantages in medical diagnosis, we use DDnet to enhance chest CT scans of COVID-19 patients. We show that image enhancement can improve the accuracy of COVID-19 diagnosis (~5% improvement), using a framework consisting of AI-based tools. For training and inference of the image enhancement AI model, we use a heterogeneous computing platform to accelerate execution and decrease turnaround time. Specifically, we use multiple GPUs in a distributed setup to exploit batch-level parallelism during training. We achieve approximately 7x speedup with 8 GPUs running in parallel compared to training DDnet on a single GPU. For inference, we implement DDnet using OpenCL and evaluate its performance on multi-core CPU, many-core GPU, and FPGA. Our OpenCL implementation is at least 2x faster than an analogous PyTorch implementation on each platform and achieves comparable performance between CPU and FPGA, while the FPGA operates at a much lower frequency.
Planning, guidance, and quality assurance of pelvic screw placement using deformable image registration
Goerres, J.
Uneri, A.
Jacobson, M.
Ramsay, B.
De Silva, T.
Ketcha, M.
Han, R.
Manbachi, A.
Vogt, S.
Kleinszig, G.
Wolinsky, J. P.
Osgood, G.
Siewerdsen, J. H.
Phys Med Biol2017Journal Article, cited 4 times
Website
CT Lymph Nodes
Segmentation
Percutaneous pelvic screw placement is challenging due to narrow bone corridors surrounded by vulnerable structures and difficult visual interpretation of complex anatomical shapes in 2D x-ray projection images. To address these challenges, a system for planning, guidance, and quality assurance (QA) is presented, providing functionality analogous to surgical navigation, but based on robust 3D-2D image registration techniques using fluoroscopy images already acquired in routine workflow. Two novel aspects of the system are investigated: automatic planning of pelvic screw trajectories and the ability to account for deformation of surgical devices (K-wire deflection). Atlas-based registration is used to calculate a patient-specific plan of screw trajectories in preoperative CT. 3D-2D registration aligns the patient to CT within the projective geometry of intraoperative fluoroscopy. Deformable known-component registration (dKC-Reg) localizes the surgical device, and the combination of plan and device location is used to provide guidance and QA. A leave-one-out analysis evaluated the accuracy of automatic planning, and a cadaver experiment compared the accuracy of dKC-Reg to rigid approaches (e.g. optical tracking). Surgical plans conformed within the bone cortex by 3-4 mm for the narrowest corridor (superior pubic ramus) and >5 mm for the widest corridor (tear drop). The dKC-Reg algorithm localized the K-wire tip within 1.1 mm and 1.4 degrees and was consistently more accurate than rigid-body tracking (errors up to 9 mm). The system was shown to automatically compute reliable screw trajectories and accurately localize deformed surgical devices (K-wires). Such capability could improve guidance and QA in orthopaedic surgery, where workflow is impeded by manual planning, conventional tool trackers add complexity and cost, rigid tool assumptions are often inaccurate, and qualitative interpretation of complex anatomy from 2D projections is prone to trial-and-error with extended fluoroscopy time.
On the classification of simple and complex biological images using Krawtchouk moments and Generalized pseudo-Zernike moments: a case study with fly wing images and breast cancer mammograms
Goh, J. Y.
Khang, T. F.
PeerJ Comput Sci2021Journal Article, cited 0 times
Website
CBIS-DDSM
BREAST
mammography
Radiomic features
Radiomics
Algorithm Development
Random Forest
Image analysis
Machine Learning
In image analysis, orthogonal moments are useful mathematical transformations for creating new features from digital images. Moreover, orthogonal moment invariants produce image features that are resistant to translation, rotation, and scaling operations. Here, we show the result of a case study in biological image analysis to help researchers judge the potential efficacy of image features derived from orthogonal moments in a machine learning context. In taxonomic classification of forensically important flies from the Sarcophagidae and the Calliphoridae family (n = 74), we found the GUIDE random forests model was able to completely classify samples from 15 different species correctly based on Krawtchouk moment invariant features generated from fly wing images, with zero out-of-bag error probability. For the more challenging problem of classifying breast masses based solely on digital mammograms from the CBIS-DDSM database (n = 1,151), we found that image features generated from the Generalized pseudo-Zernike moments and the Krawtchouk moments only enabled the GUIDE kernel model to achieve modest classification performance. However, using the predicted probability of malignancy from GUIDE as a feature together with five expert features resulted in a reasonably good model that has mean sensitivity of 85%, mean specificity of 61%, and mean accuracy of 70%. We conclude that orthogonal moments have high potential as informative image features in taxonomic classification problems where the patterns of biological variations are not overly complex. For more complicated and heterogeneous patterns of biological variations such as those present in medical images, relying on orthogonal moments alone to reach strong classification performance is unrealistic, but integrating prediction result using them with carefully selected expert features may still produce reasonably good prediction models.
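Off-the-shelf implementations of Krawtchouk and generalized pseudo-Zernike moments are uncommon, so as a flavor of the same idea (moment invariants resistant to translation, rotation, and scaling), the sketch below uses OpenCV's Hu moments instead; this is a named substitution for illustration, not the feature set used in the paper.

```python
import cv2
import numpy as np

# Toy binary shape; in practice this would be a wing image or a
# mammographic mass ROI.
img = np.zeros((128, 128), dtype=np.uint8)
cv2.circle(img, (64, 64), 30, 255, -1)

# Seven Hu moment invariants: translation-, rotation-, scale-invariant.
hu = cv2.HuMoments(cv2.moments(img)).ravel()
print(hu)   # a 7-element feature vector usable in a classifier
```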
DeepCADe: A Deep Learning Architecture for the Detection of Lung Nodules in CT Scans
Early detection of lung nodules in thoracic Computed Tomography (CT) scans is of great importance for the successful diagnosis and treatment of lung cancer. Due to improvements in screening technologies, and an increased demand for their use, radiologists are required to analyze an ever increasing amount of image data, which can affect the quality of their diagnoses. Computer-Aided Detection (CADe) systems are designed to assist radiologists in this endeavor. In this thesis, we present DeepCADe, a novel CADe system for the detection of lung nodules in thoracic CT scans which produces improved results compared to the state-of-the-art in this field of research. CT scans are grayscale images, so the terms scans and images are used interchangeably in this work. DeepCADe was trained with the publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) database, which contains 1018 thoracic CT scans with nodules of different shape and size, and is built on a Deep Convolutional Neural Network (DCNN), which is trained using the backpropagation algorithm to extract volumetric features from the input data and detect lung nodules in sub-volumes of CT images. Considering only lung nodules that have been annotated by at least three radiologists, DeepCADe achieves a 2.1% improvement in sensitivity (true positive rate) over the best result in the current published scientific literature, assuming an equal number of false positives (FPs) per scan. More specifically, it achieves a sensitivity of 89.6% with 4 FPs per scan, or a sensitivity of 92.8% with 10 FPs per scan. Furthermore, DeepCADe is validated on a larger number of lung nodules compared to other studies (Table 5.2). This increases the variation in the appearance of nodules and therefore makes their detection by a CADe system more challenging. We study the application of Deep Convolutional Neural Networks (DCNNs) for the detection of lung nodules in thoracic CT scans. We explore some of the meta parameters that affect the performance of such models, which include: 1. the network architecture, i.e. its structure in terms of convolution layers, fully-connected layers, pooling layers, and activation functions, 2. the receptive field of the network, which defines the dimensions of its input, i.e. how much of the CT scan is processed by the network in a single forward pass, 3. a threshold value, which affects the sliding window algorithm with which the network is used to detect nodules in complete CT scans, and 4. the agreement level, which is used to interpret the independent nodule annotations of four experienced radiologists. Finally, we visualize the shape and location of annotated lung nodules and compare them to the output of DeepCADe. This demonstrates the compactness and flexibility in shape of the nodule predictions made by our proposed CADe system. In addition to the 5-fold cross validation results presented in this thesis, these visual results support the applicability of our proposed CADe system in real-world medical practice.
Lung nodule detection in CT images using deep convolutional neural networks
Golan, Rotem
Jacob, Christian
Denzinger, Jörg
2016Conference Proceedings, cited 26 times
Website
LIDC-IDRI
Radiomics
Computer Aided Detection (CADe)
Computed Tomography (CT)
Early detection of lung nodules in thoracic Computed Tomography (CT) scans is of great importance for the successful diagnosis and treatment of lung cancer. Due to improvements in screening technologies, and an increased demand for their use, radiologists are required to analyze an ever increasing amount of image data, which can affect the quality of their diagnoses. Computer-Aided Detection (CADe) systems are designed to assist radiologists in this endeavor. Here, we present a CADe system for the detection of lung nodules in thoracic CT images. Our system is based on (1) the publicly available Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) database, which contains 1018 thoracic CT scans with nodules of different shape and size, and (2) a deep Convolutional Neural Network (CNN), which is trained, using the back-propagation algorithm, to extract valuable volumetric features from the input data and detect lung nodules in sub-volumes of CT images. Considering only those test nodules that have been annotated by four radiologists, our CADe system achieves a sensitivity (true positive rate) of 78.9% with 20 false positives (FPs) per scan, or a sensitivity of 71.2% with 10 FPs per scan. This is achieved without using any segmentation or additional FP reduction procedures, both of which are commonly used in other CADe systems. Furthermore, our CADe system is validated on a larger number of lung nodules compared to other studies, which increases the variation in their appearance, and therefore, makes their detection by a CADe system more challenging.
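Both DeepCADe and this CADe system apply a trained network to sub-volumes via a sliding window plus a threshold. Below is a minimal PyTorch sketch of that scan loop; the window size, stride, threshold, and the assumption that `model` returns a single nodule logit per patch are all illustrative, and a real system would batch patches rather than score them one at a time.

```python
import torch

def detect_nodules(volume, model, win=32, stride=16, thr=0.5):
    """Slide a fixed-size window over a CT volume and keep the centres
    of sub-volumes scored above `thr`. `volume` is a 3D tensor; `model`
    maps a (1, 1, win, win, win) patch to one nodule logit."""
    hits = []
    model.eval()
    with torch.no_grad():
        for z in range(0, volume.shape[0] - win + 1, stride):
            for y in range(0, volume.shape[1] - win + 1, stride):
                for x in range(0, volume.shape[2] - win + 1, stride):
                    patch = volume[z:z+win, y:y+win, x:x+win]
                    p = model(patch[None, None]).sigmoid().item()
                    if p > thr:
                        hits.append((z + win//2, y + win//2, x + win//2, p))
    return hits
```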
Automated Screening for Abdominal Aortic Aneurysm in CT Scans under Clinical Conditions Using Deep Learning
Golla, A. K.
Tonnes, C.
Russ, T.
Bauer, D. F.
Froelich, M. F.
Diehl, S. J.
Schoenberg, S. O.
Keese, M.
Schad, L. R.
Zollner, F. G.
Rink, J. S.
Diagnostics (Basel)2021Journal Article, cited 0 times
Website
Pancreas-CT
Vasculature
abdominal aortic aneurysm
Computed Tomography (CT)
Deep Learning
Classification
deep convolutional neural network (DCNN)
Algorithm Development
Abdominal aortic aneurysms (AAA) may remain clinically silent until they enlarge and patients present with a potentially lethal rupture. This necessitates early detection and elective treatment. The goal of this study was to develop an easy-to-train algorithm which is capable of automated AAA screening in CT scans and can be applied to an intra-hospital environment. Three deep convolutional neural networks (ResNet, VGG-16 and AlexNet) were adapted for 3D classification and applied to a dataset consisting of 187 heterogenous CT scans. The 3D ResNet outperformed both other networks. Across the five folds of the first training dataset it achieved an accuracy of 0.856 and an area under the curve (AUC) of 0.926. Subsequently, the algorithms performance was verified on a second data set containing 106 scans, where it ran fully automated and resulted in an accuracy of 0.953 and an AUC of 0.971. A layer-wise relevance propagation (LRP) made the decision process interpretable and showed that the network correctly focused on the aortic lumen. In conclusion, the deep learning-based screening proved to be robust and showed high performance even on a heterogeneous multi-center data set. Integration into hospital workflow and its effect on aneurysm management would be an exciting topic of future research.
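Adapting an off-the-shelf backbone to 3D single-channel CT classification, as the study does for ResNet, VGG-16, and AlexNet, can look roughly like the torchvision-based sketch below; the r3d_18 backbone and the specific stem surgery are illustrative assumptions, not the authors' exact adaptation.

```python
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

# torchvision's 3D ResNet expects 3-channel video clips, so swap the
# stem convolution for a 1-channel CT input and resize the final layer.
model = r3d_18(weights=None)
model.stem[0] = nn.Conv3d(1, 64, kernel_size=(3, 7, 7),
                          stride=(1, 2, 2), padding=(1, 3, 3), bias=False)
model.fc = nn.Linear(model.fc.in_features, 2)    # AAA vs. no AAA

logits = model(torch.randn(1, 1, 64, 128, 128))  # (B, C, D, H, W)
```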
Pulmonary nodule segmentation in computed tomography with deep learning
Early detection of lung cancer is essential for treating the disease. Lung nodule segmentation systems can be used together with Computer-Aided Detection (CAD) systems, and help doctors diagnose and manage lung cancer. In this work, we create a lung nodule segmentation system based on deep learning. Deep learning is a sub-field of machine learning responsible for state-of-the-art results in several segmentation datasets such as the PASCAL VOC 2012. Our model is a modified 3D U-Net, trained on the LIDC-IDRI dataset, using the intersection over union (IOU) loss function. We show our model works for multiple types of lung nodules. Our model achieves state-of-the-art performance on the LIDC test set, using nodules annotated by at least 3 radiologists and with a consensus truth of 50%.
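The IoU objective mentioned above has a standard differentiable form for binary masks. A minimal sketch, with the soft relaxation and epsilon smoothing as assumed details:

```python
import torch

def soft_iou_loss(logits, target, eps=1e-6):
    """Differentiable IoU loss for binary segmentation: sigmoid
    probabilities stand in for the hard mask so gradients flow."""
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum()
    union = probs.sum() + target.sum() - inter
    return 1.0 - (inter + eps) / (union + eps)
```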
CT-Based COVID-19 triage: Deep multitask learning improves joint identification and severity quantification
Goncharov, M.
Pisov, M.
Shevtsov, A.
Shirokikh, B.
Kurmukov, A.
Blokhin, I.
Chernina, V.
Solovev, A.
Gombolevskiy, V.
Morozov, S.
Belyaev, M.
Med Image Anal2021Journal Article, cited 83 times
Website
NSCLC-Radiomics
Radiomic features
Training
*COVID-19/diagnostic imaging
*Deep Learning
Humans
Pandemics
SARS-CoV-2
Tomography
X-Ray Computed
*Triage
Covid-19
LUNA16 Challenge
Computed Tomography (CT)
Convolutional Neural Network (CNN)
LUNG
ResNet50
The current COVID-19 pandemic overloads healthcare systems, including radiology departments. Though several deep learning approaches were developed to assist in CT analysis, nobody considered study triage directly as a computer science problem. We describe two basic setups: Identification of COVID-19 to prioritize studies of potentially infected patients to isolate them as early as possible; Severity quantification to highlight patients with severe COVID-19, thus direct them to a hospital or provide emergency medical care. We formalize these tasks as binary classification and estimation of affected lung percentage. Though similar problems were well-studied separately, we show that existing methods could provide reasonable quality only for one of these setups. We employ a multitask approach to consolidate both triage approaches and propose a convolutional neural network to leverage all available labels within a single model. In contrast with the related multitask approaches, we show the benefit from applying the classification layers to the most spatially detailed feature map at the upper part of U-Net instead of the less detailed latent representation at the bottom. We train our model on approximately 1500 publicly available CT studies and test it on the holdout dataset that consists of 123 chest CT studies of patients drawn from the same healthcare system, specifically 32 COVID-19 and 30 bacterial pneumonia cases, 30 cases with cancerous nodules, and 31 healthy controls. The proposed multitask model outperforms the other approaches and achieves ROC AUC scores of 0.87+/-0.01 vs. bacterial pneumonia, 0.93+/-0.01 vs. cancerous nodules, and 0.97+/-0.01 vs. healthy controls in Identification of COVID-19, and achieves 0.97+/-0.01 Spearman Correlation in Severity quantification. We have released our code and shared the annotated lesions masks for 32 CT images of patients with COVID-19 from the test dataset.
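The paper's key architectural point, attaching the triage classifier to the most spatially detailed decoder feature map rather than the U-Net bottleneck, can be sketched in a few lines of PyTorch. The module below is a toy stand-in, not the authors' network; the channel count and the 2D setting are assumptions.

```python
import torch
import torch.nn as nn

class MultitaskHead(nn.Module):
    """Attach a study-level classifier to the most detailed decoder
    feature map, alongside the usual per-voxel segmentation head."""
    def __init__(self, channels, n_classes):
        super().__init__()
        self.seg = nn.Conv2d(channels, 1, kernel_size=1)   # lesion mask
        self.cls = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, n_classes))                # triage label

    def forward(self, feats):     # feats: (B, C, H, W) from the U-Net top
        return self.seg(feats), self.cls(feats)

feats = torch.randn(2, 32, 256, 256)
seg_logits, cls_logits = MultitaskHead(32, 2)(feats)
```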
vox2vec: A Framework for Self-supervised Contrastive Learning of Voxel-Level Representations in Medical Images
Goncharov, Mikhail
Soboleva, Vera
Kurmukov, Anvar
Pisov, Maxim
Belyaev, Mikhail
2023Book Section, cited 0 times
MIDRC-RICORD-1A
NLST
This paper introduces vox2vec — a contrastive method for self-supervised learning (SSL) of voxel-level representations. vox2vec representations are modeled by a Feature Pyramid Network (FPN): a voxel representation is a concatenation of the corresponding feature vectors from different pyramid levels. The FPN is pre-trained to produce similar representations for the same voxel in different augmented contexts and distinctive representations for different voxels. This results in unified multi-scale representations that capture both global semantics (e.g., body part) and local semantics (e.g., different small organs or healthy versus tumor tissue). We use vox2vec to pre-train a FPN on more than 6500 publicly available computed tomography images. We evaluate the pre-trained representations by attaching simple heads on top of them and training the resulting models for 22 segmentation tasks. We show that vox2vec outperforms existing medical imaging SSL techniques in three evaluation setups: linear and non-linear probing and end-to-end fine-tuning. Moreover, a non-linear head trained on top of the frozen vox2vec representations achieves competitive performance with the FPN trained from scratch while having 50 times fewer trainable parameters. The code is available at https://github.com/mishgon/vox2vec.
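A voxel-level contrastive objective of this general family can be written compactly: matching rows of two augmented views are positives, all other rows act as negatives. The InfoNCE-style loss below is a generic sketch, not the released vox2vec code (linked above), and the temperature value is an assumption.

```python
import torch
import torch.nn.functional as F

def voxel_info_nce(z1, z2, tau=0.1):
    """InfoNCE over voxel embeddings: row i of z1 and z2 are two views
    of the same voxel. z1, z2: (N, D) tensors of feature vectors."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                 # (N, N) similarities
    targets = torch.arange(z1.shape[0])
    return F.cross_entropy(logits, targets)
```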
Computer-aided diagnosis of lung cancer: the effect of training data sets on classification accuracy of lung nodules
Gong, Jing
Liu, Ji-Yu
Sun, Xi-Wen
Zheng, Bin
Nie, Sheng-Dong
Physics in Medicine and Biology2018Journal Article, cited 51 times
Website
NSCLC-Radiomics
This study aims to develop a computer-aided diagnosis (CADx) scheme for classification between malignant and benign lung nodules, and to assess whether CADx performance changes in detecting nodules associated with early and advanced stage lung cancer. The study involves 243 biopsy-confirmed pulmonary nodules. Among them, 76 are benign, 81 are stage I and 86 are stage III malignant nodules. The cases are separated into three data sets involving: (1) all nodules, (2) benign and stage I malignant nodules, and (3) benign and stage III malignant nodules. A CADx scheme is applied to segment lung nodules depicted on computed tomography images, and 66 3D image features are initially computed. Then, three machine learning models, namely a support vector machine, a naive Bayes classifier, and linear discriminant analysis, are separately trained and tested using the three data sets and a leave-one-case-out cross-validation method embedded with a Relief-F feature selection algorithm. When separately using the three data sets to train and test the three classifiers, the average areas under receiver operating characteristic curves (AUC) are 0.94, 0.90 and 0.99, respectively. When using the classifiers trained using data sets with all nodules, average AUC values are 0.88 and 0.99 for detecting early and advanced stage nodules, respectively. AUC values computed from three classifiers trained using the same data set are consistent without statistically significant difference (p > 0.05). This study demonstrates (1) the feasibility of applying a CADx scheme to accurately distinguish between benign and malignant lung nodules, and (2) a positive trend between CADx performance and cancer progression stage. Thus, in order to increase CADx performance in detecting subtle and early cancer, training data sets should include more diverse early stage cancer cases.
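The evaluation protocol described above, leave-one-case-out cross-validation with a Relief-F filter embedded inside each fold, can be prototyped as follows. This is a hedged sketch: it assumes the third-party skrebate package for Relief-F, keeps only the SVM arm, and the choice of 10 selected features is illustrative.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.svm import SVC
from skrebate import ReliefF   # Relief-F filter (assumed installed)

def loo_scores(X, y, k=10):
    """Leave-one-case-out: re-run Relief-F feature selection inside
    every fold, then score the held-out case with an SVM."""
    scores = np.zeros(len(y))
    for train, test in LeaveOneOut().split(X):
        sel = ReliefF(n_features_to_select=k).fit(X[train], y[train])
        clf = SVC(probability=True).fit(sel.transform(X[train]), y[train])
        scores[test] = clf.predict_proba(sel.transform(X[test]))[:, 1]
    return scores   # pair with y to compute an ROC AUC
```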
Automatic detection of pulmonary nodules in CT images by incorporating 3D tensor filtering with local image feature analysis
Gong, J.
Liu, J. Y.
Wang, L. J.
Sun, X. W.
Zheng, B.
Nie, S. D.
Physica Medica2018Journal Article, cited 4 times
Website
NSCLC-Radiomics
3D tensor filtering
CT image
Curvedness
Shape index
Algorithm Development
LUNG
Computer Aided Detection (CADe)
Distance-based detection of out-of-distribution silent failures for Covid-19 lung lesion segmentation
González, Camila
Gotkowski, Karol
Fuchs, Moritz
Bucher, Andreas
Dadras, Armin
Fischbach, Ricarda
Kaltenborn, Isabel Jasmin
Mukhopadhyay, Anirban
Medical Image Analysis2022Journal Article, cited 0 times
ISBI-MR-Prostate-2013
Automatic segmentation of ground glass opacities and consolidations in chest computer tomography (CT) scans can potentially ease the burden of radiologists during times of high resource utilisation. However, deep learning models are not trusted in the clinical routine due to failing silently on out-of-distribution (OOD) data. We propose a lightweight OOD detection method that leverages the Mahalanobis distance in the feature space and seamlessly integrates into state-of-the-art segmentation pipelines. The simple approach can even augment pre-trained models with clinically relevant uncertainty quantification. We validate our method across four chest CT distribution shifts and two magnetic resonance imaging applications, namely segmentation of the hippocampus and the prostate. Our results show that the proposed method effectively detects far- and near-OOD samples across all explored scenarios.
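The core of the proposed OOD detector, a Mahalanobis distance in a network's feature space, is simple to state. The sketch below fits a single Gaussian to in-distribution features and scores new samples; where the features come from (e.g., pooled encoder activations) and how the threshold is chosen are left as assumptions.

```python
import numpy as np

class MahalanobisOOD:
    """Fit a Gaussian to in-distribution feature vectors; flag test
    samples whose Mahalanobis distance exceeds a validation-chosen
    threshold as out-of-distribution."""
    def fit(self, feats):                       # feats: (N, D)
        self.mu = feats.mean(axis=0)
        cov = np.cov(feats, rowvar=False)
        self.prec = np.linalg.pinv(cov)         # pseudo-inverse for stability
        return self

    def distance(self, x):                      # x: (D,)
        d = x - self.mu
        return float(np.sqrt(d @ self.prec @ d))
```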
3D Brain Tumor Segmentation and Survival Prediction Using Ensembles of Convolutional Neural Networks
González, S. Rosas
Zemmoura, I.
Tauber, C.
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Radiomic features
Convolutional Neural Networks (CNNs) are the state of the art in many medical image applications, including brain tumor segmentation. However, no successful studies using CNNs have been reported for survival prediction in glioma patients. In this work, we present two different solutions: one for tumor segmentation and the other for survival prediction. We propose using an ensemble of asymmetric U-Net-like architectures to improve segmentation results in the enhancing tumor region and a DenseNet model for survival prognosis. We quantitatively compare deep learning with classical regression and classification models based on radiomics features and tumor growth model features for survival prediction on the BraTS 2020 database, and we provide insight into the limitations of these models in accurately predicting survival. Our method's current performance on the BraTS 2020 test set is Dice scores of 0.80, 0.87, and 0.80 for enhancing tumor, whole tumor, and tumor core, respectively, with an overall Dice of 0.82. For the survival prediction task, we achieved an accuracy of 0.57. In addition, we propose a voxel-wise uncertainty estimation for our segmentation method that can be used effectively to improve brain tumor segmentation.
Tumour volume analysis applied to imaging and histological examinations in breast cancer
PURPOSE: Response Evaluation Criteria in Solid Tumours (RECIST) determines partial response (PR) and progressive disease (PD) as a 30 % reduction and 20 % increase in the longest diameter (LD), respectively. Tumour volume analysis (TVA) utilises three diameters to calculate response parameters. PATIENTS AND METHODS: We conducted a pilot investigation of patients who underwent neoadjuvant breast cancer treatment and evaluation using RECIST with LD measurements and TVA with three diametric measurements, using the parameters PR (>30 % tumour regression), PD (>20 % tumour growth), and intermediate stable disease (SD). According to TVA, RECIST miscategorised 7 of 28 patients (25 %). We evaluated 145 patients who underwent baseline breast magnetic resonance imaging (MRI), neoadjuvant chemotherapy, presurgical MRI, and surgery and calculated LD and volume from all MRI examinations. RESULTS: Of the 173 patients, 157 had measurable disease at baseline and treatment completion, and 32 were miscategorised (20.4 %). The number of patients with a PR increased from 123 to 150 after TVA. The sensitivity of RECIST-measured responses (95 % confidence interval: 97-100 %) was 100 % for TVA. This altered the staging, as 32 of 157 (20.4 %) patients were allocated to another response group, with fewer cases of SD: 26 patients moved from SD to PR and 6 patients from SD to PD. CONCLUSION: Measuring a solid mass using LD is fundamentally flawed, as the lesser axes considerably affect the volume, leading to inaccurate response categorisation, with implications for patient management. TVA is a novel method that increases accuracy of tumour size measurement and response to therapy.
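The abstract does not state the exact volume formula, but a common way to turn three orthogonal diameters into a volume is the ellipsoid model, V = (pi/6) * a * b * c. The sketch below pairs that assumed formula with the response thresholds given above (PR beyond 30 % volumetric regression, PD beyond 20 % growth, SD in between).

```python
import math

def ellipsoid_volume(a, b, c):
    """Tumour volume from three orthogonal diameters (assumed
    ellipsoid model, not necessarily the paper's exact formula)."""
    return math.pi / 6.0 * a * b * c

def categorize(v_baseline, v_followup, pr=-0.30, pd=0.20):
    """PR below -30 % volume change, PD above +20 %, SD in between."""
    change = (v_followup - v_baseline) / v_baseline
    return "PR" if change <= pr else "PD" if change >= pd else "SD"
```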
Local Binary Pattern-Based Texture Analysis to Predict IDH Genotypes of Glioma Cancer Using Supervised Machine Learning Classifiers
Machine learning-based quantitative assessment of glioma has recently gained attention among researchers in the field of medical image analysis. Such analysis makes use of either hand-crafted radiographic features with radiomic-based methods or auto-extracted features using deep learning-based methods. Radiomic-based methods cover a wide spectrum of radiographic features including texture, shape, volume, intensity, histogram, etc. The objective of the paper is to demonstrate the discriminative role of textures for molecular categorization of glioma using supervised machine learning techniques. This work aims to make state-of-the-art machine learning solutions available for magnetic resonance imaging (MRI)-based genomic analysis of glioma as a simple and sufficient technique based on a single feature type, i.e., textures. This work demonstrates the importance of texture features, using the simple, computationally efficient local binary pattern (LBP) method, for isocitrate dehydrogenase (IDH)-based discrimination of glioma into IDH mutant and IDH wild type. Further, such texture-based discriminative analysis alone can facilitate an immediate recommendation for further diagnostic decisions and personalized treatment plans for glioma patients.
IDH-Based Radiogenomic Characterization of Glioma Using Local Ternary Pattern Descriptor Integrated with Radiographic Features and Random Forest Classifier
Gore, Sonal
Jagtap, Jayant
International Journal of Image and Graphics2021Journal Article, cited 0 times
Website
TCGA-GBM
TCGA-LGG
Random forest classifier
Radiogenomics
BRAIN
Magnetic Resonance Imaging (MRI)
Mutations in family of Isocitrate Dehydrogenase (IDH) gene occur early in oncogenesis, especially with glioma brain tumor. Molecular diagnostic of glioma using machine learning has grabbed attention to some extent from last couple of years. The development of molecular-level predictive approach carries great potential in radiogenomic field. But more focused efforts need to be put to develop such approaches. This study aims to develop an integrative genomic diagnostic method to assess the significant utility of textures combined with other radiographic and clinical features for IDH classification of glioma into IDH mutant and IDH wild type. Random forest classifier is used for classification of combined set of clinical features and radiographic features extracted from axial T2-weighted Magnetic Resonance Imaging (MRI) images of low- and high-grade glioma. Such radiogenomic analysis is performed on The Cancer Genome Atlas (TCGA) data of 74 patients of IDH mutant and 104 patients of IDH wild type. Texture features are extracted using uniform, rotation invariant Local Ternary Pattern (LTP) method. Other features such as shape, first-order statistics, image contrast-based, clinical data like age, histologic grade are combined with LTP features for IDH discrimination. Proposed random forest-assisted model achieved an accuracy of 85.89% with multivariate analysis of integrated set of feature descriptors using Glioblastoma and Low-Grade Glioma dataset available with The Cancer Imaging Archive (TCIA). Such an integrated feature analysis using LTP textures and other descriptors can effectively predict molecular class of glioma as IDH mutant and wild type.
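As a rough sketch of the texture-plus-classifier recipe used in this line of work: scikit-image ships uniform, rotation-invariant LBP (the local ternary pattern used in the paper adds a third, noise-tolerant state and is not in scikit-image), and the resulting histograms can be concatenated with clinical features and fed to a random forest. Everything below is illustrative, not the authors' code.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def lbp_histogram(slice2d, P=8, R=1):
    """Uniform rotation-invariant LBP histogram for one T2w slice;
    with method='uniform' the codes take values 0..P+1."""
    codes = local_binary_pattern(slice2d, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2),
                           density=True)
    return hist

# X rows = np.concatenate([lbp_histogram(slice), clinical_features])
# RandomForestClassifier(n_estimators=500).fit(X, idh_labels)
```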
Radiogenomic analysis: 1p/19q codeletion based subtyping of low-grade glioma by analysing advanced biomedical texture descriptors
Gore, Sonal
Jagtap, Jayant
Journal of King Saud University - Computer and Information Sciences2021Journal Article, cited 1 times
Website
LGG-1p19qDeletion
Gray-level co-occurrence matrix (GLCM)
Random Forest
Radiogenomics
BRAIN
Presurgical discrimination of 1p/19q codeletion status may have prognostic and diagnostic value for glioma patients and inform immediate personalized treatment. Artificial intelligence-based models have proved effective in computer-aided diagnostic systems for glioma. The objective of this study is to present an advanced biomedical texture descriptor for machine learning-assisted identification of 1p/19q codeletion status in low-grade glioma (LGG), and to verify the efficacy of textures extracted using local binary patterns (LBP) and derived from the gray-level co-occurrence matrix (GLCM). The study used a random forest-assisted radiomics model to analyse MRI images of 159 subjects. Four advanced biomedical texture descriptors are proposed by experimenting with different extensions of the LBP method. These variants (I to IV), with 8-bit, 16-bit, or 24-bit LBP codes, are applied with different orientations in 5 × 5 and 7 × 7 square neighbourhoods and recorded in LBP histograms. These histogram features are concatenated with GLCM-based textures, including energy, correlation, contrast, and homogeneity. The texture descriptors performed best with a classification accuracy of 87.50% (AUC: 0.917, sensitivity: 95%, specificity: 75%, F1-score: 90.48%) using the 8-bit LBP variant I. The 10-fold cross-validated accuracies of all four sets range from 65.62% to 87.50% using the random forest classifier, with mean AUC ranging from 0.646 to 0.917.
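The four GLCM properties named in the abstract (energy, correlation, contrast, homogeneity) are available directly in scikit-image. The snippet below, an illustrative fragment rather than the study's pipeline, computes them for one 8-bit slice, averaged over four orientations.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(slice2d_uint8):
    """Energy, correlation, contrast, and homogeneity from a GLCM at
    distance 1, averaged over four orientations."""
    glcm = graycomatrix(slice2d_uint8, distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)
    return {p: float(graycoprops(glcm, p).mean())
            for p in ("energy", "correlation", "contrast", "homogeneity")}
```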
MRI based genomic analysis of glioma using three pathway deep convolutional neural network for IDH classification
Gore, Sonal
Jagtap Jayant
Turkish Journal of Electrical Engineering & Computer Sciences2021Journal Article, cited 0 times
Website
TCGA-GBM
TCGA-LGG
Radiogenomics
BRAIN
Magnetic Resonance Imaging (MRI)
Pulmonary Lung Cancer Classification Using Deep Neural Networks
Goswami, Jagriti
Singh, Koushlendra Kumar
2023Book Section, cited 0 times
Lung-PET-CT-Dx
Deep Learning
Transfer learning
Classification
Algorithm Development
Computer Aided Diagnosis (CADx)
Lung cancer is the leading cause of cancer-related deaths globally. Computer-assisted detection (CAD) systems have previously been used for various disease diagnosis and hence can serve as an efficient tool for lung cancer diagnosis. In this paper, we study the problem of lung cancer classification using chest computed tomography (CT) scans and positron emission tomography–computed tomography (PET-CT). A subset of publicly available Large-Scale CT and PET/CT Dataset for Lung Cancer Diagnosis (Lung-PET-CT-Dx) is used to train four different deep learning models using transfer learning for classifying three different types of lung cancer: Adenocarcinoma, Small Cell Carcinoma and Squamous Cell Carcinoma, by passing raw nodule patches to the network. The models are evaluated on metrics such as accuracy, precision, recall, F1-score and Cohen’s Kappa score. ROC curves and confusion matrices are also presented to provide a graphical representation of the models’ performance.
Optimal Statistical incorporation of independent feature Stability information into Radiomics Studies
Götz, Michael
Maier-Hein, Klaus H
Scientific RepoRtS2020Journal Article, cited 0 times
Website
LIDC-IDRI
Radiomics
Lung
Models
MITK Phenotyping
Gradient boosting
Random forest
LASSO
Conducting side experiments, termed robustness experiments, to identify features that are stable with respect to rescans, annotation, or other confounding effects is an important element of radiomics research. However, how to include the findings of these experiments in the model building process still needs to be explored. Three different methods for incorporating prior knowledge into a radiomics modelling process were evaluated: the naive approach (ignoring feature quality), the most common approach consisting of removing unstable features, and a novel approach using data augmentation for information transfer (DAFIT). Multiple experiments were conducted using both synthetic and publicly available real lung imaging patient data. Ignoring additional information from side experiments resulted in significantly overestimated model performances, meaning the estimated mean area under the curve achieved with a model was increased. Removing unstable features improved the performance estimation while slightly decreasing the model performance, i.e. decreasing the area under the curve achieved with the model. The proposed approach was superior both in terms of the estimation of the model performance and the actual model performance. Our experiments show that data augmentation can prevent biases in performance estimation and has several advantages over the plain omission of unstable features. The actual gain that can be obtained depends on the quality and applicability of the prior information on the features in the given domain. This will be an important topic of future research.
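The DAFIT idea, using augmentation to carry feature-stability information into training, might look roughly like the following: each case is replicated with per-feature noise scaled by that feature's measured instability. This is one plausible reading for illustration, not the published recipe.

```python
import numpy as np

def augment_with_instability(X, sigma, copies=5, rng=None):
    """Replicate each training case and jitter every feature with noise
    scaled by its measured instability (e.g. test-retest std `sigma`).
    Illustrative sketch only, not the paper's exact DAFIT method."""
    if rng is None:
        rng = np.random.default_rng(0)
    noisy = [X + rng.normal(scale=sigma, size=X.shape)
             for _ in range(copies)]
    return np.vstack([X, *noisy])
```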
Privacy-Preserving Dashboard for F.A.I.R Head and Neck Cancer data supporting multi-centered collaborations
Research in modern healthcare requires vast volumes of data from various healthcare centers across the globe. It is not always feasible to centralize clinical data without compromising privacy. A tool addressing these issues and facilitating reuse of clinical data is the need of the hour. The Federated Learning approach, governed in a set of agreements such as the Personal Health Train (PHT) manages to tackle these concerns by distributing models to the data centers instead of the traditional approach of centralizing datasets. One of the prerequisites of PHT is using semantically interoperable datasets for the models to be able to find them. FAIR (Findable, Accessible, Interoperable, Reusable) principles help in building interoperable and reusable data by adding knowledge representation and providing descriptive metadata. However, the process of making data FAIR is not always easy and straight-forward. Our main objective is to disentangle this process by using domain and technical expertise and get data prepared for federated learning. This paper introduces applications that are easily deployable as Docker containers, which will automate parts of the aforementioned process and significantly simplify the task of creating FAIR clinical data. Our method bypasses the need for clinical researchers to have a high degree of technical skills. We demonstrate the FAIR-ification process by applying it to five Head and Neck cancer datasets (four public and one private). The PHT paradigm is explored by building a distributed visualization dashboard from the aggregated summaries of the FAIR-ified datasets. Using the PHT infrastructure for exchanging only statistical summaries or model coefficients allows researchers to explore data from multiple centers without breaching privacy.
Making head and neck cancer clinical data Findable-Accessible-Interoperable-Reusable to support multi-institutional collaboration and federated learning
Gouthamchand, Varsha
Choudhury, Ananya
Hoebers, Frank J. P.
Wesseling, Frederik W. R.
Welch, Mattea
Kim, Sejin
Kazmierska, Joanna
Dekker, Andre
Haibe-Kains, Benjamin
van Soest, Johan
Wee, Leonard
BJR|Artificial Intelligence2024Journal Article, cited 0 times
HEAD-NECK-RADIOMICS-HN1
Head-Neck-PET-CT
OPC-Radiomics
HNSCC
Federated learning
SPARQL
RDF
Models
Radiomics
OBJECTIVES: Federated learning (FL) is a group of methodologies where statistical modelling can be performed without exchanging identifiable patient data between cooperating institutions. To realize its potential for AI development on clinical data, a number of bottlenecks need to be addressed. One of these is making data Findable-Accessible-Interoperable-Reusable (FAIR). The primary aim of this work is to show that tools making data FAIR allow consortia to collaborate on privacy-aware data exploration, data visualization, and training of models on each other's original data. METHODS: We propose a "Schema-on-Read" FAIR-ification method that adapts for different (re)analyses without needing to change the underlying original data. The procedure involves (1) decoupling the contents of the data from its schema and database structure, (2) annotation with semantic ontologies as a metadata layer, and (3) readout using semantic queries. Open-source tools are given as Docker containers to help local investigators prepare their data on-premises. RESULTS: We created a federated privacy-preserving visualization dashboard for case mix exploration of 5 distributed datasets with no common schema at the point of origin. We demonstrated robust and flexible prognostication model development and validation, linking together different data sources: clinical risk factors and radiomics. CONCLUSIONS: Our procedure leads to successful (re)use of data in FL-based consortia without the need to impose a common schema at every point of origin of data. ADVANCES IN KNOWLEDGE: This work supports the adoption of FL within the healthcare AI community by sharing means to make data more FAIR.
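The "readout using semantic queries" step is concretely a SPARQL query against the annotated RDF graph. The fragment below shows the general shape using rdflib; the file name, prefix IRI, and predicate names are placeholders, not the consortium's actual ontology.

```python
from rdflib import Graph

# Illustrative only: graph file and ontology IRIs are placeholders.
g = Graph().parse("hn1_annotations.ttl", format="turtle")

query = """
PREFIX roo: <http://example.org/radiation-oncology-ontology/>
SELECT ?patient ?stage WHERE {
    ?patient a roo:Patient ;
             roo:hasTumourStage ?stage .
}"""
for patient, stage in g.query(query):
    print(patient, stage)
```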
LungCanary - Pioneering Early Lung Cancer Detection using Machine Learning Algorithms
Lung cancer, a malignant tumour originating in the lung cells, is notoriously challenging to detect early through conventional clinical procedures, which often require invasive techniques. This study introduces LungCanary, an innovative approach leveraging machine learning algorithms to address the global health challenge of early lung cancer detection. By employing advanced computational methods and utilizing transfer learning principles, LungCanary effectively extracts meaningful features from medical imaging data. The model incorporates multiple decision-making components to capture diverse data patterns, significantly enhancing diagnostic precision. Through rigorous experimentation, LungCanary demonstrated superior performance with an accuracy of 98.36%, a precision of 96%, and an error rate of just 1.64%. These findings highlight LungCanary's potential to outperform existing models, marking a breakthrough in the accuracy and reliability of lung cancer diagnostics. The significance of this research lies in its potential to revolutionize early detection methodologies, ultimately improving patient outcomes.
A Physiological-Informed Generative Model for Improving Breast Lesion Classification in Small DCE-MRI Datasets
Gravina, Michela
Maddaluno, Massimo
Marrone, Stefano
Sansone, Mario
Fusco, Roberta
Granata, Vincenza
Petrillo, Antonella
Sansone, Carlo
IEEE Journal of Biomedical and Health Informatics2024Journal Article, cited 0 times
Website
Advanced-MRI-Breast-Lesions
Spatial Decomposition For Robust Domain Adaptation In Prostate Cancer Detection
The utility of high-quality imaging of Prostate Cancer (PCa) using 3.0 Tesla MRI (versus 1.5 Tesla) is well established, yet a vast majority of MRI units across many countries are 1.5 Tesla. Recently, Deep Learning has been applied successfully to augment radiological interpretation of medical images. However, training such models requires very large amount of data, and often the models do not generalize well to data with different acquisition parameters. To address this, we introduce domain standardization, a novel method that enables image synthesis between domains by separating anatomy- and modality-related factors of images. Our results show an improved PCa classification with an AUC of 0.75 compared to traditional transfer learning methods. We envision domain standardization to be applied as a promising tool towards enhancing the interpretation of lower resolution MRI images, reducing the barriers of the potential uptake of deep models for jurisdictions with smaller populations.
Relationship between visceral adipose tissue and genetic mutations (VHL and KDM5C) in clear cell renal cell carcinoma
Greco, Federico
Mallio, Carlo Augusto
La radiologia medica2021Journal Article, cited 0 times
Website
TCGA-KIRC
renal cancer
The Radiogenomic Landscape of Clear Cell Renal Cell Carcinoma: Insights into Lipid Metabolism through Evaluation of ADFP Expression
Greco, Federico
Panunzio, Andrea
Bernetti, Caterina
Tafuri, Alessandro
Beomonte Zobel, Bruno
Mallio, Carlo Augusto
Diagnostics2024Journal Article, cited 2 times
Website
TCGA-KIRC
Exploring Tumor Heterogeneity: Radiogenomic Assessment of ADFP in Low WHO/ISUP Grade Clear Cell Renal Cell Carcinoma
Reading the Mind of a Machine: Hopes and Hypes of Artificial Intelligence for Clinical Oncology Imaging
Green, A.
Aznar, M.C.
Muirhead, R.
Osorio, E.M. Vasquez
2021Journal Article, cited 0 times
CT Images in COVID-19
Guest Editorial Deep Learning in Medical Imaging: Overview and Future Promise of an Exciting New Technique
Greenspan, Hayit
van Ginneken, Bram
Summers, Ronald M
IEEE Transactions on Medical Imaging2016Journal Article, cited 395 times
Website
Pancreas-CT
CT Lymph Nodes
Interoperable encoding and 3D printing of anatomical structures resulting from manual or automated delineation
Gregoir, Thibault
2023Thesis, cited 0 times
Thesis
Pancreatic-CT-CBCT-SEG
Segmentation
3D printing
ChatGPT
Computed Tomography (CT)
RTSTRUCT
Surface reconstruction
Interoperable encoding
Manual or automated delineation
The understanding and visualization of the human body have been instrumental in the progress of medical science. Over time, the shift from cumbersome and invasive methods to modern scanners highlights the significance of expertise in retrieving, utilizing, and comprehending the resulting data. 3D rendering and printing of organic structures offer promising applications such as surgical planning and medical education. However, challenges arise as technological advancements generate increasingly vast amounts of data, necessitating seamless manipulation and transfer within the medical field. The goal of this master thesis is to explore interoperable encoding of 3D models and the printing of models resulting from 3D reconstruction of medical input data. This exploration covers models originally segmented either by manual delineation or in an automated way. Individual parts of this topic, such as surface reconstruction and automatic segmentation, have already been explored separately; the idea here is to combine the different aspects of this thesis into a single tool available and usable by everyone.
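Going from a delineated structure to a printable mesh usually starts with an isosurface extraction step. The sketch below uses scikit-image's marching cubes on a toy binary mask; the mask shape and threshold are illustrative, and a real pipeline would first rasterize the RTSTRUCT contours into such a mask.

```python
import numpy as np
from skimage import measure

# Toy binary "organ" mask standing in for a rasterized delineation.
mask = np.zeros((64, 64, 64), dtype=np.uint8)
mask[16:48, 16:48, 16:48] = 1

# Marching cubes turns the mask into a triangle mesh that can then be
# exported as STL/OBJ for 3D printing.
verts, faces, normals, values = measure.marching_cubes(mask, level=0.5)
print(verts.shape, faces.shape)
```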
Towards Population-Based Histologic Stain Normalization of Glioblastoma
Grenko, Caleb M.
Viaene, Angela N.
Nasrallah, MacLean P.
Feldman, Michael D.
Akbari, Hamed
Bakas, Spyridon
Brainlesion2020Journal Article, cited 0 times
DICOM-Glioma-SEG
TCGA-GBM
Ivy GAP
H&E-stained slides
Pathomics
Glioblastoma (‘GBM’) is the most aggressive type of primary malignant adult brain tumor, with very heterogeneous radiographic, histologic, and molecular profiles. A growing body of advanced computational analyses are conducted towards further understanding the biology and variation in glioblastoma. To address the intrinsic heterogeneity among different computational studies, reference standards have been established to facilitate both radiographic and molecular analyses, e.g., anatomical atlas for image registration and housekeeping genes, respectively. However, there is an apparent lack of reference standards in the domain of digital pathology, where each independent study uses an arbitrarily chosen slide from their evaluation dataset for normalization purposes. In this study, we introduce a novel stain normalization approach based on a composite reference slide comprised of information from a large population of anatomically annotated hematoxylin and eosin (‘H&E’) whole-slide images from the Ivy Glioblastoma Atlas Project (‘IvyGAP’). Two board-certified neuropathologists manually reviewed and selected annotations in 509 slides, according to the World Health Organization definitions. We computed summary statistics from each of these approved annotations and weighted them based on their percent contribution to overall slide (‘PCOS’), to form a global histogram and stain vectors. Quantitative evaluation of pre- and post-normalization stain density statistics for each annotated region with PCOS>0.05% yielded a significant (largest p=0.001, two-sided Wilcoxon rank sum test) reduction of its intensity variation for both ‘H’ & ‘E’. Subject to further large-scale evaluation, our findings support the proposed approach as a potentially robust population-based reference for stain normalization.
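A full reimplementation would need the weighted population histograms and stain vectors described above, but the basic move, pulling a slide's stain-density statistics toward population reference values in optical-density space, can be sketched simply. The function below is a simplified stand-in with assumed per-channel reference statistics.

```python
import numpy as np

def match_stain_stats(rgb, ref_mean, ref_std):
    """Shift and scale a slide's optical-density (OD) channels to
    assumed population reference statistics; a simplified stand-in for
    the paper's weighted histograms and stain vectors."""
    od = -np.log10((rgb.astype(float) + 1.0) / 256.0)    # RGB -> OD
    mu, sd = od.mean(axis=(0, 1)), od.std(axis=(0, 1))
    od_norm = (od - mu) / (sd + 1e-8) * ref_std + ref_mean
    rgb_norm = 256.0 * 10.0 ** (-od_norm) - 1.0          # OD -> RGB
    return np.clip(rgb_norm, 0, 255).astype(np.uint8)
```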
Health Digital Twins with Clinical Decision Support and Medical Imaging
Grob, Moritz
Rappelsberger, Andrea
Adlassnig, Klaus-Peter
2024Book Section, cited 0 times
HCC-TACE-Seg
As the concept of digital twins as virtual representations of physical entities is rapidly gaining popularity in healthcare, this paper studies the feasibility of incorporating clinical decision support (CDS), clinical data, and medical imaging into health digital twins (HDTs). A HDT is visualized in a web application, health data are stored in a FHIR-based electronic health record, computed tomography images are stored in a rudimentary DICOM-based picture archiving and communication system. An Arden-Syntax-based CDS system consisting of an interpretation and an alert service is connected. The prototype focuses on interoperability of these components. The study confirms the feasibility of CDS integration into HDTs and provides insight into possibilities for further expansion.
LiverHccSeg: A publicly available multiphasic MRI dataset with liver and HCC tumor segmentations and inter-rater agreement analysis
Gross, M.
Arora, S.
Huber, S.
Kucukkaya, A. S.
Onofrey, J. A.
Data Brief2023Journal Article, cited 0 times
TCGA-LIHC
Benchmarking
Hepatocellular carcinoma
Imaging biomarkers
Inter-rater agreement
Inter-rater variability
Liver segmentation
Multiphasic contrast-enhanced magnetic resonance imaging
Tumor segmentation
LIVER
Magnetic Resonance Imaging (MRI)
Algorithm Development
Segmentation
Accurate segmentation of liver and tumor regions in medical imaging is crucial for the diagnosis, treatment, and monitoring of hepatocellular carcinoma (HCC) patients. However, manual segmentation is time-consuming and subject to inter- and intra-rater variability. Therefore, automated methods are necessary but require rigorous validation of high-quality segmentations based on a consensus of raters. To address the need for reliable and comprehensive data in this domain, we present LiverHccSeg, a dataset that provides liver and tumor segmentations on multiphasic contrast-enhanced magnetic resonance imaging from two board-approved abdominal radiologists, along with an analysis of inter-rater agreement. LiverHccSeg provides a curated resource for liver and HCC tumor segmentation tasks. The dataset includes a scientific reading and co-registered contrast-enhanced multiphasic magnetic resonance imaging (MRI) scans with corresponding manual segmentations by two board-approved abdominal radiologists, together with relevant metadata, offering researchers a comprehensive foundation for external validation and benchmarking of liver and tumor segmentation algorithms. The dataset also provides an analysis of the agreement between the two sets of liver and tumor segmentations. Through the calculation of appropriate segmentation metrics, we provide insights into the consistency and variability in liver and tumor segmentations among the radiologists. A total of 17 cases were included for liver segmentation and 14 cases for HCC tumor segmentation. Liver segmentations demonstrated high agreement (mean Dice, 0.95 +/- 0.01 [standard deviation]) and HCC tumor segmentations showed higher variation (mean Dice, 0.85 +/- 0.16 [standard deviation]). The applications of LiverHccSeg can be manifold, ranging from testing machine learning algorithms on public external data to radiomic feature analyses. Leveraging the inter-rater agreement analysis within the dataset, researchers can investigate the impact of variability on segmentation performance and explore methods to enhance the accuracy and robustness of liver and tumor segmentation algorithms in HCC patients. By making this dataset publicly available, LiverHccSeg aims to foster collaborations, facilitate innovative solutions, and ultimately improve patient outcomes in the diagnosis and treatment of HCC.
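The headline agreement numbers are per-case Dice coefficients averaged over cases, which is a one-liner to recompute on the released masks:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient between two binary rater masks; 1.0 means
    perfect agreement. Computed per case, then averaged."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())
```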
Imaging and clinical data archive for head and neck squamous cell carcinoma patients treated with radiotherapy
Grossberg, Aaron J
Mohamed, Abdallah SR
El Halawani, Hesham
Bennett, William C
Smith, Kirk E
Nolan, Tracy S
Williams, Bowman
Chamchod, Sasikarn
Heukelom, Jolien
Kantor, Michael E
Scientific Data2018Journal Article, cited 0 times
Website
head and neck squamous cell carcinoma (HNSCC)
human papillomavirus
mri
pet ct
ct
dicom
Imaging-genomics reveals driving pathways of MRI derived volumetric tumor phenotype features in Glioblastoma
Grossmann, Patrick
Gutman, David A
Dunn, William D
Holder, Chad A
Aerts, Hugo JWL
BMC Cancer2016Journal Article, cited 21 times
Website
TCGA-GBM
Radiomics
Magnetic Resonance Imaging (MRI)
Background: Glioblastoma (GBM) tumors exhibit strong phenotypic differences that can be quantified using magnetic resonance imaging (MRI), but the underlying biological drivers of these imaging phenotypes remain largely unknown. An imaging-genomics analysis was performed to reveal the mechanistic associations between MRI-derived quantitative volumetric tumor phenotype features and molecular pathways. Methods: One hundred forty-one patients with presurgery MRI and survival data were included in our analysis. Volumetric features were defined, including the necrotic core (NE), contrast-enhancement (CE), abnormal tumor volume assessed by post-contrast T1w (tumor bulk or TB), tumor-associated edema based on T2-FLAIR (ED), and total tumor volume (TV), as well as ratios of these tumor components. Based on gene expression where available (n = 91), pathway associations were assessed using a preranked gene set enrichment analysis. These results were put into the context of molecular subtypes in GBM and prognostication. Results: Volumetric features were significantly associated with diverse sets of biological processes (FDR < 0.05). While NE and TB were enriched for immune response pathways and apoptosis, CE was associated with signal transduction and protein folding processes. ED was mainly enriched for homeostasis and cell cycling pathways. ED was also the strongest predictor of molecular GBM subtypes (AUC = 0.61). CE was the strongest predictor of overall survival (C-index = 0.6; Noether test, p = 4 × 10^-4). Conclusion: GBM volumetric features extracted from MRI are significantly enriched for information about the biological state of a tumor that impacts patient outcomes. Clinical decision-support systems could exploit this information to develop personalized treatment strategies on the basis of noninvasive imaging.
Defining the biological and clinical basis of radiomics: towards clinical imaging biomarkers
Effect of artificial intelligence-aided differentiation of adenomatous and non-adenomatous colorectal polyps at CT colonography on radiologists’ therapy management
Grosu, S.
Fabritius, M. P.
Winkelmann, M.
Puhr-Westerheide, D.
Ingenerf, M.
Maurus, S.
Graser, A.
Schulz, C.
Knosel, T.
Cyran, C. C.
Ricke, J.
Kazmierczak, P. M.
Ingrisch, M.
Wesp, P.
Eur Radiol2025Journal Article, cited 0 times
Website
CT COLONOGRAPHY
Cancer screening
Machine learning
Polyps
OBJECTIVES: Adenomatous colorectal polyps require endoscopic resection, as opposed to non-adenomatous hyperplastic colorectal polyps. This study aims to evaluate the effect of artificial intelligence (AI)-assisted differentiation of adenomatous and non-adenomatous colorectal polyps at CT colonography on radiologists' therapy management. MATERIALS AND METHODS: Five board-certified radiologists evaluated CT colonography images with colorectal polyps of all sizes and morphologies retrospectively and decided whether the depicted polyps required endoscopic resection. After a primary unassisted reading based on current guidelines, a second reading with access to the classification of a radiomics-based random-forest AI-model labelling each polyp as "non-adenomatous" or "adenomatous" was performed. Performance was evaluated using polyp histopathology as the reference standard. RESULTS: 77 polyps in 59 patients comprising 118 polyp image series (47% supine position, 53% prone position) were evaluated unassisted and AI-assisted by five independent board-certified radiologists, resulting in a total of 1180 readings (subsequent polypectomy: yes or no). AI-assisted readings had higher accuracy (76% +/- 1% vs. 84% +/- 1%), sensitivity (78% +/- 6% vs. 85% +/- 1%), and specificity (73% +/- 8% vs. 82% +/- 2%) in selecting polyps eligible for polypectomy (p < 0.001). Inter-reader agreement was improved in the AI-assisted readings (Fleiss' kappa 0.69 vs. 0.92). CONCLUSION: AI-based characterisation of colorectal polyps at CT colonography as a second reader might enable a more precise selection of polyps eligible for subsequent endoscopic resection. However, further studies are needed to confirm this finding and histopathologic polyp evaluation is still mandatory. KEY POINTS: Question This is the first study evaluating the impact of AI-based polyp classification in CT colonography on radiologists' therapy management. Findings Compared with unassisted reading, AI-assisted reading had higher accuracy, sensitivity, and specificity in selecting polyps eligible for polypectomy. Clinical relevance Integrating an AI tool for colorectal polyp classification in CT colonography could further improve radiologists' therapy recommendations.
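The agreement statistic quoted above, Fleiss' kappa (0.69 unassisted vs. 0.92 AI-assisted), extends chance-corrected agreement beyond two raters. The from-scratch sketch below mirrors the study's five-reader, binary-decision setup; the vote counts are fabricated toy data, not the study's results:

```python
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """counts[i, j] = number of raters assigning subject i to category j."""
    n = counts.sum(axis=1)[0]                  # raters per subject (assumed constant)
    p_j = counts.sum(axis=0) / counts.sum()    # overall category proportions
    p_i = (np.sum(counts ** 2, axis=1) - n) / (n * (n - 1))  # per-subject agreement
    p_bar, p_e = p_i.mean(), np.sum(p_j ** 2)
    return (p_bar - p_e) / (1.0 - p_e)

# 6 polyps, 5 readers; columns = [no polypectomy, polypectomy].
votes = np.array([[5, 0], [1, 4], [0, 5], [2, 3], [5, 0], [1, 4]])
print(f"Fleiss' kappa: {fleiss_kappa(votes):.2f}")
```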
Machine Learning-based Differentiation of Benign and Premalignant Colorectal Polyps Detected with CT Colonography in an Asymptomatic Screening Population: A Proof-of-Concept Study
Grosu, S.
Wesp, P.
Graser, A.
Maurus, S.
Schulz, C.
Knosel, T.
Cyran, C. C.
Ricke, J.
Ingrisch, M.
Kazmierczak, P. M.
Radiology2021Journal Article, cited 0 times
Website
CT COLONOGRAPHY
Colon
Machine Learning
Background CT colonography does not enable definite differentiation between benign and premalignant colorectal polyps. Purpose To perform machine learning-based differentiation of benign and premalignant colorectal polyps detected with CT colonography in an average-risk asymptomatic colorectal cancer screening sample, with external validation, using radiomics. Materials and Methods In this secondary analysis of a prospective trial, colorectal polyps of all size categories and morphologies were manually segmented on CT colonographic images and were classified as benign (hyperplastic polyp or regular mucosa) or premalignant (adenoma) according to the histopathologic reference standard. Quantitative image features characterizing shape (n = 14), gray level histogram statistics (n = 18), and image texture (n = 68) were extracted from segmentations after applying 22 image filters, resulting in 1906 feature-filter combinations. Based on these features, a random forest classification algorithm was trained to predict the individual polyp character. Diagnostic performance was validated in an external test set. Results The random forest model was fitted using a training set consisting of 107 colorectal polyps in 63 patients (mean age, 63 years +/- 8 [standard deviation]; 40 men) comprising 169 segmentations on CT colonographic images. The external test set included 77 polyps in 59 patients comprising 118 segmentations. Random forest analysis yielded an area under the receiver operating characteristic curve of 0.91 (95% CI: 0.85, 0.96), a sensitivity of 82% (65 of 79) (95% CI: 74%, 91%), and a specificity of 85% (33 of 39) (95% CI: 72%, 95%) in the external test set. In two subgroup analyses of the external test set, the area under the receiver operating characteristic curve was 0.87 in the 6-9 mm size category and 0.90 in the size category of 10 mm or larger. The most important image feature for decision making (relative importance of 3.7%) was one quantifying first-order gray level histogram statistics. Conclusion In this proof-of-concept study, machine learning-based image analysis enabled noninvasive differentiation of benign and premalignant colorectal polyps with CT colonography. (c) RSNA, 2021 Online supplemental material is available for this article.
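As a hedged sketch of the modelling step described above (a random forest trained on tabular radiomic features, evaluated by AUC on a held-out set), the following uses a synthetic feature matrix standing in for the 1906 feature-filter combinations; all names and sizes are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(169, 1906))   # segmentations x radiomic features
y = rng.integers(0, 2, size=169)   # 0 = benign, 1 = premalignant (random toy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"Held-out AUC: {auc:.2f}")  # ~0.5 here, since the toy labels are random
```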
Quantitative Computed Tomographic Descriptors Associate Tumor Shape Complexity and Intratumor Heterogeneity with Prognosis in Lung Adenocarcinoma
Grove, Olya
Berglund, Anders E
Schabath, Matthew B
Aerts, Hugo JWL
Dekker, Andre
Wang, Hua
Velazquez, Emmanuel Rios
Lambin, Philippe
Gu, Yuhua
Balagurunathan, Yoganand
Eikman, E.
Gatenby, Robert A
Eschrich, S
Gillies, Robert J
PLoS One2015Journal Article, cited 87 times
Website
Algorithm Development
LungCT-Diagnosis
LUNG
Segmentation
Classification
Two CT features were developed to quantitatively describe lung adenocarcinomas by scoring tumor shape complexity (feature 1: convexity) and intratumor density variation (feature 2: entropy ratio) in routinely obtained diagnostic CT scans. The developed quantitative features were analyzed in two independent cohorts (cohort 1: n = 61; cohort 2: n = 47) of patients diagnosed with primary lung adenocarcinoma, retrospectively curated to include imaging and clinical data. Preoperative chest CTs were segmented semi-automatically. Segmented tumor regions were further subdivided into core and boundary sub-regions, to quantify intensity variations across the tumor. Reproducibility of the features was evaluated in an independent test-retest dataset of 32 patients. The proposed metrics showed a high degree of reproducibility in a repeated experiment (concordance, CCC>/=0.897; dynamic range, DR>/=0.92). Association with overall survival was evaluated by Cox proportional hazard regression, Kaplan-Meier survival curves, and the log-rank test. Both features were associated with overall survival (convexity: p = 0.008; entropy ratio: p = 0.04) in Cohort 1 but not in Cohort 2 (convexity: p = 0.7; entropy ratio: p = 0.8). In both cohorts, these features were found to be descriptive and demonstrated the link between imaging characteristics and patient survival in lung adenocarcinoma.
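The two descriptors can be reconstructed under plausible assumed definitions: "convexity" as tumor volume over convex-hull volume, and "entropy ratio" as the Shannon entropy of the boundary sub-region over that of the core. The paper's exact formulations may differ in detail; this is an illustrative sketch only:

```python
import numpy as np
from scipy.spatial import ConvexHull
from scipy.ndimage import binary_erosion

def convexity(mask: np.ndarray) -> float:
    """Voxel count over convex-hull volume (close to 1 for a convex shape)."""
    return mask.sum() / ConvexHull(np.argwhere(mask)).volume

def shannon_entropy(values: np.ndarray, bins: int = 64) -> float:
    p, _ = np.histogram(values, bins=bins)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

def entropy_ratio(image: np.ndarray, mask: np.ndarray, erosions: int = 3) -> float:
    """Entropy of the tumor boundary shell over entropy of the tumor core."""
    core = binary_erosion(mask, iterations=erosions)
    boundary = mask & ~core
    return shannon_entropy(image[boundary]) / shannon_entropy(image[core])

# Toy 3D "tumor": a ball-shaped mask inside a random-intensity volume.
z, y, x = np.ogrid[:48, :48, :48]
mask = (z - 24) ** 2 + (y - 24) ** 2 + (x - 24) ** 2 <= 12 ** 2
image = np.random.default_rng(1).normal(size=(48, 48, 48))
print(f"convexity: {convexity(mask):.2f}, entropy ratio: {entropy_ratio(image, mask):.2f}")
```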
Brain Tumor Segmentation and Associated Uncertainty Evaluation Using Multi-sequences MRI Mixture Data Preprocessing
Groza, Vladimir
Tuchinov, Bair
Amelina, Evgeniya
Pavlovskiy, Evgeniy
Tolstokulakov, Nikolay
Amelin, Mikhail
Golushko, Sergey
Letyagin, Andrey
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Deep Learning
Magnetic Resonance Imaging (MRI)
Brain tumor segmentation is one of the crucial tasks in today's clinical workflows, which require considerable effort in studying computed tomography (CT) or structural magnetic resonance imaging (MRI) scans of patients with various pathologies. MRI is the most common method of primary detection and non-invasive diagnostics of brain diseases, and a source of recommendations for further treatment. The brain is a complex structure, different areas of which have different functional significance. In this paper, we extend previous work on robust pre-processing methods that allow all available information from MRI scans to be considered by composing the T1, T1C, T2, and T2-FLAIR sequences into a single input. This approach enriches the input data for the segmentation process and helps to improve segmentation accuracy and the associated uncertainty evaluation performance. The proposed method also demonstrates a strong improvement on the segmentation problem, as measured by the Dice metric, sensitivity, and specificity, compared to an identical training/validation procedure based on any single sequence, regardless of the chosen neural network architecture. The obtained results demonstrate a significant performance improvement from combining three MRI sequences into a 3-channel, RGB-like image for the considered brain tumor segmentation tasks. We also provide a comparison of various gradient descent optimization methods and of different backbone architectures.
Using Deep Learning for Pulmonary Nodule Detection & Diagnosis
Gruetzemacher, Richard
Gupta, Ashish
2016Conference Paper, cited 0 times
LIDC-IDRI
3D deep learning for detecting pulmonary nodules in CT scans
Gruetzemacher, Ross
Gupta, Ashish
Paradice, David
Journal of the American Medical Informatics Association2018Journal Article, cited 85 times
Website
LIDC-IDRI
Objective: To demonstrate and test the validity of a novel deep-learning-based system for the automated detection of pulmonary nodules.
Materials and Methods: The proposed system uses 2 3D deep learning models, 1 for each of the essential tasks of computer-aided nodule detection: candidate generation and false positive reduction. A total of 888 scans from the LIDC-IDRI dataset were used for training and evaluation.
Results: Results for candidate generation on the test data indicated a detection rate of 94.77% with 30.39 false positives per scan, while the test results for false positive reduction exhibited a sensitivity of 94.21% with 1.789 false positives per scan. The overall system detection rate on the test data was 89.29% with 1.789 false positives per scan.
Discussion: An extensive and rigorous validation was conducted to assess the performance of the proposed system. The system demonstrates a novel combination of 3D deep neural network architectures and the use of deep learning for both candidate generation and false positive reduction, evaluated with a substantial test dataset. The results strongly support the ability of deep learning pulmonary nodule detection systems to generalize to unseen data. The source code and trained model weights have been made available.
Conclusion: A novel deep-neural-network-based pulmonary nodule detection system is demonstrated and validated. The results provide a performance comparison of the proposed deep-learning-based system with other similar systems.
Smooth extrapolation of unknown anatomy via statistical shape models
The aim of this thesis was to examine and enhance the scientific groundwork for translating deep learning (DL) algorithms for brain tumour segmentation into clinical decision support tools. Paper II describes a scoping review conducted to map the field of automatic brain lesion segmentation on magnetic resonance (MR) images according to a predefined and peer-reviewed study protocol (Paper I). Insufficient preprocessing description was identified as one factor hindering clinical implementation of the reviewed algorithms. A reproducibility and replicability analysis of two algorithms was described in Paper III. The two algorithms and their validation studies were previously assessed as reproducible. In this experimental investigation, the original validation results were reproduced and replicated for one algorithm. Analysing the reasons for failure to reproduce validation of the second algorithm led to a suggested update to a commonly-used reproducibility checklist; the importance of a thorough description of preprocessing was highlighted. In Paper IV, radiologists' perception of DL-generated brain tumour labels in tumour volume growth assessment was examined. Ten radiologists participated in a reading/questionnaire session of 20 MR examination cases. The readers were confident that the label-derived volume change is more accurate than their visual assessment, even when the inter-rater agreement on the label quality was poor. In Paper V, the broad theme of trust in artificial intelligence (AI) in radiology was explored. A semi-structured interview study with twenty-six AI implementation stakeholders was conducted. Four requirements of the implemented tools and procedures were identified that promote trust in AI: reliability, quality control, transparency, and inter-organisational compatibility. The findings indicate that current strategies to validate DL algorithms do not suffice to assess their accuracy in a clinical setting. Despite the recognition from radiologists that DL algorithms can improve the accuracy of tumour volume assessment, implementation strategies require more work and the involvement of multiple stakeholders.
Exploit fully automatic low-level segmented PET data for training high-level deep learning algorithms for the corresponding CT data
Gsaxner, Christina
Roth, Peter M
Wallner, Jurgen
Egger, Jan
PLoS One2019Journal Article, cited 0 times
Website
RIDER Lung PET-CT
In this study, we present an approach for fully automatic urinary bladder segmentation in CT images using artificial neural networks. Automatic medical image analysis has become an invaluable tool in the different treatment stages of diseases. Medical image segmentation in particular plays a vital role, since segmentation is often the initial step in an image analysis pipeline. Since deep neural networks have made a large impact on the field of image processing in the past years, we use two different deep learning architectures to segment the urinary bladder. Both of these architectures are based on pre-trained classification networks that are adapted to perform semantic segmentation. Since deep neural networks require a large amount of training data, specifically images and corresponding ground truth labels, we furthermore propose a method to generate such a suitable training data set from Positron Emission Tomography/Computed Tomography image data. This is done by applying thresholding to the Positron Emission Tomography data to obtain a ground truth and by utilizing data augmentation to enlarge the dataset. In this study, we discuss the influence of data augmentation on the segmentation results, and compare and evaluate the proposed architectures in terms of qualitative and quantitative segmentation performance. The results presented in this study allow concluding that deep neural networks can be considered a promising approach to segment the urinary bladder in CT images.
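The label-generation idea described above can be sketched minimally: threshold the PET uptake to obtain a binary "ground truth" for the co-registered CT, then enlarge the training set with simple geometric augmentations. The threshold value and augmentation choices below are illustrative assumptions, not the study's settings:

```python
import numpy as np

def pet_to_label(pet: np.ndarray, threshold: float = 2.5) -> np.ndarray:
    """Binary mask of voxels whose tracer uptake exceeds a fixed threshold."""
    return (pet > threshold).astype(np.uint8)

def augment(ct_slice: np.ndarray, label: np.ndarray):
    """Yield rotated and flipped copies of an image-label pair."""
    for k in range(4):  # 0, 90, 180, 270 degrees
        yield np.rot90(ct_slice, k), np.rot90(label, k)
        yield np.fliplr(np.rot90(ct_slice, k)), np.fliplr(np.rot90(label, k))

pet = np.random.default_rng(2).gamma(2.0, 1.0, size=(128, 128))
ct = np.random.default_rng(3).normal(size=(128, 128))
label = pet_to_label(pet)
print(f"1 slice -> {len(list(augment(ct, label)))} augmented training pairs")
```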
Development and verification of radiomics framework for computed tomography image segmentation
Gu, Jiabing
Li, Baosheng
Shu, Huazhong
Zhu, Jian
Qiu, Qingtao
Bai, Tong
Medical Physics2022Journal Article, cited 0 times
Website
Credence Cartridge Radiomics Phantom CT Scans
PHANTOM
radiomics
Computed Tomography (CT)
CycleGAN denoising of extreme low-dose cardiac CT using wavelet-assisted noise disentanglement
Gu, J.
Yang, T. S.
Ye, J. C.
Yang, D. H.
Med Image Anal2021Journal Article, cited 1 times
Website
LDCT-and-Projection-data
Vasculature
Wavelet
cycleGAN
Deep Learning
Image denoising
Adversarial training
Coronary CT angiography
Cycle consistency
Low-dose CT
Unsupervised learning
Wavelet transform
In electrocardiography (ECG) gated cardiac CT angiography (CCTA), multiple images covering the entire cardiac cycle are taken continuously, so reduction of the accumulated radiation dose could be an important issue for patient safety. Although ECG-gated dose modulation (so-called ECG pulsing) is used to acquire many phases of CT images at a low dose, the reduction of the radiation dose introduces noise into the image reconstruction. To address this, we developed a high performance unsupervised deep learning method using noise disentanglement that can effectively learn the noise patterns even from extreme low dose CT images. For noise disentanglement, we use a wavelet transform to extract the high-frequency signals that contain the most noise. Since matched low-dose and high-dose cardiac CT data are impossible to obtain in practice, our neural network was trained in an unsupervised manner using cycleGAN for the extracted high frequency signals from the low-dose and unpaired high-dose CT images. Once the network is trained, denoised images are obtained by subtracting the estimated noise components from the input images. Image quality evaluation of the denoised images from only 4% dose CT images was performed by experienced radiologists for several anatomical structures. Visual grading analysis was conducted according to the sharpness level, noise level, and structural visibility. Also, the signal-to-noise ratio was calculated. The evaluation results showed that the quality of the images produced by the proposed method is much improved compared to low-dose CT images and to the baseline cycleGAN results. The proposed noise-disentangled cycleGAN with wavelet transform effectively removed noise from extreme low-dose CT images compared to the existing baseline algorithms. It can be an important denoising platform for low-dose CT.
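The noise-disentanglement front end can be sketched with a 2D discrete wavelet transform: a CT slice is split into a low-frequency approximation and high-frequency detail bands, and only the detail bands (where most of the noise lives) would be passed to the cycleGAN. A minimal sketch using PyWavelets follows; the wavelet choice ("db3") is an assumption:

```python
import numpy as np
import pywt

slice_ld = np.random.default_rng(4).normal(size=(256, 256))  # stand-in low-dose slice

# One-level 2D DWT: approximation cA plus horizontal/vertical/diagonal details.
cA, (cH, cV, cD) = pywt.dwt2(slice_ld, "db3")

# ... the trained network would denoise cH, cV, cD here ...

# Reassemble the slice from the untouched approximation and (denoised) details.
recon = pywt.idwt2((cA, (cH, cV, cD)), "db3")
print(recon.shape, np.allclose(recon, slice_ld))  # exact reconstruction if untouched
```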
Multi-View Radiomics Feature Fusion Reveals Distinct Immuno-Oncological Characteristics and Clinical Prognoses in Hepatocellular Carcinoma
Hepatocellular carcinoma (HCC) is one of the most prevalent malignancies worldwide, and its pronounced intra- and inter-tumor heterogeneity restricts clinical benefit. Molecular heterogeneity in HCC is commonly explored by endoscopic biopsy or surgical forceps, but invasive tissue sampling and possible complications limit their broader adoption. The radiomics framework is a promising non-invasive strategy for decoding tumor heterogeneity, and the linkage between radiomics and immuno-oncological characteristics is worth further in-depth study. In this study, we extracted multi-view imaging features from contrast-enhanced CT (CE-CT) scans of HCC patients and developed a fused imaging feature subtyping (FIFS) model to identify two distinct radiomics subtypes. We observed two patient subtypes with distinct texture-dominated radiomics profiles and prognostic outcomes, and the radiomics subtype identified by the FIFS model was an independent prognostic factor. The heterogeneity was mainly attributed to inflammatory pathway activity and the tumor immune microenvironment. By integrating network analysis, the predominant radiogenomics association was identified between texture-related features and immune-related pathways, and this association was validated in two independent cohorts. Collectively, this work describes the close connections between multi-view radiomics features and immuno-oncological characteristics in HCC, and our integrative radiogenomics analysis strategy may provide clues for non-invasive, inflammation-based risk stratification.
Automatic lung nodule detection using multi-scale dot nodule-enhancement filter and weighted support vector machines in chest computed tomography
Gu, Y.
Lu, X.
Zhang, B.
Zhao, Y.
Yu, D.
Gao, L.
Cui, G.
Wu, L.
Zhou, T.
PLoS One2019Journal Article, cited 0 times
Website
LIDC-IDRI
Computer Assisted Detection (CAD)
A novel CAD scheme for automated lung nodule detection is proposed to assist radiologists with the detection of lung cancer on CT scans. The proposed scheme is composed of four major steps: (1) lung volume segmentation, (2) nodule candidate extraction and grouping, (3) false positive reduction for the non-vessel tree group, and (4) classification for the vessel tree group. Lung segmentation is performed first. Then, 3D labeling technology is used to divide nodule candidates into two groups. For the non-vessel tree group, nodule candidates are classified as true nodules at the false positive reduction stage if the candidates survive the rule-based classifier and are not screened out by the dot filter. For the vessel tree group, nodule candidates are extracted using the dot filter. Next, RSFS feature selection is used to select the most discriminating features for classification. Finally, WSVM with an undersampling approach is adopted to discriminate true nodules from vessel bifurcations in the vessel tree group. The proposed method was evaluated on 154 thin-slice scans with 204 nodules in the LIDC database. The performance of the proposed CAD scheme yielded a high sensitivity (87.81%) while maintaining a low false positive rate (1.057 FPs/scan). The experimental results indicate that the performance of our method may be better than that of existing methods.
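The final stage pairs a support vector machine with a strategy for the heavy class imbalance between true nodules and vessel bifurcations. The sketch below uses class weighting, one common alternative to the paper's weighted-SVM-plus-undersampling approach; the feature dimensionality and imbalance ratio are made-up stand-ins:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import recall_score

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0.0, 1.0, (900, 12)),   # vessel bifurcations (majority)
               rng.normal(1.5, 1.0, (100, 12))])  # true nodules (minority)
y = np.array([0] * 900 + [1] * 100)

# class_weight="balanced" up-weights the minority class inversely to its frequency.
clf = SVC(kernel="rbf", class_weight="balanced").fit(X, y)
print(f"nodule sensitivity: {recall_score(y, clf.predict(X)):.2f}")
```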
Automatic Colorectal Segmentation with Convolutional Neural Network
Guachi, Lorena
Guachi, Robinson
Bini, Fabiano
Marinozzi, Franco
Computer-Aided Design and Applications2019Journal Article, cited 3 times
Website
CT-COLONOGRAPHY
Segmentation
Convolutional Neural Network (CNN)
This paper presents a new method for colon tissue segmentation on computed tomography images which takes advantage of deep, hierarchical learning of colon features through convolutional neural networks (CNNs). The proposed method works robustly, reducing the misclassification of colon tissue pixels introduced by the presence of noise, artifacts, unclear edges, and other organs or areas characterized by the same intensity value as the colon. Patch analysis is exploited to classify each center pixel as a colon tissue or background pixel. Experimental results demonstrate that the proposed method achieves higher sensitivity and specificity with respect to three state-of-the-art methods.
Unpaired Cross-Modal Interaction Learning for COVID-19 Segmentation on Limited CT Images
Guan, Qingbiao
Xie, Yutong
Yang, Bing
Zhang, Jianpeng
Liao, Zhibin
Wu, Qi
Xia, Yong
2023Book Section, cited 0 times
COVID-19-AR
Segmentation
COVID-19
Automatic Segmentation
Algorithm Development
Computed Tomography (CT)
X-Rays
Accurate automated segmentation of infected regions in CT images is crucial for predicting COVID-19’s pathological stage and treatment response. Although deep learning has shown promise in medical image segmentation, the scarcity of pixel-level annotations due to their expense and time-consuming nature limits its application in COVID-19 segmentation. In this paper, we propose utilizing large-scale unpaired chest X-rays with classification labels as a means of compensating for the limited availability of densely annotated CT scans, aiming to learn robust representations for accurate COVID-19 segmentation. To achieve this, we design an Unpaired Cross-modal Interaction (UCI) learning framework. It comprises a multi-modal encoder, a knowledge condensation (KC) and knowledge-guided interaction (KI) module, and task-specific networks for final predictions. The encoder is built to capture optimal feature representations for both CT and X-ray images. To facilitate information interaction between unpaired cross-modal data, we propose the KC that introduces a momentum-updated prototype learning strategy to condense modality-specific knowledge. The condensed knowledge is fed into the KI module for interaction learning, enabling the UCI to capture critical features and relationships across modalities and enhance its representation ability for COVID-19 segmentation. The results on the public COVID-19 segmentation benchmark show that our UCI with the inclusion of chest X-rays can significantly improve segmentation performance, outperforming advanced segmentation approaches including nnUNet, CoTr, nnFormer, and Swin UNETR. Code is available at: https://github.com/GQBBBB/UCI.
Data Augmentation Based on DiscrimDiff for Histopathology Image Classification
Guan, Xianchao
Wang, Yifeng
Lin, Yiyang
Zhang, Yongbing
2024Book Section, cited 0 times
Osteosarcoma-Tumor-Assessment
data augmentation
Histopathology
Histopathological analysis is the present gold standard for cancer diagnosis. Accurate classification of histopathology images has great clinical significance and application value for assisting pathologists in diagnosis. However, the performance of histopathology image classification is greatly affected by data imbalance. To address this problem, we propose a novel data augmentation framework based on the diffusion model, DiscrimDiff, which expands the dataset by synthesizing images of rare classes. To compensate for the diffusion model's lack of discrimination ability regarding synthesized images, we design a post-discrimination mechanism to provide image quality assurance for data augmentation. Our method significantly improves classification performance on multiple datasets. Furthermore, the histomorphological features of different classes attended to by the diffusion model may offer guidance for pathologists in clinical diagnosis. Therefore, we visualize histomorphological features related to classification, which can be used to assist pathologist-in-training education and improve the understanding of histomorphology.
Glioma Grade Classification via Omics Imaging
Guarracino, Mario
Manzo, Mario
Manipur, Ichcha
Granata, Ilaria
Maddalena, Lucia
2020Conference Paper, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Radiogenomics
Imaging features
Classification
BRAIN
Omics imaging is an emerging interdisciplinary field concerned with the integration of data collected from biomedical images and omics experiments. Bringing together information coming from different sources, it permits revealing hidden genotype-phenotype relationships, with the aim of better understanding the onset and progression of many diseases, and identifying new diagnostic and prognostic biomarkers. In this work, we present an omics imaging approach to the classification of different grades of gliomas, which are primary brain tumors arising from glial cells, as this is of critical clinical importance for making decisions regarding initial and subsequent treatment strategies. Imaging data come from analyses available in The Cancer Imaging Archive, while omics attributes are extracted by integrating metabolic models with transcriptomic data available from the Genomic Data Commons portal. We investigate the results of feature selection for the two types of data separately, as well as for the integrated data, providing hints on the most distinctive features that can be exploited as biomarkers for glioma grading. Moreover, we show how the integrated data can provide additional clinical information compared to the two types of data separately, leading to higher performance. We believe our results can be valuable to clinical tests in practice.
A Multi-View Deep Evidential Learning Approach for Mammogram Density Classification
Gudhe, Naga Raju
Mazen, Sudah
Sund, Reijo
Kosma, Veli-Matti
Behravan, Hamid
Mannermaa, Arto
IEEE Access2024Journal Article, cited 0 times
Website
CBIS-DDSM
CMMD
Mammography
Classification
Image normalization
Convolutional Neural Network (CNN)
ResNet-101
ResNet-50
ResNet18
DenseNet
EfficientNet-B3
EfficientNet
Artificial intelligence algorithms, specifically deep learning, can assist radiologists by automating mammogram density assessment. However, trust in such algorithms must be established before they are widely adopted in clinical settings. In this study, we present an evidential deep learning approach called MV-DEFEAT, incorporating the strengths of Dempster-Shafer evidential theory and subjective logic, for the mammogram density classification task. The framework combines evidence from multiple mammographic views to mimic a radiologist's decision-making process. In this study, we utilized four open-source datasets, namely VinDr-Mammo, DDSM, CMMD, and VTB, to mitigate inherent biases and provide a diverse representation of the data. Our experimental findings demonstrate MV-DEFEAT's superior performance in terms of weighted macro-average area under the receiver operating curve (AUC) compared to the state-of-the-art multi-view deep learning model, referred to as MVDL. MV-DEFEAT yields a relative improvement of 12.57%, 14.51%, 19.9%, and 22.53% on the VTB, VinDr-Mammo, CMMD, and DDSM datasets, respectively, for the mammogram density classification task. Additionally, for BIRADS classification and the classification of mammograms as benign or malignant, MV-DEFEAT exhibits substantial enhancements compared to MVDL, with relative improvements of 31.46% and 50.78% on the DDSM and VinDr-Mammo datasets, respectively. These results underscore the efficacy of our approach. Through meticulous curation of diverse datasets and comprehensive comparative analyses, we ensure the robustness and reliability of our findings, thereby enhancing trust in adopting the MV-DEFEAT framework for various mammogram assessment tasks in clinical settings.
FFCAEs : An efficient feature fusion framework using cascaded autoencoders for the identification of gliomas
Gudigar, Anjan
Raghavendra, U.
Rao, Tejaswi N.
Samanth, Jyothi
Rajinikanth, Venkatesan
Satapathy, Suresh Chandra
Ciaccio, Edward J.
Wai Yee, Chan
Acharya, U. Rajendra
International Journal of Imaging Systems and Technology2022Journal Article, cited 0 times
Website
TCGA-LGG
TCGA-GBM
BRAIN
Computer Aided Diagnosis (CADx)
Computer Aided Detection (CADe)
Intracranial tumors arise from constituents of the brain and its meninges. Glioblastoma (GBM) is the most common adult primary intracranial neoplasm and is categorized as high-grade astrocytoma according to the World Health Organization (WHO). The survival rate for 5 and 10 years after diagnosis is under 10%, contributing to its grave prognosis. Early detection of GBM enables early intervention, prognostication, and treatment monitoring. Computer-aided diagnosis (CAD) is a computerized process that helps to differentiate between GBM and low-grade gliomas (LGG) by analyzing magnetic resonance (MR) images of the brain. This study proposes a framework consisting of a feature fusion algorithm with cascaded autoencoders (CAEs), referred to as FFCAEs. Here we utilized two CAEs and extracted the relevant features from both. Inspired by existing work on fusion algorithms, the obtained features are then fused using a novel fusion algorithm. Finally, the resultant fused features are classified with a Softmax classifier, arriving at an average classification accuracy of 96.7%, which is 2.45% more than the previously best-performing model. The method is shown to be efficacious; thus, it can be useful as a utility program for doctors.
External validation of a CT-based radiomics signature in oropharyngeal cancer: Assessing sources of variation
Guevorguian, P.
Chinnery, T.
Lang, P.
Nichols, A.
Mattonen, S. A.
Radiother Oncol2022Journal Article, cited 0 times
OPC-Radiomics
Computed Tomography (CT)
Machine learning
Oropharyngeal cancer
Overall survival
Radiomics
Validation
BACKGROUND AND PURPOSE: Radiomics is a high-throughput approach that allows for quantitative analysis of imaging data for prognostic applications. Medical images are used in oropharyngeal cancer (OPC) diagnosis and treatment planning and these images may contain prognostic information allowing for treatment personalization. However, the lack of validated models has been a barrier to the translation of radiomic research to the clinic. We hypothesize that a previously developed radiomics model for risk stratification in OPC can be validated in a local dataset. MATERIALS AND METHODS: The radiomics signature predicting overall survival incorporates features derived from the primary gross tumor volume of OPC patients treated with radiation +/- chemotherapy at a single institution (n = 343). Model fit, calibration, discrimination, and utility were evaluated. The signature was compared with a clinical model using overall stage and a model incorporating both radiomics and clinical data. A model detecting dental artifacts on computed tomography images was also validated. RESULTS: The radiomics signature had a Concordance index (C-index) of 0.66 comparable to the clinical model's C-index of 0.65. The combined model significantly outperformed (C-index of 0.69, p = 0.024) the clinical model, suggesting that radiomics provides added value. The dental artifact model demonstrated strong ability in detecting dental artifacts with an area under the curve of 0.87. CONCLUSION: This work demonstrates model performance comparable to previous validation work and provides a framework for future independent and multi-center validation efforts. With sufficient validation, radiomic models have the potential to improve traditional systems of risk stratification, treatment personalization and patient outcomes.
User-centered design and evaluation of interactive segmentation methods for medical images
Segmentation of medical images is a challenging task that aims to identify a particular structure present on the image. Among the existing methods involving the user at different levels, from a fully-manual to a fully-automated task, interactive segmentation methods provide assistance to the user during the task to reduce the variability in the results and allow occasional corrections of segmentation failures. Therefore, they offer a compromise between the segmentation efficiency and the accuracy of the results. It is the user who judges whether the results are satisfactory and how to correct them during the segmentation, making the process subject to human factors. Despite the strong influence of the user on the outcomes of a segmentation task, the impact of such factors has received little attention, with the literature focusing the assessment of segmentation processes on computational performance. Yet, involving the user performance in the analysis is more representative of a realistic scenario. Our goal is to explore the user behaviour in order to improve the efficiency of interactive image segmentation processes. This is achieved through three contributions. First, we developed a method which is based on a new user interaction mechanism to provide hints as to where to concentrate the computations. This significantly improves the computation efficiency without sacrificing the quality of the segmentation. The benefits of using such hints are twofold: (i) because our contribution is based on user interaction, it generalizes to a wide range of segmentation methods, and (ii) it gives comprehensive indications about where to focus the segmentation search. The latter advantage is used to achieve the second contribution. We developed an automated method based on a multi-scale strategy to: (i) reduce the user's workload and, (ii) improve the computational time up to tenfold, allowing real-time segmentation feedback. Third, we have investigated the effects of such improvements in computations on the user's performance. We report an experiment that manipulates the delay induced by the computation time while performing an interactive segmentation task. Results reveal that the influence of this delay can be significantly reduced with an appropriate interaction mechanism design. In conclusion, this project provides an effective image segmentation solution that has been developed in compliance with user performance requirements. We validated our approach through multiple user studies that provided a step forward into understanding the user behaviour during interactive image segmentation.
User-guided graph reduction for fast image segmentation
Graph-based segmentation methods such as the random walker (RW) are known to be computationally expensive. For high resolution images, user interaction with the algorithm is significantly affected. This paper introduces a novel seeding approach for graph-based segmentation that reduces computation time. Instead of marking foreground and background pixels, the user roughly marks the object boundary forming separate regions. The image pixels are then grouped into a hierarchy of increasingly large layers based on their distance from these markings. Next, foreground and background seeds are automatically generated according to the hierarchical layers of each region. The highest layers of the hierarchy are ignored leading to a significant graph reduction. Finally, validation experiments based on multiple automatically generated input seeds were carried out on a variety of medical images. Results show a significant gain in time for high resolution images using the new approach.
A generalized graph reduction framework for interactive segmentation of large images
Gueziri, Houssem-Eddine
McGuffin, Michael J
Laporte, Catherine
Computer Vision and Image Understanding2016Journal Article, cited 5 times
Website
Algorithm Development
Segmentation
Computer Aided Detection (CADe)
The speed of graph-based segmentation approaches, such as random walker (RW) and graph cut (GC), depends strongly on image size. For high-resolution images, the time required to compute a segmentation based on user input renders interaction tedious. We propose a novel method, using an approximate contour sketched by the user, to reduce the graph before passing it on to a segmentation algorithm such as RW or GC. This enables a significantly faster feedback loop. The user first draws a rough contour of the object to segment. Then, the pixels of the image are partitioned into "layers" (corresponding to different scales) based on their distance from the contour. The thickness of these layers increases with distance to the contour according to a Fibonacci sequence. An initial segmentation result is rapidly obtained after automatically generating foreground and background labels according to a specifically selected layer; all vertices beyond this layer are eliminated, restricting the segmentation to regions near the drawn contour. Further foreground background labels can then be added by the user to refine the segmentation. All iterations of the graph-based segmentation benefit from a reduced input graph, while maintaining full resolution near the object boundary. A user study with 16 participants was carried out for RW segmentation of a multi-modal dataset of 22 medical images, using either a standard mouse or a stylus pen to draw the contour. Results reveal that our approach significantly reduces the overall segmentation time compared with the status quo approach (p < 0.01). The study also shows that our approach works well with both input devices. Compared to super-pixel graph reduction, our approach provides full resolution accuracy at similar speed on a high-resolution benchmark image with both RW and GC segmentation methods. However, graph reduction based on super-pixels does not allow interactive correction of clustering errors. Finally, our approach can be combined with super-pixel clustering methods for further graph reduction, resulting in even faster segmentation. (C) 2016 Elsevier Inc. All rights reserved.
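The layer-partition idea can be sketched with a distance transform: pixels are grouped into layers by distance from the user's rough contour, with layer thickness growing as a Fibonacci sequence, and graph vertices beyond a chosen layer are discarded. The contour raster and cut-off layer below are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def fibonacci_layers(contour_mask: np.ndarray, n_layers: int = 8) -> np.ndarray:
    """Label each pixel with a layer index (0 = on the contour itself)."""
    dist = distance_transform_edt(~contour_mask)
    fib, edges = [1, 2], [0.0]
    while len(edges) < n_layers + 1:        # cumulative Fibonacci thicknesses
        edges.append(edges[-1] + fib[0])
        fib = [fib[1], fib[0] + fib[1]]
    return np.digitize(dist, edges[1:])

contour = np.zeros((128, 128), dtype=bool)  # rough square outline drawn by the user
contour[40:90, 40] = contour[40:90, 90] = True
contour[40, 40:91] = contour[89, 40:91] = True
layers = fibonacci_layers(contour)
keep = layers <= 4                           # graph restricted to the inner layers
print(f"vertices kept: {keep.mean():.0%} of the image")
```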
AIR-Net: A novel multi-task learning method with auxiliary image reconstruction for predicting EGFR mutation status on CT images of NSCLC patients
Gui, D.
Song, Q.
Song, B.
Li, H.
Wang, M.
Min, X.
Li, A.
Comput Biol Med2022Journal Article, cited 0 times
Website
NSCLC Radiogenomics
Auxiliary image reconstruction
EGFR mutation status prediction
Multi-task learning
Non-small cell lung cancer
LUNG
Automated and accurate EGFR mutation status prediction using computed tomography (CT) imagery is of great value for tailoring optimal treatments to non-small cell lung cancer (NSCLC) patients. However, existing deep learning based methods usually adopt a single-task learning strategy to design and train EGFR mutation status prediction models with limited training data, which may be insufficient to learn distinguishable representations for promoting prediction performance. In this paper, a novel multi-task learning method named AIR-Net is proposed to precisely predict EGFR mutation status on CT images. First, an auxiliary image reconstruction task is effectively integrated with EGFR mutation status prediction, aiming at providing extra supervision during the training phase. In particular, we employ multi-level information in a shared encoder to generate more comprehensive representations of tumors. Second, a powerful feature consistency loss is further introduced to constrain the semantic consistency of original and reconstructed images, which contributes to enhanced image reconstruction and offers more effective regularization to AIR-Net during training. Performance analysis of AIR-Net indicates that auxiliary image reconstruction plays an essential role in identifying EGFR mutation status. Furthermore, extensive experimental results demonstrate that our method achieves favorable performance against other competitive prediction methods. All the results of this study suggest the effectiveness and superiority of AIR-Net in precisely predicting the EGFR mutation status of NSCLC.
Combining Multistaged Filters and Modified Segmentation Network for Improving Lung Nodules Classification
Gunawan, R.
Tran, Y.
Zheng, J.
Nguyen, H.
Carrigan, A.
Mills, M. K.
Chai, R.
IEEE J Biomed Health Inform2024Journal Article, cited 0 times
Website
LungCT-Diagnosis
LDCT-and-Projection-data
Advancements in computational technology have led to a shift towards automated detection processes in lung cancer screening, particularly through nodule segmentation techniques. These techniques employ thresholding to distinguish between soft and firm tissues, including cancerous nodules. The challenge of accurately detecting nodules close to critical lung structures such as blood vessels, bronchi, and the pleura highlights the necessity for more sophisticated methods to enhance diagnostic accuracy. This paper proposes combined processing filters for data preparation before using a modified convolutional neural network (CNN) as the classifier. With refined filters, the nodule targets are solid, semi-solid, and ground glass, ranging from low-stage cancer (cancer screening data) to high-stage cancer. Furthermore, two additional steps were added to address juxta-pleural nodules, and both pre-processing and classification are performed in the 3-dimensional domain, in contrast to the usual image classification. The accuracy results indicate that even a simple segmentation network, if modified correctly, can improve the classification result compared to the other eight models. The proposed sequence reached a total accuracy of 99.7%, with 99.71% cancer-class accuracy and 99.82% non-cancer accuracy, much higher than any previous research, which can improve the detection efforts of radiologists.
Image Recovery from Synthetic Noise Artifacts in CT Scans Using Modified U-Net
Gunawan, Rudy
Tran, Yvonne
Zheng, Jinchuan
Nguyen, Hung
Chai, Rifai
Sensors (Basel)2022Journal Article, cited 0 times
Website
NLST
LDCT-and-Projection-data
Algorithm Development
Computed Tomography (CT)
Image denoising
LUNG
*Artifacts
*Image Processing, Computer-Assisted/methods
Radiation Dosage
Signal-To-Noise Ratio
Tomography, X-Ray Computed/methods
Computed tomography (CT) is commonly used for cancer screening as it utilizes low radiation for the scan. One problem with low-dose scans is the noise artifacts associated with low photon count, which can reduce the success rate of cancer detection during radiologist assessment. The noise has to be removed to restore detail clarity. We propose a noise removal method using a new convolutional neural network (CNN) model. Even though the network training time is long, the result is better than that of other CNN models in quality score and visual observation. The proposed CNN model uses a stacked modified U-Net with a specific number of feature maps per layer to improve the image quality, observable as an average PSNR quality-score improvement across 174 images. The next-best model scores 0.54 points lower on average. The score difference is less than 1 point, but the resulting image is closer to the full-dose scan image. We used separate testing data to verify that the model can handle different noise densities. Besides comparing CNN configurations, we discuss the denoising quality of CNNs compared to classical denoising, in which the noise characteristics affect quality.
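The quality score used above, PSNR, follows directly from its definition, PSNR = 10 * log10(MAX^2 / MSE). A minimal sketch, with toy images and dynamic range standing in for full-dose/denoised CT pairs:

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(6)
full_dose = rng.random((256, 256))
denoised = np.clip(full_dose + rng.normal(0, 0.02, full_dose.shape), 0, 1)
print(f"PSNR: {psnr(full_dose, denoised):.2f} dB")  # roughly 34 dB for sigma = 0.02
```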
COVID-Net CT-2: Enhanced Deep Neural Networks for Detection of COVID-19 From Chest CT Images Through Bigger, More Diverse Learning
Gunraj, Hayden
Sabri, Ali
Koff, David
Wong, Alexander
2022Journal Article, cited 0 times
CT Images in COVID-19
LIDC-IDRI
The COVID-19 pandemic continues to rage on, with multiple waves causing substantial harm to health and economies around the world. Motivated by the use of computed tomography (CT) imaging at clinical institutes around the world as an effective complementary screening method to RT-PCR testing, we introduced COVID-Net CT, a deep neural network tailored for detection of COVID-19 cases from chest CT images, along with a large curated benchmark dataset comprising 1,489 patient cases as part of the open-source COVID-Net initiative. However, one potential limiting factor is restricted data quantity and diversity given the single nation patient cohort used in the study. To address this limitation, in this study we introduce enhanced deep neural networks for COVID-19 detection from chest CT images which are trained using a large, diverse, multinational patient cohort. We accomplish this through the introduction of two new CT benchmark datasets, the largest of which comprises a multinational cohort of 4,501 patients from at least 16 countries. To the best of our knowledge, this represents the largest, most diverse multinational cohort for COVID-19 CT images in open-access form. Additionally, we introduce a novel lightweight neural network architecture called COVID-Net CT S, which is significantly smaller and faster than the previously introduced COVID-Net CT architecture. We leverage explainability to investigate the decision-making behavior of the trained models and ensure that decisions are based on relevant indicators, with the results for select cases reviewed and reported on by two board-certified radiologists with over 10 and 30 years of experience, respectively. The best-performing deep neural network in this study achieved accuracy, COVID-19 sensitivity, positive predictive value, specificity, and negative predictive value of 99.0%/99.1%/98.0%/99.4%/99.7%, respectively. Moreover, explainability-driven performance validation shows consistency with radiologist interpretation by leveraging correct, clinically relevant critical factors. The results are promising and suggest the strong potential of deep neural networks as an effective tool for computer-aided COVID-19 assessment. While not a production-ready solution, we hope the open-source, open-access release of COVID-Net CT-2 and the associated benchmark datasets will continue to enable researchers, clinicians, and citizen data scientists alike to build upon them.
A Fast Nearest Neighbor Search Scheme Over Outsourced Encrypted Medical Images
Medical imaging is crucial for medical diagnosis, and the sensitive nature of medical images necessitates rigorous security and privacy solutions to be in place. In a cloud-based medical system for Healthcare Industry 4.0, medical images should be encrypted prior to being outsourced. However, processing queries over encrypted data without first executing the decryption operation is challenging and impractical at present. In this paper, we propose a secure and efficient scheme to find the exact nearest neighbor over encrypted medical images. Instead of calculating the Euclidean distance, we reject candidates by computing a lower bound of the Euclidean distance that is related to the mean and standard deviation of the data. Unlike most existing schemes, our scheme can obtain the exact nearest neighbor rather than an approximate result. We then evaluate our proposed approach to demonstrate its utility.
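The pruning idea rests on a cheap lower bound for the Euclidean distance built from each vector's mean and standard deviation: for d-dimensional a and b, ||a - b||^2 >= d * ((mu_a - mu_b)^2 + (sigma_a - sigma_b)^2), so a candidate whose bound already exceeds the best distance found so far can be rejected without computing the full distance. The plaintext sketch below illustrates only this bound; the paper's encrypted-domain protocol is not reproduced:

```python
import numpy as np

def exact_nn_with_pruning(query: np.ndarray, database: np.ndarray) -> int:
    """Exact nearest neighbor using the mean/std lower bound to skip candidates."""
    d = query.shape[0]
    mu_q, sd_q = query.mean(), query.std()
    mu, sd = database.mean(axis=1), database.std(axis=1)
    bounds = d * ((mu - mu_q) ** 2 + (sd - sd_q) ** 2)  # lower bounds on ||x - q||^2
    best_idx, best_sq = -1, np.inf
    for i in np.argsort(bounds):            # visit most promising candidates first
        if bounds[i] >= best_sq:
            break                           # all remaining bounds are even larger
        dist_sq = np.sum((database[i] - query) ** 2)
        if dist_sq < best_sq:
            best_idx, best_sq = i, dist_sq
    return best_idx

rng = np.random.default_rng(7)
db = rng.normal(size=(5000, 128))           # stand-in image feature vectors
q = rng.normal(size=128)
assert exact_nn_with_pruning(q, db) == np.argmin(((db - q) ** 2).sum(axis=1))
```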
Cascaded Global Context Convolutional Neural Network for Brain Tumor Segmentation
A cascade of global context convolutional neural networks is proposed to segment multi-modality MR images of brain tumors into three subregions: enhancing tumor, whole tumor, and tumor core. Each network is a modification of the 3D U-Net consisting of residual connections, group normalization, and deep supervision. In addition, we apply a Global Context (GC) block to capture long-range dependency and inter-channel dependency. We use a combination of logarithmic Dice loss and weighted cross entropy loss to focus on less accurate voxels and improve the accuracy. Experiments with the BraTS 2019 validation set show that the proposed method achieved average Dice scores of 0.77338, 0.90712, and 0.83911 for enhancing tumor, whole tumor, and tumor core, respectively. The corresponding values for the BraTS 2019 testing set were 0.79303, 0.87962, and 0.82887 for enhancing tumor, whole tumor, and tumor core, respectively.
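A hedged PyTorch sketch of the loss combination named above (a logarithmic Dice term plus weighted cross entropy) follows; the weighting scheme and the mixing factor alpha are assumptions, and the paper's exact formulation may differ:

```python
import torch
import torch.nn.functional as F

def log_dice_wce_loss(logits, target, class_weights, alpha=0.5, eps=1e-5):
    """logits: (B, C, D, H, W) raw scores; target: (B, D, H, W) integer labels."""
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, logits.shape[1]).permute(0, 4, 1, 2, 3).float()
    inter = (probs * onehot).sum(dim=(0, 2, 3, 4))
    denom = (probs + onehot).sum(dim=(0, 2, 3, 4))
    dice = (2 * inter + eps) / (denom + eps)     # soft Dice per class
    log_dice = (-torch.log(dice)).mean()         # penalizes low-Dice classes hard
    wce = F.cross_entropy(logits, target, weight=class_weights)
    return alpha * log_dice + (1 - alpha) * wce

logits = torch.randn(2, 4, 8, 16, 16)            # 4 classes: background + 3 subregions
target = torch.randint(0, 4, (2, 8, 16, 16))
weights = torch.tensor([0.1, 1.0, 1.0, 1.0])     # down-weight the background class
print(log_dice_wce_loss(logits, target, weights).item())
```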
Parallel CNN‐deep learning clinical‐imaging signature for assessing pathologic grade and prognosis of soft tissue sarcoma patients
Guo, Jia
Li, Yi‐ming
Guo, Hongwei
Hao, Da‐peng
Xu, Jing‐xu
Huang, Chen‐cui
Han, Hua‐wei
Hou, Feng
Yang, Shi‐feng
Cui, Jian‐ling
Journal of Magnetic Resonance Imaging2024Journal Article, cited 1 times
Website
Soft-tissue-Sarcoma
Texture synthesis for generating realistic-looking bronchoscopic videos
Guo, L.
Nahm, W.
Int J Comput Assist Radiol Surg2023Journal Article, cited 2 times
Website
CPTAC-LSCC
Bronchoscopy
Endoscopy
Generative Adversarial Network (GAN)
Synthetic data generation
Synthetic images
Texture synthesis
Video augmentation
PURPOSE: Synthetic realistic-looking bronchoscopic videos are needed to develop and evaluate depth estimation methods as part of investigating a vision-based bronchoscopic navigation system. To generate these synthetic videos under circumstances where access to real bronchoscopic images/image sequences is limited, we need to create various realistic-looking image textures of the airway inner surface, at large size, from a small number of real bronchoscopic image texture patches. METHODS: A generative adversarial networks-based method is applied to create realistic-looking textures of the airway inner surface by learning from a limited number of small texture patches from real bronchoscopic images. By applying a purely convolutional architecture without any fully connected layers, this method allows the production of textures with arbitrary size. RESULTS: Authentic image textures of the airway inner surface are created. An example of the synthesized textures and two frames of the thereby generated bronchoscopic video are shown. The necessity and sufficiency of the generated textures as image features for further depth estimation methods are demonstrated. CONCLUSIONS: The method can generate textures of the airway inner surface that meet the requirements for the texture itself and for the thereby generated bronchoscopic videos, including "realistic-looking," "long-term temporal consistency," "sufficient image features for depth estimation," and "large size and variety of synthesized textures." It also shows advantages with respect to the easy accessibility of the required source data. Further validation of this approach is planned by utilizing the realistic-looking bronchoscopic videos, with textures generated by this method, as training and test data for depth estimation networks.
An Innovative Model for Detecting Brain Tumors and Glioblastoma Multiforme Disease Patterns
In this article, an innovative model is proposed for detecting brain tumors and glioblastoma multiforme disease patterns (DBT-GBM) in medical imaging. The DBT-GBM model comprises five main steps: conversion of the image into the L*a*b* color space, selection of sample regions in the image, calculation of the average color values of those regions, classification of image pixels with a minimum distance classifier, and the segmentation operation. In this approach, the minimum distance classifier assigns each pixel by calculating the Euclidean distance between that pixel and each color marker of the pattern. In the experiments, the authors apply the DBT-GBM model to real data, namely samples of three anatomic sections of a T1w 3D MRI (axial, sagittal, and coronal cross-sections) from the GBM-3D-Slicer and CBTC datasets. The results show that the proposed DBT-GBM robustly detects GBM disease patterns and cancer nuclei (involving the omics indicative of brain tumors pathologically) in medical imaging, leading to improved segmentation performance compared with existing methods.
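A minimal sketch of a minimum-distance pixel classifier of this kind, here measuring distances in the a*/b* chroma plane of L*a*b* space, a common variant; the marker colors below are hypothetical stand-ins for region-sampled averages:

```python
import numpy as np
from skimage import color

def min_distance_classify(rgb_image, rgb_markers):
    """Assign each pixel to the nearest color marker (Euclidean distance
    in the a*/b* chroma plane of L*a*b* space)."""
    lab = color.rgb2lab(rgb_image)[..., 1:]                       # a*, b* channels
    markers = color.rgb2lab(rgb_markers.reshape(-1, 1, 3))[:, 0, 1:]
    dists = np.linalg.norm(lab[..., None, :] - markers, axis=-1)  # (H, W, n_markers)
    return np.argmin(dists, axis=-1)                              # label map (H, W)

# hypothetical mean colors sampled from user-selected regions
markers = np.array([[0.8, 0.8, 0.8], [0.5, 0.2, 0.2], [0.2, 0.2, 0.5]])
labels = min_distance_classify(np.random.rand(64, 64, 3), markers)
```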
Application of Artificial Intelligence Machine Learning Models in Cancer Diagnosis
Guo, Qizhen
2025Journal Article, cited 0 times
LIDC-IDRI
In the medical domain, artificial intelligence (AI) pertains to the utilization of machine learning models and algorithms for the analysis of intricate medical data. This paper demonstrates the application of learning models in cancer diagnosis through two innovative approaches: the Hybrid U-Net model and an improved MobileNet-V3 model, as well as a novel Multiple Instance Learning (MIL) model. The combination of these models has achieved significant improvements in cancer diagnosis accuracy, interpretability, and performance. The integration of the Hybrid U-Net with MobileNet-V3 effectively enhances image segmentation and feature extraction efficiency, particularly in analysis of medical imaging tasks such as skin cancer detection. Through hyperparameter optimization, these models perform exceptionally well with complex data, significantly improving performance metrics such as accuracy, PPV, NPV, and AUC. On the other hand, the MIL model, with its innovative attention mechanism, has shown considerable promise in improving the diagnosis of lung cancer by providing greater interpretability and aiding clinical understanding. However, despite the impressive performance of these models, the research also highlights limitations such as small sample sizes and reliance on existing segmentation methods, which affect the stability and broad applicability of the models.
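The attention mechanism in MIL models of this kind is commonly a learned weighted average over instance embeddings, with the weights doubling as an interpretability map; a generic sketch of attention-based MIL pooling, not this paper's specific model (PyTorch, hypothetical dimensions):

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Attention-based MIL pooling: the bag score is a weighted average of
    instance embeddings; the learned weights indicate which instances
    (e.g. image patches) drove the prediction."""
    def __init__(self, in_dim=512, hidden=128):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))
        self.head = nn.Linear(in_dim, 1)

    def forward(self, instances):                        # (n_instances, in_dim)
        a = torch.softmax(self.attn(instances), dim=0)   # attention weights, sum to 1
        bag = (a * instances).sum(dim=0)                 # pooled bag embedding
        return torch.sigmoid(self.head(bag)), a.squeeze(-1)

model = AttentionMIL()
prob, weights = model(torch.randn(50, 512))              # one bag of 50 patch embeddings
```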
Comparison of performances of conventional and deep learning-based methods in segmentation of lung vessels and registration of chest radiographs
Guo, W.
Gu, X.
Fang, Q.
Li, Q.
Radiol Phys Technol2020Journal Article, cited 0 times
Website
LIDC-IDRI
Convolutional Neural Network (CNN)
Deep learning
Image registration
Segmentation
LUNG
Vasculature
Conventional machine learning-based methods have been effective in assisting physicians in making accurate decisions and have been utilized in computer-aided diagnosis for more than 30 years. Recently, deep learning-based methods, and convolutional neural networks in particular, have rapidly become preferred options in medical image analysis because of their state-of-the-art performance. However, the performances of conventional and deep learning-based methods cannot be compared reliably when they are evaluated on different datasets. Hence, we developed both conventional and deep learning-based methods for lung vessel segmentation and chest radiograph registration, and subsequently compared their performances on the same datasets. The results strongly indicated the superiority of deep learning-based methods over their conventional counterparts.
Prediction of clinical phenotypes in invasive breast carcinomas from the integration of radiomics and genomics data
Guo, Wentian
Li, Hui
Zhu, Yitan
Lan, Li
Yang, Shengjie
Drukker, Karen
Morris, Elizabeth
Burnside, Elizabeth
Whitman, Gary
Giger, Maryellen L
Ji, Y.
TCGA Breast Phenotype Research Group
Journal of Medical Imaging2015Journal Article, cited 57 times
Website
TCGA-BRCA
Breast
Radiogenomics
Genomic and radiomic imaging profiles of invasive breast carcinomas from The Cancer Genome Atlas and The Cancer Imaging Archive were integrated and a comprehensive analysis was conducted to predict clinical outcomes using the radiogenomic features. Variable selection via LASSO and logistic regression were used to select the most-predictive radiogenomic features for the clinical phenotypes, including pathological stage, lymph node metastasis, and status of estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2). Cross-validation with receiver operating characteristic (ROC) analysis was performed and the area under the ROC curve (AUC) was employed as the prediction metric. Higher AUCs were obtained in the prediction of pathological stage, ER, and PR status than for lymph node metastasis and HER2 status. Overall, the prediction performances by genomics alone, radiomics alone, and combined radiogenomics features showed statistically significant correlations with clinical outcomes; however, improvement on the prediction performance by combining genomics and radiomics data was not found to be statistically significant, most likely due to the small sample size of 91 cancer cases with 38 radiomic features and 144 genomic features.
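A minimal sketch of the analysis pattern described, L1-penalized (LASSO-like) logistic regression for variable selection with cross-validated ROC AUC; the data below are synthetic stand-ins matching the stated dimensions (scikit-learn):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(91, 182))        # 38 radiomic + 144 genomic features (synthetic)
y = rng.integers(0, 2, size=91)       # e.g. ER status

# L1 penalty performs variable selection inside the logistic regression
clf = make_pipeline(StandardScaler(),
                    LogisticRegression(penalty="l1", solver="liblinear", C=0.5))
aucs = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"AUC = {aucs.mean():.2f} +/- {aucs.std():.2f}")
```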
Domain Knowledge Based Brain Tumor Segmentation and Overall Survival Prediction
Guo, Xiaoqing
Yang, Chen
Lam, Pak Lun
Woo, Peter Y. M.
Yuan, Yixuan
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Gradient boosting
3d convolutional neural network (CNN)
Automatically segmenting the sub-regions of gliomas (necrosis, edema, and enhancing tumor) and accurately predicting overall survival (OS) time from multimodal MRI sequences have important clinical significance in the diagnosis, prognosis, and treatment of gliomas. However, due to the high degree of variation in heterogeneous appearance and individual physical state, sub-region segmentation and OS prediction are very challenging. To deal with these challenges, we utilize a 3D dilated multi-fiber network (DMFNet) with weighted Dice loss for brain tumor segmentation, which incorporates prior knowledge of volume statistics and obtains a balance between small and large objects in MRI scans. For OS prediction, we propose a DenseNet-based 3D neural network with a position encoding convolutional layer (PECL) to extract meaningful features from T1 contrast MRI, T2 MRI, and previously segmented sub-regions. Both labeled and unlabeled data are utilized to prevent over-fitting in semi-supervised learning. The learned deep features, along with handcrafted features (such as age and tumor volume) and position encoding segmentation features, are fed to a Gradient Boosting Decision Tree (GBDT) to predict a specific OS day.
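The final stage, feeding concatenated deep and handcrafted features to a GBDT regressor for OS-day prediction, could look roughly like this sketch; the features are synthetic stand-ins and scikit-learn's gradient boosting is used in place of whatever GBDT implementation the authors chose:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
deep = rng.normal(size=(120, 64))            # learned deep features (stand-in)
hand = rng.normal(size=(120, 5))             # handcrafted: age, tumor volumes, ...
X = np.hstack([deep, hand])
os_days = rng.integers(50, 1500, size=120).astype(float)

gbdt = GradientBoostingRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
pred_days = cross_val_predict(gbdt, X, os_days, cv=5)   # cross-validated OS-day estimates
```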
Brain Tumor Segmentation Based on Attention Mechanism and Multi-model Fusion
Guo, Xutao
Yang, Chushu
Ma, Ting
Zhou, Pengzheng
Lu, Shangfeng
Ji, Nan
Li, Deling
Wang, Tong
Lv, Haiyan
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Convolutional Neural Network (CNN)
U-Net
Brain tumors are uncontrolled growths of abnormal cells in the brain, with high incidence and mortality. Among them, gliomas are the most common primary malignant tumors, with varying degrees of invasion. Segmentation of brain tumors is a prerequisite for disease diagnosis, surgical planning, and prognosis. Based on the characteristics of brain tumor data, we designed a multi-model fusion brain tumor automatic segmentation algorithm with an attention mechanism [1]. Our network architecture is a slightly modified 3D U-Net [2], to which the attention mechanism was added. Four independent networks are designed, varying in training patch size and attention mechanism; we use 64 × 64 × 64 and 128 × 128 × 128 patch sizes to train the different sub-networks. Finally, the label-level outputs of the four models are combined to obtain the final segmentation result. This multi-model fusion effectively improves the robustness of the algorithm, while the attention mechanism improves the feature extraction ability of the network and the segmentation accuracy. Our experimental study on the newly released BraTS dataset (BraTS 2019) shows that our method accurately delineates brain tumors.
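Label-level fusion of several sub-networks is often implemented as a per-voxel majority vote; the abstract does not spell out its fusion rule, so the following is a generic sketch of that step (numpy, synthetic predictions):

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse segmentations from several models by per-voxel majority vote.
    label_maps: (n_models, D, H, W) integer label volumes."""
    stacked = np.asarray(label_maps)
    n_classes = int(stacked.max()) + 1
    votes = np.stack([(stacked == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)           # (D, H, W) fused label volume

preds = np.random.randint(0, 4, size=(4, 8, 64, 64))   # four sub-networks, four labels
fused = majority_vote(preds)
```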
Segmentation of glioblastomas via 3D FusionNet
Guo, X.
Zhang, B.
Peng, Y.
Chen, F.
Li, W.
Front Oncol2024Journal Article, cited 0 times
Website
UPENN-GBM
3D deep learning model
Magnetic Resonance Imaging (MRI)
SegNet
U-net
brain tumor segmentation
INTRODUCTION: This study presented an end-to-end 3D deep learning model for the automatic segmentation of brain tumors. METHODS: The MRI data used in this study were obtained from a cohort of 630 GBM patients from the University of Pennsylvania Health System (UPENN-GBM). Data augmentation techniques such as flips and rotations were employed to further increase the sample size of the training set. The segmentation performance of the models was evaluated by recall, precision, Dice score, Lesion False Positive Rate (LFPR), Average Volume Difference (AVD), and Average Symmetric Surface Distance (ASSD). RESULTS: When applying FLAIR, T1, ceT1, and T2 MRI modalities, FusionNet-A and FusionNet-C were the best-performing models overall, with FusionNet-A particularly excelling in the enhancing tumor areas and FusionNet-C demonstrating strong performance in the necrotic core and peritumoral edema regions. FusionNet-A excels in the enhancing tumor areas across all metrics (0.75 recall, 0.83 precision, and 0.74 Dice score) and also performs well in the peritumoral edema regions (0.77 recall, 0.77 precision, and 0.75 Dice score). Combinations including FLAIR and ceT1 tend to have better segmentation performance, especially for necrotic core regions; using only FLAIR achieves a recall of 0.73 for peritumoral edema regions. Visualization results also indicate that our model generally achieves segmentation results similar to the ground truth. DISCUSSION: FusionNet combines the benefits of U-Net and SegNet, outperforming both in tumor segmentation. Although our model effectively segments brain tumors with competitive accuracy, we plan to extend the framework to achieve even better segmentation performance.
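The overlap metrics reported (recall, precision, Dice) can all be computed from the true-positive counts of binary masks; a compact sketch:

```python
import numpy as np

def seg_metrics(pred, gt):
    """Overlap metrics for one region from binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    return {
        "recall": tp / max(gt.sum(), 1),
        "precision": tp / max(pred.sum(), 1),
        "dice": 2 * tp / max(pred.sum() + gt.sum(), 1),
    }

pred = np.random.rand(16, 64, 64) > 0.5   # e.g. predicted enhancing-tumor mask
gt = np.random.rand(16, 64, 64) > 0.5     # ground-truth mask
print(seg_metrics(pred, gt))
```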
Gross tumor volume segmentation for head and neck cancer radiotherapy using deep dense multi-modality network
Guo, Zhe
Guo, Ning
Gong, Kuang
Zhong, Shun’an
Li, Quanzheng
Physics in Medicine and Biology2019Journal Article, cited 0 times
Head-Neck-PET-CT
Deep Learning
Head and Neck
PET-CT
In radiation therapy, the accurate delineation of gross tumor volume (GTV) is crucial for treatment planning. However, it is challenging for head and neck cancer (HNC) due to the morphological complexity of various organs in the head, low target-to-background contrast, and potential artifacts on conventional planning CT images. Thus, manual delineation of GTV on anatomical images is extremely time-consuming and suffers from inter-observer variability that leads to planning uncertainty. With the wide use of PET/CT imaging in oncology, complementary functional and anatomical information can be utilized for tumor contouring, bringing a significant advantage for radiation therapy planning. In this study, by taking advantage of multi-modality PET and CT images, we propose an automatic GTV segmentation framework based on deep learning for HNC. The backbone of this segmentation framework is based on 3D convolution with dense connections, which enables better information propagation and takes full advantage of the features extracted from multi-modality input images. We evaluate our proposed framework on a dataset including 250 HNC patients. Each patient received both planning CT and PET/CT imaging before radiation therapy (RT). Manually delineated GTV contours by radiation oncologists are used as ground truth in this study. To further investigate the advantage of our proposed Dense-Net framework, we also compared with a framework using 3D U-Net, the state of the art in segmentation tasks. Meanwhile, for each framework, the performance of single-modality input (PET or CT image) is compared with that of multi-modality input (both PET and CT). Dice coefficient, mean surface distance (MSD), 95th-percentile Hausdorff distance (HD95), and displacement of mass centroid (DMC) are calculated for quantitative evaluation. The dataset is split into train (140 patients), validation (35 patients), and test (75 patients) groups to optimize the network. Based on the results on the independent test group, our proposed multi-modality Dense-Net (Dice 0.73) shows better performance than the compared network (Dice 0.71). Furthermore, the proposed Dense-Net structure has fewer trainable parameters than the 3D U-Net, which reduces prediction variability. In conclusion, our proposed multi-modality Dense-Net enables satisfactory GTV segmentation for HNC using multi-modality images and yields superior performance to conventional methods. Our proposed method provides an automatic, fast, and consistent solution for GTV segmentation and shows potential to be generally applied to radiation therapy planning for a variety of cancers (e.g., lung, sarcoma, and liver).
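Dense connectivity means each layer consumes the concatenation of all earlier feature maps, which is what enables the information propagation described; a toy 3D dense block illustrating the pattern for a two-channel PET/CT input, not the authors' exact architecture (PyTorch):

```python
import torch
import torch.nn as nn

class DenseBlock3D(nn.Module):
    """3D dense connectivity: each layer receives the concatenation of all
    previous feature maps, easing information and gradient flow."""
    def __init__(self, in_ch, growth=16, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv3d(ch, growth, 3, padding=1),
                nn.BatchNorm3d(growth), nn.ReLU()))
            ch += growth

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

block = DenseBlock3D(in_ch=2)                    # e.g. PET + CT as two input channels
out = block(torch.randn(1, 2, 32, 64, 64))       # -> (1, 2 + 4*16, 32, 64, 64)
```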
Novel computer‐aided lung cancer detection based on convolutional neural network‐based and feature‐based classifiers using metaheuristics
Guo, Z. Q.
Xu, L. A.
Si, Y. J.
Razmjooy, N.
International Journal of Imaging Systems and Technology2021Journal Article, cited 1 times
Website
LungCT-Diagnosis
Computer Aided Diagnosis (CADx)
optimization
Classification
Algorithm Development
This study proposes a lung cancer diagnosis system based on computed tomography (CT) scan images. The proposed method uses a sequential approach with two well-organized classifiers: a convolutional neural network (CNN) and a feature-based method. In the first step, the CNN classifier is optimized using a newly designed optimization method called the improved Harris hawk optimizer, it is applied to the dataset, and classification commences. If the disease cannot be detected by this classifier, the case is conveyed to the second, feature-based classifier, which uses Haralick and LBP features. Finally, if the feature-based method also does not detect cancer, the case is classified as healthy; otherwise, it is classified as cancerous.
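The sequential decision logic reduces to a two-stage cascade in which a case is declared healthy only when both classifiers are negative; a schematic sketch with toy stand-ins for the two trained classifiers:

```python
import numpy as np

def cascade_predict(image, cnn_model, feature_model):
    """Two-stage cascade: a case is declared healthy only when both the CNN
    and the Haralick/LBP feature-based classifier are negative."""
    if cnn_model(image) == 1:        # stage 1: optimized CNN
        return "cancerous"
    if feature_model(image) == 1:    # stage 2: feature-based classifier
        return "cancerous"
    return "healthy"

# toy stand-ins for the two trained classifiers
cnn_model = lambda img: int(img.mean() > 0.6)
feature_model = lambda img: int(img.std() > 0.3)
print(cascade_predict(np.random.rand(64, 64), cnn_model, feature_model))
```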
Efficient Transfer Learning using Pre-trained Models on CT/MRI
The medical imaging field faces unique obstacles when performing computer vision classification tasks. The retrieval of the data, be it CT scans or MRI, is not only expensive but also limited due to the lack of publicly available labeled data. Nevertheless, clinicians often need this medical imaging data to perform diagnosis and make treatment recommendations. This motivates the use of efficient transfer learning techniques, not only to condense the complexity of the often-volumetric data, but also to achieve better results faster through established machine learning techniques like transfer learning, fine-tuning, and shallow deep learning. In this paper, we introduce a three-step process to perform classification using CT scan and MRI data. The process makes use of fine-tuning to align the pretrained model with the target classes, feature extraction to preserve learned information for downstream classification tasks, and shallow deep learning to perform subsequent training. Experiments compare the performance of the proposed methodology as well as the time-cost trade-offs of our technique against other baseline methods. Through these experiments we find that our proposed method outperforms all baselines while achieving a substantial speed-up in overall training time.
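The three-step recipe (fine-tune, extract features, train a shallow classifier) might be sketched as follows, assuming a torchvision ResNet-18 backbone as a stand-in for whichever pretrained model is actually used (downloading pretrained weights requires network access):

```python
import torch
import torch.nn as nn
from torchvision import models

# Step 1: fine-tune a pretrained backbone on the target classes
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)    # e.g. tumor vs. normal
# ... short fine-tuning loop on labelled CT/MRI slices goes here ...

# Step 2: freeze the network and keep it as a fixed feature extractor
backbone.fc = nn.Identity()
for p in backbone.parameters():
    p.requires_grad = False

# Step 3: train a shallow classifier on the extracted features
shallow = nn.Sequential(nn.Linear(512, 64), nn.ReLU(), nn.Linear(64, 2))
with torch.no_grad():
    feats = backbone(torch.randn(8, 3, 224, 224))      # a toy batch of slices
logits = shallow(feats)
```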
Multi-branch Learning Framework with Different Receptive Fields Ensemble for Brain Tumor Segmentation
Guohua, Cheng
Mengyan, Luo
Linyang, He
Lingqiang, Mo
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Segmentation of brain tumors from 3D magnetic resonance images (MRIs) is one of the key elements for diagnosis and treatment. Most current practice depends on manual segmentation, which is time-consuming and subjective. In this paper, we propose a robust method for automatic segmentation of brain tumor images that fully exploits the complementarity between models and training programs with different structures. Due to significant size differences among brain tumors, a model with a single receptive field is not robust. To solve this problem, we propose: i) a cascade model with a 3D U-Net-like architecture, whose small receptive field focuses on local details; ii) a 3D U-Net model combined with a VAE module, whose large receptive field focuses on global information; and iii) a redesigned Multi-Branch Network with a Cascade Attention Network, which provides different receptive fields for different types of brain tumors, accommodating the scale differences between various brain tumors and making full use of prior knowledge of the task. The ensemble of all these models further improves overall performance on BraTS 2019 [10] image segmentation. We evaluate the proposed methods on the validation dataset of the BraTS 2019 segmentation challenge and achieve Dice coefficients of 0.91, 0.83, and 0.79 for the whole tumor, tumor core, and enhanced tumor core, respectively. Our experiments indicate that the proposed methods have promising potential in the field of brain tumor segmentation.
A tool for lung nodules analysis based on segmentation and morphological operation
Brain Tumor Detection using Curvelet Transform and Support Vector Machine
Gupta, Bhawna
Tiwari, Shamik
International Journal of Computer Science and Mobile Computing2014Journal Article, cited 8 times
Website
Hierarchical deep multi-modal network for medical visual question answering
Gupta, Deepak
Suman, Swati
Ekbal, Asif
Expert Systems with Applications2021Journal Article, cited 0 times
Head-Neck-PET-CT
LGG-1p19qDeletion
MRI-DIR
NSCLC Radiogenomics
Visual Question Answering in the Medical domain (VQA-Med) plays an important role in providing medical assistance to end-users. These users are expected to raise either a straightforward question with a Yes/No answer or a challenging question that requires a detailed and descriptive answer. Existing techniques in VQA-Med fail to distinguish between the different question types, sometimes complicating the simpler problems or over-simplifying the complicated ones, and maintaining several distinct systems for different question types can lead to confusion and discomfort for end-users. To address this issue, we propose a hierarchical deep multi-modal network that analyzes and classifies end-user questions/queries and then incorporates a query-specific approach for answer prediction. We refer to our proposed approach as Hierarchical Question Segregation based Visual Question Answering, in short HQS-VQA. Our contributions are three-fold: firstly, we propose a question segregation (QS) technique for VQA-Med; secondly, we integrate the QS model into the hierarchical deep multi-modal neural network to generate proper answers to queries related to medical images; and thirdly, we study the impact of QS in Medical-VQA by comparing the performance of the proposed model with and without QS. We evaluate the performance of our proposed model on two benchmark datasets, viz. RAD and CLEF18. Experimental results show that our proposed HQS-VQA technique outperforms the baseline models by significant margins. We also conduct a detailed quantitative and qualitative analysis of the obtained results and discover potential causes of errors and their solutions.
C-NMC: B-lineage acute lymphoblastic leukaemia: A blood cancer dataset
Gupta, Ritu
Gehlot, Shiv
Gupta, Anubha
Medical Engineering & Physics2022Journal Article, cited 0 times
Website
C-NMC 2019
Leukemia
Computer Aided Diagnosis (CADx)
Jenner-Giemsa stain
histopathology imaging features
Classification
Development of computer-aided cancer diagnostic tools is an active research area owing to advancements in the deep-learning domain. Such technological solutions provide affordable and easily deployable diagnostic tools. Leukaemia, or blood cancer, is one of the leading cancers, causing more than 0.3 million deaths every year. To aid the development of such AI-enabled tools, we collected and curated a microscopic image dataset, namely C-NMC, of more than 15,000 very-high-resolution cell images of B-lineage acute lymphoblastic leukaemia (B-ALL). The dataset is prepared at the subject level and contains images of both healthy subjects and cancer patients. So far, this is the largest curated dataset on B-ALL cancer in the public domain. C-NMC is available at The Cancer Imaging Archive (TCIA), USA, and can be helpful to the research community worldwide for the development of B-ALL cancer diagnostic tools. This dataset was utilized in an international medical imaging challenge held at the ISBI 2019 conference in Venice, Italy. In this paper, we present a detailed description and the challenges of this dataset. We also present benchmarking results of all the methods applied so far on this dataset.
Appropriate Contrast Enhancement Measures for Brain and Breast Cancer Images
Gupta, Suneet
Porwal, Rabins
International Journal of Biomedical Imaging2016Journal Article, cited 10 times
Website
BRAIN
BREAST
Image Enhancement/methods
Medical imaging systems often produce images with poor contrast that must be enhanced before they are examined by medical professionals; this is necessary for proper diagnosis and subsequent treatment. Various enhancement algorithms improve medical images to different extents, and various quantitative metrics or measures evaluate the quality of an image. This paper suggests the most appropriate measures for two types of medical images, namely brain cancer images and breast cancer images.
The REMBRANDT study, a large collection of genomic data from brain cancer patients
Gusev, Yuriy
Bhuvaneshwar, Krithika
Song, Lei
Zenklusen, Jean-Claude
Fine, Howard
Madhavan, Subha
Scientific Data2018Journal Article, cited 1 times
Website
REMBRANDT
brain cancer
GDMI
G-DOC
Cancer Digital Slide Archive: an informatics resource to support integrated in silico analysis of TCGA pathology data
Gutman, David A
Cobb, Jake
Somanna, Dhananjaya
Park, Yuna
Wang, Fusheng
Kurc, Tahsin
Saltz, Joel H
Brat, Daniel J
Cooper, Lee AD
Kong, Jun
Journal of the American Medical Informatics Association2013Journal Article, cited 70 times
Website
TCGA-GBM
TCGA-BRCA
Digital pathology
Data integration
BACKGROUND: The integration and visualization of multimodal datasets is a common challenge in biomedical informatics. Several recent studies of The Cancer Genome Atlas (TCGA) data have illustrated important relationships between morphology observed in whole-slide images, outcome, and genetic events. The pairing of genomics and rich clinical descriptions with whole-slide imaging provided by TCGA presents a unique opportunity to perform these correlative studies. However, better tools are needed to integrate the vast and disparate data types. OBJECTIVE: To build an integrated web-based platform supporting whole-slide pathology image visualization and data integration. MATERIALS AND METHODS: All images and genomic data were directly obtained from the TCGA and National Cancer Institute (NCI) websites. RESULTS: The Cancer Digital Slide Archive (CDSA) produced is accessible to the public (http://cancer.digitalslidearchive.net) and currently hosts more than 20,000 whole-slide images from 22 cancer types. DISCUSSION: The capabilities of CDSA are demonstrated using TCGA datasets to integrate pathology imaging with associated clinical, genomic and MRI measurements in glioblastomas and can be extended to other tumor types. CDSA also allows URL-based sharing of whole-slide images, and has preliminary support for directly sharing regions of interest and other annotations. Images can also be selected on the basis of other metadata, such as mutational profile, patient age, and other relevant characteristics. CONCLUSIONS: With the increasing availability of whole-slide scanners, analysis of digitized pathology images will become increasingly important in linking morphologic observations with genomic and clinical endpoints.
MR Imaging Predictors of Molecular Profile and Survival: Multi-institutional Study of the TCGA Glioblastoma Data Set
Gutman, David A
Cooper, Lee A D
Hwang, Scott N
Holder, Chad A
Gao, Jingjing
Aurora, Tarun D
Dunn, William D Jr
Scarpace, Lisa
Mikkelsen, Tom
Jain, Rajan
Wintermark, Max
Jilwan, Manal
Raghavan, Prashant
Huang, Erich
Clifford, Robert J
Mongkolwat, Pattanasak
Kleper, Vladimir
Freymann, John
Kirby, Justin
Zinn, Pascal O
Moreno, Carlos S
Jaffe, Carl
Colen, Rivka
Rubin, Daniel L
Saltz, Joel
Flanders, Adam
Brat, Daniel J
Radiology2013Journal Article, cited 217 times
Website
TCGA-GBM
Radiomics
Radiogenomics
Glioblastoma Multiforme (GBM)
BRAIN
PURPOSE: To conduct a comprehensive analysis of radiologist-made assessments of glioblastoma (GBM) tumor size and composition by using a community-developed controlled terminology of magnetic resonance (MR) imaging visual features as they relate to genetic alterations, gene expression class, and patient survival. MATERIALS AND METHODS: Because all study patients had been previously deidentified by the Cancer Genome Atlas (TCGA), a publicly available data set that contains no linkage to patient identifiers and that is HIPAA compliant, no institutional review board approval was required. Presurgical MR images of 75 patients with GBM with genetic data in the TCGA portal were rated by three neuroradiologists for size, location, and tumor morphology by using a standardized feature set. Interrater agreements were analyzed by using the Krippendorff alpha statistic and intraclass correlation coefficient. Associations between survival, tumor size, and morphology were determined by using multivariate Cox regression models; associations between imaging features and genomics were studied by using the Fisher exact test. RESULTS: Interrater analysis showed significant agreement in terms of contrast material enhancement, nonenhancement, necrosis, edema, and size variables. Contrast-enhanced tumor volume and longest axis length of tumor were strongly associated with poor survival (respectively, hazard ratio: 8.84, P = .0253, and hazard ratio: 1.02, P = .00973), even after adjusting for Karnofsky performance score (P = .0208). Proneural class GBM had significantly lower levels of contrast enhancement (P = .02) than other subtypes, while mesenchymal GBM showed lower levels of nonenhanced tumor (P < .01). CONCLUSION: This analysis demonstrates a method for consistent image feature annotation capable of reproducibly characterizing brain tumors; this study shows that radiologists' estimations of macroscopic imaging features can be combined with genetic alterations and gene expression subtypes to provide deeper insight to the underlying biologic properties of GBM subsets.
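The survival analysis described, Cox regression of imaging variables adjusted for Karnofsky performance score, can be reproduced in outline with the lifelines package; all values below are synthetic stand-ins, not the study's data:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "enhancing_volume_cm3": rng.gamma(2.0, 10.0, 75),   # contrast-enhanced tumor volume
    "longest_axis_mm": rng.normal(45.0, 10.0, 75),
    "kps": rng.choice([60, 70, 80, 90], 75),            # Karnofsky performance score
    "survival_days": rng.integers(30, 1200, 75).astype(float),
    "event": rng.integers(0, 2, 75),                    # 1 = death observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="survival_days", event_col="event")
cph.print_summary()   # hazard ratio per feature, adjusted for the other covariates
```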
Web based tools for visualizing imaging data and development of XNATView, a zero footprint image viewer
Gutman, David A
Dunn Jr, William D
Cobb, Jake
Stoner, Richard M
Kalpathy-Cramer, Jayashree
Erickson, Bradley
Frontiers in Neuroinformatics2014Journal Article, cited 12 times
Website
Algorithm Development
XNAT
DICOM
Advances in web technologies now allow direct visualization of imaging data sets without necessitating the download of large file sets or the installation of software. This allows centralization of file storage and facilitates image review and analysis. XNATView is a light framework recently developed in our lab to visualize DICOM images stored in The Extensible Neuroimaging Archive Toolkit (XNAT). It consists of a PyXNAT-based framework to wrap around the REST application programming interface (API) and query the data in XNAT. XNATView was developed to simplify quality assurance, help organize imaging data, and facilitate data sharing for intra- and inter-laboratory collaborations. Its zero-footprint design allows the user to connect to XNAT from a web browser, navigate through projects, experiments, and subjects, and view DICOM images with accompanying metadata all within a single viewing instance.
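A PyXNAT-based traversal of the XNAT REST API, of the kind the viewer wraps, might look like the sketch below; the server URL and credentials are placeholders, and a reachable XNAT instance is assumed:

```python
from pyxnat import Interface

# server URL and credentials are placeholders
xnat = Interface(server="https://xnat.example.org", user="user", password="pass")

# walk projects -> subjects -> experiments through the REST API
for project in xnat.select.projects():
    for subject in project.subjects():
        for experiment in subject.experiments():
            print(project.id(), subject.label(), experiment.label())
```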
Somatic mutations associated with MRI-derived volumetric features in glioblastoma
Gutman, David A
Dunn Jr, William D
Grossmann, Patrick
Cooper, Lee AD
Holder, Chad A
Ligon, Keith L
Alexander, Brian M
Aerts, Hugo JWL
Neuroradiology2015Journal Article, cited 45 times
Website
Radiomics
BRAIN
Glioblastoma Multiforme (GBM)
Magnetic Resonance Imaging (MRI)
INTRODUCTION: MR imaging can noninvasively visualize tumor phenotype characteristics at the macroscopic level. Here, we investigated whether somatic mutations are associated with and can be predicted by MRI-derived tumor imaging features of glioblastoma (GBM). METHODS: Seventy-six GBM patients were identified from The Cancer Imaging Archive for whom preoperative T1-contrast (T1C) and T2-FLAIR MR images were available. For each tumor, a set of volumetric imaging features and their ratios were measured, including necrosis, contrast enhancing, and edema volumes. Imaging genomics analysis assessed the association of these features with mutation status of nine genes frequently altered in adult GBM. Finally, area under the curve (AUC) analysis was conducted to evaluate the predictive performance of imaging features for mutational status. RESULTS: Our results demonstrate that MR imaging features are strongly associated with mutation status. For example, TP53-mutated tumors had significantly smaller contrast enhancing and necrosis volumes (p = 0.012 and 0.017, respectively) and RB1-mutated tumors had significantly smaller edema volumes (p = 0.015) compared to wild-type tumors. MRI volumetric features were also found to significantly predict mutational status. For example, AUC analysis results indicated that TP53, RB1, NF1, EGFR, and PDGFRA mutations could each be significantly predicted by at least one imaging feature. CONCLUSION: MRI-derived volumetric features are significantly associated with and predictive of several cancer-relevant, drug-targetable DNA mutations in glioblastoma. These results may shed insight into unique growth characteristics of individual tumors at the macroscopic level resulting from molecular events as well as increase the use of noninvasive imaging in personalized medicine.
X2V: 3D Organ Volume Reconstruction From a Planar X-Ray Image With Neural Implicit Methods
Guven, Gokce
Ates, Hasan F.
Ugurdag, H. Fatih
IEEE Access2024Journal Article, cited 0 times
Website
National Lung Screening Trial (NLST)
Organ segmentation
Algorithm Development
In this work, an innovative approach is proposed for three-dimensional (3D) organ volume reconstruction from a single planar X-ray, namely the X2V network. Such capability holds pivotal clinical potential, especially in real-time image-guided radiotherapy, computer-aided surgery, and patient follow-up sessions. Traditional methods for 3D volume reconstruction from X-rays often require the utilization of statistical 3D organ templates, which are employed in 2D/3D registration. However, these methods may not accurately account for the variation in organ shapes across different subjects. Our X2V model overcomes this problem by leveraging neural implicit representation. A vision transformer model is integrated as an encoder network, specifically designed to direct and enhance attention to particular regions within the X-ray image. The reconstructed meshes exhibit a similar topology to the ground-truth organ volume, demonstrating the ability of X2V to accurately capture the 3D structure from a 2D image. The effectiveness of X2V is evaluated on lung X-rays using several metrics, including volumetric Intersection over Union (IoU). X2V outperforms the state-of-the-art method in the literature for lungs (DeepOrganNet) by about 7-9%, achieving IoUs between 0.892 and 0.942 versus DeepOrganNet's 0.815-0.888.
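Volumetric IoU, the headline metric here, is simply the intersection of two voxel volumes divided by their union; a compact sketch:

```python
import numpy as np

def volumetric_iou(pred, gt):
    """Intersection over union of two binary voxel volumes."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

pred = np.random.rand(64, 64, 64) > 0.5   # voxelized reconstructed mesh (stand-in)
gt = np.random.rand(64, 64, 64) > 0.5     # ground-truth organ volume (stand-in)
print(volumetric_iou(pred, gt))
```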
Optimising delineation accuracy of tumours in PET for radiotherapy planning using blind deconvolution
Guvenis, A
Koc, A
Radiation Protection Dosimetry2015Journal Article, cited 3 times
Website
Algorithm Development
Computer Assisted Detection (CAD)
Segmentation
Positron Emission Tomography (PET)
Phantom
Positron emission tomography (PET) imaging has been proven to be useful in radiotherapy planning for the determination of the metabolically active regions of tumours. Delineation of tumours, however, is a difficult task in part due to high noise levels and the partial volume effects originating mainly from the low camera resolution. The goal of this work is to study the effect of blind deconvolution on tumour volume estimation accuracy for different computer-aided contouring methods. The blind deconvolution estimates the point spread function (PSF) of the imaging system in an iterative manner in a way that the likelihood of the given image being the convolution output is maximised. In this way, the PSF of the imaging system does not need to be known. Data were obtained from a NEMA NU-2 IQ-based phantom with a GE DSTE-16 PET/CT scanner. The artificial tumour diameters were 13, 17, 22, 28 and 37 mm with a target/background ratio of 4:1. The tumours were delineated before and after blind deconvolution. Student's two-tailed paired t-test showed a significant decrease in volume estimation error (p < 0.001) when blind deconvolution was used in conjunction with computer-aided delineation methods. A manual delineation confirmation demonstrated an improvement from 26 to 16 % for the artificial tumour of size 37 mm while an improvement from 57 to 15 % was noted for the small tumour of 13 mm. Therefore, it can be concluded that blind deconvolution of reconstructed PET images may be used to increase tumour delineation accuracy.
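Blind deconvolution of this general kind, estimating the PSF iteratively while maximizing the likelihood of the observed image, can be sketched as alternating Richardson-Lucy updates of the image and the PSF. This is a naive, Fish-et-al.-style scheme for 2D float arrays, not the specific algorithm used in the study:

```python
import numpy as np
from scipy.signal import fftconvolve

def rl_step(estimate, kernel, observed):
    """One Richardson-Lucy multiplicative update of `estimate`,
    treating `kernel` (same shape) as the fixed convolution kernel."""
    ratio = observed / (fftconvolve(estimate, kernel, mode="same") + 1e-12)
    return estimate * fftconvolve(ratio, kernel[::-1, ::-1], mode="same")

def blind_richardson_lucy(observed, n_outer=10, n_inner=5):
    """Naive blind RL: alternate updates of image and PSF (both full size).
    Because convolution is commutative, the same update rule serves both."""
    img = np.full_like(observed, observed.mean())
    psf = np.zeros_like(observed)
    psf[tuple(s // 2 for s in observed.shape)] = 1.0
    psf = fftconvolve(psf, np.ones((5, 5)) / 25.0, mode="same")  # broad initial guess
    for _ in range(n_outer):
        for _ in range(n_inner):
            img = rl_step(img, psf, observed)
        for _ in range(n_inner):
            psf = rl_step(psf, img, observed)
            psf /= psf.sum()                                     # keep PSF normalized
    return img, psf

blurred = np.abs(np.random.default_rng(0).normal(size=(64, 64)))  # stand-in PET slice
deconvolved, est_psf = blind_richardson_lucy(blurred)
```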
A serial image analysis architecture with positron emission tomography using machine learning combined for the detection of lung cancer
Guzman Ortiz, S.
Hurtado Ortiz, R.
Jara Gavilanes, A.
Avila Faican, R.
Parra Zambrano, B.
Rev Esp Med Nucl Imagen Mol (Engl Ed)2024Journal Article, cited 0 times
Website
Lung-PET-CT-Dx
Positron Emission Tomography (PET)
Lung cancer
Machine learning
INTRODUCTION AND OBJECTIVES: Lung cancer is the second type of cancer with the second highest incidence rate and the first with the highest mortality rate in the world. Machine learning through the analysis of imaging tests such as positron emission tomography/computed tomography (PET/CT) has become a fundamental tool for the early and accurate detection of cancer. The objective of this study was to propose an image analysis architecture (PET/CT) ordered in phases through the application of ensemble or combined machine learning methods for the early detection of lung cancer by analyzing PET/CT images. MATERIAL AND METHODS: A retrospective observational study was conducted utilizing a public dataset entitled "A large-scale CT and PET/CT dataset for lung cancer diagnosis." Various imaging modalities, including CT, PET, and fused PET/CT images, were employed. The architecture or framework of this study comprised the following phases: 1. Image loading or collection, 2. Image selection, 3. Image transformation, and 4. Balancing the frequency distribution of image classes. Predictive models for lung cancer detection using PET/CT images included: a) the Stacking model, which used Random Forest and Support Vector Machine (SVM) as base models and complemented them with a logistic regression model, and b) the Boosting model, which employed the Adaptive Boosting (AdaBoost) model for comparison with the Stacking model. Quality metrics used for evaluation included accuracy, precision, recall, and F1-score. RESULTS: This study showed a general performance of 94% with the Stacking method and a general performance of 77% with the Boosting method. CONCLUSIONS: The Stacking method proved to be a model with high performance and quality for lung cancer detection when analyzing PET/CT images.
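The two ensembles compared map directly onto scikit-learn estimators; a minimal sketch on synthetic stand-in features:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# synthetic stand-in for features derived from CT/PET/fused images
X, y = make_classification(n_samples=600, n_features=50, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200)),
                ("svm", SVC(probability=True))],
    final_estimator=LogisticRegression())
boost = AdaBoostClassifier(n_estimators=200)

for name, model in [("stacking", stack), ("adaboost", boost)]:
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(name, round(acc.mean(), 3))
```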
Una arquitectura de análisis de imágenes seriadas con la tomografía por emisión de positrones mediante la aplicación de machine learning combinado para la detección del cáncer de pulmón
Guzmán Ortiz, S.
Hurtado Ortiz, R.
Jara Gavilanes, A.
Ávila Faican, R.
Parra Zambrano, B.
2024Journal Article, cited 0 times
Lung-PET-CT-Dx
INTRODUCTION AND OBJECTIVES: Lung cancer is the second type of cancer with the second highest incidence rate and the first with the highest mortality rate in the world. Machine learning through the analysis of imaging tests such as positron emission tomography/computed tomography (PET/CT) has become a fundamental tool for the early and accurate detection of cancer. The objective of this study was to propose an image analysis architecture (PET/CT) ordered in phases through the application of ensemble or combined machine learning methods for the early detection of lung cancer by analyzing PET/CT images. MATERIAL AND METHODS: A retrospective observational study was conducted utilizing a public dataset titled "A large-scale CT and PET/CT dataset for lung cancer diagnosis." Various imaging modalities, including CT, PET, and fused PET/CT images, were employed. The architecture or framework of this study comprised the following phases: 1. image loading or collection, 2. image selection, 3. image transformation, and 4. balancing the frequency distribution of image classes. Predictive models for lung cancer detection using PET/CT images included: a) the Stacking model, which used Random Forest and Support Vector Machine (SVM) as base models and complemented them with a logistic regression model as the final model, and b) the Boosting model, which employed the Adaptive Boosting (AdaBoost) model for comparison with the Stacking model. Quality metrics used for evaluation included accuracy, precision, recall, and F1-score. RESULTS: This study showed a general performance of 94% with the Stacking method and a general performance of 77% with the Boosting method. CONCLUSIONS: The Stacking method proved to be a model with high performance and quality for lung cancer detection when analyzing PET/CT images.
A Fully Automatic Procedure for Brain Tumor Segmentation from Multi-Spectral MRI Records Using Ensemble Learning and Atlas-Based Data Enhancement
Győrfi, Ágnes
Szilágyi, László
Kovács, Levente
Applied Sciences2021Journal Article, cited 0 times
BraTS 2015
BraTS 2019
Algorithm Development
Magnetic Resonance Imaging (MRI)
Supervised training
The accurate and reliable segmentation of gliomas from magnetic resonance imaging (MRI) data has an important role in diagnosis, intervention planning, and monitoring the tumor's evolution during and after therapy. Segmentation faces serious anatomical obstacles such as the great variety in the tumor's location, size, shape, and appearance, and the displacement of normal tissues. Other phenomena, like intensity inhomogeneity and the lack of a standard intensity scale in MRI data, represent further difficulties. This paper proposes a fully automatic brain tumor segmentation procedure that attempts to handle all the above problems. Built on the MRI data provided by the MICCAI Brain Tumor Segmentation (BraTS) Challenges, the procedure consists of three main phases. The first, pre-processing phase prepares the MRI data for supervised classification by attempting to fix missing data, suppressing the intensity inhomogeneity, normalizing the histogram of observed data channels, generating additional morphological, gradient-based, and Gabor-wavelet features, and optionally applying atlas-based data enhancement. The second phase accomplishes the main classification process using ensembles of binary decision trees and provides an initial, intermediary labeling for each pixel of test records. The last phase reevaluates these intermediary labels using a random forest classifier, then deploys a spatial region-growing-based structural validation of suspected tumors, thus achieving a high-quality final segmentation result. The accuracy of the procedure is evaluated using the multi-spectral MRI records of the BraTS 2015 and BraTS 2019 training datasets. The procedure achieves high-quality segmentation results, characterized by average Dice similarity scores of up to 86%.
Radiomics feature reproducibility under inter-rater variability in segmentations of CT images
Identifying image features that are robust with respect to segmentation variability is a tough challenge in radiomics. So far, this problem has mainly been tackled in test-retest analyses. In this work we analyse radiomics feature reproducibility in two phases: first with manual segmentations provided by four expert readers and second with probabilistic automated segmentations using a recently developed neural network (PHiseg). We test feature reproducibility on three publicly available datasets of lung, kidney and liver lesions. We find consistent results both over manual and automated segmentations in all three datasets and show that there are subsets of radiomic features which are robust against segmentation variability and other radiomic features which are prone to poor reproducibility under differing segmentations. By providing a detailed analysis of robustness of the most common radiomics features across several datasets, we envision that more reliable and reproducible radiomic models can be built in the future based on this work.
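Feature robustness across readers is typically quantified with the intraclass correlation coefficient on a long-format table of (lesion, reader, value) triples; a sketch using the pingouin package on synthetic values:

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
lesions, readers = 30, 4
# long-format table: one feature value per (lesion, reader) pair
df = pd.DataFrame({
    "lesion": np.repeat(np.arange(lesions), readers),
    "reader": np.tile([f"R{i}" for i in range(readers)], lesions),
    "value": rng.normal(size=lesions * readers),
})

icc = pg.intraclass_corr(data=df, targets="lesion", raters="reader",
                         ratings="value")
print(icc[["Type", "ICC"]])   # e.g. keep features whose ICC exceeds a chosen cutoff
```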
Multicenter PET image harmonization using generative adversarial networks
Haberl, D.
Spielvogel, C. P.
Jiang, Z.
Orlhac, F.
Iommi, D.
Carrio, I.
Buvat, I.
Haug, A. R.
Papp, L.
Eur J Nucl Med Mol Imaging2024Journal Article, cited 0 times
Website
FDG-PET-CT-Lesions
Head-Neck-PET-CT
Deep learning
Generative adversarial networks
Harmonization
Multicenter
Quantitative PET
PURPOSE: To improve reproducibility and predictive performance of PET radiomic features in multicentric studies by cycle-consistent generative adversarial network (GAN) harmonization approaches. METHODS: GAN-harmonization was developed to harmonize whole-body PET scans to perform image style and texture translation between different centers and scanners. GAN-harmonization was evaluated by application to two retrospectively collected open datasets and different tasks. First, GAN-harmonization was performed on a dual-center lung cancer cohort (127 female, 138 male) where the reproducibility of radiomic features in healthy liver tissue was evaluated. Second, GAN-harmonization was applied to a head and neck cancer cohort (43 female, 154 male) acquired from three centers. Here, the clinical impact of GAN-harmonization was analyzed by predicting the development of distant metastases using a logistic regression model incorporating first-order statistics and texture features from baseline (18)F-FDG PET before and after harmonization. RESULTS: Image quality remained high (structural similarity: left kidney ≥ 0.800, right kidney ≥ 0.806, liver ≥ 0.780, lung ≥ 0.838, spleen ≥ 0.793, whole-body ≥ 0.832) after image harmonization across all utilized datasets. Using GAN-harmonization, inter-site reproducibility of radiomic features in healthy liver tissue increased at least by ≥ 5 ± 14% (first-order), ≥ 16 ± 7% (GLCM), ≥ 19 ± 5% (GLRLM), ≥ 16 ± 8% (GLSZM), ≥ 17 ± 6% (GLDM), and ≥ 23 ± 14% (NGTDM). In the head and neck cancer cohort, the outcome prediction improved from AUC 0.68 (95% CI 0.66-0.71) to AUC 0.73 (0.71-0.75) by application of GAN-harmonization. CONCLUSIONS: GANs are capable of performing image harmonization and increase reproducibility and predictive performance of radiomic features derived from different centers and scanners.
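A structural-similarity check of the kind reported, comparing a scan before and after harmonization, can be done with scikit-image; synthetic arrays stand in for PET slices here:

```python
import numpy as np
from skimage.metrics import structural_similarity

original = np.random.rand(128, 128).astype(np.float32)       # stand-in PET slice
harmonized = (original + 0.05 * np.random.rand(128, 128)).astype(np.float32)

ssim = structural_similarity(original, harmonized,
                             data_range=float(harmonized.max() - harmonized.min()))
print(f"SSIM after harmonization: {ssim:.3f}")               # high values mean anatomy preserved
```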
Overall survival prediction for high-grade glioma patients using mathematical modeling of tumor cell infiltration
Häger, Wille
Toma-Dașu, Iuliana
Astaraki, Mehdi
Lazzeroni, Marta
Physica Medica2023Journal Article, cited 0 times
BraTS-TCGA-GBM
PURPOSE: This study aimed at applying a mathematical framework for the prediction of high-grade glioma (HGG) cell invasion into normal tissues for guiding clinical target delineation, and at investigating the possibility of using tumor infiltration maps for patient overall survival (OS) prediction. MATERIAL & METHODS: A model describing tumor infiltration into normal tissue was applied to 93 HGG cases. Tumor infiltration maps and corresponding isocontours at different cell densities were produced. ROC curves were used to seek correlations between patient OS and the volume encompassed by a particular isocontour. Area-under-the-curve (AUC) values were used to determine the isocontour having the highest predictive ability. The optimal cut-off volume, having the highest sensitivity and specificity, for each isocontour was used to divide the patients into two groups for a Kaplan-Meier survival analysis. RESULTS: The highest AUC value, 0.77 (p < 0.05), was obtained for the isocontours of cell densities 1000 cells/mm3 and 2000 cells/mm3. Correlation with the GTV yielded an AUC of 0.73 (p < 0.05). The Kaplan-Meier survival analysis using the 1000 cells/mm3 isocontour and the ROC optimal cut-off volume for patient group selection rendered a hazard ratio (HR) of 2.7 (p < 0.05), while the GTV rendered an HR of 1.6 (p < 0.05). CONCLUSION: The simulated tumor cell invasion is a stronger predictor of overall survival than the segmented GTV, indicating the importance of using mathematical models of cell invasion to assist in the definition of the target for HGG patients.
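The survival analysis described, dichotomizing patients by a cut-off on the isocontour volume and comparing Kaplan-Meier curves with a log-rank test, can be outlined with lifelines; all values below are synthetic stand-ins:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
months = rng.exponential(15.0, 93)                 # follow-up times (synthetic)
event = rng.integers(0, 2, 93)                     # 1 = death observed
volume = rng.gamma(2.0, 20.0, 93)                  # volume inside the 1000 cells/mm3 isocontour
high = volume > np.median(volume)                  # split at a cut-off volume

km = KaplanMeierFitter()
for label, mask in [("small volume", ~high), ("large volume", high)]:
    km.fit(months[mask], event[mask], label=label)
    print(label, "median survival:", km.median_survival_time_)

res = logrank_test(months[high], months[~high], event[high], event[~high])
print("log-rank p =", res.p_value)
```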
PET/CT radiomics signature of human papilloma virus association in oropharyngeal squamous cell carcinoma
Haider, S. P.
Mahajan, A.
Zeevi, T.
Baumeister, P.
Reichel, C.
Sharaf, K.
Forghani, R.
Kucukkaya, A. S.
Kann, B. H.
Judson, B. L.
Prasad, M. L.
Burtness, B.
Payabvash, S.
Eur J Nucl Med Mol Imaging2020Journal Article, cited 1 times
Website
Head-Neck-PET-CT
Radiomics
PURPOSE: To devise, validate, and externally test PET/CT radiomics signatures for human papillomavirus (HPV) association in primary tumors and metastatic cervical lymph nodes of oropharyngeal squamous cell carcinoma (OPSCC). METHODS: We analyzed 435 primary tumors (326 for training, 109 for validation) and 741 metastatic cervical lymph nodes (518 for training, 223 for validation) using FDG-PET and non-contrast CT from a multi-institutional and multi-national cohort. Utilizing 1037 radiomics features per imaging modality and per lesion, we trained, optimized, and independently validated machine-learning classifiers for prediction of HPV association in primary tumors, lymph nodes, and combined "virtual" volumes of interest (VOI). PET-based models were additionally validated in an external cohort. RESULTS: Single-modality PET and CT final models yielded similar classification performance without significant difference in independent validation; however, models combining PET and CT features outperformed single-modality PET- or CT-based models, with receiver operating characteristic area under the curve (AUC) of 0.78, and 0.77 for prediction of HPV association using primary tumor lesion features, in cross-validation and independent validation, respectively. In the external PET-only validation dataset, final models achieved an AUC of 0.83 for a virtual VOI combining primary tumor and lymph nodes, and an AUC of 0.73 for a virtual VOI combining all lymph nodes. CONCLUSION: We found that PET-based radiomics signatures yielded similar classification performance to CT-based models, with potential added value from combining PET- and CT-based radiomics for prediction of HPV status. While our results are promising, radiomics signatures may not yet substitute tissue sampling for clinical decision-making.
Prediction of post-radiotherapy locoregional progression in HPV-associated oropharyngeal squamous cell carcinoma using machine-learning analysis of baseline PET/CT radiomics
Haider, S. P.
Sharaf, K.
Zeevi, T.
Baumeister, P.
Reichel, C.
Forghani, R.
Kann, B. H.
Petukhova, A.
Judson, B. L.
Prasad, M. L.
Liu, C.
Burtness, B.
Mahajan, A.
Payabvash, S.
Transl Oncol2020Journal Article, cited 0 times
Website
HEAD AND NECK
PET/CT
Radiomics
Head-Neck-PET-CT
HNSCC
Locoregional failure remains a therapeutic challenge in oropharyngeal squamous cell carcinoma (OPSCC). We aimed to devise novel objective imaging biomarkers for prediction of locoregional progression in HPV-associated OPSCC. Following manual lesion delineation, 1037 PET and 1037 CT radiomic features were extracted from each primary tumor and metastatic cervical lymph node on baseline PET/CT scans. Applying random forest machine-learning algorithms, we generated radiomic models for censoring-aware locoregional progression prognostication (evaluated by Harrell's C-index) and risk stratification (evaluated in Kaplan-Meier analysis). A total of 190 patients were included; an optimized model yielded a median (interquartile range) C-index of 0.76 (0.66-0.81; p=0.01) in prognostication of locoregional progression, using combined PET/CT radiomic features from primary tumors. Radiomics-based risk stratification reliably identified patients at risk for locoregional progression within 2-, 3-, 4-, and 5-year follow-up intervals, with log-rank p-values of p=0.003, p=0.001, p=0.02, p=0.006 in Kaplan-Meier analysis, respectively. Our results suggest PET/CT radiomic biomarkers can predict post-radiotherapy locoregional progression in HPV-associated OPSCC. Pending validation in large, independent cohorts, such objective biomarkers may improve patient selection for treatment de-intensification trials in this prognostically favorable OPSCC entity, and eventually facilitate personalized therapy.
Potential Added Value of PET/CT Radiomics for Survival Prognostication beyond AJCC 8th Edition Staging in Oropharyngeal Squamous Cell Carcinoma
Haider, S. P.
Zeevi, T.
Baumeister, P.
Reichel, C.
Sharaf, K.
Forghani, R.
Kann, B. H.
Judson, B. L.
Prasad, M. L.
Burtness, B.
Mahajan, A.
Payabvash, S.
Cancers (Basel)2020Journal Article, cited 2 times
Website
Head-Neck-PET-CT
HNSCC
Accurate risk-stratification can facilitate precision therapy in oropharyngeal squamous cell carcinoma (OPSCC). We explored the potential added value of baseline positron emission tomography (PET)/computed tomography (CT) radiomic features for prognostication and risk stratification of OPSCC beyond the American Joint Committee on Cancer (AJCC) 8th edition staging scheme. Using institutional and publicly available datasets, we included OPSCC patients with known human papillomavirus (HPV) status, without baseline distant metastasis and treated with curative intent. We extracted 1037 PET and 1037 CT radiomic features quantifying lesion shape, imaging intensity, and texture patterns from primary tumors and metastatic cervical lymph nodes. Utilizing random forest algorithms, we devised novel machine-learning models for OPSCC progression-free survival (PFS) and overall survival (OS) using "radiomics" features, "AJCC" variables, and the "combined" set as input. We designed both single- (PET or CT) and combined-modality (PET/CT) models. Harrell's C-index quantified survival model performance; risk stratification was evaluated in Kaplan-Meier analysis. A total of 311 patients were included. In HPV-associated OPSCC, the best "radiomics" model achieved an average C-index ± standard deviation of 0.62 ± 0.05 (p = 0.02) for PFS prediction, compared to 0.54 ± 0.06 (p = 0.32) utilizing "AJCC" variables. Radiomics-based risk-stratification of HPV-associated OPSCC was significant for PFS and OS. Similar trends were observed in HPV-negative OPSCC. In conclusion, radiomics imaging features extracted from pre-treatment PET/CT may provide complementary information to the current AJCC staging scheme for survival prognostication and risk-stratification of HPV-associated OPSCC.
Impact of (18)F-FDG PET Intensity Normalization on Radiomic Features of Oropharyngeal Squamous Cell Carcinomas and Machine Learning-Generated Biomarkers
Haider, S. P.
Zeevi, T.
Sharaf, K.
Gross, M.
Mahajan, A.
Kann, B. H.
Judson, B. L.
Prasad, M. L.
Burtness, B.
Aboian, M.
Canis, M.
Reichel, C. A.
Baumeister, P.
Payabvash, S.
J Nucl Med2024Journal Article, cited 0 times
Website
TCGA-HNSC
HNSCC
Head-Neck-PET-CT
Radiomics-Tumor-Phenotypes
Humans
*Machine Learning
*Oropharyngeal Neoplasms/diagnostic imaging
*Fluorodeoxyglucose F18
Male
Female
Middle Aged
Positron-Emission Tomography/methods
Image Processing, Computer-Assisted/methods
Aged
Carcinoma, Squamous Cell/diagnostic imaging
Biomarkers, Tumor/metabolism
Reproducibility of Results
Radiomics
PET
SUV
machine learning
normalization
We aimed to investigate the effects of (18)F-FDG PET voxel intensity normalization on radiomic features of oropharyngeal squamous cell carcinoma (OPSCC) and machine learning-generated radiomic biomarkers. Methods: We extracted 1,037 (18)F-FDG PET radiomic features quantifying the shape, intensity, and texture of 430 OPSCC primary tumors. The reproducibility of individual features across 3 intensity-normalized images (body-weight SUV, reference tissue activity ratio to lentiform nucleus of brain and cerebellum) and the raw PET data was assessed using an intraclass correlation coefficient (ICC). We investigated the effects of intensity normalization on the features' utility in predicting the human papillomavirus (HPV) status of OPSCCs in univariate logistic regression, receiver-operating-characteristic analysis, and extreme-gradient-boosting (XGBoost) machine-learning classifiers. Results: Of 1,037 features, a high (ICC ≥ 0.90), medium (0.90 > ICC ≥ 0.75), and low (ICC < 0.75) degree of reproducibility across normalization methods was attained in 356 (34.3%), 608 (58.6%), and 73 (7%) features, respectively. In univariate analysis, features from the PET normalized to the lentiform nucleus had the strongest association with HPV status, with 865 of 1,037 (83.4%) significant features after multiple testing corrections and a median area under the receiver-operating-characteristic curve (AUC) of 0.65 (interquartile range, 0.62-0.68). Similar tendencies were observed in XGBoost models, with the lentiform nucleus-normalized model achieving the numerically highest average AUC of 0.72 (SD, 0.07) in the cross validation within the training cohort. The model generalized well to the validation cohorts, attaining an AUC of 0.73 (95% CI, 0.60-0.85) in independent validation and 0.76 (95% CI, 0.58-0.95) in external validation. The AUCs of the XGBoost models were not significantly different. Conclusion: Only one third of the features demonstrated a high degree of reproducibility across intensity-normalization techniques, making uniform normalization a prerequisite for interindividual comparability of radiomic markers. The choice of normalization technique may affect the radiomic features' predictive value with respect to HPV. Our results show trends that normalization to the lentiform nucleus may improve model performance, although more evidence is needed to draw a firm conclusion.
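Of the normalizations compared, body-weight SUV and reference-tissue ratios are simple rescalings of the voxel data; a sketch of both, assuming a decay-corrected injected dose and ~1 g/mL tissue density, with synthetic values:

```python
import numpy as np

def suv_bw(activity_bq_ml, injected_dose_bq, body_weight_g):
    """Body-weight SUV: tissue activity concentration divided by injected
    dose per gram of body weight (decay-corrected dose assumed)."""
    return activity_bq_ml / (injected_dose_bq / body_weight_g)

def reference_ratio(pet_volume, reference_mask):
    """Reference-tissue normalization: divide by mean uptake in a reference
    region (e.g. lentiform nucleus or cerebellum)."""
    return pet_volume / pet_volume[reference_mask].mean()

pet = np.random.rand(64, 64, 64) * 5000.0        # Bq/mL, synthetic
ref = np.zeros(pet.shape, dtype=bool)
ref[30:34, 30:34, 30:34] = True                  # hypothetical reference region
suv = suv_bw(pet, injected_dose_bq=3.5e8, body_weight_g=75_000)
ratio = reference_ratio(pet, ref)
```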
Time-to-event overall survival prediction in glioblastoma multiforme patients using magnetic resonance imaging radiomics
Hajianfar, G.
Haddadi Avval, A.
Hosseini, S. A.
Nazari, M.
Oveisi, M.
Shiri, I.
Zaidi, H.
Radiol Med2023Journal Article, cited 0 times
TCGA-GBM
Glioblastoma
Magnetic Resonance Imaging (MRI)
Machine learning
Overall survival
Radiomics
PURPOSE: Glioblastoma Multiforme (GBM) represents the predominant aggressive primary tumor of the brain with short overall survival (OS) time. We aim to assess the potential of radiomic features in predicting the time-to-event OS of patients with GBM using machine learning (ML) algorithms. MATERIALS AND METHODS: One hundred nineteen patients with GBM, who had T1-weighted contrast-enhanced and T2-FLAIR MRI sequences, along with clinical data and survival time, were enrolled. Image preprocessing methods included 64 bin discretization, Laplacian of Gaussian (LOG) filters with three Sigma values and eight variations of Wavelet Transform. Images were then segmented, followed by the extraction of 1212 radiomic features. Seven feature selection (FS) methods and six time-to-event ML algorithms were utilized. The combination of preprocessing, FS, and ML algorithms (12 x 7 x 6 = 504 models) was evaluated by multivariate analysis. RESULTS: Our multivariate analysis showed that the best prognostic FS/ML combinations are the Mutual Information (MI)/Cox Boost, MI/Generalized Linear Model Boosting (GLMB) and MI/Generalized Linear Model Network (GLMN), all of which were done via the LOG (Sigma = 1 mm) preprocessing method (C-index = 0.77). The LOG filter with Sigma = 1 mm preprocessing method, MI, GLMB and GLMN achieved significantly higher C-indices than other preprocessing, FS, and ML methods (all p values < 0.05, mean C-indices of 0.65, 0.70, and 0.64, respectively). CONCLUSION: ML algorithms are capable of predicting the time-to-event OS of patients using MRI-based radiomic and clinical features. MRI-based radiomics analysis in combination with clinical variables might appear promising in assisting clinicians in the survival prediction of patients with GBM. Further research is needed to establish the applicability of radiomics in the management of GBM in the clinic.
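For readers wanting to reproduce the evaluation metric, Harrell's C-index of a Cox model takes a few lines with the lifelines package; the features below are synthetic stand-ins, not the study's radiomic features:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 119  # cohort size matching the study; the feature values are simulated
df = pd.DataFrame(rng.normal(size=(n, 3)), columns=["feat_glcm", "feat_shape", "feat_log"])
df["os_months"] = rng.exponential(20, size=n).round(1)   # time-to-event outcome
df["event"] = rng.integers(0, 2, size=n)                 # 1 = death observed, 0 = censored

cph = CoxPHFitter(penalizer=0.1)                         # mild ridge penalty for stability
cph.fit(df, duration_col="os_months", event_col="event")
print("Harrell C-index:", cph.concordance_index_)
```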
Impact of harmonization on the reproducibility of MRI radiomic features when using different scanners, acquisition parameters, and image pre-processing techniques: a phantom study
Hajianfar, G.
Hosseini, S. A.
Bagherieh, S.
Oveisi, M.
Shiri, I.
Zaidi, H.
Med Biol Eng Comput2024Journal Article, cited 0 times
RIDER PHANTOM MRI
Harmonization
Magnetic Resonance Imaging (MRI)
Pre-processing
Radiomics
Robustness
Reproducibility
This study investigated the impact of ComBat harmonization on the reproducibility of radiomic features extracted from magnetic resonance images (MRI) acquired on different scanners with various data acquisition parameters and multiple image pre-processing techniques, using a dedicated MRI phantom. Four scanners were used to acquire an MRI of a nonanatomic phantom as part of the TCIA RIDER database. In fast spin-echo inversion recovery (IR) sequences, several inversion durations were employed, including 50, 100, 250, 500, 750, 1000, 1500, 2000, 2500, and 3000 ms. In addition, a 3D fast spoiled gradient recalled echo (FSPGR) sequence was used to investigate several flip angles (FA): 2, 5, 10, 15, 20, 25, and 30 degrees. Nineteen phantom compartments were manually segmented. Different approaches were used to pre-process each image: bin discretization, Wavelet filter, Laplacian of Gaussian, logarithm, square, square root, and gradient. Overall, 92 first-, second-, and higher-order statistical radiomic features were extracted. ComBat harmonization was also applied to the extracted radiomic features. Finally, the intraclass correlation coefficient (ICC) and Kruskal-Wallis (KW) tests were implemented to assess the robustness of radiomic features. Depending on the image pre-processing technique, the number of non-significant features in the KW test ranged between 0-5 and 29-74 for various scanners, 31-91 and 37-92 for the repeated (test-retest) acquisitions, 0-33 and 34-90 for FAs, and 3-68 and 65-89 for IRs, before and after ComBat harmonization, respectively. The number of features with ICC over 90% ranged between 0-8 and 6-60 for various scanners, 11-75 and 17-80 for the repeated acquisitions, 3-83 and 9-84 for FAs, and 3-49 and 3-63 for IRs, before and after ComBat harmonization, respectively. The use of various scanners, IRs, and FAs has a great impact on radiomic features; however, the majority of scanner-robust features are also robust to IR and FA. Among the effective parameters in MR images, repeated tests on one scanner have a negligible impact on radiomic features. Different scanners and acquisition parameters, combined with various image pre-processing techniques, might affect radiomic features to a large extent. ComBat harmonization might significantly impact the reproducibility of MRI radiomic features.
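A minimal sketch of applying ComBat to a radiomic feature matrix, assuming the neuroCombat Python package (Fortin et al.); the feature values and scanner labels are simulated, and the sample count is illustrative:

```python
import numpy as np
import pandas as pd
from neuroCombat import neuroCombat  # assumed installed: pip install neurocombat

rng = np.random.default_rng(1)
n_features, n_samples = 92, 76                    # 92 radiomic features, as in the study
data = rng.normal(size=(n_features, n_samples))   # features x samples, neuroCombat's layout
covars = pd.DataFrame({"scanner": rng.integers(1, 5, size=n_samples)})  # 4 scanners as batches

# neuroCombat returns a dict; "data" holds the batch-adjusted feature matrix
harmonized = neuroCombat(dat=data, covars=covars, batch_col="scanner")["data"]
```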
Multi-faceted computational assessment of risk and progression in oligodendroglioma implicates NOTCH and PI3K pathways
Halani, Sameer H
Yousefi, Safoora
Vega, Jose Velazquez
Rossi, Michael R
Zhao, Zheng
Amrollahi, Fatemeh
Holder, Chad A
Baxter-Stoltzfus, Amelia
Eschbacher, Jennifer
Griffith, Brent
NPJ precision oncology2018Journal Article, cited 0 times
Website
TCGA-LGG
oligodendroglioma
NOTCH1
PI3K
Convolutional 3D to 2D Patch Conversion for Pixel-Wise Glioma Segmentation in MRI Scans
Hamghalam, Mohammad
Lei, Baiying
Wang, Tianfu
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Algorithm Development
BraTS 2019
Challenge
Convolutional Neural Network (CNN)
BRAIN
Magnetic Resonance Imaging (MRI)
Structural magnetic resonance imaging (MRI) has been widely utilized for analysis and diagnosis of brain diseases. Automatic segmentation of brain tumors is a challenging task for computer-aided diagnosis due to low tissue contrast in the tumor subregions. To overcome this, we devise a novel pixel-wise segmentation framework through a convolutional 3D to 2D MR patch conversion model to predict class labels of the central pixel in the input sliding patches. Precisely, we first extract 3D patches from each modality to calibrate slices through the squeeze and excitation (SE) block. Then, the output of the SE block is fed directly into subsequent bottleneck layers to reduce the number of channels. Finally, the calibrated 2D slices are concatenated to obtain multimodal features through a 2D convolutional neural network (CNN) for prediction of the central pixel. In our architecture, both local inter-slice and global intra-slice features are jointly exploited to predict the class label of the central voxel in a given patch through the 2D CNN classifier. We implicitly apply all modalities through trainable parameters to assign weights to the contributions of each sequence for segmentation. Experimental results on the segmentation of brain tumors in multimodal MRI scans (BraTS'19) demonstrate that our proposed method can efficiently segment the tumor regions.
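The slice-calibration step rests on a standard squeeze-and-excitation block; a self-contained PyTorch sketch of such a block over 3D multimodal patches (layer sizes and the patch shape are illustrative, not the authors' exact configuration):

```python
import torch
import torch.nn as nn

class SEBlock3D(nn.Module):
    """Squeeze-and-excitation over the channels of a 3D patch:
    global average pool, bottleneck MLP, sigmoid gating."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)                     # squeeze
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c = x.shape[:2]
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)  # excitation weights
        return x * w                                              # channel-recalibrated patch

patch = torch.randn(2, 4, 16, 16, 16)   # batch of 4-modality 3D patches (toy shape)
out = SEBlock3D(channels=4)(patch)
```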
Brain Tumor Synthetic Segmentation in 3D Multimodal MRI Scans
Hamghalam, Mohammad
Lei, Baiying
Wang, Tianfu
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Generative Adversarial Network (GAN)
Convolutional Neural Network (CNN)
Segmentation
Regression model
The magnetic resonance (MR) analysis of brain tumors is widely used for diagnosis and examination of tumor subregions. The overlapping area among the intensity distribution of healthy, enhancing, non-enhancing, and edema regions makes the automatic segmentation a challenging task. Here, we show that a convolutional neural network trained on high-contrast images can transform the intensity distribution of brain lesions in its internal subregions. Specifically, a generative adversarial network (GAN) is extended to synthesize high-contrast images. A comparison of these synthetic images and real images of brain tumor tissue in MR scans showed significant segmentation improvement and decreased the number of real channels for segmentation. The synthetic images are used as a substitute for real channels and can bypass real modalities in the multimodal brain tumor segmentation framework. Segmentation results on BraTS 2019 dataset demonstrate that our proposed approach can efficiently segment the tumor areas. In the end, we predict patient survival time based on volumetric features of the tumor subregions as well as the age of each case through several regression models.
A computational model for texture analysis in images with a reaction-diffusion based filter
Hamid, Lefraich
Fahim, Houda
Zirhem, Mariam
Alaa, Nour Eddine
Journal of Mathematical Modeling2021Journal Article, cited 0 times
Website
TCGA-SARC
RIDER NEURO MRI
Texture analysis
Model
Radiomics
As one of the most important tasks in image processing, texture analysis is related to a class of mathematical models that characterize the spatial variations of an image. In this paper, in order to extract features of interest, we propose a reaction-diffusion based model which uses the variational approach. First, we describe the mathematical model; then, aiming to simulate the latter accurately, we suggest an efficient numerical scheme. Thereafter, we compare our method to literature findings. Finally, we conclude our analysis with a number of experimental results showing the robustness and the performance of our algorithm.
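As a rough illustration of the reaction-diffusion idea (a generic Gray-Scott-style update seeded by image intensities, not the authors' variational model or numerical scheme), the evolved field can serve as a texture response map:

```python
import numpy as np

def laplacian(Z):
    """5-point stencil Laplacian with periodic borders (np.roll wraps around)."""
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

def reaction_diffusion_filter(img, steps=200, Du=0.16, Dv=0.08, F=0.035, k=0.06):
    """Evolve a Gray-Scott reaction-diffusion system seeded by the image;
    the evolved v field acts as a texture response map."""
    u = np.ones_like(img, dtype=float)
    v = (img - img.min()) / (np.ptp(img) + 1e-8) * 0.25   # seed v with normalized intensities
    for _ in range(steps):
        uvv = u * v * v
        u += Du * laplacian(u) - uvv + F * (1 - u)
        v += Dv * laplacian(v) + uvv - (F + k) * v
    return v

texture_map = reaction_diffusion_filter(np.random.rand(64, 64))
```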
Glioma Classification Using Multimodal Radiology and Histology Data
Hamidinekoo, Azam
Pieciak, Tomasz
Afzali, Maryam
Akanyeti, Otar
Yuan, Yinyin
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Digital Pathology
Magnetic Resonance Imaging (MRI)
Gliomas are brain tumours with a high mortality rate. There are various grades and sub-types of this tumour, and the treatment procedure varies accordingly. Clinicians and oncologists diagnose and categorise these tumours based on visual inspection of radiology and histology data. However, this process can be time-consuming and subjective. Computer-assisted methods can help clinicians to make better and faster decisions. In this paper, we propose a pipeline for automatic classification of gliomas into three sub-types: oligodendroglioma, astrocytoma, and glioblastoma, using both radiology and histopathology images. The proposed approach implements distinct classification models for radiographic and histologic modalities and combines them through an ensemble method. The classification algorithm initially carries out tile-level (for histology) and slice-level (for radiology) classification via a deep learning method, then tile/slice-level latent features are combined for a whole-slide and whole-volume sub-type prediction. The classification algorithm was evaluated using the dataset provided in the CPM-RadPath 2020 challenge. The proposed pipeline achieved an F1-score of 0.886, a Cohen's kappa score of 0.811 and a balanced accuracy of 0.860. The proposed model's ability to learn diverse features end-to-end enables it to give a comparable prediction of glioma tumour sub-types.
4D dosimetric-blood flow model: impact of prolonged fraction delivery times of IMRT on the dose to the circulating lymphocytes
Hammi, A.
Phys Med Biol2023Journal Article, cited 0 times
GLIS-RT
Humans
Radiotherapy, Intensity-Modulated
Hemodynamics
Lymphocytes
Radiometry
Simulation
intensity-modulated radiotherapy (IMRT)
blood flow model
cerebral vasculature
circulating lymphocytes
patient-specific modeling
To investigate the impact of prolonged fraction delivery of modern intensity-modulated radiotherapy (IMRT) on the accumulated dose to the circulating blood during the course of fractionated radiation therapy. We have developed a 4D dosimetric blood flow model (d-BFM) capable of continuously simulating the blood flow through the entire body of the cancer patient and scoring the accumulated dose to blood particles (BPs). We developed a semi-automatic approach that enables us to map the tortuous blood vessels of the surficial brain of individual patients directly from standard magnetic resonance imaging data of the patient. For the rest of the body, we developed a fully-fledged dynamic blood flow transfer model according to the International Commission on Radiological Protection human reference. We proposed a methodology enabling us to design a personalized d-BFM, such that it can be tailored for individual patients by adopting intra- and inter-subject variations. The entire circulatory model tracks over 43 million BPs and has a time resolution of $\Delta t = 10^{-3}$ s. A dynamic dose delivery model was implemented to emulate the spatially and temporally varying pattern of the dose rate during the step-and-shoot mode of IMRT. We evaluated how different dose rate delivery configurations and a prolongation of the fraction delivery time may impact the dose received by the circulating blood (CB). Our calculations indicate that prolonging the fraction treatment time from 7 to 18 min will augment the blood volume receiving any dose, $V_{D>0\,\mathrm{Gy}}$, from 36.1% to 81.5% during a single fraction. The results indicate that increasing the segment number has only a negligible effect on the irradiated blood volume when the fraction time is kept identical. We developed a novel concept of a customized 4D d-BFM that can be tailored to the hemodynamics of individual patients to quantify dose to the CB during fractionated radiotherapy. The prolonged fraction delivery and the variability of the instantaneous dose rate have a significant impact on the accumulated dose distribution during IMRT treatments. This impact should be considered during IMRT treatment design to reduce RT-induced immunosuppressive effects.
Automatic zonal segmentation of the prostate from 2D and 3D T2-weighted MRI and evaluation for clinical use
Hamzaoui, D.
Montagne, S.
Renard-Penna, R.
Ayache, N.
Delingette, H.
J Med Imaging (Bellingham)2022Journal Article, cited 0 times
PROSTATEx
Deep learning
inter-rater variability
Magnetic Resonance Imaging (MRI)
PROSTATE
Automatic Segmentation
Purpose: An accurate zonal segmentation of the prostate is required for prostate cancer (PCa) management with MRI. Approach: The aim of this work is to present UFNet, a deep learning-based method for automatic zonal segmentation of the prostate from T2-weighted (T2w) MRI. It takes into account the image anisotropy, includes both spatial and channelwise attention mechanisms and uses loss functions to enforce prostate partition. The method was applied on a private multicentric three-dimensional T2w MRI dataset and on the public two-dimensional T2w MRI dataset ProstateX. To assess the model performance, the structures segmented by the algorithm on the private dataset were compared with those obtained by seven radiologists of various experience levels. Results: On the private dataset, we obtained a Dice score (DSC) of 93.90 +/- 2.85 for the whole gland (WG), 91.00 +/- 4.34 for the transition zone (TZ), and 79.08 +/- 7.08 for the peripheral zone (PZ). Results were significantly better than those of other compared networks (p-value < 0.05). On ProstateX, we obtained a DSC of 90.90 +/- 2.94 for WG, 86.84 +/- 4.33 for TZ, and 78.40 +/- 7.31 for PZ. These results are similar to state-of-the-art results and, on the private dataset, are coherent with those obtained by radiologists. Zonal locations and sectorial positions of lesions annotated by radiologists were also preserved. Conclusions: Deep learning-based methods can provide an accurate zonal segmentation of the prostate leading to a consistent zonal location and sectorial position of lesions, and therefore can be used as a helpful tool for PCa diagnosis.
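The Dice score (DSC) reported throughout such segmentation studies is straightforward to compute; a minimal Python reference implementation on binary masks (the toy masks are ours):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks: 2|A&B| / (|A|+|B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# toy masks; whole-gland or transition-zone masks would be scored the same way
a = np.zeros((64, 64), bool); a[20:40, 20:40] = True
b = np.zeros((64, 64), bool); b[22:42, 22:42] = True
print(f"DSC = {dice_score(a, b):.3f}")
```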
Semi-supervised learning for an improved diagnosis of COVID-19 in CT images
Han, C. H.
Kim, M.
Kwak, J. T.
PLoS One2021Journal Article, cited 0 times
Website
LCTSC
COVID-19
Lung CT Segmentation Challenge 2017
RIDER Collections
SPIE-AAPM Lung CT Challenge
Deep Learning
Computer Aided Diagnosis (CADx)
Coronavirus disease 2019 (COVID-19) has spread all over the world. Although a real-time reverse-transcription polymerase chain reaction (RT-PCR) test has been used as a primary diagnostic tool for COVID-19, the utility of CT-based diagnostic tools has been suggested to improve the diagnostic accuracy and reliability. Herein we propose a semi-supervised deep neural network for an improved detection of COVID-19. The proposed method utilizes CT images in a supervised and unsupervised manner to improve the accuracy and robustness of COVID-19 diagnosis. Both labeled and unlabeled CT images are employed. Labeled CT images are used for supervised learning. Unlabeled CT images are utilized for unsupervised learning in a way that the feature representations are invariant to perturbations in CT images. To systematically evaluate the proposed method, two COVID-19 CT datasets and three public CT datasets with no COVID-19 CT images are employed. In distinguishing COVID-19 from non-COVID-19 CT images, the proposed method achieves an overall accuracy of 99.83%, sensitivity of 0.9286, specificity of 0.9832, and positive predictive value (PPV) of 0.9192. The results are consistent between the COVID-19 challenge dataset and the public CT datasets. For discriminating between COVID-19 and common pneumonia CT images, the proposed method obtains 97.32% accuracy, 0.9971 sensitivity, 0.9598 specificity, and 0.9326 PPV. Moreover, the comparative experiments with respect to supervised learning and training strategies demonstrate that the proposed method is able to improve the diagnostic accuracy and robustness without exhaustive labeling. The proposed semi-supervised method, exploiting both supervised and unsupervised learning, facilitates an accurate and reliable diagnosis for COVID-19, leading to improved patient care and management.
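The unsupervised part of such a semi-supervised scheme, predictions invariant to perturbations of unlabeled images, is commonly written as a consistency loss; a hedged PyTorch sketch (the toy classifier and the Gaussian noise model are ours, not the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, unlabeled, noise_std: float = 0.05):
    """Unsupervised term: the classifier's prediction on a perturbed unlabeled
    CT batch should match its prediction on the clean batch."""
    with torch.no_grad():
        target = F.softmax(model(unlabeled), dim=1)            # clean prediction
    noisy = unlabeled + noise_std * torch.randn_like(unlabeled)
    return F.kl_div(F.log_softmax(model(noisy), dim=1), target,
                    reduction="batchmean")                     # penalize divergence

# toy classifier and unlabeled batch; the total loss would add supervised
# cross-entropy on the labeled CT images
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(32 * 32, 2))
loss = consistency_loss(model, torch.randn(8, 1, 32, 32))
```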
Vector quantization-based automatic detection of pulmonary nodules in thoracic CT images
Computer-aided detection (CADe) of pulmonary nodules from computed tomography (CT) scans is critical for assisting radiologists in identifying lung lesions at an early stage. In this paper, we propose a novel CADe system for lung nodule detection based on a vector quantization (VQ) approach. Compared to existing CADe systems, the extraction of lungs from the chest CT image is fully automatic, and the detection and segmentation of initial nodule candidates (INCs) within the lung volume is fast and accurate due to the self-adaptive nature of the VQ algorithm. False positives in the detected INCs are reduced by rule-based pruning in combination with a feature-based support vector machine classifier. We validate the proposed approach on 60 CT scans from a publicly available database. Preliminary results show that our CADe system effectively detects nodules with a sensitivity of 90.53% at a specificity level of 86.00%.
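The core VQ step can be approximated with a k-means codebook over voxel intensities; a sketch under that assumption (the cluster count and the synthetic HU values are illustrative, not the paper's codebook design):

```python
import numpy as np
from sklearn.cluster import KMeans

def vq_segment(ct_slice: np.ndarray, n_codewords: int = 4) -> np.ndarray:
    """Vector-quantize voxel intensities into a small codebook; the darkest
    codeword approximates air/lung, a self-adaptive analogue of thresholding."""
    codes = KMeans(n_clusters=n_codewords, n_init=10, random_state=0).fit_predict(
        ct_slice.reshape(-1, 1).astype(float))
    labels = codes.reshape(ct_slice.shape)
    # reorder labels by ascending cluster mean so label 0 = lowest attenuation
    order = np.argsort([ct_slice[labels == k].mean() for k in range(n_codewords)])
    return np.argsort(order)[labels]

slice_hu = np.random.normal(-400, 300, size=(128, 128))  # synthetic HU values
lung_candidate = vq_segment(slice_hu) == 0
```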
A novel computer-aided detection system for pulmonary nodule identification in CT images
Computer-aided detection (CADe) of pulmonary nodules from computed tomography (CT) scans is critical for assisting radiologists in identifying lung lesions at an early stage. In this paper, we propose a novel approach for CADe of lung nodules using a two-stage vector quantization (VQ) scheme. The first-stage VQ aims to extract the lungs from the chest volume, while the second-stage VQ is designed to extract initial nodule candidates (INCs) within the lung volume. Then rule-based expert filtering is employed to prune obvious FPs from INCs, and the commonly-used support vector machine (SVM) classifier is adopted to further reduce the FPs. The proposed system was validated on 100 CT scans randomly selected from the 262 scans that have at least one juxta-pleural nodule annotation in the publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) database. The two-stage VQ only missed 2 out of the 207 nodules at agreement level 1, and the INC detection for each scan took about 30 seconds on average. Expert filtering reduced FPs more than 18 times, while maintaining a sensitivity of 93.24%. As it is trivial to distinguish INCs attached to the pleural wall from those that are not, we investigated the feasibility of training different SVM classifiers to further reduce FPs from these two kinds of INCs. Experimental results indicated that SVM classification over the entire set of INCs was favored, with the optimal operating point of our CADe system achieving a sensitivity of 89.4% at a specificity of 86.8%.
Multimodal Brain Image Analysis and Survival Prediction Using Neuromorphic Attention-Based Neural Networks
Han, Il Song
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
BraTS 2020
2D and 3D radiomics features
Challenge
BRAIN
Segmentation
Algorithm Development
Accurate analysis of brain tumors from 3D Magnetic Resonance Imaging (MRI) is necessary for diagnosis and treatment planning, and recent developments using deep neural networks have become of great clinical importance because of their effective and accurate performance. The 3D nature of multimodal MRI demands large-scale memory and computation, and variants of the 3D U-net are widely adopted for medical image segmentation. In this study, a 2D U-net is applied to tumor segmentation and survival period prediction, inspired by the neuromorphic neural network. The new method introduces a neuromorphic saliency map to enhance the image analysis. By mimicking the visual cortex and implementing neuromorphic preprocessing, a map of attention and saliency is generated and applied to improve the accuracy and speed of medical image analysis. Through the BraTS 2020 challenge, the performance of the renewed neuromorphic algorithm is evaluated and an overall review is conducted of the previous neuromorphic processing and other approaches. The overall survival prediction accuracy is 55.2% for the validation data, and 43% for the test data.
Locoregional Recurrence Prediction Using a Deep Neural Network of Radiological and Radiotherapy Images
Han, K.
Joung, J. F.
Han, M.
Sung, W.
Kang, Y. N.
J Pers Med2022Journal Article, cited 1 times
Website
Head-Neck-PET-CT
Deep learning
Radiation therapy (RT) is an important and potentially curative modality for head and neck squamous cell carcinoma (HNSCC). Locoregional recurrence (LR) of HNSCC after RT ranges from 15% to 50% depending on the primary site and stage. In addition, the 5-year survival rate of patients with LR is low. To classify high-risk patients who might develop LR, a deep learning model for predicting LR needs to be established. In this work, 157 patients with HNSCC who underwent RT were analyzed. Based on the National Cancer Institute's multi-institutional TCIA data set containing FDG-PET/CT/dose, a 3D deep learning model was proposed to predict LR without time-consuming segmentation or feature extraction. Our model achieved an average area under the curve (AUC) of 0.856. Adding clinical factors into the model improved the AUC to an average of 0.892, with the highest AUC of up to 0.974. The 3D deep learning model could perform individualized risk quantification of LR in patients with HNSCC without time-consuming tumor segmentation.
MRI to MGMT: predicting methylation status in glioblastoma patients using convolutional recurrent neural networks
Glioblastoma Multiforme (GBM), a malignant brain tumor, is among the most lethal of all cancers. Temozolomide is the primary chemotherapy treatment for patients diagnosed with GBM. The methylation status of the promoter or the enhancer regions of the O6-methylguanine methyltransferase (MGMT) gene may impact the efficacy and sensitivity of temozolomide, and hence may affect overall patient survival. Microscopic genetic changes may manifest as macroscopic morphological changes in the brain tumors that can be detected using magnetic resonance imaging (MRI), which can serve as noninvasive biomarkers for determining methylation of MGMT regulatory regions. In this research, we use a compendium of brain MRI scans of GBM patients collected from The Cancer Imaging Archive (TCIA) combined with methylation data from The Cancer Genome Atlas (TCGA) to predict the methylation state of the MGMT regulatory regions in these patients. Our approach relies on a bi-directional convolutional recurrent neural network architecture (CRNN) that leverages the spatial aspects of these 3-dimensional MRI scans. Our CRNN obtains an accuracy of 67% on the validation data and 62% on the test data, with precision and recall both at 67%, suggesting the existence of MRI features that may complement existing markers for GBM patient stratification and prognosis. We have additionally presented our model via a novel neural network visualization platform, which we have developed to improve interpretability of deep learning MRI-based classification models.
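A minimal PyTorch sketch of the conv-recurrent idea, a per-slice CNN encoder feeding a bidirectional LSTM across slices; the layer sizes and slice pooling are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class SliceCRNN(nn.Module):
    """Per-slice CNN features followed by a bidirectional LSTM over the
    slice sequence, echoing a conv-recurrent design for volumetric MRI."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.rnn = nn.LSTM(16, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)    # methylated vs. unmethylated logit

    def forward(self, vol):                     # vol: (batch, slices, H, W)
        b, s, h, w = vol.shape
        feats = self.cnn(vol.reshape(b * s, 1, h, w)).reshape(b, s, 16)
        seq, _ = self.rnn(feats)
        return self.head(seq.mean(dim=1))       # pool over the slice sequence

logits = SliceCRNN()(torch.randn(2, 24, 64, 64))   # 2 toy volumes of 24 slices
```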
Low-dose CT denoising via convolutional neural network with an observer loss function
Han, Minah
Shim, Hyunjung
Baek, Jongduk
Medical Physics2021Journal Article, cited 0 times
LDCT-and-Projection-data
PURPOSE: Convolutional neural network (CNN)-based denoising is an effective method for reducing complex computed tomography (CT) noise. However, the image blur induced by denoising processes is a major concern. The main source of image blur is the pixel-level loss (e.g., mean squared error [MSE] and mean absolute error [MAE]) used to train a CNN denoiser. To reduce the image blur, feature-level loss is utilized to train a CNN denoiser. A CNN denoiser trained using visual geometry group (VGG) loss can preserve the small structures, edges, and texture of the image. However, VGG loss, derived from an ImageNet-pretrained image classifier, is not optimal for training a CNN denoiser for CT images. ImageNet contains natural RGB images, so the features extracted by the ImageNet-pretrained model cannot represent the characteristics of CT images that are highly correlated with diagnosis. Furthermore, a CNN denoiser trained with VGG loss causes bias in CT number. Therefore, we propose to use a binary classification network trained using CT images as a feature extractor and newly define the feature-level loss as observer loss.
METHODS: As obtaining labeled CT images for training classification network is difficult, we create labels by inserting simulated lesions. We conduct two separate classification tasks, signal-known-exactly (SKE) and signal-known-statistically (SKS), and define the corresponding feature-level losses as SKE loss and SKS loss, respectively. We use SKE loss and SKS loss to train CNN denoiser.
RESULTS: Compared to pixel-level losses, a CNN denoiser trained using observer loss (i.e., SKE loss and SKS loss) is effective in preserving structure, edge, and texture. Observer loss also resolves the bias in CT number, which is a problem of VGG loss. Comparing observer losses using SKE and SKS tasks, SKS yields images having a more similar noise structure to reference images.
CONCLUSIONS: Using observer loss for training CNN denoiser is effective to preserve structure, edge, and texture in denoised images and prevent the CT number bias. In particular, when using SKS loss, denoised images having a similar noise structure to reference images are generated.
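The observer-loss idea, matching a frozen lesion classifier's feature maps instead of relying on pixel-wise loss alone, can be sketched as follows. The tiny stand-in classifier is ours; the paper trains one on CT images with simulated lesions (the SKE/SKS tasks) before freezing it:

```python
import torch
import torch.nn as nn

# Stand-in lesion classifier feature extractor, frozen after (hypothetical) training
classifier_features = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
).eval()
for p in classifier_features.parameters():
    p.requires_grad_(False)

def observer_style_loss(denoised, reference):
    """Feature-level loss: match the frozen classifier's feature maps on
    denoised vs. normal-dose images, instead of per-pixel MSE alone."""
    return nn.functional.mse_loss(classifier_features(denoised),
                                  classifier_features(reference))

loss = observer_style_loss(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64))
```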
Utilization of an attentive map to preserve anatomical features for training convolutional neural-network-based low-dose CT denoiser
Han, Minah
Shim, Hyunjung
Baek, Jongduk
Medical Physics2023Journal Article, cited 0 times
LDCT-and-Projection-data
BACKGROUND: The purpose of a convolutional neural network (CNN)-based denoiser is to increase the diagnostic accuracy of low-dose computed tomography (LDCT) imaging. To increase diagnostic accuracy, there is a need for a method that reflects the features related to diagnosis during the denoising process.
PURPOSE: To provide a training strategy for LDCT denoisers that relies more on diagnostic task-related features to improve diagnostic accuracy.
METHODS: An attentive map derived from a lesion classifier (i.e., determining lesion-present or not) is created to represent the extent to which each pixel influences the decision by the lesion classifier. This is used as a weight to emphasize important parts of the image. The proposed training method consists of two steps. In the first one, the initial parameters of the CNN denoiser are trained using LDCT and normal-dose CT image pairs via supervised learning. In the second one, the learned parameters are readjusted using the attentive map to restore the fine details of the image.
RESULTS: Structural details and the contrast are better preserved in images generated by using the denoiser trained via the proposed method than in those generated by conventional denoisers. The proposed denoiser also yields higher lesion detectability and localization accuracy than conventional denoisers.
CONCLUSIONS: A denoiser trained using the proposed method preserves the small structures and the contrast in the denoised images better than without it. Specifically, using the attentive map improves the lesion detectability and localization accuracy of the denoiser.
Deep Transfer Learning and Radiomics Feature Prediction of Survival of Patients with High-Grade Gliomas
Han, W.
Qin, L.
Bay, C.
Chen, X.
Yu, K. H.
Miskin, N.
Li, A.
Xu, X.
Young, G.
AJNR Am J Neuroradiol2020Journal Article, cited 16 times
Website
GBM
BRAIN
TCGA
Radiomics
BACKGROUND AND PURPOSE: Patient survival in high-grade glioma remains poor, despite the recent developments in cancer treatment. As new chemo-, targeted molecular, and immune therapies emerge and show promising results in clinical trials, image-based methods for early prediction of treatment response are needed. Deep learning models that incorporate radiomics features promise to extract information from brain MR imaging that correlates with response and prognosis. We report initial production of a combined deep learning and radiomics model to predict overall survival in a clinically heterogeneous cohort of patients with high-grade gliomas. MATERIALS AND METHODS: Fifty patients with high-grade gliomas from our hospital and 128 patients with high-grade glioma from The Cancer Genome Atlas were included. For each patient, we calculated 348 hand-crafted radiomics features and 8192 deep features generated by a pretrained convolutional neural network. We then applied feature selection and Elastic Net-Cox modeling to differentiate patients into long- and short-term survivors. RESULTS: In the 50 patients with high-grade gliomas from our institution, the combined feature analysis framework classified the patients into long- and short-term survivor groups with a log-rank test P value < .001. In the 128 patients from The Cancer Genome Atlas, the framework classified patients into long- and short-term survivors with a log-rank test P value of .014. For the mixed cohort of 50 patients from our institution and 58 patients from The Cancer Genome Atlas, it yielded a log-rank test P value of .035. CONCLUSIONS: A deep learning model combining deep and radiomics features can dichotomize patients with high-grade gliomas into long- and short-term survivors.
Neuromorphic Neural Network for Multimodal Brain Image Segmentation and Overall Survival Analysis
Han, Woo-Sup
Han, Il Song
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Machine Learning
Image analysis of brain tumors is one of the key elements for clinical decisions, while manual segmentation is time-consuming and known to be subjective to clinicians or radiologists. In this paper, we examined the neuromorphic convolutional neural network on this task of multimodal images, using a down-up resizing network structure. The controlled rectifier neuron function was incorporated in the neuromorphic neural network to introduce the efficient segmentation and saliency map generation used in noisy image processing of X-ray CT data and dark road video data. The neuromorphic neural network is proposed for brain imaging analytics, based on the visual cortex-inspired deep neural network developed for 3-dimensional tooth segmentation and robust visual object detection. Experimental results illustrated the effectiveness and feasibility of our proposed method under flexible requirements for clinical diagnostic decision data, from segmentation to overall survival analysis. The survival prediction accuracy was 71% for the data with ground truth and 50.6% for predicting survival days on the individual challenge data without any clinical diagnostic data.
Multimodal Brain Image Segmentation and Analysis with Neuromorphic Attention-Based Learning
Automated image analysis of brain tumors from 3D Magnetic Resonance Imaging (MRI) is necessary for the diagnosis and treatment planning of the disease, because manual practices of segmenting tumors are time-consuming, expensive and can be subject to clinician diagnostic error. We propose a novel neuromorphic attention-based learner (NABL) model to train the deep neural network for tumor segmentation, a task challenged by typically small datasets and the difficulty of exact segmentation class determination. The core idea is to introduce neuromorphic attention to guide the learning process of the deep neural network architecture, providing a highlighted region of interest for tumor segmentation. Neuromorphic convolution filters mimicking visual cortex neurons are adopted for neuromorphic attention generation, transferred from neuromorphic convolutional neural networks (CNNs) pre-trained for adversarial imagery environments. Our pre-trained neuromorphic CNN has a feature extraction ability applicable to brain MRI data, verified by overall survival prediction without tumor segmentation training at the Brain Tumor Segmentation (BraTS) Challenge 2018. NABL provides us with an affordable solution for more accurate and faster brain tumor segmentation, by incorporating the typical encoder-decoder U-net CNN architecture. Experimental results illustrated the effectiveness and feasibility of our proposed method under flexible requirements for clinical diagnostic decision data, from segmentation to overall survival prediction. The overall survival prediction accuracy is 55% for predicting the survival period in days on the BraTS 2019 validation dataset, and 48.6% on the BraTS 2019 test dataset.
Exploration of a noninvasive radiomics classifier for breast cancer tumor microenvironment categorization and prognostic outcome prediction
Han, X.
Gong, Z.
Guo, Y.
Tang, W.
Wei, X.
Eur J Radiol2024Journal Article, cited 0 times
TCGA-BRCA
ISPY1
Breast Neoplasms
Machine Learning
Magnetic Resonance Imaging
Radiomics
Tumor Microenvironment
CIBERSORT
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI)
Classification
K-means Clustering
Radiogenomics
Manual segmentation
Random Forest
RATIONALE AND OBJECTIVES: Breast cancer progression and treatment response are significantly influenced by the tumor microenvironment (TME). Traditional methods for assessing TME are invasive, posing a challenge for patient care. This study introduces a non-invasive approach to TME classification by integrating radiomics and machine learning, aiming to predict the TME status using imaging data, thereby aiding in prognostic outcome prediction. MATERIALS AND METHODS: Utilizing multi-omics data from The Cancer Genome Atlas (TCGA) and The Cancer Imaging Archive (TCIA), this study employed the CIBERSORT and MCP-counter algorithms to analyze immune infiltration in breast cancer. A radiomics classifier was developed using a random forest algorithm, leveraging quantitative features extracted from intratumoral and peritumoral regions of Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) scans. The classifier's ability to predict diverse TME states and their prognostic implications were evaluated using Kaplan-Meier survival curves. RESULTS: Three distinct TME states were identified using RNA-Seq data, each displaying unique prognostic and biological characteristics. Notably, patients with increased immune cell infiltration showed significantly improved prognoses (P < 0.05). The classifier, comprising 24 radiomic features, demonstrated high predictive accuracy (AUC of training set = 0.960, 95% CI: 0.922, 0.997; AUC of testing set = 0.853, 95% CI: 0.687, 1.000) in differentiating these TME states. Predictions from the classifier also correlated significantly with overall patient survival (P < 0.05). CONCLUSION: This study offers a detailed analysis of the complex TME states in breast cancer and presents a reliable, noninvasive radiomics classifier for TME assessment. The classifier's accurate prediction of TME status and its correlation with prognosis highlight its potential as a tool in personalized breast cancer treatment, paving the way for more individualized and less invasive therapeutic strategies.
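The modeling step, a random forest over the selected radiomic features evaluated by AUC, maps directly onto scikit-learn; the data below is synthetic (so the AUC lands near chance), and the feature count simply mirrors the 24 features reported above:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 24))      # 24 radiomic features (values simulated)
y = rng.integers(0, 3, size=200)    # three TME states (labels simulated)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, rf.predict_proba(X_te), multi_class="ovr")
print(f"macro one-vs-rest AUC: {auc:.3f}")   # ~0.5 on random labels, as expected
```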
Improving the diagnosis of ductal carcinoma in situ with microinvasion without immunohistochemistry: An innovative method with H&E-stained and multiphoton microscopy images
Han, Xiahui
Liu, Yulan
Zhang, Shichao
Li, Lianhuang
Zheng, Liqin
Qiu, Lida
Chen, Jianhua
Zhan, Zhenlin
Wang, Shu
Ma, Jianli
Kang, Deyong
Chen, Jianxin
2024Journal Article, cited 0 times
HE-vs-MPM
Immunohistochemistry
Breast
Ductal carcinoma in situ with microinvasion (DCISM) is a challenging subtype of breast cancer with controversial invasiveness and prognosis. Accurate diagnosis of DCISM from ductal carcinoma in situ (DCIS) is crucial for optimal treatment and improved clinical outcomes. However, there are often some suspicious small cancer nests in DCIS, and it is difficult to diagnose the presence of intact myoepithelium by conventional hematoxylin and eosin (H&E) stained images. Although a variety of biomarkers are available for immunohistochemical (IHC) staining of myoepithelial cells, no single biomarker is consistently sensitive to all tumor lesions. Here, we introduced a new diagnostic method that provides rapid and accurate diagnosis of DCISM using multiphoton microscopy (MPM). Suspicious foci in H&E-stained images were labeled as regions of interest (ROIs), and the nuclei within these ROIs were segmented using a deep learning model. MPM was used to capture images of the ROIs in H&E-stained sections. The intensity of two-photon excitation fluorescence (TPEF) in the myoepithelium was significantly different from that in tumor parenchyma and tumor stroma. Through the use of MPM, the myoepithelium and basement membrane can be easily observed via TPEF and second-harmonic generation (SHG), respectively. By fusing the nuclei in H&E-stained images with MPM images, DCISM can be differentiated from suspicious small cancer clusters in DCIS. The proposed method demonstrated good consistency with the cytokeratin 5/6 (CK5/6) myoepithelial staining method (kappa coefficient = 0.818).
Predictive capabilities of statistical learning methods for lung nodule malignancy classification using diagnostic image features: an investigation using the Lung Image Database Consortium dataset
Stability and Reproducibility of Radiomic Features Based Various Segmentation Technique on MR Images of Hepatocellular Carcinoma (HCC)
Haniff, N. S. M.
Abdul Karim, M. K.
Osman, N. H.
Saripan, M. I.
Che Isa, I. N.
Ibahim, M. J.
Diagnostics (Basel)2021Journal Article, cited 1 times
Website
TCGA-LIHC
LIVER
Magnetic Resonance Imaging (MRI)
Manual segmentation
Radiomics
Semi-automatic segmentation
Hepatocellular carcinoma (HCC) is considered a complex liver disease and ranks eighth in mortality rate, with a prevalence of 2.4% in Malaysia. Magnetic resonance imaging (MRI) has been acknowledged for its advantages and is a gold-standard technique for diagnosing HCC, yet false-negative diagnoses from the examinations remain inevitable. In this study, 30 MR images from patients diagnosed with HCC are used to evaluate the robustness of semi-automatic segmentation using the flood fill algorithm for quantitative feature extraction. The relevant features were extracted from the segmented MR images of HCC. Four types of feature extraction were used for this study: tumour intensity, shape features, textural features and wavelet features. A total of 662 radiomic features were extracted from manual and semi-automatic segmentation and compared using the intraclass correlation coefficient (ICC). Radiomic features extracted using semi-automatic segmentation utilizing the flood fill algorithm in 3D Slicer had significantly higher reproducibility (average ICC = 0.952 +/- 0.009, p < 0.05) compared with features extracted from manual segmentation (average ICC = 0.897 +/- 0.011, p > 0.05). Moreover, features extracted from semi-automatic segmentation were more robust compared to manual segmentation. This study shows that semi-automatic segmentation in 3D Slicer is a better alternative to manual segmentation, as it can produce more robust and reproducible radiomic features.
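The ICC comparison between manual and semi-automatic segmentations can be reproduced with the pingouin package; a sketch on simulated paired feature values (column names and the noise level are our assumptions):

```python
import numpy as np
import pandas as pd
import pingouin as pg   # assumed installed: pip install pingouin

rng = np.random.default_rng(3)
n_patients = 30
base = rng.normal(size=n_patients)   # one radiomic feature per patient
df = pd.DataFrame({
    "patient": np.tile(np.arange(n_patients), 2),
    "segmentation": np.repeat(["manual", "semi-automatic"], n_patients),
    "feature_value": np.concatenate([base, base + rng.normal(0, 0.1, n_patients)]),
})
icc = pg.intraclass_corr(data=df, targets="patient", raters="segmentation",
                         ratings="feature_value")
print(icc[["Type", "ICC"]])   # one row per ICC variant (ICC1, ICC2, ICC3, ...)
```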
Classification of Lung Nodule from CT and PET/CT Images Using Artificial Neural Network
This work aims to design and develop an artificial neural network (ANN) architecture for the classification of cancerous tissue in the lung. A sequential model is used for the machine learning process, with ReLU and sigmoid activation functions used in the model's layers. The present work encompasses detecting and classifying the tumor cells into four categories. The four types of lung cancer nodules are adenocarcinoma, squamous-cell carcinoma, large-cell carcinoma, and small-cell carcinoma. Computed tomography (CT) and positron emission tomography (PET) scan DICOM images are used for the classification. The proposed approach has been validated on a subset of the original dataset; a total of 6500 images were used in the experiment. The approach is to feed the CT scan images into ANNs and classify each image as the correct type. The dataset is provided by The Cancer Imaging Archive (TCIA) and is titled "A Large-Scale CT and PET/CT Dataset for Lung Cancer Diagnosis." The tumor cells are classified using the ANN architecture with 99.6% validation accuracy and 4.35% loss.
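A minimal Keras sketch of the kind of sequential ReLU/sigmoid network the abstract describes; the layer sizes, input shape, and training data are our assumptions, with only the 4-class output taken from the abstract:

```python
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="sigmoid"),
    keras.layers.Dense(4, activation="softmax"),    # 4 nodule sub-types
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# toy stand-in for resized DICOM slices and their sub-type labels
model.fit(np.random.rand(32, 64, 64, 1), np.random.randint(0, 4, 32),
          epochs=1, verbose=0)
```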
Revisiting Iterative Highly Efficient Optimisation Schemes in Medical Image Registration
QGFormer: Queries-guided transformer for flexible medical image synthesis with domain missing
Hao, Huaibo
Xue, Jie
Huang, Pu
Ren, Liwen
Li, Dengwang
Expert Systems with Applications2024Journal Article, cited 0 times
Website
RSNA-ASNR-MICCAI BraTS 2021
GammaKnife-Hippocampal
BraTS 2021
Synthetic images
Multi-modal imaging
Domain missing poses a common challenge in clinical practice, limiting diagnostic accuracy compared to the complete multi-domain images that provide complementary information. We propose QGFormer to address this issue by flexibly imputing missing domains from any available source domain using a single model, which is challenging due to (1) the inherent limitation of CNNs to capture long-range dependencies, (2) the difficulty in modeling the inter- and intra-domain dependencies of multi-domain images, and (3) inefficiencies in fusing domain-specific features associated with missing domains. To tackle these challenges, we introduce two spatial-domanial attentions (SDAs), which establish intra-domain (spatial dimension) and inter-domain (domain dimension) dependencies independently or jointly. QGFormer, constructed based on SDAs, comprises three components: Encoder, Decoder and Fusion. The Encoder and Decoder form the backbone, modeling contextual dependencies to create a hierarchical representation of features. The QGFormer Fusion then adaptively aggregates these representations to synthesize specific missing domains from coarse to fine, guided by learnable domain queries. This process is interpretable because the attention scores in Fusion indicate how much attention the target domains pay to different inputs and regions. In addition, the scalable architecture enables QGFormer to segment tumors with domain missing by replacing domain queries with segment queries. Extensive experiments demonstrate that our approach achieves consistent improvements in multi-domain imputation, cross-domain image translation and multitask synthesis and segmentation.
MFUnetr: A transformer-based multi-task learning network for multi-organ segmentation from partially labeled datasets
Hao, Qin
Tian, Shengwei
Yu, Long
Wang, Junwen
Biomedical Signal Processing and Control2023Journal Article, cited 0 times
Pancreas-CT
As multi-organ segmentation of CT images is crucial for clinical applications, most state-of-the-art models rely on a fully annotated dataset with strong supervision to achieve high accuracy for particular organs. However, these models generalize poorly when applied to various CT images due to the small scale and single source of the training data. To utilize existing partially labeled datasets to obtain segmentations containing more organs and with higher accuracy and robustness, we create a multi-task learning network called MFUnetr. By directly feeding a union of datasets, MFUnetr trains an encoder-decoder network on two tasks in parallel. The main task is to produce full organ segmentation using a specific training strategy. The auxiliary task is to segment the labeled organs of each dataset using label priors. Additionally, we offer a new weighted combined loss function to optimize the model. Compared to the base model UNETR trained on the fully annotated dataset BTCV, our network model, utilizing a combination of three partially labeled datasets, achieved mean Dice gains on overlapping organs of +0.35% for the spleen, +15.28% for the esophagus, and +8.31% for the aorta. Importantly, without fine-tuning, the mean Dice calculated on 13 organs of BTCV remained +1.91% higher when all 15 organs were segmented. The experimental results show that our proposed method can effectively use existing large partially annotated datasets to alleviate the problem of data scarcity in multi-organ segmentation.
Classification of Covid-19 Coronavirus, Pneumonia and Healthy Lungs in CT Scans Using Q-Deformed Entropy and Deep Learning Features
Many health systems over the world have collapsed due to limited capacity and a dramatic increase in suspected COVID-19 cases. What has emerged is the need for an efficient, quick and accurate method to mitigate the overloading of radiologists' efforts to diagnose the suspected cases. This study presents the combination of deep-learning-extracted features with handcrafted Q-deformed entropy features for discriminating between COVID-19 coronavirus, pneumonia and healthy computed tomography (CT) lung scans. In this study, pre-processing is used to reduce the effect of intensity variations between CT slices. Then histogram thresholding is used to isolate the background of the CT lung scan. Each CT lung scan undergoes a feature extraction which involves deep learning and a Q-deformed entropy algorithm. The obtained features are classified using a long short-term memory (LSTM) neural network classifier. Subsequently, combining all extracted features significantly improves the performance of the LSTM network to precisely discriminate between COVID-19, pneumonia and healthy cases. The maximum achieved accuracy for classifying the collected dataset comprising 321 patients is 99.68%.
Breast Cancer MRI Classification Based on Fractional Entropy Image Enhancement and Deep Feature Extraction
Hasan, Ali M.
Qasim, Asaad F.
Jalab, Hamid A.
Ibrahim, Rabha W.
2022Journal Article, cited 0 times
BREAST-DIAGNOSIS
Breast cancer is one of the common fatal diseases among women worldwide, and its early detection is among the most important secondary prevention strategies. Given the widespread use of medical imaging in diagnosing and monitoring many chronic diseases, numerous image processing algorithms have been proposed over the years to augment medical imaging so that the diagnostic process becomes more accurate and efficient. This study presents a new algorithm for extracting deep features from two types of MR images, T2W-TSE and STIR, which serve as inputs to the proposed deep neural networks used to extract features for discriminating between pathological and healthy breast MRI scans. In this algorithm, breast MRI scans are preprocessed before the feature extraction step to reduce the effects of variations between MRI slices, to separate the right breast from the left, and to isolate the image background. The maximum accuracy achieved for classifying a dataset comprising 326 breast MRI slices was 98.77%. The model appears efficient and performant and can therefore be considered a candidate for application in a clinical setting.
Prostate cancer classification from ultrasound and MRI images using deep learning based Explainable Artificial Intelligence
Hassan, Md Rafiul
Islam, Md Fakrul
Uddin, Md Zia
Ghoshal, Goutam
Hassan, Mohammad Mehedi
Huda, Shamsul
Fortino, Giancarlo
Future Generation Computer Systems2022Journal Article, cited 0 times
Website
Prostate-MRI-US-Biopsy
Prostate Cancer
Deep Learning
Multi-Modal Medical Image Fusion for Non-Small Cell Lung Cancer Classification
The early detection and nuanced subtype classification of non-small cell lung cancer (NSCLC), a predominant cause of cancer mortality worldwide, is a critical and complex issue. In this paper, we introduce an innovative integration of multi-modal data, synthesizing fused medical imaging (CT and PET scans) with clinical health records and genomic data. This unique fusion methodology leverages advanced machine learning models, notably MedClip and BEiT, for sophisticated image feature extraction, setting a new standard in computational oncology. Our research surpasses existing approaches, as evidenced by a substantial enhancement in NSCLC detection and classification precision. The results showcase notable improvements across key performance metrics, including accuracy, precision, recall, and F1-score. Specifically, our leading multi-modal classifier model records an impressive accuracy of 94.04%. We believe that our approach has the potential to transform NSCLC diagnostics, facilitating earlier detection and more effective treatment planning and, ultimately, leading to superior patient outcomes in lung cancer care.
Breast cancer masses classification using deep convolutional neural networks and transfer learning
Hassan, Shayma’a A.
Sayed, Mohammed S.
Abdalla, Mahmoud I.
Rashwan, Mohsen A.
Multimedia Tools and Applications2020Journal Article, cited 0 times
Website
CBIS-DDSM
Deep Learning
Deep convolutional neural network (DCNN)
With the recent advances in the deep learning field, the use of deep convolutional neural networks (DCNNs) in biomedical image processing becomes very encouraging. This paper presents a new classification model for breast cancer masses based on DCNNs. We investigated the use of transfer learning from AlexNet and GoogleNet pre-trained models to suit this task. We experimentally determined the best DCNN model for accurate classification by comparing different models, which vary according to the design and hyper-parameters. The effectiveness of these models was demonstrated using four mammogram databases. All models were trained and tested using a mammographic dataset from the CBIS-DDSM and INbreast databases to select the best AlexNet and GoogleNet models. The performance of the two proposed models was further verified using images from the Egyptian National Cancer Institute (NCI) and the MIAS database. When tested on the CBIS-DDSM and INbreast databases, the proposed AlexNet model achieved an accuracy of 100% for both databases, while the proposed GoogleNet model achieved accuracies of 98.46% and 92.5%, respectively. When tested on NCI images and the MIAS database, AlexNet achieved an accuracy of 97.89% with an AUC of 98.32%, and an accuracy of 98.53% with an AUC of 98.95%, respectively. GoogleNet achieved an accuracy of 91.58% with an AUC of 96.5%, and an accuracy of 88.24% with an AUC of 94.65%, respectively. These results suggest that AlexNet has better performance and more robustness than GoogleNet. To the best of our knowledge, the proposed AlexNet model outperformed the latest methods, achieving the highest accuracy and AUC score and the lowest testing time reported on the CBIS-DDSM, INbreast and MIAS databases.
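The AlexNet transfer-learning recipe such papers build on takes a few lines in torchvision (weights API of version 0.13 or later): freeze the convolutional features, replace the final classifier layer for a two-class mass decision, and fine-tune. A hedged sketch with toy inputs:

```python
import torch
import torch.nn as nn
from torchvision import models

# Downloads ImageNet-pretrained weights on first use
alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
for p in alexnet.features.parameters():
    p.requires_grad_(False)                     # freeze convolutional features
alexnet.classifier[6] = nn.Linear(4096, 2)      # re-head for benign vs. malignant

optimizer = torch.optim.Adam(alexnet.classifier.parameters(), lr=1e-4)
logits = alexnet(torch.randn(4, 3, 224, 224))   # mammogram ROIs replicated to 3 channels
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 0, 1]))
loss.backward(); optimizer.step()
```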
Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images
Hatamizadeh, Ali
Nath, Vishwesh
Tang, Yucheng
Yang, Dong
Roth, Holger R.
Xu, Daguang
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2021
BRAIN
Segmentation
Algorithm Development
Convolutional Neural Network (CNN)
U-Net
Transformer
Semantic segmentation of brain tumors is a fundamental medical image analysis task involving multiple MRI imaging modalities that can assist clinicians in diagnosing the patient and successively studying the progression of the malignant entity. In recent years, Fully Convolutional Neural Network (FCNN) approaches have become the de facto standard for 3D medical image segmentation. The popular "U-shaped" network architecture has achieved state-of-the-art performance benchmarks on different 2D and 3D semantic segmentation tasks and across various imaging modalities. However, due to the limited kernel size of convolution layers in FCNNs, their ability to model long-range information is sub-optimal, and this can lead to deficiencies in the segmentation of tumors with variable sizes. On the other hand, transformer models have demonstrated excellent capabilities in capturing such long-range information in multiple domains, including natural language processing and computer vision. Inspired by the success of vision transformers and their variants, we propose a novel segmentation model termed Swin UNEt TRansformers (Swin UNETR). Specifically, the task of 3D brain tumor semantic segmentation is reformulated as a sequence-to-sequence prediction problem wherein multi-modal input data is projected into a 1D sequence of embeddings and used as an input to a hierarchical Swin transformer as the encoder. The Swin transformer encoder extracts features at five different resolutions by utilizing shifted windows for computing self-attention and is connected to an FCNN-based decoder at each resolution via skip connections. We have participated in the BraTS 2021 segmentation challenge, and our proposed model ranks among the top-performing approaches in the validation phase. Code: https://monai.io/research/swin-unetr.
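MONAI ships the published implementation (see the code link above); a minimal usage sketch, assuming a MONAI version whose constructor still accepts img_size, with a BraTS-style 4-channel input and a reduced feature_size to keep the toy forward pass light:

```python
import torch
from monai.networks.nets import SwinUNETR  # MONAI's published implementation

# 4 MRI modalities in, 3 tumor sub-region channels out (BraTS-style)
model = SwinUNETR(img_size=(96, 96, 96), in_channels=4, out_channels=3,
                  feature_size=24, use_checkpoint=False)
with torch.no_grad():
    seg_logits = model(torch.randn(1, 4, 96, 96, 96))
print(seg_logits.shape)   # torch.Size([1, 3, 96, 96, 96])
```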
Segmentation of Kidney Tumors on Non-Contrast CT Images Using Protuberance Detection Network
Hatsutani, Taro
Ichinose, Akimichi
Nakamura, Keigo
Kitamura, Yoshiro
2023Book Section, cited 0 times
C4KC-KiTS
Segmentation
Algorithm Development
Shape analysis
Many renal cancers are incidentally found on non-contrast CT (NCCT) images. On contrast-enhanced CT (CECT) images, most kidney tumors, especially renal cancers, have different intensity values compared to normal tissues. However, on NCCT images, some tumors called isodensity tumors have similar intensity values to the surrounding normal tissues and can only be detected through a change in organ shape. Several deep learning methods which segment kidney tumors from CECT images have been proposed and showed promising results. However, these methods fail to capture such changes in organ shape on NCCT images. In this paper, we present a novel framework which can explicitly capture protruded regions in kidneys to enable a better segmentation of kidney tumors. We created a synthetic mask dataset that simulates a protuberance, and trained a segmentation network to separate the protruded regions from the normal kidney regions. To achieve the segmentation of whole tumors, our framework consists of three networks. The first network is a conventional semantic segmentation network which extracts a kidney region mask and an initial tumor region mask. The second network, which we name the protuberance detection network, identifies the protruded regions from the kidney region mask. Given the initial tumor region mask and the protruded region mask, the last network fuses them and predicts the final kidney tumor mask accurately. The proposed method was evaluated on the publicly available KiTS19 dataset, which contains 108 NCCT images, and showed that our method achieved a higher Dice score of 0.615 (+0.097) and sensitivity of 0.721 (+0.103) compared to 3D-UNet. To the best of our knowledge, this is the first deep learning method that is specifically designed for kidney tumor segmentation on NCCT images.
Centerline detection and estimation of pancreatic duct from abdominal CT images
Purpose: The aim of this work is to automatically detect and estimate the centerline of the pancreatic duct accurately. The proposed method uses four different algorithms for tracking the pancreatic duct in each of four pancreatic zone types. Method: The pancreatic duct was divided into 4 zones; Zone A has a clearly delineated pancreatic duct, Zone B is obscured, Zone C runs from the visible segment to the pancreas' tail and Zone D extends from the head of the pancreas to the first visible point. The pancreatic duct is obscured in regions of lengths from 10-40 mm. The proposed method combines a deep learning CNN for duct segmentation, followed by Dijkstra's routing algorithm for estimation of the centerline in Zone A and Zone B. In Zones C and D, the centerline was estimated using geometric information. The reference standard for the pancreatic duct was determined from non-obscured data by skilled technologists. Results: Zone A, which used a neural network method, had a success rate of 94%. In Zone B, the difference was <3 mm when the obscured interval was 10-40 mm. In Zones C and D, the distance between the computer-estimated pancreas head and tail points and the operator-determined anatomical points was 10 mm and 19 mm, respectively. Optimal characteristic cost functions for each zone allow the natural centerline to be estimated even in obscured regions. The new algorithms increased the average visible centerline length by 146% with a calculation time of <40 seconds.
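The centerline-by-minimum-cost-path step (Dijkstra-style routing over a cost image derived from the duct segmentation) is available off the shelf in scikit-image; a toy 2D sketch, with the inverse-probability cost function as our assumption rather than the paper's tuned per-zone cost:

```python
import numpy as np
from skimage.graph import route_through_array  # minimum-cost path (Dijkstra-style)

# Cost image: low cost inside the segmented duct, high cost elsewhere,
# so the minimum-cost route approximates the duct centerline.
duct_prob = np.zeros((64, 64)); duct_prob[32, 5:60] = 1.0   # synthetic duct mask
cost = 1.0 / (duct_prob + 1e-3)
path, total_cost = route_through_array(cost, start=(32, 5), end=(32, 59),
                                       fully_connected=True)
centerline = np.array(path)   # (row, col) points along the estimated centerline
```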
Fully Automated MR Based Virtual Biopsy of Cerebral Gliomas
Haubold, Johannes
Hosch, René
Parmar, Vicky
Glas, Martin
Guberina, Nika
Catalano, Onofrio Antonio
Pierscianek, Daniela
Wrede, Karsten
Deuschl, Cornelius
Forsting, Michael
Nensa, Felix
Flaschel, Nils
Umutlu, Lale
Cancers2021Journal Article, cited 0 times
BraTS 2019
Automatic Segmentation
BRAIN
cerebral glioma
multi-parametric MRI
Radiogenomics
Radiomics
OBJECTIVE: The aim of this study was to investigate the diagnostic accuracy of a radiomics analysis based on a fully automated segmentation and a simplified and robust MR imaging protocol to provide a comprehensive analysis of the genetic profile and grading of cerebral gliomas for everyday clinical use. METHODS: MRI examinations of 217 therapy-naive patients with cerebral gliomas, each comprising a non-contrast T1-weighted, FLAIR and contrast-enhanced T1-weighted sequence, were included in the study. In addition, clinical and laboratory parameters were incorporated into the analysis. The BraTS 2019 pretrained DeepMedic network was used for automated segmentation. The segmentations generated by DeepMedic were evaluated against 200 manual segmentations, achieving a Dice score of 0.8082 +/- 0.1321. Subsequently, the radiomics signatures were utilized to predict the genetic profile of ATRX, IDH1/2, MGMT and 1p19q co-deletion, as well as to differentiate low-grade from high-grade glioma. RESULTS: The network provided an AUC (validation/test) for the differentiation between low-grade and high-grade gliomas of 0.981 +/- 0.015/0.885 +/- 0.02. The best results were achieved for the prediction of ATRX expression loss, with AUCs of 0.979 +/- 0.028/0.923 +/- 0.045, followed by 0.929 +/- 0.042/0.861 +/- 0.023 for the prediction of IDH1/2. The prediction of 1p19q and MGMT achieved moderate results, with AUCs of 0.999 +/- 0.005/0.711 +/- 0.128 for 1p19q and 0.854 +/- 0.046/0.742 +/- 0.050 for MGMT. CONCLUSION: This fully automated approach utilizing simplified MR protocols to predict the genetic profile and grading of cerebral gliomas provides an easy and efficient method for non-invasive tumor decoding. SIMPLE SUMMARY: Over the past few years, radiomics-based tissue characterization has demonstrated its potential for non-invasive prediction of the genetic profile and grading of cerebral gliomas using multiparametric MRI. The aim of our study was to investigate the feasibility and diagnostic accuracy of a fully automated radiomics analysis based on a simplified MR protocol derived from various scanner systems to prospectively ease the transition of radiomics-based non-invasive tissue sampling into clinical practice. Using an MRI protocol with non-contrast and post-contrast T1-weighted sequences and FLAIR, our workflow automatically predicts the IDH1/2 mutation, ATRX expression loss, 1p19q co-deletion and MGMT methylation status. It also effectively differentiates low-grade from high-grade gliomas. In summary, the present study demonstrated that a fully automated prediction of grading and the genetic profile of cerebral gliomas could be performed with our proposed method using a simplified MRI protocol that is robust to variations in scanner systems, imaging parameters and field strength.
Descriptions and evaluations of methods for determining surface curvature in volumetric data
Hauenstein, Jacob D.
Newman, Timothy S.
Computers & Graphics2020Journal Article, cited 0 times
Website
FDA-Phantom
RIDER PHANTOM PET-CT
Highlights:
• Methods using convolution or fitting are often the most accurate.
• The existing TE method is fast and accurate on noise-free data.
• The OP method is faster than existing, similarly accurate methods on real data.
• Even modest errors in curvature notably impact curvature-based renderings.
• On real data, GSTH, GSTI, and OP produce the best curvature-based renderings.
Abstract: Three methods developed for determining surface curvature in volumetric data are described, including one convolution-based method, one fitting-based method, and one method that uses normal estimates to directly determine curvature. Additionally, a study of the accuracy and computational performance of these methods and prior methods is presented. The study considers synthetic data, noise-added synthetic data, and real data. Sample volume renderings using curvature-based transfer functions, where curvatures were determined with the methods, are also exhibited.
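As context for the normal-based approach described above: one standard way to obtain curvature directly from normal estimates is to take the divergence of the normalized gradient field, which (up to sign convention) gives the mean curvature of the isosurfaces. The numpy sketch below illustrates that general technique only; it is not one of the paper's evaluated methods (TE, OP, GSTH, GSTI):

```python
import numpy as np

def mean_curvature(vol):
    """Mean curvature of the isosurfaces of a 3D scalar volume (z, y, x),
    computed as -0.5 * div(grad f / |grad f|); sign convention varies."""
    gz, gy, gx = np.gradient(vol.astype(float))
    mag = np.sqrt(gx**2 + gy**2 + gz**2) + 1e-12  # avoid division by zero
    nx, ny, nz = gx / mag, gy / mag, gz / mag     # unit normal field
    div_n = (np.gradient(nz, axis=0)
             + np.gradient(ny, axis=1)
             + np.gradient(nx, axis=2))
    return -0.5 * div_n
```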
A biomarker basing on radiomics for the prediction of overall survival in non–small cell lung cancer patients
He, Bo
Zhao, Wei
Pi, Jiang-Yuan
Han, Dan
Jiang, Yuan-Ming
Zhang, Zhen-Guang
Respiratory research2018Journal Article, cited 0 times
Website
non-small cell lung cancer
Radiomics
Computed tomography‐based radiomics prediction of CTLA4 expression and prognosis in clear cell renal cell carcinoma
OBJECTIVES: To predict CTLA4 expression levels and prognosis of clear cell renal cell carcinoma (ccRCC) by constructing a computed tomography-based radiomics model and establishing a nomogram using clinicopathologic factors. METHODS: The clinicopathologic parameters and genomic data were extracted from 493 ccRCC cases of the Cancer Genome Atlas (TCGA)-KIRC database. Univariate and multivariate Cox regression and Kaplan-Meier analyses were performed for prognosis analysis. CIBERSORTx was applied to evaluate the immune cell composition. Radiomic features were extracted from the TCGA/the Cancer Imaging Archive (TCIA) (n = 102) datasets. A support vector machine (SVM) was employed to establish the radiomics signature for predicting CTLA4 expression. Receiver operating characteristic (ROC) curve, decision curve analysis (DCA), and precision-recall curve were utilized to assess the predictive performance of the radiomics signature. Correlations between the radiomics score (RS) and selected features were also evaluated. An RS-based nomogram was constructed to predict prognosis. RESULTS: CTLA4 was significantly overexpressed in ccRCC tissues and was related to lower overall survival. Higher CTLA4 expression was independently linked to poor prognosis (HR = 1.458, 95% CI 1.13-1.881, p = 0.004). The radiomics model for the prediction of CTLA4 expression levels (AUC = 0.769 in the training set, AUC = 0.724 in the validation set) was established using seven radiomic features. A significant elevation in infiltrating M2 macrophages was observed in the high-RS group (p < 0.001). The predictive efficiencies of the RS-based nomogram measured by AUC were 0.826 at 12 months, 0.805 at 36 months, and 0.76 at 60 months. CONCLUSIONS: CTLA4 mRNA expression status in ccRCC could be predicted noninvasively using a radiomics model based on nephrographic-phase contrast-enhanced CT images. The nomogram established by combining RS and clinicopathologic factors could predict overall survival for ccRCC patients. Our findings may help stratify the prognosis of ccRCC patients and identify those who may respond best to ICI-based treatments.
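The radiomics signature above is an SVM trained on seven selected features and assessed with ROC analysis. A minimal scikit-learn sketch of that train-and-evaluate pattern follows; the synthetic arrays stand in for the study's features and labels, and the RBF kernel is an assumption:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Placeholder data: (n_patients, 7 radiomic features), y = 1 for high CTLA4
rng = np.random.default_rng(0)
X = rng.normal(size=(102, 7))
y = rng.integers(0, 2, size=102)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]  # continuous radiomics score
print("AUC:", roc_auc_score(y_te, scores))
```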
MTF1 has the potential as a diagnostic and prognostic marker for gastric cancer and is associated with good prognosis
He, J.
Jiang, X.
Yu, M.
Wang, P.
Fu, L.
Zhang, G.
Cai, H.
Clin Transl Oncol2023Journal Article, cited 0 times
Website
TCGA-STAD
Radiogenomics
Biomarker
Gastric cancer
Mtf1
Survival
PURPOSE: Metal Regulatory Transcription Factor 1 (MTF1) is an essential transcription factor for the cellular response to heavy metals and can also reduce oxidative and hypoxic stress in cells. However, research on MTF1 in gastric cancer is currently lacking. METHODS: Bioinformatics techniques were used to perform expression analysis, prognostic analysis, enrichment analysis, tumor microenvironment correlation analysis, immunotherapy Immune Cell Proportion Score (IPS) correlation analysis and drug sensitivity correlation analysis of MTF1 in gastric cancer. qRT-PCR was then used to verify MTF1 expression in gastric cancer cells and tissues. RESULTS: MTF1 showed low expression in gastric cancer cells and tissues, and lower expression in T3 stage compared with T1 stage. KM prognostic analysis showed that high expression of MTF1 was significantly associated with longer overall survival (OS), first progression (FP) and post-progression survival (PPS) in gastric cancer patients. Cox regression analysis showed that MTF1 was an independent prognostic factor and a protective factor in gastric cancer patients. MTF1 is involved in pathways in cancer, and high expression of MTF1 is negatively correlated with the half-maximal inhibitory concentration (IC50) of common chemotherapeutic drugs. CONCLUSION: MTF1 is expressed at relatively low levels in gastric cancer. MTF1 is also an independent prognostic factor for gastric cancer patients and is associated with good prognosis. It has the potential to be a diagnostic and prognostic marker for gastric cancer.
Feasibility study of a multi-criteria decision-making based hierarchical model for multi-modality feature and multi-classifier fusion: Applications in medical prognosis prediction
He, Qiang
Li, Xin
Kim, DW Nathan
Jia, Xun
Gu, Xuejun
Zhen, Xin
Zhou, Linghong
Information Fusion2020Journal Article, cited 0 times
Website
NSCLC-Radiomics
radiomics
Fast Super-Resolution in MRI Images Using Phase Stretch Transform, Anchored Point Regression and Zero-Data Learning
Medical imaging is fundamentally challenging due to absorption and scattering in tissues and the need to minimize illumination of the patient with harmful radiation. Common problems are low spatial resolution, limited dynamic range and low contrast. These predicaments have fueled interest in enhancing medical images using digital post-processing. In this paper, we propose and demonstrate an algorithm for real-time inference that is suitable for edge computing. Our locally adaptive learned filtering technique, named Phase Stretch Anchored Regression (PhSAR), combines the Phase Stretch Transform for local feature extraction in degraded images with clustered anchor points that represent the image feature space, together with fast regression-based learning. In contrast with the deep neural networks widely used for image super-resolution, our algorithm achieves significantly faster inference with less hallucination of image details, and is interpretable. Tests on brain MRI images using zero-data learning reveal its robustness, with explicit PSNR improvement and lower latency compared to relevant benchmarks.
Deep Convolutional Neural Network With a Multi-Scale Attention Feature Fusion Module for Segmentation of Multimodal Brain Tumor
He, Xueqin
Xu, Wenjie
Yang, Jane
Mao, Jianyao
Chen, Sifang
Wang, Zhanxiang
Frontiers in Neuroscience2021Journal Article, cited 0 times
BraTS-TCGA-GBM
As a non-invasive, low-cost medical imaging technology, magnetic resonance imaging (MRI) has become an important tool for brain tumor diagnosis. Much related research on MRI brain tumor segmentation based on deep convolutional neural networks has been carried out, with good performance. However, due to the large spatial and structural variability of brain tumors and low image contrast, the segmentation of MRI brain tumors is challenging. Deep convolutional neural networks often lose low-level details as the network structure deepens, and they cannot effectively utilize multi-scale feature information. Therefore, a deep convolutional neural network with a multi-scale attention feature fusion module (MAFF-ResUNet) is proposed to address these issues. The MAFF-ResUNet consists of a U-Net with residual connections and a MAFF module. The combination of residual connections and skip connections fully retains low-level detailed information and improves the global feature extraction capability of the encoding block. Besides, the MAFF module selectively extracts useful information from the multi-scale hybrid feature map based on the attention mechanism to optimize the features of each layer and makes full use of the complementary feature information of different scales. The experimental results on the BraTS 2019 MRI dataset show that the MAFF-ResUNet learns the edge structure of brain tumors better and achieves high accuracy.
Glioma grade detection using grasshopper optimization algorithm‐optimized machine learning methods: The Cancer Imaging Archive study
Hedyehzadeh, Mohammadreza
Maghooli, Keivan
MomenGharibvand, Mohammad
International Journal of Imaging Systems and Technology2021Journal Article, cited 0 times
TCGA-GBM
TCGA-LGG
Detection of brain tumor grade is a very important task in treatment plan design, which has traditionally been done using invasive methods such as pathological examination. This examination requires a resection procedure and can result in pain, hemorrhage and infection. The aim of this study is to provide an automated, non-invasive method for estimating brain tumor grade from Magnetic Resonance Images (MRI). After pre-processing, the tumor region was extracted from the processed images using the Fuzzy C-Means (FCM) segmentation method. In feature extraction, texture, Local Binary Pattern (LBP) and fractal-based features were extracted using Matlab software. Then, using the Grasshopper Optimization Algorithm (GOA), the parameters of three different classification methods, including Random Forest (RF), K-Nearest Neighbor (KNN) and Support Vector Machine (SVM), were optimized. Finally, the performance of the three classifiers before and after optimization was compared. The results showed that the random forest, with an accuracy of 99.09%, achieved better performance than the other classification methods.
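The study tunes RF, KNN and SVM hyperparameters with the grasshopper optimization algorithm. As a generic stand-in for that search step (GOA itself is not implemented here), the sketch below uses scikit-learn's RandomizedSearchCV on a random forest over a comparable parameter space; the data and the parameter grid are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Synthetic placeholder for the extracted texture/LBP/fractal features
X, y = make_classification(n_samples=200, n_features=30, random_state=0)

param_dist = {"n_estimators": [50, 100, 200, 400],
              "max_depth": [None, 4, 8, 16],
              "max_features": ["sqrt", "log2", None]}
search = RandomizedSearchCV(RandomForestClassifier(random_state=0),
                            param_dist, n_iter=20, cv=5, random_state=0)
search.fit(X, y)
print(search.best_params_, search.best_score_)  # tuned configuration and CV accuracy
```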
A Comparison of the Efficiency of Using a Deep CNN Approach with Other Common Regression Methods for the Prediction of EGFR Expression in Glioblastoma Patients
Hedyehzadeh, Mohammadreza
Maghooli, Keivan
MomenGharibvand, Mohammad
Pistorius, Stephen
J Digit Imaging2020Journal Article, cited 0 times
Website
TCGA-GBM
Radiogenomics
Glioblastoma
Deep convolution neural network
To estimate epidermal growth factor receptor (EGFR) expression level in glioblastoma (GBM) patients using radiogenomic analysis of magnetic resonance images (MRI). A comparative study using a deep convolutional neural network (CNN)-based regression, a deep neural network, least absolute shrinkage and selection operator (LASSO) regression, elastic net regression, and linear regression with no regularization was carried out to estimate the EGFR expression of 166 GBM patients. Except for the deep CNN case, overfitting was prevented by using feature selection, and loss values for each method were compared. The loss values in the training phase for the deep CNN, deep neural network, elastic net, LASSO, and linear regression with no regularization were 2.90, 8.69, 7.13, 14.63, and 21.76, respectively, while in the test phase, the loss values were 5.94, 10.28, 13.61, 17.32, and 24.19, respectively. These results illustrate that the efficiency of the deep CNN approach is better than that of the other methods, including LASSO regression, a method known for its advantages in high-dimensional cases. A comparison between the deep CNN, a deep neural network, and three other common regression methods was carried out, and the efficiency of the deep CNN approach, in comparison with the other regression models, was demonstrated.
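For the regression baselines compared above, here is a compact scikit-learn sketch of fitting linear, LASSO and elastic net models to high-dimensional features and comparing test losses. The synthetic data and regularization strengths are placeholders, not the paper's settings:

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso, LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(166, 500))  # placeholder: many radiomic features per patient
y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.1, size=166)  # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, reg in [("linear", LinearRegression()),
                  ("lasso", Lasso(alpha=0.1)),
                  ("elastic net", ElasticNet(alpha=0.1, l1_ratio=0.5))]:
    reg.fit(X_tr, y_tr)
    print(name, "test MSE:", mean_squared_error(y_te, reg.predict(X_te)))
```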
Multi-class classification of breast cancer abnormalities using Deep Convolutional Neural Network (CNN)
Heenaye-Mamode Khan, M.
Boodoo-Jahangeer, N.
Dullull, W.
Nathire, S.
Gao, X.
Sinha, G. R.
Nagwanshi, K. K.
PLoS One2021Journal Article, cited 0 times
Website
CBIS-DDSM
Deep convolutional neural network (DCNN)
BREAST
The real cause of breast cancer is very challenging to determine, and therefore early detection of the disease is necessary for reducing the death rate due to the risks of breast cancer. Early detection of cancer boosts the chance of survival by up to 8%. Primarily, breast images from mammograms, X-rays or MRI are analyzed by radiologists to detect abnormalities. However, even experienced radiologists face problems in identifying features like micro-calcifications, lumps and masses, leading to high false-positive and false-negative rates. Recent advances in image processing and deep learning create hope for devising more enhanced applications that can be used for the early detection of breast cancer. In this work, we have developed a Deep Convolutional Neural Network (CNN) to segment and classify the various types of breast abnormalities, such as calcifications, masses, asymmetry and carcinomas, unlike existing research, which mainly classified cancer into benign and malignant, leading to improved disease management. Firstly, transfer learning was carried out on our dataset using the pre-trained model ResNet50. Along similar lines, we have developed an enhanced deep learning model in which the learning rate is considered one of the most important attributes while training the neural network. The learning rate is set adaptively in our proposed model based on changes in the error curves during the learning process. The proposed deep learning model achieved a performance of 88% in the classification of these four types of breast abnormalities: masses, calcifications, carcinomas and asymmetry mammograms.
Multiparametric MRI of prostate cancer: An update on state‐of‐the‐art techniques and their performance in detecting and localizing prostate cancer
Hegde, John V
Mulkern, Robert V
Panych, Lawrence P
Fennessy, Fiona M
Fedorov, Andriy
Maier, Stephan E
Tempany, Clare
Journal of Magnetic Resonance Imaging2013Journal Article, cited 164 times
Website
Explainable AI identifies diagnostic cells of genetic AML subtypes
Hehr, Matthias
Sadafi, Ario
Matek, Christian
Lienemann, Peter
Pohlkamp, Christian
Haferlach, Torsten
Spiekermann, Karsten
Marr, Carsten
2023Journal Article, cited 0 times
AML-Cytomorphology_MLL_Helmholtz
Explainable AI is deemed essential for clinical applications as it allows rationalizing model predictions, helping to build trust between clinicians and automated decision support tools. We developed an inherently explainable AI model for the classification of acute myeloid leukemia subtypes from blood smears and found that high-attention cells identified by the model coincide with those labeled as diagnostically relevant by human experts. Based on over 80,000 single white blood cell images from digitized blood smears of 129 patients diagnosed with one of four WHO-defined genetic AML subtypes and 60 healthy controls, we trained SCEMILA, a single-cell based explainable multiple instance learning algorithm. SCEMILA could perfectly discriminate between AML patients and healthy controls and detected the APL subtype with an F1 score of 0.86±0.05 (mean±s.d., 5-fold cross-validation). Analyzing a novel multi-attention module, we confirmed that our algorithm focused with high concordance on the same AML-specific cells as human experts do. Applied to classify single cells, it is able to highlight subtype-specific cells and deconvolve the composition of a patient's blood smear without the need for single-cell annotation of the training data. Our large AML genetic subtype dataset is publicly available, and an interactive online tool facilitates the exploration of data and predictions. SCEMILA enables a comparison of algorithmic and expert decision criteria and can present a detailed analysis of individual patient data, paving the way to deploy AI in routine diagnostics for identifying hematopoietic neoplasms.
Effective full connection neural network updating using a quantized full FORCE algorithm
Heidarian, Mehdi
Karimi, Gholamreza
Applied Soft Computing2023Journal Article, cited 0 times
QIN Breast DCE-MRI
This paper presents a new training algorithm that can update the configuration of a network's layers, and therefore its connections, neurons, and neuron firing rates, based on the FORCE (first-order reduced and controlled error) training algorithm. The Quantized Full FORCE (QFF) algorithm also updates the number of neurons and the connections between different layers per iteration, so that the overall firing rate of each layer is updated by selecting the best neurons and combining strong features. The update method is sequential: as each instance passes through the network, the network structure is updated with the Full FORCE algorithm. The algorithm updates the structure of supervised feed-forward networks with a single or multiple middle layers, such as the Multilayer Perceptron (MLP), turning them into partially-connected networks. A combination of Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) has been used to cluster the network input features. The paper focuses on the deep supervised MLP network with backpropagation (BP) on various datasets and its comparison with other state-of-the-art MLP-based methods and hybrid evolutionary algorithms. We achieved 98.15% accuracy for facial expression recognition, and 98.6% and 97.7% for the Wisconsin Breast Cancer and Iris Flower datasets, respectively. The training algorithm employed in the study enjoys lower computational complexity while yielding faster and more accurate convergence, starting from a very low error level of 0.009 compared with the fully connected network, and it addresses the challenges of getting stuck in local minima and the poor convergence of gradient descent with BP.
Accurate segmentation of head and neck radiotherapy CT scans with 3D CNNs: consistency is key
Henderson, Edward G A
Osorio, Eliana M Vasquez
van Herk, Marcel
Brouwer, Charlotte L
Steenbakkers, Roel J H M
Green, Andrew F
Physics in Medicine and Biology2023Journal Article, cited 0 times
HNSCC
Objective. Automatic segmentation of organs-at-risk in radiotherapy planning computed tomography (CT) scans using convolutional neural networks (CNNs) is an active research area. Very large datasets are usually required to train such CNN models. In radiotherapy, large, high-quality datasets are scarce and combining data from several sources can reduce the consistency of training segmentations. It is therefore important to understand the impact of training data quality on the performance of auto-segmentation models for radiotherapy. Approach. In this study, we took an existing 3D CNN architecture for head and neck CT auto-segmentation and compared the performance of models trained with a small, well-curated dataset (n = 34) and a far larger dataset (n = 185) containing less consistent training segmentations. We performed 5-fold cross-validations in each dataset and tested segmentation performance using the 95th percentile Hausdorff distance and mean distance-to-agreement metrics. Finally, we validated the generalisability of our models with an external cohort of patient data (n = 12) with five expert annotators. Main results. The models trained with the large dataset were greatly outperformed by models (of identical architecture) trained with the smaller, but higher-consistency, set of training samples. Our models trained with the small dataset produce segmentations of similar accuracy as expert human observers and generalised well to new data, performing within inter-observer variation. Significance. We empirically demonstrate the importance of highly consistent training samples when training a 3D auto-segmentation model for use in radiotherapy. Crucially, it is the consistency of the training segmentations that had the greater impact on model performance, rather than the size of the dataset used.
Artificial Intelligence for Detection of Lung and Airway Nodules in Clinical Chest CT scans
Deep learning for the detection of benign and malignant pulmonary nodules in non-screening chest CT scans
Hendrix, W.
Hendrix, N.
Scholten, E. T.
Mourits, M.
Trap-de Jong, J.
Schalekamp, S.
Korst, M.
van Leuken, M.
van Ginneken, B.
Prokop, M.
Rutten, M.
Jacobs, C.
Commun Med (Lond)2023Journal Article, cited 0 times
LIDC-IDRI
DICOM-LIDC-IDRI-Nodules
Algorithm Development
Lung Cancer
Nodule classification
Deep Learning
Computer Aided Detection (CADe)
Computer Aided Diagnosis (CADx)
BACKGROUND: Outside a screening program, early-stage lung cancer is generally diagnosed after the detection of incidental nodules in clinically ordered chest CT scans. Despite the advances in artificial intelligence (AI) systems for lung cancer detection, clinical validation of these systems is lacking in a non-screening setting. METHOD: We developed a deep learning-based AI system and assessed its performance for the detection of actionable benign nodules (requiring follow-up), small lung cancers, and pulmonary metastases in CT scans acquired in two Dutch hospitals (internal and external validation). A panel of five thoracic radiologists labeled all nodules, and two additional radiologists verified the nodule malignancy status and searched for any missed cancers using data from the national Netherlands Cancer Registry. The detection performance was evaluated by measuring the sensitivity at predefined false positive rates on a free receiver operating characteristic curve and was compared with the panel of radiologists. RESULTS: On the external test set (100 scans from 100 patients), the sensitivity of the AI system for detecting benign nodules, primary lung cancers, and metastases was respectively 94.3% (82/87, 95% CI: 88.1-98.8%), 96.9% (31/32, 95% CI: 91.7-100%), and 92.0% (104/113, 95% CI: 88.5-95.5%) at a clinically acceptable operating point of 1 false positive per scan (FP/s). These sensitivities are comparable to or higher than those of the radiologists, albeit with a slightly higher FP/s (average difference of 0.6). CONCLUSIONS: The AI system reliably detects benign and malignant pulmonary nodules in clinically indicated CT scans and can potentially assist radiologists in this setting. Early-stage lung cancer can be diagnosed after identifying an abnormal spot on a chest CT scan ordered for other medical reasons. These spots, or lung nodules, can be overlooked by radiologists, as they are not necessarily the focus of an examination and can be as small as a few millimeters. Software using Artificial Intelligence (AI) technology has proven successful in aiding radiologists in this task, but its performance is understudied outside a lung cancer screening setting. We therefore developed and validated AI software for the detection of cancerous nodules, or non-cancerous nodules that would need attention. We show that the software can reliably detect these nodules in a non-screening setting and could potentially aid radiologists in daily clinical practice.
Brain Tumor Segmentation with Self-ensembled, Deeply-Supervised 3D U-Net Neural Networks: A BraTS 2020 Challenge Solution
Henry, Théophraste
Carré, Alexandre
Lerousseau, Marvin
Estienne, Théo
Robert, Charlotte
Paragios, Nikos
Deutsch, Eric
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Brain tumor segmentation is a critical task for patients' disease management. In order to automate and standardize this task, we trained multiple U-Net-like neural networks, mainly with deep supervision and stochastic weight averaging, on the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2020 training dataset. Two independent ensembles of models from two different training pipelines were trained, and each produced a brain tumor segmentation map. These two labelmaps per patient were then merged, taking into account the performance of each ensemble for specific tumor subregions. Our performance on the online validation dataset with test-time augmentation was as follows: Dice of 0.81, 0.91 and 0.85, and Hausdorff (95%) of 20.6, 4.3 and 5.7 mm for the enhancing tumor, whole tumor and tumor core, respectively. Similarly, our solution achieved a Dice of 0.79, 0.89 and 0.84, as well as Hausdorff (95%) of 20.4, 6.7 and 19.5 mm on the final test dataset, ranking us among the top ten teams. More complicated training schemes and neural network architectures were investigated without significant performance gain, at the cost of greatly increased training time. Overall, our approach yielded good and balanced performance for each tumor subregion. Our solution is open sourced at https://github.com/lescientifik/open_brats2020.
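The 95th-percentile Hausdorff distance reported above can be computed from the surface points of the predicted and reference masks as in this generic sketch (brute force with scipy's cdist; memory grows with the product of the two surface sizes, so real pipelines often use distance transforms instead). It is a standard metric implementation, not the team's code:

```python
import numpy as np
from scipy.spatial.distance import cdist

def hd95(pred_pts: np.ndarray, gt_pts: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between two point sets,
    e.g. (N, 3) arrays of surface voxel coordinates in mm."""
    d = cdist(pred_pts, gt_pts)        # pairwise Euclidean distances
    d_pred_to_gt = d.min(axis=1)       # each predicted point to nearest GT point
    d_gt_to_pred = d.min(axis=0)       # each GT point to nearest predicted point
    return max(np.percentile(d_pred_to_gt, 95),
               np.percentile(d_gt_to_pred, 95))
```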
Learn2Reg: Comprehensive Multi-Task Medical Image Registration Challenge, Dataset and Evaluation in the Era of Deep Learning
Hering, Alessa
Hansen, Lasse
Mok, Tony C. W.
Chung, Albert C. S.
Siebert, Hanna
Häger, Stephanie
Lange, Annkristin
Kuckertz, Sven
Heldmann, Stefan
Shao, Wei
Vesal, Sulaiman
Rusu, Mirabela
Sonn, Geoffrey
Estienne, Théo
Vakalopoulou, Maria
Han, Luyi
Huang, Yunzhi
Yap, Pew-Thian
Brudfors, Mikael
Balbastre, Yaël
Joutard, Samuel
Modat, Marc
Lifshitz, Gal
Raviv, Dan
Lv, Jinxin
Li, Qiang
Jaouen, Vincent
Visvikis, Dimitris
Fourcade, Constance
Rubeaux, Mathieu
Pan, Wentao
Xu, Zhe
Jian, Bailiang
De Benetti, Francesca
Wodzinski, Marek
Gunnarsson, Niklas
Sjölund, Jens
Grzech, Daniel
Qiu, Huaqi
Li, Zeju
Thorley, Alexander
Duan, Jinming
Großbröhmer, Christoph
Hoopes, Andrew
Reinertsen, Ingerid
Xiao, Yiming
Landman, Bennett
Huo, Yuankai
Murphy, Keelin
Lessmann, Nikolas
van Ginneken, Bram
Dalca, Adrian V.
Heinrich, Mattias P.
IEEE Transactions on Medical Imaging2023Journal Article, cited 0 times
TCGA-KIRC
TCGA-KIRP
TCGA-LIHC
Image registration is a fundamental medical image analysis task, and a wide variety of approaches have been proposed. However, only a few studies have comprehensively compared medical image registration approaches on a wide range of clinically relevant tasks. This limits the development of registration methods, the adoption of research advances into practice, and a fair benchmark across competing approaches. The Learn2Reg challenge addresses these limitations by providing a multi-task medical image registration data set for comprehensive characterisation of deformable registration algorithms. A continuous evaluation will be possible at https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR), availability of annotations, as well as intra- and inter-patient registration evaluation. We established an easily accessible framework for training and validation of 3D registration methods, which enabled the compilation of results of over 65 individual method submissions from more than 20 unique teams. We used a complementary set of metrics, including robustness, accuracy, plausibility, and runtime, enabling unique insight into the current state-of-the-art of medical image registration. This paper describes datasets, tasks, evaluation methods and results of the challenge, as well as results of further analysis of transferability to new datasets, the importance of label supervision, and resulting bias. While no single approach worked best across all tasks, many methodological aspects could be identified that push the performance of medical image registration to new state-of-the-art performance. Furthermore, we demystified the common belief that conventional registration methods have to be much slower than deep-learning-based methods.
Deep Feature Learning For Soft Tissue Sarcoma Classification In MR Images Via Transfer Learning
Hermessi, Haithem
Mourali, Olfa
Zagrouba, Ezzeddine
Expert Systems with Applications2018Journal Article, cited 0 times
Website
Soft Tissue Sarcoma
Liposarcomas
Leiomyosarcomas
Transfer learning with multiple convolutional neural networks for soft tissue sarcoma MRI classification
Artificial neural networks applications in computer aided diagnosis: system design and use as an educational tool
Hernández Rodríguez, Jorge
2016Conference Paper, cited 0 times
LIDC-IDRI
CT COLONOGRAPHY
CBIS-DDSM
This paper describes the motivation, state of the art, hypotheses and research objectives of the Doctoral Thesis "Artificial Neural Networks applications in Computer Aided Diagnosis. System design and use as an educational tool". A description of the investigation approaches and methodologies, the current dissertation status and expected contributions is also presented. At the time of writing, this dissertation is in its first year of development. Its central topic is Computer Aided Diagnosis and Detection (CAD), a valuable automated tool for specialists who interpret medical images, providing information that can be used as a "second opinion" or as supplementary data in their decision-making process. Developing CAD schemes based on the machine learning models called Artificial Neural Networks (ANNs), which can be applied to different image modalities, is the main objective of the first phase of the dissertation. Their integration into a software environment that allows the user to handle and access information efficiently is of key importance in the process. The validation of the system in clinical practice and the investigation of its possible uses as an educational tool for trainees during residency programs constitute the second phase.
Computer-Aided Detection based on convolutional neural networks in computed tomography and mammography: system design, development of the JORCAD application, and validation in an educational context
Diagnostic radiology is a medical specialty that has undergone rapid technological development in recent decades, becoming a first-line diagnostic tool in medicine. AI has brought about a revolution in many areas of knowledge, including diagnostic radiology, where its emergence in decision-support systems for specialists has produced a paradigm shift in clinical practice. These systems have demonstrated their usefulness in tasks such as lesion detection and lesion classification or diagnosis. However, their great potential as tools to assist at different stages of the learning process of medical students and residents seems to have taken a back seat to clinical applications. The interest in radiological imaging and in both facets of AI gives this Doctoral Thesis an interdisciplinary character: it relates to computer science through the development of an AI system; to radiology and medical physics through the use of images from two radiological modalities for lesion detection, requiring their treatment and processing; and to education through the development of an educational application for training specialists in diagnostic radiology (JORCAD) and the implementation of an interactive training activity for its validation.
Convolutional Neural Networks for Multi-scale Lung Nodule Classification in CT: Influence of Hyperparameter Tuning on Performance
Hernández-Rodríguez, Jorge
Cabrero-Fraile, Francisco-Javier
Rodríguez-Conde, María-José
TEM Journal2022Journal Article, cited 0 times
Website
LIDC-IDRI
SPIE-AAPM Lung CT Challenge
Algorithm Development
Computer Aided Detection (CADe)
Computed Tomography (CT)
LUNG
In this study, a system based on Convolutional Neural Networks for differentiating lung nodules and non-nodules in Computed Tomography is developed. Multi-scale patches, extracted from the LIDC-IDRI database, are used to train different CNN models. Adjustable hyperparameters are modified sequentially to study their influence, evaluate the learning process and find the best-performing network for each size. Classification accuracies obtained are above 87% for all sizes, with areas under the Receiver Operating Characteristic curve in the interval 0.936-0.951. Trained models are tested with nodules from an independent database, providing sensitivities above 96%. The performance of the trained models is similar to that of other published articles and shows good classification capacity. As a basis for developing CAD systems, recommendations regarding hyperparameter tuning are provided.
Development and validation of an educational software based in artificial neural networks for training in radiology (JORCAD) through an interactive learning activity
Hernández-Rodríguez, Jorge
Rodríguez-Conde, María-José
Santos-Sánchez, José-Ángel
Cabrero-Fraile, Francisco-Javier
Heliyon2023Journal Article, cited 0 times
Lung-PET-CT-Dx
The use of Computer Aided Detection (CAD) software has been previously documented as a valuable tool to improve specialist training in Radiology. This research assesses the utility of an educational software tool aimed at training residents in Radiology and other related medical specialties, as well as students of the Medicine degree. This in-house developed software, called JORCAD, integrates a CAD system based on Convolutional Neural Networks (CNNs) with annotated cases from radiological image databases. The methodology followed for software validation was expert judgement after completing an interactive learning activity. Participants received a theoretical session and a software usage tutorial, and afterwards used the application in a dedicated workstation to analyze a series of proposed thorax computed tomography (CT) and mammography cases. A total of 26 expert participants from the Radiology Department at Salamanca University Hospital (15 specialists and 11 residents) completed the activity and evaluated different aspects through a series of surveys: software usability, case navigation tools, CAD module utility for learning, and JORCAD educational capabilities. Participants also graded imaging cases to establish the usefulness of JORCAD for training radiology residents. According to the statistical analysis of survey results and expert case scoring, along with participants' opinions, it can be concluded that the JORCAD software is a useful tool for training future specialists. The combination of CAD with annotated cases from validated databases enhances learning, offering a second opinion and changing the usual training paradigm. Including software such as JORCAD in residency training programs of Radiology and other medical specialties would have a positive effect on trainees' background knowledge.
Intensity Augmentation to Improve Generalizability of Breast Segmentation Across Different MRI Scan Protocols
Hesse, Linde S.
Kuling, Grey
Veta, Mitko
Martel, Anne L.
2021Journal Article, cited 0 times
QIN Breast DCE-MRI
OBJECTIVE: The segmentation of the breast from the chest wall is an important first step in the analysis of breast magnetic resonance images. 3D U-Nets have been shown to obtain high segmentation accuracy and appear to generalize well when trained on one scanner type and tested on another scanner, provided that a very similar MR protocol is used. There has, however, been little work addressing the problem of domain adaptation when image intensities or patient orientation differ markedly between the training set and an unseen test set. In this work we aim to address this domain shift problem.
METHOD: We propose to apply extensive intensity augmentation in addition to geometric augmentation during training. We explored both style transfer and a novel intensity remapping approach as intensity augmentation strategies. For our experiments, we trained a 3D U-Net on T1-weighted scans. We tested our network on T2-weighted scans from the same dataset as well as on an additional independent test set acquired with a T1-weighted TWIST sequence and a different coil configuration.
RESULTS: By applying intensity augmentation we increased segmentation performance for the T2-weighted scans from a Dice of 0.71 to 0.88. This performance is very close to the baseline performance of training with T2-weighted scans (0.92). On the T1-weighted dataset we obtained a performance increase from 0.77 to 0.85.
CONCLUSION: Our results show that the proposed intensity augmentation increases segmentation performance across different datasets.
SIGNIFICANCE: The proposed method can improve whole breast segmentation of clinical MR scans acquired with different protocols.
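As a rough illustration of the intensity-augmentation idea above: the sketch below applies a random monotonic remapping of voxel intensities, one plausible form of an "intensity remapping" strategy. The exact curve family used by the authors is not reproduced; the knot count and sampling are assumptions:

```python
import numpy as np

def random_intensity_remap(img, n_knots=8, rng=None):
    """Apply a random, piecewise-linear, monotonically increasing intensity
    mapping to an image or volume (a generic augmentation sketch)."""
    rng = rng or np.random.default_rng()
    lo, hi = float(img.min()), float(img.max())
    if hi <= lo:
        return img.copy()  # constant image: nothing to remap
    knots_in = np.linspace(lo, hi, n_knots)            # fixed input knots
    knots_out = np.sort(rng.uniform(lo, hi, n_knots))  # random increasing outputs
    return np.interp(img, knots_in, knots_out)         # preserves array shape
```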
Practical applications of machine learning in imaging trials
Hesterman, Jacob Y.
Greenblatt, Elliot
Novicki, Andrew
Ghayoor, Ali
Wellman, Tyler
Avants, Brian
2021Conference Proceedings, cited 0 times
NaF PROSTATE
Machine learning and deep learning are ubiquitous across a wide variety of scientific disciplines, including medical imaging. An overview of multiple application areas along the imaging chain where deep learning methods are utilized in discovery and clinical quantitative imaging trials is presented. Example application areas along the imaging chain include quality control, preprocessing, segmentation, and scoring. Within each area, one or more specific applications is demonstrated, such as automated structural brain MRI quality control assessment in a core lab environment, super-resolution MRI preprocessing for neurodegenerative disease quantification in translational clinical trials, and multimodal PET/CT tumor segmentation in prostate cancer trials. The quantitative output of these algorithms is described, including their impact on decision making and relationship to traditional read-based methods. Development and deployment of these techniques for use in quantitative imaging trials presents unique challenges. The interplay between technical and scientific domain knowledge required for algorithm development is highlighted. The infrastructure surrounding algorithm deployment is critical, given regulatory, method robustness, computational, and performance considerations. The sensitivity of a given technique to these considerations and thus complexity of deployment is task- and phase-dependent. Context is provided for the infrastructure surrounding these methods, including common strategies for data flow, storage, access, and dissemination as well as application-specific considerations for individual use cases.
Design of a Patient-Specific Radiotherapy Treatment Target
This paper describes the design of a patient-specific radiotherapy quality assurance target that can be used to verify a treatment plan by measurement of actual dosage. Starting from a patient's (segmented) MR images, a physical model containing insertable cartridges for holding dosimeters is 3D-printed. Dosimeters can be placed at specific locations of interest (e.g., tumor, nerve bundles, urethra). The model (dosimeter insert) can be placed into a pelvis 'shell' and subjected to a specified treatment plan. The dosimeter insert can be efficiently fabricated using rapid prototyping techniques.
Automated Muscle Segmentation from Clinical CT using Bayesian U-Net for Personalized Musculoskeletal Modeling
Hiasa, Yuta
Otake, Yoshito
Takao, Masaki
Ogawa, Takeshi
Sugano, Nobuhiko
Sato, Yoshinobu
IEEE Trans Med Imaging2019Journal Article, cited 2 times
Website
Algorithm Development
Computed Tomography (CT)
Segmentation
We propose a method for automatic segmentation of individual muscles from a clinical CT. The method uses Bayesian convolutional neural networks with the U-Net architecture, using Monte Carlo dropout, which infers an uncertainty metric in addition to the segmentation label. We evaluated the performance of the proposed method using two data sets: 20 fully annotated CTs of the hip and thigh regions and 18 partially annotated CTs that are publicly available from The Cancer Imaging Archive (TCIA) database. The experiments showed a Dice coefficient (DC) of 0.891+/-0.016 (mean+/-std) and an average symmetric surface distance (ASD) of 0.994+/-0.230 mm over 19 muscles in the set of 20 CTs. These results were statistically significant improvements compared to the state-of-the-art hierarchical multi-atlas method, which resulted in 0.845+/-0.031 DC and 1.556+/-0.444 mm ASD. We evaluated the validity of the uncertainty metric in the multi-class organ segmentation problem and demonstrated a correlation between pixels with high uncertainty and segmentation failure. One application of the uncertainty metric in active learning is demonstrated, and the proposed query pixel selection method considerably reduced the manual annotation cost of expanding the training data set. The proposed method allows accurate patient-specific analysis of individual muscle shapes in a clinical routine. This would open up various applications, including personalization of biomechanical simulation and quantitative evaluation of muscle atrophy.
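The uncertainty metric above comes from Monte Carlo dropout: dropout is kept active at inference and the variance across stochastic forward passes is read as uncertainty. A minimal PyTorch sketch of that general pattern follows; it assumes a model that outputs class logits and is not the authors' implementation:

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    """Average `n_samples` stochastic forward passes with dropout active;
    the per-voxel variance across passes serves as an uncertainty metric."""
    model.eval()
    for m in model.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d, nn.Dropout3d)):
            m.train()  # re-enable dropout only; batch-norm stats stay frozen
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1)
                             for _ in range(n_samples)])
    return probs.mean(dim=0), probs.var(dim=0)  # prediction, uncertainty
```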
3D CNN-BN: A Breakthrough in Colorectal Cancer Detection with Deep Learning Technique
Convolutional Neural Networks (CNNs) have made remarkable progress in the medical field. CNNs are widely used to extract highly representative characteristics in cases of acute medical pathology. Through its fully connected layers, a CNN allows the classification of the data: the classification process is carried out across the network layers by filtering and selecting features and applying them in the last layers. CNNs offer a better prognosis, especially in the case of colorectal cancer (CRC) prevention. CRC develops from cells that line the inner lining of the colon. Mostly, it arises from a benign tumor, called a polyp, which slowly grows over time and can develop into malignant cells. Classification of 3D scans of the abdomen based on the presence or absence of polyps is therefore necessary to increase the chance of early detection of the disease and thus guide patients to appropriate treatment. In this work, we present and study a 3D CNN model for the processing and classification of polyps. The results show promising performance for a 12-layer 3D CNN model.
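As a concrete illustration of the kind of model discussed above, here is a minimal 3D CNN with batch normalization for binary volume classification in PyTorch. The depth and layer sizes are illustrative and do not match the paper's 12-layer model:

```python
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    """Toy 3D CNN (conv + batch norm) for polyp vs. no-polyp classification."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),  # global pooling handles variable volume sizes
        )
        self.classifier = nn.Linear(32, 2)

    def forward(self, x):  # x: (batch, 1, depth, height, width)
        f = self.features(x).flatten(1)
        return self.classifier(f)

# Example: logits = Small3DCNN()(torch.randn(2, 1, 64, 64, 64))
```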
Adoption of artificial intelligence in breast imaging: evaluation, ethical constraints and limitations
Hickman, Sarah E.
Baxter, Gabrielle C.
Gilbert, Fiona J.
2021Journal Article, cited 0 times
CBIS-DDSM
ISPY1
TCGA-BRCA
Retrospective studies have shown that artificial intelligence (AI) algorithms can match, and even enhance, radiologists' performance in breast screening. These tools can facilitate tasks not feasible for humans, such as the automatic triage of patients and the prediction of treatment outcomes. Breast imaging faces growing pressure from the exponential growth in imaging requests and a predicted workforce shortfall for reporting. Solutions to alleviate these pressures are being sought, with increasing interest in the adoption of AI to improve workflow efficiency as well as patient outcomes. Vast quantities of data are needed to test and monitor AI algorithms before and after their incorporation into healthcare systems. Availability of data is currently limited, although strategies are being devised to harness the data that already exist within healthcare institutions. The challenges that underpin the realisation of AI in everyday breast imaging cannot be underestimated, and guidance from national agencies to tackle these challenges, taking into account views from societal, industrial and healthcare perspectives, is essential. This review provides background on the evaluation and use of AI in breast imaging, in addition to exploring key ethical, technical, legal and regulatory challenges that have been identified so far.
Gross tumour volume radiomics for prognostication of recurrence & death following radical radiotherapy for NSCLC
Hindocha, S.
Charlton, T. G.
Linton-Reid, K.
Hunter, B.
Chan, C.
Ahmed, M.
Greenlay, E. J.
Orton, M.
Bunce, C.
Lunn, J.
Doran, S. J.
Ahmad, S.
McDonald, F.
Locke, I.
Power, D.
Blackledge, M.
Lee, R. W.
Aboagye, E. O.
NPJ Precis Oncol2022Journal Article, cited 0 times
NSCLC-Radiomics
Radiomics
Non-Small Cell Lung Cancer (NSCLC)
Classification
Recurrence occurs in up to 36% of patients treated with curative-intent radiotherapy for NSCLC. Identifying patients at higher risk of recurrence for more intensive surveillance may facilitate the earlier introduction of the next line of treatment. We aimed to use radiotherapy planning CT scans to develop radiomic classification models that predict overall survival (OS), recurrence-free survival (RFS) and recurrence two years post-treatment, for risk stratification. A retrospective multi-centre study of >900 patients receiving curative-intent radiotherapy for stage I-III NSCLC was undertaken. Models using radiomic and/or clinical features were developed, compared with 10-fold cross-validation and an external test set, and benchmarked against TNM stage. Respective validation and test set AUCs (with 95% confidence intervals) for the radiomic-only models were: (1) OS: 0.712 (0.592-0.832) and 0.685 (0.585-0.784), (2) RFS: 0.825 (0.733-0.916) and 0.750 (0.665-0.835), (3) Recurrence: 0.678 (0.554-0.801) and 0.673 (0.577-0.77). For the combined models: (1) OS: 0.702 (0.583-0.822) and 0.683 (0.586-0.78), (2) RFS: 0.805 (0.707-0.903) and 0.755 (0.672-0.838), (3) Recurrence: 0.637 (0.51-0.765) and 0.738 (0.649-0.826). Kaplan-Meier analyses demonstrate OS and RFS differences of >300 and >400 days, respectively, between low- and high-risk groups. We have developed validated and externally tested radiomic-based prediction models. Such models could be integrated into the routine radiotherapy workflow, thus informing a personalised surveillance strategy at the point of treatment. Our work lays the foundations for future prospective clinical trials of quantitative personalised risk-stratification for surveillance following curative-intent radiotherapy for NSCLC.
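For the survival modelling above, a minimal sketch of fitting a Cox proportional-hazards model to radiomic features and reading off the concordance index, using the lifelines library. The toy dataframe and column names are placeholders for the study's features and follow-up data:

```python
import pandas as pd
from lifelines import CoxPHFitter

# One row per patient: two placeholder radiomic features plus follow-up
df = pd.DataFrame({
    "feat_texture": [0.2, 1.3, 0.7, 2.1, 0.4, 1.8, 0.9, 1.5],
    "feat_shape":   [5.0, 3.2, 4.1, 2.8, 4.9, 3.0, 4.4, 3.5],
    "time_days":    [620, 150, 510, 90, 700, 200, 430, 260],
    "event":        [0, 1, 0, 1, 0, 1, 0, 1],  # 1 = recurrence/death observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_days", event_col="event")
print("c-index:", cph.concordance_index_)
risk = cph.predict_partial_hazard(df)  # higher value = higher predicted risk
```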
An integrated radiology-pathology machine learning classifier for outcome prediction following radical prostatectomy: Preliminary findings
OBJECTIVES: To evaluate the added benefit of integrating features from pre-treatment MRI (radiomics) and digitized post-surgical pathology slides (pathomics) in prostate cancer (PCa) patients for prognosticating outcomes post radical prostatectomy (RP), including a) rising prostate specific antigen (PSA), and b) extraprostatic extension (EPE). METHODS: Multi-institutional data (N = 58) of PCa patients who underwent pre-treatment 3-T MRI prior to RP were included in this retrospective study. Radiomic and pathomic features were extracted from PCa regions on MRI and RP specimens delineated by expert clinicians. On the training set (D1, N = 44), Cox Proportional-Hazards models M(R), M(P) and M(RaP) were trained using radiomics, pathomics, and their combination, respectively, to prognosticate rising PSA (PSA > 0.03 ng/mL). Top features from M(RaP) were used to train a model to predict EPE on D1 and test on an external dataset (D2, N = 14). The c-index and Kaplan-Meier curves were used for survival analysis, and the area under the ROC curve (AUC) was used for EPE. M(RaP) was compared with the existing post-treatment risk calculator, CAPRA (M(C)). RESULTS: Patients had a median follow-up of 34 months. M(RaP) (c-index = 0.685 +/- 0.05) significantly outperformed M(R) (c-index = 0.646 +/- 0.05), M(P) (c-index = 0.631 +/- 0.06) and M(C) (c-index = 0.601 +/- 0.071) (p < 0.0001). Cross-validated Kaplan-Meier curves showed significant separation among risk groups for rising PSA for M(RaP) (p < 0.005, Hazard Ratio (HR) = 11.36) as compared to M(R) (p = 0.64, HR = 1.33), M(P) (p = 0.19, HR = 2.82) and M(C) (p = 0.10, HR = 3.05). The integrated radio-pathomic model M(RaP) (AUC = 0.80) outperformed M(R) (AUC = 0.57) and M(P) (AUC = 0.76) in predicting EPE on the external data (D2). CONCLUSIONS: Results from this preliminary study suggest that a combination of radiomic and pathomic features can better predict post-surgical outcomes (rising PSA and EPE) compared to either of them individually, as well as the extant prognostic nomogram (CAPRA).
Deep learning reveals lung shape differences on baseline chest CT between mild and severe COVID-19: A multi-site retrospective study
Hiremath, A.
Viswanathan, V. S.
Bera, K.
Shiradkar, R.
Yuan, L.
Armitage, K.
Gilkeson, R.
Ji, M.
Fu, P.
Gupta, A.
Lu, C.
Madabhushi, A.
Comput Biol Med2024Journal Article, cited 0 times
Website
COVID-19-NY-SBU
NLST
Severe COVID-19 can lead to extensive lung disease causing lung architectural distortion. In this study, we employed machine learning and statistical atlas-based approaches to explore possible changes in lung shape among COVID-19 patients and evaluated whether the extent of these changes was associated with COVID-19 severity. On a large multi-institutional dataset (N = 3443), three populations were defined: a) healthy (no COVID-19), b) mild COVID-19 (no ventilator required), and c) severe COVID-19 (ventilator required), and the presence of lung shape differences between them was explored using baseline chest CT. Significant lung shape differences were observed along the mediastinal surfaces of the lungs across all severities of COVID-19 disease. Additionally, differences were seen on the basal surfaces of the lung when comparing healthy and severe COVID-19 patients. Finally, an AI model (a 3D residual convolutional network) characterizing these shape differences, coupled with lung infiltrates (ground-glass opacities and consolidation regions), was found to be associated with COVID-19 severity.
Deep learning models for classifying cancer and COVID-19 lung diseases
The use of Computed Tomography (CT) images for detecting lung diseases is both hard and time-consuming for humans. In the past few years, Artificial Intelligence (AI), and especially deep learning models, have provided impressive results compared with classical methods in many different fields. Nowadays, many researchers are trying to develop deep learning mechanisms to improve the performance of lung disease screening systems based on CT images. In this work, different deep learning-based models such as DarkNet-53 (the backbone of YOLO-v3), ResNet50, and VGG19 were applied to classify CT images of patients with coronavirus disease (COVID-19) or lung cancer. Each model's performance is presented, analyzed, and compared. The dataset used in the study came from two different sources: the large-scale CT dataset for lung cancer diagnoses (Lung-PET-CT-Dx) for lung cancer CT images, and the International COVID-19 Open Radiology Dataset (RICORD) for COVID-19 CT images. As a result, DarkNet-53 outperformed the other models by achieving 100% accuracy, while the accuracies for ResNet50 and VGG19 were 80% and 77%, respectively.
A deep-learning framework to predict cancer treatment response from histopathology images through imputed transcriptomics
Advances in artificial intelligence have paved the way for leveraging hematoxylin and eosin-stained tumor slides for precision oncology. We present ENLIGHT-DeepPT, an indirect two-step approach consisting of (1) DeepPT, a deep-learning framework that predicts genome-wide tumor mRNA expression from slides, and (2) ENLIGHT, which predicts response to targeted and immune therapies from the inferred expression values. We show that DeepPT successfully predicts transcriptomics in all 16 The Cancer Genome Atlas cohorts tested and generalizes well to two independent datasets. ENLIGHT-DeepPT successfully predicts true responders in five independent patient cohorts involving four different treatments spanning six cancer types, with an overall odds ratio of 2.28 and a 39.5% increased response rate among predicted responders versus the baseline rate. Notably, its prediction accuracy, obtained without any training on the treatment data, is comparable to that achieved by directly predicting the response from the images, which requires specific training on the treatment evaluation cohorts.
Artificial CT images can enhance variation of case images in diagnostic radiology skills training
Hofmeijer, E. I. S.
Wu, S. C.
Vliegenthart, R.
Slump, C. H.
van der Heijden, F.
Tan, C. O.
Insights Imaging2023Journal Article, cited 0 times
LIDC-IDRI
Synthetic images
Artificial image
Artificial intelligence
Medical image education
Personalized education
Radiology
Classification
OBJECTIVES: We sought to investigate if artificial medical images can blend with original ones and whether they adhere to the variable anatomical constraints provided. METHODS: Artificial images were generated with a generative model trained on publicly available standard and low-dose chest CT images (805 scans; 39,803 2D images), of which 17% contained evidence of pathological formations (lung nodules). The test set (90 scans; 5121 2D images) was used to assess if artificial images (512 x 512 primary and control image sets) blended in with original images, using both quantitative metrics and expert opinion. We further assessed if pathology characteristics in the artificial images can be manipulated. RESULTS: Primary and control artificial images attained an average objective similarity of 0.78 +/- 0.04 (ranging from 0 [entirely dissimilar] to 1[identical]) and 0.76 +/- 0.06, respectively. Five radiologists with experience in chest and thoracic imaging provided a subjective measure of image quality; they rated artificial images as 3.13 +/- 0.46 (range of 1 [unrealistic] to 4 [almost indistinguishable to the original image]), close to their rating of the original images (3.73 +/- 0.31). Radiologists clearly distinguished images in the control sets (2.32 +/- 0.48 and 1.07 +/- 0.19). In almost a quarter of the scenarios, they were not able to distinguish primary artificial images from the original ones. CONCLUSION: Artificial images can be generated in a way such that they blend in with original images and adhere to anatomical constraints, which can be manipulated to augment the variability of cases. CRITICAL RELEVANCE STATEMENT: Artificial medical images can be used to enhance the availability and variety of medical training images by creating new but comparable images that can blend in with original images. KEY POINTS: * Artificial images, similar to original ones, can be created using generative networks. * Pathological features of artificial images can be adjusted through guiding the network. * Artificial images proved viable to augment the depth and broadening of diagnostic training.
Prostate Segmentation according to the PI-RADS standard using a 3D CNN
Segmentation of the prostate and its internal anatomical zones in magnetic resonance images is an important step in many diagnostic applications. This task can be time consuming and is therefore a good candidate for automation. The aim of this thesis has been to train a three-dimensional convolutional neural network (CNN) that segments the prostate and its four anatomical zones according to the global PI-RADS standard, for use as decision support in the delineation process. This was performed on a publicly available data set that included images for training (n=78) and validation (n=20). For the evaluation, an internal data set from the University Hospital of Umeå consisting of forty patients was used to test the generalization capability of the model. Prior to training, the delineations of the anterior fibromuscular stroma (AFS), the peripheral (PZ), central (CZ) and transitional (TZ) zones, as well as the prostatic urethra, were validated in collaboration with an experienced radiologist. On the test dataset, the Dice score for the segmentation of the prostate was 0.88, and for the internal zones: PZ: 0.72, CZ: 0.40, TZ: 0.72, urethra: 0.05, and AFS: 0.34. Accurate segmentation of the urethra was challenging due to structural differences between the data sets, so these results should be viewed as less relevant when reviewing the structures. In conclusion, the trained CNN can be used as decision support for prostate zone delineation.
ProstateZones - Segmentations of the prostatic zones and urethra for the PROSTATEx dataset
Holmlund, W.
Simko, A.
Soderkvist, K.
Palasti, P.
Totin, S.
Kalmar, K.
Domoki, Z.
Fejes, Z.
Kincses, Z. T.
Brynolfsson, P.
Nyholm, T.
Sci Data2024Journal Article, cited 0 times
Website
PROSTATEx
Manual segmentations are considered the gold standard for ground truth in machine learning applications. Such tasks are tedious and time-consuming, albeit necessary to train reliable models. In this work, we present a dataset with expert segmentations of the prostatic zones and urethra for 200 randomly selected patients from the PROSTATEx dataset. Notably, independent duplicate segmentations were performed for 40 patients, providing inter-reader variability data. This results in a total of 240 segmentations. This dataset can be used to train machine learning models or serve as an external test set for evaluating models trained on private data, thereby addressing a current gap in the field. The delineated structures and terminology adhere to the latest Prostate Imaging Reporting and Data Systems v2.1 guidelines, ensuring consistency.
Classification of Histological Types of Primary Lung Cancer from CT Images Using Clinical Information
Honda, Naoya
Kamiya, Tohru
Kido, Shoji
2024Conference Paper, cited 0 times
Lung-PET-CT-Dx
Identification of primary lung cancer is very important because it influences the course of treatment, especially for small cell carcinomas, which metastasize rapidly and have to be detected at an early stage. In addition to imaging, clinical information is often used in computer-aided diagnosis (CAD) systems; in particular, clinical information such as smoking history is considered important in the diagnosis of lung cancer. In this paper, we propose a method to identify primary lung cancer by adding clinical information from medical records to the images to improve diagnostic accuracy. We use tumor images surrounded by rectangular regions from CT images in an open dataset as input images and train the method based on a deep learning technique. We evaluate the proposed method by discriminating tumors from unknown data. In our experiments, we found that accuracy improved by about 6% when clinical information was added, on 604 images covering four classes of cancer: adenocarcinoma, small cell carcinoma, squamous cell carcinoma, and large cell carcinoma.
CT and cone-beam CT of ablative radiation therapy for pancreatic cancer with expert organ-at-risk contours
Hong, Jun
Reyngold, Marsha
Crane, Christopher
Cuaron, John
Hajj, Carla
Mann, Justin
Zinovoy, Melissa
Yorke, Ellen
LoCastro, Eve
Apte, Aditya P.
Mageras, Gig
Scientific Data2022Journal Article, cited 0 times
Pancreatic-CT-CBCT-SEG
We describe a dataset from patients who received ablative radiation therapy for locally advanced pancreatic cancer (LAPC), consisting of computed tomography (CT) and cone-beam CT (CBCT) images with physician-drawn organ-at-risk (OAR) contours. The image datasets (one CT for treatment planning and two CBCT scans at the time of treatment per patient) were collected from 40 patients. All scans were acquired with the patient in the treatment position and in a deep inspiration breath-hold state. Six radiation oncologists delineated the gastrointestinal OARs consisting of small bowel, stomach and duodenum, such that the same physician delineated all image sets belonging to the same patient. Two trained medical physicists further edited the contours to ensure adherence to delineation guidelines. The image and contour files are available in DICOM format and are publicly available from The Cancer Imaging Archive (https://doi.org/10.7937/TCIA.ESHQ-4D90, Version 2). The dataset can serve as a criterion standard for evaluating the accuracy and reliability of deformable image registration and auto-segmentation algorithms, as well as a training set for deep-learning-based methods.
Modulation of Nogo receptor 1 expression orchestrates myelin-associated infiltration of glioblastoma
As the clinical failure of glioblastoma treatment is attributed to multiple components, including myelin-associated infiltration, assessing the molecular mechanisms underlying this process and identifying the infiltrating cells have been primary objectives in glioblastoma research. Here, we adopted radiogenomic analysis to screen for functionally relevant genes that orchestrate the process of glioma cell infiltration through myelin and promote glioblastoma aggressiveness. The receptor of the Nogo ligand (NgR1) was selected as the top candidate through differentially expressed genes (DEG) and Gene Ontology (GO) enrichment analysis. Gain- and loss-of-function studies on NgR1 elucidated its underlying molecular importance in suppressing myelin-associated infiltration in vitro and in vivo. The migratory ability of glioblastoma cells on myelin is reversibly modulated by NgR1 during the differentiation and dedifferentiation process through the deubiquitinating activity of USP1, which inhibits the degradation of ID1 to downregulate NgR1 expression. Furthermore, pimozide, a well-known antipsychotic drug, upregulates NgR1 by post-translational targeting of USP1, which sensitizes glioma stem cells to myelin inhibition and suppresses myelin-associated infiltration in vivo. In primary human glioblastoma, downregulation of NgR1 expression is associated with highly infiltrative characteristics and poor survival. Together, our findings reveal that loss of NgR1 drives myelin-associated infiltration of glioblastoma and suggest that novel therapeutic strategies aimed at reactivating its expression will improve the clinical outcome of glioblastoma patients.
Approaches to uncovering cancer diagnostic and prognostic molecular signatures
Hong, Shengjun
Huang, Yi
Cao, Yaqiang
Chen, Xingwei
Han, Jing-Dong J
Molecular & Cellular Oncology2014Journal Article, cited 2 times
Website
Algorithm Development
The recent rapid development of high-throughput technology enables the study of molecular signatures for cancer diagnosis and prognosis at multiple levels, from genomic and epigenomic to transcriptomic. These unbiased large-scale scans provide important insights into the detection of cancer-related signatures. In addition to single-layer signatures, such as gene expression and somatic mutations, integrating data from multiple heterogeneous platforms using a systematic approach has been proven to be particularly effective for the identification of classification markers. This approach not only helps to uncover essential driver genes and pathways in the cancer network that are responsible for the mechanisms of cancer development, but will also lead us closer to the ultimate goal of personalized cancer therapy.
SynthStrip: skull-stripping for any brain image
Hoopes, Andrew
Mora, Jocelyn S
Dalca, Adrian V
Fischl, Bruce
Hoffmann, Malte
Neuroimage2022Journal Article, cited 0 times
QIN GBM Treatment Response
The removal of non-brain signal from magnetic resonance imaging (MRI) data, known as skull-stripping, is an integral component of many neuroimage analysis streams. Despite their abundance, popular classical skull-stripping methods are usually tailored to images with specific acquisition properties, namely near-isotropic resolution and T1-weighted (T1w) MRI contrast, which are prevalent in research settings. As a result, existing tools tend to adapt poorly to other image types, such as stacks of thick slices acquired with fast spin-echo (FSE) MRI that are common in the clinic. While learning-based approaches for brain extraction have gained traction in recent years, these methods face a similar burden, as they are only effective for image types seen during the training procedure. To achieve robust skull-stripping across a landscape of imaging protocols, we introduce SynthStrip, a rapid, learning-based brain-extraction tool. By leveraging anatomical segmentations to generate an entirely synthetic training dataset with anatomies, intensity distributions, and artifacts that far exceed the realistic range of medical images, SynthStrip learns to successfully generalize to a variety of real acquired brain images, removing the need for training data with target contrasts. We demonstrate the efficacy of SynthStrip for a diverse set of image acquisitions and resolutions across subject populations, ranging from newborn to adult. We show substantial improvements in accuracy over popular skull-stripping baselines - all with a single trained model. Our method and labeled evaluation data are available at https://w3id.org/synthstrip.
The Auto-Lindberg Project: Standardized Target Nomenclature in Radiation Oncology Enables Real-World Data Extraction From Radiation Treatment Plans
Hope, A.
Kim, J. W.
Kazmierski, M.
Welch, M.
Marsilla, J.
Huang, S. H.
Hosni, A.
Tadic, T.
Patel, T.
Haibe-Kains, B.
Waldron, J.
O'Sullivan, B.
Bratman, S.
Int J Radiat Oncol Biol Phys2024Journal Article, cited 0 times
RADCURE
*Radiation Oncology
Radiotherapy Dosage
*Radiotherapy
Intensity-Modulated
Radiotherapy Planning
Computer-Assisted
Lymph Nodes
Oropharyngeal cancer
Laryngeal cancer
Larynx
nasopharynx
hypopharynx
Head and neck cancer
Algorithm Development
Treatment plan archives contain vast quantities of patient-specific data in a digital format, but are underused due to challenges in storage, retrieval, and analysis methodology. With standardized nomenclature and careful patient outcomes monitoring, treatment plans can be rich sources of data to explore relevant clinical questions. Even without outcomes, treatment plan archives contain data to address questions such as pretreatment disease distribution or institutional treatment strategies.
A comprehensive understanding of cancer's natural history and lymph node (LN) distribution is critical to management of each patient's disease. Macroscopic tumor location has important implications for adjacent LN regions that may also harbor microscopic cancer involvement. Lindberg et al demonstrated from large patient data sets that different head and neck cancer subsites had different distributions of involved LNs [1]. Similar population-based data are rare in the modern era [2], barring some surgical studies [3, 4, 5]. Nodal involvement risk estimates can help select patients for elective neck irradiation, including choices of ipsilateral versus bilateral treatment (eg, oropharyngeal carcinoma [OPC]) [6].
In this study, an algorithm automatically extracted LN data from a large data set of treatment plans for patients with head and neck cancer. Further programmatic methods generated representative example “AutoLindberg” diagrams and summary tables regarding the extent of cervical LN involvement for clinically relevant patient subsets.
Improved generalized ComBat methods for harmonization of radiomic features
Radiomic approaches in precision medicine are promising, but variation associated with image acquisition factors can result in severe biases and low generalizability. Multicenter datasets used in these studies are often heterogeneous in multiple imaging parameters and/or have missing information, resulting in multimodal radiomic feature distributions. ComBat is a promising harmonization tool, but it only harmonizes by single/known variables and assumes standardized input data are normally distributed. We propose a procedure that sequentially harmonizes for multiple batch effects in an optimized order, called OPNested ComBat. Furthermore, we propose to address bimodality by employing a Gaussian Mixture Model (GMM) grouping considered as either a batch variable (OPNested + GMM) or as a protected clinical covariate (OPNested - GMM). Methods were evaluated on features extracted with CapTK and PyRadiomics from two public lung computed tomography (CT) datasets. We found that OPNested ComBat improved harmonization performance over standard ComBat. OPNested + GMM ComBat exhibited the best harmonization performance but the lowest predictive performance, while OPNested - GMM ComBat showed poorer harmonization performance, but the highest predictive performance. Our findings emphasize that improved harmonization performance is no guarantee of improved predictive performance, and that these methods show promise for superior standardization of datasets heterogeneous in multiple or unknown imaging parameters and greater generalizability.
Iterative ComBat methods for harmonization of radiomic features
Horng, Hannah
Singh, Apurva
Yousefi, Bardia
Cohen, Eric A.
Haghighi, Babak
Katz, Sharyn
Noël, Peter B.
Shinohara, Russell T.
Kontos, Despina
2022Conference Proceedings, cited 0 times
NSCLC-Radiomics-Genomics
Background: ComBat is a promising harmonization method for radiomic features, but it cannot harmonize simultaneously by multiple batch effects and shows reduced performance in the setting of bimodal distributions and unknown clinical/batch variables. In this study, we develop and evaluate two iterative ComBat approaches (Nested and Nested+GMM ComBat) to address these limitations and improve radiomic feature harmonization performance. Methods: In Nested ComBat, radiomic features are sequentially harmonized by multiple batch effects with order determined by the permutation associated with the smallest number of features with statistically significant differences due to batch effects. In Nested+GMM ComBat, a Gaussian mixture model is used to identify a scan grouping associated with a latent variable from the observed feature distributions to be added as a batch effect to Nested ComBat. These approaches were used to harmonize differences associated with contrast enhancement, spatial resolution due to reconstruction kernel, and manufacturer in radiomic datasets generated by using CapTK and PyRadiomics to extract features from lung CT datasets (Lung3 and Radiogenomics). Differences due to batch effects in the original data and data harmonized with standard ComBat, Nested ComBat, and Nested+GMM ComBat were assessed. Results: Nested ComBat exhibits similar or better performance compared to standard ComBat, likely due to bimodal feature distributions. Nested+GMM ComBat successfully harmonized features with bimodal distributions and in most cases showed superior harmonization performance when compared to Nested and standard ComBat. Conclusions: Our findings show that Nested ComBat can harmonize by multiple batch effects and that Nested+GMM ComBat can improve harmonization of bimodal features.
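As a rough illustration of the Nested ComBat idea described in the two entries above, the following Python sketch tries every ordering of batch variables, harmonizes sequentially, and keeps the order that leaves the fewest features with significant batch effects. This is not the authors' code: demean_by_batch is a toy stand-in for a real ComBat pass, and the Kruskal-Wallis test and all names are my assumptions.

from itertools import permutations
import numpy as np
from scipy.stats import kruskal

def demean_by_batch(data, batch):
    """Toy 'harmonization': remove per-batch feature means (stand-in for ComBat)."""
    out = data.copy()
    for b in np.unique(batch):
        out[batch == b] -= out[batch == b].mean(axis=0)
    return out

def n_significant(data, batch, alpha=0.05):
    """Count features whose distributions still differ across batch levels."""
    return sum(
        kruskal(*[data[batch == b, j] for b in np.unique(batch)]).pvalue < alpha
        for j in range(data.shape[1])
    )

def nested_combat(data, batches, harmonize=demean_by_batch):
    """Try every harmonization order; keep the one with fewest residual effects."""
    best = None
    for order in permutations(batches):
        out = data.copy()
        for name in order:                      # one harmonization pass per variable
            out = harmonize(out, batches[name])
        score = sum(n_significant(out, batches[n]) for n in order)
        if best is None or score < best[0]:
            best = (score, order, out)
    return best

rng = np.random.default_rng(0)
data = rng.normal(size=(60, 5))                 # 60 scans x 5 radiomic features
batches = {"kernel": rng.integers(0, 2, 60), "contrast": rng.integers(0, 2, 60)}
score, order, harmonized = nested_combat(data, batches)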
Clinical validation of deep learning algorithms for radiotherapy targeting of non-small-cell lung cancer: an observational study
Hosny, Ahmed
Bitterman, Danielle S.
Guthier, Christian V.
Qian, Jack M.
Roberts, Hannah
Perni, Subha
Saraf, Anurag
Peng, Luke C.
Pashtan, Itai
Ye, Zezhong
Kann, Benjamin H.
Kozono, David E.
Christiani, David
Catalano, Paul J.
Aerts, Hugo J. W. L.
Mak, Raymond H.
The Lancet Digital Health2022Journal Article, cited 0 times
Website
Lung-PET-CT-Dx
NSCLC Radiogenomics
NSCLC-Cetuximab (RTOG-0617)
Inter-observer variability
Radiation Therapy
Background
Artificial intelligence (AI) and deep learning have shown great potential in streamlining clinical tasks. However, most studies remain confined to in silico validation in small internal cohorts, without external validation or data on real-world clinical utility. We developed a strategy for the clinical validation of deep learning models for segmenting primary non-small-cell lung cancer (NSCLC) tumours and involved lymph nodes in CT images, which is a time-intensive step in radiation treatment planning, with large variability among experts.
Methods
In this observational study, CT images and segmentations were collected from eight internal and external sources from the USA, the Netherlands, Canada, and China, with patients from the Maastro and Harvard-RT1 datasets used for model discovery (segmented by a single expert). Validation consisted of interobserver and intraobserver benchmarking, primary validation, functional validation, and end-user testing on the following datasets: multi-delineation, Harvard-RT1, Harvard-RT2, RTOG-0617, NSCLC-radiogenomics, Lung-PET-CT-Dx, RIDER, and thorax phantom. Primary validation consisted of stepwise testing on increasingly external datasets using measures of overlap including volumetric dice (VD) and surface dice (SD). Functional validation explored dosimetric effect, model failure modes, test-retest stability, and accuracy. End-user testing with eight experts assessed automated segmentations in a simulated clinical setting.
Findings
We included 2208 patients imaged between 2001 and 2015, with 787 patients used for model discovery and 1421 for model validation, including 28 patients for end-user testing. Models showed an improvement over the interobserver benchmark (multi-delineation dataset; VD 0.91 [IQR 0.83–0.92], p=0.0062; SD 0.86 [0.71–0.91], p=0.0005), and were within the intraobserver benchmark. For primary validation, AI performance on internal Harvard-RT1 data (segmented by the same expert who segmented the discovery data) was VD 0.83 (IQR 0.76–0.88) and SD 0.79 (0.68–0.88), within the interobserver benchmark. Performance on internal Harvard-RT2 data segmented by other experts was VD 0.70 (0.56–0.80) and SD 0.50 (0.34–0.71). Performance on RTOG-0617 clinical trial data was VD 0.71 (0.60–0.81) and SD 0.47 (0.35–0.59), with similar results on diagnostic radiology datasets NSCLC-radiogenomics and Lung-PET-CT-Dx. Despite these geometric overlap results, models yielded target volumes with equivalent radiation dose coverage to those of experts. We also found non-significant differences between de novo expert and AI-assisted segmentations. AI assistance led to a 65% reduction in segmentation time (5.4 min; p<0.0001) and a 32% reduction in interobserver variability (SD; p=0.013).
Interpretation
We present a clinical validation strategy for AI models. We found that in silico geometric segmentation metrics might not correlate with clinical utility of the models. Experts' segmentation style and preference might affect model performance.
Funding
US National Institutes of Health and EU European Research Council.
Residual Tumor Cellularity Assessment of Breast Cancer After Neoadjuvant Therapy Using Image Transformer
Hossain, M. D. Shakhawat
Rahman, M. D. Sahilur
Ahmed, Munim
Alfaz, Nazia
Munira Shifat, Sirajum
Mahbubul Syeed, M. M.
Hussen, Mohammad Anowar
Uddin, Mohammad Faisal
IEEE Access2024Journal Article, cited 0 times
Post-NAT-BRCA
Renal Cancer Cell Nuclei Detection from Cytological Images Using Convolutional Neural Network for Estimating Proliferation Rate
Hossain, Shamim
Jalab, Hamid A.
Zulfiqar, Fariha
Pervin, Mahfuza
Journal of Telecommunication, Electronic and Computer Engineering2019Journal Article, cited 0 times
Website
TCGA-KIRC
Kidney
Convolutional Neural Network (CNN)
Machine Learning
Cytological images play an essential role in monitoring the progress of cancer cell mutation, and the proliferation rate of cancer cells is a prerequisite for cancer treatment. Accurately and quickly identifying the nuclei of abnormal cells and determining the correct proliferation rate is hard, since it requires in-depth manual examination, observation, and cell counting, which are tedious and time-consuming. The proposed method starts with segmentation to separate the background and object regions using K-means clustering. Small candidate regions that contain cells are then detected automatically based on a support vector machine. Sets of cell regions are marked with selective search according to the local distance between the nucleus and the cell boundary, distinguishing overlapping from non-overlapping cell regions. After that, the selectively segmented cell features are used to learn normal and abnormal cell nuclei separately with a regional convolutional neural network. Finally, the proliferation rate in the invasive cancer area is calculated based on the number of abnormal cells. A set of renal cancer cell cytological images was obtained from the National Cancer Institute, USA, and this data set is available for research. Quantitative evaluation of the method was performed by comparing its accuracy with that of other state-of-the-art cancer cell nuclei detection methods; qualitative assessment was based on human observation. The proposed method detects renal cancer cell nuclei accurately and provides the proliferation rate automatically.
A Pipeline for Lung Tumor Detection and Segmentation from CT Scans Using Dilated Convolutional Neural Networks
Lung cancer is the most prevalent cancer worldwide with about 230,000 new cases every year. Most cases go undiagnosed until it’s too late, especially in developing countries and remote areas. Early detection is key to beating cancer. Towards this end, the work presented here proposes an automated pipeline for lung tumor detection and segmentation from 3D lung CT scans from the NSCLC-Radiomics Dataset. It also presents a new dilated hybrid-3D convolutional neural network architecture for tumor segmentation. First, a binary classifier chooses CT scan slices that may contain parts of a tumor. To segment the tumors, the selected slices are passed to the segmentation model which extracts feature maps from each 2D slice using dilated convolutions and then fuses the stacked maps through 3D convolutions - incorporating the 3D structural information present in the CT scan volume into the output. Lastly, the segmentation masks are passed through a post-processing block which cleans them up through morphological operations. The proposed segmentation model outperformed other contemporary models like LungNet and U-Net. The average and median dice coefficient on the test set for the proposed model were 65.7% and 70.39% respectively. The next best model, LungNet had dice scores of 62.67% and 66.78%.
Tissue Artifact Segmentation and Severity Assessment for Automatic Analysis Using WSI
Hossain, Shakhawat
Shahriar, Galib Muhammad
Syeed, M. M. Mahbubul
Uddin, Mohammad Faisal
Hasan, Mahady
Hossain, Sakir
Bari, Rubina
IEEE Access2023Journal Article, cited 0 times
Post-NAT-BRCA
Traditionally, pathological analysis and diagnosis are performed by manually eyeballing glass-slide specimens under a microscope by an expert. The whole slide image (WSI) is the digital specimen produced from the glass slide. WSI enabled specimens to be observed on a computer screen and led to computational pathology, where computer vision and artificial intelligence are utilized for automated analysis and diagnosis. With the current computational advancement, the entire WSI can be analyzed autonomously without human supervision. However, the analysis could fail or lead to a wrong diagnosis if the WSI is affected by tissue artifacts such as tissue folds or air bubbles, depending on their severity. Existing artifact detection methods rely on experts for severity assessment to eliminate artifact-affected regions from the analysis; this process is time-consuming, exhausting, and undermines the goal of automated analysis. Removing artifacts without evaluating their severity, on the other hand, could result in the loss of diagnostically important data. Therefore, it is necessary to detect artifacts and then assess their severity automatically. In this paper, we propose a system that incorporates severity evaluation with artifact detection utilizing convolutional neural networks (CNN). The proposed system uses DoubleUNet to segment artifacts and an ensemble network of six fine-tuned CNN models to determine severity. This method outperformed the current state-of-the-art in accuracy by 9% for artifact segmentation and achieved a strong correlation of 97% with the pathologist's evaluation for severity assessment. The robustness of the system was demonstrated using our proposed heterogeneous dataset, and practical usability was ensured by integrating it with an automated analysis system.
Assessing the stability and discriminative ability of radiomics features in the tumor microenvironment: Leveraging peri-tumoral regions in vestibular schwannoma
Hosseini, Mahboube Sadat
Aghamiri, Seyed Mahmoud Reza
Fatemi Ardekani, Ali
BagheriMofidi, Seyed Mehdi
European Journal of Radiology2024Journal Article, cited 0 times
Vestibular-Schwannoma-SEG
Radiomics
Tumor microenvironment
Vestibular Schwannoma
Peri-tumoral regions
Stability analysis
Purpose
The tumor microenvironment (TME) plays a crucial role in tumor progression and treatment response. Radiomics offers a non-invasive approach to studying the TME by extracting quantitative features from medical images. In this study, we present a novel approach to assess the stability and discriminative ability of radiomics features in the TME of vestibular schwannoma (VS).
Methods
Magnetic Resonance Imaging (MRI) data from 242 VS patients were analyzed, including contrast-enhanced T1-weighted (ceT1) and high-resolution T2-weighted (hrT2) sequences. Radiomics features were extracted from concentric peri-tumoral regions of varying sizes. The intraclass correlation coefficient (ICC) was used to assess feature stability and discriminative ability, establishing quantile thresholds for ICCmin and ICCmax.
Results
The identified thresholds for ICCmin and ICCmax were 0.45 and 0.72, respectively. Features were classified into four categories: stable and discriminative (S-D), stable and non-discriminative (S-ND), unstable and discriminative (US-D), and unstable and non-discriminative (US-ND). Different feature groups exhibited varying proportions of S-D features across ceT1 and hrT2 sequences. The similarity of S-D features between ceT1 and hrT2 sequences was evaluated using Jaccard's index, with a value of 0.78 across all feature groups combined, ranging from 0.68 (intensity features) to 1.00 (Neighbouring Gray Tone Difference Matrix (NGTDM) features) for individual groups.
Conclusions
This study provides a framework for identifying stable and discriminative radiomics features in the TME, which could serve as potential biomarkers or predictors of patient outcomes, ultimately improving the management of VS patients.
Deep Learning Segmentation of Ascites on Abdominal CT Scans for Automatic Volume Quantification
Purpose: To evaluate the performance of an automated deep learning method in detecting ascites and subsequently quantifying its volume in patients with liver cirrhosis and ovarian cancer. Materials and Methods: This retrospective study included contrast-enhanced and noncontrast abdominal-pelvic CT scans of patients with cirrhotic ascites and patients with ovarian cancer from two institutions, National Institutes of Health (NIH) and University of Wisconsin (UofW). The model, trained on The Cancer Genome Atlas Ovarian Cancer dataset (mean age, 60 years +/- 11 [SD]; 143 female), was tested on two internal (NIH-LC and NIH-OV) and one external dataset (UofW-LC). Its performance was measured by the Dice coefficient, standard deviations, and 95% confidence intervals, focusing on ascites volume in the peritoneal cavity. Results: On NIH-LC (25 patients; mean age, 59 years +/- 14; 14 male) and NIH-OV (166 patients; mean age, 65 years +/- 9; all female), the model achieved Dice scores of 85.5% +/- 6.1% (CI: 83.1%-87.8%) and 82.6% +/- 15.3% (CI: 76.4%-88.7%), with median volume estimation errors of 19.6% (IQR: 13.2%-29.0%) and 5.3% (IQR: 2.4%-9.7%), respectively. On UofW-LC (124 patients; mean age, 46 years +/- 12; 73 female), the model had a Dice score of 83.0% +/- 10.7% (CI: 79.8%-86.3%) and median volume estimation error of 9.7% (IQR: 4.5%-15.1%). The model showed strong agreement with expert assessments, with r² values of 0.79, 0.98, and 0.97 across the test sets. Conclusion: The proposed deep learning method performed well in segmenting and quantifying the volume of ascites in concordance with expert radiologist assessments.
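For reference, a minimal Python sketch (not the authors' code) of the two metrics reported above: Dice overlap and percent volume-estimation error between a predicted ascites mask and a reference mask, both boolean arrays on the same voxel grid. The voxel-volume parameter is an assumption for converting voxel counts to millilitres.

import numpy as np

def dice(pred, ref):
    """Dice coefficient between two boolean masks."""
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

def volume_error_pct(pred, ref, voxel_volume_ml):
    """Absolute percent error of the predicted ascites volume."""
    v_pred = pred.sum() * voxel_volume_ml
    v_ref = ref.sum() * voxel_volume_ml
    return abs(v_pred - v_ref) / v_ref * 100.0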
Dataset of segmented nuclei in hematoxylin and eosin stained histopathology images of ten cancer types
Hou, Le
Gupta, Rajarsi
Van Arnam, John S.
Zhang, Yuwei
Sivalenka, Kaustubh
Samaras, Dimitris
Kurc, Tahsin M.
Saltz, Joel H.
Scientific Data2020Journal Article, cited 0 times
Pan-Cancer-Nuclei-Seg
The distribution and appearance of nuclei are essential markers for the diagnosis and study of cancer. Despite the importance of nuclear morphology, there is a lack of large scale, accurate, publicly accessible nucleus segmentation data. To address this, we developed an analysis pipeline that segments nuclei in whole slide tissue images from multiple cancer types with a quality control process. We have generated nucleus segmentation results in 5,060 Whole Slide Tissue images from 10 cancer types in The Cancer Genome Atlas. One key component of our work is that we carried out a multi-level quality control process (WSI-level and image patch-level), to evaluate the quality of our segmentation results. The image patch-level quality control used manual segmentation ground truth data from 1,356 sampled image patches. The datasets we publish in this work consist of roughly 5 billion quality controlled nuclei from more than 5,060 TCGA WSIs from 10 different TCGA cancer types and 1,356 manually segmented TCGA image patches from the same 10 cancer types plus additional 4 cancer types.
Diffraction Block in Extended nn-UNet for Brain Tumor Segmentation
Hou, Qingfan
Wang, Zhuofei
Wang, Jiao
Jiang, Jian
Peng, Yanjun
2023Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Automatic brain tumor segmentation based on 3D mpMRI is highly significant for brain diagnosis, monitoring, and treatment planning. Given the limitations of manual delineation, automatic and accurate segmentation based on a deep learning network is a practical necessity. The BraTS 2022 challenge provides ample data for developing our network. In this work, we propose a diffraction block based on the Fraunhofer single-slit diffraction principle, which emphasizes the effect of associated features and suppresses isolated features. We added the diffraction block to nn-UNet, which took first place in the BraTS 2020 competition. We also improved nn-UNet by referring to the solution proposed by the 2021 winner, including using a larger network and replacing batch normalization with group normalization. On the final unseen test data, our method ranked first for the pediatric population data and third for the BraTS continuous evaluation data.
Brain Tumor Segmentation based on Knowledge Distillation and Adversarial Training
3D MRI brain tumor segmentation is a reliable basis for disease diagnosis and future treatment planning. Early on, the segmentation of brain tumors was mostly done manually; however, manual segmentation of 3D MRI brain tumors requires professional anatomical knowledge and may be inaccurate. In this paper, we propose a 3D MRI brain tumor segmentation architecture based on the encoder-decoder structure. Specifically, we introduce knowledge distillation and adversarial training methods, which compress the model and improve its accuracy and robustness. Furthermore, we obtain soft targets by training multiple teacher networks and then apply them to the student network. Finally, we evaluate our method on the challenging BraTS dataset. The performance of our proposed model is superior to state-of-the-art methods.
Learning-based parameter prediction for quality control in three-dimensional medical image compression
Hou, Y. X.
Ren, Z.
Tao, Y. B.
Chen, W.
Frontiers of Information Technology & Electronic Engineering2021Journal Article, cited 0 times
Website
LIDC-IDRI
RIDER Lung CT
LungCT-Diagnosis
REMBRANDT
TCGA-HNSC
Imaging Feature
medical image compression
high efficiency video coding (hevc)
quality control
learning-based
Quality control is of vital importance in compressing three-dimensional (3D) medical imaging data. Optimal compression parameters need to be determined based on the specific quality requirement. In high efficiency video coding (HEVC), regarded as the state-of-the-art compression tool, the quantization parameter (QP) plays a dominant role in controlling quality. The direct application of a video-based scheme in predicting the ideal parameters for 3D medical image compression cannot guarantee satisfactory results. In this paper we propose a learning-based parameter prediction scheme to achieve efficient quality control. Its kernel is a support vector regression (SVR) based learning model that is capable of predicting the optimal QP from both video-based and structural image features extracted directly from raw data, avoiding time-consuming processes such as pre-encoding and iteration, which are often needed in existing techniques. Experimental results on several datasets verify that our approach outperforms current video-based quality control methods.
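As a hedged illustration of the regression step described above, here is a minimal scikit-learn sketch that predicts an optimal QP from a feature vector with SVR. The feature dimensionality, kernel, and hyperparameters are assumptions; the paper's exact features are not reproduced here.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# X: per-volume video-based + structural features; y: the optimal QP for each volume
X_train = rng.random((100, 12))            # 12 features is an assumption
y_train = rng.integers(20, 45, 100)        # plausible HEVC QP range, assumed

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X_train, y_train)
qp_pred = model.predict(rng.random((1, 12)))   # predicted QP for a new volume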
Publishing descriptions of non-public clinical datasets: proposed guidance for researchers, repositories, editors and funding organisations
Hrynaszkiewicz, Iain
Khodiyar, Varsha
Hufton, Andrew L
Sansone, Susanna-Assunta
Research Integrity and Peer Review2016Journal Article, cited 8 times
Website
Open science
Sharing of experimental clinical research data usually happens between individuals or research groups rather than via public repositories, in part due to the need to protect research participant privacy. This approach to data sharing makes it difficult to connect journal articles with their underlying datasets and is often insufficient for ensuring access to data in the long term. Voluntary data sharing services such as the Yale Open Data Access (YODA) and Clinical Study Data Request (CSDR) projects have increased accessibility to clinical datasets for secondary uses while protecting patient privacy and the legitimacy of secondary analyses, but these resources are generally disconnected from journal articles, where researchers typically search for reliable information to inform future research. New scholarly journal and article types dedicated to increasing accessibility of research data have emerged in recent years and, in general, journals are developing stronger links with data repositories. There is a need for increased collaboration between journals, data repositories, researchers, funders, and voluntary data sharing services to increase the visibility and reliability of clinical research. Using the journal Scientific Data as a case study, we propose and show examples of changes to the format and peer-review process for journal articles to more robustly link them to data that are only available on request. We also propose additional features for data repositories to better accommodate non-public clinical datasets, including Data Use Agreements (DUAs).
Performance of sparse-view CT reconstruction with multi-directional gradient operators
Hsieh, C. J.
Jin, S. C.
Chen, J. C.
Kuo, C. W.
Wang, R. T.
Chu, W. C.
PLoS One2019Journal Article, cited 0 times
Website
TCGA-STAD
To further reduce the noise and artifacts in reconstructed sparse-view CT images, we have modified the traditional total variation (TV) methods, which only calculate the gradient variations in the x and y directions, and have proposed 8- and 26-directional (multi-directional) gradient operators for the TV calculation to improve the quality of reconstructed images. Different from traditional TV methods, the proposed 8- and 26-directional gradient operators additionally consider the diagonal directions in the TV calculation. The proposed method preserves more information from the original tomographic data in the gradient transform step to obtain better reconstructed image quality. Our algorithms were tested using the two-dimensional Shepp-Logan phantom and three-dimensional clinical CT images. Results were evaluated using the root-mean-square error (RMSE), peak signal-to-noise ratio (PSNR), and universal quality index (UQI). All the experimental results show that the sparse-view CT images reconstructed using the proposed 8- and 26-directional gradient operators are superior to those reconstructed by traditional TV methods. Qualitative and quantitative analyses indicate that the more directions the gradient operator considers, the better the images that can be reconstructed. The proposed 8- and 26-directional gradient operators have better capability to reduce noise and artifacts than traditional TV methods, and they can be applied to and combined with existing CT reconstruction algorithms derived from compressed-sensing (CS) theory to produce better image quality in sparse-view reconstruction.
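To make the multi-directional idea concrete, here is an illustrative Python sketch of an 8-directional TV term for a 2D image: besides the usual horizontal and vertical differences, the two diagonal differences are included, scaled by 1/sqrt(2) for the longer step. The weighting choice is my assumption, not necessarily the paper's exact formulation.

import numpy as np

def tv_8dir(img):
    """Anisotropic TV over 4 unique directions (8 neighbors) of a 2D image."""
    shifts = [(0, 1, 1.0), (1, 0, 1.0),               # horizontal, vertical
              (1, 1, 2 ** -0.5), (1, -1, 2 ** -0.5)]  # the two diagonals
    tv = 0.0
    for dy, dx, w in shifts:
        diff = img - np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        tv += w * np.abs(diff).sum()
    return tv

phantom = np.random.rand(64, 64)   # stand-in for a reconstructed slice
print(tv_8dir(phantom))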
Quantitative glioma grading using transformed gray-scale invariant textures of MRI
Hsieh, Kevin Li-Chun
Chen, Cheng-Yu
Lo, Chung-Ming
Computers in Biology and Medicine2017Journal Article, cited 8 times
Website
Algorithm Development
TCGA-LGG
TCGA-GBM
BRAIN
Computer Aided Diagnosis (CADx)
Background: A computer-aided diagnosis (CAD) system based on intensity-invariant magnetic resonance (MR) imaging features was proposed to grade gliomas for general application to various scanning systems and settings.; Method: In total, 34 glioblastomas and 73 lower-grade gliomas comprised the image database to evaluate the proposed CAD system. For each case, the local texture on MR images was transformed into a local binary pattern (LBP) which was intensity-invariant. From the LBP, quantitative image features, including the histogram moment and textures, were extracted and combined in a logistic regression classifier to establish a malignancy prediction model. The performance was compared to conventional texture features to demonstrate the improvement.; Results: The performance of the CAD system based on LBP features achieved an accuracy of 93% (100/107), a sensitivity of 97% (33/34), a negative predictive value of 99% (67/68), and an area under the receiver operating characteristic curve (Az) of 0.94, which were significantly better than the conventional texture features: an accuracy of 84% (90/107), a sensitivity of 76% (26/34), a negative predictive value of 89% (64/72), and an Az of 0.89 with respective p values of 0.0303, 0.0122, 0.0201, and 0.0334.; Conclusions: More-robust texture features were extracted from MR images and combined into a significantly better CAD system for distinguishing glioblastomas from lower-grade gliomas. The proposed CAD system would be more practical in clinical use with various imaging systems and settings.
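As a minimal sketch of the pipeline described above, assuming dummy data: uniform local binary pattern (LBP) histograms serve as intensity-invariant texture features feeding a logistic-regression grade classifier. The P and R parameters and the random stand-in ROIs are assumptions, not the paper's values.

import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.linear_model import LogisticRegression

def lbp_histogram(gray_roi, P=8, R=1.0):
    """Normalized histogram of uniform LBP codes for one tumor ROI."""
    lbp = local_binary_pattern(gray_roi, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(0)
rois = [rng.random((64, 64)) for _ in range(20)]  # stand-ins for delineated tumor ROIs
y = rng.integers(0, 2, 20)                        # 1 = glioblastoma, 0 = lower grade
X = np.stack([lbp_histogram(r) for r in rois])
clf = LogisticRegression(max_iter=1000).fit(X, y)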
Computer-aided grading of gliomas based on local and global MRI features
Hsieh, Kevin Li-Chun
Lo, Chung-Ming
Hsiao, Chih-Jou
Computer Methods and Programs in Biomedicine2016Journal Article, cited 13 times
Website
TCGA-GBM
TCGA-LGG
Radiomics
BACKGROUND AND OBJECTIVES: A computer-aided diagnosis (CAD) system based on quantitative magnetic resonance imaging (MRI) features was developed to evaluate the malignancy of diffuse gliomas, which are central nervous system tumors. METHODS: The acquired image database for the CAD performance evaluation was composed of 34 glioblastomas and 73 diffuse lower-grade gliomas. In each case, tissues enclosed in a delineated tumor area were analyzed according to their gray-scale intensities on MRI scans. Four histogram moment features describing the global gray-scale distributions of gliomas tissues and 14 textural features were used to interpret local correlations between adjacent pixel values. With a logistic regression model, the individual feature set and a combination of both feature sets were used to establish the malignancy prediction model. RESULTS: Performances of the CAD system using global, local, and the combination of both image feature sets achieved accuracies of 76%, 83%, and 88%, respectively. Compared to global features, the combined features had significantly better accuracy (p = 0.0213). With respect to the pathology results, the CAD classification obtained substantial agreement kappa = 0.698, p < 0.001. CONCLUSIONS: Numerous proposed image features were significant in distinguishing glioblastomas from lower-grade gliomas. Combining them further into a malignancy prediction model would be promising in providing diagnostic suggestions for clinical use.
Effect of a computer-aided diagnosis system on radiologists' performance in grading gliomas with MRI
Hsieh, Kevin Li-Chun
Tsai, Ruei-Je
Teng, Yu-Chuan
Lo, Chung-Ming
PLoS One2017Journal Article, cited 0 times
Algorithm Development
Computer Aided Diagnosis (CADx)
Classification
Lower-grade glioma (LGG)
Glioblastoma Multiforme (GBM)
The effects of a computer-aided diagnosis (CAD) system based on quantitative intensity features with magnetic resonance (MR) imaging (MRI) were evaluated by examining radiologists' performance in grading gliomas. The acquired MRI database included 71 lower-grade gliomas and 34 glioblastomas. Quantitative image features were extracted from the tumor area and combined in a CAD system to generate a prediction model. The effect of the CAD system was evaluated in a two-stage procedure: first, a radiologist performed a conventional reading; a sequential second reading was then made with the malignancy estimate provided by the CAD system. Each MR image was regularly read by one radiologist out of a group of three radiologists. The CAD system achieved an accuracy of 87% (91/105), a sensitivity of 79% (27/34), a specificity of 90% (64/71), and an area under the receiver operating characteristic curve (Az) of 0.89. In the evaluation, the radiologists' Az values significantly improved from 0.81, 0.87, and 0.84 to 0.90, 0.90, and 0.88 with p = 0.0011, 0.0076, and 0.0167, respectively. Based on the MR image features, the proposed CAD system not only performed well in distinguishing glioblastomas from lower-grade gliomas but also provided suggestions about glioma grading that reinforced the radiologists' confidence ratings.
Classification of Prostate Transitional Zone Cancer and Hyperplasia Using Deep Transfer Learning From Disease-Related Images
Purpose: The diagnosis of prostate transition zone cancer (PTZC) remains a clinical challenge due to its similarity to benign prostatic hyperplasia (BPH) on MRI. Deep convolutional neural networks (DCNNs) have shown high efficacy in diagnosing PTZC on medical imaging but are limited by small data sizes. A transfer learning (TL) method was combined with deep learning to overcome this challenge. Materials and Methods: A retrospective investigation was conducted on 217 patients enrolled from our hospital database (208 patients) and The Cancer Imaging Archive (nine patients). Using T2-weighted images (T2WIs) and apparent diffusion coefficient (ADC) maps, DCNN models were trained and compared between different TL databases (ImageNet vs. disease-related images) and protocols (from scratch, fine-tuning, or transductive transferring). Results: PTZC and BPH can be classified through traditional DCNNs. The efficacy of TL from natural images was limited but improved by transferring knowledge from disease-related images. Furthermore, transductive TL from disease-related images had efficacy comparable to the fine-tuning method. Limitations include the retrospective design and a relatively small sample size. Conclusion: Deep TL from disease-related images is a powerful tool for an automated PTZC diagnostic system. In developing regions where only conventional MR scans are available, accurate diagnosis of PTZC can be achieved via transductive deep TL from disease-related images.
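For orientation, a minimal PyTorch sketch of the fine-tuning flavor of transfer learning compared above: a pretrained backbone is frozen and only a new two-class head (PTZC vs. BPH) is retrained. The choice of ResNet-18 and all hyperparameters are my assumptions, not the paper's setup.

import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():                      # freeze the pretrained extractor
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)     # new head: PTZC vs. BPH

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()                 # train only the head on MR patches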
Hu similarity coefficient: a clinically oriented metric to evaluate contour accuracy in radiation therapy
To propose a clinically oriented quantitative metric, the Hu similarity coefficient (HSC), to evaluate contour quality, gauge the performance of auto contouring methods, and aid effective allocation of clinical resources. The HSC is defined as the ratio of the number of boundary points of the initial contour that do not require modification over the number of boundary points of the final adjusted contour. To demonstrate the clinical utility of the HSC in contour evaluation, we used publicly available pelvic CT data from the Cancer Imaging Archive. The bladder was selected as the organ of interest. It was contoured by a certified medical dosimetrist and reviewed by a certified medical physicist; this contour served as the ground truth contour. From this contour, we simulated two contour sets. The first set had the same Dice similarity coefficient (DSC) but different HSCs, whereas the second set kept a constant HSC while exhibiting different DSCs. Four individuals were asked to adjust the simulated contours until they met clinical standards. The corresponding contour modification times were recorded and normalized by each individual's manual contouring time from scratch. The normalized contour modification time was correlated to the HSC and DSC to evaluate their suitability as quantitative metrics assessing contour quality. The HSC maintained a strong correlation with the normalized contour modification time when both sets of simulated contours were included in the analysis. The correlation between the DSC and normalized contour modification time, however, was weak. Compared to the DSC, the HSC is more suitable for evaluating contour quality. We demonstrated that the HSC correlated well with the average normalized contour modification time. Clinically, contour modification time is the most relevant factor in allocating clinical resources. Therefore, the HSC is better suited than the DSC to assess contour quality from a clinical perspective.
HU Coefficient: A Clinically Oriented Metric to Evaluate Contour Accuracy in Radiation Therapy
Purpose: To propose a clinically oriented quantitative metric, the HU coefficient, to evaluate contour quality, gauge the performance of auto contouring methods, and aid effective allocation of clinical resources. Materials and Methods: Publicly available pelvic CT data from the Cancer Imaging Archive was used to demonstrate the clinical utility of the HU coefficient in contour evaluation. The bladder was selected as the organ of interest. It was contoured by a certified medical dosimetrist and reviewed by a certified medical physicist. This contour served as the ground truth contour. From this contour, we simulated two contour sets. The first set had the same Dice similarity coefficient (DSC) but different HU coefficients, whereas the second set kept a constant HU coefficient while exhibiting different DSCs. Four individuals were asked to adjust the simulated contours until they met clinical standards. The corresponding contour modification times were recorded and normalized by each individual's manual contouring times from scratch. The normalized contour modification time was correlated to the HU and DSC to evaluate their suitability as quantitative metrics assessing contour quality. Results: The HU coefficient maintained a strong correlation with the normalized contour modification time when both sets of simulated contours were included in analysis. The correlation between the DSC and normalized contour modification time, however, was weak. Compared to DSC, HU is more suitable for evaluating contour quality. Conclusions: We demonstrated that the HU coefficient correlated well with the average normalized contour modification time. Clinically, contour modification time is the most relevant factor in allocating clinical resources. Therefore, the HU coefficient is better suited than DSC to assess contour quality from a clinical perspective.
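Following the definition given in the two entries above, here is a hedged Python sketch of the metric: the fraction of the final contour's boundary points already present (within a tolerance) in the initial contour. The nearest-neighbor tolerance handling is my assumption about how "points not requiring modification" would be identified in practice.

import numpy as np
from scipy.spatial import cKDTree

def hsc(initial_pts, final_pts, tol=1.0):
    """initial_pts, final_pts: (N, 2) arrays of contour boundary points (e.g. mm)."""
    tree = cKDTree(final_pts)
    d, _ = tree.query(initial_pts)          # distance to nearest final-contour point
    unmodified = (d <= tol).sum()           # initial points kept in the final contour
    return unmodified / len(final_pts)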
An End-to-end Image Feature Representation Model of Pulmonary Nodules
Hu, Jinqiao
2022Conference Paper, cited 0 times
LIDC-IDRI
Computer Aided Detection (CADe)
LUNG
Support Vector Machine (SVM)
Convolutional Neural Network (CNN)
Swarm
Deep Learning
Lung cancer is a cancer with a high mortality rate; if it can be detected early, the mortality rate can be greatly reduced. Lung nodule detection based on CT or MRI equipment is a common method to detect early lung cancer. Computer vision technology is widely used for image processing and classification of pulmonary nodules, but because the distinction between pulmonary nodule areas and surrounding non-nodule areas is not obvious, general image processing methods can extract only superficial features of pulmonary nodule images, and detection accuracy cannot be improved further. In this paper, we propose an end-to-end model for constructing feature representations for lung nodule image classification based on local and global features. First, local patch regions are selected and associated with relatively intact tissue, and then local and global features are extracted from each region. The deep model produces high-level abstract representations that describe the image objects. Test results on standard datasets show that the method proposed in this paper has advantages on some evaluation metrics.
Brain Tumor Segmentation Using Multi-Cascaded Convolutional Neural Networks and Conditional Random Field
Hu, Kai
Gan, Qinghai
Zhang, Yuan
Deng, Shuhua
Xiao, Fen
Huang, Wei
Cao, Chunhong
Gao, Xieping
IEEE Access2019Journal Article, cited 2 times
Website
BraTS
TCGA-GBM
TCGA-LGG
Convolutional Neural Network (CNN)
Magnetic Resonance Imaging (MRI)
Segmentation
Accurate segmentation of brain tumor is an indispensable component for cancer diagnosis and treatment. In this paper, we propose a novel brain tumor segmentation method based on multicascaded convolutional neural network (MCCNN) and fully connected conditional random fields (CRFs). The segmentation process mainly includes the following two steps. First, we design a multi-cascaded network architecture by combining the intermediate results of several connected components to take the local dependencies of labels into account and make use of multi-scale features for the coarse segmentation. Second, we apply CRFs to consider the spatial contextual information and eliminate some spurious outputs for the fine segmentation. In addition, we use image patches obtained from axial, coronal, and sagittal views to respectively train three segmentation models, and then combine them to obtain the final segmentation result. The validity of the proposed method is evaluated on three publicly available databases. The experimental results show that our method achieves competitive performance compared with the state-of-the-art approaches.
Weakly Supervised Deep Learning for COVID-19 Infection Detection and Classification From CT Images
Hu, Shaoping
Gao, Yuan
Niu, Zhangming
Jiang, Yinghui
Li, Lao
Xiao, Xianglu
Wang, Minhao
Fang, Evandro Fei
Menpes-Smith, Wade
Xia, Jun
Ye, Hui
Yang, Guang
IEEE Access2020Journal Article, cited 0 times
LCTSC
An outbreak of a novel coronavirus disease (i.e., COVID-19) was recorded in Wuhan, China in late December 2019 and subsequently became a pandemic around the world. Although COVID-19 is an acutely treated disease, it can also be fatal, with a case-fatality risk of 4.03% in China and as high as 13.04% in Algeria and 12.67% in Italy (as of 8th April 2020). The onset of serious illness may result in death as a consequence of substantial alveolar damage and progressive respiratory failure. Although laboratory testing, e.g., using reverse transcription polymerase chain reaction (RT-PCR), is the gold standard for clinical diagnosis, the tests may produce false negatives. Moreover, under the pandemic situation, a shortage of RT-PCR testing resources may also delay clinical decisions and treatment. Under such circumstances, chest CT imaging has become a valuable tool for both diagnosis and prognosis of COVID-19 patients. In this study, we propose a weakly supervised deep learning strategy for detecting and classifying COVID-19 infection from CT images. The proposed method can minimise the requirement for manual labelling of CT images while still obtaining accurate infection detection and distinguishing COVID-19 from non-COVID-19 cases. Based on the promising results obtained qualitatively and quantitatively, we can envisage a wide deployment of our developed technique in large-scale clinical studies.
Domain and Content Adaptive Convolution based Multi-Source Domain Generalization for Medical Image Segmentation
Hu, S.
Liao, Z.
Zhang, J.
Xia, Y.
IEEE Trans Med Imaging2022Journal Article, cited 0 times
ISBI-MR-Prostate-2013
MIDRC-RICORD-1a
Radiomics
Algorithm Development
Transfer learning
COVID-19
PROSTATE
LUNG
The domain gap, caused mainly by variable medical image quality, is a major obstacle on the path from training a segmentation model in the lab to applying the trained model to unseen clinical data. To address this issue, domain generalization methods have been proposed; however, they usually use static convolutions and are less flexible. In this paper, we propose a multi-source domain generalization model based on domain and content adaptive convolution (DCAC) for the segmentation of medical images across different modalities. Specifically, we design the domain adaptive convolution (DAC) module and content adaptive convolution (CAC) module and incorporate both into an encoder-decoder backbone. In the DAC module, a dynamic convolutional head is conditioned on the predicted domain code of the input to make our model adapt to the unseen target domain. In the CAC module, a dynamic convolutional head is conditioned on the global image features to make our model adapt to the test image. We evaluated the DCAC model against the baseline and four state-of-the-art domain generalization methods on the prostate segmentation, COVID-19 lesion segmentation, and optic cup/optic disc segmentation tasks. Our results not only indicate that the proposed DCAC model outperforms all competing methods on each segmentation task but also demonstrate the effectiveness of the DAC and CAC modules. Code is available at https://git.io/DCAC.
Hierarchical Multi-class Segmentation of Glioma Images Using Networks with Multi-level Activation Function
Hu, Xiaobin
Li, Hongwei
Zhao, Yu
Dong, Chao
Menze, Bjoern H.
Piraud, Marie
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Machine Learning
For many segmentation tasks, especially in biomedical imaging, the topological prior is vital information that is useful to exploit. Containment/nesting is a typical inter-class geometric relationship. In the MICCAI brain tumor segmentation challenge, with its three hierarchically nested classes 'whole tumor', 'tumor core', and 'active tumor', we introduce this nested-class relationship into the 3D-residual-Unet architecture. The network comprises a context aggregation pathway and a localization pathway; it encodes increasingly abstract representations of the input as it goes deeper into the network, and then recombines these representations with shallower features to precisely localize the domain of interest via the localization path. The nested-class prior is incorporated by proposing the multi-class activation function and its corresponding loss function. The model is trained on the BraTS 2018 training dataset, with 20% of the data held out as a validation set to determine parameters. Once the parameters are fixed, we retrain the model on the whole training dataset. The performance achieved on the validation leaderboard is 86%, 77% and 72% Dice scores for the whole tumor, enhancing tumor and tumor core classes, without relying on ensembles or complicated post-processing steps. With the same state-of-the-art network architecture, the accuracy on the nested class (enhancing tumor) is improved from 69% to 72% compared with the traditional Softmax-based method, which is blind to the topological prior.
MLLCD: A Meta Learning-based Method for Lung Cancer Diagnosis Using Histopathology Images
Lung cancer is a leading cause of death. An accurate early lung cancer diagnosis can improve a patient's survival chances. Histopathological images are essential for cancer diagnosis. With the development of deep learning in the past decade, many scholars have used deep learning to learn the features of histopathological images and achieve lung cancer classification. However, deep learning requires a large quantity of annotated data to train the model to achieve a good classification effect, and collecting many annotated pathological images is time-consuming and expensive. Faced with the scarcity of pathological data, we present a meta-learning method for lung cancer diagnosis (called MLLCD). In detail, the MLLCD works in three steps. First, we preprocess all data using the bilinear interpolation method and then design the base learner, which unites a convolutional neural network (CNN) and a transformer to distill local and global features of pathology images with different resolutions. Finally, we train and update the base learner with a model-agnostic meta-learning (MAML) algorithm. Clinical Proteomic Tumor Analysis Consortium (CPTAC) cancer patient data demonstrate that our proposed model achieves a receiver operating characteristic (ROC) value of 0.94 for lung cancer diagnosis.
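The MAML training loop referenced above can be sketched as follows. This is a simplified first-order variant with illustrative names, assuming a standard PyTorch model and optimizer; it is not the MLLCD implementation.

```python
import copy
import torch

def maml_step(model, loss_fn, tasks, inner_lr=1e-2, meta_opt=None):
    """First-order MAML sketch. Each task is a (support, query) pair of
    (inputs, labels) tensors; meta_opt is, e.g., Adam over model.parameters()."""
    meta_opt.zero_grad()
    for (xs, ys), (xq, yq) in tasks:
        learner = copy.deepcopy(model)                 # task-specific copy
        # Inner loop: one adaptation step on the support set.
        loss = loss_fn(learner(xs), ys)
        grads = torch.autograd.grad(loss, learner.parameters())
        with torch.no_grad():
            for p, g in zip(learner.parameters(), grads):
                p -= inner_lr * g
        # Outer loop: evaluate on the query set; first-order approximation
        # copies the query gradients back onto the original parameters.
        q_loss = loss_fn(learner(xq), yq)
        q_grads = torch.autograd.grad(q_loss, learner.parameters())
        for p, g in zip(model.parameters(), q_grads):
            p.grad = g if p.grad is None else p.grad + g
    meta_opt.step()
```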
Brain Tumor Segmentation on Multimodal MR Imaging Using Multi-level Upsampling in Decoder
Hu, Yan
Liu, Xiang
Wen, Xin
Niu, Chen
Xia, Yong
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Accurate brain tumor segmentation plays a pivotal role in clinical practice and research settings. In this paper, we propose the multi-level up-sampling network (MU-Net) to learn the image representations of the transverse, sagittal and coronal views and fuse them to automatically segment brain tumors, including necrosis, edema, non-enhancing, and enhancing tumor, in multimodal magnetic resonance (MR) sequences. The MU-Net model has an encoder-decoder structure, in which low-level feature maps obtained by the encoder and high-level feature maps obtained by the decoder are combined using a newly designed global attention (GA) module. The proposed model has been evaluated on the BraTS 2018 Challenge and achieved Dice similarity coefficients of 0.88, 0.74 and 0.69 on the validation dataset and 0.85, 0.72 and 0.66 on the testing dataset for the whole tumor, core tumor and enhancing tumor, respectively. Our results indicate that the proposed model has promising performance in automated brain tumor segmentation.
A neural network approach to lung nodule segmentation
Computed tomography (CT) imaging is a sensitive and specific lung cancer screening tool for the high-risk population and has shown promise for the detection of lung cancer. This study proposes an automatic methodology for detecting and segmenting lung nodules from CT images. The proposed method begins with thorax segmentation, lung extraction and reconstruction of the original shape of the parenchyma using morphology operations. Next, a multi-scale Hessian-based vesselness filter is applied to extract the lung vasculature. The vasculature mask is subtracted from the lung region segmentation mask to extract 3D regions representing candidate pulmonary nodules. Finally, the remaining structures are classified as nodules through shape and intensity features, which are together used to train an artificial neural network. Up to 75% sensitivity and 98% specificity were achieved for the detection of lung nodules in our testing dataset, with an overall accuracy of 97.62%±0.72% using 11 selected features as input to the neural network classifier, based on 4-fold cross-validation studies. Receiver operating characteristic analysis for identifying nodules revealed an area under the curve of 0.9476.
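The vasculature-removal step can be approximated with off-the-shelf tools. A sketch using scikit-image's Frangi vesselness filter follows; the scale range and threshold are illustrative assumptions, and the paper's exact filter and parameters may differ.

```python
import numpy as np
from skimage.filters import frangi

def vessel_mask(lung_ct, sigmas=range(1, 6), thresh=0.05):
    """Multi-scale Hessian-based vesselness, thresholded to a binary mask.
    black_ridges=False because vessels are bright against lung parenchyma."""
    v = frangi(lung_ct.astype(float), sigmas=sigmas, black_ridges=False)
    return v > thresh

# Nodule candidates are then the lung region minus the vessels, e.g.:
# candidates = lung_mask & ~vessel_mask(ct_volume)
```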
3D Deep Neural Network-Based Brain Tumor Segmentation Using Multimodality Magnetic Resonance Sequences
Brain tumor segmentation plays a pivotal role in clinical practice and research settings. In this paper, we propose a 3D deep neural network-based algorithm for joint brain tumor detection and intra-tumor structure segmentation, including necrosis, edema, non-enhancing and enhancing tumor, using multimodal magnetic resonance imaging sequences. An ensemble of cascaded U-Nets is designed to detect the tumor and a deep convolutional neural network is constructed for patch-based intra-tumor structure segmentation. This algorithm has been evaluated on the BraTS 2017 Challenge dataset and achieved Dice similarity coefficients of 0.81, 0.69 and 0.55 in the segmentation of whole tumor, core tumor and enhancing tumor, respectively. Our results suggest that the proposed algorithm has promising performance in automated brain tumor segmentation.
Evaluation of the practical application of the category-imbalanced myeloid cell classification model
Hu, Zhigang
Ge, Aoru
Wang, Xinzheng
Ou, Cuisi
Wang, Shen
Wang, Junwen
PLoS One2025Journal Article, cited 0 times
Website
MIL normalization -- prerequisites for accurate MRI radiomics analysis
Hu, Z.
Zhuang, Q.
Xiao, Y.
Wu, G.
Shi, Z.
Chen, L.
Wang, Y.
Yu, J.
Comput Biol Med2021Journal Article, cited 0 times
Website
BraTS-TCGA-LGG
Deep Learning
Radiomics
Magnetic Resonance Imaging (MRI)
The quality of magnetic resonance (MR) images obtained with different instruments and imaging parameters varies greatly. A large number of heterogeneous images are collected, and they suffer from acquisition variation. Such imaging quality differences have a great impact on radiomics analysis. The main differences in MR images include modality mismatch (M), intensity distribution variance (I), and layer-spacing differences (L), which are referred to as MIL differences in this paper for convenience. An MIL normalization system is proposed to reconstruct uneven MR images into high-quality data with complete modality, a uniform intensity distribution and consistent layer spacing. Three radiomics tasks, including tumor segmentation, pathological grading and genetic diagnosis of glioma, were used to verify the effect of MIL normalization on radiomics analysis. Three retrospective glioma datasets were analyzed in this study: BraTS (285 cases), TCGA (112 cases) and HuaShan (403 cases). MIL normalization included three components: multimodal synthesis based on an encoder-decoder network, intensity normalization based on CycleGAN, and layer-spacing unification based on Statistical Parametric Mapping (SPM). The Dice similarity coefficient, areas under the curve (AUC) and six other indicators were calculated and compared after different normalization steps. The MIL normalization system improved the Dice coefficient of segmentation by 9% (P < .001), the AUC of pathological grading by 32% (P < .001), and IDH1 status prediction by 25% (P < .001) compared to non-normalization. The proposed MIL normalization system provides high-quality standardized data, which is a prerequisite for accurate radiomics analysis.
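As a rough illustration of the layer-spacing (L) component, the following SimpleITK snippet resamples a volume to uniform voxel spacing. The paper itself performs this step with SPM; the target spacing here is an assumption.

```python
import SimpleITK as sitk

def unify_spacing(img, new_spacing=(1.0, 1.0, 1.0)):
    """Resample an MR volume to uniform voxel spacing with linear
    interpolation, preserving origin and direction."""
    old_spacing = img.GetSpacing()
    old_size = img.GetSize()
    new_size = [int(round(osz * osp / nsp))
                for osz, osp, nsp in zip(old_size, old_spacing, new_spacing)]
    return sitk.Resample(img, new_size, sitk.Transform(),
                         sitk.sitkLinear, img.GetOrigin(),
                         new_spacing, img.GetDirection(), 0.0,
                         img.GetPixelID())
```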
Segmenting Brain Tumor Using Cascaded V-Nets in Multimodal MR Images
Hua, Rui
Huo, Quan
Gao, Yaozong
Sui, He
Zhang, Bing
Sun, Yu
Mo, Zhanhao
Shi, Feng
Frontiers in Computational Neuroscience2020Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
In this work, we propose a novel cascaded V-Nets method to segment brain tumor substructures in multimodal brain magnetic resonance imaging. Although V-Net has been successfully used in many segmentation tasks, we demonstrate that its performance can be further enhanced by using a cascaded structure and an ensemble strategy. Briefly, our baseline V-Net consists of four levels with encoding and decoding paths and intra- and inter-path skip connections. Focal loss is chosen to improve performance on hard samples as well as to balance the positive and negative samples. We further propose three preprocessing pipelines for multimodal magnetic resonance images to train different models. By ensembling the segmentation probability maps obtained from these models, the segmentation result is further improved. In addition, we propose to segment the whole tumor first, and then divide it into tumor necrosis, edema, and enhancing tumor. Experimental results on the BraTS 2018 online validation set achieve average Dice scores of 0.9048, 0.8364, and 0.7748 for whole tumor, tumor core and enhancing tumor, respectively. The corresponding values for the BraTS 2018 online testing set are 0.8761, 0.7953, and 0.7364, respectively. We also evaluate the proposed method on two additional datasets from local hospitals comprising 28 subjects each, and the best results are 0.8635, 0.8036, and 0.7217, respectively. We further predict patient overall survival by ensembling multiple classifiers for the long, mid and short survival groups, achieving an accuracy of 0.519, a mean squared error of 367,240 and a Spearman correlation coefficient of 0.168 on the BraTS 2018 online testing set.
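The focal loss mentioned above has a standard binary form, sketched below with illustrative hyperparameters; the authors' exact settings are not given in the abstract.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Binary focal loss: down-weights easy examples via (1 - p_t)^gamma
    and balances positives/negatives via alpha."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                      # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()
```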
Multi kernel cross sparse graph attention convolutional neural network for brain magnetic resonance imaging super-resolution
Hua, Xin
Du, Zhijiang
Ma, Jixin
Yu, Hongjian
Biomedical Signal Processing and Control2024Journal Article, cited 0 times
Website
BraTS 2019
GammaKnife-Hippocampal
Brain MRI
Sparse Graph Attention
Deep Learning
Medical image super-resolution
High-resolution magnetic resonance imaging (MRI) is pivotal in both diagnosing and treating brain tumors, assisting physicians by displaying anatomical structures. Convolutional neural network-based super-resolution methods enable the efficient acquisition of high-resolution MRI images. However, convolutional neural networks are limited by their kernel size, which restricts their ability to capture a wider field of view, potentially leading to feature omission and difficulties in establishing global and local feature relationships. To overcome these shortcomings, we have designed a novel network architecture that highlights three main modules: (i) a Multiple Convolutional Feature (MCF) extraction module, which diversifies convolution operations for extracting image features, achieving comprehensive feature representation; (ii) Multiple Groups of Cross-Iterative Feature (MGCIF) modules, promoting inter-channel feature interactions and emphasizing crucial features needed for subsequent learning; and (iii) a graph neural network module based on a sparse attention mechanism, capable of connecting distant pixel features and identifying influential neighboring pixels for target pixel inpainting. To evaluate the accuracy of our proposed network, we conducted tests on four datasets, comprising two sets of brain tumor data and two sets of healthy head MRI data, all of which underwent varying degrees of degradation. We conducted experiments using nineteen super-resolution (SR) models. The results demonstrate that our method outperforms current leading-edge methods. Across the four datasets, our model improved peak signal-to-noise ratio (PSNR) scores relative to the second-place model by 1.16%, 1.08%, 0.19% and 0.53% for ×2, and by 2.26%, 1.67%, 0.13% and 0.45% for ×4, respectively.
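For reference, the PSNR metric used for these comparisons is 10·log10(R²/MSE), where R is the data range; a minimal implementation:

```python
import numpy as np

def psnr(ref, test, data_range=None):
    """Peak signal-to-noise ratio between a reference and a test image."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    if data_range is None:
        data_range = ref.max() - ref.min()   # infer range from the reference
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)
```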
Development and validation of radiomic signatures of head and neck squamous cell carcinoma molecular features and subtypes
Huang, Chao
Cintra, Murilo
Brennan, Kevin
Zhou, Mu
Colevas, A Dimitrios
Fischbein, Nancy
Zhu, Shankuan
Gevaert, Olivier
EBioMedicine2019Journal Article, cited 1 times
Website
TCGA-HNSC
Radiomics
Radiogenomics
Transcriptomics
BACKGROUND: Radiomics-based non-invasive biomarkers are promising to facilitate the translation of therapeutically related molecular subtypes for treatment allocation of patients with head and neck squamous cell carcinoma (HNSCC). METHODS: We included 113 HNSCC patients from The Cancer Genome Atlas (TCGA-HNSCC) project. Molecular phenotypes analyzed were RNA-defined HPV status, five DNA methylation subtypes, four gene expression subtypes and five somatic gene mutations. A total of 540 quantitative image features were extracted from pre-treatment CT scans. Features were selected and used in a regularized logistic regression model to build binary classifiers for each molecular subtype. Models were evaluated using the average area under the receiver operating characteristic curve (AUC) of a stratified 10-fold cross-validation procedure repeated 10 times. Next, an HPV model was trained on TCGA-HNSCC and tested on a Stanford cohort (N=53). FINDINGS: Our results show that quantitative image features are capable of distinguishing several molecular phenotypes. We obtained significant predictive performance for RNA-defined HPV+ (AUC=0.73), the DNA methylation subtypes MethylMix HPV+ (AUC=0.79), non-CIMP-atypical (AUC=0.77) and Stem-like-Smoking (AUC=0.71), and mutation of NSD1 (AUC=0.73). We externally validated the HPV prediction model (AUC=0.76) on the Stanford cohort. When compared to clinical models, radiomic models were superior for subtypes such as NOTCH1 mutation and the DNA methylation subtype non-CIMP-atypical, but inferior for the DNA methylation subtype CIMP-atypical and NSD1 mutation. INTERPRETATION: Our study demonstrates that radiomics can potentially serve as a non-invasive tool to identify treatment-relevant subtypes of HNSCC, opening up the possibility for patient stratification, treatment allocation and inclusion in clinical trials. FUND: Dr. Gevaert reports grants from the National Institute of Dental & Craniofacial Research (NIDCR) U01 DE025188, grants from the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health (NIBIB), R01 EB020527, and grants from the National Cancer Institute (NCI), U01 CA217851, during the conduct of the study; Dr. Huang and Dr. Zhu report grants from the China Scholarship Council (Grant NO:201606320087), grants from the China Medical Board Collaborating Program (Grant NO:15-216), the Cyrus Tang Foundation, and the Zhejiang University Education Foundation during the conduct of the study; Dr. Cintra reports grants from the Sao Paulo State Foundation for Teaching and Research (FAPESP), during the conduct of the study.
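The evaluation protocol described (a regularized logistic classifier scored by mean AUC over a 10-times-repeated, stratified 10-fold cross-validation) maps directly onto scikit-learn. A sketch with assumed inputs: X is the (n_patients, 540) radiomic feature matrix and y the binary subtype label; the penalty strength is illustrative.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Standardize features, then fit an L2-regularized logistic classifier.
model = make_pipeline(StandardScaler(),
                      LogisticRegression(penalty="l2", C=1.0, max_iter=5000))
# Stratified 10-fold CV, repeated 10 times, scored by ROC AUC.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
mean_auc = cross_val_score(model, X, y, scoring="roc_auc", cv=cv).mean()
```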
Image Super-Resolution Algorithm Based on an Improved Sparse Autoencoder
Huang, Detian
Huang, Weiqin
Yuan, Zhenguo
Lin, Yanming
Zhang, Jian
Zheng, Lixin
Information2018Journal Article, cited 0 times
Website
Lung Cancer
Algorithm Development
Image resampling
Due to the limitations of the resolution of the imaging system and the influence of scene changes and other factors, sometimes only low-resolution images can be acquired, which cannot satisfy the practical application’s requirements. To improve the quality of low-resolution images, a novel super-resolution algorithm based on an improved sparse autoencoder is proposed. Firstly, in the training set preprocessing stage, the high- and low-resolution image training sets are constructed, respectively, by using high-frequency information of the training samples as the characterization, and then the zero-phase component analysis whitening technique is utilized to decorrelate the formed joint training set to reduce its redundancy. Secondly, a constructed sparse regularization term is added to the cost function of the traditional sparse autoencoder to further strengthen the sparseness constraint on the hidden layer. Finally, in the dictionary learning stage, the improved sparse autoencoder is adopted to achieve unsupervised dictionary learning to improve the accuracy and stability of the dictionary. Experimental results validate that the proposed algorithm outperforms the existing algorithms both in terms of the subjective visual perception and the objective evaluation indices, including the peak signal-to-noise ratio and the structural similarity measure.
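The ZCA (zero-phase component analysis) whitening step used to decorrelate the joint training set has a standard closed form; a minimal NumPy sketch:

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA whitening: decorrelate features while staying close to the
    original data space. X: (n_samples, n_features)."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / Xc.shape[0]
    U, S, _ = np.linalg.svd(cov)
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T   # ZCA transform matrix
    return Xc @ W
```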
Assigning readers to cases in imaging studies using balanced incomplete block designs
Huang, Erich P
Shih, Joanna H
Stat Methods Med Res2021Journal Article, cited 0 times
Website
TCGA-OV-Radiogenomics
Imaging studies
balanced incomplete block designs
kappa statistics
negative predictive value
positive predictive value
reader studies
sensitivity
specificity
In many imaging studies, each case is reviewed by human readers and characterized according to one or more features. Often, the inter-reader agreement of the feature indications is of interest in addition to their diagnostic accuracy or association with clinical outcomes. Complete designs in which all participating readers review all cases maximize efficiency and guarantee estimability of agreement metrics for all pairs of readers but often involve a heavy reading burden. Assigning readers to cases using balanced incomplete block designs substantially reduces reading burden by having each reader review only a subset of cases, while still maintaining estimability of inter-reader agreement for all pairs of readers. Methodology for data analysis and for power and sample size calculations under balanced incomplete block designs is presented and applied to simulation studies and an actual example. Simulation results suggest that such designs may reduce reading burdens by >40% while in most scenarios incurring a <20% increase in the standard errors and a <8% and <20% reduction in power to detect between-modality differences in diagnostic accuracy and kappa statistics, respectively.
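A balanced incomplete block design can be verified programmatically: every reader reads the same number of cases (r), and every pair of readers co-reads the same number of cases (λ). A sketch using the classic Fano-plane design (7 readers, 7 cases, 3 readers per case) as a worked example; this is illustrative and not from the paper.

```python
from itertools import combinations
from collections import Counter

def check_bibd(blocks, v):
    """Verify a reader-to-case assignment is a BIBD. `blocks` lists, for each
    case, the set of readers assigned to it; v is the number of readers."""
    r = Counter(reader for blk in blocks for reader in blk)
    pairs = Counter(p for blk in blocks for p in combinations(sorted(blk), 2))
    assert len(set(r.values())) == 1, "each reader must read the same number of cases"
    assert len(pairs) == v * (v - 1) // 2 and len(set(pairs.values())) == 1, \
        "each reader pair must co-read the same number of cases"
    return r.most_common(1)[0][1], next(iter(pairs.values()))  # (r, lambda)

# Fano-plane design: r = 3 cases per reader, lambda = 1 shared case per pair.
fano = [{0,1,2}, {0,3,4}, {0,5,6}, {1,3,5}, {1,4,6}, {2,3,6}, {2,4,5}]
print(check_bibd(fano, 7))   # -> (3, 1)
```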
Open-source algorithm and software for computed tomography-based virtual pancreatoscopy and other applications
Huang, H.
Yu, X.
Tian, M.
He, W.
Li, S. X.
Liang, Z.
Gao, Y.
Vis Comput Ind Biomed Art2022Journal Article, cited 0 times
Website
Pancreas-CT
3D Slicer
Pancreatic cancer
Pancreatic duct segmentation
Virtual pancreatoscopy
Pancreatoscopy plays a significant role in the diagnosis and treatment of pancreatic diseases. However, the risk of pancreatoscopy is remarkably greater than that of other endoscopic procedures, such as gastroscopy and bronchoscopy, owing to its severe invasiveness. In comparison, virtual pancreatoscopy (VP) has shown notable advantages. However, because of the low resolution of current computed tomography (CT) technology and the small diameter of the pancreatic duct, VP has limited clinical use. In this study, an optimal path algorithm and super-resolution technique are investigated for the development of an open-source software platform for VP based on 3D Slicer. The proposed segmentation of the pancreatic duct from the abdominal CT images reached an average Dice coefficient of 0.85 with a standard deviation of 0.04. Owing to the excellent segmentation performance, a fly-through visualization of both the inside and outside of the duct was successfully reconstructed, thereby demonstrating the feasibility of VP. In addition, a quantitative analysis of the wall thickness and topology of the duct provides more insight into pancreatic diseases than a fly-through visualization. The entire VP system developed in this study is available at https://github.com/gaoyi/VirtualEndoscopy.git .
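The Dice similarity coefficient reported above is straightforward to compute from binary masks; for reference:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-8)
```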
CDDnet: Cross-domain denoising network for low-dose CT image via local and global information alignment
Huang, Jiaxin
Chen, Kecheng
Ren, Yazhou
Sun, Jiayu
Wang, Yanmei
Tao, Tao
Pu, Xiaorong
Computers in Biology and Medicine2023Journal Article, cited 0 times
Website
LDCT-and-Projection-data
LIDC-IDRI
Image denoising
Algorithm Development
Deep Learning
Computed Tomography (CT)
LUNG
Low-dose CT
encoder-decoder
The domain shift problem has emerged as a challenge in cross-domain low-dose CT (LDCT) image denoising task, where the acquisition of a sufficient number of medical images from multiple sources may be constrained by privacy concerns. In this study, we propose a novel cross-domain denoising network (CDDnet) that incorporates both local and global information of CT images. To address the local component, a local information alignment module has been proposed to regularize the similarity between extracted target and source features from selected patches. To align the general information of the semantic structure from a global perspective, an autoencoder is adopted to learn the latent correlation between the source label and the estimated target label generated by the pre-trained denoiser. Experimental results demonstrate that our proposed CDDnet effectively alleviates the domain shift problem, outperforming other deep learning-based and domain adaptation-based methods under cross-domain scenarios.
Local-Whole-Focus: Identifying Breast Masses and Calcified Clusters on Full-Size Mammograms
Huang, Jun
Xiao, He
Wang, Qingfeng
Liu, Zhiqin
Chen, Bo
Wang, Yaobin
Zhang, Ping
Zhou, Ying
2022Conference Paper, cited 0 times
CBIS-DDSM
Algorithm Development
Transfer learning
Deep Learning
Automatic detection
Computer Aided Detection (CADe)
The detection of breast masses and calcified clusters on mammograms is critical for early diagnosis and treatment to improve the survival of breast cancer patients. In this study, we propose a local-whole-focus pipeline to automatically identify breast masses and calcified clusters on full-size mammograms, moving from local breast tissues to the whole mammograms, and then focusing on the lesion areas. We first train a deep model to learn the fine features of breast masses and calcified clusters on local breast tissues, and then transfer the well-trained deep model to identify breast masses and calcified clusters on full-size mammograms with image-level annotations. We also highlight the areas of the breast masses and calcified clusters in mammograms to visualize the identification results. We evaluated the proposed local-whole-focus pipeline on a public dataset, CBIS-DDSM (Curated Breast Imaging Subset of Digital Database for Screening Mammography), and a private dataset, MY-Mammo (Mianyang central hospital mammograms). The experimental results showed that a DenseNet embedded with squeeze-and-excitation (SE) blocks achieved competitive results on the identification of breast masses and calcified clusters on full-size mammograms. The highlighted areas of the breast masses and calcified clusters on the entire mammograms can also explain model decision making, which is important in practical medical applications.
Assessment of a radiomic signature developed in a general NSCLC cohort for predicting overall survival of ALK-positive patients with different treatment types
Huang, Lyu
Chen, Jiayan
Hu, Weigang
Xu, Xinyan
Liu, Di
Wen, Junmiao
Lu, Jiayu
Cao, Jianzhao
Zhang, Junhua
Gu, Yu
Wang, Jiazhou
Fan, Min
Clinical Lung Cancer2019Journal Article, cited 0 times
Website
NSCLC-Radiomics
RIDER Lung CT
Radiomics
Non Small Cell Lung Cancer (NSCLC)
Objectives: To investigate the potential of a radiomic signature developed in a general NSCLC cohort for predicting the overall survival of ALK-positive patients with different treatment types. Methods: After test-retest in the RIDER dataset, 132 features (ICC > 0.9) were selected in a LASSO Cox regression model with leave-one-out cross-validation. The NSCLC Radiomics collection from TCIA was randomly divided into a training set (N = 254) and a validation set (N = 63) to develop a general radiomic signature for NSCLC. In our ALK+ set, 35 patients received targeted therapy and 19 patients received non-targeted therapy. The developed signature was then tested in this ALK+ set. Performance of the signature was evaluated with the C-index and stratification analysis. Results: The general signature performed well (C-index > 0.6, log-rank p-value < 0.05) in the NSCLC Radiomics collection. It includes five features: Geom_va_ratio, W_GLCM_LH_Std, W_GLCM_LH_DV, W_GLCM_HH_IM2 and W_his_HL_mean. Its accuracy in predicting overall survival in the ALK+ set was 0.649 (95% CI = 0.640-0.658). Nonetheless, impaired performance was observed in the targeted therapy group (C-index = 0.573, 95% CI = 0.556-0.589), while significantly improved performance was observed in the non-targeted therapy group (C-index = 0.832, 95% CI = 0.832-0.852). Stratification analysis also showed that the general signature could only identify high- and low-risk patients in the non-targeted therapy group (log-rank p-value = 0.00028). Conclusions: This preliminary study suggests that the applicability of a general signature to ALK-positive patients is limited. The general radiomic signature seems to be applicable only to ALK-positive patients who received non-targeted therapy, which indicates that developing dedicated radiomic signatures for patients treated with TKIs might be necessary.
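The signature-building step (an L1-penalized Cox model evaluated by C-index) can be sketched with the lifelines library. The DataFrame layout, column names `time` and `event`, and penalty strength are assumptions; the paper's own implementation may differ.

```python
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

# `df` is assumed to hold the 132 reproducible radiomic features plus
# `time` (survival time) and `event` (1 = death observed) columns.
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)   # pure LASSO penalty
cph.fit(df, duration_col="time", event_col="event")

# C-index: higher scores should indicate longer survival, so negate the hazard.
cindex = concordance_index(df["time"],
                           -cph.predict_partial_hazard(df),
                           df["event"])
```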
The Study on Data Hiding in Medical Images
Huang, Li-Chin
Tseng, Lin-Yu
Hwang, Min-Shiang
International Journal of Network Security2012Journal Article, cited 25 times
Website
Algorithm Development
Image analysis
Reversible data hiding plays an important role in medical image systems. Many hospitals have already applied electronic medical information in healthcare systems, and reversible data hiding is one of the feasible methodologies to protect individual privacy and confidential information. With its application in several high-quality medical devices, the detection and treatment of diseases have improved at an early stage. Demand has been rising for recognizing complicated anatomical structures in high-quality images. However, most data hiding methods are still applied to 8-bit-depth medical images with 255 intensity levels. This paper summarizes the existing reversible data hiding algorithms and introduces basic knowledge of medical images.
A reversible data hiding method by histogram shifting in high quality medical images
Huang, Li-Chin
Tseng, Lin-Yu
Hwang, Min-Shiang
Journal of Systems and Software2013Journal Article, cited 60 times
Website
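Since the entry above names the technique but carries no abstract here, a sketch of classic histogram-shifting embedding (in the style of Ni et al., not necessarily the authors' exact method) may clarify the idea: shift the histogram to vacate the bin next to its peak, then encode bits into the peak pixels. Assumes 8-bit images with a near-empty bin to the right of the peak; if that bin is not truly empty, a location map is needed for perfect reversibility.

```python
import numpy as np

def hs_embed(img, bits):
    """Histogram-shifting embedding sketch. Returns the marked image plus
    the (peak, zero) pair needed by the extractor."""
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(hist.argmax())                          # most frequent level
    zero = int(hist[peak + 1:].argmin()) + peak + 1    # rarest level right of peak
    out = img.copy().astype(np.int32)
    out[(out > peak) & (out < zero)] += 1              # shift to vacate peak+1
    flat = out.ravel()
    idx = np.flatnonzero(flat == peak)[:len(bits)]     # embeddable pixels
    flat[idx] += np.asarray(bits[:len(idx)], dtype=flat.dtype)  # 1 -> peak+1
    return flat.reshape(img.shape).astype(np.uint8), peak, zero
```

Extraction reverses the process: peak+1 pixels decode as 1, peak pixels as 0, then all shifted pixels move back left by one level.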
A Semiautomated Deep Learning Approach for Pancreas Segmentation
Huang, M.
Huang, C.
Yuan, J.
Kong, D.
J Healthc Eng2021Journal Article, cited 1 times
Website
Pancreas-CT
Algorithms
Deep Learning
Tomography
X-Ray Computed
Accurate pancreas segmentation from 3D CT volumes is important for the treatment of pancreatic diseases. It is challenging to accurately delineate the pancreas due to poor intensity contrast and intrinsically large variations in volume, shape, and location. In this paper, we propose a semiautomated deformable U-Net, i.e., DUNet, for pancreas segmentation. The key innovation of our proposed method is a deformable convolution module, which adaptively adds learned offsets to each sampling position of the 2D convolutional kernel to enhance feature representation. Combining the deformable convolution module with U-Net enables our DUNet to flexibly capture pancreatic features and improves the geometric modeling capability of U-Net. Moreover, a nonlinear Dice-based loss function is designed to tackle the class-imbalance problem in pancreas segmentation. Experimental results show that our proposed method outperforms all comparison methods on the same NIH dataset.
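The deformable-convolution idea in DUNet (learned per-position sampling offsets) can be illustrated with torchvision's DeformConv2d; the layer sizes below are illustrative, not the paper's configuration.

```python
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformBlock(nn.Module):
    """A small conv predicts 2*k*k sampling offsets per position, which a
    DeformConv2d consumes to sample its kernel at learned locations."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.offset = nn.Conv2d(in_ch, 2 * k * k, k, padding=k // 2)
        self.deform = DeformConv2d(in_ch, out_ch, k, padding=k // 2)

    def forward(self, x):
        return self.deform(x, self.offset(x))
```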
An ensemble-acute lymphoblastic leukemia model for acute lymphoblastic leukemia image classification
The timely diagnosis of acute lymphoblastic leukemia (ALL) is of paramount importance for enhancing the treatment efficacy and the survival rates of patients. In this study, we seek to introduce an ensemble-ALL model for the image classification of ALL, with the goal of enhancing early diagnostic capabilities and streamlining the diagnostic and treatment processes for medical practitioners. In this study, a publicly available dataset is partitioned into training, validation, and test sets. A diverse set of convolutional neural networks, including InceptionV3, EfficientNetB4, ResNet50, CONV_POOL-CNN, ALL-CNN, Network in Network, and AlexNet, are employed for training. The top-performing four individual models are meticulously chosen and integrated with the squeeze-and-excitation (SE) module. Furthermore, the two most effective SE-embedded models are harmoniously combined to create the proposed ensemble-ALL model. This model leverages the Bayesian optimization algorithm to enhance its performance. The proposed ensemble-ALL model attains remarkable accuracy, precision, recall, F1-score, and kappa scores, registering at 96.26, 96.26, 96.26, 96.25, and 91.36%, respectively. These results surpass the benchmarks set by state-of-the-art studies in the realm of ALL image classification. This model represents a valuable contribution to the field of medical image recognition, particularly in the diagnosis of acute lymphoblastic leukemia, and it offers the potential to enhance the efficiency and accuracy of medical professionals in the diagnostic and treatment processes.
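The squeeze-and-excitation (SE) module integrated into the top models above has a standard form; a reference PyTorch sketch (the reduction ratio is illustrative):

```python
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: globally pool each channel, pass through a
    bottleneck MLP, and rescale the channels by the resulting gates."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        s = x.mean(dim=(2, 3))                       # squeeze: global avg pool
        return x * self.fc(s)[:, :, None, None]     # excite: channel rescale
```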
The Impact of Arterial Input Function Determination Variations on Prostate Dynamic Contrast-Enhanced Magnetic Resonance Imaging Pharmacokinetic Modeling: A Multicenter Data Analysis Challenge
Huang, Wei
Chen, Yiyi
Fedorov, Andriy
Li, Xia
Jajamovich, Guido H
Malyarenko, Dariya I
Aryal, Madhava P
LaViolette, Peter S
Oborski, Matthew J
O'Sullivan, Finbarr
Tomography: a journal for imaging research2016Journal Article, cited 21 times
Website
QIN PROSTATE
Variations of dynamic contrast-enhanced magnetic resonance imaging in evaluation of breast cancer therapy response: a multicenter data analysis challenge
Huang, W.
Li, X.
Chen, Y.
Li, X.
Chang, M. C.
Oborski, M. J.
Malyarenko, D. I.
Muzi, M.
Jajamovich, G. H.
Fedorov, A.
Tudorica, A.
Gupta, S. N.
Laymon, C. M.
Marro, K. I.
Dyvorne, H. A.
Miller, J. V.
Barbodiak, D. P.
Chenevert, T. L.
Yankeelov, T. E.
Mountz, J. M.
Kinahan, P. E.
Kikinis, R.
Taouli, B.
Fennessy, F.
Kalpathy-Cramer, J.
Transl Oncol2014Journal Article, cited 60 times
Website
QIN Breast DCE-MRI
DCE-MRI
Pharmacokinetic analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) time-course data allows estimation of quantitative parameters such as Ktrans (rate constant for plasma/interstitium contrast agent transfer), ve (extravascular extracellular volume fraction), and vp (plasma volume fraction). A plethora of factors in DCE-MRI data acquisition and analysis can affect accuracy and precision of these parameters and, consequently, the utility of quantitative DCE-MRI for assessing therapy response. In this multicenter data analysis challenge, DCE-MRI data acquired at one center from 10 patients with breast cancer before and after the first cycle of neoadjuvant chemotherapy were shared and processed with 12 software tools based on the Tofts model (TM), extended TM, and Shutter-Speed model. Inputs of tumor region of interest definition, pre-contrast T1, and arterial input function were controlled to focus on the variations in parameter value and response prediction capability caused by differences in models and associated algorithms. Considerable parameter variations were observed, with the within-subject coefficient of variation (wCV) values for Ktrans and vp being as high as 0.59 and 0.82, respectively. Parameter agreement improved when only algorithms based on the same model were compared, e.g., the Ktrans intraclass correlation coefficient increased to as high as 0.84. Agreement in parameter percentage change was much better than that in absolute parameter value, e.g., the pairwise concordance correlation coefficient improved from 0.047 (for Ktrans) to 0.92 (for Ktrans percentage change) in comparing two TM algorithms. Nearly all algorithms provided good to excellent (univariate logistic regression c-statistic value ranging from 0.8 to 1.0) early prediction of therapy response using the metrics of mean tumor Ktrans and kep (= Ktrans/ve, intravasation rate constant) after the first therapy cycle and the corresponding percentage changes. The results suggest that the interalgorithm parameter variations are largely systematic, and are not likely to significantly affect the utility of DCE-MRI for assessment of therapy response.
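For reference, the standard Tofts model underlying these tools expresses the tissue concentration as Ct(t) = Ktrans ∫₀ᵗ Cp(τ) e^(−kep(t−τ)) dτ with kep = Ktrans/ve. A discrete-convolution sketch (assumes uniform time sampling starting at t = 0):

```python
import numpy as np

def tofts_ct(t, cp, ktrans, ve):
    """Tissue concentration under the standard Tofts model, approximated by
    discrete convolution of the arterial input function cp with an
    exponential kernel. t: uniform time grid starting at 0."""
    kep = ktrans / ve
    dt = t[1] - t[0]
    kernel = np.exp(-kep * t)
    return ktrans * np.convolve(cp, kernel)[:len(t)] * dt
```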
Fast and Fully-Automated Detection and Segmentation of Pulmonary Nodules in Thoracic CT Scans Using Deep Convolutional Neural Networks
Huang, X.
Sun, W.
Tseng, T. B.
Li, C.
Qian, W.
Computerized Medical Imaging and Graphics2019Journal Article, cited 0 times
Website
LIDC-IDRI
Computer Aided Detection (CADe)
Lung
Radiomics
Segmentation
Classification
Deep learning techniques have been extensively used in computerized pulmonary nodule analysis in recent years. Many reported studies still utilized hybrid methods for diagnosis, in which convolutional neural networks (CNNs) are used only as one part of the pipeline, and the whole system still needs either traditional image processing modules or human intervention to obtain final results. In this paper, we introduce a fast and fully-automated end-to-end system that can efficiently segment precise lung nodule contours from raw thoracic CT scans. Our proposed system has four major modules: candidate nodule detection with Faster regional-CNN (R-CNN), candidate merging, false positive (FP) reduction with a CNN, and nodule segmentation with a customized fully convolutional neural network (FCN). The entire system has no human interaction or database-specific design. The average runtime is about 16 s per scan on a standard workstation. The nodule detection accuracy is 91.4% and 94.6% with an average of 1 and 4 false positives (FPs) per scan, respectively. The average Dice coefficient of nodule segmentation compared to the ground truth is 0.793.
GammaNet: An intensity-invariance deep neural network for computer-aided brain tumor segmentation
Due to the wide variety in tumor location, appearance, size and intensity distribution, automatic and precise brain tumor segmentation is a challenging task. To address this issue, a computer-aided brain tumor segmentation system based on an adaptive gamma correction neural network (GammaNet) is proposed in this paper. Inspired by conventional gamma correction, an adaptive gamma correction (AGC) block is proposed to realize intensity invariance and force the network to focus on significant regions. In addition, to adaptively adjust the intensity distributions of local regions, the feature maps are divided into several proposal regions, and local image characteristics are emphasized. Furthermore, to enlarge the receptive field without information loss and improve segmentation performance, a dense atrous spatial pyramid pooling (Dense-ASPP) module is combined with AGC blocks to construct the GammaNet. The experimental results show that the Dice similarity coefficient (DSC), sensitivity and intersection over union (IoU) of GammaNet are 85.8%, 87.8% and 80.31%, respectively, and that the AGC blocks and the Dense-ASPP improve the DSC by 3.69% and 1.11%, respectively, indicating that GammaNet can achieve state-of-the-art performance.
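For reference, the conventional gamma correction that the AGC block adapts is the simple power law I' = I^γ applied to normalized intensities:

```python
import numpy as np

def gamma_correct(img, gamma):
    """Conventional gamma correction: normalize to [0, 1], apply the power
    law, then rescale back to the original intensity range."""
    lo, hi = img.min(), img.max()
    norm = (img - lo) / (hi - lo + 1e-8)
    return norm ** gamma * (hi - lo) + lo
```

The paper's contribution is to make γ adaptive and region-dependent, learned by the network rather than fixed.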
N6-methyladenosine-related lncRNAs in combination with computational histopathology and radiomics predict the prognosis of bladder cancer
Huang, Z.
Wang, G.
Wu, Y.
Yang, T.
Shao, L.
Yang, B.
Li, P.
Li, J.
Transl Oncol2022Journal Article, cited 0 times
Website
TIL-WSI-TCGA
BLADDER
Biomarker
Diagnosis
Prognosis
Radiomics
Urinary bladder neoplasms
Image color analysis
OBJECTIVES: Identification of m6A-related lncRNAs associated with BC diagnosis and prognosis. METHODS: From the TCGA database, we obtained transcriptome data and corresponding clinical information (including histopathological and CT imaging data) for 408 patients. Bioinformatics, computational histopathology, and radiomics were used to identify and analyze diagnostic and prognostic biomarkers of m6A-related lncRNAs in BC. RESULTS: Three significantly highly expressed m6A-related lncRNAs were associated with the prognosis of BC. The BC samples were divided into two subgroups based on the expression of the 3 lncRNAs. The overall survival of patients in cluster 2 was significantly lower than that in cluster 1. The immune landscape results showed that the expression of PD-L1, T cells follicular helper, NK cells resting, and mast cells activated in cluster 2 was significantly higher, while naive B cells, plasma cells, T cells regulatory (Tregs), and mast cells resting were significantly lower. Computational histopathology results showed a significantly higher percentage of tumor-infiltrating lymphocytes (TILs) in cluster 2. The radiomics results show that the 3 feature values of diagnostics image-original minimum, diagnostics image-original maximum, and original GLCM inverse variance are significantly higher in cluster 2. High expression of 2 bridge genes in the PPI network of 30 key immune genes predicts poorer disease-free survival, while immunohistochemistry showed that their expression levels were significantly higher in high-grade BC than in low-grade BC and normal tissue. CONCLUSION: Based on the results of immune landscape, computational histopathology, and radiomics, these 3 m6A-related lncRNAs may be diagnostic and prognostic biomarkers for BC.
Batch Similarity Based Triplet Loss Assembled into Light-Weighted Convolutional Neural Networks for Medical Image Classification
Huang, Z.
Zhou, Q.
Zhu, X.
Zhang, X.
Sensors (Basel)2021Journal Article, cited 0 times
Website
H&E-stained slides
Classification
Convolutional Neural Network (CNN)
In many medical image classification tasks, there is insufficient image data for deep convolutional neural networks (CNNs) to overcome the over-fitting problem. The light-weighted CNNs are easy to train but they usually have relatively poor classification performance. To improve the classification ability of light-weighted CNN models, we have proposed a novel batch similarity-based triplet loss to guide the CNNs to learn the weights. The proposed loss utilizes the similarity among multiple samples in the input batches to evaluate the distribution of training data. Reducing the proposed loss can increase the similarity among images of the same category and reduce the similarity among images of different categories. Besides this, it can be easily assembled into regular CNNs. To appreciate the performance of the proposed loss, some experiments have been done on chest X-ray images and skin rash images to compare it with several losses based on such popular light-weighted CNN models as EfficientNet, MobileNet, ShuffleNet and PeleeNet. The results demonstrate the applicability and effectiveness of our method in terms of classification accuracy, sensitivity and specificity.
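For context, the standard triplet loss that the batch-similarity variant builds on is sketched below; the paper's actual loss aggregates similarities across whole input batches rather than single triplets.

```python
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Standard triplet loss on embedding batches: pull the positive within
    `margin` of the anchor relative to the negative."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()
```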
Medical Image Classification Using a Light-Weighted Hybrid Neural Network Based on PCANet and DenseNet
Huang, Zhiwen
Zhu, Xingxing
Ding, Mingyue
Zhang, Xuming
IEEE Access2020Journal Article, cited 23 times
Website
Osteosarcoma-Tumor-Assessment
Classification
Deep Learning
Histopathology imaging features
CBIS-DDSM
Medical image classification plays an important role in disease diagnosis since it can provide important reference information for doctors. Supervised convolutional neural networks (CNNs) such as DenseNet provide versatile and effective methods for medical image classification tasks, but they require large amounts of labeled data and involve a complex and time-consuming training process. Unsupervised CNNs such as the principal component analysis network (PCANet) need no labels for training but cannot provide desirable classification accuracy. To realize accurate medical image classification in the case of a small training dataset, we have proposed a light-weighted hybrid neural network which consists of a modified PCANet cascaded with a simplified DenseNet. The modified PCANet has two stages, in which the network produces effective feature maps at each stage by convolving inputs with various learned kernels. The following simplified DenseNet with a small number of weights takes all feature maps produced by the PCANet as inputs and employs dense shortcut connections to realize accurate medical image classification. To appreciate the performance of the proposed method, some experiments have been done on mammography and osteosarcoma histology images. Experimental results show that the proposed hybrid neural network is easy to train and outperforms such popular CNN models as PCANet, ResNet and DenseNet in terms of classification accuracy, sensitivity and specificity.
Conditional generative adversarial network driven radiomic prediction of mutation status based on magnetic resonance imaging of breast cancer
Huang, Z. H.
Chen, L.
Sun, Y.
Liu, Q.
Hu, P.
J Transl Med2024Journal Article, cited 0 times
TCGA-BRCA
Radiomics
Female
Generative Adversarial Network (GAN)
*Breast Neoplasms/diagnostic imaging/genetics
Radiomics
DNA Copy Number Variations
Bayes Theorem
Magnetic Resonance Imaging/methods
Mutation/genetics
TP53
PIK3CA
CDH1
Breast cancer
Machine learning
Magnetic Resonance Imaging (MRI)
Synthetic data generation
Radiogenomics
cGANs
BACKGROUND: Breast Cancer (BC) is a highly heterogeneous and complex disease. Personalized treatment options require the integration of multi-omic data and consideration of phenotypic variability. Radiogenomics aims to merge medical images with genomic measurements but encounters challenges due to unpaired data consisting of imaging, genomic, or clinical outcome data. In this study, we propose the utilization of a well-trained conditional generative adversarial network (cGAN) to address the unpaired data issue in radiogenomic analysis of BC. The generated images are then used to predict the mutation status of key driver genes and BC subtypes. METHODS: We integrated the paired MRI and multi-omic (mRNA gene expression, DNA methylation, and copy number variation) profiles of 61 BC patients from The Cancer Imaging Archive (TCIA) and The Cancer Genome Atlas (TCGA). To facilitate this integration, we employed a Bayesian Tensor Factorization approach to factorize the multi-omic data into 17 latent features. Subsequently, a cGAN model was trained based on the matched side-view patient MRIs and their corresponding latent features to predict MRIs for BC patients who lack MRIs. Model performance was evaluated by calculating the distance between real and generated images using the Frechet Inception Distance (FID) metric. BC subtype and mutation status of driver genes were obtained from the cBioPortal platform, where 3 genes were selected based on the number of mutated patients. A convolutional neural network (CNN) was constructed and trained using the generated MRIs for mutation status prediction. Receiver operating characteristic area under curve (ROC-AUC) and precision-recall area under curve (PR-AUC) were used to evaluate the performance of the CNN models for mutation status prediction. Precision, recall and F1 score were used to evaluate the performance of the CNN model in subtype classification. RESULTS: The FID of the images from the well-trained cGAN model based on the test set is 1.31. The CNNs for TP53, PIK3CA, and CDH1 mutation prediction yielded ROC-AUC values of 0.9508, 0.7515, and 0.8136 and PR-AUC values of 0.9009, 0.7184, and 0.5007, respectively, for the three genes. Multi-class subtype prediction achieved precision, recall and F1 scores of 0.8444, 0.8435 and 0.8336, respectively. The source code and related data implementing the algorithms can be found in the project GitHub at https://github.com/mattthuang/BC_RadiogenomicGAN . CONCLUSION: Our study establishes cGAN as a viable tool for generating synthetic BC MRIs for mutation status prediction and subtype classification to better characterize the heterogeneity of BC in patients. The synthetic images also have the potential to significantly augment existing MRI data and circumvent issues surrounding data sharing and patient privacy for future BC machine learning studies.
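The FID metric used above to score the generated MRIs compares Gaussian fits of Inception features of real versus generated images: FID = ||μ₁ − μ₂||² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^(1/2)). A minimal sketch, assuming the feature means and covariances have already been computed:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(mu1, cov1, mu2, cov2):
    """Frechet Inception Distance between two Gaussian feature fits."""
    covmean = sqrtm(cov1 @ cov2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real          # discard tiny numerical imaginary parts
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(cov1 + cov2 - 2 * covmean))
```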
A longitudinal four-dimensional computed tomography and cone beam computed tomography dataset for image-guided radiation therapy research in lung cancer
Hugo, Geoffrey D
Weiss, Elisabeth
Sleeman, William C
Balik, Salim
Keall, Paul J
Lu, Jun
Williamson, Jeffrey F
Medical Physics2017Journal Article, cited 8 times
Website
4D-Lung
Computed Tomography (CT)
PURPOSE: To describe in detail a dataset consisting of serial four-dimensional computed tomography (4DCT) and 4D cone beam CT (4DCBCT) images acquired during chemoradiotherapy of 20 locally advanced, nonsmall cell lung cancer patients we have collected at our institution and shared publicly with the research community. ACQUISITION AND VALIDATION METHODS: As part of an NCI-sponsored research study 82 4DCT and 507 4DCBCT images were acquired in a population of 20 locally advanced nonsmall cell lung cancer patients undergoing radiation therapy. All subjects underwent concurrent radiochemotherapy to a total dose of 59.4-70.2 Gy using daily 1.8 or 2 Gy fractions. Audio-visual biofeedback was used to minimize breathing irregularity during all fractions, including acquisition of all 4DCT and 4DCBCT acquisitions in all subjects. Target, organs at risk, and implanted fiducial markers were delineated by a physician in the 4DCT images. Image coordinate system origins between 4DCT and 4DCBCT were manipulated in such a way that the images can be used to simulate initial patient setup in the treatment position. 4DCT images were acquired on a 16-slice helical CT simulator with 10 breathing phases and 3 mm slice thickness during simulation. In 13 of the 20 subjects, 4DCTs were also acquired on the same scanner weekly during therapy. Every day, 4DCBCT images were acquired on a commercial onboard CBCT scanner. An optically tracked external surrogate was synchronized with CBCT acquisition so that each CBCT projection was time stamped with the surrogate respiratory signal through in-house software and hardware tools. Approximately 2500 projections were acquired over a period of 8-10 minutes in half-fan mode with the half bow-tie filter. Using the external surrogate, the CBCT projections were sorted into 10 breathing phases and reconstructed with an in-house FDK reconstruction algorithm. Errors in respiration sorting, reconstruction, and acquisition were carefully identified and corrected. DATA FORMAT AND USAGE NOTES: 4DCT and 4DCBCT images are available in DICOM format and structures through DICOM-RT RTSTRUCT format. All data are stored in the Cancer Imaging Archive (TCIA, http://www.cancerimagingarchive.net/) as collection 4D-Lung and are publicly available. DISCUSSION: Due to high temporal frequency sampling, redundant (4DCT and 4DCBCT) data at similar timepoints, oversampled 4DCBCT, and fiducial markers, this dataset can support studies in image-guided and image-guided adaptive radiotherapy, assessment of 4D voxel trajectory variability, and development and validation of new tools for image registration and motion management.
Pulmonary nodule detection on computed tomography using neuro-evolutionary scheme
Huidrom, Ratishchandra
Chanu, Yambem Jina
Singh, Khumanthem Manglem
Signal, Image and Video Processing2018Journal Article, cited 0 times
Website
LIDC-IDRI
lung cancer
particle swarm optimization
Neuro-evolutional based computer aided detection system on computed tomography for the early detection of lung cancer.
Huidrom, R.
Chanu, Y. J.
Singh, K. M.
Multimedia Tools and Applications2022Journal Article, cited 0 times
Website
LIDC-IDRI
Computer Aided Detection (CADe)
LUNG
regularized discriminant features
cuckoo search algorithm
particle swarm optimization
nodule detection
pulmonary nodules
automatic detection
ct images
chest ct
algorithms
Lung cancer is one of the deadliest diseases, yet it can be treated effectively in its early stage. Computer-aided detection (CADe) can detect pulmonary nodules of lung cancer more accurately and faster than manual detection. This paper presents a new CADe system using a neuro-evolutionary approach. The proposed method is focused on the machine learning algorithm, which is a crucial area of the system. The CADe system extracts lung regions from computed tomography images and detects pulmonary nodules within the lung regions. False positive reduction is performed by a new neuro-evolutionary approach consisting of a feed-forward neural network and a combination of the cuckoo search algorithm and particle swarm optimization. The performance of the proposed method is further improved by using regularized discriminant features and achieves 95.8% sensitivity, 95.3% specificity and 95.5% accuracy.
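The particle swarm optimization component has a standard velocity/position update, sketched below with illustrative coefficients; the paper hybridizes this with cuckoo search to train the network weights.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=np.random):
    """One PSO update: inertia plus attraction toward each particle's
    personal best (pbest) and the swarm's global best (gbest)."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```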
An overview of publicly available patient-centered prostate cancer datasets
Hulsen, Tim
2019Journal Article, cited 0 times
NaF PROSTATE
Prostate Fused-MRI-Pathology
Prostate-3T
PROSTATE-DIAGNOSIS
PROSTATE-MRI
PROSTATEx
QIN PROSTATE
QIN-PROSTATE-Repeatability
TCGA-PRAD
Prostate cancer (PCa) is the second most common cancer in men, and the second leading cause of death from cancer in men. Many studies on PCa have been carried out, each taking much time before the data is collected and ready to be analyzed. However, a wide range of PCa datasets is already available on the internet that could be used for data mining, predictive modelling or other purposes, reducing the need to set up new studies to collect data. In the current scientific climate, moving more and more to the analysis of "big data" and large, international, multi-site projects using a modern IT infrastructure, these datasets could prove extremely valuable. This review presents an overview of publicly available patient-centered PCa datasets, divided into three categories (clinical, genomics and imaging) and an "overall" section, to enable researchers to select a suitable dataset for analysis without having to go through days of work to find the right data. To acquire a list of human PCa databases, scientific literature databases and academic social network sites were searched. We also used the information from other reviews. All databases in the combined list were then checked for public availability. Only databases that were either directly publicly available or available after signing a research data agreement or retrieving a free login were selected for inclusion in this review. Data should be available to commercial parties as well. This paper focuses on patient-centered data, so the genomics data section does not include gene-centered or pathway-centered databases. We identified 42 publicly available, patient-centered PCa datasets. Some of these consist of several smaller datasets, and some contain combinations of datasets from the three data domains: clinical data, imaging data and genomics data. Only one dataset contains information from all three domains. This review presents all datasets and their characteristics: number of subjects, clinical fields, imaging modalities, expression data, mutation data, biomarker measurements, etc. Despite all the attention that has been given to making this overview of publicly available databases as extensive as possible, it is very likely not complete, and it will also be outdated soon. However, this review might help many PCa researchers to find suitable datasets to answer their research questions without the need to start a new data collection project. In the coming era of big data analysis, overviews like this are becoming more and more useful.
Radiomics of NSCLC: Quantitative CT Image Feature Characterization and Tumor Shrinkage Prediction
Introduction: Since the onset of COVID-19, physicians and scientists have been working to further understand biomarkers associated with the infection, so that patients who have contracted the virus can be treated. Although COVID-19 is a complex virus that affects patients differently, current research suggests that COVID-19 infections have been associated with increased procalcitonin, a biomarker traditionally indicative of bacterial infections. This paper aims to investigate the relationship between COVID-19 infection severity and procalcitonin levels in the hope of aiding the management of patients with COVID-19 infections. Methods: Patient data were obtained from the Renaissance School of Medicine at Stony Brook University. The data of the patients who had tested positive for COVID-19 and had an associated procalcitonin value (n=1046) were divided into age splits of 18-59, 59-74, and 74-90. Multiple factors were analyzed to determine the severity of each patient's infection. Patients were divided into low, medium, and high severity groups depending on the patient's COVID-19 severity. A one-way analysis of variance (ANOVA) was done for each age split to compare procalcitonin values of the severity groups within the respective age split. Next, post hoc analysis was done for the severity groups in each age split to further compare the groups against each other. Results: One-way ANOVA testing of the three age splits all had a resulting p<0.0001, indicating that the null hypothesis was rejected. In the post hoc analysis, however, the test failed to reject the null hypothesis when comparing the medium and high severity groups against each other in the 59-74 and 74-90 age splits. The null hypothesis was rejected in all pairwise comparisons in the 18-59 age split. We determined that a procalcitonin value of greater than 0.24 ng/mL would characterize a more severe COVID-19 infection when considering patient factors and comorbidities. Conclusion: The analysis of the data concluded that elevated procalcitonin levels correlated with the severity of COVID-19 infections. This finding can be used to assist medical providers in the management of COVID-19 patients.
Collage CNN for Renal Cell Carcinoma Detection from CT
Learnable image histograms-based deep radiomics for renal cell carcinoma grading and staging
Hussain, M. A.
Hamarneh, G.
Garbi, R.
Comput Med Imaging Graph2021Journal Article, cited 0 times
Website
Algorithm Development
Deep Learning
KIDNEY
Computed Tomography (CT)
Fuhrman cancer grading and tumor-node-metastasis (TNM) cancer staging systems are typically used by clinicians in the treatment planning of renal cell carcinoma (RCC), a common cancer in men and women worldwide. Pathologists typically use percutaneous renal biopsy for RCC grading, while staging is performed by volumetric medical image analysis before renal surgery. Recent studies suggest that clinicians can effectively perform these classification tasks non-invasively by analyzing image texture features of RCC from computed tomography (CT) data. However, image feature identification for RCC grading and staging often relies on laborious manual processes, which is error prone and time-intensive. To address this challenge, this paper proposes a learnable image histogram in the deep neural network framework that can learn task-specific image histograms with variable bin centers and widths. The proposed approach enables learning statistical context features from raw medical data, which cannot be performed by a conventional convolutional neural network (CNN). The linear basis function of our learnable image histogram is piece-wise differentiable, enabling back-propagating errors to update the variable bin centers and widths during training. This novel approach can segregate the CT textures of an RCC in different intensity spectra, which enables efficient Fuhrman low (I/II) and high (III/IV) grading as well as RCC low (I/II) and high (III/IV) staging. The proposed method is validated on a clinical CT dataset of 159 patients from The Cancer Imaging Archive (TCIA) database, and it demonstrates 80% and 83% accuracy in RCC grading and staging, respectively.
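The piecewise-linear basis described above can be written as a triangular soft-binning whose bin centers and widths both receive gradients; a minimal PyTorch sketch (the bin count and value range are illustrative, not the paper's settings):

```python
import torch

def soft_histogram(x, centers, widths):
    """Differentiable histogram: each bin is a triangular (piecewise-linear)
    membership over trainable centers and widths. x: (B, N) intensities."""
    d = (x.unsqueeze(-1) - centers) / widths      # (B, N, bins)
    weights = torch.clamp(1 - d.abs(), min=0)     # triangular membership
    return weights.mean(dim=1)                    # (B, bins) soft bin counts

# Trainable bin parameters, updated by back-propagation like any weight.
centers = torch.nn.Parameter(torch.linspace(-1, 1, 16))
widths = torch.nn.Parameter(torch.full((16,), 0.125))
```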
Active deep learning from a noisy teacher for semi-supervised 3D image segmentation: Application to COVID-19 pneumonia infection in CT
Hussain, M. A.
Mirikharaji, Z.
Momeny, M.
Marhamati, M.
Neshat, A. A.
Garbi, R.
Hamarneh, G.
Comput Med Imaging Graph2022Journal Article, cited 0 times
Website
CT Images in COVID-19
Active learning
Covid-19
Deep learning
Noisy teacher
Pneumonia
Segmentation
Semi-supervised learning
Supervised deep learning has become a standard approach to solving medical image segmentation tasks. However, serious difficulties in attaining pixel-level annotations for sufficiently large volumetric datasets in real-life applications have highlighted the critical need for alternative approaches, such as semi-supervised learning, where model training can leverage small expert-annotated datasets to enable learning from much larger datasets without laborious annotation. Most of the semi-supervised approaches combine expert annotations and machine-generated annotations with equal weights within deep model training, despite the latter annotations being relatively unreliable and likely to affect model optimization negatively. To overcome this, we propose an active learning approach that uses an example re-weighting strategy, where machine-annotated samples are weighted (i) based on the similarity of their gradient directions of descent to those of expert-annotated data, and (ii) based on the gradient magnitude of the last layer of the deep model. Specifically, we present an active learning strategy with a query function that enables the selection of reliable and more informative samples from machine-annotated batch data generated by a noisy teacher. When validated on clinical COVID-19 CT benchmark data, our method improved the performance of pneumonia infection segmentation compared to the state of the art.
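Criterion (i) of the re-weighting strategy can be sketched as a cosine similarity between a machine-annotated sample's loss gradient and the expert-batch gradient, clamping disagreeing samples to zero. This illustrates the idea only, not the authors' exact query function.

```python
import torch
import torch.nn.functional as F

def gradient_similarity_weight(g_machine, g_expert):
    """Weight for a machine-annotated sample: cosine similarity between its
    gradient and the expert-batch gradient, zeroed when they disagree."""
    sim = F.cosine_similarity(g_machine.flatten(), g_expert.flatten(), dim=0)
    return torch.clamp(sim, min=0.0)
```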
An Investigation into Race Bias in Random Forest Models Based on Breast DCE-MRI Derived Radiomics Features
Huti, Mohamed
Lee, Tiarna
Sawyer, Elinor
King, Andrew P.
2023Book Section, cited 0 times
Duke-Breast-Cancer-MRI
Bias
Artificial Intelligence
Radiomics
Random forest classifier
DCE-MRI
BREAST
Recent research has shown that artificial intelligence (AI) models can exhibit bias in performance when trained using data that are imbalanced by protected attribute(s). Most work to date has focused on deep learning models, but classical AI techniques that make use of hand-crafted features may also be susceptible to such bias. In this paper we investigate the potential for race bias in random forest (RF) models trained using radiomics features. Our application is prediction of tumour molecular subtype from dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) of breast cancer patients. Our results show that radiomics features derived from DCE-MRI data do contain race-identifiable information, and that RF models can be trained to predict White and Black race from these data with 60–70% accuracy, depending on the subset of features used. Furthermore, RF models trained to predict tumour molecular subtype using race-imbalanced data seem to produce biased behaviour, exhibiting better performance on test data from the race on which they were trained.
Fully Automated Segmentation Models of Supratentorial Meningiomas Assisted by Inclusion of Normal Brain Images
Hwang, Kihwan
Park, Juntae
Kwon, Young-Jae
Cho, Se Jin
Choi, Byung Se
Kim, Jiwon
Kim, Eunchong
Jang, Jongha
Ahn, Kwang-Sung
Kim, Sangsoo
Kim, Chae-Yong
Journal of Imaging2022Journal Article, cited 0 times
BraTS 2019
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Automatic segmentation
U-net
Deep learning
Magnetic Resonance Imaging (MRI)
meningioma
To train an automatic brain tumor segmentation model, a large amount of data is required. In this paper, we proposed a strategy to overcome the limited amount of clinically collected magnetic resonance image (MRI) data regarding meningiomas by pre-training a model using a larger public dataset of MRIs of gliomas and augmenting our meningioma training set with normal brain MRIs. Pre-operative MRIs of 91 meningioma patients (171 MRIs) and 10 non-meningioma patients (normal brains) were collected between 2016 and 2019. Three-dimensional (3D) U-Net was used as the base architecture. The model was pre-trained with BraTS 2019 data, then fine-tuned with our datasets consisting of 154 meningioma MRIs and 10 normal brain MRIs. To increase the utility of the normal brain MRIs, a novel balanced Dice loss (BDL) function was used instead of the conventional soft Dice loss function. The model performance was evaluated using the Dice scores across the remaining 17 meningioma MRIs. The segmentation performance of the model was sequentially improved via the pre-training and inclusion of normal brain images. The Dice scores improved from 0.72 to 0.76 when the model was pre-trained. The inclusion of normal brain MRIs to fine-tune the model improved the Dice score; it increased to 0.79. When employing BDL as the loss function, the Dice score reached 0.84. The proposed learning strategy for U-net showed potential for use in segmenting meningioma lesions.
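For context, the conventional soft Dice loss that the balanced Dice loss (BDL) replaces degenerates on tumour-free (normal brain) images, where the target mask is empty. The sketch below shows only this baseline; the exact BDL formulation is not given in the abstract.

```python
import torch

def soft_dice_loss(probs, target, eps=1e-6):
    """Conventional soft Dice loss. On a tumour-free image the overlap term
    vanishes and the gradient signal degenerates, which is the failure mode
    the paper's balanced Dice loss (BDL) is designed to address."""
    inter = (probs * target).sum()
    return 1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)
```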
Advanced MRI Techniques in the Monitoring of Treatment of Gliomas
Hyare, Harpreet
Thust, Steffi
Rees, Jeremy
Current treatment options in neurology2017Journal Article, cited 11 times
Website
TCGA-GBM
glioma
OPINION STATEMENT: With advances in treatments and survival of patients with glioblastoma (GBM), it has become apparent that conventional imaging sequences have significant limitations both in terms of assessing response to treatment and monitoring disease progression. Both 'pseudoprogression' after chemoradiation for newly diagnosed GBM and 'pseudoresponse' after anti-angiogenesis treatment for relapsed GBM are well-recognised radiological entities. This in turn has led to revision of response criteria away from the standard MacDonald criteria, which depend on the two-dimensional measurement of contrast-enhancing tumour, and which have been the primary measure of radiological response for over three decades. A working party of experts published RANO (Response Assessment in Neuro-oncology Working Group) criteria in 2010 which take into account signal change on T2/FLAIR sequences as well as the contrast-enhancing component of the tumour. These have recently been modified for immune therapies, which are associated with specific issues related to the timing of radiological response. There has been increasing interest in quantification and validation of physiological and metabolic parameters in GBM over the last 10 years utilising the wide range of advanced imaging techniques available on standard MRI platforms. Previously, MRI would provide structural information only on the anatomical location of the tumour and the presence or absence of a disrupted blood-brain barrier. Advanced MRI sequences include proton magnetic resonance spectroscopy (MRS), vascular imaging (perfusion/permeability) and diffusion imaging (diffusion weighted imaging/diffusion tensor imaging) and are now routinely available. They provide biologically relevant functional, haemodynamic, cellular, metabolic and cytoarchitectural information and are being evaluated in clinical trials to determine whether they offer superior biomarkers of early treatment response to conventional imaging, when correlated with hard survival endpoints. Multiparametric imaging, incorporating different combinations of these modalities, improves accuracy over single imaging modalities but has not been widely adopted due to the amount of post-processing analysis required, lack of clinical trial data, lack of radiology training and wide variations in threshold values. New techniques including diffusion kurtosis and radiomics will offer a higher level of quantification but will require validation in clinical trial settings. Given all these considerations, it is clear that there is an urgent need to incorporate advanced techniques into clinical trial design to avoid the problems of under- or over-assessment of treatment response.
Encoder-Decoder Network for Brain Tumor Segmentation on Multi-sequence MRI
Iantsen, Andrei
Jaouen, Vincent
Visvikis, Dimitris
Hatt, Mathieu
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
In this paper we describe our approach based on convolutional neural networks for medical image segmentation in the context of the BraTS 2019 challenge. We use the conventional encoder-decoder architecture enhanced with residual blocks, as well as spatial and channel squeeze & excitation modules. The present paper describes the general pipeline including the data pre-processing, the choices regarding the model architecture, the training procedure and the chosen data augmentation techniques. Our final results in the BraTS 2019 segmentation challenge are Dice scores equal to 0.76, 0.87 and 0.80 for enhanced tumor, whole tumor and tumor core sub-regions, respectively.
Squeeze-and-Excitation Normalization for Brain Tumor Segmentation
Iantsen, Andrei
Jaouen, Vincent
Visvikis, Dimitris
Hatt, Mathieu
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
In this paper we described our approach for glioma segmentation in multi-sequence magnetic resonance imaging (MRI) in the context of the MICCAI 2020 Brain Tumor Segmentation Challenge (BraTS). We proposed an architecture based on U-Net with a new computational unit termed “SE Norm” that brought significant improvements in segmentation quality. Our approach obtained competitive results on the validation (Dice scores of 0.780, 0.911, 0.863) and test (Dice scores of 0.805, 0.887, 0.843) sets for the enhanced tumor, whole tumor and tumor core sub-regions. The full implementation and trained models are available at https://github.com/iantsen/brats.
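The "SE Norm" unit builds on channel squeeze-and-excitation recalibration. A minimal sketch of the standard 3D channel-SE step is shown below; the exact SE Norm wiring is in the authors' repository (https://github.com/iantsen/brats), so treat this only as the generic building block it extends.

```python
import torch
import torch.nn as nn

class ChannelSE3d(nn.Module):
    """Standard channel squeeze-and-excitation for 3D feature maps."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (B, C, D, H, W)
        s = x.mean(dim=(2, 3, 4))              # squeeze: global average pool
        w = self.fc(s)                         # excitation: per-channel weights
        return x * w.view(w.size(0), w.size(1), 1, 1, 1)
```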
Classification of the presence of malignant lesions on mammogram using deep learning
Ibragimov, Alisher A.
Senotrusova, Sofya A.
Litvinov, Arsenii A.
Beliaeva, Aleksandra A.
Ushakov, Egor N.
Markin, Yury V.
2024Journal Article, cited 0 times
CMMD
BACKGROUND: Breast cancer is one of the leading causes of cancer-related mortality in women [1]. Regular mass screening with mammography plays a critical role in the early detection of changes in breast tissue. However, the early stages of pathology often go undetected and are difficult to diagnose [2]. Despite the effectiveness of mammography in reducing breast cancer mortality, manual image analysis can be time-consuming and labor-intensive. Therefore, attempts to automate this process, for example using computer-aided diagnosis systems, are relevant [3]. In recent years, solutions based on neural networks have gained increasing interest, especially in biology and medicine [4-6]. Technological advances using artificial intelligence have already demonstrated their effectiveness in pathology detection [7, 8]. AIM: The study aimed to develop an automated solution to detect breast cancer on mammograms. MATERIALS AND METHODS: The solution is implemented as follows: a deep neural network-based tool has been developed to obtain the probability of malignancy from the input image. A combined dataset from public datasets such as MIAS, CBIS-DDSM, INbreast, CMMD, KAU-BCMD, and VinDr-Mammo [9–14] was used to train the model. RESULTS: The classification model, based on the EfficientNet-B3 architecture, achieved an area under the ROC curve of 0.95, a sensitivity of 0.88, and a specificity of 0.9 when tested on a sample from the combined dataset. The model’s high generalization ability, which is another advantage, was demonstrated by its ability to perform well on images from different datasets with varying data quality and acquisition regions. Furthermore, techniques such as image pre-cropping and augmentations during training were used to enhance the model's performance. CONCLUSIONS: The experimental results demonstrated that the model is capable of accurately detecting malignancies with a high degree of confidence. The obtained high-quality metrics offer a significant potential for implementing this method in automated diagnostics, for instance, as an additional opinion for medical specialists.
Automatic MRI Breast tumor Detection using Discrete Wavelet Transform and Support Vector Machines
Everyone has the right to live a healthy life free of serious diseases. Cancer is among the most serious diseases facing humans and can lead to death, so effective means of detecting it early and protecting people from it are needed. Breast cancer is considered one of the most dangerous types of cancer facing women in particular. Screening should be performed periodically, and the diagnosis must be sensitive and effective to preserve women's lives. There are various types of breast cancer images, but magnetic resonance imaging (MRI) has become one of the important modalities in breast cancer detection. In this work, a new method is proposed to detect breast cancer using MRI images that are preprocessed with a 2D median filter. Features are extracted from the images using the discrete wavelet transform (DWT) and reduced to 13 features. Then, a support vector machine (SVM) is used to detect whether a tumor is present. Simulation results were obtained using MRI image datasets extracted from the standard breast MRI database known as the "Reference Image Database to Evaluate Response (RIDER)". The proposed method achieved an accuracy of 98.03% on the available MRI database, with a processing time of 0.894 seconds for all steps. The obtained results demonstrate the superiority of the proposed system over those available in the literature.
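The pipeline above is a classical DWT-features-plus-SVM design. A minimal sketch follows, assuming PyWavelets and scikit-learn; the wavelet, the sub-band statistics, and the reduction to exactly 13 features are not specified in the abstract, so the choices here are illustrative.

```python
import numpy as np
import pywt                       # PyWavelets
from sklearn.svm import SVC

def dwt_features(image, wavelet="haar"):
    # One-level 2-D wavelet decomposition into approximation (cA) and
    # horizontal/vertical/diagonal detail sub-bands (cH, cV, cD).
    cA, (cH, cV, cD) = pywt.dwt2(image, wavelet)
    # Simple per-sub-band statistics as the feature vector (illustrative;
    # the paper reduces its features to 13).
    return np.array([f(c) for c in (cA, cH, cV, cD)
                     for f in (np.mean, np.std, np.max)])

# Usage sketch: clf = SVC(kernel="rbf").fit(train_features, train_labels)
# to decide tumour vs. no tumour per preprocessed MRI slice.
```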
MaasPenn radiomics reproducibility score: A novel quantitative measure for evaluating the reproducibility of CT-based handcrafted radiomic features
The application of a workflow integrating the variable reproducibility and harmonizability of radiomic features on a phantom dataset
Ibrahim, Abdalla
Refaee, Turkey
Leijenaar, Ralph TH
Primakov, Sergey
Hustinx, Roland
Mottaghy, Felix M
Woodruff, Henry C
Maidment, Andrew DA
Lambin, Philippe
PLoS One2021Journal Article, cited 2 times
Website
Credence Cartridge Radiomics Phantom CT Scans
radiomic features
The Effects of In-Plane Spatial Resolution on CT-Based Radiomic Features’ Stability with and without ComBat Harmonization
Ibrahim, Abdalla
Refaee, Turkey
Primakov, Sergey
Barufaldi, Bruno
Acciavatti, Raymond J.
Granzier, Renée W. Y.
Hustinx, Roland
Mottaghy, Felix M.
Woodruff, Henry C.
Wildberger, Joachim E.
Lambin, Philippe
Maidment, Andrew D. A.
Cancers2021Journal Article, cited 0 times
CC-Radiomics-Phantom
While handcrafted radiomic features (HRFs) have shown promise in the field of personalized medicine, many hurdles hinder its incorporation into clinical practice, including but not limited to their sensitivity to differences in acquisition and reconstruction parameters. In this study, we evaluated the effects of differences in in-plane spatial resolution (IPR) on HRFs, using a phantom dataset (n = 14) acquired on two scanner models. Furthermore, we assessed the effects of interpolation methods (IMs), the choice of a new unified in-plane resolution (NUIR), and ComBat harmonization on the reproducibility of HRFs. The reproducibility of HRFs was significantly affected by variations in IPR, with pairwise concordant HRFs, as measured by the concordance correlation coefficient (CCC), ranging from 42% to 95%. The number of concordant HRFs (CCC > 0.9) after resampling varied depending on (i) the scanner model, (ii) the IM, and (iii) the NUIR. The number of concordant HRFs after ComBat harmonization depended on the variations between the batches harmonized. The majority of IMs resulted in a higher number of concordant HRFs compared to ComBat harmonization, and the combination of IMs and ComBat harmonization did not yield a significant benefit. Our developed framework can be used to assess the reproducibility and harmonizability of RFs.
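The reproducibility measure used throughout this study is the concordance correlation coefficient (CCC), with CCC > 0.9 counted as concordant. A small sketch of Lin's CCC for a feature measured under two acquisition settings:

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements
    of the same handcrafted radiomic feature under two settings."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)
```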
Multi-Graph Convolutional Neural Network for Breast Cancer Multi-task Classification
Ibrahim, Mohamed
Henna, Shagufta
Cullen, Gary
2023Book Section, cited 0 times
CBIS-DDSM
Semi-supervised learning
Algorithm Development
Radiomics
Mammography is a popular diagnostic imaging procedure for detecting breast cancer at an early stage. Various deep-learning approaches to breast cancer detection incur high costs and are error-prone, and are therefore not reliable enough for use by medical practitioners. Specifically, these approaches do not exploit complex texture patterns and interactions. They also require labelled data to enable learning, which limits their scalability when labelled datasets are insufficient. Further, these models lack the capability to generalise to newly synthesised patterns/textures. To address these problems, we first design a graph model to transform the mammogram images into a highly correlated multigraph that encodes rich structural relations and high-level texture features. Next, we integrate a pre-training self-supervised learning multigraph encoder (SSL-MG) to improve feature representations, especially under limited labelled data constraints. Then, we design a semi-supervised mammogram multigraph convolution neural network downstream model (MMGCN) to perform multi-classification of mammogram segments encoded in the multigraph nodes. Our proposed frameworks, SSL-MGCN and MMGCN, reduce the need for annotated data to 40% and 60%, respectively, in contrast to conventional methods that require more than 80% of the data to be labelled. Finally, we evaluate the classification performance of MMGCN independently and with integration with SSL-MG in a model called SSL-MMGCN over multiple training settings. Our evaluation results on DDSM, one of the recent public datasets, demonstrate the efficient learning performance of SSL-MMGCN and MMGCN with 0.97 and 0.98 AUC classification accuracy, in contrast to the multitask deep graph convolutional network (GCN) method of Hao Du et al. (2021) with 0.81 AUC accuracy.
Application of Magnetic Resonance Radiomics Platform (MRP) for Machine Learning Based Features Extraction from Brain Tumor Images
Idowu, B.A.
Dada, O. M.
Awojoyogbe, O.B.
Journal of Science, Technology, Mathematics and Education (JOSTMED)2021Journal Article, cited 0 times
Website
TCGA-GBM
TCGA-LGG
BRAIN
Magnetic Resonance Imaging (MRI)
Machine Learning
Radiomic features
NIfTI
This study investigated the implementation of the magnetic resonance radiomics platform (MRP) for machine-learning-based feature extraction from brain tumor images. Magnetic resonance imaging data publicly available in The Cancer Imaging Archive (TCIA) were downloaded and used to perform image co-registration, multi-modality fusion, image interpolation, morphology operations, and extraction of radiomic features with MRP tools. Radiomics analyses were then applied to the data (containing AX-T1-POST, diffusion-weighted, AX-T2-FSE and AX-T2-FLAIR sequences) using wavelet decomposition principles. The results employing different configurations of low-pass and high-pass filters were exported to Microsoft Excel data sheets. The exported data were visualized using MATLAB’s classification learner tool. These exported data and the visualizations provide a new way of deep assessment of image data as well as easier interpretation of image scans. Findings from this study revealed that the machine learning radiomics platform is valuable for characterizing and visualizing brain tumors and provides adequate information about them.
Multi-View Attention-based Late Fusion (MVALF) CADx system for breast cancer using deep learning
Iftikhar, Hina
Shahid, Ahmad Raza
Raza, Basit
Khan,Hasan Nasir
Machine Graphics & Vision2020Journal Article, cited 0 times
Website
CBIS-DDSM
Computer Aided Diagnosis (CADx)
BREAST
Transfer learning
Mammography
Information fusion
Breast cancer is a leading cause of death among women. Early detection can significantly reduce the mortality rate among women and improve their prognosis. Mammography is the first-line procedure for early diagnosis. In the early era, conventional Computer-Aided Diagnosis (CADx) systems for breast lesion diagnosis were based on single-view information only. The last decade evidenced the use of two mammogram views, Medio-Lateral Oblique (MLO) and Cranio-Caudal (CC), for CADx systems. The most recent studies show the effectiveness of four mammogram views for training CADx systems with a feature-fusion strategy for the classification task. In this paper, we propose an end-to-end Multi-View Attention-based Late Fusion (MVALF) CADx system that fuses the predictions of four view models, each trained on one view separately. These separate models have different predictive ability for each class, and the appropriate fusion of multi-view models can achieve better diagnosis performance, so it is necessary to assign proper weights to the multi-view classification models. To resolve this issue, an attention-based weighting mechanism is adopted to assign the proper weights to the trained models for the fusion strategy. The proposed methodology is used for the classification of mammograms into normal, mass, calcification, malignant masses and benign masses. The publicly available datasets CBIS-DDSM and mini-MIAS are used for the experimentation. The results show that our proposed system achieved 0.996 AUC for normal vs. abnormal, 0.922 for mass vs. calcification and 0.896 for malignant vs. benign masses. Superior results are seen for the classification of malignant vs. benign masses with our proposed approach, higher than the results using single-view, two-view and four-view early-fusion-based systems. The overall results at each level show the potential of multi-view late fusion with transfer learning in the diagnosis of breast cancer.
Clinical and imaging characteristics of supratentorial glioma with IDH2 mutation
Ikeda, S.
Sakata, A.
Arakawa, Y.
Mineharu, Y.
Makino, Y.
Takeuchi, Y.
Fushimi, Y.
Okuchi, S.
Nakajima, S.
Otani, S.
Nakamoto, Y.
Neuroradiology2024Journal Article, cited 0 times
Website
UCSF-PDGM
TCGA-LGG
Glioma
Isocitrate Dehydrogenase
Magnetic Resonance Imaging
T2-FLAIR Mismatch Sign
PURPOSE: The rarity of IDH2 mutations in supratentorial gliomas has led to gaps in understanding their radiological characteristics, potentially resulting in misdiagnosis based solely on negative IDH1 immunohistochemical staining. We aimed to investigate the clinical and imaging characteristics of IDH2-mutant gliomas. METHODS: We analyzed imaging data from adult patients with pathologically confirmed diffuse lower-grade gliomas and known IDH1/2 alteration and 1p/19q codeletion statuses obtained from the records of our institute (January 2011 to August 2022, Cohort 1) and The Cancer Imaging Archive (TCIA, Cohort 2). Two radiologists evaluated clinical information and radiological findings using standardized methods. Furthermore, we compared the data for IDH2-mutant and IDH-wildtype gliomas. Multivariate logistic regression was used to identify the predictors of IDH2 mutation status, and receiver operating characteristic curve analysis was employed to assess the predictive performance of the model. RESULTS: Of the 20 IDH2-mutant supratentorial gliomas, 95% were in the frontal lobes, with 75% classified as oligodendrogliomas. Age and the T2-FLAIR discordance were independent predictors of IDH2 mutations. Receiver operating characteristic curve analysis for the model using age and T2-FLAIR discordance demonstrated a strong potential for discriminating between IDH2-mutant and IDH-wildtype gliomas, with an area under the curve of 0.96 (95% CI, 0.91-0.98, P = .02). CONCLUSION: A high frequency of oligodendrogliomas with 1p/19q codeletion was observed in IDH2-mutated gliomas. Younger age and the presence of the T2-FLAIR discordance were associated with IDH2 mutations and these findings may help with precise diagnoses and treatment decisions in clinical practice.
Brain tumor segmentation in MRI images using nonparametric localization and enhancement methods with U-net
Ilhan, A.
Sekeroglu, B.
Abiyev, R.
Int J Comput Assist Radiol Surg2022Journal Article, cited 2 times
Website
BraTS-TCGA-GBM
BraTS-TCGA-LGG
BRAIN
Image Enhancement/methods
Magnetic Resonance Imaging (MRI)
Segmentation
U-net
PURPOSE: Segmentation is one of the critical steps in analyzing medical images since it provides meaningful information for the diagnosis, monitoring, and treatment of brain tumors. In recent years, several artificial intelligence-based systems have been developed to perform this task accurately. However, the unobtrusive or low-contrast occurrence of some tumors and similarities to healthy brain tissues make the segmentation task challenging. This has led researchers to develop new methods for preprocessing the images and improving their segmentation abilities. METHODS: This study proposes an efficient system for the segmentation of complete brain tumors from MRI images based on tumor localization and enhancement methods with a deep learning architecture named U-net. Initially, the histogram-based nonparametric tumor localization method is applied to localize the tumorous regions and the proposed tumor enhancement method is used to modify the localized regions to increase the visual appearance of indistinct or low-contrast tumors. The resultant images are fed to the original U-net architecture to segment the complete brain tumors. RESULTS: The performance of the proposed tumor localization and enhancement methods with the U-net is tested on benchmark datasets, BRATS 2012, BRATS 2019, and BRATS 2020, and achieved superior results, with Dice scores of 0.94, 0.85, 0.87, and 0.88 for the BRATS 2012 HGG-LGG, BRATS 2019, and BRATS 2020 datasets, respectively. CONCLUSION: The results and comparisons showed how the proposed methods improve the segmentation ability of the deep learning models and provide high-accuracy and low-cost segmentation of complete brain tumors in MRI images. The results might yield the implementation of the proposed methods in segmentation tasks of different medical fields.
VR-Caps: A Virtual Environment for Capsule Endoscopy
İncetan, Kağan
Celik, Ibrahim Omer
Obeid, Abdulhamid
Gokceler, Guliz Irem
Ozyoruk, Kutsev Bengisu
Almalioglu, Yasin
Chen, Richard J
Mahmood, Faisal
Gilbert, Hunter
Durr, Nicholas J
Turan, Mehmet
Medical Image Analysis2021Journal Article, cited 0 times
CT COLONOGRAPHY
Current capsule endoscopes and next-generation robotic capsules for diagnosis and treatment of gastrointestinal diseases are complex cyber-physical platforms that must orchestrate complex software and hardware functions. The desired tasks for these systems include visual localization, depth estimation, 3D mapping, disease detection and segmentation, automated navigation, active control, path realization and optional therapeutic modules such as targeted drug delivery and biopsy sampling. Data-driven algorithms promise to enable many advanced functionalities for capsule endoscopes, but real-world data is challenging to obtain. Physically-realistic simulations providing synthetic data have emerged as a solution to the development of data-driven algorithms. In this work, we present a comprehensive simulation platform for capsule endoscopy operations and introduce VR-Caps, a virtual active capsule environment that simulates a range of normal and abnormal tissue conditions (e.g., inflated, dry, wet etc.) and varied organ types, capsule endoscope designs (e.g., mono, stereo, dual and 360° camera), and the type, number, strength, and placement of internal and external magnetic sources that enable active locomotion. VR-Caps makes it possible to both independently or jointly develop, optimize, and test medical imaging and analysis software for the current and next-generation endoscopic capsule systems. To validate this approach, we train state-of-the-art deep neural networks to accomplish various medical image analysis tasks using simulated data from VR-Caps and evaluate the performance of these models on real medical data. Results demonstrate the usefulness and effectiveness of the proposed virtual platform in developing algorithms that quantify fractional coverage, camera trajectory, 3D map reconstruction, and disease classification. All of the code, pre-trained weights and created 3D organ models of the virtual environment with detailed instructions how to setup and use the environment are made publicly available at https://github.com/CapsuleEndoscope/VirtualCapsuleEndoscopy and a video demonstration can be seen in the supplementary videos (Video-I).
Spatiotemporal learning of dynamic positron emission tomography data improves diagnostic accuracy in breast cancer
Inglese, Marianna
Duggento, Andrea
Boccato, Tommaso
Ferrante, Matteo
Toschi, Nicola
2022Conference Proceedings, cited 0 times
ACRIN-FLT-Breast
Positron emission tomography (PET) can reveal metabolic activity in a voxelwise manner. PET analysis is commonly performed in a static manner by analyzing the standardized uptake value (SUV) obtained from the plateau region of PET acquisitions. A dynamic PET acquisition can provide a map of the spatiotemporal concentration of the tracer in vivo, hence conveying information about radiotracer delivery to tissue, its interaction with the target and washout. Therefore, tissue-specific biochemical properties are embedded in the shape of time activity curves (TACs), which are generally used for kinetic analysis. Conventionally, TACs are employed along with information about blood plasma activity concentration, i.e., the arterial input function (AIF), and specific compartmental models to obtain a full quantitative analysis of PET data. The main drawback of this approach is the need for invasive procedures requiring arterial blood sample collection during the whole PET scan. In this paper, we address the challenge of improving PET diagnostic accuracy through an alternative approach based on the analysis of time signal intensity patterns. Specifically, we demonstrate the diagnostic potential of tissue TACs provided by dynamic PET acquisition using various deep learning models. Our framework is shown to outperform the discriminative potential of classical SUV analysis, hence paving the way for more accurate PET-based lesion discrimination without additional acquisition time or invasive procedures. Clinical Relevance- The diagnostic accuracy of dynamic PET data exploited by deep-learning based time signal intensity pattern analysis is superior to that of static SUV imaging.
Spatiotemporal Learning of Dynamic Positron Emission Tomography Data Improves Diagnostic Accuracy in Breast Cancer
Inglese, Marianna
Ferrante, Matteo
Duggento, Andrea
Boccato, Tommaso
Toschi, Nicola
IEEE Transactions on Radiation and Plasma Medical Sciences2023Journal Article, cited 0 times
ACRIN-FLT-Breast
Positron emission tomography (PET) is a noninvasive imaging technology able to assess the metabolic or functional state of healthy and/or pathological tissues. In clinical practice, PET data are usually acquired statically and normalized for the evaluation of the standardized uptake value (SUV). In contrast, dynamic PET acquisitions provide information about radiotracer delivery to tissue, its interaction with the target, and its physiological washout. The shape of the time activity curves (TACs) embeds tissue-specific biochemical properties. Conventionally, TACs are employed along with information about blood plasma activity concentration, i.e., the arterial input function, and tracer-specific compartmental models to obtain a full quantitative analysis of PET data. This method’s primary disadvantage is the requirement for invasive arterial blood sample collection throughout the whole PET scan. In this study, we employ a variety of deep learning models to illustrate the diagnostic potential of dynamic PET acquisitions of varying lengths for discriminating breast cancer lesions in the absence of arterial blood sampling compared to static PET only. Our findings demonstrate that the use of TACs, even in the absence of arterial blood sampling and even when using only a share of all timeframes available, outperforms the discriminative ability of conventional SUV analysis.
Automatic head computed tomography image noise quantification with deep learning
Inkinen, S. I.
Makela, T.
Kaasalainen, T.
Peltonen, J.
Kangasniemi, M.
Kortesniemi, M.
Phys Med2022Journal Article, cited 0 times
Website
LDCT-and-Projection-data
*Deep Learning
Head/diagnostic imaging
Humans
Image Processing
Computer-Assisted/methods
Neural Networks
Computer
Tomography
X-Ray Computed/methods
Anthropomorphic phantom
BRAIN
Computed Tomography (CT)
Deep learning
Image quality
Noise
PURPOSE: Computed tomography (CT) image noise is usually determined by standard deviation (SD) of pixel values from uniform image regions. This study investigates how deep learning (DL) could be applied in head CT image noise estimation. METHODS: Two approaches were investigated for noise image estimation of a single acquisition image: direct noise image estimation using supervised DnCNN convolutional neural network (CNN) architecture, and subtraction of a denoised image estimated with denoising UNet-CNN experimented with supervised and unsupervised noise2noise training approaches. Noise was assessed with local SD maps using 3D- and 2D-CNN architectures. Anthropomorphic phantom CT image dataset (N = 9 scans, 3 repetitions) was used for DL-model comparisons. Mean square error (MSE) and mean absolute percentage errors (MAPE) of SD values were determined using the SD values of subtraction images as ground truth. Open-source clinical head CT low-dose dataset (N(train) = 37, N(test) = 10 subjects) were used to demonstrate DL applicability in noise estimation from manually labeled uniform regions and in automated noise and contrast assessment. RESULTS: The direct SD estimation using 3D-CNN was the most accurate assessment method when comparing in phantom dataset (MAPE = 15.5%, MSE = 6.3 HU). Unsupervised noise2noise approach provided only slightly inferior results (MAPE = 20.2%, MSE = 13.7 HU). 2D-CNN and unsupervised UNet models provided the smallest MSE on clinical labeled uniform regions. CONCLUSIONS: DL-based clinical image assessment is feasible and provides acceptable accuracy as compared to true image noise. Noise2noise approach may be feasible in clinical use where no ground truth data is available. Noise estimation combined with tissue segmentation may enable more comprehensive image quality characterization.
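The local SD maps used as the assessment target above can be computed with a simple moving-window statistic. A sketch, assuming a square window whose size is an illustrative choice rather than the study's:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_sd_map(img, size=5):
    """Local standard-deviation map of a CT slice: SD within a sliding
    window, the classic per-pixel noise measure the DL models estimate."""
    img = img.astype(float)
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img ** 2, size)
    # Clamp tiny negatives caused by floating-point rounding.
    return np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
```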
Towards Efficient Segmentation and Classification of White Blood Cell Cancer Using Deep Learning
White blood cell cancer is a plasma cell cancer that starts in the bone marrow and leads to the formation of abnormal plasma cells. Medical examiners must be exceedingly careful when diagnosing myeloma cells. Moreover, because the final decision depends on human perception and judgment, there is a chance that the conclusion may be incorrect. This study is noteworthy because it creates a software-assisted way of recognizing and identifying myeloma cells in bone marrow scans. A Mask Region-based Convolutional Neural Network (Mask R-CNN) has been utilized for recognition, while EfficientNet-B3 has been used for detection. The mean Average Precision (mAP) of Mask R-CNN is 93%, whereas EfficientNet-B3 is 95% accurate. According to the findings of this study, the Mask R-CNN model can identify multiple myeloma, and EfficientNet-B3 can distinguish between myeloma and non-myeloma cells.
Brain tumor segmentation in multi‐spectral MRI using convolutional neural networks (CNN)
Iqbal, Sajid
Ghani, M Usman
Saba, Tanzila
Rehman, Amjad
Microscopy research and technique2018Journal Article, cited 8 times
Website
TCGA-LGG
MICCAI-BraTS
BraTS datasets
Convolutional Neural Network (CNN)
deep learning
feature mining
tumor segmentation
Improving the Robustness and Quality of Biomedical CNN Models through Adaptive Hyperparameter Tuning
Iqbal, S.
Qureshi, A. N.
Ullah, A.
Li, J. Q.
Mahmood, T.
Applied Sciences-Basel2022Journal Article, cited 0 times
BraTS 2020
BraTS 2021
BreakHis
Convolutional Neural Network (CNN)
Algorithm Development
Deep learning is an obvious method for disease detection and medical image analysis, and many researchers have looked into it. However, the performance of deep learning algorithms is frequently influenced by hyperparameter selection, so the question of which combination of hyperparameters is best emerges. To address this challenge, we proposed a novel algorithm for Adaptive Hyperparameter Tuning (AHT) that automates the selection of optimal hyperparameters for Convolutional Neural Network (CNN) training. All of the optimal hyperparameters for the CNN models were instantaneously selected and allocated using the proposed AHT algorithm. AHT enables CNN models to autonomously choose optimal hyperparameters for classifying medical images into various classes. The CNN model (Deep-Hist) categorizes medical images into basic classes, malignant and benign, with an accuracy of 95.71%. The most dominant CNN models, such as ResNet, DenseNet, and MobileNetV2, are all compared to the proposed CNN model (Deep-Hist). Plausible classification results were obtained using large, publicly available clinical datasets such as BreakHis, BraTS, NIH-Xray and COVID-19 X-ray. Medical practitioners and clinicians can utilize the CNN model to corroborate their initial malignant and benign classification assessment. The high F1 score and precision of the recommended approach, as well as its excellent generalization and accuracy, imply that it might be used to build a pathologist's aid tool.
Consistency and Comparison of Medical Image Registration-Segmentation and Mathematical Model for Glioblastoma Volume Progression
Irmak, Emrah
2020Journal Article, cited 0 times
RIDER NEURO MRI
Tumor volume progression measurement and calculation is a very common task in cancer research and image processing. Tumor volume analysis can be carried out in two ways: the first uses different mathematical formulas, and the second uses an image registration-segmentation method. In this paper, an objective application of registration of multiple brain imaging scans with segmentation is used to investigate brain tumor growth in a three-dimensional (3D) manner. Using a 3D medical image registration-segmentation algorithm, multiple MR scans of a patient who has a brain tumor are registered with different MR images of the same patient acquired at a different time, so that growth of the tumor inside the patient's brain can be investigated. Brain tumor volume measurement is also achieved using mathematical-model-based formulas in this paper. The medical image registration-segmentation and mathematical-model-based methods are applied to 19 patients, and satisfactory results are obtained. An advantage of the medical image registration-segmentation method for brain tumor investigation is that grown, diminished, and unchanged parts of each patient's brain tumor are identified and computed on an individual basis in 3D over time. This paper is intended to provide a comprehensive reference source for researchers involved in medical image registration, segmentation and tumor growth investigation.
Multi-Classification of Brain Tumor MRI Images Using Deep Convolutional Neural Network with Fully Optimized Framework
Irmak, Emrah
Iranian Journal of Science and Technology, Transactions of Electrical Engineering2021Journal Article, cited 0 times
Website
RIDER NEURO MRI
REMBRANDT
TCGA-LGG
Computer Aided Detection (CADe)
Convolutional Neural Network (CNN)
Classification
BRAIN
Brain tumor diagnosis and classification still rely on histopathological analysis of biopsy specimens today. The current method is invasive, time-consuming and prone to manual errors. These disadvantages show how essential it is to perform a fully automated method for multi-classification of brain tumors based on deep learning. This paper aims to perform multi-class classification of brain tumors for early diagnosis purposes using a convolutional neural network (CNN). Three different CNN models are proposed for three different classification tasks. Brain tumor detection is achieved with 99.33% accuracy using the first CNN model. The second CNN model can classify the brain tumor into five brain tumor types as normal, glioma, meningioma, pituitary and metastatic with an accuracy of 92.66%. The third CNN model can classify the brain tumors into three grades as Grade II, Grade III and Grade IV with an accuracy of 98.14%. All the important hyper-parameters of the CNN models are automatically designated using the grid search optimization algorithm. To the best of the author's knowledge, this is the first study for multi-classification of brain tumor MRI images using a CNN whose hyper-parameters are almost all tuned by the grid search optimizer. The proposed CNN models are compared with other popular state-of-the-art CNN models such as AlexNet, Inceptionv3, ResNet-50, VGG-16 and GoogleNet. Satisfactory classification results are obtained using large and publicly available clinical datasets. The proposed CNN models can be employed to assist physicians and radiologists in validating their initial screening for brain tumor multi-classification purposes.
Improved Abdominal Multi-Organ Segmentation via 3D Boundary-Constrained Deep Neural Networks
Irshad, Samra
Gomes, Douglas P. S.
Kim, Seong Tae
IEEE Access2023Journal Article, cited 0 times
Pancreas-CT
Algorithm Development
Automatic Segmentation
Quantitative assessment of the abdominal region from CT scans requires the accurate delineation of abdominal organs. Therefore, automatic abdominal image segmentation has been the subject of intensive research for the past two decades. Recently, deep learning-based methods have resulted in state-of-the-art performance for the 3D abdominal CT segmentation. However, the complex characterization of abdominal organs with weak boundaries prevents the deep learning methods from accurate segmentation. Specifically, the voxels on the boundary of organs are more vulnerable to misprediction due to the highly-varying intensities. This paper proposes a method for improved abdominal image segmentation by leveraging organ-boundary prediction as a complementary task. We train 3D encoder-decoder networks to simultaneously segment the abdominal organs and their boundaries via multi-task learning. We explore two network topologies based on the extent of weights shared between the two tasks within a unified multi-task framework. In the first topology, the whole-organ prediction task and the boundary detection task share all the layers in the network except for the last task-specific layers. The second topology employs a single shared encoder but two separate task-specific decoders. The effectiveness of utilizing the organs’ boundary information for abdominal multi-organ segmentation is evaluated on two publicly available abdominal CT datasets: Pancreas-CT and the BTCV dataset. The improvements shown in segmentation results reveal the advantage of the multi-task training that forces the network to pay attention to ambiguous boundaries of organs. A maximum relative improvement of 3.5% and 3.6% is observed in Mean Dice Score for Pancreas-CT and BTCV datasets, respectively.
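The multi-task setup described above pairs an organ-segmentation loss with a boundary-prediction loss over shared weights. A hedged sketch of such a combined objective is below; the weighting factor is an assumption, since the paper varies the weight-sharing topology rather than stating a single value.

```python
import torch.nn as nn

class MultiTaskLoss(nn.Module):
    """Segmentation loss plus a weighted boundary-prediction loss,
    a generic form of the complementary-task objective described above."""
    def __init__(self, boundary_weight=0.5):   # weight is an assumed value
        super().__init__()
        self.seg_loss = nn.CrossEntropyLoss()       # multi-class organ labels
        self.bnd_loss = nn.BCEWithLogitsLoss()      # binary boundary map
        self.w = boundary_weight

    def forward(self, seg_logits, seg_target, bnd_logits, bnd_target):
        return (self.seg_loss(seg_logits, seg_target)
                + self.w * self.bnd_loss(bnd_logits, bnd_target))
```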
nnU-Net for Brain Tumor Segmentation
Isensee, Fabian
Jäger, Paul F.
Full, Peter M.
Vollmuth, Philipp
Maier-Hein, Klaus H.
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
U-Net
We apply nnU-Net to the segmentation task of the BraTS 2020 challenge. The unmodified nnU-Net baseline configuration already achieves a respectable result. By incorporating BraTS-specific modifications regarding postprocessing, region-based training, a more aggressive data augmentation as well as several minor modifications to the nnU-Net pipeline we are able to improve its segmentation performance substantially. We furthermore re-implement the BraTS ranking scheme to determine which of our nnU-Net variants best fits the requirements imposed by it. Our method took the first place in the BraTS 2020 competition with Dice scores of 88.95, 85.06 and 82.03 and HD95 values of 8.498, 17.337 and 17.805 for whole tumor, tumor core and enhancing tumor, respectively.
X-ray CT scatter correction by a physics-motivated deep neural network
A fundamental problem in X-ray Computed Tomography (CT) is the scatter occurring due to the interaction of photons with the imaged object. Unless it is corrected, this phenomenon manifests itself as degradations in the reconstructions in the form of various artifacts. This makes scatter correction a critical step to obtain the desired reconstruction quality. Scatter correction methods consist of two groups: hardware-based and software-based. Despite success in specific settings, hardware-based methods require modification in the hardware or an increase in the scan time or dose. This makes software-based methods attractive. In this context, Monte-Carlo based scatter estimation, analytical-numerical and kernel-based methods were developed. Furthermore, the capacity of data-driven approaches to tackle this problem was recently demonstrated. In this thesis, two novel physics-motivated deep-learning-based methods are proposed. The methods estimate and correct for the scatter in the obtained projection measurements. They incorporate both an initial reconstruction of the object of interest and the scatter-corrupted measurements related to it. They use a common specific deep neural network architecture and a cost function adapted to the problem. Numerical experiments with data obtained by Monte-Carlo simulations of the imaging of phantoms reveal noticeable improvement over a recent projection-domain deep neural network correction method.
A rotation and translation invariant method for 3D organ image classification using deep convolutional neural networks
Islam, Kh Tohidul
Wijewickrema, Sudanthi
O’Leary, Stephen
PeerJ Computer Science2019Journal Article, cited 0 times
Website
Radiomics
Deep Learning
Three-dimensional (3D) medical image classification is useful in applications such as disease diagnosis and content-based medical image retrieval. It is a challenging task due to several reasons. First, image intensity values are vastly different depending on the image modality. Second, intensity values within the same image modality may vary depending on the imaging machine and artifacts may also be introduced in the imaging process. Third, processing 3D data requires high computational power. In recent years, significant research has been conducted in the field of 3D medical image classification. However, most of these make assumptions about patient orientation and imaging direction to simplify the problem and/or work with the full 3D images. As such, they perform poorly when these assumptions are not met. In this paper, we propose a method of classification for 3D organ images that is rotation and translation invariant. To this end, we extract a representative two-dimensional (2D) slice along the plane of best symmetry from the 3D image. We then use this slice to represent the 3D image and use a 20-layer deep convolutional neural network (DCNN) to perform the classification task. We show experimentally, using multi-modal data, that our method is comparable to existing methods when the assumptions of patient orientation and viewing direction are met. Notably, it shows similarly high accuracy even when these assumptions are violated, where other methods fail. We also explore how this method can be used with other DCNN models as well as conventional classification approaches.
Spatially Varying Label Smoothing: Capturing Uncertainty from Expert Annotations
Islam, Mobarakol
Glocker, Ben
2021Book Section, cited 0 times
LIDC-IDRI
The task of image segmentation is inherently noisy due to ambiguities regarding the exact location of boundaries between anatomical structures. We argue that this information can be extracted from the expert annotations at no extra cost, and when integrated into state-of-the-art neural networks, it can lead to improved calibration between soft probabilistic predictions and the underlying uncertainty. We built upon label smoothing (LS) where a network is trained on ‘blurred’ versions of the ground truth labels which has been shown to be effective for calibrating output predictions. However, LS is not taking the local structure into account and results in overly smoothed predictions with low confidence even for non-ambiguous regions. Here, we propose Spatially Varying Label Smoothing (SVLS), a soft labeling technique that captures the structural uncertainty in semantic segmentation. SVLS also naturally lends itself to incorporate inter-rater uncertainty when multiple labelmaps are available. The proposed approach is extensively validated on four clinical segmentation tasks with different imaging modalities, number of classes and single and multi-rater expert annotations. The results demonstrate that SVLS, despite its simplicity, obtains superior boundary prediction with improved uncertainty and model calibration.
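SVLS can be pictured as blurring the one-hot label maps with a small Gaussian kernel, so that soft label mass appears only near class boundaries while unambiguous interior regions keep near-hard targets. A minimal 2D PyTorch sketch follows; the kernel size and sigma are assumed values, not those of the paper.

```python
import torch
import torch.nn.functional as F

def svls_targets(labels, num_classes, ksize=3, sigma=1.0):
    """Spatially varying label smoothing sketch: Gaussian-blur the one-hot
    label maps so smoothing is applied only where classes actually meet."""
    # labels: (B, H, W) integer (long) class map -> one-hot (B, C, H, W)
    onehot = F.one_hot(labels, num_classes).permute(0, 3, 1, 2).float()
    coords = torch.arange(ksize, dtype=torch.float32) - ksize // 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    k2d = g[:, None] * g[None, :]
    k2d = k2d / k2d.sum()                                   # normalize to 1
    kernel = k2d[None, None].repeat(num_classes, 1, 1, 1)   # depthwise kernel
    # Blur each class channel independently (grouped convolution).
    return F.conv2d(onehot, kernel, padding=ksize // 2, groups=num_classes)
```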
Glioma Prognosis: Segmentation of the Tumor and Survival Prediction Using Shape, Geometric and Clinical Information
Islam, Mobarakol
Jose, V. Jeya Maria
Ren, Hongliang
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Segmentation of brain tumor from magnetic resonance imaging (MRI) is a vital process to improve diagnosis, treatment planning and to study the difference between subjects with tumor and healthy subjects. In this paper, we exploit a convolutional neural network (CNN) with the hypercolumn technique to segment tumor from healthy brain tissue. A hypercolumn is the concatenation of a set of vectors formed by extracting convolutional features from multiple layers. The proposed model integrates the batch normalization (BN) approach with hypercolumns. BN layers help to alleviate the internal covariate shift during stochastic gradient descent (SGD) training by enforcing zero mean and unit variance for each mini-batch. Survival prediction is done by first extracting features (geometric, fractal, and histogram) from the segmented brain tumor data. Then, the number of days of overall survival is predicted by applying regression to the extracted features using an artificial neural network (ANN). Our model achieves a mean dice score of 89.78%, 82.53% and 76.54% for the whole tumor, tumor core and enhancing tumor respectively in the segmentation task and 67.9% in the overall survival prediction task with the validation set of the BraTS 2018 challenge. It obtains a mean dice accuracy of 87.315%, 77.04% and 70.22% for the whole tumor, tumor core and enhancing tumor respectively in the segmentation task and 46.8% in the overall survival prediction task on the BraTS 2018 test data set.
Multi-modal PixelNet for Brain Tumor Segmentation
Islam, Mobarakol
Ren, Hongliang
2018Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Brain tumor segmentation using multi-modal MRI data sets is important for diagnosis, surgery and follow-up evaluation. In this paper, a convolutional neural network (CNN) with hypercolumn features (PixelNet) is utilized for automatic segmentation of brain tumors containing low- and high-grade glioblastomas. Though pixel-level convolutional predictors like CNNs are computationally efficient, such approaches are not statistically efficient during learning, precisely because spatial redundancy limits the information learned from neighboring pixels. PixelNet extracts features from multiple layers that correspond to the same pixel and samples a modest number of pixels across a small number of images for each SGD (stochastic gradient descent) batch update. PixelNet achieved whole-tumor Dice accuracy of 87.6% and 85.8% on the validation and testing data, respectively, in the BraTS 2017 challenge.
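The hypercolumn construction that both PixelNet entries rely on can be sketched directly: features from several layers are upsampled to a common grid, concatenated per pixel, and only a modest subset of pixel locations is sampled per SGD update. A hedged PyTorch sketch, assuming the caller supplies the per-layer feature maps and sampled coordinates:

```python
import torch
import torch.nn.functional as F

def hypercolumn(feature_maps, pixel_coords, out_size):
    """Concatenate multi-layer features at sampled pixel locations.
    feature_maps: list of (B, C_i, h_i, w_i) tensors from different layers.
    pixel_coords: (ys, xs) long tensors of sampled pixel indices.
    Sampling only a modest pixel subset per batch is what keeps SGD
    updates statistically efficient despite spatial redundancy."""
    ups = [F.interpolate(f, size=out_size, mode="bilinear",
                         align_corners=False) for f in feature_maps]
    stack = torch.cat(ups, dim=1)          # (B, sum(C_i), H, W)
    ys, xs = pixel_coords
    return stack[:, :, ys, xs]             # (B, sum(C_i), num_samples)
```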
Class Balanced PixelNet for Neurological Image Segmentation
Islam, Mobarakol
Ren, Hongliang
2018Conference Paper, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
In this paper, we propose an automatic brain tumor segmentation approach (PixelNet) using a pixel-level convolutional neural network (CNN). The model extracts features from multiple convolutional layers and concatenates them to form a hypercolumn, from which it samples a modest number of pixels for optimization. The hypercolumn provides both local and global contextual information for the pixel-wise predictor. The model achieves statistical efficiency by sampling a small number of pixels during training, whereas spatial redundancy limits the information learned from neighboring pixels in conventional pixel-level semantic segmentation approaches. Besides, label skewness in training data often leads a convolutional model to converge to certain classes, a common problem with medical datasets. We deal with this problem by selecting an equal number of pixels for all classes at sampling time. The proposed model has achieved promising results on brain tumor and ischemic stroke lesion segmentation datasets.
Brain Tumor Segmentation and Survival Prediction Using 3D Attention UNet
Islam, Mobarakol
Vibashan, V. S.
Jose, V. Jeya Maria
Wijethilake, Navodini
Utkarsh, Uppal
Ren, Hongliang
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
In this work, we develop an attention convolutional neural network (CNN) to segment brain tumors from Magnetic Resonance Images (MRI). Further, we predict the survival rate using various machine learning methods. We adopt a 3D UNet architecture and integrate channel and spatial attention with the decoder network to perform segmentation. For survival prediction, we extract some novel radiomic features based on geometry, location, the shape of the segmented tumor and combine them with clinical information to estimate the survival duration for each patient. We also perform extensive experiments to show the effect of each feature on overall survival (OS) prediction. The experimental results indicate that radiomic features such as histogram, location, and shape of the necrosis region and clinical features like age are the most critical parameters to estimate the OS.
Prostate Cancer Detection from MRI Using Efficient Feature Extraction with Transfer Learning
Islam, Rafiqul
Imran, Al
Rabbi, Md Fazle
Farhan, Mohd
Prostate Cancer2024Journal Article, cited 0 times
Website
PROSTATE-MRI
Computer Aided Detection (CADe)
Random forest classifier
Transfer learning
Radiomic features
Machine Learning
Deep Learning
Prostate cancer is a common cancer with significant implications for global health. Prompt and precise identification is crucial for effective treatment planning and improved patient outcomes. This research study investigates the utilization of machine learning techniques to diagnose prostate cancer. It emphasizes utilizing deep learning models, namely VGG16, VGG19, ResNet50, and ResNet50V2, to extract relevant features. The random forest approach then uses these features for classification. The study begins with a thorough comparative examination of the deep learning architectures outlined above to evaluate their effectiveness in extracting significant characteristics from prostate cancer imaging data. Key metrics such as sensitivity, specificity, and accuracy are used to assess the models' efficacy. With an accuracy of 99.64%, ResNet50 outperformed the other tested models when it came to identifying important features in images of prostate cancer. Furthermore, the analysis of interpretability factors aims to offer valuable insights into the decision-making process, thereby addressing a critical problem for clinical practice acceptance. The random forest classifier, a powerful ensemble learning method renowned for its adaptability and ability to handle intricate datasets, then uses the collected characteristics as input. The random forest model seeks to identify patterns in the feature space and produce precise predictions on the presence or absence of prostate cancer. In addition, the study tackles the restricted availability of datasets by utilizing transfer learning methods to refine the deep learning models using a small amount of annotated prostate cancer data. The objective of this method is to improve the ability of the models to generalize across different patient populations and clinical situations. This study's results are useful because they show how well VGG16, VGG19, ResNet50, and ResNet50V2 work for extracting features in the field of diagnosing prostate cancer when used with random forest's classification abilities. The results of this work provide a basis for creating reliable and easily understandable machine learning-based diagnostic tools for detecting prostate cancer, enhancing the possibility of an early and precise diagnosis in clinical settings.
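The feature-extraction-plus-random-forest pipeline described above can be sketched with standard tooling. The following assumes Keras and scikit-learn, ImageNet weights, global average pooling on the backbone, and images already resized to 224x224; the input sizing and forest hyper-parameters are illustrative, not the study's.

```python
import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from sklearn.ensemble import RandomForestClassifier

# Pre-trained ResNet50 as a fixed feature extractor (top removed,
# global average pooling yields one 2048-d vector per image).
backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images):
    # images: (N, 224, 224, 3) uint8/float array of MRI slices as RGB
    return backbone.predict(preprocess_input(images.astype(np.float32)))

# Usage sketch: a random forest classifies the pooled deep features.
# rf = RandomForestClassifier(n_estimators=200)
# rf.fit(extract_features(X_train), y_train)
```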
Fully automated deep-learning section-based muscle segmentation from CT images for sarcopenia assessment
Islam, S.
Kanavati, F.
Arain, Z.
Da Costa, O. F.
Crum, W.
Aboagye, E. O.
Rockall, A. G.
Clin Radiol2022Journal Article, cited 0 times
Website
Head-Neck-CT-Atlas
CT COLONOGRAPHY
Convolutional Neural Network (CNN)
AIM: To develop a fully automated deep-learning-based approach to measure muscle area for assessing sarcopenia on standard-of-care computed tomography (CT) of the abdomen without any case exclusion criteria, for opportunistic screening for frailty. MATERIALS AND METHODS: This ethically approved retrospective study used publicly available and institutional unselected abdominal CT images (n=1,070 training, n=31 testing). The method consisted of two sequential steps: section detection from CT volume followed by muscle segmentation on single-section. Both stages used fully convolutional neural networks (FCNN), based on a UNet-like architecture. Input data consisted of CT volumes with a variety of fields of view, section thicknesses, occlusions, artefacts, and anatomical variations. Output consisted of segmented muscle area on a CT section at the L3 vertebral level. The muscle was segmented into erector spinae, psoas, and rectus abdominus muscle groups. Output was tested against expert manual segmentation. RESULTS: Threefold cross-validation was used to evaluate the model. Section detection cross-validation error was 1.41 +/- 5.02 (in sections). Segmentation cross-validation Dice overlaps were 0.97 +/- 0.02, 0.95 +/- 0.04, and 0.94 +/- 0.04 for erector spinae, psoas, and rectus abdominus, respectively, and 0.96 +/- 0.02 for the combined muscle area, with R(2) = 0.95/0.98 for muscle attenuation/area in 28/31 hold-out test cases. No statistical difference was found between the automated output and a second annotator. Fully automated processing took <1 second per CT examination. CONCLUSIONS: A FCNN pipeline accurately and efficiently automates muscle segmentation at the L3 vertebral level from unselected abdominal CT volumes, with no manual processing step. This approach is promising as a generalisable tool for opportunistic screening for frailty on standard-of-care CT.
Lung Cancer Detection and Classification using Machine Learning Algorithm
Ismail, Meraj Begum Shaikh
Turkish Journal of Computer and Mathematics Education (TURCOMAT)2021Journal Article, cited 0 times
Website
LungCT-Diagnosis
Machine Learning
Segmentation
LUNG
co-occurrence matrix
The main objective of this research paper is to find the early stage of lung cancer and explore the accuracy levels of various machine learning algorithms. After a systematic literature study, we found that some classifiers have low accuracy and some have higher accuracy, but it is difficult to reach nearly 100%. Low accuracy and high implementation cost result from improper handling of DICOM images. Many different types of images are used for medical image processing, but Computed Tomography (CT) scans are generally preferred because of lower noise. Deep learning is proven to be the best method for medical image processing, lung nodule detection and classification, feature extraction, and lung cancer stage prediction. In the first stage of this system, image processing techniques are used to extract lung regions. The segmentation is done using K-means. The features are extracted from the segmented images, and the classification is done using various machine learning algorithms. The performances of the proposed approaches are evaluated based on their accuracy, sensitivity, specificity, and classification time.
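An illustrative sketch of the K-means segmentation step described above, assuming a 2D CT slice is available as a NumPy array (the slice below is synthetic):

import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(slice2d, k=2):
    # Cluster pixel intensities into k groups (e.g. lung vs. background).
    X = slice2d.reshape(-1, 1).astype(np.float32)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    return labels.reshape(slice2d.shape)

ct_slice = np.random.rand(128, 128)  # stand-in for a DICOM slice
mask = kmeans_segment(ct_slice)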
EfficientNet and multi-path convolution with multi-head attention network for brain tumor grade classification
Isunuri, B. Venkateswarlu
Kakarla, Jagadeesh
Computers and Electrical Engineering2023Journal Article, cited 0 times
Website
REMBRANDT
Brain-Tumor-Progression
BRAIN
Classification
Convolutional Neural Network (CNN)
Transfer learning
Grade classification is a challenging task in brain tumor image classification. Contemporary models employ transfer learning techniques to attain better performance. The existing models ignored the semantic features of a tumor during classification decisions. Moreover, contemporary research requires an optimized model to exhibit better performance on larger datasets. Thus, we propose an EfficientNet and multi-path convolution with a multi-head attention network for the grade classification. We used a pre-trained EfficientNetB4 in the feature extraction phase. Then, a multi-path convolution with multi-head attention network performs a feature enhancement task. Finally, features obtained from the above step are classified using a fully connected double dense network. We utilize TCIA repository datasets to generate a three-class (normal/low-grade/high-grade) classification dataset. Our model achieves 98.35% accuracy and 97.32% Jaccard coefficient. The proposed model achieves superior performance compared to its competing models in all key metrics. Further, we achieve similar performance on a noisy dataset.
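A rough sketch of such an architecture under stated assumptions (the authors' exact layer configuration is not reproduced here): pre-trained EfficientNetB4 features reshaped into tokens, refined by multi-head attention, then classified by dense layers:

from tensorflow.keras import layers, Model
from tensorflow.keras.applications import EfficientNetB4

inp = layers.Input((380, 380, 3))
feat = EfficientNetB4(include_top=False, weights="imagenet")(inp)
seq = layers.Reshape((-1, int(feat.shape[-1])))(feat)   # spatial grid -> tokens
att = layers.MultiHeadAttention(num_heads=4, key_dim=64)(seq, seq)
x = layers.GlobalAveragePooling1D()(att)
x = layers.Dense(128, activation="relu")(x)
out = layers.Dense(3, activation="softmax")(x)          # normal/low/high grade
model = Model(inp, out)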
Ensemble coupled convolution network for three-class brain tumor grade classification
Isunuri, Bala Venkateswarlu
Kakarla, Jagadeesh
Multimedia Tools and Applications2023Journal Article, cited 2 times
Website
REMBRANDT
Convolutional Neural Network (CNN)
Transfer learning
Feature Extraction
Classification
The brain tumor grade classification is one of the prevalent tasks in brain tumor image classification. The existing models have employed transfer learning and are unable to preserve semantic features. Moreover, the results are reported on small datasets with pre-trained models. Thus, there is a need for an optimized model that can exhibit superior performance on larger datasets. We have proposed an EfficientNet and coupled convolution network for the grade classification of brain magnetic resonance images. The feature extraction is performed using a pre-trained EfficientNetB0. Then, we have proposed a coupled convolution network for feature enhancement. Finally, enhanced features are classified using a fully connected dense network. We have utilized a global average pooling and dropout layers to avoid model overfitting. We have evaluated the proposed model on the REMBRANDT dataset and have achieved 96.95% accuracy. The proposed model outperforms existing pre-trained models and state-of-the-art models in vital metrics.
Magnetic resonance image features identify glioblastoma phenotypic subtypes with distinct molecular pathway activities
Itakura, Haruka
Achrol, Achal S
Mitchell, Lex A
Loya, Joshua J
Liu, Tiffany
Westbroek, Erick M
Feroze, Abdullah H
Rodriguez, Scott
Echegaray, Sebastian
Azad, Tej D
Science Translational Medicine2015Journal Article, cited 90 times
Website
TCGA-GBM
MRI
radiomic features
Automated detection and segmentation of thoracic lymph nodes from CT using 3D foveal fully convolutional neural networks
Iuga, A. I.
Carolus, H.
Hoink, A. J.
Brosch, T.
Klinder, T.
Maintz, D.
Persigehl, T.
Baessler, B.
Pusken, M.
BMC Med Imaging2021Journal Article, cited 0 times
Website
CT Lymph Nodes
Computer Aided Detection (CADe)
Segmentation
Deep Learning
Computed Tomography (CT)
BACKGROUND: In oncology, the correct determination of nodal metastatic disease is essential for patient management, as patient treatment and prognosis are closely linked to the stage of the disease. The aim of the study was to develop a tool for automatic 3D detection and segmentation of lymph nodes (LNs) in computed tomography (CT) scans of the thorax using a fully convolutional neural network based on 3D foveal patches. METHODS: The training dataset was collected from the Computed Tomography Lymph Nodes Collection of the Cancer Imaging Archive, containing 89 contrast-enhanced CT scans of the thorax. A total number of 4275 LNs was segmented semi-automatically by a radiologist, assessing the entire 3D volume of the LNs. Using this data, a fully convolutional neural network based on 3D foveal patches was trained with fourfold cross-validation. Testing was performed on an unseen dataset containing 15 contrast-enhanced CT scans of patients who were referred upon suspicion or for staging of bronchial carcinoma. RESULTS: The algorithm achieved a good overall performance with a total detection rate of 76.9% for enlarged LNs during fourfold cross-validation in the training dataset with 10.3 false-positives per volume and of 69.9% in the unseen testing dataset. In the training dataset a better detection rate was observed for enlarged LNs compared to smaller LNs, the detection rate for LNs with a short-axis diameter (SAD) >/= 20 mm and SAD 5-10 mm being 91.6% and 62.2% (p < 0.001), respectively. Best detection rates were obtained for LNs located in Level 4R (83.6%) and Level 7 (80.4%). CONCLUSIONS: The proposed 3D deep learning approach achieves an overall good performance in the automatic detection and segmentation of thoracic LNs and shows reasonable generalizability, yielding the potential to facilitate detection during routine clinical work and to enable radiomics research without observer-bias.
Journal of Student Research2021Journal Article, cited 0 times
Website
TCGA-GBM
Algorithm Development
Magnetic Resonance Imaging (MRI)
Pathomics
Radiomics
Digital pathology
Machine Learning
Computer Aided Diagnosis (CADx)
Cancer is the common name used to categorize a collection of diseases. In the United States, there were an estimated 1.8 million new cancer cases and 600,000 cancer deaths in 2020. Though it has been proven that an early diagnosis can significantly reduce cancer mortality, cancer screening is inaccessible to much of the world’s population. Machine learning approaches are increasingly successful in image-based diagnosis, disease prognosis, and risk assessment. A literature search with the Google Scholar and PubMed databases from January 2020 to June 2021 determined that currently, no machine learning model (n=0/417) has an accuracy of 90% or higher in diagnosing multiple cancers. We propose our model HOPE, the Heuristic Oncological Prognosis Evaluator, a transfer learning diagnostic tool for the screening of patients with common cancers. By applying this approach to magnetic resonance imaging (MRI) and digital whole slide pathology images, HOPE 2.0 demonstrates an overall accuracy of 95.52% in classifying brain, breast, colorectal, and lung cancer. HOPE 2.0 is a unique state-of-the-art model, as it possesses the ability to analyze multiple types of image data (radiology and pathology) and has an accuracy higher than existing models. HOPE 2.0 may ultimately aid in accelerating the diagnosis of multiple cancer types, resulting in improved clinical outcomes compared to previous research that focused on singular cancer diagnosis.
NextMed, Augmented and Virtual Reality platform for 3D medical imaging visualization: Explanation of the software platform developed for 3D models visualization related with medical images using Augmented and Virtual Reality technology
The visualization of radiological results with more advanced techniques than the current ones, such as Augmented Reality and Virtual Reality technologies, represents a great advance for medical professionals, by removing the need to rely on their own spatial imagination to understand medical images. The problem is that applying these techniques requires segmenting the anatomical areas of interest, and this currently involves human intervention. The Nextmed project is presented as a complete solution that includes DICOM image import, automatic segmentation of certain anatomical structures, 3D mesh generation of the segmented area, and a visualization engine with Augmented Reality and Virtual Reality, all thanks to different software platforms that have been implemented and are detailed here, including results obtained from real patients. We focus on the visualization platform using both Augmented and Virtual Reality technologies to allow medical professionals to work with 3D model representations of medical images in a different way, taking advantage of new technologies.
A divide and conquer approach to maximise deep learning mammography classification accuracies
Jaamour, A.
Myles, C.
Patel, A.
Chen, S. J.
McMillan, L.
Harris-Birtill, D.
PLoS One2023Journal Article, cited 0 times
Website
CBIS-DDSM
Female
*Deep Learning
Mammography/methods
*Breast Neoplasms/diagnostic imaging
Convolutional Neural Network (CNN)
mini-MIAS
BREAST
Breast cancer claims 11,400 lives on average every year in the UK, making it one of the deadliest diseases. Mammography is the gold standard for detecting early signs of breast cancer, which can help cure the disease during its early stages. However, incorrect mammography diagnoses are common and may harm patients through unnecessary treatments and operations (or a lack of treatment). Therefore, systems that can learn to detect breast cancer on their own could help reduce the number of incorrect interpretations and missed cases. Various deep learning techniques, which can be used to implement a system that learns how to detect instances of breast cancer in mammograms, are explored throughout this paper. Convolution Neural Networks (CNNs) are used as part of a pipeline based on deep learning techniques. A divide and conquer approach is followed to analyse the effects on performance and efficiency when utilising diverse deep learning techniques such as varying network architectures (VGG19, ResNet50, InceptionV3, DenseNet121, MobileNetV2), class weights, input sizes, image ratios, pre-processing techniques, transfer learning, dropout rates, and types of mammogram projections. This approach serves as a starting point for model development of mammography classification tasks. Practitioners can benefit from this work by using the divide and conquer results to select the most suitable deep learning techniques for their case out-of-the-box, thus reducing the need for extensive exploratory experimentation. Multiple techniques are found to provide accuracy gains relative to a general baseline (VGG19 model using uncropped 512 x 512 pixels input images with a dropout rate of 0.2 and a learning rate of 1 x 10^-3) on the Curated Breast Imaging Subset of DDSM (CBIS-DDSM) dataset. These techniques involve transferring pre-trained ImageNet weights to a MobileNetV2 architecture, with pre-trained weights from a binarised version of the mini Mammography Image Analysis Society (mini-MIAS) dataset applied to the fully connected layers of the model, coupled with using weights to alleviate class imbalance, and splitting CBIS-DDSM samples between images of masses and calcifications. Using these techniques, a 5.6% gain in accuracy over the baseline model was accomplished. Other deep learning techniques from the divide and conquer approach, such as larger image sizes, do not yield increased accuracies without the use of image pre-processing techniques such as Gaussian filtering, histogram equalisation and input cropping.
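A hedged sketch of the class-weighting idea mentioned above: balanced weights computed from label frequencies and passed to a Keras fit call (the model name is a placeholder, not the paper's code):

import numpy as np
from sklearn.utils.class_weight import compute_class_weight

y_train = np.array([0] * 900 + [1] * 100)   # imbalanced labels, synthetic counts
weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=y_train)
class_weight = dict(enumerate(weights))     # e.g. {0: 0.56, 1: 5.0}
# model.fit(X_train, y_train, class_weight=class_weight)  # hypothetical model
print(class_weight)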
Pathological categorization of lung carcinoma from multimodality images using convolutional neural networks
Jacob, Chinnu
Menon, Gopakumar Chandrasekhara
International Journal of Imaging Systems and Technology2021Journal Article, cited 0 times
Lung-PET-CT-Dx
Accurate diagnosis and treatment of lung carcinoma depend on its pathological type and staging. Normally, pathological analysis is performed either by needle biopsy or surgery. Therefore, a noninvasive method to detect pathological types would be a good alternative. Hence, this work aims at categorizing different types of lung cancer from multimodality images. The proposed approach involves two stages. Initially, a Blind/Referenceless Image Spatial Quality Evaluator‐based approach is adopted to extract the slices having lung abnormalities from the dataset. The slices are then transferred to a novel shallow convolutional neural network model to detect adenocarcinoma, squamous cell carcinoma, and small cell carcinoma from multimodality images. The classifier efficacy is then investigated by comparing precision, recall, area under curve, and accuracy with pretrained models and existing methods. The results show that the suggested system outperformed others, with a testing accuracy of 95% in Positron emission tomography/computed tomography (PET/CT), 93% in CT images of the Lung‐PET‐CT‐DX dataset, and 98% in the Lung3 dataset. Furthermore, a kappa score of 0.92 in PET/CT of Lung‐PET‐CT‐DX and 0.98 in CT of Lung3 exhibited the effectiveness of the presented system in the field of lung cancer classification.
Periodicity counting in videos with unsupervised learning of cyclic embeddings
Jacquelin, Nicolas
Vuillemot, Romain
Duffner, Stefan
Pattern Recognition Letters2022Journal Article, cited 0 times
4D-Lung
We introduce a context-agnostic unsupervised method to count periodicity in videos. Current methods estimate periodicity for a specific type of application (e.g. some repetitive human motion). We propose a novel method that provides a powerful generalisation ability since it is not biased towards specific visual features. It is thus applicable to a range of diverse domains that require no adaptation, by relying on a deep neural network that is trained completely unsupervised. More specifically, it is trained to transform the periodic temporal data into some lower-dimensional latent encoding in such a way that it forms a cyclic path in this latent space. We also introduce a novel algorithm that is able to reliably detect and count periods in complex time series. Despite being unsupervised and facing supervised methods with complex architectures, our experimental results demonstrate that our approach is able to reach state-of-the-art performance for periodicity counting on the challenging QUVA video benchmark.
Retina U-Net: Embarrassingly Simple Exploitation of Segmentation Supervision for Medical Object Detection
The task of localizing and categorizing objects in medical images often remains formulated as a semantic segmentation problem. This approach, however, only indirectly solves the coarse localization task by predicting pixel-level scores, requiring ad-hoc heuristics when mapping back to object-level scores. State-of-the-art object detectors on the other hand, allow for individual object scoring in an end-to-end fashion, while ironically trading in the ability to exploit the full pixel-wise supervision signal. This can be particularly disadvantageous in the setting of medical image analysis, where data sets are notoriously small. In this paper, we propose Retina U-Net, a simple architecture, which naturally fuses the Retina Net one-stage detector with the U-Net architecture widely used for semantic segmentation in medical images. The proposed architecture recaptures discarded supervision signals by complementing object detection with an auxiliary task in the form of semantic segmentation without introducing the additional complexity of previously proposed two-stage detectors. We evaluate the importance of full segmentation supervision on two medical data sets, provide an in-depth analysis on a series of toy experiments and show how the corresponding performance gain grows in the limit of small data sets. Retina U-Net yields strong detection performance only reached by its more complex two-staged counterparts. Our framework including all methods implemented for operation on 2D and 3D images is available at github.com/pfjaeger/medicaldetectiontoolkit.
Quantitative imaging in radiation oncology: An emerging science and clinical service
MGA-Net: A novel mask-guided attention neural network for precision neonatal brain imaging
Jafrasteh, B.
Lubian-Lopez, S. P.
Trimarco, E.
Ruiz, M. R.
Barrios, C. R.
Almagro, Y. M.
Benavente-Fernandez, I.
Neuroimage2024Journal Article, cited 0 times
Website
QIN GBM Treatment Response
Brain volume estimation
Deep learning
Mask guided attention
Multimodal image processing
U-net architecture
In this study, we introduce MGA-Net, a novel mask-guided attention neural network, which extends the U-net model for precision neonatal brain imaging. MGA-Net is designed to extract the brain from other structures and reconstruct high-quality brain images. The network employs a common encoder and two decoders: one for brain mask extraction and the other for brain region reconstruction. A key feature of MGA-Net is its high-level mask-guided attention module, which leverages features from the brain mask decoder to enhance image reconstruction. To enable the same encoder and decoder to process both MRI and ultrasound (US) images, MGA-Net integrates sinusoidal positional encoding. This encoding assigns distinct positional values to MRI and US images, allowing the model to effectively learn from both modalities. Consequently, features learned from a single modality can aid in learning a modality with less available data, such as US. We extensively validated the proposed MGA-Net on diverse and independent datasets from varied clinical settings and neonatal age groups. The metrics used for assessment included the DICE similarity coefficient, recall, and accuracy for image segmentation; structural similarity for image reconstruction; and root mean squared error for total brain volume estimation from 3D ultrasound images. Our results demonstrate that MGA-Net significantly outperforms traditional methods, offering superior performance in brain extraction and segmentation while achieving high precision in image reconstruction and volumetric analysis. Thus, MGA-Net represents a robust and effective preprocessing tool for MRI and 3D ultrasound images, marking a significant advance in neuroimaging that enhances both research and clinical diagnostics in the neonatal period and beyond.
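A sketch of the sinusoidal positional encoding described above, written in the standard transformer form (an assumption; the paper's exact formulation may differ): distinct position indices could tag the MRI and US modalities:

import numpy as np

def sinusoidal_encoding(position, dim):
    # Even indices get sin, odd indices get cos of position-scaled angles.
    i = np.arange(dim)
    angles = position / np.power(10000.0, (2 * (i // 2)) / dim)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

mri_code = sinusoidal_encoding(0, 64)  # e.g. position 0 tags MRI inputs
us_code = sinusoidal_encoding(1, 64)   # position 1 tags ultrasound inputs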
Stanford DRO Toolkit: Digital Reference Objects for Standardization of Radiomic Features
Jaggi, Akshay
Mattonen, Sarah A.
McNitt-Gray, Michael
Napel, Sandy
Tomography2020Journal Article, cited 0 times
CC-Radiomics-Phantom
DRO-Toolkit
Several institutions have developed image feature extraction software to compute quantitative descriptors of medical images for radiomics analyses. With radiomics increasingly proposed for use in research and clinical contexts, new techniques are necessary for standardizing and replicating radiomics findings across software implementations. We have developed a software toolkit for the creation of 3D digital reference objects with customizable size, shape, intensity, texture, and margin sharpness values. Using user-supplied input parameters, these objects are defined mathematically as continuous functions, discretized, and then saved as DICOM objects. Here, we present the definition of these objects, parameterized derivations of a subset of their radiomics values, computer code for object generation, example use cases, and a user-downloadable sample collection used for the examples cited in this paper.
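A toy example in the spirit of the toolkit, not its actual API: a spherical digital reference object with adjustable size, intensity, and margin sharpness (all parameter names here are invented):

import numpy as np
from scipy.ndimage import gaussian_filter

def sphere_dro(size=64, radius=20, intensity=1000.0, margin_sigma=1.5):
    # Binary sphere scaled to an intensity, then blurred to soften the margin.
    z, y, x = np.ogrid[:size, :size, :size]
    c = size / 2.0
    mask = (x - c) ** 2 + (y - c) ** 2 + (z - c) ** 2 <= radius ** 2
    return gaussian_filter(mask.astype(np.float32) * intensity, sigma=margin_sigma)

dro = sphere_dro()  # a 64^3 volume ready for feature-extraction tests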
Prediction of Treatment Response to Neoadjuvant Chemotherapy for Breast Cancer via Early Changes in Tumor Heterogeneity Captured by DCE-MRI Registration
We analyzed DCE-MR images from 132 women with locally advanced breast cancer from the I-SPY1 trial to evaluate changes of intra-tumor heterogeneity for augmenting early prediction of pathologic complete response (pCR) and recurrence-free survival (RFS) after neoadjuvant chemotherapy (NAC). Utilizing image registration, voxel-wise changes including tumor deformations and changes in DCE-MRI kinetic features were computed to characterize heterogeneous changes within the tumor. Using five-fold cross-validation, logistic regression and Cox regression were performed to model pCR and RFS, respectively. The extracted imaging features were evaluated in augmenting established predictors, including functional tumor volume (FTV) and histopathologic and demographic factors, using the area under the curve (AUC) and the C-statistic as performance measures. The extracted voxel-wise features were also compared to analogous conventional aggregated features to evaluate the potential advantage of voxel-wise analysis. Voxel-wise features improved prediction of pCR (AUC = 0.78 (±0.03) vs 0.71 (±0.04), p < 0.05) and RFS (C-statistic = 0.76 (±0.05) vs 0.63 (±0.01), p < 0.05), while models based on analogous aggregate imaging features did not show appreciable performance changes (p > 0.05). Furthermore, all selected voxel-wise features demonstrated significant association with outcome (p < 0.05). Thus, precise measures of voxel-wise changes in tumor heterogeneity extracted from registered DCE-MRI scans can improve early prediction of neoadjuvant treatment outcomes in locally advanced breast cancer.
Brain tumour segmentation is a crucial task in medical imaging that involves identifying and delineating the boundaries of tumour tissues in the brain from MRI scans. Accurate segmentation plays an indispensable role in the diagnosis, treatment planning, and monitoring of patients with brain tumours. This study presents a novel approach to address the class imbalance prevalent in brain tumour segmentation using a shared-encoder multi-class segmentation framework. The proposed method involves training a single encoder class learner and multiple decoder class learners, which are designed to learn feature representation of a certain class subset, in addition to a shared encoder between them that extracts common features across all classes. The outputs of the complement-class learners are combined and propagated to a meta-learner to obtain the final segmentation map. The authors evaluate their method on a publicly available brain tumour segmentation dataset (BraTS20) and assess performance against the 2D U-Net model trained on all classes using standard evaluation metrics for multi-class semantic segmentation. The IoU and DSC scores for the proposed architecture stand at 0.644 and 0.731, respectively, as compared to 0.604 and 0.690 obtained by the base models. Furthermore, our model exhibits significant performance boosts in individual classes, as evidenced by the DSC scores of 0.588, 0.734, and 0.684 for the necrotic tumour core, peritumoral edema, and the GD-enhancing tumour classes, respectively. In contrast, the 2D U-Net model yields DSC scores of 0.554, 0.699, and 0.641 for the same classes, respectively. The approach exhibits notable performance gains in segmenting the T1-Gd class, which not only poses a formidable challenge in terms of segmentation but also holds paramount clinical significance for radiation therapy.
Genomic mapping and survival prediction in glioblastoma: molecular subclassification strengthened by hemodynamic imaging biomarkers
Jain, Rajan
Poisson, Laila
Narang, Jayant
Gutman, David
Scarpace, Lisa
Hwang, Scott N
Holder, Chad
Wintermark, Max
Colen, Rivka R
Kirby, Justin
Freymann, John
Brat, Daniel J
Jaffe, Carl
Mikkelsen, Tom
Radiology2013Journal Article, cited 99 times
Website
Radiomics
Glioblastoma Multiforme (GBM)
Magnetic Resonance Imaging (MRI)
molecular subtype
PURPOSE: To correlate tumor blood volume, measured by using dynamic susceptibility contrast material-enhanced T2*-weighted magnetic resonance (MR) perfusion studies, with patient survival and determine its association with molecular subclasses of glioblastoma (GBM). MATERIALS AND METHODS: This HIPAA-compliant retrospective study was approved by institutional review board. Fifty patients underwent dynamic susceptibility contrast-enhanced T2*-weighted MR perfusion studies and had gene expression data available from the Cancer Genome Atlas. Relative cerebral blood volume (rCBV) (maximum rCBV [rCBV(max)] and mean rCBV [rCBV(mean)]) of the contrast-enhanced lesion as well as rCBV of the nonenhanced lesion (rCBV(NEL)) were measured. Patients were subclassified according to the Verhaak and Phillips classification schemas, which are based on similarity to defined genomic expression signature. We correlated rCBV measures with the molecular subclasses as well as with patient overall survival by using Cox regression analysis. RESULTS: No statistically significant differences were noted for rCBV(max), rCBV(mean) of contrast-enhanced lesion or rCBV(NEL) between the four Verhaak classes or the three Phillips classes. However, increased rCBV measures are associated with poor overall survival in GBM. The rCBV(max) (P = .0131) is the strongest predictor of overall survival regardless of potential confounders or molecular classification. Interestingly, including the Verhaak molecular GBM classification in the survival model clarifies the association of rCBV(mean) with patient overall survival (hazard ratio: 1.46, P = .0212) compared with rCBV(mean) alone (hazard ratio: 1.25, P = .1918). Phillips subclasses are not predictive of overall survival nor do they affect the predictive ability of rCBV measures on overall survival. CONCLUSION: The rCBV(max) measurements could be used to predict patient overall survival independent of the molecular subclasses of GBM; however, Verhaak classifiers provided additional information, suggesting that molecular markers could be used in combination with hemodynamic imaging biomarkers in the future.
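A minimal sketch of the survival modelling step using the lifelines package; the column names and numbers below are invented for illustration, not study data:

import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "rCBV_max": [3.1, 5.4, 2.2, 6.0, 4.3, 1.9],   # hypothetical perfusion values
    "months":   [18, 7, 24, 5, 11, 30],           # follow-up time
    "event":    [1, 1, 0, 1, 1, 0],               # 1 = death observed
})
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
cph.print_summary()  # hazard ratio for rCBV_max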
Correlation of perfusion parameters with genes related to angiogenesis regulation in glioblastoma: a feasibility study
Jain, R
Poisson, L
Narang, J
Scarpace, L
Rosenblum, ML
Rempel, S
Mikkelsen, T
American Journal of Neuroradiology2012Journal Article, cited 39 times
Website
Glioblastoma Multiforme (GBM)
BRAIN
TCGA
Radiomics
Radiogenomics
PET/CT
BACKGROUND AND PURPOSE: Integration of imaging and genomic data is critical for a better understanding of gliomas, particularly considering the increasing focus on the use of imaging biomarkers for patient survival and treatment response. The purpose of this study was to correlate CBV and PS measured by using PCT with the genes regulating angiogenesis in GBM. MATERIALS AND METHODS: Eighteen patients with WHO grade IV gliomas underwent pretreatment PCT and measurement of CBV and PS values from enhancing tumor. Tumor specimens were analyzed by TCGA by using Human Gene Expression Microarrays and were interrogated for correlation between CBV and PS estimates across the genome. We used the GO biologic process pathways for angiogenesis regulation to select genes of interest. RESULTS: We observed expression levels for 92 angiogenesis-associated genes (332 probes), 19 of which had significant correlation with PS and 9 of which had significant correlation with CBV (P < .05). Proangiogenic genes such as TNFRSF1A (PS = 0.53, P = .024), HIF1A (PS = 0.62, P = .0065), KDR (CBV = 0.60, P = .0084; PS = 0.59, P = .0097), TIE1 (CBV = 0.54, P = .022; PS = 0.49, P = .039), and TIE2/TEK (CBV = 0.58, P = .012) showed a significant positive correlation; whereas antiangiogenic genes such as VASH2 (PS = -0.72, P = .00011) showed a significant inverse correlation. CONCLUSIONS: Our findings are provocative, with some of the proangiogenic genes showing a positive correlation and some of the antiangiogenic genes showing an inverse correlation with tumor perfusion parameters, suggesting a molecular basis for these imaging biomarkers; however, this should be confirmed in a larger patient population.
Outcome prediction in patients with glioblastoma by using imaging, clinical, and genomic biomarkers: focus on the nonenhancing component of the tumor
Jain, R.
Poisson, L. M.
Gutman, D.
Scarpace, L.
Hwang, S. N.
Holder, C. A.
Wintermark, M.
Rao, A.
Colen, R. R.
Kirby, J.
Freymann, J.
Jaffe, C. C.
Mikkelsen, T.
Flanders, A.
Radiology2014Journal Article, cited 86 times
Website
Radiogenomics
VASARI
BRAIN
Genomics
Glioblastoma
Magnetic Resonance Imaging (MRI)
PURPOSE: To correlate patient survival with morphologic imaging features and hemodynamic parameters obtained from the nonenhancing region (NER) of glioblastoma (GBM), along with clinical and genomic markers. MATERIALS AND METHODS: An institutional review board waiver was obtained for this HIPAA-compliant retrospective study. Forty-five patients with GBM underwent baseline imaging with contrast material-enhanced magnetic resonance (MR) imaging and dynamic susceptibility contrast-enhanced T2*-weighted perfusion MR imaging. Molecular and clinical predictors of survival were obtained. Single and multivariable models of overall survival (OS) and progression-free survival (PFS) were explored with Kaplan-Meier estimates, Cox regression, and random survival forests. RESULTS: Worsening OS (log-rank test, P = .0103) and PFS (log-rank test, P = .0223) were associated with increasing relative cerebral blood volume of NER (rCBVNER), which was higher with deep white matter involvement (t test, P = .0482) and poor NER margin definition (t test, P = .0147). NER crossing the midline was the only morphologic feature of NER associated with poor survival (log-rank test, P = .0125). Preoperative Karnofsky performance score (KPS) and resection extent (n = 30) were clinically significant OS predictors (log-rank test, P = .0176 and P = .0038, respectively). No genomic alterations were associated with survival, except patients with high rCBVNER and wild-type epidermal growth factor receptor (EGFR) mutation had significantly poor survival (log-rank test, P = .0306; area under the receiver operating characteristic curve = 0.62). Combining resection extent with rCBVNER marginally improved prognostic ability (permutation, P = .084). Random forest models of presurgical predictors indicated rCBVNER as the top predictor; also important were KPS, age at diagnosis, and NER crossing the midline. A multivariable model containing rCBVNER, age at diagnosis, and KPS can be used to group patients with more than 1 year of difference in observed median survival (0.49-1.79 years). CONCLUSION: Patients with high rCBVNER and NER crossing the midline and those with high rCBVNER and wild-type EGFR mutation showed poor survival. In multivariable survival models, however, rCBVNER provided unique prognostic information that went above and beyond the assessment of all NER imaging features, as well as clinical and genomic features.
Comput Biol Med2021Journal Article, cited 1 times
Website
LIDC-IDRI
Algorithm Development
LUNG
Computed Tomography (CT)
*Generative adversarial network
Segmentation
Machine Learning
Lung nodule segmentation is an exciting area of research for the effective detection of lung cancer. One of the significant challenges in detecting lung cancer is accuracy, which is affected by visual deviations and heterogeneity in the lung nodules. Hence, to improve the accuracy of the segmentation process, a Salp Shuffled Shepherd Optimization Algorithm-based Generative Adversarial Network (SSSOA-based GAN) model is developed in this research for lung nodule segmentation. The SSSOA is a hybrid optimization algorithm developed by integrating the Salp Swarm Algorithm (SSA) and the shuffled shepherd optimization algorithm (SSOA). The artefacts in the input Computed Tomography (CT) image are removed by performing pre-processing with the help of a Gaussian filter. The pre-processed image is subjected to lung lobe segmentation, which is done with the help of deep joint segmentation for segmenting the appropriate regions. The lung nodule segmentation is performed using the GAN. The GAN is trained using the SSSOA to effectively segment the lung nodule from the lung lobe image. Metrics such as the Dice coefficient, accuracy, and Jaccard similarity are used to evaluate the performance. The developed SSSOA-based GAN method obtained a maximum accuracy of 0.9387, a maximum Dice coefficient of 0.7986, and a maximum Jaccard similarity of 0.8026 compared with the existing lung nodule segmentation methods.
Integrative analysis of diffusion-weighted MRI and genomic data to inform treatment of glioblastoma
Jajamovich, Guido H
Valiathan, Chandni R
Cristescu, Razvan
Somayajula, Sangeetha
Journal of Neuro-Oncology2016Journal Article, cited 4 times
Website
TCGA-GBM
Radiogenomics
Classification
Gene expression profiling from glioblastoma (GBM) patients enables characterization of cancer into subtypes that can be predictive of response to therapy. An integrative analysis of imaging and gene expression data can potentially be used to obtain novel biomarkers that are closely associated with the genetic subtype and gene signatures and thus provide a noninvasive approach to stratify GBM patients. In this retrospective study, we analyzed the expression of 12,042 genes for 558 patients from The Cancer Genome Atlas (TCGA). Among these patients, 50 patients had magnetic resonance imaging (MRI) studies including diffusion weighted (DW) MRI in The Cancer Imaging Archive (TCIA). We identified the contrast enhancing region of the tumors using the pre- and post-contrast T1-weighted MRI images and computed the apparent diffusion coefficient (ADC) histograms from the DW-MRI images. Using the gene expression data, we classified patients into four molecular subtypes, determined the number and composition of genes modules using the gap statistic, and computed gene signature scores. We used logistic regression to find significant predictors of GBM subtypes. We compared the predictors for different subtypes using Mann-Whitney U tests. We assessed detection power using area under the receiver operating characteristic (ROC) analysis. We computed Spearman correlations to determine the associations between ADC and each of the gene signatures. We performed gene enrichment analysis using Ingenuity Pathway Analysis (IPA). We adjusted all p values using the Benjamini and Hochberg method. The mean ADC was a significant predictor for the neural subtype. Neural tumors had a significantly lower mean ADC compared to non-neural tumors ([Formula: see text]), with mean ADC of [Formula: see text] and [Formula: see text] for neural and non-neural tumors, respectively. Mean ADC showed an area under the ROC of 0.75 for detecting neural tumors. We found eight gene modules in the GBM cohort. The mean ADC was significantly correlated with the gene signature related with dendritic cell maturation ([Formula: see text], [Formula: see text]). Mean ADC could be used as a biomarker of a gene signature associated with dendritic cell maturation and to assist in identifying patients with neural GBMs, known to be resistant to aggressive standard of care.
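A minimal sketch of the statistics described above: Spearman correlation between ADC and gene-signature scores with Benjamini-Hochberg adjustment (all numbers below are synthetic):

import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
adc = rng.normal(1.1, 0.2, 50)                     # stand-in mean ADC values
pvals = [spearmanr(adc, rng.normal(size=50)).pvalue for _ in range(8)]
rejected, p_adj, _, _ = multipletests(pvals, method="fdr_bh")
print(p_adj)  # adjusted p-values for the 8 hypothetical signatures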
Explainable Lung Nodule Malignancy Classification from CT Scans
We present an AI-assisted approach for classification of malignancy of lung nodules in CT scans for explainable AI-assisted lung cancer screening. We evaluate this explainable classification to estimate lung nodule malignancy against the LIDC-IDRI dataset. The LIDC-IDRI dataset includes biomarkers from radiologists' annotations, thereby providing a training dataset for nodule malignancy suspicion and other findings. The algorithm employs a 3D Convolutional Neural Network (CNN) to predict both the malignancy suspicion level as well as the biomarker attributes. Some biomarkers such as malignancy and subtlety are ordinal in nature, but others such as internal structure and calcification are categorical. Our approach is uniquely able to predict a multitude of fields, estimating not only malignancy but also many other correlated biomarker variables. We evaluate the malignancy classification algorithm in several ways, including presentation of the accuracy of malignancy screening, as well as comparable metrics for biomarker fields.
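A sketch of a multi-output head of the kind described, under assumptions (the paper's 3D CNN backbone is abstracted to a feature vector, and the loss choices are illustrative): one ordinal output for malignancy and one categorical output for calcification:

from tensorflow.keras import layers, Model

feat = layers.Input((256,))                        # assumed 3D-CNN feature vector
malig = layers.Dense(1, name="malignancy")(feat)   # ordinal, trained with MSE
calc = layers.Dense(6, activation="softmax", name="calcification")(feat)
model = Model(feat, [malig, calc])
model.compile(optimizer="adam",
              loss={"malignancy": "mse",
                    "calcification": "sparse_categorical_crossentropy"})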
Enhanced-Quality Gan (EQ-GAN) on Lung CT Scans: Toward Truth and Potential Hallucinations
Lung Computed Tomography (CT) scans are extensively used to screen lung diseases. Strategies such as large slice spacing and low-dose CT scans are often preferred to reduce radiation exposure and therefore the risk for patients' health. The counterpart is a significant degradation of image quality and/or resolution. In this work we investigate a generative adversarial network (GAN) for lung CT image enhanced-quality (EQ). Our EQ-GAN is trained on a high-quality lung CT cohort to recover the visual quality of scans degraded by blur and noise. The capability of our trained GAN to generate EQ CT scans is further illustrated on two test cohorts. Results confirm gains in visual quality metrics, remarkable visual enhancement of vessels, airways and lung parenchyma, as well as other enhancement patterns that require further investigation. We also compared automatic lung lobe segmentation on original versus EQ scans. Average Dice scores vary between lobes, can be as low as 0.3 and EQ scans enable segmentation of some lobes missed in the original scans. This paves the way to using EQ as pre-processing for lung lobe segmentation, further research to evaluate the impact of EQ to add robustness to airway and vessel segmentation, and to investigate anatomical details revealed in EQ scans.
Prediction of liver Dmean for proton beam therapy using deep learning and contour-based data augmentation
Jampa-Ngern, S.
Kobashi, K.
Shimizu, S.
Takao, S.
Nakazato, K.
Shirato, H.
J Radiat Res2021Journal Article, cited 0 times
Website
TCGA-LIHC
Deep Learning
LIVER
Computed Tomography (CT)
The prediction of liver Dmean with 3-dimensional radiation treatment planning (3DRTP) is time consuming in the selection of proton beam therapy (PBT), and deep learning prediction generally requires large and tumor-specific databases. We developed a simple dose prediction tool (SDP) using deep learning and a novel contour-based data augmentation (CDA) approach and assessed its usability. We trained the SDP to predict the liver Dmean immediately. Five and two computed tomography (CT) data sets of actual patients with liver cancer were used for the training and validation. Data augmentation was performed by artificially embedding 199 contours of virtual clinical target volume (CTV) into CT images for each patient. The data sets of the CTVs and OARs are labeled with liver Dmean for six different treatment plans using two-dimensional calculations assuming all tissue densities as 1.0. The test of the validated model was performed using 10 unlabeled CT data sets of actual patients. Contouring only of the liver and CTV was required as input. The mean relative error (MRE), the mean percentage error (MPE), and the regression coefficient between the planned and predicted Dmean were 0.1637, 6.6%, and 0.9455, respectively. The mean time required for the inference of liver Dmean of the six different treatment plans for a patient was 4.47 +/- 0.13 seconds. We conclude that the SDP is cost-effective and usable for gross estimation of liver Dmean in the clinic, although the accuracy should be improved further if we need the accuracy of liver Dmean to be compatible with 3DRTP.
Non-invasive tumor genotyping using radiogenomic biomarkers, a systematic review and oncology-wide pathway analysis
Jansen, Robin W
van Amstel, Paul
Martens, Roland M
Kooi, Irsan E
Wesseling, Pieter
de Langen, Adrianus J
Menke-Van der Houven, Catharina W
Oncotarget2018Journal Article, cited 0 times
Website
Radiogenomics
meta-analysis
Analysis of ensemble majority voting approach for acute lymphoblastic leukemia detection using svm trained on white blood cell abnormalities in images
Januardo, Bryan
Putradinata, Harley
Moniaga, Jurike V.
Nabiilah, Ghinaa Zain
Procedia Computer Science2024Journal Article, cited 0 times
C-NMC 2019
Leukemia is a cancer that attacks and infects white blood cells, which can hinder the ability of someone with leukemia to fight infections and may cause severe complications or even death. Acute Lymphoblastic Leukemia is a particular type of leukemia that is the most prevalent childhood cancer. Detecting this disease is a repetitive activity that can take a lot of time and resources, while Acute Lymphoblastic Leukemia has a fast growth rate. In this study, we classify leukemia using machine learning based on the images of white blood cells provided. This method could provide early diagnosis and reduce the burden on hematologist-oncologists by optimizing the resources that are available. This research uses the ensemble classifier concept, combining several SVM models with linear, polynomial, and RBF kernels that have been trained and then combined into one singular ensemble model. By combining these models, we hope to improve the classification performance by minimizing the drawbacks of using certain SVM kernels. The results of this classification obtained an accuracy performance of 70.01%.
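A plausible sketch of the majority-voting ensemble described above (hyperparameters are illustrative, not the paper's settings):

from sklearn.ensemble import VotingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

ensemble = VotingClassifier(
    estimators=[("lin", SVC(kernel="linear")),
                ("poly", SVC(kernel="poly", degree=3)),
                ("rbf", SVC(kernel="rbf"))],
    voting="hard")                                  # hard = majority vote
clf = make_pipeline(StandardScaler(), ensemble)
# clf.fit(X_train, y_train); clf.predict(X_test)    # with WBC image features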
Lung Cancer Detection: A Classification Approach Utilizing Oversampling and Support Vector Machines
Jara-Gavilanes, Adolfo
Robles-Bykbaev, Vladimir
SN Computer Science2023Journal Article, cited 0 times
Lung-PET-CT-Dx
Algorithm Development
Support Vector Machine (SVM)
Random Forest
Computer Aided Detection (CADe)
Computer Aided Diagnosis (CADx)
Positron Emission Tomography (PET)
Lung cancer is the type of cancer that causes the most deaths each year. It is also cancer with the lowest survival rate. This represents a health problem worldwide. Lung cancer has two subtypes: Non-Small Cell Lung Cancer (NSCLC) and Small Cell Lung Cancer (SCLC). For doctors, it can be hard to detect and differentiate them. Therefore, in this work, we present a method to help doctors with this issue. It consists of three phases: image preprocessing is the first phase. It starts gathering the data. After that, PET scans are selected. Then, all the scans are converted to grayscale images, and finally, all the images are joined to create a video from each patient’s scan. Next, the data extraction phase starts. In this phase, some frames are extracted from each video, and they are flattened and blended to create a row of information from each frame. Thus, a dataframe is created where each row represents a patient, and each column is a pixel value. To obtain better results, an oversampling technique is applied. In this manner, the classes are balanced. Following this, a dimensionality reduction technique is applied to reduce the number of columns produced by the previous steps and to check if this technique improves the results yielded by each model. Subsequently, the model evaluation phase begins. At this stage, two models are created: a Support Vector Machine (SVM), and a Random Forest. Ultimately, the findings are unveiled, revealing that the SVM emerged as the top-performing model, boasting an impressive 97% accuracy, 98% precision, and 97% sensitivity. Eventually, this method can be applied to detect and classify different diseases that involve PET scans.
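A hedged sketch of the oversample/reduce/classify pipeline described above; the paper does not name its exact oversampler, so SMOTE is assumed here, and X_frames/y are placeholders:

from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC

pipe = Pipeline([
    ("oversample", SMOTE(random_state=0)),   # balance NSCLC vs. SCLC rows
    ("reduce", PCA(n_components=50)),        # shrink the flattened-pixel columns
    ("svm", SVC(kernel="rbf")),
])
# pipe.fit(X_frames, y)  # rows = flattened PET frames per patient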
Wavelet Convolution Neural Network for Classification of Spiculated Findings in Mammograms
Jasionowska, Magdalena
Gacek, Aleksandra
2019Book Section, cited 0 times
CBIS-DDSM
Wavelet
Convolutional Neural Network (CNN)
Breast cancer
Computer Aided Detection (CADe)
Mammogram
The subject of this paper is computer-aided recognition of spiculated findings in low-contrast noisy mammograms, such as architectural distortions and spiculated masses. The issue of computer-aided detection still remains unresolved, especially for architectural distortions. The methodology applied was based on a wavelet convolution neural network. The originality of the proposed method lies in the way the input images are created. The input images were created as maximum-value maps based on three wavelet decomposition subbands (HL, LH, HH), each describing local details in the original image. Moreover, two types of convolution neural network architecture were optimized and empirically verified. The experimental study was conducted on the basis of 1585 regions of interest (512 × 512 pixels) taken from the Curated Breast Imaging Subset of Digital Database for Screening Mammography (CBIS-DDSM), containing both normal (1191) and abnormal (406) breast tissue images including clinically confirmed architectural distortions (141) and spiculated masses (265). With the use of a wavelet convolutional neural network with a reverse biorthogonal wavelet, the recognition accuracy of both types of pathologies reached over 87%, whereas the recognition accuracy for architectural distortions was 85% and for spiculated masses - 88%.
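A sketch of the input-map construction described above: the pixel-wise maximum over the detail subbands of a 2D wavelet decomposition (PyWavelets labels these horizontal/vertical/diagonal details; the reverse biorthogonal wavelet choice follows the text, the exact order is assumed):

import numpy as np
import pywt

def wavelet_max_map(roi):
    _, (ch, cv, cd) = pywt.dwt2(roi, "rbio2.2")   # detail subbands
    return np.maximum.reduce([np.abs(ch), np.abs(cv), np.abs(cd)])

roi = np.random.rand(512, 512)   # stand-in mammogram region of interest
max_map = wavelet_max_map(roi)   # input image for the CNN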
ALNett: A cluster layer deep convolutional neural network for acute lymphoblastic leukemia classification
Jawahar, M.
H, S.
L, J. A.
Gandomi, A. H.
Comput Biol Med2022Journal Article, cited 0 times
Website
C-NMC 2019
Computer Aided Diagnosis (CADx)
Convolutional Neural Network (CNN)
Deep learning
Leukemia
Transfer learning
Acute Lymphoblastic Leukemia (ALL) is cancer in which bone marrow overproduces undeveloped lymphocytes. Over 6500 cases of ALL are diagnosed every year in the United States in both adults and children, accounting for around 25% of pediatric cancers, and the trend continues to rise. With the advancements of AI and big data analytics, early diagnosis of ALL can be used to aid the clinical decisions of physicians and radiologists. This research proposes a deep neural network-based (ALNett) model that employs depth-wise convolution with different dilation rates to classify microscopic white blood cell images. Specifically, the cluster layers encompass convolution and max-pooling followed by a normalization process that provides enriched structural and contextual details to extract robust local and global features from the microscopic images for the accurate prediction of ALL. The performance of the model was compared with various pre-trained models, including VGG16, ResNet-50, GoogleNet, and AlexNet, based on precision, recall, accuracy, F1 score, loss accuracy, and receiver operating characteristic (ROC) curves. Experimental results showed that the proposed ALNett model yielded the highest classification accuracy of 91.13% and an F1 score of 0.96 with less computational complexity. ALNett demonstrated promising ALL categorization and outperformed the other pre-trained models.
Multistage Lung Cancer Detection and Prediction Using Deep Learning
Jawarkar, Jay
Solanki, Nishit
Vaishnav, Meet
Vichare, Harsh
Degadwala, Sheshang
International Journal of Scientific Research in Science, Engineering and Technology2021Journal Article, cited 0 times
Website
TCGA-LUAD
Machine Learning
K Nearest Neighbor (KNN)
Random forest classifier
LUNG
Radiomics
Lung cancer has long been one of the leading causes of cancer death worldwide, claiming more than a million lives every year. When a person develops lung cancer, abnormal cells grow uncontrollably and cluster together to form a tumor. A malignant tumor is a collection of abnormal, proliferating cells that can invade and attack nearby tissue. Detecting lung cancer at an early stage is therefore essential. At present, various systems based on image processing and machine learning methodologies are used for imaging lung nodules, and CT scan images are used to detect and classify these nodules at an early stage. In this paper, we present a method for identifying lung cancer patients at an early stage. We have considered the shape and texture features of CT scan images for classification. The classification is performed using several machine learning methodologies, and their outcomes are compared. Keywords: Decision Tree, KNN, RF, DF, Machine Learning
Deep Neural Network Based Classifier Model for Lung Cancer Diagnosis and Prediction System in Healthcare Informatics
Jayaraj, D.
Sathiamoorthy, S.
2019Conference Paper, cited 0 times
LIDC-IDRI
LUNG
Lung cancer is a major deadly disease that causes mortality due to uncontrollable cell growth. This problem has led to increased interest among physicians as well as academicians in developing efficient diagnosis models. Therefore, a novel method for automated identification of lung nodules becomes essential, and it forms the motivation of this study. This paper presents a new deep learning classification model for lung cancer diagnosis. The presented model involves four main steps, namely preprocessing, feature extraction, segmentation, and classification. A particle swarm optimization (PSO) algorithm is used for segmentation, and a deep neural network (DNN) is applied for classification. The presented PSO-DNN model is tested against a set of sample lung images, and the results verified the effectiveness of the proposed model on all the applied images.
SVM kernel Methods with Data Normalization for Lung Cancer Survivability Prediction Application
Jenipher, V. Nisha
Radhika, S.
2021Conference Paper, cited 0 times
NSCLC-Radiomics
Head-Neck-Radiomics-HN1
NSCLC-Radiomics-Interobserver1
RIDER-LungCT-Seg
NSCLC-Radiomics-Genomics
Cancer is a life-threatening disease affecting people around the world. Lung cancer is the leading cause of cancer death across the globe, and therefore many algorithms have been applied to predicting the survival rate of lung cancer patients. As a result, the survival rate of lung cancer patients is increasing gradually. The Support Vector Machine (SVM) technique has higher accuracy in prediction than other techniques. The performance of the SVM algorithm depends on the kernel function. In this paper, a comparison of three different kernel functions predicting the survival rate of a lung cancer patient with an efficient normalization technique is studied. Experiments are conducted on the dataset obtained from The Cancer Imaging Archive (TCIA). Along with SVM kernel functions, five machine learning techniques were also used in predicting the survival rate of lung cancer. RBF_SVM with normalized data produced a high accuracy of 97.72% compared to the other algorithms. Various performance metrics such as accuracy, precision, recall, and F1 score are used to evaluate the performance of the SVM kernel functions.
Lung tumor cell classification with lightweight mobileNetV2 and attention-based SCAM enhanced faster R-CNN
Jenipher, V. Nisha
Radhika, S.
Evolving Systems2024Journal Article, cited 0 times
Website
Lung-PET-CT-Dx
LIDC-IDRI
U-Net
Classification
MobileNetV2
Segmentation
Algorithm Development
Early and precise detection of lung tumor cells is paramount for providing adequate medication and increasing the survivability of patients. To achieve this, the Enhanced Faster R-CNN with MobileNetV2 and SCAM framework is presented for improving the diagnostic accuracy of lung tumor cell classification. The U-Net architecture optimized by Stochastic Gradient Descent (SGD) is employed to carry out clinical image segmentation. The developed approach leverages the lightweight MobileNetV2 backbone network and the attention mechanism called Spatial and Channel Attention Module (SCAM) for improving the feature extraction as well as the feature representation and localization of lung tumor cells. The proposed method integrates a MobileNetV2 backbone network, due to its lightweight design, for deriving valuable features of the input clinical images while reducing the complexity of the network architecture. Moreover, it also incorporates the attention module SCAM for the creation of spatially and channel-wise informative features to enhance the representation of lung tumor cell features and their localization, concentrating on important locations. To assess the efficacy of the method, several high-performance lung tumor cell classification techniques, ECNN, Lung-Retina Net, CNN-SVM, CCDC-HNN, and MTL-MGAN, and datasets including the Lung-PET-CT-Dx dataset, the LIDC-IDRI dataset, and the Chest CT-Scan images dataset are taken to carry out experimental evaluation. By conducting a comprehensive comparative analysis across different metrics and methods, the proposed method achieves an impressive performance with an accuracy of 98.6%, specificity of 96.8%, sensitivity of 97.5%, and precision of 98.2%. Furthermore, the experimental outcomes also reveal that the proposed method reduces the complexity of the network and obtains improved diagnostic outcomes with the available annotated data.
Assessment of prostate cancer prognostic Gleason grade group using zonal-specific features extracted from biparametric MRI using a KNN classifier
Jensen, C.
Carl, J.
Boesen, L.
Langkilde, N. C.
Ostergaard, L. R.
J Appl Clin Med Phys2019Journal Article, cited 0 times
Website
SPIE-AAPM PROSTATEx Challenge
PROSTATE
K Nearest Neighbor (KNN)
Classification
PURPOSE: To automatically assess the aggressiveness of prostate cancer (PCa) lesions using zonal-specific image features extracted from diffusion weighted imaging (DWI) and T2W MRI. METHODS: Region of interest was extracted from DWI (peripheral zone) and T2W MRI (transitional zone and anterior fibromuscular stroma) around the center of 112 PCa lesions from 99 patients. Image histogram and texture features, 38 in total, were used together with a k-nearest neighbor classifier to classify lesions into their respective prognostic Grade Group (GG) (proposed by the International Society of Urological Pathology 2014 consensus conference). A semi-exhaustive feature search was performed (1-6 features in each feature set) and validated using threefold stratified cross validation in a one-versus-rest classification setup. RESULTS: Classifying PCa lesions into GGs resulted in AUC of 0.87, 0.88, 0.96, 0.98, and 0.91 for GG1, GG2, GG1 + 2, GG3, and GG4 + 5 for the peripheral zone, respectively. The results for transitional zone and anterior fibromuscular stroma were AUC of 0.85, 0.89, 0.83, 0.94, and 0.86 for GG1, GG2, GG1 + 2, GG3, and GG4 + 5, respectively. CONCLUSION: This study showed promising results with reasonable AUC values for classification of all GG indicating that zonal-specific imaging features from DWI and T2W MRI can be used to differentiate between PCa lesions of various aggressiveness.
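A minimal sketch of the classification setup described: k-NN wrapped one-versus-rest and scored with threefold stratified cross-validation (the feature matrix X and grade-group labels y are placeholders, not the study's data):

from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.multiclass import OneVsRestClassifier
from sklearn.neighbors import KNeighborsClassifier

model = OneVsRestClassifier(KNeighborsClassifier(n_neighbors=5))
cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
# aucs = cross_val_score(model, X, y, cv=cv, scoring="roc_auc_ovr")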
Lung nodule detection from CT scans using 3D convolutional neural networks without candidate selection
Deep-learning soft-tissue decomposition in chest radiography using fast fuzzy C-means clustering with CT datasets
Jeon, Duhee
Lim, Younghwan
Lee, Minjae
Kim, Guna
Cho, Hyosung
Journal of Instrumentation2023Journal Article, cited 0 times
SPIE-AAPM Lung CT Challenge
Image denoising
Fuzzy C-means
Algorithm Development
X-ray chest classification
Chest radiography is the most routinely used X-ray imaging technique for screening and diagnosing lung and chest disease, such as lung cancer and pneumonia. However, the clinical interpretation of the hidden and obscured anatomy in chest X-ray images remains challenging because of the bony structures overlapping the lung area. Thus, multi-perspective imaging with a high radiation dose is often required. In this study, to address this problem, we propose a deep-learning soft-tissue decomposition method using fast fuzzy C-means (FFCM) clustering with computed tomography (CT) datasets. In this method, FFCM clustering is used to decompose a CT image into bone and soft-tissue components, which are synthesized into digitally reconstructed radiographs (DRRs) to obtain large amounts of X-ray decomposition datasets as ground truths for training. In the training stage, chest and soft-tissue DRRs are used as input and label data, respectively, for training the network. During the testing, a chest X-ray image is fed into the trained network to output the corresponding soft-tissue image component. To verify the efficacy of the proposed method, we conducted a feasibility study on clinical CT datasets available from the AAPM Lung CT Challenge. According to our results, the proposed method effectively yielded soft-tissue decomposition from chest X-ray images; this is encouraging for reducing the visual complexity of chest X-ray images. Consequently, the findings of our feasibility study indicate that the proposed method can offer a promising outcome for this purpose.
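A bare-bones fuzzy C-means iteration for intuition (illustrative only, not the paper's fast variant), clustering CT intensities into soft tissue vs. bone:

import numpy as np

def fcm(x, c=2, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=len(x))        # memberships, shape (N, c)
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        w = u ** m
        centers = (w * x[:, None]).sum(0) / w.sum(0)  # weighted cluster centers
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        u = 1.0 / (d ** p * (1.0 / d ** p).sum(1, keepdims=True))
    return centers, u

hu = np.concatenate([np.random.normal(40, 10, 500),    # soft-tissue HU (synthetic)
                     np.random.normal(700, 50, 100)])  # bone HU (synthetic)
centers, memberships = fcm(hu)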
Computer-aided nodule detection and volumetry to reduce variability between radiologists in the interpretation of lung nodules at low-dose screening CT
Jeon, Kyung Nyeo
Goo, Jin Mo
Lee, Chang Hyun
Lee, Youkyung
Choo, Ji Yung
Lee, Nyoung Keun
Shim, Mi-Suk
Lee, In Sun
Kim, Kwang Gi
Gierada, David S
Investigative radiology2012Journal Article, cited 51 times
Website
NLST
lung
LDCT
Depth estimation from monocular endoscopy using simulation and image transfer approach
Jeong, B. H.
Kim, H. K.
Son, Y. D.
Comput Biol Med2024Journal Article, cited 0 times
Website
CT COLONOGRAPHY
Deep learning
Depth estimation
Endoscopy
Generative Adversarial Network (GAN)
Simulation-to-real transfer
Obtaining accurate distance or depth information in endoscopy is crucial for the effective utilization of navigation systems. However, due to space constraints, incorporating depth cameras into endoscopic systems is often impractical. Our goal is to estimate depth images directly from endoscopic images using deep learning. This study presents a three-step methodology for training a depth-estimation network model. Initially, simulated endoscopy images and corresponding depth maps are generated using Unity based on a colon surface model obtained from segmented computed tomography colonography data. Subsequently, a cycle generative adversarial network model is employed to enhance the realism of the simulated endoscopy images. Finally, a deep learning model is trained using the synthesized endoscopy images and depth maps to estimate depths accurately. The performance of the proposed approach is evaluated and compared against prior studies utilizing unsupervised training methods. The results demonstrate the superior precision of the proposed technique in estimating depth images within endoscopy. The proposed depth estimation method holds promise for advancing the field by enabling enhanced navigation, improved lesion marking capabilities, and ultimately leading to better clinical outcomes.
Brain Tumor Segmentation Using a 3D FCN with Multi-scale Loss
Jesson, Andrew
Arbel, Tal
2018Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
In this work, we use a 3D Fully Convolutional Network (FCN) architecture for brain tumor segmentation. Our method includes a multi-scale loss function on predictions given at each resolution of the FCN. Using this approach, the higher resolution features can be combined with the initial segmentation at a lower resolution so that the FCN models context in both the image and label domains. The model is trained using a multi-scale loss function, and a curriculum on sample weights is employed to address class imbalance. We achieved competitive results during the testing phase of the BraTS 2017 Challenge for segmentation with Dice scores of 0.710, 0.860, and 0.783 for enhancing tumor, whole tumor, and tumor core, respectively.
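A hedged PyTorch sketch of a multi-scale segmentation loss of the kind this abstract describes: a loss term at every decoder resolution, with labels downsampled to match. The weights and the cross-entropy choice are illustrative assumptions, not the authors' exact configuration.
```python
import torch
import torch.nn.functional as F

def multi_scale_loss(preds, labels, weights=(1.0, 0.5, 0.25)):
    """preds: list of logits [(B, C, D, H, W), ...] from fine to coarse;
    labels: (B, D, H, W) integer ground truth at full resolution."""
    total = 0.0
    for logits, w in zip(preds, weights):
        # nearest-neighbour downsampling keeps labels integer-valued
        lab = F.interpolate(labels[:, None].float(), size=logits.shape[2:],
                            mode="nearest")[:, 0].long()
        total = total + w * F.cross_entropy(logits, lab)
    return total
```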
CASED: Curriculum Adaptive Sampling for Extreme Data Imbalance
Jesson, Andrew
Guizard, Nicolas
Ghalehjegh, Sina Hamidi
Goblot, Damien
Soudan, Florian
Chapados, Nicolas
2017Conference Proceedings, cited 18 times
Website
LIDC-IDRI
LUNA16 Challenge
Computer Aided Detection (CAD)
Segmentation
Classification
Algorithm Development
We introduce CASED, a novel curriculum sampling algorithm that facilitates the optimization of deep learning segmentation or detection models on data sets with extreme class imbalance. We evaluate the CASED learning framework on the task of lung nodule detection in chest CT. In contrast to two-stage solutions, wherein nodule candidates are first proposed by a segmentation model and refined by a second detection stage, CASED improves the training of deep nodule segmentation models (e.g. UNet) to the point where state of the art results are achieved using only a trivial detection stage. CASED improves the optimization of deep segmentation models by allowing them to first learn how to distinguish nodules from their immediate surroundings, while continuously adding a greater proportion of difficult-to-classify global context, until uniformly sampling from the empirical data distribution. Using CASED during training yields a minimalist proposal to the lung nodule detection problem that tops the LUNA16 nodule detection benchmark with an average sensitivity score of 88.35%. Furthermore, we find that models trained using CASED are robust to nodule annotation quality by showing that comparable results can be achieved when only a point and radius for each ground truth nodule are provided during training. Finally, the CASED learning framework makes no assumptions with regard to imaging modality or segmentation target and should generalize to other medical imaging problems where class imbalance is a persistent problem.
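The curriculum idea in CASED can be summarized in a few lines: begin by sampling mostly nodule-containing patches and anneal toward uniform sampling from the empirical distribution. The linear schedule below is an assumption for illustration.
```python
import numpy as np

def sample_patch(epoch, n_epochs, nodule_patches, all_patches, rng):
    p_uniform = min(1.0, epoch / n_epochs)  # grows from 0 to 1 over training
    pool = all_patches if rng.random() < p_uniform else nodule_patches
    return pool[rng.integers(len(pool))]
```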
External Validation of Robust Radiomic Signature to Predict 2-Year Overall Survival in Non-Small-Cell Lung Cancer
Jha, A. K.
Sherkhane, U. B.
Mithun, S.
Jaiswar, V.
Purandare, N.
Prabhash, K.
Wee, L.
Rangarajan, V.
Dekker, A.
J Digit Imaging2023Journal Article, cited 0 times
NSCLC-Radiomics
Computed Tomography (CT)
Radiomic feature
LUNG
Classification
Random Forest
Lung cancer is the second most fatal disease worldwide. In the last few years, radiomics has been explored to develop prediction models for various clinical endpoints in lung cancer. However, the robustness of radiomic features is under question and has been identified as one of the roadblocks to the implementation of radiomic-based prediction models in the clinic. Many past studies have suggested identifying robust radiomic features for prediction model development. In our earlier study, we identified robust radiomic features for prediction model development. The objective of this study was to develop and validate robust radiomic signatures for predicting 2-year overall survival in non-small cell lung cancer (NSCLC). This retrospective study included a cohort of 300 stage I-IV NSCLC patients. Data from 200 institutional patients were included for training and internal validation, and data from 100 patients from The Cancer Imaging Archive (TCIA) open-source image repository were used for external validation. Radiomic features were extracted from the CT images of both cohorts. Feature selection was performed using hierarchical clustering, a Chi-squared test, and recursive feature elimination (RFE). In total, six prediction models were developed using random forest (RF-Model-O, RF-Model-B), gradient boosting (GB-Model-O, GB-Model-B), and support vector (SV-Model-O, SV-Model-B) classifiers to predict 2-year overall survival (OS) on the original as well as balanced data. Model validation was performed using 10-fold cross-validation, internal validation, and external validation. Using a multistep feature selection method, the overall top 10 features were chosen. On internal validation, the two random forest models (RF-Model-O, RF-Model-B) displayed the highest accuracy; their scores on the original and balanced datasets were 0.81 and 0.77, respectively. During external validation, both random forest models' accuracy was 0.68. In our study, robust radiomic features showed promising predictive performance for 2-year overall survival in NSCLC.
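A minimal scikit-learn sketch of the selection-plus-classifier pipeline this abstract describes (chi-squared filtering, recursive feature elimination, then a random forest). The hierarchical-clustering step is omitted, and all thresholds and data are placeholder assumptions.
```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2, RFE
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

X = np.random.rand(200, 100)      # stand-in radiomic features (train cohort)
y = np.random.randint(0, 2, 200)  # stand-in 2-year overall survival labels

model = Pipeline([
    ("scale", MinMaxScaler()),    # chi2 requires non-negative features
    ("chi2", SelectKBest(chi2, k=30)),
    ("rfe", RFE(RandomForestClassifier(n_estimators=100, random_state=0),
                n_features_to_select=10)),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
]).fit(X, y)
```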
Image Domain Multi-Material Decomposition Noise Suppression Through Basis Transformation and Selective Filtering
Ji, X.
Zhuo, X.
Lu, Y.
Mao, W.
Zhu, S.
Quan, G.
Xi, Y.
Lyu, T.
Chen, Y.
IEEE J Biomed Health Inform2024Journal Article, cited 0 times
Website
LDCT-and-Projection-data
Spectral CT can provide material characterization ability to offer more precise material information for diagnosis purposes. However, the material decomposition process generally leads to amplification of noise, which significantly limits the utility of the material basis images. To mitigate this problem, an image domain noise suppression method was proposed in this work. The method performs a basis transformation of the material basis images based on a singular value decomposition. The noise variances of the original spectral CT images were incorporated in the matrix to be decomposed to ensure that the transformed basis images are statistically uncorrelated. Due to the difference in noise amplitudes in the transformed basis images, a selective filtering method was proposed with the low-noise transformed basis image as guidance. The method was evaluated using both numerical simulation and real clinical dual-energy CT data. Results demonstrated that, compared with existing methods, the proposed method performs better in preserving the spatial resolution and the soft tissue contrast while suppressing the image noise. The proposed method is also computationally efficient and can realize real-time noise suppression for clinical spectral CT images.
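A schematic numpy reading of the basis-transformation idea: diagonalize the noise covariance of the two material-basis images so the transformed channels are statistically uncorrelated; the lowest-noise channel can then guide selective filtering. The covariance values and images are placeholders, and this is an interpretation of the method, not the authors' implementation.
```python
import numpy as np

basis = np.random.rand(2, 256, 256)  # stand-in water/iodine basis images
noise_cov = np.array([[2.0, -1.2],
                      [-1.2, 1.0]])  # assumed (placeholder) noise covariance

eigvals, eigvecs = np.linalg.eigh(noise_cov)      # decorrelating basis
transformed = np.einsum("ij,jhw->ihw", eigvecs.T, basis)
guide = transformed[np.argmin(eigvals)]           # low-noise guidance image
```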
ResDAC-Net: a novel pancreas segmentation model utilizing residual double asymmetric spatial kernels
Ji, Z.
Liu, J.
Mu, J.
Zhang, H.
Dai, C.
Yuan, N.
Ganchev, I.
Med Biol Eng Comput2024Journal Article, cited 0 times
Pancreas-CT
Medical Decathlon
Image segmentation
Medical image processing
Pancreatic segmentation
ResDAC-Net
Adjacent layer feature fusion block
Convolutional Neural Network (CNN)
The pancreas is not only situated against a complex abdominal background but also surrounded by other abdominal organs and adipose tissue, resulting in blurred organ boundaries. Accurate segmentation of pancreatic tissue is crucial for computer-aided diagnosis systems, as it can be used for surgical planning, navigation, and assessment of organs. In light of this, the current paper proposes a novel Residual Double Asymmetric Convolution Network (ResDAC-Net) model. Firstly, newly designed ResDAC blocks are used to highlight pancreatic features. Secondly, the feature fusion between adjacent encoding layers fully utilizes the low-level and deep-level features extracted by the ResDAC blocks. Finally, parallel dilated convolutions are employed to increase the receptive field to capture multiscale spatial information. ResDAC-Net is highly competitive with existing state-of-the-art models on three out of four evaluation metrics, including the two main ones used for segmentation performance evaluation (i.e., DSC and the Jaccard index).
H2NF-Net for Brain Tumor Segmentation Using Multimodal MR Imaging: 2nd Place Solution to BraTS Challenge 2020 Segmentation Task
Jia, Haozhe
Cai, Weidong
Huang, Heng
Xia, Yong
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
In this paper, we propose a Hybrid High-resolution and Non-local Feature Network (H2NF-Net) to segment brain tumors in multimodal MR images. Our H2NF-Net uses single and cascaded HNF-Nets to segment different brain tumor sub-regions and combines the predictions as the final segmentation. We trained and evaluated our model on the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2020 dataset. The results on the test set show that the combination of the single and cascaded models achieved average Dice scores of 0.78751, 0.91290, and 0.85461, as well as Hausdorff distances (95%) of 26.57525, 4.18426, and 4.97162, for the enhancing tumor, whole tumor, and tumor core, respectively. Our method won second place in the BraTS 2020 challenge segmentation task among nearly 80 participants.
Learning multi-scale synergic discriminative features for prostate image segmentation
Jia, Haozhe
Cai, Weidong
Huang, Heng
Xia, Yong
Pattern Recognition2022Journal Article, cited 0 times
ISBI-MR-Prostate-2013
Although deep convolutional neural networks (DCNNs) have been proposed for prostate MR image segmentation, the effectiveness of these methods is often limited by inadequate semantic discrimination and spatial context modeling. To address these issues, we propose a Multi-scale Synergic Discriminative Network (MSD-Net), which includes a shared encoder, a segmentation decoder, and a boundary detection decoder. We further design the cascaded pyramid convolutional block and residual refinement block, and incorporate them, together with the channel attention block, into MSD-Net to exploit the multi-scale spatial contextual information and semantically consistent features of the gland. We also fuse the features from the two decoders to boost segmentation performance, and introduce a synergic multi-task loss to impose a consistency constraint on the joint segmentation and boundary detection. We evaluated MSD-Net against several prostate segmentation methods on three public datasets and achieved improved accuracy. Our results indicate that the proposed MSD-Net outperforms existing methods, setting a new state of the art for prostate segmentation in magnetic resonance images.
DADFN: dynamic adaptive deep fusion network based on imaging genomics for prediction recurrence of lung cancer
Jia, Liye
Wu, Wei
Hou, Guojie
Zhang, Yanan
Zhao, Juanjuan
Qiang, Yan
Wang, Long
Physics in Medicine and Biology2023Journal Article, cited 0 times
NSCLC Radiogenomics
Objective. Recently, imaging genomics has increasingly shown great potential for predicting postoperative recurrence in lung cancer patients. However, prediction methods based on imaging genomics have some disadvantages, such as small sample size, high-dimensional information redundancy, and poor multimodal fusion efficiency. This study aims to develop a new fusion model to overcome these challenges. Approach. In this study, a dynamic adaptive deep fusion network (DADFN) model based on imaging genomics is proposed for predicting recurrence of lung cancer. In this model, a 3D spiral transformation is used to augment the dataset, which better retains the 3D spatial information of the tumor for deep feature extraction. The intersection of genes screened by LASSO, F-test and CHI-2 selection methods is used to eliminate redundant data and retain the most relevant gene features for gene feature extraction. A dynamic adaptive fusion mechanism based on the cascade idea is proposed, and multiple different types of base classifiers are integrated in each layer, which can fully utilize the correlation and diversity between multimodal information to better fuse deep features, handcrafted features and gene features. Main results. The experimental results show that the DADFN model achieves good performance, with accuracy and AUC of 0.884 and 0.863, respectively. This indicates that the model is effective in predicting lung cancer recurrence. Significance. The proposed model has the potential to help physicians stratify the risk of lung cancer patients and can be used to identify patients who may benefit from a personalized treatment option.
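The gene-screening step, taking the intersection of features kept by LASSO, an F-test, and chi-squared selection, can be sketched as follows; data, selection sizes, and the CV setting are placeholder assumptions.
```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif, chi2
from sklearn.linear_model import LassoCV

X = np.random.rand(100, 500)      # stand-in gene-expression matrix
y = np.random.randint(0, 2, 100)  # stand-in recurrence labels

sel_lasso = set(np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_))
sel_f = set(SelectKBest(f_classif, k=50).fit(X, y).get_support(indices=True))
sel_chi2 = set(SelectKBest(chi2, k=50).fit(X, y).get_support(indices=True))

kept_genes = sorted(sel_lasso & sel_f & sel_chi2)  # intersection of the three
```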
A fine-grained image classification algorithm based on self-supervised learning and multi-feature fusion of blood cells
Leukemia is a prevalent and widespread blood disease, and its early diagnosis is crucial for effective patient treatment. Diagnosing leukemia types heavily relies on pathologists' morphological examination of blood cell images. However, this process is tedious and time-consuming, and the diagnostic results are subjective, leading to potential misdiagnosis and underdiagnosis. This paper proposes a blood cell image classification method that combines MAE with an enhanced Vision Transformer to tackle these challenges. Initially, pre-training occurs on two datasets, TMAMD and Red4, using the MAE self-supervised learning algorithm. Subsequently, the pre-training weights are transferred to our improved model. This paper introduces feature fusion of the outputs from each layer of the Transformer encoder to maximize the utilization of features extracted from lower layers, such as the color, contour, and texture of blood cells, along with deeper semantic features. Furthermore, dynamic margins for the sub-center ArcFace loss function are employed to enhance the model's fine-grained feature representation by achieving inter-class dispersion and intra-class aggregation. Models trained using our method achieved state-of-the-art results on both the TMAMD and Red4 datasets, with classification accuracies of 93.51% and 81.41%, respectively. This achievement is expected to be a valuable reference for physicians in their clinical diagnoses.
BiTr-Unet: A CNN-Transformer Combined Network for MRI Brain Tumor Segmentation
Jia, Q.
Shu, H.
Brainlesion2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2021
BRAIN
Segmentation
Algorithm Development
Convolutional Neural Network (CNN)
Brain Tumor
Deep Learning
Multi-modal Image Segmentation
Vision Transformer
Convolutional neural networks (CNNs) have achieved remarkable success in automatically segmenting organs or lesions on 3D medical images. Recently, vision transformer networks have exhibited exceptional performance in 2D image classification tasks. Compared with CNNs, transformer networks have an appealing advantage of extracting long-range features due to their self-attention algorithm. Therefore, we propose a CNN-Transformer combined model, called BiTr-Unet, with specific modifications for brain tumor segmentation on multi-modal MRI scans. Our BiTr-Unet achieves good performance on the BraTS2021 validation dataset with median Dice scores of 0.9335, 0.9304 and 0.8899, and median Hausdorff distances of 2.8284, 2.2361 and 1.4142 for the whole tumor, tumor core, and enhancing tumor, respectively. On the BraTS2021 testing dataset, the corresponding results are 0.9257, 0.9350 and 0.8874 for Dice score, and 3, 2.2361 and 1.4142 for Hausdorff distance. The code is publicly available at https://github.com/JustaTinyDot/BiTr-Unet.
Wearable Mechatronic Ultrasound-Integrated AR Navigation System for Lumbar Puncture Guidance
Jiang, Baichuan
Wang, Liam
Xu, Keshuai
Hossbach, Martin
Demir, Alican
Rajan, Purnima
Taylor, Russell H.
Moghekar, Abhay
Foroughi, Pezhman
Kazanzides, Peter
Boctor, Emad M.
IEEE Transactions on Medical Robotics and Bionics2023Journal Article, cited 0 times
COVID-19-NY-SBU
As one of the most commonly performed spinal interventions in routine clinical practice, lumbar punctures are usually done with only hand palpation and trial-and-error. Failures can prolong procedure time and introduce complications such as cerebrospinal fluid leaks and headaches. Therefore, an effective needle insertion guidance method is desired. In this work, we present a complete lumbar puncture guidance system with the integration of (1) a wearable mechatronic ultrasound imaging device, (2) volume-reconstruction and bone surface estimation algorithms and (3) two alternative augmented reality user interfaces for needle guidance, including a HoloLens-based and a tablet-based solution. We conducted a quantitative evaluation of the end-to-end navigation accuracy, which shows that our system can achieve an overall needle navigation accuracy of 2.83 mm and 2.76 mm for the Tablet-based and the HoloLens-based solutions, respectively. In addition, we conducted a preliminary user study to qualitatively evaluate the effectiveness and ergonomics of our system on lumbar phantoms. The results show that users were able to successfully reach the target in an average of 1.12 and 1.14 needle insertion attempts for Tablet-based and HoloLens-based systems, respectively, exhibiting the potential to reduce the failure rates of lumbar puncture procedures with the proposed lumbar-puncture guidance.
An improved attentive residue multi-dilated network for thermal noise removal in magnetic resonance images
Jiang, Bowen
Yue, Tao
Hu, Xuemei
2024Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Magnetic resonance imaging (MRI) technology is crucial in the medical field, but the thermal noise in reconstructed MR images may interfere with clinical diagnosis. Removing the thermal noise in MR images poses two main challenges. First, thermal noise in an MR image obeys a Rician distribution, where the statistical features are not consistent in different regions of the image. In this case, conventional denoising methods like spatial convolutional filtering are not appropriate. Second, details and edge information in the image may get damaged while smoothing the noise. This paper proposes a novel deep-learning model to denoise MR images. First, the model learns a binary mask to separate the background and signal regions of the noised image, making the noise left in the signal region obey a unified statistical distribution. Second, the model is designed as an attentive residual multi-dilated network (ARM-Net), composed of a multi-branch structure and supplemented with a frequency-domain-optimizable discrete cosine transform module. In this way, the deep-learning model is more effective in removing the noise while maintaining the details of the original image. Furthermore, we have also made improvements on the original ARM-Net baseline to establish a new model called ARM-Net v2, which is more efficient and effective. Experimental results illustrate that, on the BraTS 2018 dataset, our method achieves PSNRs of 39.7087 and 32.6005 at noise levels of 5% and 20%, achieving state-of-the-art performance among existing MR image denoising methods.
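For context, Rician noise arises when Gaussian noise corrupts both the real and imaginary MR channels before the magnitude is taken; a short numpy sketch for generating noisy training pairs (sigma is an illustrative noise level):
```python
import numpy as np

def add_rician_noise(image, sigma=0.05, seed=0):
    rng = np.random.default_rng(seed)
    real = image + rng.normal(0.0, sigma, image.shape)
    imag = rng.normal(0.0, sigma, image.shape)
    return np.sqrt(real ** 2 + imag ** 2)  # magnitude image, Rician-distributed
```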
Fusion Radiomics Features from Conventional MRI Predict MGMT Promoter Methylation Status in Lower Grade Gliomas
Jiang, Chendan
Kong, Ziren
Liu, Sirui
Feng, Shi
Zhang, Yiwei
Zhu, Ruizhe
Chen, Wenlin
Wang, Yuekun
Lyu, Yuelei
You, Hui
Zhao, Dachun
Wang, Renzhi
Wang, Yu
Ma, Wenbin
Feng, Feng
Eur J Radiol2019Journal Article, cited 0 times
TCGA-LGG
Radiomics
Radiogenomics
Classification
Magnetic Resonance Imaging (MRI)
PURPOSE: The methylation status of the O6-methylguanine-DNA methyltransferase (MGMT) promoter has been proven to be a prognostic and predictive biomarker for lower grade glioma (LGG). This study aims to build a radiomics model to preoperatively predict the MGMT promoter methylation status in LGG. METHOD: 122 pathology-confirmed LGG patients were retrospectively reviewed, with 87 local patients as the training dataset and 35 from The Cancer Imaging Archive as independent validation. A total of 1702 radiomics features were extracted from three-dimensional contrast-enhanced T1 (3D-CE-T1)-weighted and T2-weighted MRI images, including 14 shape, 18 first-order, 75 texture, and 744 wavelet features for each series. The radiomics features were selected with the least absolute shrinkage and selection operator algorithm, and prediction models were constructed with multiple classifiers. Models were evaluated using receiver operating characteristic (ROC) analysis. RESULTS: Five radiomics prediction models, namely, a 3D-CE-T1-weighted single radiomics model, a T2-weighted single radiomics model, a fusion radiomics model, a linear combination radiomics model, and a clinical integrated model, were built. The fusion radiomics model, which was constructed by concatenating features from both series, displayed the best performance, with an accuracy of 0.849 and an area under the curve (AUC) of 0.970 (0.939-1.000) in the training dataset, and an accuracy of 0.886 and an AUC of 0.898 (0.786-1.000) in the validation dataset. Linear combination of the single radiomics models and integration of clinical factors did not improve performance. CONCLUSIONS: Conventional MRI radiomics models are reliable for predicting the MGMT promoter methylation status in LGG patients. The fusion of radiomics features from different series may increase prediction performance.
Learning efficient, explainable and discriminative representations for pulmonary nodules classification
Jiang, Hanliang
Shen, Fuhao
Gao, Fei
Han, Weidong
Pattern Recognition2021Journal Article, cited 0 times
LIDC-IDRI
Automatic pulmonary nodule classification is significant for the early diagnosis of lung cancers. Recently, deep learning techniques have enabled remarkable progress in this field. However, these deep models are typically of high computational complexity and work in a black-box manner. To combat these challenges, in this work, we aim to build an efficient and (partially) explainable classification model. Specifically, we use neural architecture search (NAS) to automatically search 3D network architectures with an excellent accuracy/speed trade-off. Besides, we use the convolutional block attention module (CBAM) in the networks, which helps us understand the reasoning process. During training, we use A-Softmax loss to learn angularly discriminative representations. In the inference stage, we employ an ensemble of diverse neural networks to improve the prediction accuracy and robustness. We conduct extensive experiments on the LIDC-IDRI database. Compared with previous state-of-the-art methods, our model shows highly comparable performance using less than 1/40 of the parameters. Besides, an empirical study shows that the reasoning process of the learned networks is in conformity with physicians' diagnoses. Related code and results have been released at: https://github.com/fei-hdu/NAS-Lung.
Improving the Pulmonary Nodule Classification Based on KPCA-CNN Model
Jiang, Peichen
Highlights in Science, Engineering and Technology2022Journal Article, cited 0 times
Website
LIDC-IDRI
Deep Learning
Convolutional Neural Network (CNN)
Principal component analysis (PCA)
LUNG
Segmentation
LUNA16 Challenge
Lung cancer mortality, the main cause of cancer-associated death all over the world, can be reduced by screening high-risk patients with low-dose computed tomography (CT) scans for lung cancer. In CT screening, radiologists have to examine millions of CT images, putting a great load on them. Deep convolutional neural networks (CNNs) have the potential to improve screening efficiency. In the examination of lung cancer screening CT images, estimating the chance of a malignant nodule at a specific location on a CT scan is a critical step. Low-dimensional convolutional neural networks and other methods are unable to provide sufficient estimation for this task, while the most advanced 3-dimensional CNNs (3D-CNNs) have extremely high computing requirements. This article presents a novel strategy for reducing false positives in automatic pulmonary nodule diagnosis from 3-dimensional CT imaging by merging a kernel Principal Component Analysis (kPCA) approach with a 2-dimensional CNN (2D-CNN). The kPCA method is used to regenerate the 3-dimensional CT images, with the goal of reducing the dimension of the data and minimizing noise from the raw sensory data while maintaining neoplastic information. When trained with the regenerated data, the CNN can diagnose new CT scans with an accuracy of up to 90%, which is better than existing 2D-CNNs and on par with the best 3D-CNNs. The short training time and consistent accuracy show the potential of the kPCA-CNN to adapt to CT scans with different parameters in practice. The study shows that the kPCA-CNN modeling technique can improve the efficiency of lung cancer diagnosis.
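The kPCA regeneration step this abstract describes can be sketched with scikit-learn's KernelPCA, whose inverse transform reconstructs the data from the low-dimensional codes; the kernel, component count, and patch size below are assumptions.
```python
import numpy as np
from sklearn.decomposition import KernelPCA

volumes = np.random.rand(500, 32 * 32 * 16)  # flattened stand-in CT patches
kpca = KernelPCA(n_components=256, kernel="rbf", fit_inverse_transform=True)
codes = kpca.fit_transform(volumes)          # low-dimensional representation
regenerated = kpca.inverse_transform(codes)  # reconstructed, denoised data
```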
Profiling regulatory T lymphocytes within the tumor microenvironment of breast cancer via radiomics
OBJECTIVE: To generate an image-driven biomarker (Rad_score) to predict tumor-infiltrating regulatory T lymphocytes (Treg) in breast cancer (BC). METHODS: Overall, 928 BC patients were enrolled from The Cancer Genome Atlas (TCGA) for survival analysis; MRI scans (n = 71 and n = 30 in the training and validation sets, respectively) from The Cancer Imaging Archive (TCIA) were retrieved and subjected to repeated least absolute shrinkage and selection operator (LASSO) for feature reduction. The radiomic scores (rad_score) for Treg infiltration estimation were calculated via support vector machine (SVM) and logistic regression (LR) algorithms, and validated on the remaining patients. RESULTS: Landmark analysis indicated Treg infiltration was a risk factor for BC patients in the first 5 years and after 10 years of diagnosis (p = 0.007 and 0.018, respectively). Altogether, 108 radiomic features were extracted from the MRI images, 4 of which remained for model construction. Areas under the curve (AUCs) of the SVM model were 0.744 (95% CI 0.622-0.867) and 0.733 (95% CI 0.535-0.931) for the training and validation sets, respectively, while for the LR model, AUCs were 0.771 (95% CI 0.657-0.885) and 0.724 (95% CI 0.522-0.926). The calibration curves indicated good agreement between prediction and true value (p > 0.05), and decision curve analysis (DCA) shows the high clinical utility of the radiomic model. Rad_score was significantly correlated with immune inhibitory genes like CTLA4 and PDCD1. CONCLUSIONS: High Treg infiltration is a risk factor for patients with BC. The Rad_score formulated on radiomic features is a novel tool to predict Treg abundance in the tumor microenvironment.
A benchmark of deep learning approaches to predict lung cancer risk using national lung screening trial cohort
Deep learning (DL) methods have demonstrated remarkable effectiveness in assisting with lung cancer risk prediction tasks using computed tomography (CT) scans. However, the lack of comprehensive comparison and validation of state-of-the-art (SOTA) models in practical settings limits their clinical application. This study aims to review and analyze current SOTA deep learning models for lung cancer risk prediction (malignant-benign classification). To evaluate our models' general performance, we selected 253 out of 467 patients from a subset of the National Lung Screening Trial (NLST) who had CT scans without contrast, which are the most commonly used, and divided them into training and test cohorts. The CT scans were preprocessed into 2D-image and 3D-volume formats according to their nodule annotations. We evaluated ten 3D and eleven 2D SOTA deep learning models, which were pretrained on large-scale general-purpose datasets (Kinetics and ImageNet) and radiological datasets (3DSeg-8, nnUnet and RadImageNet), for their lung cancer risk prediction performance. Our results showed that 3D-based deep learning models generally perform better than 2D models. On the test cohort, the best-performing 3D model achieved an AUROC of 0.86, while the best 2D model reached 0.79. The lowest AUROCs for the 3D and 2D models were 0.70 and 0.62, respectively. Furthermore, pretraining on large-scale radiological image datasets did not show the expected performance advantage over pretraining on general-purpose datasets. Both 2D and 3D deep learning models can handle lung cancer risk prediction tasks effectively, although 3D models generally have superior performance to their 2D competitors. Our findings highlight the importance of carefully selecting pretrained datasets and model architectures for lung cancer risk prediction. Overall, these results have important implications for the development and clinical integration of DL-based tools in lung cancer screening.
Preoperative CT Radiomics Predicting the SSIGN Risk Groups in Patients With Clear Cell Renal Cell Carcinoma: Development and Multicenter Validation
Jiang, Yi
Li, Wuchao
Huang, Chencui
Tian, Chong
Chen, Qi
Zeng, Xianchun
Cao, Yin
Chen, Yi
Yang, Yintong
Liu, Heng
Bo, Yonghua
Luo, Chenggong
Li, Yiming
Zhang, Tijiang
Wang, Rongping
Frontiers in Oncology2020Journal Article, cited 0 times
TCGA-KIRC
Objective: The stage, size, grade, and necrosis (SSIGN) score can facilitate the assessment of tumor aggressiveness and the personal management of patients with clear cell renal cell carcinoma (ccRCC). However, this score is only available after postoperative pathological evaluation. The aim of this study was to develop and validate a CT radiomic signature for the preoperative prediction of SSIGN risk groups in patients with ccRCC across multiple centers. Methods: In total, 330 patients with ccRCC from three centers were classified into the training, external validation 1, and external validation 2 cohorts. Through consistency analysis and the least absolute shrinkage and selection operator, a radiomic signature was developed to predict the SSIGN low-risk group (scores 0-3) and intermediate- to high-risk group (score ≥ 4). An image feature model was developed according to the independent image features, and a fusion model was constructed integrating the radiomic signature and the independent image features. Furthermore, the predictive performance of the above models for the SSIGN risk groups was evaluated with regard to their discrimination, calibration, and clinical usefulness. Results: A radiomic signature consisting of sixteen relevant features from the nephrographic phase CT images achieved good calibration (all Hosmer-Lemeshow p > 0.05) and favorable prediction efficacy in the training cohort [area under the curve (AUC): 0.940, 95% confidence interval (CI): 0.884-0.973] and in the external validation cohorts (AUC: 0.876, 95% CI: 0.811-0.942; AUC: 0.928, 95% CI: 0.844-0.975, respectively). The radiomic signature performed better than the image feature model constructed from intra-tumoral vessels (all p < 0.05) and showed performance similar to that of the fusion model integrating the radiomic signature and intra-tumoral vessels (all p > 0.05) in terms of discrimination in all cohorts. Moreover, decision curve analysis verified the clinical utility of the radiomic signature in both external cohorts. Conclusion: The radiomic signature could be used as a promising non-invasive tool to predict SSIGN risk groups and to facilitate preoperative clinical decision-making for patients with ccRCC.
SwinBTS: A Method for 3D Multimodal Brain Tumor Segmentation Using Swin Transformer
Jiang, Y.
Zhang, Y.
Lin, X.
Dong, J.
Cheng, T.
Liang, J.
Brain Sci2022Journal Article, cited 0 times
Website
BraTS 2019
BraTS 2020
BraTS 2021
BraTS-TCGA-GBM
BraTS-TCGA-LGG
3d convolutional neural network (CNN)
Swin Transformer
Segmentation
Brain tumor semantic segmentation is a critical medical image processing task, which aids clinicians in diagnosing patients and determining the extent of lesions. Convolutional neural networks (CNNs) have demonstrated exceptional performance in computer vision tasks in recent years. For 3D medical image tasks, deep convolutional neural networks based on an encoder-decoder structure and skip-connections have been frequently used. However, CNNs have the drawback of being unable to learn global and remote semantic information well. On the other hand, the transformer has recently found success in natural language processing and computer vision as a result of its use of a self-attention mechanism for global information modeling. For demanding prediction tasks, such as 3D medical image segmentation, local and global characteristics are critical. We propose SwinBTS, a new 3D medical image segmentation approach, which combines a transformer, a convolutional neural network, and an encoder-decoder structure to define the 3D brain tumor semantic segmentation task as a sequence-to-sequence prediction challenge. To extract contextual data, the 3D Swin Transformer is utilized as the network's encoder and decoder, and convolutional operations are employed for upsampling and downsampling. Finally, we achieve segmentation results using an improved Transformer module designed to enhance detail feature extraction. Extensive experimental results on the BraTS 2019, BraTS 2020, and BraTS 2021 datasets reveal that SwinBTS outperforms state-of-the-art 3D algorithms for brain tumor segmentation on 3D MRI scans.
Augmentation of CBCT Reconstructed From Under-Sampled Projections Using Deep Learning
Jiang, Zhuoran
Chen, Yingxuan
Zhang, Yawei
Ge, Yun
Yin, Fang-Fang
Ren, Lei
IEEE Transactions on Medical Imaging2019Journal Article, cited 0 times
4D-Lung
Cone-Beam Computed Tomography
Deep Learning
Edges tend to be over-smoothed in total variation (TV) regularized under-sampled images. In this paper, symmetric residual convolutional neural network (SR-CNN), a deep learning based model, was proposed to enhance the sharpness of edges and detailed anatomical structures in under-sampled cone-beam computed tomography (CBCT). For training, CBCT images were reconstructed using TV-based method from limited projections simulated from the ground truth CT, and were fed into SR-CNN, which was trained to learn a restoring pattern from under-sampled images to the ground truth. For testing, under-sampled CBCT was reconstructed using TV regularization and was then augmented by SR-CNN. Performance of SR-CNN was evaluated using phantom and patient images of various disease sites acquired at different institutions both qualitatively and quantitatively using structure similarity (SSIM) and peak signal-to-noise ratio (PSNR). SR-CNN substantially enhanced image details in the TV-based CBCT across all experiments. In the patient study using real projections, SR-CNN augmented CBCT images reconstructed from as low as 120 half-fan projections to image quality comparable to the reference fully-sampled FDK reconstruction using 900 projections. In the tumor localization study, improvements in the tumor localization accuracy were made by the SR-CNN augmented images compared with the conventional FDK and TV-based images. SR-CNN demonstrated robustness against noise levels and projection number reductions and generalization for various disease sites and datasets from different institutions. Overall, the SR-CNN-based image augmentation technique was efficient and effective in considerably enhancing edges and anatomical structures in under-sampled 3D/4D-CBCT, which can be very valuable for image-guided radiotherapy.
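The SSIM/PSNR evaluation used here is standard; a minimal scikit-image sketch on placeholder arrays standing in for a reference reconstruction and an augmented CBCT slice:
```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = np.random.rand(256, 256)                      # stand-in reference slice
augmented = reference + 0.01 * np.random.randn(256, 256)  # stand-in augmented slice

psnr = peak_signal_noise_ratio(reference, augmented, data_range=1.0)
ssim = structural_similarity(reference, augmented, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```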
Two-Stage Cascaded U-Net: 1st Place Solution to BraTS Challenge 2019 Segmentation Task
Jiang, Zeyu
Ding, Changxing
Liu, Minfeng
Tao, Dacheng
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Algorithm Development
Segmentation
BRAIN
In this paper, we devise a novel two-stage cascaded U-Net to segment the substructures of brain tumors from coarse to fine. The network is trained end-to-end on the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2019 training dataset. Experimental results on the testing set demonstrate that the proposed method achieved average Dice scores of 0.83267, 0.88796 and 0.83697, as well as Hausdorff distances (95%) of 2.65056, 4.61809 and 4.13071, for the enhancing tumor, whole tumor and tumor core, respectively. The approach won first place in the BraTS 2019 challenge segmentation task, in which more than 70 teams participated.
Brain Tumor Segmentation in Multi-parametric Magnetic Resonance Imaging Using Model Ensembling and Super-resolution
Jiang, Zhifan
Zhao, Can
Liu, Xinyang
Linguraru, Marius George
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Brain tumor segmentation in MRI offers critical quantitative imaging data to characterize and improve prognosis. The International Brain Tumor Segmentation (BraTS) Challenge provides a unique opportunity to encourage machine learning solutions to address this challenging task. This year, the 10th edition of BraTS collected a multi-institutional multi-parametric MRI dataset of 2040 cases with the heterogeneity typical of large multi-domain imaging datasets. In this paper we present a strategy that ensembles four models trained in parallel to increase the stability and performance of our neural network-based tumor segmentation. In particular, image intensity normalization and multi-parametric MRI super-resolution techniques are used in the ensembled pipelines. The evaluation of our solution on 570 unseen testing cases resulted in Dice scores of 86.28, 87.12 and 92.10, and Hausdorff distances of 14.36, 17.48 and 5.37 mm for the enhancing tumor, tumor core and whole tumor, respectively.
Predicting the Stage of Non-small Cell Lung Cancer with Divergence Neural Network Using Pre-treatment Computed Tomography
Determining the stage of non-small cell lung cancer (NSCLC) is important for treatment and prognosis. Staging includes a professional interpretation of imaging; we therefore aimed to build an automatic process with deep learning (DL). We proposed an end-to-end DL method that uses pre-treatment computed tomography images to classify early- and advanced-stage NSCLC. DL models were developed and tested to classify the early and advanced stages using training (n = 58), validation (n = 7), and testing (n = 17) cohorts obtained from public domains. The network consists of three parts: an encoder, a decoder, and a classification layer. The encoder and decoder layers are trained to reconstruct the original images, and the classification layers are trained to classify early- and advanced-stage NSCLC patients with a dense layer. Other machine learning-based approaches were compared. Our model achieved accuracy of 0.8824, sensitivity of 1.0, specificity of 0.6, and area under the curve (AUC) of 0.7333, compared with other approaches (AUC 0.5500 - 0.7167), in the test cohort for classifying between early and advanced stages. Our DL model to classify NSCLC patients into early-stage and advanced-stage showed promising results and could be useful in future NSCLC research.
Evaluation of Feature Robustness Against Technical Parameters in CT Radiomics: Verification of Phantom Study with Patient Dataset
Jin, Hyeongmin
Kim, Jong Hyo
Journal of Signal Processing Systems2020Journal Article, cited 1 times
Website
RIDER Lung PET-CT
National Lung Screening Trial (NLST)
Radiomics
PHANTOM
Computed Tomography (CT)
Recent advances in radiomics have shown promising results in prognostic and diagnostic studies with high-dimensional imaging feature analysis. However, radiomic features are known to be affected by technical parameters and feature extraction methodology. We evaluate the robustness of CT radiomic features against the technical parameters involved in CT acquisition and feature extraction procedures using a standardized phantom and verify the feature robustness by using patient cases. An ACR phantom was scanned with two tube currents, two reconstruction kernels, and two field-of-view sizes. A total of 47 radiomic features of textures and first-order statistics were extracted on the homogeneous region from all scans. Intrinsic variability was measured to identify unstable features vulnerable to inherent CT noise and texture. A susceptibility index was defined to represent the susceptibility to the variation of a given technical parameter. Eighteen radiomic features were shown to be intrinsically unstable under the reference condition. The features were more susceptible to reconstruction kernel variation than to other sources of variation. The feature robustness evaluated on the phantom CT correlated with that evaluated on clinical CT scans. We revealed that a number of scan parameters can significantly affect radiomic features. These characteristics should be considered in a radiomic study when different scan parameters are used in a clinical dataset.
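One simple way to operationalize the robustness analysis sketched in this abstract is to compare each feature's shift under a changed parameter against its repeat-scan variability. The ratio below is a stand-in for the paper's susceptibility index (whose exact definition is not reproduced here), and the data are random stand-ins.
```python
import numpy as np

repeat_scans = np.random.rand(10, 47)  # 47 features, repeated reference scans
kernel_scans = np.random.rand(10, 47)  # same features, different recon kernel

intrinsic_cv = repeat_scans.std(axis=0) / np.abs(repeat_scans.mean(axis=0))
shift = np.abs(kernel_scans.mean(axis=0) - repeat_scans.mean(axis=0))
susceptibility = shift / (repeat_scans.std(axis=0) + 1e-9)

unstable = np.flatnonzero(intrinsic_cv > 0.1)  # unstable at baseline (assumed cutoff)
```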
Evaluating the clinical utility of artificial intelligence assistance and its explanation on the glioma grading task
Jin, Weina
Fatehi, Mostafa
Guo, Ru
Hamarneh, Ghassan
Artificial intelligence in medicine2024Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Artificial Intelligence
Glioma
Clinical evaluation evidence and model explainability are key gatekeepers to ensure the safe, accountable, and effective use of artificial intelligence (AI) in clinical settings. We conducted a clinical user-centered evaluation with 35 neurosurgeons to assess the utility of AI assistance and its explanation on the glioma grading task. Each participant read 25 brain MRI scans of patients with gliomas, and gave their judgment on the glioma grading without and with the assistance of AI prediction and explanation. The AI model was trained on the BraTS dataset with 88.0% accuracy. The AI explanation was generated using the explainable AI algorithm of SmoothGrad, which was selected from 16 algorithms based on the criterion of being truthful to the AI decision process. Results showed that compared to the average accuracy of 82.5±8.7% when physicians performed the task alone, physicians' task performance increased to 87.7±7.3% with statistical significance (p-value = 0.002) when assisted by AI prediction, and remained at almost the same level of 88.5±7.0% (p-value = 0.35) with the additional assistance of AI explanation. Based on quantitative and qualitative results, the observed improvement in physicians' task performance assisted by AI prediction was mainly because physicians' decision patterns converged to be similar to AI, as physicians only switched their decisions when disagreeing with AI. The insignificant change in physicians' performance with the additional assistance of AI explanation was because the AI explanations did not provide explicit reasons, contexts, or descriptions of clinical features to help doctors discern potentially incorrect AI predictions. The evaluation showed the clinical utility of AI to assist physicians on the glioma grading task, and identified the limitations and clinical usage gaps of existing explainable AI techniques for future improvement.
Guidelines and evaluation of clinical explainable AI in medical image analysis
Jin, W.
Li, X.
Fatehi, M.
Hamarneh, G.
Med Image Anal2023Journal Article, cited 0 times
BraTS 2020
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Humans
Artificial Intelligence
Benchmarking
Clinical Relevance
Evidence Gaps
Explainable AI evaluation
Interpretable machine learning
Medical image analysis
Multi-modal medical image
Explainable artificial intelligence (XAI) is essential for enabling clinical users to get informed decision support from AI and comply with evidence-based medical practice. Applying XAI in clinical settings requires proper evaluation criteria to ensure the explanation technique is both technically sound and clinically useful, but specific support is lacking to achieve this goal. To bridge the research gap, we propose the Clinical XAI Guidelines that consist of five criteria a clinical XAI needs to be optimized for. The guidelines recommend choosing an explanation form based on Guideline 1 (G1) Understandability and G2 Clinical relevance. For the chosen explanation form, its specific XAI technique should be optimized for G3 Truthfulness, G4 Informative plausibility, and G5 Computational efficiency. Following the guidelines, we conducted a systematic evaluation on a novel problem of multi-modal medical image explanation with two clinical tasks, and proposed new evaluation metrics accordingly. Sixteen commonly-used heatmap XAI techniques were evaluated and found to be insufficient for clinical use due to their failure in G3 and G4. Our evaluation demonstrated the use of Clinical XAI Guidelines to support the design and evaluation of clinically viable XAI.
Generating post-hoc explanation from deep neural networks for multi-modal medical image analysis tasks
Jin, W.
Li, X.
Fatehi, M.
Hamarneh, G.
MethodsX2023Journal Article, cited 1 times
Website
Explaining model decisions from medical image inputs is necessary for deploying deep neural network (DNN) based models as clinical decision assistants. The acquisition of multi-modal medical images is pervasive in practice for supporting the clinical decision-making process. Multi-modal images capture different aspects of the same underlying regions of interest. Explaining DNN decisions on multi-modal medical images is thus a clinically important problem. Our methods adopt commonly-used post-hoc artificial intelligence feature attribution methods to explain DNN decisions on multi-modal medical images, including two categories of gradient- and perturbation-based methods.
* Gradient-based explanation methods, such as Guided BackProp and DeepLift, utilize the gradient signal to estimate the feature importance for model prediction.
* Perturbation-based methods, such as occlusion, LIME, and kernel SHAP, utilize the input-output sampling pairs to estimate the feature importance.
* We describe the implementation details on how to make the methods work for multi-modal image input, and make the implementation code available.
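Of the perturbation-based methods listed, occlusion is the simplest to sketch: slide a mask over the input and record the drop in the predicted class probability. A minimal PyTorch version follows; the model, target class, and patch size are placeholders.
```python
import torch

def occlusion_map(model, image, target, patch=16, baseline=0.0):
    """image: (1, C, H, W) tensor; returns an (H//patch, W//patch) heat map."""
    model.eval()
    with torch.no_grad():
        ref = torch.softmax(model(image), dim=1)[0, target].item()
        H, W = image.shape[2:]
        heat = torch.zeros(H // patch, W // patch)
        for i in range(0, H - patch + 1, patch):
            for j in range(0, W - patch + 1, patch):
                occluded = image.clone()
                occluded[:, :, i:i + patch, j:j + patch] = baseline
                prob = torch.softmax(model(occluded), dim=1)[0, target].item()
                heat[i // patch, j // patch] = ref - prob  # drop = importance
    return heat
```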
Feature Gradient Flow for Interpreting Deep Neural Networks in Head and Neck Cancer Prediction
Jin, Yinzhu
Garneau, Jonathan C.
Fletcher, P. Thomas
2022Conference Paper, cited 0 times
HEAD-NECK-PET-CT
This paper introduces feature gradient flow, a new technique for interpreting deep learning models in terms of features that are understandable to humans. The gradient flow of a model locally defines nonlinear coordinates in the input data space representing the information the model is using to make its decisions. Our idea is to measure the agreement of interpretable features with the gradient flow of a model. To then evaluate the importance of a particular feature to the model, we compare that feature’s gradient flow measure versus that of a baseline noise feature. We then develop a technique for training neural networks to be more interpretable by adding a regularization term to the loss function that encourages the model gradients to align with those of chosen interpretable features. We test our method in a convolutional neural network prediction of distant metastasis of head and neck cancer from a computed tomography dataset from the Cancer Imaging Archive.
Enhancement of Deep Learning in Image Classification Performance Using Xception with the Swish Activation Function for Colorectal Polyp Preliminary Screening
Jinsakul, Natinai
Tsai, Cheng-Fa
Tsai, Chia-En
Wu, Pensee
Mathematics2019Journal Article, cited 0 times
TCGA-COAD
Deep Learning
One of the leading forms of cancer is colorectal cancer (CRC), which is responsible for increasing mortality in young people. The aim of this paper is to provide an experimental modification of deep learning of Xception with Swish and to assess the possibility of developing a preliminary colorectal polyp screening system by training the proposed model with a colorectal topogram dataset in two and three classes. The results indicate that the proposed model can enhance the original convolutional neural network model, improving classification performance to an accuracy of up to 98.99% for classifying into two classes and 91.48% for three classes. When testing the model with external images, the proposed method also improves the prediction compared to the traditional method, with 99.63% accuracy for true prediction of two classes and 80.95% accuracy for true prediction of three classes.
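The Swish activation substituted into Xception here is simply swish(x) = x * sigmoid(x), the same function PyTorch ships as torch.nn.SiLU; a one-line sketch:
```python
import torch

def swish(x: torch.Tensor) -> torch.Tensor:
    return x * torch.sigmoid(x)  # Swish-1, identical to torch.nn.SiLU
```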
On the Use of WebAssembly for Rendering and Segmenting Medical Images
Jodogne, Sébastien
2023Book Section, cited 0 times
LCTSC
Rendering medical images is a critical step in a variety of medical applications, from diagnosis to therapy. There is a growing need for advanced viewers that can display the fusion of multiple layers, such as contours, annotations, doses, or segmentation masks, on the top of image slices extracted from volumes. Such viewers obviously necessitate complex software components. But desktop viewers are often developed using technologies that are different from those used for Web viewers, which results in a lack of code reuse and shared expertise between development teams. Furthermore, the rise of artificial intelligence in radiology calls for Web viewers that integrate deep learning models and that can be used outside of a clinical environment, for instance to evaluate algorithms or to train skilled workers. In this paper, we show how the emerging WebAssembly standard can be used to tackle these challenges by sharing the same code base between heavyweight viewers and zero-footprint viewers. Moreover, we introduce a fully functional Web viewer that is entirely developed using WebAssembly and that can be used in research projects or in teleradiology applications. Finally, we demonstrate that deep convolutional neural networks for image segmentation can be executed entirely inside a Web browser thanks to WebAssembly, without any dedicated computing infrastructure. The source code associated with this paper is released as free and open-source software.
DeepNet model empowered cuckoo search algorithm for the effective identification of lung cancer nodules
John, Grace
Baskar, S
2023Journal Article, cited 0 times
Lung-PET-CT-Dx
Introduction: Globally, lung cancer is a highly harmful type of cancer. An efficient diagnosis system can enable pathologists to recognize the type and nature of lung nodules and the mode of therapy to increase the patient's chance of survival. Hence, implementing an automatic and reliable system to segment lung nodules from a computed tomography (CT) image is useful in the medical industry.
Methods: This study develops a novel fully convolutional deep neural network (hereafter called DeepNet) model for segmenting lung nodules from CT scans. This model includes an encoder/decoder network that achieves pixel-wise image segmentation. The encoder network exploits a Visual Geometry Group (VGG-19) model as a base architecture, while the decoder network exploits 16 upsampling and deconvolution modules. The encoder used in this model has a very flexible structural design that can be modified and trained for any resolution based on the size of input scans. The decoder network upsamples and maps the low-resolution attributes of the encoder. Thus, there is a considerable drop in the number of variables used for the learning process, as the network recycles the pooling indices of the encoder for segmentation. A thresholding method and the cuckoo search algorithm determine the most useful features when categorizing cancer nodules.
Results and discussion: The effectiveness of the intended DeepNet model is carefully assessed on the real-world database known as The Cancer Imaging Archive (TCIA) dataset, and its effectiveness is demonstrated by comparing its performance with that of other modern segmentation models on selected performance measures. The empirical analysis reveals that DeepNet significantly outperforms other prevalent segmentation algorithms with 0.962 ± 0.023% volume error, 0.968 ± 0.011 dice similarity coefficient, 0.856 ± 0.011 Jaccard similarity index, and 0.045 ± 0.005 s average processing time.
Prostate cancer prediction from multiple pretrained computer vision model
John, Jisha
Ravikumar, Aswathy
Abraham, Bejoy
Health and Technology2021Journal Article, cited 0 times
Website
PROSTATEx
PROSTATE
Deep Learning
DenseNet
Computer Aided Detection (CADe)
Radiomics
Classification
The prostate, found in men, is a male reproductive gland responsible for secreting a thin alkaline fluid that forms a major portion of the ejaculate. The gland has the shape of a small walnut, and the cancer arising in this gland is called prostate cancer. It has the second highest mortality rate according to studies. Therefore, its detection at an early stage, when it is still confined to the prostate gland, is life-saving. This ensures a better chance of successful treatment. The existing preliminary screening approaches for its detection include the prostate specific antigen (PSA) blood test and the digital rectal exam (DRE). In the proposed method, we use two popular pretrained models for feature extraction, MobileNet and DenseNet. The extracted features are stacked and augmented and fed to a two-stage classifier that provides the prediction. The proposed system is found to have an accuracy of 93.3% and outperforms other traditional approaches.
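The dual-backbone feature stacking this abstract describes can be sketched with torchvision's pretrained MobileNetV2 and DenseNet121, their classifier heads replaced by identity so penultimate features can be concatenated. The weights, inputs, labels, and logistic-regression head below are illustrative choices, not the authors' exact setup.
```python
import numpy as np
import torch
from torchvision import models
from sklearn.linear_model import LogisticRegression

mobilenet = models.mobilenet_v2(weights="DEFAULT").eval()
densenet = models.densenet121(weights="DEFAULT").eval()
mobilenet.classifier = torch.nn.Identity()  # -> 1280-d features
densenet.classifier = torch.nn.Identity()   # -> 1024-d features

x = torch.rand(8, 3, 224, 224)              # stand-in image patches
with torch.no_grad():
    feats = torch.cat([mobilenet(x), densenet(x)], dim=1).numpy()

y = np.array([0, 1] * 4)                    # stand-in labels
clf = LogisticRegression(max_iter=1000).fit(feats, y)
```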
Analysis of Vestibular Labyrinthine Geometry and Variation in the Human Temporal Bone
Johnson Chacko, Lejo
Schmidbauer, Dominik T
Handschuh, Stephan
Reka, Alen
Fritscher, Karl D
Raudaschl, Patrik
Saba, Rami
Handler, Michael
Schier, Peter P
Baumgarten, Daniel
Fischer, Natalie
Pechriggl, Elisabeth J
Brenner, Erich
Hoermann, Romed
Glueckert, Rudolf
Schrott-Fischer, Anneliese
Frontiers in Neuroscience2018Journal Article, cited 4 times
Website
Vestibular Labyrinth
modeling
Computed Tomography (CT)
Stable posture and body movement in humans are dictated by the precise functioning of the ampulla organs in the semi-circular canals. Statistical analysis of the interrelationship between bony and membranous compartments within the semi-circular canals is dependent on the visualization of soft tissue structures. Thirty-one human inner ears were prepared, post-fixed with osmium tetroxide, and decalcified for soft tissue contrast enhancement. High-resolution X-ray microtomography images at 15 μm voxel size were manually segmented. These data served as templates for centerline generation and cross-sectional area extraction. Our estimates demonstrate the variability of individual specimens from averaged centerlines of both the bony and membranous labyrinth. Centerline lengths and cross-sectional areas along these lines were identified from segmented data. Using centerlines weighted by the inverse squares of the cross-sectional areas, plane angles could be quantified. The fit planes indicate that the bony labyrinth resembles a Cartesian coordinate system more closely than the membranous labyrinth. A widening in the membranous labyrinth of the lateral semi-circular canal was observed in some of the specimens. Likewise, the cross-sectional areas in the perilymphatic spaces of the lateral canal differed from those of the other canals. For the first time, we could precisely describe the geometry of the human membranous labyrinth based on a large sample size. Awareness of the variations in the canal geometry of the membranous and bony labyrinth would be a helpful reference in designing electrodes for future vestibular prostheses and simulating fluid dynamics more precisely.
Computational modeling of tumor invasion from limited and diverse data in Glioblastoma
Jonnalagedda, P.
Weinberg, B.
Min, T. L.
Bhanu, S.
Bhanu, B.
Comput Med Imaging Graph2024Journal Article, cited 0 times
Website
TCGA-GBM
Generative Adversarial Network (GAN)
Glioblastoma
Magnetic Resonance Imaging (MRI)
Radiogenomic analysis
Tumor microenvironment
For diseases with high morbidity rates such as Glioblastoma Multiforme, the prognostic and treatment planning pipeline requires a comprehensive analysis of imaging, clinical, and molecular data. Many mutations have been shown to correlate strongly with the median survival rate and response to therapy of patients. Studies have demonstrated that these mutations manifest as specific visual biomarkers in tumor imaging modalities such as MRI. To minimize the number of invasive procedures on a patient and for the overall resource optimization for the prognostic and treatment planning process, the correlation of imaging and molecular features has garnered much interest. While the tumor mass is the most significant feature, the impacted tissue surrounding the tumor is also a significant biomarker contributing to the visual manifestation of mutations - which has not been studied as extensively. The pattern of tumor growth impacts the surrounding tissue accordingly, which is a reflection of tumor properties as well. Modeling how the tumor growth impacts the surrounding tissue can reveal important information about the patterns of tumor enhancement, which in turn has significant diagnostic and prognostic value. This paper presents the first work to automate the computational modeling of the impacted tissue surrounding the tumor using generative deep learning. The paper isolates and quantifies the impact of the Tumor Invasion (TI) on surrounding tissue based on change in mutation status, subsequently assessing its prognostic value. Furthermore, a TI Generative Adversarial Network (TI-GAN) is proposed to model the tumor invasion properties. Extensive qualitative and quantitative analyses, cross-dataset testing, and radiologist blind tests are carried out to demonstrate that TI-GAN can realistically model the tumor invasion under practical challenges of medical datasets such as limited data and high intra-class heterogeneity.
A First Step Towards an Algorithm for Breast Cancer Reoperation Prediction Using Machine Learning and Mammographic Images
Abstract: Cancer is the second leading cause of death worldwide, and 30% of all cancer cases among women are breast cancer. A popular treatment is breast-conserving surgery, where only a part of the breast is surgically removed. Surgery is expensive and has a significant impact on the body, and some women need a reoperation. The aim of this thesis was to investigate the possibility of predicting whether a person will need a reoperation with the help of whole mammographic images and deep learning. The data used in this thesis were collected from two open sources: (1) the Chinese Mammography Database (CMMD), from which 1052 benign and 1090 malignant images were used, and (2) the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM), from which 182 benign and 145 malignant images were used. With those images, both a simple convolutional neural network (CNN) and a transfer learning network using the pre-trained MobileNet model were trained to classify the images as benign or malignant. All networks were evaluated using learning curves, a confusion matrix, accuracy, sensitivity, specificity, AUC, and a ROC curve. The best result, an AUC of 0.599, was obtained by a transfer learning network that used the pre-trained MobileNet model and was trained on the CMMD dataset.
Spatial mapping of tumor heterogeneity in whole-body PET-CT: a feasibility study
Jonsson, H.
Ahlstrom, H.
Kullberg, J.
Biomed Eng Online2023Journal Article, cited 0 times
BACKGROUND: Tumor heterogeneity is recognized as a predictor of treatment response and patient outcome. Quantification of tumor heterogeneity across all scales may therefore provide critical insight that ultimately improves cancer management. METHODS: An image registration-based framework for the study of tumor heterogeneity in whole-body images was evaluated on a dataset of 490 FDG-PET-CT images of lung cancer, lymphoma, and melanoma patients. Voxel-, lesion- and subject-level features were extracted from the subjects' segmented lesion masks and mapped to female and male template spaces for voxel-wise analysis. Resulting lesion feature maps of the three subsets of cancer patients were studied visually and quantitatively. Lesion volumes and lesion distances in subject spaces were compared with resulting properties in template space. The strength of the association between subject and template space for these properties was evaluated with Pearson's correlation coefficient. RESULTS: Spatial heterogeneity in terms of lesion frequency distribution in the body, metabolic activity, and lesion volume was seen between the three subsets of cancer patients. Lesion feature maps showed anatomical locations with low versus high mean feature value among lesions sampled in space and also highlighted sites with high variation between lesions in each cancer subset. Spatial properties of the lesion masks in subject space correlated strongly with the same properties measured in template space (lesion volume, R = 0.986, p < 0.001; total metabolic volume, R = 0.988, p < 0.001; maximum within-patient lesion distance, R = 0.997, p < 0.001). Lesion volume and total metabolic volume increased on average from subject to template space (lesion volume, 3.1 +/- 52 ml; total metabolic volume, 53.9 +/- 229 ml). Pair-wise lesion distance decreased on average by 0.1 +/- 1.6 cm and maximum within-patient lesion distance increased on average by 0.5 +/- 2.1 cm from subject to template space. CONCLUSIONS: Spatial tumor heterogeneity between subsets of interest in cancer cohorts can successfully be explored in whole-body PET-CT images within the proposed framework. Whole-body studies are, however, especially prone to suffer from regional variation in lesion frequency, and thus statistical power, due to the non-uniform distribution of lesions across a large field of view.
An image registration method for voxel-wise analysis of whole-body oncological PET-CT
Whole-body positron emission tomography-computed tomography (PET-CT) imaging in oncology provides comprehensive information of each patient's disease status. However, image interpretation of volumetric data is a complex and time-consuming task. In this work, an image registration method targeted towards computer-aided voxel-wise analysis of whole-body PET-CT data was developed. The method used both CT images and tissue segmentation masks in parallel to spatially align images step-by-step. To evaluate its performance, a set of baseline PET-CT images of 131 classical Hodgkin lymphoma (cHL) patients and longitudinal image series of 135 head and neck cancer (HNC) patients were registered between and within subjects according to the proposed method. Results showed that major organs and anatomical structures generally were registered correctly. Whole-body inverse consistency vector and intensity magnitude errors were on average less than 5 mm and 45 Hounsfield units respectively in both registration tasks. Image registration was feasible in time and the nearly automatic pipeline enabled efficient image processing. Metabolic tumor volumes of the cHL patients and registration-derived therapy-related tissue volume change of the HNC patients mapped to template spaces confirmed proof-of-concept. In conclusion, the method established a robust point-correspondence and enabled quantitative visualization of group-wise image features on voxel level.
Pediatric chest-abdomen-pelvis and abdomen-pelvis CT images with expert organ contours
Jordan, P.
Adamson, P. M.
Bhattbhatt, V.
Beriwal, S.
Shen, S.
Radermecker, O.
Bose, S.
Strain, L. S.
Offe, M.
Fraley, D.
Principi, S.
Ye, D. H.
Wang, A. S.
Van Heteren, J.
Vo, N. J.
Schmidt, T. G.
Med Phys2022Journal Article, cited 0 times
Website
Pediatric-CT-SEG
PURPOSE: Organ autosegmentation efforts to date have largely been focused on adult populations, due to limited availability of pediatric training data. Pediatric patients may present additional challenges for organ segmentation. This paper describes a dataset of 359 pediatric chest-abdomen-pelvis and abdomen-pelvis CT images with expert contours of up to 29 anatomical organ structures to aid in the evaluation and development of autosegmentation algorithms for pediatric CT imaging. ACQUISITION AND VALIDATION METHODS: The dataset collection consists of axial CT images in DICOM format of 180 male and 179 female pediatric chest-abdomen-pelvis or abdomen-pelvis exams acquired from one of three CT scanners at Children's Wisconsin. The datasets represent random pediatric cases based upon routine clinical indications. Subjects ranged in age from 5 days to 16 years, with a mean age of seven years. The CT acquisition, contrast, and reconstruction protocols varied across the scanner models and patients, with specifications available in the DICOM headers. Expert contours were manually labeled for up to 29 organ structures per subject. Not all contours are available for all subjects, due to limited field of view or unreliable contouring due to high noise. DATA FORMAT AND USAGE NOTES: The data are available on TCIA (https://www.cancerimagingarchive.net/) under the collection Pediatric-CT-SEG. The axial CT image slices for each subject are available in DICOM format. The expert contours are stored in a single DICOM RTSTRUCT file for each subject. The contours are named as listed in Table 2. POTENTIAL APPLICATIONS: This dataset will enable the evaluation and development of organ autosegmentation algorithms for pediatric populations, which exhibit variations in organ shape and size across age. Automated organ segmentation from CT images has numerous applications including radiation therapy, diagnostic tasks, surgical planning, and patient-specific organ dose estimation.
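As a usage note, the per-subject RTSTRUCT files described above can be inspected with pydicom; the file path below is a placeholder:

    import pydicom

    # Read one subject's RTSTRUCT file from the Pediatric-CT-SEG collection
    rtstruct = pydicom.dcmread("RTSTRUCT.dcm")  # placeholder path

    # Organ names live in the StructureSetROISequence
    roi_names = {roi.ROINumber: roi.ROIName
                 for roi in rtstruct.StructureSetROISequence}

    # Contour data (x, y, z triplets in patient coordinates) per structure
    for roi_contour in rtstruct.ROIContourSequence:
        name = roi_names.get(roi_contour.ReferencedROINumber, "unknown")
        n_slices = len(getattr(roi_contour, "ContourSequence", []))
        print(f"{name}: {n_slices} contour slices")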
Interactive 3D Virtual Colonoscopic Navigation For Polyp Detection From CT Images
Joseph, Jinu
Kumar, Rajesh
Chandran, Pournami S
Vidya, PV
Procedia Computer Science2017Journal Article, cited 0 times
Website
Graph Neural Network Model for Prediction of Non-Small Cell Lung Cancer Lymph Node Metastasis Using Protein-Protein Interaction Network and (18)F-FDG PET/CT Radiomics
Ju, H.
Kim, K.
Kim, B. I.
Woo, S. K.
Int J Mol Sci2024Journal Article, cited 2 times
Website
NSCLC Radiogenomics
Humans
*Carcinoma
Non-Small-Cell Lung/diagnostic imaging/genetics
Protein Interaction Maps
Lymphatic Metastasis/diagnostic imaging
Positron Emission Tomography Computed Tomography
Fluorodeoxyglucose F18
Radiomics
*Lung Neoplasms/diagnostic imaging/genetics
Neural Networks
Computer
18f-fdg pet
Ct
Gnn
Nsclc
protein-protein interaction
radiogenomics
The image texture features obtained from (18)F-fluorodeoxyglucose positron emission tomography/computed tomography ((18)F-FDG PET/CT) images of non-small cell lung cancer (NSCLC) have revealed tumor heterogeneity. A combination of genomic data and radiomics may improve the prediction of tumor prognosis. This study aimed to predict NSCLC metastasis using a graph neural network (GNN) obtained by combining a protein-protein interaction (PPI) network based on gene expression data and image texture features. (18)F-FDG PET/CT images and RNA sequencing data of 93 patients with NSCLC were acquired from The Cancer Imaging Archive. Image texture features were extracted from (18)F-FDG PET/CT images, and the area under the receiver operating characteristic curve (AUC) of each image feature was calculated. Weighted gene co-expression network analysis (WGCNA) was used to construct gene modules, followed by functional enrichment analysis and identification of differentially expressed genes. The PPI of each gene module and genes belonging to metastasis-related processes were converted via a graph attention network. Image and genomic features were concatenated. The GNN model using PPI modules from WGCNA and metastasis-related functions combined with image texture features was evaluated quantitatively. Fifty-five image texture features were extracted from (18)F-FDG PET/CT, and radiomic features were selected based on AUC (n = 10). Eighty-six gene modules were clustered by WGCNA. Genes (n = 19) enriched in the metastasis-related pathways were filtered using DEG analysis. The accuracy of the PPI network, derived from WGCNA modules and metastasis-related genes, improved from 0.4795 to 0.5830 (p < 2.75 x 10(-12)). Integrating the PPI of four metastasis-related genes with (18)F-FDG PET/CT image features in a GNN model raised its accuracy to 0.8545 (95% CI = 0.8401-0.8689, p-value < 0.02), compared with the model without image features. This model demonstrated significant enhancement compared to the model using PPI and (18)F-FDG PET/CT derived from WGCNA (p-value < 0.02), underscoring the critical role of metastasis-related genes in the prediction model. The enhanced predictive capability of the lymph node metastasis prediction GNN model for NSCLC, achieved through the integration of comprehensive image features with genomic data, demonstrates promise for clinical implementation.
Estimation of an Image Biomarker for Distant Recurrence Prediction in NSCLC Using Proliferation-Related Genes
Ju, H. M.
Kim, B. C.
Lim, I.
Byun, B. H.
Woo, S. K.
Int J Mol Sci2023Journal Article, cited 0 times
Website
This study aimed to identify a distant-recurrence image biomarker in NSCLC by investigating correlations between the expression of heterogeneity-related functional genes and fluorine-18-2-fluoro-2-deoxy-D-glucose positron emission tomography ((18)F-FDG PET) image features of NSCLC patients. RNA-sequencing data and (18)F-FDG PET images of 53 patients with NSCLC (19 with distant recurrence and 34 without recurrence) from The Cancer Imaging Archive and The Cancer Genome Atlas Program databases were used in a combined analysis. Weighted correlation network analysis was performed to identify gene groups related to distant recurrence. Genes were selected for functions related to distant recurrence. In total, 47 image features were extracted from PET images as radiomics. The relationship between gene expression and image features was estimated using a hypergeometric distribution test with the Pearson correlation method. The distant-recurrence prediction model was validated by a random forest (RF) algorithm using image texture features and related gene expression. In total, 37 gene modules were identified by gene-expression pattern with weighted gene co-expression network analysis. The gene modules with the highest significance were selected (p-value < 0.05). Nine genes with high protein-protein interaction and area under the curve (AUC) were identified as hub genes involved in the proliferation function, which plays an important role in distant recurrence of cancer. Four image features (GLRLM_SRHGE, GLRLM_HGRE, SUVmean, and GLZLM_GLNU) and six genes were found to be correlated (p-value < 0.1). Using the RF algorithm, the 47 image texture features yielded an AUC of 0.729 (accuracy: 0.59) and the hub genes an AUC of 0.808 (accuracy: 0.767). The four image texture features combined with the six correlated genes yielded an AUC of 0.912 (accuracy: 0.783), whereas the four image texture features alone yielded an AUC of 0.779 (accuracy: 0.738). The four image texture features validated by heterogeneity-group gene expression were found to be related to cancer heterogeneity. The identification of these image texture features demonstrates that advanced prediction of NSCLC distant recurrence is possible using the image biomarker.
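The RF validation step described above can be reproduced in outline with scikit-learn; the feature matrix below is a random placeholder standing in for the four texture features and six gene expressions:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import roc_auc_score, accuracy_score

    # Placeholder matrix: 4 selected texture features + 6 correlated gene
    # expressions per patient (names and values are illustrative only)
    X = np.random.rand(53, 10)
    y = np.random.randint(0, 2, size=53)   # distant recurrence yes/no

    probs = cross_val_predict(
        RandomForestClassifier(n_estimators=500, random_state=0),
        X, y, cv=5, method="predict_proba")[:, 1]
    print("AUC:", roc_auc_score(y, probs),
          "accuracy:", accuracy_score(y, probs > 0.5))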
ONCOhabitats Glioma Segmentation Model
Juan-Albarracín, Javier
Fuster-Garcia, Elies
del Mar Álvarez-Torres, María
Chelebian, Eduard
García-Gómez, Juan M.
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Convolutional Neural Network (CNN)
Automatic segmentation
ONCOhabitats is an open online service that provides a fully automatic analysis of tumor vascular heterogeneity in gliomas based on multiparametric MRI. Having a model capable of accurately segmenting pathological tissues is critical to generate a robust analysis of vascular heterogeneity. In this study we present the segmentation model embedded in ONCOhabitats and its performance on the BraTS 2019 dataset. The model implements a residual-Inception U-Net convolutional neural network, incorporating several pre- and post-processing stages. A relabeling strategy has been applied to improve the segmentation of the necrosis of high-grade gliomas and the non-enhancing tumor of low-grade gliomas. The model was trained using 335 cases from the BraTS 2019 challenge training dataset and evaluated with 125 cases from the validation set and 166 cases from the test set. The results on the validation dataset in terms of the mean/median Dice coefficient are 0.73/0.85 in the enhancing tumor region, 0.90/0.92 in the whole tumor, and 0.78/0.89 in the tumor core. The Dice results obtained in the independent test are 0.78/0.84, 0.88/0.92 and 0.83/0.92, respectively, for the same sub-compartments of the lesion.
Brain Tumor Segmentation Using Dual-Path Attention U-Net in 3D MRI Images
Jun, Wen
Haoxiang, Xu
Wang, Zhang
2021Book Section, cited 0 times
BraTS-TCGA-LGG
BraTS-TCGA-GBM
BraTS 2020
Segmentation
Challenge
U-Net
3d convolutional neural network (CNN)
Semantic segmentation plays an essential role in brain tumor diagnosis and treatment planning. Yet, manual segmentation is a time-consuming task, which motivates the use of deep neural networks for brain tumor segmentation. In this work, we propose a variant of the 3D U-Net that achieves comparable segmentation accuracy with a lower graphics-memory cost. More specifically, our model employs a modified attention block, consisting of parallel spatial and channel attention blocks, to refine the feature-map representation along the skip-connection bridge. Dice coefficients for the enhancing tumor, whole tumor, and tumor core reached 0.752, 0.879, and 0.779, respectively, on the BraTS 2020 validation dataset.
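A minimal sketch of the parallel spatial and channel attention idea described above, written in PyTorch; the published model's exact block layout may differ:

    import torch
    import torch.nn as nn

    class ParallelAttention(nn.Module):
        """Skip-connection refinement with channel and spatial attention
        applied in parallel (a sketch, not the published architecture)."""
        def __init__(self, channels, reduction=8):
            super().__init__()
            self.channel_mlp = nn.Sequential(
                nn.AdaptiveAvgPool3d(1),
                nn.Conv3d(channels, channels // reduction, 1),
                nn.ReLU(inplace=True),
                nn.Conv3d(channels // reduction, channels, 1),
                nn.Sigmoid(),
            )
            self.spatial = nn.Sequential(
                nn.Conv3d(channels, 1, kernel_size=7, padding=3),
                nn.Sigmoid(),
            )

        def forward(self, x):
            # Channel-weighted and spatially-weighted copies are summed
            return x * self.channel_mlp(x) + x * self.spatial(x)

    feat = torch.randn(1, 32, 16, 16, 16)   # a 3D skip-connection feature map
    print(ParallelAttention(32)(feat).shape)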
Denoising of computed tomography using bilateral median based autoencoder network
Juneja, M.
Joshi, S.
Singla, N.
Ahuja, S.
Saini, S. K.
Thakur, N.
Jindal, P.
International Journal of Imaging Systems and Technology2022Journal Article, cited 0 times
Website
Pancreas-CT
autoencoders
bilateral median
ct scan
denoising
filtering
pancreatic cancer
noise-reduction
images
filter
Denoising of computed tomography (CT) images is a critical aspect of image processing that is expected to improve the performance of computer-aided diagnosis (CAD) systems. However, the use of complex imaging modalities such as CT to ascertain pancreatic cancer is vulnerable to Gaussian and Poisson noise, making image denoising an imperative step for the accurate performance of CAD systems. This paper presents a bilateral median based autoencoder network (BMAuto-Net) constructed with intermediate batch normalization layers and dropout factors to eliminate Gaussian noise from CT images. The skip connections in the network prevent the performance degradation that generally occurs in most autoencoder architectures. Based on the presented study, BMAuto-Net is shown to outperform other traditional filters and autoencoders. The performance of the proposed architecture is measured using peak signal-to-noise ratio (PSNR), mean squared error (MSE), and structural similarity index (SSIM) values. The Cancer Imaging Archive (TCIA) dataset consisting of 19 000 CT images is used to validate the performance of the architecture, with average PSNR values of 30.01, 30.53, and 30.52; MSE values of 98.23, 98.87, and 98.94; and SSIM values of 0.67, 0.60, and 0.57 for noise factors (NFs) of 0.1, 0.3, and 0.5, respectively.
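The three reported metrics can be computed with scikit-image as below; the clean, noisy, and denoised arrays are placeholders standing in for CT slices and the network output:

    import numpy as np
    from skimage.metrics import (peak_signal_noise_ratio,
                                 mean_squared_error,
                                 structural_similarity)

    clean = np.random.rand(512, 512)                   # placeholder CT slice
    noisy = clean + 0.1 * np.random.randn(512, 512)    # Gaussian noise, NF ~ 0.1
    denoised = noisy  # stand-in for the denoising model's output

    print("PSNR:", peak_signal_noise_ratio(clean, denoised, data_range=1.0))
    print("MSE :", mean_squared_error(clean, denoised))
    print("SSIM:", structural_similarity(clean, denoised, data_range=1.0))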
Algorithmic transparency and interpretability measures improve radiologists' performance in BI-RADS 4 classification
Jungmann, F.
Ziegelmayer, S.
Lohoefer, F. K.
Metz, S.
Muller-Leisse, C.
Englmaier, M.
Makowski, M. R.
Kaissis, G. A.
Braren, R. F.
Eur Radiol2022Journal Article, cited 0 times
CBIS-DDSM
Algorithms
Artificial intelligence
Perception
Radiologists
Trust
OBJECTIVE: To evaluate the perception of different types of AI-based assistance and the interaction of radiologists with the algorithm's predictions and certainty measures. METHODS: In this retrospective observer study, four radiologists were asked to classify Breast Imaging-Reporting and Data System 4 (BI-RADS 4) lesions (n = 101 benign, n = 99 malignant). The effects of different types of AI-based assistance (occlusion-based interpretability map, classification, and certainty) on the radiologists' performance (sensitivity, specificity, questionnaire) were measured. The influence of the Big Five personality traits was analyzed using the Pearson correlation. RESULTS: Diagnostic accuracy was significantly improved by AI-based assistance (an increase of 2.8% +/- 2.3%, 95%-CI 1.5 to 4.0%, p = 0.045) and trust in the algorithm was generated primarily by the certainty of the prediction (100% of participants). Different human-AI interactions were observed, ranging from nearly no interaction to humanization of the algorithm. High scores in neuroticism were correlated with higher persuasibility (Pearson's r = 0.98, p = 0.02), while higher conscientiousness and change of accuracy showed an inverse correlation (Pearson's r = -0.96, p = 0.04). CONCLUSION: Trust in the algorithm's performance was mostly dependent on the certainty of the predictions in combination with a plausible heatmap. Human-AI interaction varied widely and was influenced by personality traits. KEY POINTS: * AI-based assistance significantly improved the diagnostic accuracy of radiologists in classifying BI-RADS 4 mammography lesions. * Trust in the algorithm's performance was mostly dependent on the certainty of the prediction in combination with a reasonable heatmap. * Personality traits seem to influence human-AI collaboration. Radiologists with specific personality traits were more likely to change their classification according to the algorithm's prediction than others.
Analyzing the Quality and Challenges of Uncertainty Estimations for Brain Tumor Segmentation
Jungo, Alain
Balsiger, Fabian
Reyes, Mauricio
Frontiers in Neuroscience2020Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Automatic segmentation of brain tumors has the potential to enable volumetric measures and high-throughput analysis in the clinical setting. Reaching this potential seems almost achieved, considering the steady increase in segmentation accuracy. However, despite segmentation accuracy, the current methods still do not meet the robustness levels required for patient-centered clinical use. In this regard, uncertainty estimates are a promising direction to improve the robustness of automated segmentation systems. Different uncertainty estimation methods have been proposed, but little is known about their usefulness and limitations for brain tumor segmentation. In this study, we present an analysis of the most commonly used uncertainty estimation methods in regards to benefits and challenges for brain tumor segmentation. We evaluated their quality in terms of calibration, segmentation error localization, and segmentation failure detection. Our results show that the uncertainty methods are typically well-calibrated when evaluated at the dataset level. Evaluated at the subject level, we found notable miscalibrations and limited segmentation error localization (e.g., for correcting segmentations), which hinder the direct use of the voxel-wise uncertainties. Nevertheless, voxel-wise uncertainty showed value to detect failed segmentations when uncertainty estimates are aggregated at the subject level. Therefore, we suggest a careful usage of voxel-wise uncertainty measures and highlight the importance of developing solutions that address the subject-level requirements on calibration and segmentation error localization.
Cloud-based NoSQL open database of pulmonary nodules for computer-aided lung cancer diagnosis and reproducible research
Junior, José Raniery Ferreira
Oliveira, Marcelo Costa
de Azevedo-Marques, Paulo Mazzoncini
Journal of Digital Imaging2016Journal Article, cited 14 times
Website
LIDC-IDRI
Computer Aided Diagnosis (CADx)
Radiographic assessment of contrast enhancement and T2/FLAIR mismatch sign in lower grade gliomas: correlation with molecular groups
Juratli, Tareq A
Tummala, Shilpa S
Riedl, Angelika
Daubner, Dirk
Hennig, Silke
Penson, Tristan
Zolal, Amir
Thiede, Christian
Schackert, Gabriele
Krex, Dietmar
Journal of Neuro-Oncology2018Journal Article, cited 0 times
Website
TCGA-LGG
IDH mutation
MRI
Radiogenomics
1p/19q co-deletion
Automated size-specific dose estimates using deep learning image processing
Juszczyk, Jan
Badura, Pawel
Czajkowska, Joanna
Wijata, Agata
Andrzejewski, Jacek
Bozek, Pawel
Smolinski, Michal
Biesok, Marta
Sage, Agata
Rudzki, Marcin
Wieclawek, Wojciech
Medical Image Analysis2020Journal Article, cited 0 times
Head-Neck Cetuximab
An automated vendor-independent system for dose monitoring in computed tomography (CT) medical examinations involving ionizing radiation is presented in this paper. The system provides precise size-specific dose estimates (SSDE) following the American Association of Physicists in Medicine regulations. Our dose monitoring can operate on incomplete DICOM header metadata by retrieving necessary information from the dose report image using optical character recognition. For the determination of the patient's effective diameter and water equivalent diameter, a convolutional neural network is employed for the semantic segmentation of the body area in axial CT slices. Validation experiments for the assessment of the SSDE determination and subsequent stages of our methodology involved a total of 335 CT series (60 352 images) from both public databases and our clinical data. We obtained a mean body area segmentation accuracy of 0.9955 and a Jaccard index of 0.9752, yielding a slice-wise mean absolute error of effective diameter below 2 mm and of water equivalent diameter below 1 mm, both below 1%. Three modes of the SSDE determination approach were investigated and compared to the results provided by the commercial system GE DoseWatch in three different body region categories: head, chest, and abdomen. Statistical analysis was employed to point out some significant remarks, especially in the head category.
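The size-specific quantities mentioned above follow the AAPM Report 220 definitions; a minimal sketch, assuming a body mask such as the one produced by the segmentation network, is:

    import numpy as np

    def effective_and_water_equivalent_diameter(ct_slice_hu, body_mask,
                                                pixel_area_mm2):
        """Per-slice effective diameter and water equivalent diameter
        (AAPM Report 220 definitions); body_mask is assumed to come from
        a body-area segmentation step."""
        area = body_mask.sum() * pixel_area_mm2            # body area in mm^2
        d_eff = 2.0 * np.sqrt(area / np.pi)                # effective diameter
        mean_hu = ct_slice_hu[body_mask].mean()
        area_w = (mean_hu / 1000.0 + 1.0) * area           # water equivalent area
        d_w = 2.0 * np.sqrt(area_w / np.pi)
        return d_eff, d_w

    # Toy slice: a 200x200-pixel block of water (0 HU) surrounded by air
    hu = np.full((512, 512), -1000.0)
    hu[156:356, 156:356] = 0.0
    mask = hu > -500
    print(effective_and_water_equivalent_diameter(hu, mask, pixel_area_mm2=0.9))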
Computer-aided diagnostic system kinds and pulmonary nodule detection efficacy
Kadhim, Omar Raad
Motlak, Hassan Jassim
Abdalla, Kasim Karam
International Journal of Electrical and Computer Engineering (IJECE)2022Journal Article, cited 0 times
Website
LIDC-IDRI
Classification
Computer Aided Detection (CADe)
Feature Extraction
LUNG
This paper summarizes the literature on computer-aided detection (CAD) systems used to identify and diagnose lung nodules in images obtained with computed tomography (CT) scanners. The importance of developing such systems lies in the fact that manually detecting lung nodules is painstaking, sequential work for radiologists and takes a long time. Moreover, pulmonary nodules have varied appearances and shapes, and the large number of slices generated by the scanner makes it difficult to accurately locate the nodules. Manual detection can also miss some nodules, especially when their diameter is less than 10 mm. The CAD system is therefore an essential assistant to the radiologist in nodule detection; it reduces the time consumed by detection and brings greater accuracy to the field. The objective of this paper is to follow up on current and previous work on lung cancer detection and lung nodule diagnosis. The review briefly covers a group of specialized systems in this field and the methods they use, with an emphasis on deep learning systems based on convolutional neural networks.
Histopathological carcinoma classification using parallel, cross‐concatenated and grouped convolutions deep neural network
Kadirappa, Ravindranath
Subbian, Deivalakshmi
Ramasamy, Pandeeswari
Ko, Seok‐Bum
International Journal of Imaging Systems and Technology2023Journal Article, cited 0 times
TCGA-LIHC
Abstract Cancer is more alarming in modern days due to its identification at later stages. Among cancers, lung, liver and colon cancers are the leading causes of untimely death. Manual cancer identification from histopathological images is time-consuming and labour-intensive, so computer-aided decision support systems are desired. A deep learning model is proposed in this paper to accurately identify cancer. Convolutional neural networks have shown great ability to identify the significant patterns for cancer classification. The proposed Parallel, Cross-Concatenated and Grouped Convolutions Deep Neural Network (PC2GCDN2) has been developed to obtain accurate patterns for classification. To prove the robustness of the model, it is evaluated on the KMC and TCGA-LIHC liver datasets and on the LC25000 dataset for lung and colon cancer classification. The proposed PC2GCDN2 model outperforms state-of-the-art methods. The model provides 5.5% improved accuracy compared to the LiverNet proposed by Aatresh et al. on the KMC dataset. On the LC25000 dataset, 2% improvement is observed compared to existing models. Performance evaluation metrics like Sensitivity, Specificity, Recall, F1-Score and Intersection-Over-Union are used to evaluate the performance. To the best of our knowledge, PC2GCDN2 can be considered a gold standard for multiple histopathology image classification tasks. PC2GCDN2 is able to classify the KMC and TCGA-LIHC liver datasets with 96.4% and 98.6% accuracy, respectively, which are the best results obtained till now. The performance has been superior on the LC25000 dataset, with 99.5% and 100% classification accuracy on the lung and colon datasets, while utilizing less than 0.5 million parameters.
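The grouped and cross-concatenated convolution idea in the network's name can be illustrated with a small PyTorch block; this is only a schematic stand-in for the published architecture:

    import torch
    import torch.nn as nn

    class GroupedCrossBlock(nn.Module):
        """Illustrative block only: two parallel grouped convolutions whose
        outputs are cross-concatenated and fused; the published layout is
        more elaborate."""
        def __init__(self, channels, groups=4):
            super().__init__()
            self.branch_a = nn.Conv2d(channels, channels, 3, padding=1,
                                      groups=groups)
            self.branch_b = nn.Conv2d(channels, channels, 5, padding=2,
                                      groups=groups)
            self.fuse = nn.Conv2d(2 * channels, channels, 1)

        def forward(self, x):
            a, b = self.branch_a(x), self.branch_b(x)
            return self.fuse(torch.cat([a, b], dim=1))  # cross-concatenation

    patch = torch.randn(1, 32, 64, 64)   # a histopathology feature map
    print(GroupedCrossBlock(32)(patch).shape)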
Homology-based radiomic features for prediction of the prognosis of lung cancer based on CT-based radiomics
Kadoya, Noriyuki
Tanaka, Shohei
Kajikawa, Tomohiro
Tanabe, Shunpei
Abe, Kota
Nakajima, Yujiro
Yamamoto, Takaya
Takahashi, Noriyoshi
Takeda, Kazuya
Dobashi, Suguru
Takeda, Ken
Nakane, Kazuaki
Jingu, Keiichi
Med Phys2020Journal Article, cited 0 times
Website
NSCLC Radiogenomics
RIDER Lung CT
QIN LUNG CT
Radiomics
PURPOSE: Radiomics is a new technique that enables noninvasive prognostic prediction by extracting features from medical images. Homology is a concept used in many branches of algebra and topology that can quantify the contact degree. In the present study, we developed homology-based radiomic features to predict the prognosis of non-small-cell lung cancer (NSCLC) patients and then evaluated the accuracy of this prediction method. METHODS: Four data sets were used: two to provide training and test data and two for the selection of robust radiomic features. All the data sets were downloaded from The Cancer Imaging Archive (TCIA). In two-dimensional cases, the Betti numbers consist of two values: b0 (the zero-dimensional Betti number), which is the number of isolated components, and b1 (the one-dimensional Betti number), which is the number of one-dimensional or "circular" holes. For homology-based evaluation, CT images must be converted to binarized images in which each pixel has two possible values: 0 or 1. All CT slices of the gross tumor volume were used for calculating the homology histogram. First, by changing the threshold of the CT value (range: -150 to 300 HU) for all its slices, we developed homology-based histograms for b0, b1, and b1/b0 using binarized images. All histograms were then summed, and the summed histogram was normalized by the number of slices. In total, 144 homology-based radiomic features were defined from the histogram. For comparison with the standard radiomic features, 107 radiomic features were calculated using the standard radiomics technique. To clarify the prognostic power, the relationship between the values of the homology-based radiomic features and overall survival was evaluated using a LASSO Cox regression model and the Kaplan-Meier method. The retained features with non-zero coefficients calculated by the LASSO Cox regression model were used for fitting the regression model. Moreover, these features were then integrated into a radiomics signature. An individualized rad score was calculated from a linear combination of the selected features, which were weighted by their respective coefficients. RESULTS: When the patients in the training and test data sets were stratified into high-risk and low-risk groups according to the rad scores, the overall survival of the groups was significantly different. The C-index values for the homology-based features (rad score), standard features (rad score), and tumor size were 0.625, 0.603, and 0.607, respectively, for the training data sets and 0.689, 0.668, and 0.667 for the test data sets. This result showed that homology-based radiomic features had slightly higher prediction power than the standard radiomic features. CONCLUSIONS: Prediction performance using homology-based radiomic features was comparable to or slightly higher than that using standard radiomic features. These findings suggest that homology-based radiomic features may have great potential for improving the prognostic prediction accuracy of CT-based radiomics, although it should be noted that this study has some limitations.
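The homology histogram construction described above can be sketched per CT slice; this assumes the 2D identity b1 = b0 - χ (Euler characteristic) and uses SciPy/scikit-image, which may differ in detail from the authors' implementation:

    import numpy as np
    from scipy import ndimage
    from skimage.measure import euler_number

    def betti_histogram(ct_slice_hu, thresholds=range(-150, 301, 10)):
        """Binarize at each HU threshold, count b0 (connected components)
        and b1 = b0 - Euler number (2D holes)."""
        b0s, b1s = [], []
        for t in thresholds:
            binary = ct_slice_hu >= t
            b0 = ndimage.label(binary)[1]
            b1 = b0 - euler_number(binary, connectivity=1)
            b0s.append(b0)
            b1s.append(b1)
        return np.array(b0s), np.array(b1s)

    slice_hu = np.random.randint(-200, 350, size=(64, 64))  # toy GTV slice
    b0, b1 = betti_histogram(slice_hu)
    ratio = b1 / np.maximum(b0, 1)   # the b1/b0 histogram described above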
Extraction of Tumour in Breast MRI using Joint Thresholding and Segmentation – A Study
Breast Cancer (BC) is one of the most severe conditions largely affecting women. Owing to its significance, a range of procedures is available for early detection and treatment to save the patient. Clinical-level diagnosis of BC is done using (i) image-supported detection and (ii) Core-Needle-Biopsy (CNB) assisted confirmation. The proposed work aims to develop a computerized scheme to detect the Breast-Tumor-Section (BTS) from breast MRI slices. This work implements a joint thresholding and segmentation methodology to enhance and extract the BTS from the 2D MRI slices. A tri-level thresholding based on the Slime-Mould-Algorithm and Shannon's-Entropy (SMA+SE) is implemented to enhance the BTS, and Watershed-Segmentation (WS) is implemented to extract the BTS. After extracting the BTS, a comparison between the BTS and the Ground-Truth image is performed and the necessary Image-Performance-Values (IPV) are computed. In this work, the axial, coronal and sagittal slices of 2D breast MRI are separately examined and the attained results are presented.
Evaluation of brain tumor using brain MRI with modified-moth-flame algorithm and Kapur’s thresholding: a study
Kadry, Seifedine
Rajinikanth, V
Raja, N Sri Madhava
Hemanth, D Jude
Hannon, Naeem MS
Raj, Alex Noel Joseph
Evolutionary Intelligence2021Journal Article, cited 0 times
TCGA-GBM
Segmentation
TwoPath U-Net for Automatic Brain Tumor Segmentation from Multimodal MRI Data
Kaewrak, Keerati
Soraghan, John
Di Caterina, Gaetano
Grose, Derek
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
A novel encoder-decoder deep learning network called TwoPath U-Net for the multi-class automatic brain tumor segmentation task is presented. The network uses cascaded local and global feature extraction paths in its down-sampling path, which allows the network to learn different aspects of both low-level and high-level features. The proposed network architecture, using full-image and patch input techniques, was trained on the BraTS 2020 training dataset. We tested the network performance using the BraTS 2019 validation dataset and obtained mean Dice scores of 0.76, 0.64, and 0.58 and 95% Hausdorff distances of 25.05, 32.83, and 37.57 for the whole tumor, tumor core, and enhancing tumor regions, respectively.
Detection of lung tumor using dual tree complex wavelet transform and co‐active adaptive neuro fuzzy inference system classification approach
Kailasam, Manoj Senthil
Thiagarajan, MeeraDevi
International Journal of Imaging Systems and Technology2021Journal Article, cited 0 times
Website
LIDC-IDRI
Wavelet
Computed Tomography (CT)
Automatic segmentation
LUNG
The automatic detection and location of tumor regions in lung images is important for providing timely medical treatment to patients in order to save their lives. In this article, a machine learning-based lung tumor detection, classification and segmentation algorithm is proposed. The tumor classification phase first smooths the source lung computed tomography image using an adaptive median filter, and then the dual-tree complex wavelet transform (DT-CWT) is applied to the smoothed lung image to decompose the entire image into a number of sub-bands. Along with the decomposed sub-bands, DWT, pattern, and co-occurrence features are computed and classified using a co-active adaptive neuro fuzzy inference system (CANFIS). The tumor segmentation phase uses morphological functions on the classified abnormal lung image to locate the tumor regions. Multiple evaluation parameters are used to evaluate the proposed method, which is compared with other state-of-the-art methods on the same lung images from an open-access dataset.
Deep Learning Enhanced CNN with Bio-Inspired Techniques and BCE For Effective Lung Nodules Detection & Classification For Accurate Diagnosis
Kalaivani, D
Dheepa, G
Indian Journal of Science and Technology2024Journal Article, cited 0 times
Lung-PET-CT-Dx
Design and Implementation of the Pre-Clinical DICOM Standard in Multi-Cohort Murine Studies
Kalen, Joseph D.
Clunie, David A.
Liu, Yanling
Tatum, James L.
Jacobs, Paula M.
Kirby, Justin
Freymann, John B.
Wagner, Ulrike
Smith, Kirk E.
Suloway, Christian
Doroshow, James H.
Tomography2021Journal Article, cited 0 times
PDMR-425362-245-T
The small animal imaging Digital Imaging and Communications in Medicine (DICOM) acquisition context structured report (SR) was developed to incorporate pre-clinical data in an established DICOM format for rapid queries and comparison of clinical and non-clinical datasets. Established terminologies (i.e., anesthesia, mouse model nomenclature, veterinary definitions, NCI Metathesaurus) were utilized to assist in defining terms implemented in pre-clinical imaging, and new codes were added to integrate the specific small animal procedures and handling processes, such as housing, biosafety level, and pre-imaging rodent preparation. In addition to the standard DICOM fields, the small animal SR includes fields specific to small animal imaging such as tumor graft (i.e., melanoma), tissue of origin, mouse strain, and exogenous material, including the date and site of injection. Additionally, the mapping and harmonization developed by the Mouse-Human Anatomy Project were implemented to assist co-clinical research by providing cross-referenced human-to-mouse anatomies. Furthermore, since small animal imaging performs multi-mouse imaging for high throughput, and queries for co-clinical research require a one-to-one relation, an image-splitting routine was developed, new Unique Identifiers (UIDs) were created, and the original patient name and ID were saved for reference to the original dataset. We report the implementation of the small animal SR using MRI datasets (as an example) of patient-derived xenograft mouse models, uploaded to The Cancer Imaging Archive (TCIA) for public dissemination, and also implemented this on PET/CT datasets. The small animal SR enhancement provides researchers the ability to query pre-clinical and clinical datasets of any DICOM modality using standard vocabularies, and enhances co-clinical studies.
Artificial intelligence applications in radiotherapy: The role of the FAIR data principles.
Radiotherapy is one of the main treatment modalities used for cancer. Nowadays, due to emerging artificial intelligence (AI) technologies, radiotherapy has become a broader field. This thesis investigated how AI can make the lives of doctors, physicists and researchers easier. It also showed that routine clinical tasks, such as quality assurance tests, can be automated. Researchers can reuse machine-readable data, while physicists can validate and improve novel treatment techniques such as proton therapy. These three pillars contribute to the improvement of patient care (personalised radiotherapy). In conclusion, this technological revolution requires a rethinking of the traditional professional roles in radiotherapy and of the design of AI studies. The thesis concluded that radiotherapy professionals and researchers can improve their ability to perform tasks with AI as a supplementary tool.
FAIR-compliant clinical, radiomics and DICOM metadata of RIDER, interobserver, Lung1 and head-Neck1 TCIA collections
Kalendralis, Petros
Shi, Zhenwei
Traverso, Alberto
Choudhury, Ananya
Sloep, Matthijs
Zhovannik, Ivan
Starmans, Martijn P A
Grittner, Detlef
Feltens, Peter
Monshouwer, Rene
Klein, Stefan
Fijten, Rianne
Aerts, Hugo
Dekker, Andre
van Soest, Johan
Wee, Leonard
Med Phys2020Journal Article, cited 0 times
Website
Radiomics
NSCLC-Radiomics
RIDER Lung CT
Head-Neck-Radiomics-HN1
NSCLC-Radiomics- Interobserver1
Imaging features
PURPOSE: One of the most frequently cited radiomics investigations showed that features automatically extracted from routine clinical images could be used in prognostic modeling. These images have been made publicly accessible via The Cancer Imaging Archive (TCIA). There have been numerous requests for additional explanatory metadata on the following datasets - RIDER, Interobserver, Lung1, and Head-Neck1. To support repeatability, reproducibility, generalizability, and transparency in radiomics research, we publish the subjects' clinical data, extracted radiomics features, and digital imaging and communications in medicine (DICOM) headers of these four datasets with descriptive metadata, in order to be more compliant with findable, accessible, interoperable, and reusable (FAIR) data management principles. ACQUISITION AND VALIDATION METHODS: Overall survival time intervals were updated using a national citizens registry after internal ethics board approval. Spatial offsets of the primary gross tumor volume (GTV) regions of interest (ROIs) associated with the Lung1 CT series were improved on the TCIA. GTV radiomics features were extracted using the open-source Ontology-Guided Radiomics Analysis Workflow (O-RAW). We reshaped the output of O-RAW to map features and extraction settings to the latest version of the Radiomics Ontology, so as to be consistent with the Image Biomarker Standardization Initiative (IBSI). DICOM metadata was extracted using a research version of Semantic DICOM (SOHARD, GmbH, Fuerth, Germany). Subjects' clinical data were described with metadata using the Radiation Oncology Ontology. All of the above were published in the Resource Description Framework (RDF), that is, as triples. Example SPARQL queries are shared with the reader to use on the online triples archive, and are intended to illustrate how to exploit this data submission. DATA FORMAT: The accumulated RDF data are publicly accessible through a SPARQL endpoint where the triples are archived. The endpoint is remotely queried through a graph database web application at http://sparql.cancerdata.org. SPARQL queries are intrinsically federated, such that we can efficiently cross-reference clinical, DICOM, and radiomics data within a single query, while being agnostic to the original data format and coding system. The federated queries work in the same way even if the RDF data were partitioned across multiple servers and dispersed physical locations. POTENTIAL APPLICATIONS: The public availability of these data resources is intended to support radiomics features replication, repeatability, and reproducibility studies by the academic community. The example SPARQL queries may be freely used and modified by readers depending on their research question. Data interoperability and reusability are supported by referencing existing public ontologies. The RDF data are readily findable and accessible through the aforementioned link. Scripts used to create the RDF are made available at a code repository linked to this submission: https://gitlab.com/UM-CDS/FAIR-compliant_clinical_radiomics_and_DICOM_metadata.
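A hedged example of querying the endpoint mentioned above from Python with SPARQLWrapper; the exact endpoint path and the graph's predicates are assumptions here, so readers should consult the paper's published example queries:

    from SPARQLWrapper import SPARQLWrapper, JSON

    # Illustrative query against the public endpoint named above;
    # the "/sparql" path suffix is assumed
    endpoint = SPARQLWrapper("http://sparql.cancerdata.org/sparql")
    endpoint.setQuery("""
        SELECT ?subject ?predicate ?object
        WHERE { ?subject ?predicate ?object }
        LIMIT 10
    """)
    endpoint.setReturnFormat(JSON)

    # Print the first few triples to explore the graph's vocabulary
    for row in endpoint.query().convert()["results"]["bindings"]:
        print(row["subject"]["value"], row["predicate"]["value"])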
Making radiotherapy more efficient with FAIR data
Kalendralis, Petros
Sloep, Matthijs
van Soest, Johan
Dekker, Andre
Fijten, Rianne
Physica Medica2021Journal Article, cited 0 times
NSCLC-Radiomics
Given the rapid growth of artificial intelligence (AI) applications in radiotherapy and the related transformations toward the data-driven healthcare domain, this article summarizes the need and usage of the FAIR (Findable, Accessible, Interoperable, Reusable) data principles in radiotherapy. This work introduces the FAIR data concept, presents practical and relevant use cases and the future role of the different parties involved. The goal of this article is to provide guidance and potential applications of FAIR to various radiotherapy stakeholders, focusing on the central role of medical physicists.
Multicenter CT phantoms public dataset for radiomics reproducibility tests
Kalendralis, Petros
Traverso, Alberto
Shi, Zhenwei
Zhovannik, Ivan
Monshouwer, Rene
Starmans, Martijn P A
Klein, Stefan
Pfaehler, Elisabeth
Boellaard, Ronald
Dekker, Andre
Wee, Leonard
Med Phys2019Journal Article, cited 0 times
Credence-Cartridge-Radiomics-Phantom
Algorithm Development
Reproducibility
PURPOSE: The aim of this paper is to describe a public, open-access, computed tomography (CT) phantom image set acquired at three centers and collected especially for radiomics reproducibility research. The dataset is useful for testing radiomic feature reproducibility with respect to various parameters, such as acquisition settings, scanners, and reconstruction algorithms. ACQUISITION AND VALIDATION METHODS: Three phantoms were scanned in three independent institutions. Images of the following phantoms were acquired: Catphan 700 and COPDGene Phantom II (Phantom Laboratory, Greenwich, NY, USA), and the Triple Modality 3D Abdominal Phantom (CIRS, Norfolk, VA, USA). Data were collected at three Dutch medical centers: MAASTRO Clinic (Maastricht, NL), Radboud University Medical Center (Nijmegen, NL), and University Medical Center Groningen (Groningen, NL) with scanners from two different manufacturers, Siemens Healthcare and Philips Healthcare. The following acquisition parameters were varied in the phantom scans: slice thickness, reconstruction kernels, and tube current. DATA FORMAT AND USAGE NOTES: We made the dataset publicly available on the Dutch instance of "Extensible Neuroimaging Archive Toolkit-XNAT" (https://xnat.bmia.nl). The dataset is freely available and reusable with attribution (Creative Commons 3.0 license). POTENTIAL APPLICATIONS: Our goal was to provide a findable, open-access, annotated, and reusable CT phantom dataset for radiomics reproducibility studies. Reproducibility testing and harmonization are fundamental requirements for wide generalizability of radiomics-based clinical prediction models. It is highly desirable to include only reproducible features in models, to be more assured of external validity across hitherto unseen contexts. In this view, phantom data from different centers represent a valuable source of information to exclude CT radiomic features that may already be unstable with respect to simplified structures and tightly controlled scan settings. The intended extension of our shared dataset is to include other modalities and phantoms with more realistic lesion simulations.
Pulmonary Nodule Classification in Lung Cancer from 3D Thoracic CT Scans Using fastai and MONAI
Kaliyugarasan, Satheshkumar
Lundervold, Arvid
Lundervold, Alexander Selvikvåg
International Journal of Interactive Multimedia and Artificial Intelligence2021Journal Article, cited 0 times
Website
LIDC-IDRI
Computed Tomography (CT)
Supervised
Classification
Convolutional Neural Network (CNN)
Jupyter notebook
We construct a convolutional neural network to classify pulmonary nodules as malignant or benign in the context of lung cancer. To construct and train our model, we use our novel extension of the fastai deep learning framework to 3D medical imaging tasks, combined with the MONAI deep learning library. We train and evaluate the model using a large, openly available data set of annotated thoracic CT scans. Our model achieves a nodule classification accuracy of 92.4% and a ROC AUC of 97% when compared to a “ground truth” based on multiple human raters' subjective assessment of malignancy. We further evaluate our approach by predicting patient-level diagnoses of cancer, achieving a test set accuracy of 75%. This is higher than the 70% obtained by aggregating the human raters' assessments. Class activation maps are applied to investigate the features used by our classifier, enabling a rudimentary level of explainability for what is otherwise close to “black box” predictions. As the classification of structures in chest CT scans is useful across a variety of diagnostic and prognostic tasks in radiology, our approach has broad applicability. As we aimed to construct a fully reproducible system that can be compared to new proposed methods and easily be adapted and extended, the full source code of our work is available at https://github.com/MMIV-ML/Lung-CT-fastai-2020.
Med-NCA: Robust and Lightweight Segmentation with Neural Cellular Automata
Kalkhof, John
González, Camila
Mukhopadhyay, Anirban
2023Book Section, cited 0 times
ISBI-MR-Prostate-2013
Access to the proper infrastructure is critical when performing medical image segmentation with Deep Learning. This requirement makes it difficult to run state-of-the-art segmentation models in resource-constrained scenarios like primary care facilities in rural areas and during crises. The recently emerging field of Neural Cellular Automata (NCA) has shown that locally interacting one-cell models can achieve competitive results in tasks such as image generation or segmentations in low-resolution inputs. However, they are constrained by high VRAM requirements and the difficulty of reaching convergence for high-resolution images. To counteract these limitations we propose Med-NCA, an end-to-end NCA training pipeline for high-resolution image segmentation. Our method follows a two-step process. Global knowledge is first communicated between cells across the downscaled image. Following that, patch-based segmentation is performed. Our proposed Med-NCA outperforms the classic UNet by 2% and 3% Dice for hippocampus and prostate segmentation, respectively, while also being 500 times smaller. We also show that Med-NCA is by design invariant with respect to image scale, shape and translation, experiencing only slight performance degradation even with strong shifts; and is robust against MRI acquisition artefacts. Med-NCA enables high-resolution medical image segmentation even on a Raspberry Pi B+, arguably the smallest device able to run PyTorch and that can be powered by a standard power bank.
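The local, one-cell update rule that NCAs build on can be sketched in PyTorch as below; this generic step is not the published Med-NCA, whose two-step multi-scale pipeline is described above:

    import torch
    import torch.nn as nn

    class NCAStep(nn.Module):
        """One update step of a segmentation NCA: each cell perceives its
        3x3 neighborhood and updates its hidden state (a generic sketch)."""
        def __init__(self, channels=16, hidden=64):
            super().__init__()
            # Depthwise 3x3 "perception" of each cell's neighborhood
            self.perceive = nn.Conv2d(channels, 3 * channels, 3, padding=1,
                                      groups=channels)
            self.update = nn.Sequential(
                nn.Conv2d(3 * channels, hidden, 1), nn.ReLU(),
                nn.Conv2d(hidden, channels, 1))

        def forward(self, state, steps=10):
            for _ in range(steps):
                # Stochastic update mask emulates asynchronous cell firing
                fire = (torch.rand_like(state[:, :1]) > 0.5).float()
                state = state + fire * self.update(self.perceive(state))
            return state

    state = torch.zeros(1, 16, 64, 64)   # hidden cell states over the image grid
    print(NCAStep()(state).shape)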
Radiomics of Lung Nodules: A Multi-Institutional Study of Robustness and Agreement of Quantitative Imaging Features
Kalpathy-Cramer, J.
Mamomov, A.
Zhao, B.
Lu, L.
Cherezov, D.
Napel, S.
Echegaray, S.
Rubin, D.
McNitt-Gray, M.
Lo, P.
Sieren, J. C.
Uthoff, J.
Dilger, S. K.
Driscoll, B.
Yeung, I.
Hadjiiski, L.
Cha, K.
Balagurunathan, Y.
Gillies, R.
Goldgof, D.
Tomography: a journal for imaging research2016Journal Article, cited 19 times
Website
Radiomics
QIN
LUNG
Segmentation
RIDER Lung CT
Phantom FDA
NSCLC Radiogenomics
LIDC-IDRI
Radiomic features
lung cancer
reproducibility
A Comparison of Lung Nodule Segmentation Algorithms: Methods and Results from a Multi-institutional Study
Kalpathy-Cramer, Jayashree
Zhao, Binsheng
Goldgof, Dmitry
Gu, Yuhua
Wang, Xingwei
Yang, Hao
Tan, Yongqiang
Gillies, Robert
Napel, Sandy
Journal of Digital Imaging2016Journal Article, cited 18 times
Website
LUNG
Computed Tomography (CT)
Tumor volume estimation, as well as accurate and reproducible border segmentation in medical images, are important in the diagnosis, staging, and assessment of response to cancer therapy. The goal of this study was to demonstrate the feasibility of a multi-institutional effort to assess the repeatability and reproducibility of nodule borders and volume estimate bias of computerized segmentation algorithms in CT images of lung cancer, and to provide results from such a study. The dataset used for this evaluation consisted of 52 tumors in 41 CT volumes (40 patient datasets and 1 dataset containing scans of 12 phantom nodules of known volume) from five collections available in The Cancer Imaging Archive. Three academic institutions developing lung nodule segmentation algorithms submitted results for three repeat runs for each of the nodules. We compared the performance of lung nodule segmentation algorithms by assessing several measurements of spatial overlap and volume measurement. Nodule sizes varied from 29 μl to 66 ml and demonstrated a diversity of shapes. Agreement in spatial overlap of segmentations was significantly higher for multiple runs of the same algorithm than between segmentations generated by different algorithms (p < 0.05) and was significantly higher on the phantom dataset compared to the other datasets (p < 0.05). Algorithms differed significantly in the bias of the measured volumes of the phantom nodules (p < 0.05), underscoring the need for assessing performance on clinical data in addition to phantoms. Algorithms that most accurately estimated nodule volumes were not the most repeatable, emphasizing the need to evaluate both their accuracy and precision. There were considerable differences between algorithms, especially in a subset of heterogeneous nodules, underscoring the recommendation that the same software be used at all time points in longitudinal studies.
Imaging-based stratification of adult gliomas prognosticates survival and correlates with the 2021 WHO classification
Kamble, A. N.
Agrawal, N. K.
Koundal, S.
Bhargava, S.
Kamble, A. N.
Joyner, D. A.
Kalelioglu, T.
Patel, S. H.
Jain, R.
Neuroradiology2022Journal Article, cited 0 times
Website
REMBRANDT
VASARI
TCGA-GBM
TCGA-LGG
Glioblastoma Multiforme (GBM)
Glioma
Isocitrate dehydrogenase (IDH) mutation
Magnetic Resonance Imaging (MRI)
Classification
BACKGROUND: Because of the limited global accessibility, delay, and cost of genetic testing, there is a clinical need for an imaging-based stratification of gliomas that can prognosticate survival and correlate with the 2021-WHO classification. METHODS: In this retrospective study, adult primary glioma patients with pre-surgery/pre-treatment brain MRI images having T2, FLAIR, T1, T1 post-contrast, and DWI sequences, together with survival information, were included in a TCIA training dataset (n = 275) and an independent validation dataset (n = 200). A flowchart for imaging-based stratification of adult gliomas (IBGS) was created in consensus by three authors to encompass all adult glioma types. Diagnostic features used were the T2-FLAIR mismatch sign, central necrosis with peripheral enhancement, diffusion restriction, and the continuous cortex sign. Roman numerals (I, II, and III) denote IBGS types. Two independent teams of three and two radiologists, blinded to genetic, histology, and survival information, manually read each MRI into one of the three types based on the flowchart. Overall survival analysis was done using age-adjusted Cox regression analysis, which provided both the hazard ratio (HR) and the area under the curve (AUC) for each stratification system (IBGS and 2021-WHO). The sensitivity and specificity of each IBGS type were analyzed with cross-tables to identify the corresponding 2021-WHO genotype. RESULTS: Imaging-based stratification was statistically significant in predicting survival in both datasets, with good inter-observer agreement (age-adjusted Cox regression, AUC > 0.5, k > 0.6, p < 0.001). IBGS type-I, type-II, and type-III gliomas had good specificity in identifying IDHmut 1p19q-codel oligodendroglioma (training - 97%, validation - 85%); IDHmut 1p19q non-codel astrocytoma (training - 80%, validation - 85.9%); and IDHwt glioblastoma (training - 76.5%, validation - 87.3%), respectively (p-value < 0.01). CONCLUSIONS: Imaging-based stratification of adult diffuse gliomas predicted patient survival and correlated well with the 2021-WHO glioma classification.
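The age-adjusted Cox analysis described above can be outlined with the lifelines library; the survival table below is a random placeholder, not study data:

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    # Placeholder survival table: IBGS type (1-3), age, follow-up, death event
    df = pd.DataFrame({
        "ibgs_type": np.random.randint(1, 4, 200),
        "age": np.random.randint(20, 80, 200),
        "months": np.random.exponential(24, 200),
        "event": np.random.randint(0, 2, 200),
    })

    # Including age as a covariate yields age-adjusted hazard ratios
    cph = CoxPHFitter().fit(df, duration_col="months", event_col="event")
    cph.print_summary()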
A low cost approach for brain tumor segmentation based on intensity modeling and 3D Random Walker
Kanas, Vasileios G
Zacharaki, Evangelia I
Davatzikos, Christos
Sgarbas, Kyriakos N
Megalooikonomou, Vasileios
Biomedical Signal Processing and Control2015Journal Article, cited 15 times
Website
Algorithm Development
BRAIN
Objective: Magnetic resonance imaging (MRI) is the primary imaging technique for evaluation of brain tumor progression before and after radiotherapy or surgery. The purpose of the current study is to exploit conventional MR modalities in order to identify and segment brain images with neoplasms. Methods: Four conventional MR sequences, namely T1-weighted, gadolinium-enhanced T1-weighted, T2-weighted, and fluid attenuation inversion recovery, are combined with machine learning techniques to extract global and local information of brain tissues and model the healthy and neoplastic imaging profiles. Healthy tissue clustering, outlier detection, and geometric and spatial constraints are applied to perform a first segmentation, which is further improved by a modified multiparametric Random Walker segmentation method. The proposed framework is applied on clinical data from 57 brain tumor patients (acquired by different scanners and acquisition parameters) and on 25 synthetic MR images with tumors. Assessment is performed against expert-defined tissue masks and is based on sensitivity analysis and the Dice coefficient. Results: The results demonstrate that the proposed multiparametric framework differentiates neoplastic tissues with accuracy similar to most current approaches while it achieves lower computational cost and a higher degree of automation. Conclusion: This study might provide a decision-support tool for neoplastic tissue segmentation, which can assist in treatment planning for tumor resection or focused radiotherapy.
Learning MRI-based classification models for MGMT methylation status prediction in glioblastoma
Kanas, Vasileios G
Zacharaki, Evangelia I
Thomas, Ginu A
Zinn, Pascal O
Megalooikonomou, Vasileios
Colen, Rivka R
Computer Methods and Programs in Biomedicine2017Journal Article, cited 16 times
Website
TCGA-GBM
Radiogenomics
BRAIN
Background and objective: The O6-methylguanine-DNA-methyltransferase (MGMT) promoter methylation has been shown to be associated with improved outcomes in patients with glioblastoma (GBM) and may be a predictive marker of sensitivity to chemotherapy. However, determination of the MGMT promoter methylation status requires tissue obtained via surgical resection or biopsy. The aim of this study was to assess the ability of quantitative and qualitative imaging variables in predicting MGMT methylation status noninvasively. Methods: A retrospective analysis of MR images from GBM patients was conducted. Multivariate prediction models were obtained by machine-learning methods and tested on data from The Cancer Genome Atlas (TCGA) database. Results: The status of MGMT promoter methylation was predicted with an accuracy of up to 73.6%. Experimental analysis showed that the edema/necrosis volume ratio, tumor/necrosis volume ratio, edema volume, and tumor location and enhancement characteristics were the most significant variables with respect to the status of MGMT promoter methylation in GBM. Conclusions: The obtained results provide further evidence of an association between standard preoperative MRI variables and MGMT methylation status in GBM.
Weakly-supervised learning for lung carcinoma classification using deep learning
Kanavati, Fahdi
Toyokawa, Gouji
Momosaki, Seiya
Rambeau, Michael
Kozuma, Yuka
Shoji, Fumihiro
Yamazaki, Koji
Takeo, Sadanori
Iizuka, Osamu
Tsuneki, Masayuki
Scientific Reports2020Journal Article, cited 52 times
Website
TCGA-LUAD
TCGA-LUSC
CPTAC-LSCC
Pathology
Deep Learning
Neurosense: deep sensing of full or near-full coverage head/brain scans in human magnetic resonance imaging
Kanber, B.
Ruffle, J.
Cardoso, J.
Ourselin, S.
Ciccarelli, O.
Neuroinformatics2019Journal Article, cited 0 times
BRAIN
Magnetic Resonance Imaging (MRI)
Classification
The application of automated algorithms to imaging requires knowledge of its content, a curatorial task, for which we ordinarily rely on the Digital Imaging and Communications in Medicine (DICOM) header as the only source of image meta-data. However, identifying brain MRI scans that have full or near-full coverage among a large number (e.g. >5000) of scans comprising both head/brain and other body parts is a time-consuming task that cannot be automated with the use of the information stored in the DICOM header attributes alone. Depending on the clinical scenario, an entire set of scans acquired in a single visit may often be labelled “BRAIN” in the DICOM field 0018,0015 (Body Part Examined), while the individual scans will often not only include brain scans with full coverage, but also others with partial brain coverage, scans of the spinal cord, and in some cases other body parts.
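For context, the DICOM attribute the authors refer to is easy to read with pydicom, which is precisely why it is insufficient on its own: every series below may report "BRAIN" regardless of actual coverage. A minimal sketch, with the directory path as a placeholder:

from pathlib import Path
import pydicom

# Group series by the Body Part Examined tag (0018,0015); note that this
# label alone cannot distinguish full-coverage brain scans from partial ones.
for f in Path("study_dir").glob("**/*.dcm"):  # hypothetical directory
    ds = pydicom.dcmread(f, stop_before_pixels=True)
    body_part = getattr(ds, "BodyPartExamined", "<missing>")
    print(f.name, body_part, ds.get("SeriesDescription", ""))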
LRR-CED: low-resolution reconstruction-aware convolutional encoder-decoder network for direct sparse-view CT image reconstruction
Kandarpa, V. S. S.
Perelli, A.
Bousse, A.
Visvikis, D.
Phys Med Biol2022Journal Article, cited 0 times
Website
Lung-PET-CT-Dx
Algorithm Development
*Image Processing, Computer-Assisted/methods
Phantoms, Imaging
*Tomography, X-Ray Computed/methods
X-Rays
deep learning
sparse-view CT
Objective: Sparse-view computed tomography (CT) reconstruction has been at the forefront of research in medical imaging. Reducing the total x-ray radiation dose to the patient while preserving the reconstruction accuracy is a big challenge. The sparse-view approach is based on reducing the number of rotation angles, which leads to poor quality reconstructed images as it introduces several artifacts. These artifacts are more clearly visible in traditional reconstruction methods like the filtered-backprojection (FBP) algorithm. Approach: Over the years, several model-based iterative and more recently deep learning-based methods have been proposed to improve sparse-view CT reconstruction. Many deep learning-based methods improve FBP-reconstructed images as a post-processing step. In this work, we propose a direct deep learning-based reconstruction that exploits the information from low-dimensional scout images to learn the projection-to-image mapping. This is done by concatenating FBP scout images at multiple resolutions in the decoder part of a convolutional encoder-decoder (CED). Main results: This approach is investigated on two different networks, based on Dense Blocks and U-Net, to show that a direct mapping can be learned from a sinogram to an image. The results are compared to two post-processing deep learning methods (FBP-ConvNet and DD-Net) and an iterative method that uses a total variation (TV) regularization. Significance: This work presents a novel method that uses information from both sinogram and low-resolution scout images for sparse-view CT image reconstruction. We also generalize this idea by demonstrating results with two different neural networks. This work is in the direction of exploring deep learning across the various stages of the image reconstruction pipeline involving data correction, domain transfer and image improvement.
A deep convolutional neural network using directional wavelets for low-dose X-ray CT reconstruction
Kang, E.
Min, J.
Ye, J. C.
Med Phys2017Journal Article, cited 568 times
Website
LDCT-and-Projection-data
*Radiation Dosage
Signal-To-Noise Ratio
Computed Tomography (CT)
Wavelet Analysis
Convolutional Neural Network (CNN)
Deep Learning
PURPOSE: Due to the potential risk of inducing cancer, radiation exposure by X-ray CT devices should be reduced for routine patient scanning. However, in low-dose X-ray CT, severe artifacts typically occur due to photon starvation, beam hardening, and other causes, all of which decrease the reliability of the diagnosis. Thus, a high-quality reconstruction method from low-dose X-ray CT data has become a major research topic in the CT community. Conventional model-based de-noising approaches are, however, computationally very expensive, and image-domain de-noising approaches cannot readily remove CT-specific noise patterns. To tackle these problems, we want to develop a new low-dose X-ray CT algorithm based on a deep-learning approach. METHOD: We propose an algorithm which uses a deep convolutional neural network (CNN) which is applied to the wavelet transform coefficients of low-dose CT images. More specifically, using a directional wavelet transform to extract the directional component of artifacts and exploit the intra- and inter- band correlations, our deep network can effectively suppress CT-specific noise. In addition, our CNN is designed with a residual learning architecture for faster network training and better performance. RESULTS: Experimental results confirm that the proposed algorithm effectively removes complex noise patterns from CT images derived from a reduced X-ray dose. In addition, we show that the wavelet-domain CNN is efficient when used to remove noise from low-dose CT compared to existing approaches. Our results were rigorously evaluated by several radiologists at the Mayo Clinic and won second place at the 2016 "Low-Dose CT Grand Challenge." CONCLUSIONS: To the best of our knowledge, this work is the first deep-learning architecture for low-dose CT reconstruction which has been rigorously evaluated and proven to be effective. In addition, the proposed algorithm, in contrast to existing model-based iterative reconstruction (MBIR) methods, has considerable potential to benefit from large data sets. Therefore, we believe that the proposed algorithm opens a new direction in the area of low-dose CT research.
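The core idea of operating on wavelet coefficients rather than pixels can be sketched with PyWavelets. The paper uses a directional wavelet transform and a trained CNN; the stand-in below substitutes a standard 2D DWT and simple soft-thresholding purely to show the decompose-process-reconstruct loop, so everything beyond pywt itself is an assumption.

import numpy as np
import pywt

def wavelet_domain_denoise(img, wavelet="db4", level=3, thresh=0.05):
    # Decompose the CT slice into wavelet sub-bands.
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # A trained CNN would map noisy sub-bands to clean ones here;
    # soft-thresholding the detail bands is a crude placeholder.
    denoised = [coeffs[0]]
    for details in coeffs[1:]:
        denoised.append(tuple(pywt.threshold(d, thresh, mode="soft") for d in details))
    # Reconstruct the image from the processed coefficients.
    return pywt.waverec2(denoised, wavelet)

slice_ld = np.random.rand(512, 512).astype(np.float32)  # stand-in low-dose slice
print(wavelet_domain_denoise(slice_ld).shape)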
3D multi-view convolutional neural networks for lung nodule classification
Kang, Guixia
Liu, Kui
Hou, Beibei
Zhang, Ningbo
PLoS One2017Journal Article, cited 7 times
Website
LIDC-IDRI
lung cancer
3D convolutional neural network (CNN)
The contribution of axillary lymph node volume to recurrence-free survival status in breast cancer patients with sub-stratification by molecular subtypes and pathological complete response
Kang, James
Li, Haifang
Cattell, Renee
Talanki, Varsha
Cohen, Jules A.
Bernstein, Clifford S.
Duong, Tim
Breast Cancer Research2020Journal Article, cited 0 times
Website
ISPY1/ACRIN 6657
Purpose: This study sought to examine the contribution of axillary lymph node (LN) volume to recurrence-free survival (RFS) in breast cancer patients with sub-stratification by molecular subtypes, and full or nodal PCR. Methods: The largest LN volumes per patient at pre-neoadjuvant chemotherapy on standard clinical breast 1.5-Tesla MRI, 3 molecular subtypes, full, breast, and nodal PCR, and 10-year RFS were tabulated (N = 110 patients from MRIs of the I-SPY-1 TRIAL). A volume threshold of two standard deviations was used to categorize large versus small LNs for sub-stratification. In addition, "normal" node volumes were determined from a different cohort of 218 axillary LNs. Results: LN volumes (4.07 ± 5.45 cm3) were significantly larger than normal axillary LN volumes (0.646 ± 0.657 cm3, P = 10^-16). Full and nodal pathologic complete response (PCR) was not dependent on pre-neoadjuvant chemotherapy nodal volume (P > .05). The HR+/HER2– group had smaller axillary LN volumes than the HER2+ and triple-negative groups (P < .05). Survival was not dependent on pre-treatment axillary LN volumes alone (P = .29). However, when sub-stratified by PCR, the large LN group with full (P = .011) or nodal PCR (P = .0026) both showed better recurrence-free survival than the small LN group. There was a significant difference in RFS when the small node group was separated by the 3 molecular subtypes (P = .036) but not the large node group (P = .97). Conclusions: This study found an interaction of axillary lymph node volume, pathological complete response, and molecular subtypes that informs recurrence-free survival status. Improved characterization of the axillary lymph nodes has the potential to improve the management of breast cancer patients.
Multi-Institutional Validation of Deep Learning for Pretreatment Identification of Extranodal Extension in Head and Neck Squamous Cell Carcinoma
Kann, B. H.
Hicks, D. F.
Payabvash, S.
Mahajan, A.
Du, J.
Gupta, V.
Park, H. S.
Yu, J. B.
Yarbrough, W. G.
Burtness, B. A.
Husain, Z. A.
Aneja, S.
J Clin Oncol2020Journal Article, cited 5 times
Website
TCGA-HNSC
head and neck squamous cell carcinoma (HNSCC)
Deep Learning
Classification
PURPOSE: Extranodal extension (ENE) is a well-established poor prognosticator and an indication for adjuvant treatment escalation in patients with head and neck squamous cell carcinoma (HNSCC). Identification of ENE on pretreatment imaging represents a diagnostic challenge that limits its clinical utility. We previously developed a deep learning algorithm that identifies ENE on pretreatment computed tomography (CT) imaging in patients with HNSCC. We sought to validate our algorithm performance for patients from a diverse set of institutions and compare its diagnostic ability to that of expert diagnosticians. METHODS: We obtained preoperative, contrast-enhanced CT scans and corresponding pathology results from two external data sets of patients with HNSCC: an external institution and The Cancer Genome Atlas (TCGA) HNSCC imaging data. Lymph nodes were segmented and annotated as ENE-positive or ENE-negative on the basis of pathologic confirmation. Deep learning algorithm performance was evaluated and compared directly to two board-certified neuroradiologists. RESULTS: A total of 200 lymph nodes were examined in the external validation data sets. For lymph nodes from the external institution, the algorithm achieved an area under the receiver operating characteristic curve (AUC) of 0.84 (83.1% accuracy), outperforming radiologists' AUCs of 0.70 and 0.71 (P = .02 and P = .01). Similarly, for lymph nodes from the TCGA, the algorithm achieved an AUC of 0.90 (88.6% accuracy), outperforming radiologist AUCs of 0.60 and 0.82 (P < .0001 and P = .16). Radiologist diagnostic accuracy improved when receiving deep learning assistance. CONCLUSION: Deep learning successfully identified ENE on pretreatment imaging across multiple institutions, exceeding the diagnostic ability of radiologists with specialized head and neck experience. Our findings suggest that deep learning has utility in the identification of ENE in patients with HNSCC and has the potential to be integrated into clinical decision making.
Stress-testing pelvic autosegmentation algorithms using anatomical edge cases
Kanwar, Aasheesh
Merz, Brandon
Claunch, Cheryl
Rana, Shushan
Hung, Arthur
Thompson, Reid F.
2023Journal Article, cited 0 times
Prostate-Anatomical-Edge-Cases
Commercial autosegmentation has entered clinical use; however, real-world performance may suffer in certain cases. We aimed to assess the influence of anatomic variants on performance. We identified 112 prostate cancer patients with anatomic variations (edge cases). Pelvic anatomy was autosegmented using three commercial tools. To evaluate performance, Dice similarity coefficients and mean surface and 95% Hausdorff distances were calculated versus clinician-delineated references. Deep learning autosegmentation outperformed atlas-based and model-based methods. However, edge-case performance was lower than in the normal cohort (mean DSC reduction of 0.12). Anatomic variation presents challenges to commercial autosegmentation.
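The surface-based metrics named here (mean surface distance and 95% Hausdorff distance) can be computed from binary masks with distance transforms. A minimal sketch using SciPy, assuming boolean masks and isotropic 1 mm voxels; this is an illustration, not the study's evaluation code.

import numpy as np
from scipy import ndimage

def surface_distances(a, b, spacing=(1.0, 1.0, 1.0)):
    # Distances (mm) from the surface voxels of boolean mask a to the surface of b.
    a_surf = a & ~ndimage.binary_erosion(a)
    b_surf = b & ~ndimage.binary_erosion(b)
    dist_to_b = ndimage.distance_transform_edt(~b_surf, sampling=spacing)
    return dist_to_b[a_surf]

def mean_surface_distance(a, b):
    d_ab, d_ba = surface_distances(a, b), surface_distances(b, a)
    return (d_ab.mean() + d_ba.mean()) / 2.0

def hausdorff95(a, b):
    # Symmetric 95th-percentile Hausdorff distance, robust to stray outlier voxels.
    d_ab, d_ba = surface_distances(a, b), surface_distances(b, a)
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))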
Brain Tumor Segmentation and Tractographic Feature Extraction from Structural MR Images for Overall Survival Prediction
Kao, Po-Yu
Ngo, Thuyen
Zhang, Angela
Chen, Jefferson W.
Manjunath, B. S.
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Machine Learning
This paper introduces a novel methodology to integrate human brain connectomics and parcellation for brain tumor segmentation and survival prediction. For segmentation, we utilize an existing brain parcellation atlas in the MNI152 1 mm space and map this parcellation to each individual subject data. We use deep neural network architectures together with hard negative mining to achieve the final voxel level classification. For survival prediction, we present a new method for combining features from connectomics data, brain parcellation information, and the brain tumor mask. We leverage the average connectome information from the Human Connectome Project and map each subject brain volume onto this common connectome space. From this, we compute tractographic features that describe potential neural disruptions due to the brain tumor. These features are then used to predict the overall survival of the subjects. The main novelty in the proposed methods is the use of normalized brain parcellation data and tractography data from the human connectome project for analyzing MR images for segmentation and survival prediction. Experimental results are reported on the BraTS2018 dataset.
Breast DCE-MRI Segmentation for Lesion Detection Using Clustering with Multi-verse Optimization Algorithm
Kar, Bikram
Si, Tapas
2021Book Section, cited 0 times
TCGA-BRCA
The highest number of deaths among all types of cancers in women is caused by breast cancer. Therefore, early detection and diagnosis of breast cancer are very much needed for its treatment. Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is widely used for breast cancer diagnosis. In this paper, a segmentation method using a modified hard-clustering technique with the multi-verse optimizer (MVO) is proposed for the detection of breast lesions in DCE-MRI. The proposed method is termed CMVO in this paper. First, MR images are denoised, and intensity inhomogeneities are corrected in the preprocessing steps. Then the clustering technique is used in segmentation of the MR images. Finally, lesions are extracted from the segmented images in the postprocessing step. The results of CMVO are compared with those of the K-means algorithm and PSO-based hard clustering. CMVO performs better than the other methods in lesion detection in breast DCE-MRI.
Deep Learning-Based Radiomics for Prognostic Stratification of Low-Grade Gliomas Using a Multiple-Gene Signature
Karabacak, Mert
Ozkara, Burak B.
Senparlak, Kaan
Bisdas, Sotirios
Applied Sciences2023Journal Article, cited 0 times
Website
TCGA-LGG
Radiomics
Radiogenomics
Deep Learning
Glioma
Low-grade gliomas are a heterogeneous group of infiltrative neoplasms. Radiomics allows the characterization of phenotypes with high-throughput extraction of quantitative imaging features from radiologic images. Deep learning models, such as convolutional neural networks (CNNs), offer well-performing models and a simplified pipeline by automatic feature learning. In our study, MRI data were retrospectively obtained from The Cancer Imaging Archive (TCIA), which contains MR images for a subset of the LGG patients in The Cancer Genome Atlas (TCGA). Corresponding molecular genetics and clinical information were obtained from TCGA. Three genes included in the genetic signatures were WEE1, CRTAC1, and SEMA4G. A CNN-based deep learning model was used to classify patients into low and high-risk groups, with the median gene signature risk score as the cut-off value. The data were randomly split into training and test sets, with 61 patients in the training set and 20 in the test set. In the test set, models using T1 and T2 weighted images had an area under the receiver operating characteristic curve of 73% and 79%, respectively. In conclusion, we developed a CNN-based model to predict non-invasively the risk stratification provided by the prognostic gene signature in LGGs. Numerous previously discovered gene signatures and novel genetic identifiers that will be developed in the future may be utilized with this method.
Predicting the Grade of Clear Cell Renal Cell Carcinoma from CT Images Using Random Subspace-KNN and Random Forest Classifiers
Accurate and non-invasive determination of the International Society of Urological Pathology (ISUP)-based tumor grade is important for the effective management of patients with clear cell renal cell carcinoma (cc-RCC). In this study, radiomic analysis of 3D computed tomography (CT) images is used to determine the ISUP grades of cc-RCC patients by exploring machine learning (ML) methods that can address small ISUP grade image datasets. 143 cc-RCC patient studies from The Cancer Imaging Archive (TCIA) USA were used in the study. 1133 radiomic features were extracted from the normalized 3D segmented CT images. Correlation coefficient analysis, Random Forest feature importance analysis, and backward elimination methods were used consecutively to reduce the number of features; 15 out of 1133 features were selected. A k-nearest neighbors (KNN) classifier with random subspaces and a Random Forest classifier were implemented. Model performances were evaluated independently on the unused 20% of the original imbalanced data. ISUP grades were predicted by the KNN classifier under random subspaces with an accuracy of 90% and area under the curve (AUC) of 0.88 on the test data, and by the Random Forest classifier with an accuracy of 83% and AUC of 0.80 on the test data. In conclusion, ensemble classifiers can be used to predict the ISUP grade of cc-RCC tumors from CT images with sufficient reliability. Larger datasets and new types of features are currently being investigated.
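The random subspace KNN ensemble described above corresponds closely to scikit-learn's bagging classifier with feature subsampling. A hedged sketch with synthetic data standing in for the selected radiomic features; the class count, hyperparameters, and split are assumptions, not the paper's configuration.

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Stand-in for 143 patients x 15 selected features, binary low/high grade labels.
X, y = make_classification(n_samples=143, n_features=15, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

# Random subspace method: each KNN member sees a random subset of the features.
subspace_knn = BaggingClassifier(
    KNeighborsClassifier(n_neighbors=5),
    n_estimators=50,
    max_features=0.5,       # feature subsampling is what makes this "random subspace"
    bootstrap=False,        # keep all samples, vary only the feature view
    bootstrap_features=True,
)
subspace_knn.fit(X_train, y_train)
print("subspace-KNN accuracy:", subspace_knn.score(X_test, y_test))

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("random forest accuracy:", rf.score(X_test, y_test))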
An integrated method for detecting lung cancer via CT scanning via optimization, deep learning, and IoT data transmission
Karimullah, Shaik
Khan, Mudassir
Shaik, Fahimuddin
Alabduallah, Bayan
Almjally, Abrar
Frontiers in Oncology2024Journal Article, cited 0 times
Website
LIDC-IDRI
Radiogenomic correlation for prognosis in patients with glioblastoma multiformae
Probing into the genetic factors responsible for bladder cancer prognosis
Karunakaran, Kavinkumar Nirmala
Manoharan, Jeevitha Priya
Vidyalakshmi, Subramanian
2021Journal Article, cited 0 times
TCGA-BLCA
MicroRNAs are small non-coding RNAs that can act as oncogenic suppressors and activators. Our in-silico study aims to identify the key miRNAs and their associated mRNA targets involved in bladder cancer progression. A total of seven differentially expressed miRNAs (DEMs) were found to be common between Gene Expression Omnibus (GEO) datasets and The Cancer Genome Atlas (TCGA). The most significant DEM and its targets were validated using the TCGA patient dataset. Pathway enrichment analysis and protein-protein network generation were done for the chosen mRNAs. Kaplan-Meier survival plots were drawn for the miRNA and mRNAs. A significant down-regulation of EIF3J and an up-regulation of LYPLA1 were associated with poor prognosis in BLCA patients, and hence EIF3J is suggested as a potential drug target. To conclude, hsa-miR-138-5p may act as a promising prognostic and diagnostic biomarker for bladder cancer. Further experimental studies are required to support our results.
Secure medical image encryption with Walsh-Hadamard transform and lightweight cryptography algorithm
Kasim, Ömer
Med Biol Eng Comput2022Journal Article, cited 0 times
Website
REMBRANDT
Algorithms
*Computer Security
*Privacy
Medical image encryption
It is important to ensure the privacy and security of the medical images that are produced with electronic health records. Security is ensured by encrypting the electronic health records for transmission, and privacy is provided according to the integrity of the data and decryption of the data with the user role. Both the security and privacy of medical images are provided through the innovative use of lightweight cryptology (LWC) and the Walsh-Hadamard transform (WHT) in this study. Unlike the usual lightweight cryptology algorithm, the hex key used for encryption is obtained in two parts: the first part is used as the public key and the second part as the user-specific private key. This eliminates the disadvantage of the symmetric encryption algorithm. After encryption with the two-part hex key, the Walsh-Hadamard transform is applied to the encrypted image. In the Walsh-Hadamard transform, the Hadamard matrix is rotated by certain angles according to the user role, which allows the encoded medical image to be obtained as a vector. The proposed method was verified with the number of pixel change rate and unified average changing intensity measurement parameters and with histogram analysis. The results showed that the method is more successful at addressing the security and privacy of data in medical applications with user roles than the lightweight cryptology method and other methods proposed in the literature.
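For reference, the Walsh-Hadamard transform applied to an image block can be sketched in a few lines with SciPy's Hadamard matrix; this is a generic illustration of the transform itself, not the paper's role-based rotation or encryption scheme.

import numpy as np
from scipy.linalg import hadamard

def wht2(block):
    # 2D Walsh-Hadamard transform of a square block whose side is a power of two.
    n = block.shape[0]
    H = hadamard(n).astype(float)
    return H @ block @ H.T / n  # scaling chosen so the transform is its own inverse

block = np.random.rand(8, 8)
coeffs = wht2(block)
recovered = wht2(coeffs)        # applying the transform twice recovers the block
print(np.allclose(block, recovered))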
The analysis of magnetic resonance images has an important role in the definite detection of brain tumors. The shape, location, and size of a tumor are examined by a radiology specialist to diagnose and plan treatment. At an intense work pace, it is not possible to get results quickly, and unnoticed information can be recovered by an image processing algorithm. In this study, database images collected from REMBRANDT were cleared of noise, transformed to gray level with the Karhunen-Loeve Transform, and segmented with the Potts Markov Random Field model. This hybrid algorithm minimizes data loss, contrast, and noise problems. After the segmentation stage, shape and statistical analyses are performed to obtain a feature vector for the region of interest. The images are classified as tumor present or tumor absent. The algorithm can recognize the presence of a tumor with 100% accuracy and the tumor's area with 95% accuracy. The results are reported to help the specialists.
Development and external validation of a deep learning-based computed tomography classification system for COVID-19
Kataoka, Yuki
Baba, Tomohisa
Ikenoue, Tatsuyoshi
Matsuoka, Yoshinori
Matsumoto, Junichi
Kumasawa, Junji
Tochitani, Kentaro
Funakoshi, Hiraku
Hosoda, Tomohiro
Kugimiya, Aiko
Shirano, Michinori
Hamabe, Fumiko
Iwata, Sachiyo
Kitamura, Yoshiro
Goto, Tsubasa
Hamaguchi, Shingo
Haraguchi, Takafumi
Yamamoto, Shungo
Sumikawa, Hiromitsu
Nishida, Koji
Nishida, Haruka
Ariyoshi, Koichi
Sugiura, Hiroaki
Nakagawa, Hidenori
Asaoka, Tomohiro
Yoshida, Naofumi
Oda, Rentaro
Koyama, Takashi
Iwai, Yui
Miyashita, Yoshihiro
Okazaki, Koya
Tanizawa, Kiminobu
Handa, Tomohiro
Kido, Shoji
Fukuma, Shingo
Tomiyama, Noriyuki
Hirai, Toyohiro
Ogura, Takashi
2022Journal Article, cited 0 times
CT Images in COVID-19
NSCLC-Radiomics
PleThora
BACKGROUND: We aimed to develop and externally validate a novel machine learning model that can classify CT image findings as positive or negative for SARS-CoV-2 reverse transcription polymerase chain reaction (RT-PCR).
METHODS: We used 2,928 images from a wide variety of case-control type data sources for the development and internal validation of the machine learning model. A total of 633 COVID-19 cases and 2,295 non-COVID-19 cases were included in the study. We randomly divided cases into training and tuning sets at a ratio of 8:2. For external validation, we used 893 images from 740 consecutive patients at 11 acute care hospitals suspected of having COVID-19 at the time of diagnosis. The dataset included 343 COVID-19 patients. The reference standard was RT-PCR.
RESULTS: In external validation, the sensitivity and specificity of the model were 0.869 and 0.432 at the low-level cutoff, and 0.724 and 0.721 at the high-level cutoff. The area under the receiver operating characteristic curve was 0.76.
CONCLUSIONS: Our machine learning model exhibited a high sensitivity in external validation datasets and may assist physicians to rule out COVID-19 diagnosis in a timely manner at emergency departments. Further studies are warranted to improve model specificity.
“Radiotranscriptomics”: A synergy of imaging and transcriptomics in clinical assessment
Katrib, Amal
Hsu, William
Bui, Alex
Xing, Yi
Quantitative Biology2016Journal Article, cited 0 times
Radiogenomics
Radiomic analysis identifies tumor subtypes associated with distinct molecular and microenvironmental factors in head and neck squamous cell carcinoma
Katsoulakis, Evangelia
Yu, Yao
Apte, Aditya P.
Leeman, Jonathan E.
Katabi, Nora
Morris, Luc
Deasy, Joseph O.
Chan, Timothy A.
Lee, Nancy Y.
Riaz, Nadeem
Hatzoglou, Vaios
Oh, Jung Hun
Oral Oncology2020Journal Article, cited 0 times
Website
TCGA-HNSC
Radiomics
Radiogenomics
Machine learning
Purpose: To identify whether radiomic features from pre-treatment computed tomography (CT) scans can predict molecular differences between head and neck squamous cell carcinoma (HNSCC) using The Cancer Imaging Archive (TCIA) and The Cancer Genome Atlas (TCGA). Methods: 77 patients from the TCIA with HNSCC had imaging suitable for analysis. Radiomic features were extracted and unsupervised consensus clustering was performed to identify subtypes. Genomic data was extracted from the matched patients in the TCGA database. We explored relationships between radiomic features and molecular profiles of tumors, including the tumor immune microenvironment. A machine learning method was used to build a model predictive of CD8+ T-cells. An independent cohort of 83 HNSCC patients was used to validate the radiomic clusters. Results: We initially extracted 104 two-dimensional radiomic features, and after feature stability tests and removal of volume-dependent features, reduced this to 67 features for subsequent analysis. Consensus clustering based on these features resulted in two distinct clusters. The radiomic clusters differed by primary tumor subsite (p = 0.0096), HPV status (p = 0.0127), methylation-based clustering results (p = 0.0025), and tumor immune microenvironment. A random forest model using radiomic features predicted CD8+ T-cells independent of HPV status with R2 = 0.30 (p < 0.0001) on cross validation. Consensus clustering on the validation cohort resulted in two distinct clusters that differ in tumor subsite (p = 1.3 × 10^-7) and HPV status (p = 4.0 × 10^-7). Conclusion: Radiomic analysis can identify biologic features of tumors such as HPV status and T-cell infiltration and may be able to provide other information in the near future to help with patient stratification.
An automated slice sorting technique for multi-slice computed tomography liver cancer images using convolutional network
Kaur, Amandeep
Chauhan, Ajay Pal Singh
Aggarwal, Ashwani Kumar
Expert Systems with Applications2021Journal Article, cited 1 times
Website
CT-ORG
LIVER
Classification
An early detection and diagnosis of liver cancer can help the radiation therapist in choosing the target area and the amount of radiation dose to be delivered to the patients. The radiologists usually spend a lot of time in selecting the most relevant slices from thousands of scans, which are usually obtained from multi-slice CT scanners. The purpose of this paper is multi-organ classification of 3D CT images of liver cancer suspected patients by a convolutional network. A dataset consisting of 63503 CT images of liver cancer patients taken from The Cancer Imaging Archive (TCIA) has been used to validate the proposed method. The method is a CNN for classification of CT liver cancer images. The classification results in terms of accuracy, precision, sensitivity, specificity, true positive rate, false negative rate, and F1 score have been computed. The results show a high validation accuracy of 99.1% when the convolutional network is trained with the data-augmented volume slices, as compared to an accuracy of 98.7% obtained with the original volume slices. The overall test accuracy for the data-augmented volume slice dataset is 93.1%, superior to the other volume slices. The main contribution of this work is that it will help the radiation therapist to focus on a small subset of CT image data. This is achieved by segregating the whole set of 63503 CT images into three categories based on the likelihood of the spread of cancer to other organs in liver cancer suspected patients. Consequently, only 19453 CT images had the liver visible in them, making the rest of the 44050 CT images less relevant for liver cancer detection. The proposed method will help in the rapid diagnosis and treatment of liver cancer patients.
A joint intensity and edge magnitude-based multilevel thresholding algorithm for the automatic segmentation of pathological MR brain images
Kaur, Taranjit
Saini, Barjinder Singh
Gupta, Savita
Neural Computing and Applications2016Journal Article, cited 1 times
Website
Radiomics
BraTS
ECM-CSD: An Efficient Classification Model for Cancer Stage Diagnosis in CT Lung Images Using FCM and SVM Techniques
Kavitha, MS
Shanthini, J
Sabitha, R
Journal of Medical Systems2019Journal Article, cited 0 times
Website
LIDC-IDRI
Radiomics
ECIDS-Enhanced Cancer Image Diagnosis and Segmentation Using Artificial Neural Networks and Active Contour Modelling
Kavitha, M. S.
Shanthini, J.
Bhavadharini, R. M.
Journal of Medical Imaging and Health Informatics2020Journal Article, cited 0 times
LIDC-IDRI
MATLAB
In the present decade, image processing techniques are extensively utilized in various medical image diagnoses, specifically in dealing with cancer images for detection and treatment in advance. The quality of the image and the accuracy are the significant factors to be considered while analyzing the images for cancer diagnosis. With that note, in this paper, an Enhanced Cancer Image Diagnosis and Segmentation (ECIDS) framework has been developed for effective detection and segmentation of lung cancer cells. Initially, the computed tomography lung image (CT image) has been processed for denoising by employing a kernel-based global denoising function. Following that, the noise-free lung images are given for feature extraction. The images are further classified into normal and abnormal classes using feed-forward artificial neural network classification. With that, the classified lung cancer images are given for segmentation, and the segmentation has been done here with active contour modelling with reduced gradient. The segmented cancer images are further given for medical processing. Moreover, the framework is evaluated in MATLAB using the clinical LIDC-IDRI lung CT dataset. The results are analyzed and discussed based on performance evaluation metrics such as energy, entropy, correlation, and homogeneity, which are involved in effective classification.
Volumetric analysis framework for accurate segmentation and classification (VAF-ASC) of lung tumor from CT images
Lung tumor can be typically stated as the abnormal cell growth in lungs that may cause severe threat to patient health, since the lung is a significant organ which comprises an associated network of blood veins and lymphatic canals. The earlier detection and classification of lung tumor creates a greater impact on increasing the survival rate of patients. For analysis, Computed Tomography (CT) lung images are broadly used, since they give information about the various lung regions. The prediction of tumor contour, position, and volume plays an imperative role in accurate segmentation and classification of tumor cells. This will aid in successful tumor stage detection and treatment phases. With that concern, this paper develops a Volumetric Analysis Framework for Accurate Segmentation and Classification of lung tumors. The volumetric analysis framework comprises the estimation of length, thickness, and height of the detected tumor cell for achieving precise results. Though there are many models for tumor detection from 2D CT inputs, it is very important to develop a method for lung nodule separation from a noisy background. For that, this paper uses connectivity and locality features of the lung image pixels. Moreover, morphological processing techniques are incorporated for removing the additional noises and airways. Tumor segmentation has been accomplished by the k-means clustering approach. Tumor Nodule Metastasis classification-based volumetric analysis is performed for accurate results. The Volumetric Analysis Framework provides better results with respect to factors such as accuracy rate of tumor diagnosis, reduced computation time, and appropriate tumor stage classification.
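As a rough illustration of the k-means step named above, the sketch below clusters CT slice intensities into three groups and keeps the brightest cluster as a candidate lesion mask; the cluster count and the synthetic slice are assumptions, not the paper's configuration.

import numpy as np
from sklearn.cluster import KMeans

def kmeans_candidate_mask(ct_slice, n_clusters=3):
    # Cluster pixel intensities and return a mask of the brightest cluster.
    flat = ct_slice.reshape(-1, 1).astype(np.float32)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(flat)
    # Pick the cluster with the highest mean intensity as the candidate lesion class.
    means = [flat[labels == k].mean() for k in range(n_clusters)]
    return (labels == int(np.argmax(means))).reshape(ct_slice.shape)

ct_slice = np.random.rand(256, 256)   # stand-in for a preprocessed CT slice
mask = kmeans_candidate_mask(ct_slice)
print(mask.sum(), "candidate pixels")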
Radiological Atlas for Patient Specific Model Generation
The paper presents the development of a radiological atlas employed in abdomen patient-specific model verification. After an introduction to the patient-specific model, the development of the radiological atlas is discussed. An unprocessed database containing DICOM images and radiological diagnoses is presented. This database is processed manually to retrieve the required information. Organs and pathologies are determined, and each study is tagged with specific labels, e.g. ‘liver normal’, ‘liver tumor’, ‘liver cancer’, ‘spleen normal’, ‘spleen absence’, etc. Selected structures are additionally segmented, and the masks are stored as the gold standard. A web-service-based network system is provided to permit PACS-driven retrieval of image data matching desired criteria. Image series as well as ground truth images may be retrieved for benchmark or model-development purposes. The database is evaluated.
Supervised Dimension-Reduction Methods for Brain Tumor Image Data Analysis
The purpose of this study was to construct a risk score for glioblastomas based on magnetic resonance imaging (MRI) data. Tumor identification requires multimodal voxel-based imaging data that are highly dimensional, and multivariate models with dimension reduction are desirable for their analysis. We propose a two-step dimension-reduction method using a radial basis function–supervised multi-block sparse principal component analysis (SMS–PCA) method. The method is first implemented through the basis expansion of spatial brain images, and the scores are then reduced through regularized matrix decomposition in order to produce simultaneous data-driven selections of related brain regions supervised by univariate composite scores representing linear combinations of covariates such as age and tumor location. An advantage of the proposed method is that it identifies the associations of brain regions at the voxel level, and supervision is helpful in the interpretation.
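To make the regularized matrix decomposition step concrete, here is a small sketch with scikit-learn's SparsePCA on stand-in basis-expansion scores; this is a generic sparse PCA, not the supervised multi-block SMS-PCA the abstract proposes, and all data shapes are assumptions.

import numpy as np
from sklearn.decomposition import SparsePCA

# Stand-in matrix of basis-expansion scores: subjects x expanded image features.
rng = np.random.default_rng(0)
scores = rng.normal(size=(80, 200))

# Sparse loadings zero out most features, yielding data-driven region selection.
spca = SparsePCA(n_components=5, alpha=1.0, random_state=0)
component_scores = spca.fit_transform(scores)
n_selected = np.count_nonzero(spca.components_, axis=1)
print("nonzero loadings per component:", n_selected)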
Deep learning-based auto segmentation using generative adversarial network on magnetic resonance images obtained for head and neck cancer patients
Kawahara, D.
Tsuneda, M.
Ozawa, S.
Okamoto, H.
Nakamura, M.
Nishio, T.
Nagata, Y.
J Appl Clin Med Phys2022Journal Article, cited 0 times
Website
AAPM RT-MAC
*Deep Learning
*Head and Neck Neoplasms/diagnostic imaging/radiotherapy
Humans
Image Processing, Computer-Assisted/methods
Magnetic Resonance Imaging
Organs at Risk
Convolutional Neural Network (CNN)
Generative Adversarial Network (GAN)
deep learning
segmentation
PURPOSE: Adaptive radiotherapy requires auto-segmentation in patients with head and neck (HN) cancer. In the current study, we propose an auto-segmentation model using a generative adversarial network (GAN) on magnetic resonance (MR) images of HN cancer for MR-guided radiotherapy (MRgRT). MATERIAL AND METHODS: In the current study, we used a dataset from the American Association of Physicists in Medicine MRI Auto-Contouring (RT-MAC) Grand Challenge 2019. Specifically, eight structures in the MR images of the HN region, namely the submandibular glands, lymph node levels II and III, and the parotid glands, were segmented with deep learning models using a GAN and a fully convolutional network with a U-net. These segmentations were compared with the clinically used atlas-based segmentation. RESULTS: The mean Dice similarity coefficient (DSC) of the U-net and GAN models was significantly higher than that of the atlas-based method for all the structures (p < 0.05), and the maximum Hausdorff distance (HD) was significantly lower than that of the atlas method (p < 0.05). Comparing the 2.5D and 3D U-nets, the 3D U-net was superior in segmenting the organs at risk (OAR) for HN patients. For the 2.5D GAN model, the DSC was highest (0.75-0.85) and the HD lowest (within 5.4 mm) across all the OARs. CONCLUSIONS: In the current study, we investigated the auto-segmentation of the OAR for HN patients using U-net and GAN models on MR images. Our proposed model is potentially valuable for improving the efficiency of HN RT treatment planning.
eFis: A Fuzzy Inference Method for Predicting Malignancy of Small Pulmonary Nodules
Predicting malignancy of small pulmonary nodules from computed tomography scans is a difficult and important problem in diagnosing lung cancer. This paper presents a rule-based fuzzy inference method for predicting the malignancy rating of small pulmonary nodules. We use the nodule characteristics provided by the Lung Image Database Consortium dataset to determine the malignancy rating. The proposed fuzzy inference method uses outputs of ensemble classifiers and rules from radiologist agreements on the nodules. The results are evaluated over classification accuracy performance and compared with single-classifier methods. We observed that the preliminary results are very promising and the system is open to further development.
Malignancy prediction by using characteristic-based fuzzy sets: A preliminary study
Lung CT
Lung lesions
Lesion size measurement
Tumor burden measurement
Measurement uncertainties
Tele-radiology
Bland-Altman method
Non-parametric method
The accurate detection of lung lesions as well as the precise measurement of their sizes on computed tomography (CT) images is known to be crucial for the response-to-therapy assessment of cancer patients. The goal of this study is to investigate the feasibility of using mobile tele-radiology for this task in order to improve efficiency in radiology. Lung CT images were obtained from The Cancer Imaging Archive (TCIA). The Bland-Altman analysis method was used to compare and assess conventional radiology and mobile radiology based lesion size measurements. The percentage of correctly detected lesions at the right image locations was also recorded. Sizes of 183 lung lesions between 5 and 52 mm in CT images were measured by two experienced radiologists. Bland-Altman plots were drawn, and limits of agreement (LOA) were determined as the 0.025 and 0.975 percentiles (−1.00, 0.00), (−1.39, 0.00). For lesions of 10 mm and higher, these intervals were found to be much smaller than the decision interval (−30% and +20%) recommended by the RECIST 1.1 criteria. On average, observers accurately detected 98.2% of the total 271 lesions on the medical monitor, while they detected 92.8% of the nodules on the iPhone. In conclusion, mobile tele-radiology can be a feasible alternative for the accurate measurement of lung lesions on CT images. A higher resolution display technology such as the iPad may be preferred in order to detect new small <5 mm lesions more accurately. Further studies are needed to confirm these results with more mobile technologies and types of lesions.
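The non-parametric Bland-Altman analysis used here, with limits of agreement taken as the 0.025 and 0.975 quantiles of the measurement differences, can be sketched in a few lines of NumPy; the two measurement arrays below are synthetic placeholders, not the study's data.

import numpy as np

rng = np.random.default_rng(1)
monitor = rng.uniform(5, 52, size=183)               # stand-in: workstation reads (mm)
mobile = monitor + rng.normal(-0.5, 0.4, size=183)   # stand-in: mobile-device reads (mm)

diff = mobile - monitor
mean_pair = (mobile + monitor) / 2.0                 # x-axis of a Bland-Altman plot

# Non-parametric limits of agreement: 0.025 and 0.975 quantiles of the differences.
loa_low, loa_high = np.quantile(diff, [0.025, 0.975])
print(f"median bias {np.median(diff):.2f} mm, LOA [{loa_low:.2f}, {loa_high:.2f}] mm")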
The Combination of Low Skeletal Muscle Mass and High Tumor Interleukin-6 Associates with Decreased Survival in Clear Cell Renal Cell Carcinoma
Kays, J. K.
Koniaris, L. G.
Cooper, C. A.
Pili, R.
Jiang, G.
Liu, Y.
Zimmers, T. A.
Cancers (Basel)2020Journal Article, cited 0 times
Website
TCGA-KIRC
Radiogenomics
KIDNEY
Classification
Clear cell renal carcinoma (ccRCC) is frequently associated with cachexia, which is itself associated with decreased survival and quality of life. We examined relationships among body phenotype, tumor gene expression, and survival. Demographic and clinical data, computed tomography (CT) scans, and tumor RNASeq for 217 ccRCC patients were acquired from the Cancer Imaging Archive and The Cancer Genome Atlas (TCGA). Skeletal muscle and fat masses measured from CT scans and tumor cytokine gene expression were compared with survival by univariate and multivariate analysis. Patients in the lowest skeletal muscle mass (SKM) quartile had significantly shorter overall survival versus the top three SKM quartiles. Patients who fell into the lowest quartiles for visceral adipose mass (VAT) and subcutaneous adipose mass (SCAT) also demonstrated significantly shorter overall survival. Multiple tumor cytokines correlated with mortality, most strongly interleukin-6 (IL-6); high IL-6 expression was associated with significantly decreased survival. The combination of low SKM/high IL-6 was associated with significantly lower overall survival compared to high SKM/low IL-6 expression (26.1 months vs. not reached; p < 0.001) and an increased risk of mortality (HR = 5.95; 95% CI = 2.86-12.38). In conclusion, tumor cytokine expression, body composition, and survival are closely related, with low SKM/high IL-6 expression portending worse prognosis in ccRCC.
Computer-aided detection of brain tumors using image processing techniques
Computer-aided detection applications have made significant contributions to the medical world with today's technology. In this study, the detection of brain tumors in magnetic resonance images was performed. This study proposes a computer-aided detection system based on morphological reconstruction and rule-based detection of tumors using the morphological features of the regions of interest. The steps involved in this study are the pre-processing stage, the segmentation stage, the identification of the regions of interest, and the detection of tumors. With these methods applied on 497 magnetic resonance image slices of 10 patients, the computer-aided detection system achieved 84.26% accuracy.
Multi-institutional Prognostic Modeling in Head and Neck Cancer: Evaluating Impact and Generalizability of Deep Learning and Radiomics
Kazmierski, Michal
Welch, Mattea
Kim, Sejin
McIntosh, Chris
Rey-McIntyre, Katrina
Huang, Shao Hui
Patel, Tirth
Tadic, Tony
Milosevic, Michael
Liu, Fei-Fei
Ryczkowski, Adam
Kazmierska, Joanna
Ye, Zezhong
Plana, Deborah
Aerts, Hugo J.W.L.
Kann, Benjamin H.
Bratman, Scott V.
Hope, Andrew J.
Haibe-Kains, Benjamin
Cancer Research Communications2023Journal Article, cited 0 times
Head-Neck-Radiomics-HN1
HNSCC
radiomics
Deep Learning
Artificial intelligence (AI) and machine learning (ML) are becoming critical in developing and deploying personalized medicine and targeted clinical trials. Recent advances in ML have enabled the integration of wider ranges of data including both medical records and imaging (radiomics). However, the development of prognostic models is complex as no modeling strategy is universally superior to others, and validation of developed models requires large and diverse datasets to demonstrate that prognostic models developed (regardless of method) from one dataset are applicable to other datasets both internally and externally. Using a retrospective dataset of 2,552 patients from a single institution and a strict evaluation framework that included external validation on three external patient cohorts (873 patients), we crowdsourced the development of ML models to predict overall survival in head and neck cancer (HNC) using electronic medical records (EMR) and pretreatment radiological images. To assess the relative contributions of radiomics in predicting HNC prognosis, we compared 12 different models using imaging and/or EMR data. The model with the highest accuracy used multitask learning on clinical data and tumor volume, achieving high prognostic accuracy for 2-year and lifetime survival prediction and outperforming models relying on clinical data only, engineered radiomics, or complex deep neural network architectures. However, when we attempted to extend the best performing models from this large training dataset to other institutions, we observed significant reductions in the performance of the model in those datasets, highlighting the importance of detailed population-based reporting for AI/ML model utility and stronger validation frameworks. We have developed highly prognostic models for overall survival in HNC using EMRs and pretreatment radiological images based on a large, retrospective dataset of 2,552 patients from our institution. Diverse ML approaches were used by independent investigators. The model with the highest accuracy used multitask learning on clinical data and tumor volume. External validation of the top three performing models on three datasets (873 patients) with significant differences in the distributions of clinical and demographic variables demonstrated significant decreases in model performance. ML combined with simple prognostic factors outperformed multiple advanced CT radiomics and deep learning methods. ML models provided diverse solutions for prognosis of patients with HNC, but their prognostic value is affected by differences in patient populations and they require extensive validation.
Prostate Cancer Diagnosis Based on Cascaded Convolutional Neural Networks
LIU Ke-wen
LIU Zi-long
WANG Xiang-yu
CHEN Li
LI Zhao
WU Guang-yao
LIU Chao-yang
Chinese Journal of Magnetic Resonance2020Journal Article, cited 1 times
Website
PROSTATEx
Magnetic Resonance Imaging (MRI)
Prostate cancer (PCa)
Computer Aided Detection (CADe)
Classification
Interpreting magnetic resonance imaging (MRI) data by radiologists is time consuming and demands special expertise. Diagnosis of prostate cancer (PCa) with deep learning can also be time and data storage consuming. This work presents an automated method for PCa detection based on a cascaded convolutional neural network (CNN), comprising a pre-network and a post-network. The pre-network is based on a Faster-RCNN and trained with prostate images in order to separate the prostate from nearby tissues; the ResNet-based post-network is for PCa diagnosis, which is connected by bottlenecks and improved by applying batch normalization (BN) and global average pooling (GAP). The experimental results demonstrated that the proposed cascaded CNN achieved good classification results on the in-house datasets, with less training time and fewer computation resources.
Arterial input function and tracer kinetic model-driven network for rapid inference of kinetic maps in Dynamic Contrast-Enhanced MRI (AIF-TK-net)
We propose a patient-specific arterial input function (AIF) and tracer kinetic (TK) model-driven network to rapidly estimate the extended Tofts-Kety kinetic model parameters in DCE-MRI. We term our network AIF-TK-net, which maps an input comprising an image patch of the DCE time series and the patient-specific AIF to the output image patch of the TK parameters. We leverage the open-source NEURO-RIDER database of brain tumor DCE-MRI scans to train our network. Once trained, our model rapidly infers the TK maps of unseen DCE-MRI images, on the order of 0.34 sec/slice for a 256x256x65 time-series dataset on an NVIDIA GeForce GTX 1080 Ti GPU. We show its utility on high time resolution DCE-MRI datasets where significant variability in AIFs across patients exists. We demonstrate that the proposed AIF-TK-net considerably improves the TK parameter estimation accuracy in comparison to a network which does not utilize the patient AIF.
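For context, the extended Tofts-Kety model expresses the tissue concentration as C_t(t) = v_p C_p(t) + Ktrans * integral from 0 to t of C_p(tau) exp(-k_ep (t - tau)) dtau, with k_ep = Ktrans / v_e. A small NumPy sketch of this forward model, using a made-up AIF shape and parameter values purely for illustration:

import numpy as np

def extended_tofts(t, cp, ktrans, ve, vp):
    # Forward extended Tofts-Kety model: tissue concentration from a plasma AIF.
    kep = ktrans / ve
    dt = t[1] - t[0]
    # Discrete convolution of the AIF with the exponential impulse response.
    irf = np.exp(-kep * t)
    return vp * cp + ktrans * np.convolve(cp, irf)[: len(t)] * dt

t = np.arange(0, 300, 1.0)                        # seconds
cp = np.exp(-t / 60.0) * (t / 10.0) ** 2 / 50.0   # made-up plasma AIF shape
ct = extended_tofts(t, cp, ktrans=0.1 / 60, ve=0.2, vp=0.05)
print(ct[:5])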
Chemoradiotherapy treatment increases cardiac and aortic [18F]FDG uptake ratios in lung cancer patients
Khaing, Phyo
Newby, David
Tavares, Adriana
2024Journal Article, cited 0 times
ACRIN-NSCLC-FDG-PET
[18F]Fluorodeoxyglucose (FDG) positron emission tomography (PET) is an indispensable non-invasive imaging tool to aid diagnosis, prognostication, and therapeutic monitoring in oncology, but it can also evaluate cardiovascular inflammation. Previously, cardiac metabolic changes using [18F]FDG with chemoradiotherapy have been explored, but changes in the rest of the cardiovascular system remain undetermined. Our aim was to investigate the cardiovascular metabolic changes pre- and post-chemoradiotherapy in stage 3b non-small cell lung cancer (NSCLC) patients. A retrospective analysis of 26 patients (43-82 years) with stage 3b NSCLC from the American College of Radiology Imaging Network (ACRIN 6668) trial was performed. All available pre- and post-treatment imaging were retrieved from The Cancer Imaging Archive (TCIA), and regions of interest for the left ventricle and calcified large arteries were contoured on all PET-CT scans using PMOD version 4.0 software. The mean and maximum standard uptake values (SUVmean and SUVmax) and target-to-background ratio (TBR) were quantified in PMOD. At approximately 14 weeks post-chemoradiotherapy, there was a higher SUVmean (mean difference 0.81, p=0.01) and SUVmax (mean difference 2.20, p=0.006) in the left ventricle compared with pre-treatment scans. TBR for the aorta was higher post-treatment (mean difference 0.06, p=0.005). However, SUVmean for the carotid arteries and right brachiocephalic artery was reduced post-chemoradiotherapy (mean difference 0.15, p=0.008 and mean difference 0.28, p=0.03). A reduction in SUVmax was also seen for the aortic arch (mean difference 0.58, p=0.03) and right brachiocephalic artery (mean difference 0.42, p=0.03). In conclusion, cardiovascular glucose metabolism is selectively increased in the left ventricle and aorta post-chemoradiotherapy, and changes in glucose metabolism of atherosclerotic plaques vary across different vessels throughout the body.
Categorized contrast enhanced mammography dataset for diagnostic and artificial intelligence research
Khaled, Rana
Helal, Maha
Alfarghaly, Omar
Mokhtar, Omnia
Elkorany, Abeer
El Kassas, Hebatalla
Fahmy, Aly
Scientific Data2022Journal Article, cited 0 times
CDD-CESM
Contrast-enhanced spectral mammography (CESM) is a relatively recent imaging modality with increased diagnostic accuracy compared to digital mammography (DM). New deep learning (DL) models were developed that have accuracies equal to that of an average radiologist. However, most studies trained the DL models on DM images as no datasets exist for CESM images. We aim to resolve this limitation by releasing a Categorized Digital Database for Low energy and Subtracted Contrast Enhanced Spectral Mammography images (CDD-CESM) to evaluate decision support systems. The dataset includes 2006 images, with an average resolution of 2355 × 1315, consisting of 310 mass images, 48 architectural distortion images, 222 asymmetry images, 238 calcifications images, 334 mass enhancement images, 184 non-mass enhancement images, 159 postoperative images, 8 post neoadjuvant chemotherapy images, and 751 normal images, with 248 images having more than one finding. This is the first dataset to incorporate data selection, segmentation annotation, medical reports, and pathological diagnosis for all cases. Moreover, we propose and evaluate a DL-based technique to automatically segment abnormal findings in images.
A U-Net Ensemble for breast lesion segmentation in DCE MRI
Khaled, R
Vidal, Joel
Vilanova, Joan C
Martí, Robert
Computers in Biology and Medicine2022Journal Article, cited 0 times
Website
TCGA-BRCA
U-Net
Breast cancer
Segmentation
DCE-MRI
Deep learning
3D-MRI Brain Tumor Detection Model Using Modified Version of Level Set Segmentation Based on Dragonfly Algorithm
Khalil, H. A.
Darwish, S.
Ibrahim, Y. M.
Hassan, O. F.
Symmetry-Basel2020Journal Article, cited 31 times
Website
BraTS 2017
Magnetic Resonance Imaging (MRI)
Computer Aided Detection (CADe)
Segmentation
Accurate brain tumor segmentation from 3D Magnetic Resonance Imaging (3D-MRI) is an important method for obtaining information required for diagnosis and disease therapy planning. Variation in the brain tumor's size, structure, and form is one of the main challenges in tumor segmentation, and selecting the initial contour plays a significant role in reducing the segmentation error and the number of iterations in the level set method. To overcome this issue, this paper suggests a two-step dragonfly algorithm (DA) clustering technique to extract initial contour points accurately. The brain is extracted from the head in the preprocessing step, then tumor edges are extracted using the two-step DA, and these extracted edges are used as an initial contour for the MRI sequence. Lastly, the tumor region is extracted from all volume slices using a level set segmentation method. The results of applying the proposed technique on 3D-MRI images from the multimodal brain tumor segmentation challenge (BRATS) 2017 dataset show that the proposed method for brain tumor segmentation is comparable to the state-of-the-art methods.
Multimodal Brain Tumor Classification Using Deep Learning and Robust Feature Selection: A Machine Learning Application for Radiologists
Khan, M. A.
Ashraf, I.
Alhaisoni, M.
Damasevicius, R.
Scherer, R.
Rehman, A.
Bukhari, S. A. C.
Diagnostics (Basel)2020Journal Article, cited 216 times
Website
BraTS 2015
BraTS 2017
BraTS 2018
Partial least squares
Deep learning
Radiomic features
Transfer learning
Algorithm Development
Manual identification of brain tumors is an error-prone and tedious process for radiologists; therefore, it is crucial to adopt an automated system. Binary classification, such as malignant versus benign, is relatively trivial, whereas multimodal brain tumor classification (T1, T2, T1CE, and FLAIR) is a challenging task for radiologists. Here, we present an automated multimodal classification method using deep learning for brain tumor type classification. The proposed method consists of five core steps. In the first step, linear contrast stretching is employed using edge-based histogram equalization and the discrete cosine transform (DCT). In the second step, deep learning feature extraction is performed. Utilizing transfer learning, two pre-trained convolutional neural network (CNN) models, namely VGG16 and VGG19, were used for feature extraction. In the third step, a correntropy-based joint learning approach was implemented along with the extreme learning machine (ELM) for the selection of the best features. In the fourth step, the partial least squares (PLS)-based robust covariant features were fused in one matrix. The combined matrix was fed to the ELM for final classification. The proposed method was validated on the BraTS datasets, and accuracies of 97.8%, 96.9%, and 92.5% were achieved for BraTS 2015, BraTS 2017, and BraTS 2018, respectively.
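The transfer-learning feature extraction step described above can be sketched as follows; this is not the authors' full pipeline (the ELM-based selection and PLS fusion are omitted), just the pre-trained VGG16/VGG19 backbones used as fixed feature extractors on a hypothetical image batch:

```python
import numpy as np
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications.vgg16 import preprocess_input

# Hypothetical batch of MR slices resized to 224x224 and replicated to RGB
images = preprocess_input(np.random.rand(8, 224, 224, 3) * 255.0)

# Pre-trained backbones as fixed feature extractors (include_top=False);
# global average pooling yields one 512-dim vector per image per backbone
feats = []
for Backbone in (VGG16, VGG19):
    model = Backbone(weights="imagenet", include_top=False, pooling="avg")
    feats.append(model.predict(images, verbose=0))

features = np.concatenate(feats, axis=1)  # fused deep-feature matrix
print(features.shape)  # (8, 1024)
```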
Classification of Cancer Microscopic Images via Convolutional Neural Networks
Khan, Mohammad Azam
Choo, Jaegul
2019Book Section, cited 0 times
C-NMC 2019
Machine Learning
This paper describes our approach for the classification of normal versus malignant cells in B-ALL white blood cancer microscopic images: ISBI 2019—classification of leukemic B-lymphoblast cells from normal B-lymphoid precursors in blood smear microscopic images. We leverage a state-of-the-art convolutional neural network pretrained on the ImageNet dataset and apply several data augmentation and hyperparameter optimization strategies. Our method obtains an F1 score of 0.83 on the final test set in the competition.
VGG19 Network Assisted Joint Segmentation and Classification of Lung Nodules in CT Images
Khan, M. A.
Rajinikanth, V.
Satapathy, S. C.
Taniar, D.
Mohanty, J. R.
Tariq, U.
Damasevicius, R.
Diagnostics (Basel)2021Journal Article, cited 0 times
LIDC-IDRI
Lung-PET-CT-Dx
VGG-SegNet
deep learning
lung CT images
nodule detection
pre-trained VGG19
Pulmonary nodules are a lung disease whose early diagnosis and treatment are essential to cure the patient. This paper introduces a deep learning framework to support the automated detection of lung nodules in computed tomography (CT) images. The proposed framework employs VGG-SegNet-supported nodule mining and pre-trained DL-based classification for automated lung nodule detection. The classification of lung CT images is implemented using the extracted deep features, which are then serially concatenated with handcrafted features, such as the Grey Level Co-occurrence Matrix (GLCM), Local Binary Pattern (LBP), and Pyramid Histogram of Oriented Gradients (PHOG), to enhance the disease detection accuracy. The images used for the experiments were collected from the LIDC-IDRI and Lung-PET-CT-Dx datasets. The experimental results show that the VGG19 architecture with concatenated deep and handcrafted features can achieve an accuracy of 97.83% with the SVM-RBF classifier.
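A minimal sketch of the serial deep-plus-handcrafted feature concatenation with an SVM-RBF classifier is shown below; all data are synthetic stand-ins, PHOG is omitted for brevity, and the skimage API naming from version 0.19 onward is assumed:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from sklearn.svm import SVC

def handcrafted_features(img_u8):
    """GLCM contrast/homogeneity plus an LBP histogram for one 8-bit slice."""
    glcm = graycomatrix(img_u8, distances=[1], angles=[0], levels=256)
    glcm_feats = [graycoprops(glcm, p)[0, 0] for p in ("contrast", "homogeneity")]
    lbp = local_binary_pattern(img_u8, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([glcm_feats, hist])

rng = np.random.default_rng(1)
imgs = (rng.random((20, 64, 64)) * 255).astype(np.uint8)  # toy CT patches
deep = rng.random((20, 128))                        # stand-in for VGG19 features
hand = np.stack([handcrafted_features(im) for im in imgs])
X = np.concatenate([deep, hand], axis=1)            # serial concatenation
y = rng.integers(0, 2, 20)                          # toy nodule labels

clf = SVC(kernel="rbf").fit(X, y)                   # SVM-RBF classifier
print(clf.score(X, y))
```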
Automatic Segmentation and Shape, Texture-based Analysis of Glioma Using Fully Convolutional Network
Lower-grade glioma is a type of brain tumor that is usually found in the human brain and spinal cord. Early detection and accurate diagnosis of lower-grade glioma can reduce the fatal risk to affected patients. An essential step in lower-grade glioma analysis is MRI image segmentation. Manual segmentation processes are time-consuming and depend on the expertise of the pathologist. In this study, three different deep-learning-based automatic segmentation models were used to segment the tumor-affected region from the MRI slice. The segmentation accuracies of the three models (U-Net, FCN, and U-Net with a ResNeXt50 backbone) were 80%, 84%, and 91%, respectively. Two shape-based features (angular standard deviation, marginal fluctuation) and six texture-based features (entropy, local binary pattern, homogeneity, contrast, correlation, energy) were extracted from the segmented images to find associations with seven existing genomic data types. It was found that there was a significant association between the microRNA cluster genomic data type and the entropy texture feature, and between the RNA sequence cluster genomic data type and the angular standard deviation shape feature. In both cases, the p values for the Fisher exact test were less than 0.05.
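The association test reported above can be reproduced in outline with scipy; the contingency table here is a hypothetical example, not the study's data:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 contingency table: rows = high/low entropy (texture feature),
# columns = microRNA cluster membership (genomic data type)
table = [[12, 3],
         [4, 11]]

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio={odds_ratio:.2f}, p={p_value:.4f}")  # association if p < 0.05
```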
Preliminary Detection and Analysis of Lung Cancer on CT images using MATLAB: A Cost-effective Alternative
Khan, Md Daud Hossain
Ahmed, Mansur
Bach, Christian
Journal of Biomedical Engineering and Medical Imaging2016Journal Article, cited 0 times
LUNG
MATLAB
Computer Aided Detection (CADe)
Non-Small Cell Lung Cancer (NSCLC)
Computed Tomography (CT)
Cancer is the second leading cause of death worldwide. Lung cancer has the highest mortality, with non-small cell lung cancer (NSCLC) being its most prevalent subtype. Despite a gradual reduction in incidence, approximately 585,720 new cancer patients were diagnosed in 2014, with the majority from low- and middle-income countries (LMICs). Limited availability of diagnostic equipment, poorly trained medical staff, late presentation of symptoms, difficulty classifying the exact lung cancer subtype, and overall poor patient access to medical providers result in late or terminal-stage diagnosis and delayed treatment. Therefore, the need for an economical, simple, and fast computed image-processing system to aid decisions regarding staging and resection, especially for LMICs, is clearly imminent. In this study, we developed a preliminary program using MATLAB that accurately detects cancer cells in CT images of the lungs of affected patients, measures the area of the region of interest (ROI) or tumor mass, and helps determine nodal spread. A preset value for nodal spread was used, which can be altered accordingly.
Achieving enhanced accuracy and strength performance with parallel programming for invariant affine point cloud registration
Khan, Usman
Yasin, Amanullah
Jalal, Ahmed
Abid, Muhammad
Multimedia Tools and Applications2022Journal Article, cited 0 times
RIDER PHANTOM PET-CT
An affine transform maps tomographic image pixels from image to world coordinates; however, applying the transform to each pixel individually consumes much time. Extraction of the point cloud of interest from the background is another challenge. Benchmark algorithms use approximations, thereby compromising accuracy. This creates a need for accurate affine registration for 3D reconstruction. In this work, we present a computationally efficient affine registration of Digital Imaging and Communications in Medicine (DICOM) images. We introduce a novel GPU-accelerated hierarchical clustering algorithm using Gaussian thresholding of inter-coordinate distances followed by maximal mutual information score merging for clutter removal. We also show that the 3D models reconstructed using our methodology have a best-case minimum error of 0.18 cm against physical measurements and have higher structural strength. This algorithm should apply to reconstruction, 3D printing, virtual reality, and 3D visualization.
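The pixel-to-world mapping the abstract refers to follows the standard DICOM plane equation; a sketch for a single slice, with typical axial-orientation tag values as an example:

```python
import numpy as np

def pixel_to_world(row, col, ipp, iop, spacing):
    """Map a pixel index to patient (world) coordinates for one DICOM slice.

    ipp     -- ImagePositionPatient (0020,0032): world position of pixel (0,0)
    iop     -- ImageOrientationPatient (0020,0037): row then column cosines
    spacing -- PixelSpacing (0028,0030): [row spacing, column spacing] in mm
    """
    row_cos, col_cos = np.asarray(iop[:3]), np.asarray(iop[3:])
    return (np.asarray(ipp)
            + row_cos * col * spacing[1]    # step along a row = column index
            + col_cos * row * spacing[0])   # step along a column = row index

# Example with typical axial-slice tags
print(pixel_to_world(10, 20, ipp=[-200.0, -180.0, 50.0],
                     iop=[1, 0, 0, 0, 1, 0], spacing=[0.8, 0.8]))
```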
Zonal Segmentation of Prostate T2W-MRI using Atrous Convolutional Neural Network
Khan, Zia
Yahya, Norashikin
Alsaih, Khaled
Meriaudeau, Fabrice
2019Conference Paper, cited 0 times
PROSTATE
Segmentation
The number of prostate cancer cases is steadily increasing, especially with the rising ageing population. The 5-year relative survival rate for men with stage 1 prostate cancer is reported to be almost 99%; hence, early detection will significantly improve treatment planning and increase the survival rate. Magnetic resonance imaging (MRI) is a common imaging modality for the diagnosis of prostate cancer. MRI provides good visualization of soft tissue and enables better lesion detection and staging of prostate cancer. The main challenge of whole-gland prostate segmentation is the blurry boundary between the central gland (CG) and peripheral zone (PZ), which can lead to differential diagnosis, since there is a substantial difference in the occurrence and characteristics of cancer in the two zones. To enhance the diagnosis of the prostate gland, we implemented the DeepLabV3+ semantic segmentation approach to segment the prostate into zones. DeepLabV3+ achieves significant results in the segmentation of prostate MRI by applying several parallel atrous convolutions with different rates. The CNN-based semantic segmentation approach was trained and tested on the NCI-ISBI 1.5T and 3T MRI dataset consisting of 40 patients. Performance evaluation based on the Dice similarity coefficient (DSC) of the DeepLab-based segmentation was compared with two other CNN-based semantic segmentation techniques: FCN and PSNet. Results show that prostate segmentation using DeepLabV3+ can perform better than FCN and PSNet, with an average DSC of 70.3% in the PZ and 88% in the CG zone. This indicates the significant contribution made by the atrous convolution layers in producing better prostate segmentation results.
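The DSC used for evaluation here (and in several other entries) is straightforward to compute from binary masks; a minimal sketch with toy masks:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy masks standing in for predicted and reference zone segmentations
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True
print(dice(a, b))  # 2*9/(16+16) = 0.5625
```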
Noninvasive Grading of Glioma Tumor Using Magnetic Resonance Imaging with Convolutional Neural Networks
Khawaldeh, Saed
Pervaiz, Usama
Rafiq, Azhar
Alkhawaldeh, Rami S.
Applied Sciences2017Journal Article, cited 187 times
Website
REMBRANDT
Machine Learning
In recent years, Convolutional Neural Networks (ConvNets) have rapidly emerged as a widespread machine learning technique in a number of applications, especially in the area of medical image classification and segmentation. In this paper, we propose a novel approach that uses ConvNets for classifying brain medical images into healthy and unhealthy brain images. The unhealthy brain tumor images are further categorized into low grade and high grade. In particular, we use a modified version of the Alex Krizhevsky network (AlexNet) deep learning architecture on magnetic resonance images as a potential tumor classification technique. The classification is performed on the whole image, where the labels in the training set are at the image level rather than the pixel level. The results showed a reasonable performance in characterizing the brain medical images, with an accuracy of 91.16%.
3D convolution neural networks for molecular subtype prediction in glioblastoma multiforme
Application of Homomorphic Encryption on Neural Network in Prediction of Acute Lymphoid Leukemia
Khilji, Ishfaque Qamar
Saha, Kamonashish
Amin, Jushan
Iqbal, Muhammad
International Journal of Advanced Computer Science and Applications2020Journal Article, cited 0 times
C_NMC_2019 Dataset: ALL Challenge dataset of ISBI 2019
Acute lymphoblastic leukemia (ALL)
Pathology
Convolutional Neural Network (CNN)
Classification
Computer Aided Diagnosis (CADx)
Machine Learning
Machine learning is now becoming a widely used mechanism, and applying it in sensitive fields like medical and financial data has only made things easier. Accurate diagnosis of cancer is essential to treating it properly. Medical tests for cancer are currently quite expensive and unavailable in many parts of the world. CryptoNets, on the other hand, demonstrates the use of neural networks over data encrypted with homomorphic encryption. This project demonstrates the use of homomorphic encryption for outsourcing neural-network predictions in the case of acute lymphoid leukemia (ALL). By using CryptoNets, the patients or doctors in need of the service can encrypt their data using homomorphic encryption and send only the encrypted message to the service provider (hospital or model owner). Since homomorphic encryption allows the provider to operate on the data while it is encrypted, the provider can make predictions using a pre-trained neural network while the data remains encrypted throughout the process, finally sending the prediction to the user, who can decrypt the results. During the process, the service provider (hospital or model owner) gains no knowledge about the data that was used or the result, since everything is encrypted throughout. Our work proposes a neural network model that can predict ALL with approximately 80% accuracy using the C_NMC Challenge dataset. Prior to building our own model, we pre-processed the dataset using a different approach. We then ran different machine learning and neural network models such as VGG16, SVM, AlexNet, and ResNet50 and compared the validation accuracies of these models with our own model, which ultimately gives better accuracy than the rest of the models used. We then use our own pre-trained neural network to make predictions using CryptoNets. We were able to achieve an encrypted prediction accuracy of about 78%, which is close to the 80% validation accuracy of our own CNN model for the prediction of ALL.
Comput Biol Med2021Journal Article, cited 0 times
Website
NSCLC-Radiomics-Genomics
Computed Tomography (CT)
Histopathology
Non-Small Cell Lung Cancer (NSCLC)
Radiomics
PyRadiomics
Wavelet
OBJECTIVE: The aim of this study was to identify the most important features and assess their discriminative power in the classification of the subtypes of NSCLC. METHODS: This study involved 354 pathologically proven NSCLC patients including 134 squamous cell carcinoma (SCC), 110 large cell carcinoma (LCC), 62 not otherwise specified (NOS), and 48 adenocarcinoma (ADC). In total, 1433 radiomics features were extracted from 3D volumes of interest drawn on the malignant lesion identified on CT images. A wrapper algorithm and multivariate adaptive regression splines were implemented to identify the most relevant/discriminative features. A multivariable multinomial logistic regression was employed with 1000 bootstrapping samples based on the selected features to classify the four main subtypes of NSCLC. RESULTS: The results revealed that the texture features, specifically gray level size zone matrix (GLSZM) features, were significant indicators of NSCLC subtypes. The optimized classifier achieved an average precision, recall, F1-score, and accuracy of 0.710, 0.703, 0.706, and 0.865, respectively, based on the features selected by the wrapper algorithm. CONCLUSIONS: Our CT radiomics approach demonstrated impressive potential for the classification of the four main histological subtypes of NSCLC. It is anticipated that CT radiomics could be useful in treatment planning and precision medicine.
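The multivariable multinomial logistic regression with bootstrapping could be outlined as follows; the features and labels are random stand-ins, and 100 resamples are used instead of the study's 1000 for brevity:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

rng = np.random.default_rng(2)
X = rng.random((354, 12))          # stand-in for selected radiomic features
y = rng.integers(0, 4, 354)        # 4 NSCLC subtypes: SCC, LCC, NOS, ADC

# Multinomial logistic regression refit on bootstrap resamples of the data
accs = []
for _ in range(100):               # the study used 1000 bootstrap samples
    Xb, yb = resample(X, y)        # sample with replacement
    model = LogisticRegression(max_iter=1000).fit(Xb, yb)
    accs.append(model.score(X, y))
print(np.mean(accs))
```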
Stable and discriminating radiomic predictor of recurrence in early stage non-small cell lung cancer: Multi-site study
Khorrami, Mohammadhadi
Bera, Kaustav
Leo, Patrick
Vaidya, Pranjal
Patil, Pradnya
Thawani, Rajat
Velu, Priya
Rajiah, Prabhakar
Alilou, Mehdi
Choi, Humberto
Feldman, Michael D
Gilkeson, Robert C
Linden, Philip
Fu, Pingfu
Pass, Harvey
Velcheti, Vamsidhar
Madabhushi, Anant
2020Journal Article, cited 0 times
NSCLC Radiogenomics-Stanford
OBJECTIVES: To evaluate whether combining stability and discriminability criteria in building radiomic classifiers will improve the prediction of cancer recurrence in early stage non-small cell lung cancer on non-contrast computed tomography (CT).
MATERIALS AND METHODS: CT scans of 610 patients with early stage (IA, IB, IIA) NSCLC from four independent cohorts were evaluated. A total of 350 patients from the Cleveland Clinic Foundation and the University of Pennsylvania were divided into two equal sets for training (D1) and validation (D2). Eighty patients from The Cancer Genome Atlas Lung Adenocarcinoma and Squamous Cell Carcinoma collections and 195 patients from The Cancer Imaging Archive were used as independent second (D3) and third (D4) validation sets. A linear discriminant analysis (LDA) classifier was built based on the most stable and discriminating features. In addition, a radiomic risk score (RRS) was generated using a least absolute shrinkage and selection operator (LASSO) Cox regression model to predict time to progression (TTP) following surgery.
RESULTS: A feature selection strategy accounting for both feature discriminability and stability resulted in a classifier with higher discriminability on the validation datasets than the discriminability-alone criterion in discriminating cancer recurrence (D2, AUC of 0.75 vs. 0.65; D3, 0.74 vs. 0.62; D4, 0.76 vs. 0.63). The RRS generated from the most stable-discriminating features was significantly associated with TTP compared to the discriminability-alone criterion (HR = 1.66, C-index of 0.72 vs. HR = 1.04, C-index of 0.62).
CONCLUSION: Accounting for both stability and discriminability yielded a more generalizable classifier for predicting cancer recurrence and TTP in early stage NSCLC.
Tumor segmentation via enhanced area growth algorithm for lung CT images
Khorshidi, A.
BMC Med Imaging2023Journal Article, cited 0 times
NSCLC-Radiomics
LIDC-IDRI
Algorithm Development
Segmentation
Image denoising
Humans
Tomography, X-Ray Computed/methods
Algorithms
Lung Neoplasms/diagnostic imaging
Lung/diagnostic imaging
Acceptance rate
Accuracy
Automatic thresholding
Comparison quantity
Computed Tomography (CT)
Contrast augmentation
Edge improvement
Enhance area growth
MATLAB
Start point
Tumor borders
BACKGROUND: Since lung tumors are in dynamic conditions, the study of tumor growth and its changes is of great importance in primary diagnosis. METHODS: An enhanced area growth (EAG) algorithm is introduced to segment lung tumors in 2D and 3D modes on CT images of 60 patients from four different databases using MATLAB software. The early steps of the proposed algorithm are contrast augmentation, determination of the color intensity and maximum primary tumor radius, thresholding, designation of start and neighbor points in an array, and averaging-based modification of the points on the boundary. To determine the new tumor boundaries, the maximum distance from the color-intensity center point of the primary tumor to the modified points is appointed by considering a larger target region and a new threshold. The tumor center is divided into different subsections, and then all previous stages are repeated from newly designated points to define diverse boundaries for the tumor. An interpolation between these boundaries creates a new tumor boundary. After drawing diverse lines from the tumor center at relevant angles, the intersections with the tumor boundaries are fixed for the edge correction phase. Each of the new regions is annexed to the core region to achieve a segmented tumor surface, subject to certain conditions. RESULTS: Grouping multiple growth starting points produced the desired result of precise tumor delineation. The proposed algorithm enhanced tumor identification by more than 16% with a reasonable accuracy acceptance rate. At the same time, it largely ensures the independence of the final outcome from the starting point. With a significance level of p < 0.05, the Dice coefficients were 0.80 ± 0.02 and 0.92 ± 0.03 for the primary and enhanced algorithms, respectively. Lung area determination alongside automatic thresholding, as well as starting from several points along with edge improvement, may reduce human errors in radiologists' interpretation of tumor areas and selection of the algorithm's starting point. CONCLUSIONS: The proposed algorithm enhanced tumor detection by more than 18% with a sufficient acceptance ratio of accuracy. Since the enhanced algorithm is independent of matrix size and image thickness, it is very likely that it can be easily applied to other contiguous tumor images. TRIAL REGISTRATION: PAZHOUHAN, PAZHOUHAN98000032. Registered 4 January 2021, http://pazhouhan.gerums.ac.ir/webreclist/view.action?webreclist_code=19300.
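The core area-growth idea (not the authors' full EAG algorithm with its multi-point restarts and edge correction) can be sketched as a seeded region-growing loop; the image and tolerance are toy values:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Minimal area-growth sketch: BFS from a seed, accepting 4-neighbors
    whose intensity is within `tol` of the running region mean."""
    mask = np.zeros(img.shape, dtype=bool)
    mask[seed] = True
    queue, total, count = deque([seed]), float(img[seed]), 1
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
            if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                    and not mask[nr, nc]
                    and abs(img[nr, nc] - total / count) <= tol):
                mask[nr, nc] = True
                total += float(img[nr, nc]); count += 1
                queue.append((nr, nc))
    return mask

img = np.pad(np.full((4, 4), 100.0), 3, constant_values=10.0)  # toy "tumor"
print(region_grow(img, seed=(5, 5), tol=20.0).sum())  # grows over the 4x4 blob
```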
Clinical target volume segmentation based on gross tumor volume using deep learning for head and neck cancer treatment
Kihara, S.
Koike, Y.
Takegawa, H.
Anetai, Y.
Nakamura, S.
Tanigawa, N.
Koizumi, M.
Med Dosim2022Journal Article, cited 0 times
Website
OPC-Radiomics
Clinical target volume
Deep learning
Head and neck cancer
Radiotherapy
Segmentation
Accurate clinical target volume (CTV) delineation is important for head and neck intensity-modulated radiation therapy. However, delineation is time-consuming and susceptible to interobserver variability (IOV). Based on a manual contouring process commonly used in clinical practice, we developed a deep learning (DL)-based method to delineate a low-risk CTV with computed tomography (CT) and gross tumor volume (GTV) input and compared it with a CT-only input. A total of 310 patients with oropharynx cancer were randomly divided into a training set (250) and a test set (60). The low-risk CTV and primary GTV contours were used to generate label data for the input and ground truth. A 3D U-Net with a two-channel input of CT and GTV (U-NetGTV) was proposed, and its performance was compared with a U-Net with only CT input (U-NetCT). The Dice similarity coefficient (DSC) and average Hausdorff distance (AHD) were evaluated. The time required to predict the CTV was 0.86 s per patient. U-NetGTV showed a significantly higher mean DSC value than U-NetCT (0.80 ± 0.03 vs 0.76 ± 0.05) and a significantly lower mean AHD value (3.0 ± 0.5 mm vs 3.5 ± 0.7 mm). Compared to the existing DL method with only CT input, the proposed GTV-based segmentation using DL showed more precise low-risk CTV segmentation for head and neck cancer. Our findings suggest that the proposed method could reduce the contouring time for a low-risk CTV, allowing the standardization of target delineations for head and neck cancer.
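The two-channel input idea (CT plus a binary GTV mask stacked on the channel axis) can be sketched in Keras; the shapes and the single stub convolution are hypothetical placeholders, not the paper's architecture:

```python
from tensorflow.keras import layers, Input, Model

# Two-channel input: CT volume plus a binary GTV mask, stacked on the
# channel axis (shapes are hypothetical; the paper used a full 3D U-Net)
ct = Input(shape=(64, 64, 64, 1), name="ct")
gtv = Input(shape=(64, 64, 64, 1), name="gtv_mask")
x = layers.Concatenate(axis=-1)([ct, gtv])      # -> (64, 64, 64, 2)

# ...encoder/decoder layers of the U-Net would follow; one conv as a stub
x = layers.Conv3D(8, 3, padding="same", activation="relu")(x)
ctv = layers.Conv3D(1, 1, activation="sigmoid", name="ctv")(x)
model = Model(inputs=[ct, gtv], outputs=ctv)
model.summary()
```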
Synthesis of Hybrid Data Consisting of Chest Radiographs and Tabular Clinical Records Using Dual Generative Models for COVID-19 Positive Cases
Kikuchi, T.
Hanaoka, S.
Nakao, T.
Takenaga, T.
Nomura, Y.
Mori, H.
Yoshikawa, T.
J Imaging Inform Med2024Journal Article, cited 0 times
Website
COVID-19-NY-SBU
Auto-encoding GAN
Covid-19
Generative Adversarial Network (GAN)
Data sharing
Synthetic data generation
We aimed to generate synthetic medical data incorporating image-tabular hybrid data by merging an image encoding/decoding model with a table-compatible generative model and to assess their utility. We used 1342 cases from the Stony Brook University COVID-19-positive cases, comprising chest X-ray radiographs (CXRs) and tabular clinical data, as a private dataset (pDS). We generated a synthetic dataset (sDS) through the following steps: (I) dimensionally reducing CXRs in the pDS using a pretrained encoder of the auto-encoding generative adversarial network (alphaGAN) and integrating them with the corresponding tabular clinical data; (II) training the conditional tabular GAN (CTGAN) on this combined data to generate synthetic records, encompassing encoded image features and clinical data; and (III) reconstructing synthetic images from these encoded image features in the sDS using a pretrained decoder of the alphaGAN. The utility of the sDS was assessed by the performance of prediction models for patient outcomes (deceased or discharged). For the pDS test set, the area under the receiver operating characteristic (AUC) curve was calculated to compare the performance of prediction models trained separately with the pDS, the sDS, or a combination of both. We created an sDS comprising CXRs with a resolution of 256 × 256 pixels and tabular data containing 13 variables. The AUC for the outcome was 0.83 when the model was trained with the pDS, 0.74 with the sDS, and 0.87 when combining pDS and sDS for training. Our method is effective for generating synthetic records consisting of both images and tabular clinical data.
Transfer learning may explain pigeons’ ability to detect cancer in histopathology
Kilim, Oz
Báskay, János
Biricz, András
Bedőházi, Zsolt
Pollner, Péter
Csabai, István
2024Journal Article, cited 0 times
DLBCL-Morphology
Ovarian Bevacizumab Response
Hungarian-Colorectal-Screening
HER2 tumor ROIs
Public data homogenization for AI model development in breast cancer
Kilintzis, V.
Kalokyri, V.
Kondylakis, H.
Joshi, S.
Nikiforaki, K.
Diaz, O.
Lekadir, K.
Tsiknakis, M.
Marias, K.
Eur Radiol Exp2024Journal Article, cited 0 times
Website
I-SPY 2
Duke-Breast-Cancer-MRI
ISPY1
TCGA-BRCA
Breast-MRI-NACT-Pilot
Humans
Female
Breast Neoplasms/diagnostic imaging
Artificial Intelligence
Magnetic Resonance Imaging (MRI)
ISPY2
ACRIN 6657
Public data
Software
BACKGROUND: Developing trustworthy artificial intelligence (AI) models for clinical applications requires access to clinical and imaging data cohorts. Reusing publicly available datasets has the potential to fill this gap. Specifically in the domain of breast cancer, a large archive of publicly accessible medical images along with the corresponding clinical data is available at The Cancer Imaging Archive (TCIA). However, existing datasets cannot be directly used as they are heterogeneous and cannot be effectively filtered for selecting specific image types required to develop AI models. This work focuses on the development of a homogenized dataset in the domain of breast cancer including clinical and imaging data. METHODS: Five datasets were acquired from the TCIA and were harmonized. For the clinical data harmonization, a common data model was developed and a repeatable, documented "extract-transform-load" process was defined and executed for their homogenization. Further, Digital Imaging and Communications in Medicine (DICOM) information was extracted from magnetic resonance imaging (MRI) data and made accessible and searchable. RESULTS: The resulting harmonized dataset includes information about 2,035 subjects with breast cancer. Further, a platform named RV-Cherry-Picker enables search over both the clinical and diagnostic imaging datasets, providing unified access, facilitating the downloading of all study images that correspond to specific series characteristics (e.g., dynamic contrast-enhanced series), and reducing the burden of acquiring the appropriate set of images for the respective AI model scenario. CONCLUSIONS: RV-Cherry-Picker provides access to the largest publicly available homogenized imaging/clinical dataset for breast cancer on which to develop AI models. RELEVANCE STATEMENT: We present a solution for creating merged public datasets supporting AI model development, using the breast cancer domain and magnetic resonance images as an example. KEY POINTS: * The proposed platform allows unified access to the largest homogenized public imaging dataset for breast cancer. * A methodology for the semantically enriched homogenization of public clinical data is presented. * The platform is able to make a detailed selection of breast MRI data for the development of AI models.
CNN-based CT denoising with an accurate image domain noise insertion technique
Convolutional neural network (CNN)-based CT denoising methods have attracted great interest for improving the image quality of low-dose CT (LDCT) images. However, CNNs require a large amount of paired data consisting of normal-dose CT (NDCT) and LDCT images, which are generally not available. In this work, we aim to synthesize paired data from NDCT images with an accurate image domain noise insertion technique and investigate its effect on the denoising performance of CNNs. Fan-beam CT images were reconstructed using extended cardiac-torso phantoms with Poisson noise added to the projection data to simulate NDCT and LDCT. We estimated local noise power spectra and a variance map from a NDCT image using information on photon statistics and reconstruction parameters. We then synthesized image domain noise by filtering and scaling white Gaussian noise using the local noise power spectrum and variance map, respectively. The CNN architecture was a U-Net, and the loss function was a weighted summation of mean squared error, perceptual loss, and adversarial loss. The CNN was trained with NDCT and LDCT (CNN-Ideal) or NDCT and synthesized LDCT (CNN-Proposed). To evaluate denoising performance, we measured the root mean squared error (RMSE), structural similarity index (SSIM), noise power spectrum (NPS), and modulation transfer function (MTF). The MTF was estimated from the edge spread function of a circular object with 12 mm diameter and 60 HU contrast. Denoising results from CNN-Ideal and CNN-Proposed show no significant difference in any metric, providing high scores in RMSE and SSIM relative to NDCT and NPS shapes similar to that of NDCT.
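The noise insertion step (shaping white Gaussian noise by a noise power spectrum and scaling it by a variance map) might be sketched as follows; a single toy global NPS stands in for the paper's local spectra, and all shapes and values are illustrative:

```python
import numpy as np

def synth_noise(nps, var_map, rng):
    """Filter white Gaussian noise by the square root of a (global, for
    simplicity) noise power spectrum, then scale voxel-wise so the
    variance matches `var_map`."""
    white = rng.standard_normal(nps.shape)
    shaped = np.fft.ifft2(np.fft.fft2(white) * np.sqrt(nps)).real
    shaped /= shaped.std()                       # normalize to unit variance
    return shaped * np.sqrt(var_map)             # local variance scaling

n = 128
f = np.fft.fftfreq(n)
fr = np.hypot(*np.meshgrid(f, f))
nps = fr * np.exp(-fr / 0.1)                     # toy ramp-like CT NPS
var_map = np.full((n, n), 25.0)                  # toy variance map (HU^2)
noise = synth_noise(nps, var_map, np.random.default_rng(3))
print(noise.std())                               # ~5 HU
```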
Weakly-supervised progressive denoising with unpaired CT images
Kim, Byeongjoon
Shim, Hyunjung
Baek, Jongduk
Medical Image Analysis2021Journal Article, cited 0 times
LDCT-and-Projection-data
Although low-dose CT imaging has attracted great interest due to its reduced radiation risk to patients, it suffers from severe and complex noise. Recent fully-supervised methods have shown impressive performance on the CT denoising task. However, they require a huge amount of paired normal-dose and low-dose CT images, which is generally unavailable in real clinical practice. To address this problem, we propose a weakly-supervised denoising framework that generates paired original and noisier CT images from unpaired CT images using a physics-based noise model. Our denoising framework also includes a progressive denoising module that bypasses the challenges of mapping from low-dose to normal-dose CT images directly by progressively compensating the small noise gap. To quantitatively evaluate diagnostic image quality, we present the noise power spectrum and signal detection accuracy, which are well correlated with visual inspection. The experimental results demonstrate that our method achieves remarkable performance, even superior to fully-supervised CT denoising with respect to signal detectability. Moreover, our framework increases flexibility in data collection, allowing us to utilize any unpaired data at any dose level.
Validation of MRI-Based Models to Predict MGMT Promoter Methylation in Gliomas: BraTS 2021 Radiogenomics Challenge
Kim, B. H.
Lee, H.
Choi, K. S.
Nam, J. G.
Park, C. K.
Park, S. H.
Chung, J. W.
Choi, S. H.
Cancers (Basel)2022Journal Article, cited 1 times
Website
BraTS 2021
Radiogenomics
O6-methylguanine-DNA methyl transferase
Glioma
neural networks
O6-methylguanine-DNA methyl transferase (MGMT) methylation prediction models were previously developed using only small datasets without proper external validation and achieved good diagnostic performance, which seemed to indicate a promising future for radiogenomics. However, the diagnostic performance was not reproducible for numerous research teams when using a larger dataset in the RSNA-MICCAI Brain Tumor Radiogenomic Classification 2021 challenge. To our knowledge, there has been no study regarding the external validation of MGMT prediction models using large-scale multicenter datasets. We tested recent CNN architectures via extensive experiments to investigate whether MGMT methylation in gliomas can be predicted using MR images. Specifically, prediction models were developed and validated with different training datasets: (1) the merged (SNUH + BraTS) (n = 985); (2) SNUH (n = 400); and (3) BraTS datasets (n = 585). A total of 420 training and validation experiments were performed on combinations of datasets, convolutional neural network (CNN) architectures, MRI sequences, and random seed numbers. The first-place solution of the RSNA-MICCAI radiogenomic challenge was also validated using the external test set (SNUH). For model evaluation, the area under the receiver operating characteristic curve (AUROC), accuracy, precision, and recall were obtained. With unexpected negative results, 80.2% (337/420) and 60.0% (252/420) of the 420 developed models showed no significant difference from the 50% chance level in terms of test accuracy and test AUROC, respectively. The test AUROC and accuracy of the first-place solution of the BraTS 2021 challenge were 56.2% and 54.8%, respectively, as validated on the SNUH dataset. In conclusion, the MGMT methylation status of gliomas may not be predictable with preoperative MR images, even using deep learning.
Prediction of 1p/19q Codeletion in Diffuse Glioma Patients Using Preoperative Multiparametric Magnetic Resonance Imaging
Kim, Donnie
Wang, Nicholas C
Ravikumar, Visweswaran
Raghuram, DR
Li, Jinju
Patel, Ankit
Wendt, Richard E
Rao, Ganesh
Rao, Arvind
Frontiers in Computational Neuroscience2019Journal Article, cited 0 times
glioma
BRATS
radiogenomics
CNN
Associations between gene expression profiles of invasive breast cancer and Breast Imaging Reporting and Data System MRI lexicon
Kim, Ga Ram
Ku, You Jin
Cho, Soon Gu
Kim, Sei Joong
Min, Byung Soh
Annals of Surgical Treatment and Research2017Journal Article, cited 3 times
Website
TCGA-BRCA
Radiogenomics
BI-RADS
BREAST
Magnetic resonance imaging (MRI)
Gene expression profiling
Purpose: To evaluate whether the Breast Imaging Reporting and Data System (BI-RADS) MRI lexicon could reflect the genomic information of breast cancers and to suggest intuitive imaging features as biomarkers. Methods: Matched breast MRI data from The Cancer Imaging Archive and gene expression profiles from The Cancer Genome Atlas of 70 invasive breast cancers were analyzed. Magnetic resonance images were reviewed according to the BI-RADS MRI lexicon of mass morphology. The cancers were divided into 2 groups of gene clustering by gene set enrichment analysis. Clinicopathologic and imaging characteristics were compared between the 2 groups. Results: The luminal subtype was predominant in the group 1 gene set and the triple-negative subtype was predominant in the group 2 gene set (55 of 56, 98.2% vs. 9 of 14, 64.3%). Internal enhancement descriptors differed between the 2 groups: heterogeneity was most frequent in group 1 (27 of 56, 48.2%) and rim enhancement was dominant in group 2 (10 of 14, 71.4%). In group 1, the gene sets related to mammary gland development were overexpressed, whereas the gene sets related to mitotic cell division were overexpressed in group 2. Conclusion: We identified intuitive imaging features of breast MRI associated with distinct gene expression profiles using the standard imaging variables of BI-RADS. The internal enhancement pattern on MRI might reflect specific gene expression profiles of breast cancers, which can be recognized by visual distinction.
Correlation between MR Image-Based Radiomics Features and Risk Scores Associated with Gene Expression Profiles in Breast Cancer
Kim, Ga Ram
Ku, You Jin
Kim, Jun Ho
Kim, Eun-Kyung
Journal of the Korean Society of Radiology2020Journal Article, cited 0 times
Website
TCGA-BRCA
Radiogenomics
Radiomic features
Magnetic Resonance Imaging (MRI)
Modification of population based arterial input function to incorporate individual variation
Kim, Harrison
Magn Reson Imaging2018Journal Article, cited 2 times
Website
QIN PROSTATE
Algorithm Development
PROSTATE
Arterial input function (AIF)
DCE-MRI
This technical note describes how to modify a population-based arterial input function to incorporate variation among individuals. In DCE-MRI, an arterial input function (AIF) is often distorted by the pulsated inflow effect and noise. A population-based AIF (pAIF) has a high signal-to-noise ratio (SNR) but cannot incorporate individual variation. AIF variation is mainly induced by variation in the cardiac output and blood volume of the individuals, which can be detected from the full width at half maximum (FWHM) during the first passage and the amplitude of the AIF, respectively. Thus, a pAIF scaled in time and amplitude to fit the individual AIF may serve as a high-SNR AIF incorporating individual variation. The proposed method was validated using DCE-MRI images of 18 prostate cancer patients. The root mean square error (RMSE) of the pAIF from the individual AIFs was 0.88 ± 0.48 mM (mean ± SD), but it was reduced to 0.25 ± 0.11 mM after pAIF modification using the proposed method (p < 0.0001).
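The time- and amplitude-scaling of the pAIF could be outlined like this; the curve shape and both ratios are toy values, and the exact fitting procedure of the note is not reproduced:

```python
import numpy as np

def scale_paif(t, paif, fwhm_ratio, amp_ratio):
    """Sketch of the pAIF modification idea: stretch the population AIF in
    time by the individual-to-population first-pass FWHM ratio and scale its
    amplitude by the peak ratio (both assumed measured from the individual's
    noisy AIF)."""
    # evaluate pAIF at t/fwhm_ratio -> curve stretched in time by fwhm_ratio
    return np.interp(t / fwhm_ratio, t, paif) * amp_ratio

t = np.linspace(0, 120, 241)                 # seconds
paif = 4.0 * np.exp(-((t - 20) / 6.0) ** 2)  # toy bolus-like population AIF
aif_hat = scale_paif(t, paif, fwhm_ratio=1.2, amp_ratio=0.9)
print(aif_hat.max())                         # ~3.6 mM peak
```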
Pulse Sequence Dependence of a Simple and Interpretable Deep Learning Method for Detection of Clinically Significant Prostate Cancer Using Multiparametric MRI
Kim, H.
Margolis, D. J. A.
Nagar, H.
Sabuncu, M. R.
Acad Radiol2022Journal Article, cited 0 times
PROSTATEx
Deep Learning
Magnetic Resonance Imaging (MRI)
Computer Aided Detection (CADe)
multi-parametric magnetic resonance imaging (multi-parametric MRI)
Prostate Cancer
RATIONALE AND OBJECTIVES: Multiparametric magnetic resonance imaging (mpMRI) is increasingly used for risk stratification and localization of prostate cancer (PCa). Given the great success of deep learning models in computer vision, their application to early detection of PCa using mpMRI is imminent. MATERIALS AND METHODS: Deep learning analysis of the PROSTATEx dataset. RESULTS: In this study, we show that a simple convolutional neural network (CNN) with mpMRI can achieve high performance for the detection of clinically significant PCa (csPCa), depending on the pulse sequences used. The mpMRI model with T2-ADC-DWI achieved a 0.90 AUC score on the held-out test set, not significantly better than the model using K(trans) instead of DWI (AUC 0.89). Interestingly, the model incorporating T2-ADC-K(trans) better estimates grade. We also describe a saliency "heat" map. Our results show that csPCa detection models with mpMRI may be leveraged to guide clinical management strategies. CONCLUSION: Convolutional neural networks incorporating multiple pulse sequences show high performance for the detection of clinically significant prostate cancer, and the model including dynamic contrast-enhanced information correlates best with grade.
Visual Interpretation of Convolutional Neural Network Predictions in Classifying Medical Image Modalities
Kim, Incheol
Rajaraman, Sivaramakrishnan
Antani, Sameer
Diagnostics (Basel)2019Journal Article, cited 0 times
Website
Computer Aided Detection (CADe)
Computer Aided Diagnosis (CADx)
Convolutional Neural Network (CNN)
Deep learning
Deep learning (DL) methods are increasingly being applied for developing reliable computer-aided detection (CADe), diagnosis (CADx), and information retrieval algorithms. However, challenges in interpreting and explaining the learned behavior of DL models hinder their adoption and use in real-world systems. In this study, we propose a novel method called "Class-selective Relevance Mapping" (CRM) for localizing and visualizing discriminative regions of interest (ROI) within a medical image. Such visualizations offer an improved explanation of convolutional neural network (CNN)-based DL model predictions. We demonstrate CRM's effectiveness in classifying medical imaging modalities toward automatically labeling them for visual information retrieval applications. The CRM is based on a linear sum of incremental mean squared errors (MSE) calculated at the output layer of the CNN model. It measures both positive and negative contributions of each spatial element in the feature maps produced from the last convolution layer leading to correct classification of an input image. A series of experiments on a "multi-modality" CNN model designed for classifying seven different types of image modalities shows that the proposed method is significantly better at detecting and localizing the discriminative ROIs than other state-of-the-art class-activation methods. Further, to visualize its effectiveness, we generate "class-specific" ROI maps by averaging the CRM scores of images in each modality class and characterize the visual explanation through their different size, shape, and location for our multi-modality CNN model, which achieved over 98% performance on a dataset constructed from publicly available images.
Roadmap for providing and leveraging annotated data by cytologists in the PDAC domain as open data: support for AI-based pathology image analysis development and data utilization strategies
Kim, J.
Bae, S.
Yoon, S. M.
Jeong, S.
Front Oncol2024Journal Article, cited 0 times
Website
CPTAC-PDA
Pancreatic cancer is one of the most lethal cancers worldwide, with a 5-year survival rate of less than 5%, the lowest of all cancer types. Pancreatic ductal adenocarcinoma (PDAC) is the most common and aggressive pancreatic cancer and has been classified as a health emergency in the past few decades. The histopathological diagnosis and prognosis evaluation of PDAC is time-consuming, laborious, and challenging under current clinical practice conditions. Pathological artificial intelligence (AI) research has been actively conducted lately. However, accessing medical data is challenging: the amount of open pathology data is small, and the absence of open annotation data drawn by medical staff makes it difficult to conduct pathology AI research. Here, we provide easily accessible, high-quality annotation data to address the abovementioned obstacles. Data evaluation is performed by supervised learning, using a deep convolutional neural network structure, to segment 11 annotated PDAC histopathological whole slide images (WSIs) drawn by medical staff directly on an open WSI dataset. We visualized the segmentation results of the histopathological images with a Dice score of 73% on the WSIs, including PDAC areas, thus identifying areas important for PDAC diagnosis and demonstrating high data quality. Additionally, pathologists assisted by AI can significantly increase their work efficiency. The pathological AI guidelines we propose are effective in developing histopathological AI for PDAC and are significant for the clinical field.
Training of deep convolutional neural nets to extract radiomic signatures of tumors
Kim, J.
Seo, S.
Ashrafinia, S.
Rahmim, A.
Sossi, V.
Klyuzhin, I.
Journal of Nuclear Medicine2019Journal Article, cited 0 times
Head-Neck-PET-CT
Radiomics
Objectives: Radiomics-based analysis of FDG PET images has been shown to improve the assessment and prediction of tumor growth rate, response to treatment, and other patient outcomes [1]. An alternative new approach to image analysis involves the use of convolutional neural networks (CNNs), wherein relevant image features are learned implicitly and automatically in the process of network training [2]; this is in contrast to radiomics analyses, where the features are "hand-crafted" and explicitly computed (EC). Although CNNs represent a more general approach, it is not clear whether the implicitly learned features may, or have the ability to, include radiomics features (RFs) as a subset. If this is the case, CNN-based approaches may eventually obviate the use of EC RFs. Further, the use of CNNs instead of RFs may completely eliminate the need for feature selection and tumor delineation, enabling high-throughput data analyses. Thus, our objective was to test whether CNNs can learn to act similarly to several commonly used RFs. Using a set of simulated and real FDG PET images of tumors, we train CNNs to estimate the values of RFs from the images without the explicit computation. We then compare the values of the CNN-estimated and EC features. Methods: Using a stochastic volumetric model for tumor growth, 2000 FDG images of tumors confined to a bounding box (BB) were simulated (40x40x40 voxels, voxel size 2.0 mm), and 10 RFs (3 morphology, 4 intensity histogram, 3 texture features) were computed for each image using the SERA library [3] (compliant with the Image Biomarker Standardization Initiative, IBSI [4]). A 3D CNN with 4 convolutional layers and a total of 164 filters was implemented in Python using the Keras library with TensorFlow backend (https://www.keras.io). The mean absolute error was the optimized loss function. The CNN was trained to automatically estimate the value of each of the 10 RFs for each image; 1900 images were used for training and 100 for testing, to compare the CNN-estimated values to the EC feature values. We also used a secondary test set comprising 133 real tumor images, obtained from the head and neck PET/CT imaging study [5] publicly available at The Cancer Imaging Archive. The tumors were cropped to a BB, and the images were resampled to yield an image size similar to the simulated image set. Results: After the training procedure, on the simulated test set the CNN was able to estimate the values of most EC RFs with 10-20% error (relative to the range). In the morphology group, the errors were 3.8% for volume, 12.0% for compactness, and 15.7% for flatness. In the intensity group, the errors were 13.7% for the mean, 15.4% for variance, 12.3% for skewness, and 13.1% for kurtosis. In the texture group, the error was 10.6% for GLCM contrast, 13.4% for cluster tendency, and 21.7% for angular momentum. For all features, the differences between the CNN-estimated and EC feature values were statistically insignificant (two-sample t-test), and the correlation between the feature values was highly significant (p<0.01). On the real image test set, we observed higher error rates, on the order of 20-30%; however, for all but one feature (angular momentum), there was a significant correlation between the CNN-estimated and EC features (p<0.01). Conclusions: Our results suggest that CNNs can be trained to act similarly to several widely used RFs. While the accuracy of CNN-based estimates varied between the features, in general, the CNN showed a good propensity for learning. Thus, it is likely that with more complex network architectures and training data, features can be estimated more accurately. While a greater number of RFs need to be similarly tested in the future, these initial experiments provide first evidence that, given sufficient quality and quantity of training data, CNNs indeed represent a more general approach to feature extraction and may potentially replace radiomics-based analyses without compromising descriptive thoroughness.
RGU-Net: Computationally Efficient U-Net for Automated Brain Extraction of mpMRI with Presence of Glioblastoma
Brain extraction refers to the process of removing non-brain tissues from brain scans and is one of the initial pre-processing procedures in neuroimage analysis. Since errors produced during this process can be challenging to amend in subsequent analyses, accurate brain extraction is crucial. Most deep learning-based brain extraction models are optimised for performance, leading to computationally expensive models. Such models may be ideal for research; however, they are not ideal in a clinical setting. In this work, we propose a new computationally efficient 2D brain extraction model, named RGU-Net. RGU-Net incorporates Ghost modules and residual paths to accurately extract features and reduce computational cost. Our results show that RGU-Net has 98.26% fewer parameters compared to the original U-Net model, whilst yielding state-of-the-art performance of 97.97 ± 0.84% Dice similarity coefficient. Faster run time was also observed on CPUs, which illustrates the model's practicality in real-world applications.
ICP Algorithm Based Liver Rigid Registration Method Using Liver and Liver Vessel Surface Mesh
Kim, Soohyun
Koo, Kyoyeong
Park, Taeyong
Lee, Jeongjin
2023Conference Paper, cited 0 times
TCGA-LIHC
HCC-TACE-Seg
LIVER
Hepatocellular carcinoma (HCC)
Computed Tomography (CT)
Image Registration
Segmentation
Organ segmentation
Vasculature
Computer Aided Diagnosis (CADx)
To improve the survival rate of hepatocellular carcinoma (HCC), early diagnosis and treatment are essential. Early diagnosis of HCC often involves comparing and analyzing hundreds of computed tomography (CT) images, which is subjective and time-consuming. In this paper, we propose a liver rigid registration method using liver and liver vessel surface meshes to enable fast and objective diagnosis of HCC. The proposed method involves segmenting the liver and liver vessel regions from abdominal CT images, generating surface meshes, and performing liver rigid registration based on the Iterative Closest Point (ICP) algorithm using the generated meshes. We evaluate the accuracy of the proposed method through experiments, and the performance evaluations demonstrate its potential for fast and objective diagnosis, aiding the early diagnosis and treatment of HCC.
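A minimal ICP iteration over surface-mesh vertices (nearest-neighbor matching plus a Kabsch/SVD rigid fit) might look like this; the point sets are synthetic stand-ins for liver mesh vertices, not the authors' data:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst):
    """One ICP iteration: match each source point to its nearest destination
    point, then solve the best rigid transform with the Kabsch/SVD method."""
    nn = dst[cKDTree(dst).query(src)[1]]
    mu_s, mu_d = src.mean(0), nn.mean(0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (nn - mu_d))
    if np.linalg.det(U @ Vt) < 0:            # avoid reflections
        Vt[-1] *= -1
    R = (U @ Vt).T
    t = mu_d - R @ mu_s
    return src @ R.T + t

rng = np.random.default_rng(4)
dst = rng.random((200, 3))                   # e.g., liver surface mesh vertices
rot = np.array([[0.9950, -0.0998, 0], [0.0998, 0.9950, 0], [0, 0, 1]])
src = dst @ rot.T + 0.05                     # rotated and shifted copy
for _ in range(20):                          # iterate until convergence
    src = icp_step(src, dst)
print(np.abs(src - dst).mean())              # residual misalignment
```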
Two-Step U-Nets for Brain Tumor Segmentation and Random Forest with Radiomics for Survival Time Prediction
Kim, Soopil
Luna, Miguel
Chikontwe, Philip
Park, Sang Hyun
2020Book Section, cited 0 times
BraTS-TCGA-LGG
Segmentation
Radiomics
Random Forest
Convolutional Neural Network (CNN)
In this paper, a two-step convolutional neural network (CNN) for brain tumor segmentation in brain MR images and a random forest regressor for survival prediction of high-grade glioma subjects are proposed. The two-step CNN consists of three 2D U-Nets for utilizing global information on the axial, coronal, and sagittal axes, and a 3D U-Net that uses local information in 3D patches. In our two-step setup, an initial segmentation probability map is first obtained using the ensemble of 2D U-Nets; second, a 3D U-Net takes as input both the MR image and the initial segmentation map to generate the final segmentation. Following segmentation, radiomics features from T1-weighted, T2-weighted, contrast-enhanced T1-weighted, and T2-FLAIR images are extracted with the segmentation results as a prior. Lastly, a random forest regressor is used for survival time prediction. Moreover, only a small number of features selected by the random forest regressor are used to avoid overfitting. We evaluated the proposed methods on the BraTS 2019 challenge dataset. For the segmentation task, we obtained average Dice scores of 0.74, 0.85, and 0.80 for the enhancing tumor core, whole tumor, and tumor core, respectively. In the survival prediction task, an average accuracy of 50.5% was obtained, showing the effectiveness of the proposed methods.
“SPOCU”: scaled polynomial constant unit activation function
Kiseľák, Jozef
Lu, Ying
Švihra, Ján
Szépe, Peter
Stehlík, Milan
Neural Computing and Applications2020Journal Article, cited 0 times
Pancreas-CT
We address the following problem: given a set of complex images or a large database, the numerical and computational complexity and quality of approximation for a neural network may drastically differ from one activation function to another. A general novel methodology, the scaled polynomial constant unit activation function "SPOCU," is introduced and shown to work satisfactorily on a variety of problems. Moreover, we show that SPOCU can outperform already introduced activation functions with good properties, e.g., SELU and ReLU, on generic problems. To explain the good properties of SPOCU, we provide several theoretical and practical motivations, including a tissue growth model and memristive cellular nonlinear networks. We also provide an estimation strategy for SPOCU parameters and its relation to the generation of a random type of Sierpinski carpet, related to the [pppq] model. One of the attractive properties of SPOCU is its genuine normalization of the output of layers. We illustrate the SPOCU methodology on cancer discrimination, including mammary and prostate cancer and data from the Wisconsin Diagnostic Breast Cancer dataset. Moreover, we compared SPOCU with SELU and ReLU on the large MNIST dataset, which justifies the usefulness of SPOCU by its very good performance.
PleThora: Pleural effusion and thoracic cavity segmentations in diseased lungs for benchmarking chest CT processing pipelines
Kiser, K. J.
Ahmed, S.
Stieb, S.
Mohamed, A. S. R.
Elhalawani, H.
Park, P. Y. S.
Doyle, N. S.
Wang, B. J.
Barman, A.
Li, Z.
Zheng, W. J.
Fuller, C. D.
Giancardo, L.
Med Phys2020Journal Article, cited 0 times
Website
PleThora
NSCLC-Radiomics
Analysis Results
LUNG
U-Net
This manuscript describes a dataset of thoracic cavity segmentations and discrete pleural effusion segmentations we have annotated on 402 computed tomography (CT) scans acquired from patients with non-small cell lung cancer. The segmentation of these anatomic regions precedes fundamental tasks in image analysis pipelines such as lung structure segmentation, lesion detection, and radiomics feature extraction. Bilateral thoracic cavity volumes and pleural effusion volumes were manually segmented on CT scans acquired from The Cancer Imaging Archive "NSCLC Radiomics" data collection. Four hundred and two thoracic segmentations were first generated automatically by a U-Net based algorithm trained on chest CTs without cancer, manually corrected by a medical student to include the complete thoracic cavity (normal, pathologic, and atelectatic lung parenchyma, lung hilum, pleural effusion, fibrosis, nodules, tumor, and other anatomic anomalies), and revised by a radiation oncologist or a radiologist. Seventy-eight pleural effusions were manually segmented by a medical student and revised by a radiologist or radiation oncologist. Interobserver agreement between the radiation oncologist and radiologist corrections was acceptable. All expert-vetted segmentations are publicly available in NIfTI format through The Cancer Imaging Archive at https://doi.org/10.7937/tcia.2020.6c7y-gq39. Tabular data detailing clinical and technical metadata linked to segmentation cases are also available. Thoracic cavity segmentations will be valuable for developing image analysis pipelines on pathologic lungs - where current automated algorithms struggle most. In conjunction with gross tumor volume segmentations already available from "NSCLC Radiomics," pleural effusion segmentations may be valuable for investigating radiomics profile differences between effusion and primary tumor or training algorithms to discriminate between them.
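Loading one of these NIfTI segmentations and computing a volume could look like the following sketch; the file name is hypothetical, and the voxel-size computation assumes the usual affine conventions:

```python
import nibabel as nib
import numpy as np

# Hypothetical file name; PleThora masks are distributed in NIfTI format
mask_img = nib.load("thoracic_cavity_mask.nii.gz")
mask = mask_img.get_fdata() > 0

# Physical voxel volume (mm^3) from the norms of the affine's spatial columns
voxel_vol = np.prod(np.linalg.norm(mask_img.affine[:3, :3], axis=0))
print(f"cavity volume: {mask.sum() * voxel_vol / 1000.0:.1f} mL")
```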
Sketch-based semantic retrieval of medical images
Kobayashi, Kazuma
Gu, Lin
Hataya, Ryuichiro
Mizuno, Takaaki
Miyake, Mototaka
Watanabe, Hirokazu
Takahashi, Masamichi
Takamizawa, Yasuyuki
Yoshida, Yukihiro
Nakamura, Satoshi
Kouno, Nobuji
Bolatkan, Amina
Kurose, Yusuke
Harada, Tatsuya
Hamamoto, Ryuji
Medical Image Analysis2023Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
The volume of medical images stored in hospitals is rapidly increasing; however, the utilization of these accumulated medical images remains limited. Existing content-based medical image retrieval (CBMIR) systems typically require example images, leading to practical limitations, such as the lack of customizable, fine-grained image retrieval, the inability to search without example images, and difficulty in retrieving rare cases. In this paper, we introduce a sketch-based medical image retrieval (SBMIR) system that enables users to find images of interest without the need for example images. The key concept is feature decomposition of medical images, which allows the entire feature of a medical image to be decomposed into and reconstructed from normal and abnormal features. Building on this concept, our SBMIR system provides an easy-to-use two-step graphical user interface: users first select a template image to specify a normal feature and then draw a semantic sketch of the disease on the template image to represent an abnormal feature. The system integrates both types of input to construct a query vector and retrieves reference images. For evaluation, ten healthcare professionals participated in a user test using two datasets. Consequently, our SBMIR system enabled users to overcome previous challenges, including image retrieval based on fine-grained image characteristics, image retrieval without example images, and image retrieval for rare cases. Our SBMIR system provides on-demand, customizable medical image retrieval, thereby expanding the utility of medical image databases.
Decomposing normal and abnormal features of medical images for content-based image retrieval of glioma imaging
Kobayashi, K.
Hataya, R.
Kurose, Y.
Miyake, M.
Takahashi, M.
Nakagawa, A.
Harada, T.
Hamamoto, R.
Med Image Anal2021Journal Article, cited 2 times
Website
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Classification
comparative diagnostic reading
Content-based image retrieval (CBIR)
Deep Learning
disentangled representation
feature decomposition
In medical imaging, the characteristics purely derived from a disease should reflect the extent to which abnormal findings deviate from normal features. Indeed, physicians often need corresponding images without abnormal findings of interest or, conversely, images that contain similar abnormal findings regardless of normal anatomical context. This is called comparative diagnostic reading of medical images, which is essential for a correct diagnosis. To support comparative diagnostic reading, content-based image retrieval (CBIR) that can selectively utilize normal and abnormal features in medical images as two separable semantic components will be useful. In this study, we propose a neural network architecture that decomposes the semantic components of medical images into two latent codes: a normal anatomy code and an abnormal anatomy code. The normal anatomy code represents the counterfactual normal anatomy that should have existed if the sample were healthy, whereas the abnormal anatomy code captures abnormal changes that reflect deviation from the normal baseline. By calculating the similarity based on either the normal or abnormal anatomy code, or the combination of the two codes, our algorithm can retrieve images according to the selected semantic component from a dataset consisting of brain magnetic resonance images of gliomas. Moreover, it can utilize a synthetic query vector combining normal and abnormal anatomy codes from two different query images. To evaluate whether the retrieved images are acquired according to the targeted semantic component, the overlap of the ground-truth labels is calculated as a metric of semantic consistency. Our algorithm provides a flexible CBIR framework by handling the decomposed features, with qualitatively and quantitatively remarkable results.
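The code-combination retrieval described above might be outlined as follows; the weighting scheme, cosine similarity, and all arrays are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def retrieve(query_normal, query_abnormal, db_normal, db_abnormal, w=0.5, k=5):
    """Build a query vector from a weighted concatenation of normal and
    abnormal anatomy codes (possibly taken from two different images) and
    rank the database by cosine similarity."""
    q = np.concatenate([w * query_normal, (1 - w) * query_abnormal])
    db = np.concatenate([w * db_normal, (1 - w) * db_abnormal], axis=1)
    sims = (db @ q) / (np.linalg.norm(db, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(-sims)[:k]              # indices of the top-k images

rng = np.random.default_rng(5)
dbn, dba = rng.random((100, 64)), rng.random((100, 64))
print(retrieve(dbn[0], dba[7], dbn, dba))     # mix codes from two query images
```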
RadiomicsJ: a library to compute radiomic features
Kobayashi, T.
Radiol Phys Technol2022Journal Article, cited 0 times
Website
LGG-1p19qDeletion
Computer Aided Diagnosis (CADx)
Imaging biomarker
Machine learning
Radiomic features
Radiomics
Despite the widely recognized need for radiomics research, the development and use of full-scale radiomics-based predictive models in clinical practice remains scarce. This is because of the lack of well-established methodologies for radiomic research and the need to develop systems to support radiomic feature calculations and predictive model use. Several excellent programs for calculating radiomic features have been developed. However, there are still issues such as the types of image features, variations in the calculated results, and the limited system environment in which to run the program. Against this background, we developed RadiomicsJ, an open-source radiomic feature computation library. RadiomicsJ will not only be a new research tool to enhance the efficiency of radiomics research but will also become a knowledge resource for medical imaging feature studies through its release as an open-source program.
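RadiomicsJ itself is a Java library, and its API is not reproduced here. As an illustration of the feature family such libraries standardize, the sketch below computes two gray-level co-occurrence matrix (GLCM) texture features on a toy quantized image with scikit-image:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Toy 2D "image" quantized to 8 gray levels, as radiomics pipelines typically do.
rng = np.random.default_rng(0)
img = rng.integers(0, 8, size=(64, 64), dtype=np.uint8)

# GLCM over 1-pixel offsets in four directions, symmetric and normalized.
glcm = graycomatrix(img, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=8, symmetric=True, normed=True)

# Direction-averaged features, as most feature definitions prescribe.
print("contrast:   ", graycoprops(glcm, "contrast").mean())
print("homogeneity:", graycoprops(glcm, "homogeneity").mean())
```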
Design and evaluation of an accurate CNR-guided small region iterative restoration-based tumor segmentation scheme for PET using both simulated and real heterogeneous tumors
Koç, Alpaslan
Güveniş, Albert
Med Biol Eng Comput2020Journal Article, cited 0 times
Website
RIDER PHANTOM PET-CT
Segmentation
Positron Emission Tomography (PET)
Tumor delineation accuracy directly affects the effectiveness of radiotherapy. This study presents a methodology that minimizes potential errors during the automated segmentation of tumors in PET images. Iterative blind deconvolution was implemented in a region of interest encompassing the tumor, with the number of iterations determined from contrast-to-noise ratios. The active contour and random forest classification-based segmentation methods were evaluated using three distinct image databases that included both synthetic and real heterogeneous tumors. Ground truths about tumor volumes were known precisely. The volumes of the tumors were in the ranges of 0.49-26.34 cm(3), 0.64-1.52 cm(3), and 40.38-203.84 cm(3), respectively. Widely available software tools, namely MATLAB, MIPAV, and ITK-SNAP, were utilized. When using the active contour method, image restoration reduced mean errors in volume estimation from 95.85 to 3.37%, from 815.63 to 17.45%, and from 32.61 to 6.80% for the three datasets. The accuracy gains were higher for datasets that include smaller tumors, for which the partial volume effect (PVE) is known to be more predominant. Computation time was reduced by a factor of about 10 in the smaller deconvolution region. Contrast-to-noise ratios were improved for all tumors in all data. The presented methodology has the potential to improve delineation accuracy, in particular for smaller tumors, at practically feasible computational times. Graphical abstract: Evaluation of accurate lesion volumes using the CNR-guided and ROI-based restoration method for PET images.
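The pipeline determines the deconvolution iteration count from the contrast-to-noise ratio (CNR) inside a small region of interest. A simplified stand-in is sketched below: it assumes a known point spread function and uses scikit-image's (non-blind) Richardson-Lucy routine rather than the blind deconvolution used in the study, picking the iteration count that maximizes CNR.

```python
import numpy as np
from skimage.restoration import richardson_lucy  # assumes scikit-image >= 0.19

def cnr(img, tumor_mask, bg_mask):
    # Contrast-to-noise ratio: (mean tumor - mean background) / background std.
    return (img[tumor_mask].mean() - img[bg_mask].mean()) / (img[bg_mask].std() + 1e-9)

def cnr_guided_restore(roi, psf, tumor_mask, bg_mask, max_iter=30):
    """Pick the Richardson-Lucy iteration count that maximizes CNR in the ROI."""
    best_img, best_cnr = roi, cnr(roi, tumor_mask, bg_mask)
    for n in range(1, max_iter + 1):
        restored = richardson_lucy(roi, psf, num_iter=n, clip=False)
        c = cnr(restored, tumor_mask, bg_mask)
        if c > best_cnr:
            best_img, best_cnr = restored, c
    return best_img, best_cnr

# Usage: roi = blurred PET patch (float array), psf = e.g. a Gaussian kernel,
# tumor_mask / bg_mask = boolean arrays of the same shape as roi.
```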
Influence of segmentation margin on machine learning–based high-dimensional quantitative CT texture analysis: a reproducibility study on renal clear cell carcinomas
Kocak, Burak
Ates, Ece
Durmaz, Emine Sebnem
Ulusan, Melis Baykara
Kilickesmez, Ozgur
European Radiology2019Journal Article, cited 0 times
Website
TCGA-KIRC
Segmentation
CT
Unenhanced CT Texture Analysis of Clear Cell Renal Cell Carcinomas: A Machine Learning-Based Study for Predicting Histopathologic Nuclear Grade
Kocak, Burak
Durmaz, Emine Sebnem
Ates, Ece
Kaya, Ozlem Korkmaz
Kilickesmez, Ozgur
American Journal of Roentgenology2019Journal Article, cited 0 times
Website
TCGA-KIRC
Machine learning
Radiomics
OBJECTIVE: The purpose of this study is to investigate the predictive performance of machine learning (ML)-based unenhanced CT texture analysis in distinguishing low (grades I and II) and high (grades III and IV) nuclear grade clear cell renal cell carcinomas (RCCs). MATERIALS AND METHODS: For this retrospective study, 81 patients with clear cell RCC (56 high and 25 low nuclear grade) were included from a public database. Using 2D manual segmentation, 744 texture features were extracted from unenhanced CT images. Dimension reduction was done in three consecutive steps: reproducibility analysis by two radiologists, collinearity analysis, and feature selection. Models were created using artificial neural network (ANN) and binary logistic regression, with and without synthetic minority oversampling technique (SMOTE), and were validated using 10-fold cross-validation. The reference standard was histopathologic nuclear grade (low vs high). RESULTS: Dimension reduction steps yielded five texture features for the ANN and six for the logistic regression algorithm. None of the clinical variables was selected. ANN alone and ANN with SMOTE correctly classified 81.5% and 70.5%, respectively, of clear cell RCCs, with AUC values of 0.714 and 0.702, respectively. The logistic regression algorithm alone and with SMOTE correctly classified 75.3% and 62.5%, respectively, of the tumors, with AUC values of 0.656 and 0.666, respectively. The ANN performed better than the logistic regression (p < 0.05). No statistically significant difference was present between the model performances created with and without SMOTE (p > 0.05). CONCLUSION: ML-based unenhanced CT texture analysis using ANN can be a promising noninvasive method in predicting the nuclear grade of clear cell RCCs.
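A minimal sketch of the core modelling step: an ANN validated with 10-fold cross-validation and SMOTE applied inside the pipeline, so that oversampling only ever touches training folds. The data here are synthetic stand-ins for the selected texture features.

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Stand-in for the selected texture features (roughly 25 low vs 56 high grade).
X, y = make_classification(n_samples=81, n_features=5, weights=[0.31, 0.69],
                           random_state=0)

# Putting SMOTE in the pipeline guarantees it is fit on training folds only.
model = Pipeline([("scale", StandardScaler()),
                  ("smote", SMOTE(random_state=0)),
                  ("ann", MLPClassifier(max_iter=2000, random_state=0))])

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
print("mean ROC AUC:", cross_val_score(model, X, y, cv=cv, scoring="roc_auc").mean())
```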
Radiogenomics of lower-grade gliomas: machine learning-based MRI texture analysis for predicting 1p/19q codeletion status
Kocak, B.
Durmaz, E. S.
Ates, E.
Sel, I.
Turgut Gunes, S.
Kaya, O. K.
Zeynalova, A.
Kilickesmez, O.
Eur Radiol2019Journal Article, cited 0 times
LGG-1p19qDeletion
Radiogenomics
1p/19q codeletion
Machine learning
Radiomics
OBJECTIVE: To evaluate the potential value of machine learning (ML)-based MRI texture analysis for predicting the 1p/19q codeletion status of lower-grade gliomas (LGG), using various state-of-the-art ML algorithms. MATERIALS AND METHODS: For this retrospective study, 107 patients with LGG were included from a public database. Texture features were extracted from conventional T2-weighted and contrast-enhanced T1-weighted MRI images, using LIFEx software. Training and unseen validation splits were created using a stratified 10-fold cross-validation technique along with minority over-sampling. Dimension reduction was done using collinearity analysis and feature selection (ReliefF). Classifications were done using adaptive boosting, k-nearest neighbours, naive Bayes, neural network, random forest, stochastic gradient descent, and support vector machine. The Friedman test and pairwise post hoc analyses were used for comparison of classification performances based on the area under the curve (AUC). RESULTS: Overall, the predictive performance of the ML algorithms was statistically significantly different, chi2(6) = 26.7, p < 0.001. There was no statistically significant difference among the performance of the neural network, naive Bayes, support vector machine, random forest, and stochastic gradient descent, adjusted p > 0.05. The mean AUC and accuracy values of these five algorithms ranged from 0.769 to 0.869 and from 80.1% to 84%, respectively. The neural network had the highest mean rank, with mean AUC and accuracy values of 0.869 and 83.8%, respectively. CONCLUSIONS: ML-based MRI texture analysis might be a promising non-invasive technique for predicting the 1p/19q codeletion status of LGGs. Using this technique along with various ML algorithms, more than four-fifths of the LGGs can be correctly classified. KEY POINTS: * More than four-fifths of the lower-grade gliomas can be correctly classified with machine learning-based MRI texture analysis. Satisfying classification outcomes are not limited to a single algorithm. * A few-slice-based volumetric segmentation technique would be a valid approach, providing satisfactory predictive textural information and avoiding excessive segmentation duration in clinical practice. * Feature selection is sensitive to different patient data set samples, so that each sampling leads to the selection of different feature subsets, which needs to be considered in future works.
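The comparison step, ranking several classifiers by per-fold AUC with the Friedman test, can be sketched as follows; the AUC values below are fabricated placeholders purely to show the mechanics, not the study's results.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Per-fold AUCs (rows: 10 folds, columns: algorithms); values are made up.
rng = np.random.default_rng(0)
aucs = {name: np.clip(base + rng.normal(0, 0.03, 10), 0, 1)
        for name, base in [("ann", 0.87), ("nb", 0.82), ("svm", 0.84),
                           ("rf", 0.83), ("sgd", 0.80), ("knn", 0.74),
                           ("adaboost", 0.73)]}

stat, p = friedmanchisquare(*aucs.values())
print(f"chi2({len(aucs) - 1}) = {stat:.1f}, p = {p:.4f}")
# If p < 0.05, follow up with pairwise post hoc tests (e.g., Nemenyi).
```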
Reliability of Single-Slice–Based 2D CT Texture Analysis of Renal Masses: Influence of Intra- and Interobserver Manual Segmentation Variability on Radiomic Feature Reproducibility
Kocak, Burak
Durmaz, Emine Sebnem
Kaya, Ozlem Korkmaz
Ates, Ece
Kilickesmez, Ozgur
AJR Am J Roentgenol2019Journal Article, cited 0 times
Website
TCGA-KIRC
Radiomics
Segmentation
KIDNEY
OBJECTIVE. The objective of our study was to investigate the potential influence of intra- and interobserver manual segmentation variability on the reliability of single-slice-based 2D CT texture analysis of renal masses. MATERIALS AND METHODS. For this retrospective study, 30 patients with clear cell renal cell carcinoma were included from a public database. For intra- and interobserver analyses, three radiologists with varying degrees of experience segmented the tumors from unenhanced CT and corticomedullary phase contrast-enhanced CT (CECT) in different sessions. Each radiologist was blind to the image slices selected by other radiologists and him- or herself in the previous session. A total of 744 texture features were extracted from original, filtered, and transformed images. The intraclass correlation coefficient was used for reliability analysis. RESULTS. In the intraobserver analysis, the rates of features with good to excellent reliability were 84.4-92.2% for unenhanced CT and 85.5-93.1% for CECT. Considering the mean rates of unenhanced CT and CECT, having high experience resulted in better reliability rates in terms of the intraobserver analysis. In the interobserver analysis, the rates were 76.7% for unenhanced CT and 84.9% for CECT. The gray-level cooccurrence matrix and first-order feature groups yielded higher good to excellent reliability rates on both unenhanced CT and CECT. Filtered and transformed images resulted in more features with good to excellent reliability than the original images did on both unenhanced CT and CECT. CONCLUSION. Single-slice-based 2D CT texture analysis of renal masses is sensitive to intra- and interobserver manual segmentation variability. Therefore, it may lead to nonreproducible results in radiomic analysis unless a reliability analysis is considered in the workflow.
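Reliability analyses of this kind typically report two-way intraclass correlation coefficients per feature. A minimal sketch with the pingouin package (toy numbers; one feature, three lesions, three readers):

```python
import pandas as pd
import pingouin as pg

# Long-format table: one row per (lesion, reader) observation of one feature.
df = pd.DataFrame({
    "lesion": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "reader": ["R1", "R2", "R3"] * 3,
    "value":  [10.2, 10.6, 9.9, 14.1, 13.8, 14.5, 8.3, 8.9, 8.1],
})

icc = pg.intraclass_corr(data=df, targets="lesion", raters="reader",
                         ratings="value")
# ICC2 (two-way random effects, absolute agreement) is a common choice for
# inter-observer radiomics reliability; >= 0.75 "good", >= 0.90 "excellent".
print(icc.set_index("Type").loc["ICC2", ["ICC", "CI95%"]])
```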
Machine learning-based unenhanced CT texture analysis for predicting BAP1 mutation status of clear cell renal cell carcinomas
Kocak, Burak
Durmaz, Emine Sebnem
Kaya, Ozlem Korkmaz
Kilickesmez, Ozgur
Acta Radiol2019Journal Article, cited 0 times
Radiogenomics
TCGA-KIRC
Radiomic features
Machine Learning
Clear cell renal cell carcinoma (ccRCC)
BACKGROUND: BRCA1-associated protein 1 (BAP1) mutation is an unfavorable factor for overall survival in patients with clear cell renal cell carcinoma (ccRCC). Radiomics literature about BAP1 mutation lacks papers that consider the reliability of texture features in their workflow. PURPOSE: Using texture features with a high inter-observer agreement, we aimed to develop and internally validate a machine learning-based radiomic model for predicting the BAP1 mutation status of ccRCCs. MATERIALS AND METHODS: For this retrospective study, 65 ccRCCs were included from a public database. Texture features were extracted from unenhanced computed tomography (CT) images, using two-dimensional manual segmentation. Dimension reduction was done in three steps: (i) inter-observer agreement analysis; (ii) collinearity analysis; and (iii) feature selection. The machine learning classifier was random forest. The model was validated using 10-fold nested cross-validation. The reference standard was the BAP1 mutation status. RESULTS: Out of 744 features, 468 had an excellent inter-observer agreement. After the collinearity analysis, the number of features decreased to 17. Finally, the wrapper-based algorithm selected six features. Using selected features, the random forest correctly classified 84.6% of the labelled slices regarding BAP1 mutation status with an area under the receiver operating characteristic curve of 0.897. For predicting ccRCCs with BAP1 mutation, the sensitivity, specificity, and precision were 90.4%, 78.8%, and 81%, respectively. For predicting ccRCCs without BAP1 mutation, the sensitivity, specificity, and precision were 78.8%, 90.4%, and 89.1%, respectively. CONCLUSION: Machine learning-based unenhanced CT texture analysis might be a potential method for predicting the BAP1 mutation status of ccRCCs.
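The validation scheme, a random forest tuned and evaluated with nested 10-fold cross-validation, can be sketched as below. The data and hyperparameter grid are stand-ins; the study's actual feature set is not given in the abstract.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

# Stand-in data: 65 lesions, 6 selected texture features, BAP1 status as label.
X, y = make_classification(n_samples=65, n_features=6, random_state=0)

inner = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
outer = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)

# Inner loop tunes the forest; outer loop estimates generalization (nested CV).
tuned_rf = GridSearchCV(RandomForestClassifier(random_state=0),
                        param_grid={"n_estimators": [100, 300],
                                    "max_depth": [2, 4, None]},
                        cv=inner, scoring="roc_auc")
scores = cross_val_score(tuned_rf, X, y, cv=outer, scoring="roc_auc")
print("nested-CV AUC: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```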
Textural differences between renal cell carcinoma subtypes: Machine learning-based quantitative computed tomography texture analysis with independent external validation
Kocak, Burak
Yardimci, Aytul Hande
Bektas, Ceyda Turan
Turkcanoglu, Mehmet Hamza
Erdim, Cagri
Yucetas, Ugur
Koca, Sevim Baykal
Kilickesmez, Ozgur
European Journal of Radiology2018Journal Article, cited 0 times
TCGA-KICH
TCGA-KIRC
TCGA-KIRP
OBJECTIVE: To develop externally validated, reproducible, and generalizable models for distinguishing three major subtypes of renal cell carcinomas (RCCs) using machine learning-based quantitative computed tomography (CT) texture analysis (qCT-TA).
MATERIALS AND METHODS: Sixty-eight RCCs were included in this retrospective study for model development and internal validation. Another 26 RCCs were included from public databases (The Cancer Genome Atlas, TCGA) for independent external validation. Following image preparation steps (reconstruction, resampling, normalization, and discretization), 275 texture features were extracted from unenhanced and corticomedullary phase CT images. Feature selection was done first with reproducibility analysis by three radiologists and then with a wrapper-based classifier-specific algorithm. A nested cross-validation was performed for feature selection and model optimization. Base classifiers were the artificial neural network (ANN) and support vector machine (SVM). Base classifiers were also combined with three additional algorithms to improve generalizability performance. Classifications were done with the following groups: (i) non-clear cell RCC (non-cc-RCC) versus clear cell RCC (cc-RCC), and (ii) cc-RCC versus papillary cell RCC (pc-RCC) versus chromophobe cell RCC (chc-RCC). The main performance metric for comparisons was the Matthews correlation coefficient (MCC).
RESULTS: The number of reproducible features was smaller for the unenhanced images (93 out of 275) than for the corticomedullary phase images (232 out of 275). Overall performance metrics of the machine learning-based qCT-TA derived from corticomedullary phase images were better than those of unenhanced images. Using corticomedullary phase images, the ANN with adaptive boosting algorithm performed best for discrimination of non-cc-RCCs from cc-RCCs (MCC = 0.728), with an external validation accuracy, sensitivity, and specificity of 84.6%, 69.2%, and 100%, respectively. On the other hand, the performance of the machine learning-based qCT-TA is rather poor for distinguishing the three major subtypes. The SVM with bagging algorithm performed best for discrimination of pc-RCC from other RCC subtypes (MCC = 0.804), with an external validation accuracy, sensitivity, and specificity of 69.2%, 71.4%, and 100%, respectively.
CONCLUSIONS: Machine learning-based qCT-TA can distinguish non-cc-RCCs from cc-RCCs with a satisfying performance. On the other hand, the performance of the method for distinguishing three major subtypes is rather poor. Corticomedullary phase CT images provide much more valuable texture parameters than unenhanced images.
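The MCC used throughout this study combines all four confusion-matrix cells, MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)), which keeps it informative under class imbalance where raw accuracy is misleading. A two-line illustration:

```python
from sklearn.metrics import matthews_corrcoef

# 80% accurate predictions, yet MCC exposes the weak minority-class performance.
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 0, 1, 1, 0]
print("MCC:", matthews_corrcoef(y_true, y_pred))
```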
Topological radiogenomics based on persistent lifetime images for identification of epidermal growth factor receptor mutation in patients with non-small cell lung tumors
Kodama, T.
Arimura, H.
Tokuda, T.
Tanaka, K.
Yabuuchi, H.
Gowdh, N. F. M.
Liam, C. K.
Chai, C. S.
Ng, K. H.
Comput Biol Med2025Journal Article, cited 0 times
Website
NSCLC Radiogenomics
TCGA-LUAD
TCGA-LUSC
We hypothesized that persistent lifetime (PLT) images could represent tumor imaging traits, locations, and persistent contrasts of topological components (connected and hole components) corresponding to gene mutations such as epidermal growth factor receptor (EGFR) mutant signs. We aimed to develop a topological radiogenomic approach using PLT images to identify EGFR mutation-positive patients with non-small cell lung cancer (NSCLC). The PLT image was newly proposed to visualize the locations and persistent contrasts of the topological components for a sequence of binary images with consecutive thresholding of an original computed tomography (CT) image. This study employed 226 NSCLC patients (94 mutant and 132 wildtype patients) with pretreatment contrast-enhanced CT images obtained from four datasets from different countries for training and testing prediction models. Two-dimensional (2D) and three-dimensional (3D) PLT images were assumed to characterize specific imaging traits (e.g., air bronchogram sign, cavitation, and ground glass nodule) of EGFR-mutant tumors. Seven types of machine learning classification models were constructed to predict EGFR mutations with significant features selected from 2D-PLT, 3D-PLT, and conventional radiogenomic features. Among the means and standard deviations of the test areas under the receiver operating characteristic curves (AUCs) of all radiogenomic approaches in a four-fold cross-validation test, the 2D-PLT features showed the highest AUC with the lowest standard deviation of 0.927 ± 0.08. The best radiogenomic approaches with the highest AUC were the random forest model trained with the Betti number (BN) map features (AUC = 0.984) in the internal test and the adaptive boosting model trained with the BN map features (AUC = 0.717) in the external test. PLT features can be used as radiogenomic imaging biomarkers for the identification of EGFR mutation status in patients with NSCLC.
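The persistence computation underlying PLT images can be reproduced in outline with a cubical complex over the CT intensities. A toy sketch, assuming the gudhi package is available; the paper's exact filtration and PLT rendering are more involved:

```python
import numpy as np
import gudhi

# Toy 2D "CT patch": the sublevel-set filtration of the image yields birth/death
# pairs for connected components (dim 0) and holes (dim 1).
rng = np.random.default_rng(0)
img = rng.normal(size=(32, 32))

cc = gudhi.CubicalComplex(top_dimensional_cells=img)
pairs = cc.persistence()

# Persistent lifetime of each topological component = death - birth; the
# paper's PLT images map these lifetimes back to the components' locations.
lifetimes = [(dim, death - birth) for dim, (birth, death) in pairs
             if death != float("inf")]
print(lifetimes[:5])
```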
A Baseline for Predicting Glioblastoma Patient Survival Time with Classical Statistical Models and Primitive Features Ignoring Image Information
Kofler, Florian
Paetzold, Johannes C.
Ezhov, Ivan
Shit, Suprosanna
Krahulec, Daniel
Kirschke, Jan S.
Zimmer, Claus
Wiestler, Benedikt
Menze, Bjoern H.
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Magnetic Resonance Imaging (MRI)
Gliomas are the most prevalent primary malignant brain tumors in adults. Until now an accurate and reliable method to predict patient survival time based on medical imaging and meta-information has not been developed [3]. Therefore, the survival time prediction task was introduced to the Multimodal Brain Tumor Segmentation Challenge (BraTS) to facilitate research in survival time prediction. Here we present our submissions to the BraTS survival challenge based on classical statistical models to which we feed the provided metadata as features. We intentionally ignore the available image information to explore how patient survival can be predicted purely by metadata. We achieve our best accuracy on the validation set using a simple median regression model taking only patient age into account. We suggest using our model as a baseline to benchmark the added predictive value of sophisticated features for survival time prediction.
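The winning baseline is essentially a median regression of survival time on age. A minimal sketch with scikit-learn's QuantileRegressor at the 0.5 quantile (assuming scikit-learn >= 1.0; data are synthetic stand-ins):

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor

# Synthetic stand-in: patient age (years) vs survival time (days).
rng = np.random.default_rng(0)
age = rng.uniform(30, 80, size=(100, 1))
survival = 1200 - 10 * age.ravel() + rng.normal(0, 150, size=100)

# quantile=0.5 makes this a median regression, as in the baseline submission;
# alpha=0 disables the L1 penalty.
model = QuantileRegressor(quantile=0.5, alpha=0).fit(age, survival)
print("predicted median survival at age 60:", model.predict([[60.0]])[0])
```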
Creation and curation of the society of imaging informatics in Medicine Hackathon Dataset
Kohli, Marc
Morrison, James J
Wawira, Judy
Morgan, Matthew B
Hostetter, Jason
Genereaux, Brad
Hussain, Mohannad
Langer, Steve G
Journal of Digital Imaging2018Journal Article, cited 4 times
Website
SIIM hackathon dataset
FHIR
HL7
DICOM
DICOMweb
Feasibility of synthetic computed tomography generated with an adversarial network for multi-sequence magnetic resonance-based brain radiotherapy
Koike, Yuhei
Akino, Yuichi
Sumida, Iori
Shiomi, Hiroya
Mizuno, Hirokazu
Yagi, Masashi
Isohashi, Fumiaki
Seo, Yuji
Suzuki, Osamu
Ogawa, Kazuhiko
J Radiat Res2019Journal Article, cited 0 times
Deep Learning
BRAIN
Magnetic Resonance Imaging (MRI)
Computed Tomography (CT)
Modality synthesis
The aim of this work is to generate synthetic computed tomography (sCT) images from multi-sequence magnetic resonance (MR) images using an adversarial network and to assess the feasibility of sCT-based treatment planning for brain radiotherapy. Datasets for 15 patients with glioblastoma were selected and 580 pairs of CT and MR images were used. T1-weighted, T2-weighted and fluid-attenuated inversion recovery MR sequences were combined to create a three-channel image as input data. A conditional generative adversarial network (cGAN) was trained using image patches. The image quality was evaluated using voxel-wise mean absolute errors (MAEs) of the CT number. For the dosimetric evaluation, 3D conformal radiotherapy (3D-CRT) and volumetric modulated arc therapy (VMAT) plans were generated using the original CT set and recalculated using the sCT images. The isocenter dose and dose-volume parameters were compared for 3D-CRT and VMAT plans, respectively. The equivalent path length was also compared. The mean MAEs for the whole body, soft tissue and bone regions were 108.1 +/- 24.0, 38.9 +/- 10.7 and 366.2 +/- 62.0 Hounsfield units, respectively. The dosimetric evaluation revealed no significant difference in the isocenter dose for 3D-CRT plans. The differences in the dose received by 2% of the volume (D2%), D50% and D98% relative to the prescribed dose were <1.0%. The overall equivalent path length was shorter than that for real CT by 0.6 +/- 1.9 mm. A treatment planning study using generated sCT detected only small, clinically negligible differences. These findings demonstrated the feasibility of generating sCT images for MR-only radiotherapy from multi-sequence MR images using cGAN.
SAROS: A dataset for whole-body region and organ segmentation in CT imaging
Koitka, S.
Baldini, G.
Kroll, L.
van Landeghem, N.
Pollok, O. B.
Haubold, J.
Pelka, O.
Kim, M.
Kleesiek, J.
Nensa, F.
Hosch, R.
Sci Data2024Journal Article, cited 0 times
Website
SAROS
ACRIN-NSCLC-FDG-PET
CPTAC-LSCC
Soft-tissue-Sarcoma
NSCLC Radiogenomics
Lung-PET-CT-Dx
NSCLC-Radiomics
LIDC-IDRI
TCGA-LUAD
TCGA-STAD
Anti-PD-1_MELANOMA
TCGA-UCEC
CPTAC-CM
TCGA-LUSC
ACRIN-FLT-Breast
Anti-PD-1_Lung
HNSCC
QIN-HEADNECK
CPTAC-LUAD
C4KC-KiTS
Head-Neck Cetuximab
TCGA-LIHC
CPTAC-PDA
NSCLC-Radiomics-Genomics
ACRIN-HNSCC-FDG-PET-CT
Pancreas-CT
TCGA-HNSC
COVID-19-NY-SBU
Female
Humans
Male
Segmentation
Algorithm Development
Model
Image Processing, Computer-Assisted
Tomography, X-Ray Computed
Whole Body Imaging
The Sparsely Annotated Region and Organ Segmentation (SAROS) dataset was created using data from The Cancer Imaging Archive (TCIA) to provide a large open-access CT dataset with high-quality annotations of body landmarks. In-house segmentation models were employed to generate annotation proposals on randomly selected cases from TCIA. The dataset includes 13 semantic body region labels (abdominal/thoracic cavity, bones, brain, breast implant, mediastinum, muscle, parotid/submandibular/thyroid glands, pericardium, spinal cord, subcutaneous tissue) and six body part labels (left/right arm/leg, head, torso). Case selection was based on the DICOM series description, gender, and imaging protocol, resulting in 882 patients (438 female) for a total of 900 CTs. Manual review and correction of proposals were conducted in a continuous quality control cycle. Only every fifth axial slice was annotated, yielding 20150 annotated slices from 28 data collections. For the reproducibility on downstream tasks, five cross-validation folds and a test set were pre-defined. The SAROS dataset serves as an open-access resource for training and evaluating novel segmentation models, covering various scanner vendors and diseases.
A Quantum-Inspired Self-Supervised Network model for automatic segmentation of brain MR images
Konar, Debanjan
Bhattacharyya, Siddhartha
Gandhi, Tapan Kr
Panigrahi, Bijaya Ketan
Applied Soft Computing2020Journal Article, cited 1 times
Website
QIN-BRAIN-DSC-MRI
Segmentation
Magnetic Resonance Imaging (MRI)
Fuzzy C-means clustering (FCM)
The classical self-supervised neural network architectures suffer from a slow convergence problem, and the incorporation of quantum computing in classical self-supervised networks is a potential solution to it. In this article, a fully self-supervised novel quantum-inspired neural network model, referred to as Quantum-Inspired Self-Supervised Network (QIS-Net), is proposed and tailored for fully automatic segmentation of brain MR images to obviate the challenges faced by deeply supervised Convolutional Neural Network (CNN) architectures. The proposed QIS-Net architecture is composed of three layers of quantum neurons (input, intermediate, and output) expressed as qubits. The intermediate and output layers of the QIS-Net architecture are inter-linked through bi-directional propagation of quantum states, wherein the image pixel intensities (quantum bits) are self-organized between these two layers without any external supervision or training. Quantum observation allows the true output to be obtained once the superimposed quantum states interact with the external environment. The proposed self-supervised quantum-inspired network model has been tailored for and tested on Dynamic Susceptibility Contrast (DSC) brain MR images from the Nature data sets for detecting complete tumor, and reported promising accuracy and reasonable Dice similarity scores in comparison with unsupervised Fuzzy C-Means clustering, self-trained QIBDS Net, Opti-QIBDS Net, deeply supervised U-Net, and Fully Convolutional Neural Networks (FCNNs).
Classical self-supervised networks suffer from convergence problems and reduced segmentation accuracy due to forceful termination. Qubits or bilevel quantum bits often describe quantum neural network models. In this article, a novel self-supervised shallow learning network model exploiting the sophisticated three-level qutrit-inspired quantum information system, referred to as quantum fully self-supervised neural network (QFS-Net), is presented for automated segmentation of brain magnetic resonance (MR) images. The QFS-Net model comprises a trinity of layered qutrit structures interconnected through parametric Hadamard gates using an eight-connected second-order neighborhood-based topology. The nonlinear transformation of the qutrit states allows the underlying quantum neural network model to encode the quantum states, thereby enabling a faster self-organized counterpropagation of these states between the layers without supervision. The suggested QFS-Net model is tailored and extensively validated on the Cancer Imaging Archive (TCIA) dataset collected from the Nature repository. The experimental results are also compared with state-of-the-art supervised (U-Net and URes-Net architectures) and self-supervised (QIS-Net) models, as well as its classical counterpart. The results show promising segmentation outcomes in detecting tumors in terms of Dice similarity and accuracy with minimal human intervention and computational resources. The proposed QFS-Net is also investigated on natural gray-scale images from the Berkeley segmentation dataset and yields promising segmentation outcomes, thereby demonstrating the robustness of the QFS-Net model.
Negligible effect of brain MRI data preprocessing for tumor segmentation
Kondrateva, Ekaterina
Druzhinina, Polina
Dalechina, Alexandra
Zolotova, Svetlana
Golanov, Andrey
Shirokikh, Boris
Belyaev, Mikhail
Kurmukov, Anvar
Biomedical Signal Processing and Control2024Journal Article, cited 0 times
Burdenko-GBM-Progression
Magnetic resonance imaging (MRI) data is heterogeneous due to differences in device manufacturers, scanning protocols, and inter-subject variability. A conventional way to mitigate MR image heterogeneity is to apply preprocessing transformations such as anatomy alignment, voxel resampling, signal intensity equalization, image denoising, and localization of regions of interest. Although a preprocessing pipeline standardizes image appearance, its influence on the quality of image segmentation and on other downstream tasks in deep neural networks has never been rigorously studied. Experiments on three publicly available datasets evaluate the effect of different preprocessing steps in intra- and inter-dataset training scenarios. Results demonstrate that most popular standardization steps add no value to network performance; moreover, preprocessing can hamper performance. Our results suggest that image intensity normalization approaches do not contribute to model accuracy because of the reduction of signal variance with image standardization. Additionally, the contribution of skull-stripping in data preprocessing is almost negligible if measured in terms of estimated tumor volume. The only essential transformation for accurate deep learning analysis is the unification of voxel spacing across the dataset. In contrast, inter-subject anatomy alignment in the form of atlas registration is not necessary, and intensity equalization steps (denoising, bias-field correction and histogram matching) do not improve performance. The study code is accessible online at https://github.com/MedImAIR/brain-mri-processing-pipeline.
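The one transformation the study finds essential, unifying voxel spacing across the dataset, is a short resampling step. A sketch with SimpleITK (linear interpolation; target spacing and file name are illustrative):

```python
import SimpleITK as sitk

def resample_to_spacing(image, new_spacing=(1.0, 1.0, 1.0)):
    """Resample an MR volume to a fixed voxel spacing (linear interpolation)."""
    old_spacing = image.GetSpacing()
    old_size = image.GetSize()
    # Keep the physical extent: size scales inversely with spacing.
    new_size = [int(round(osz * ospc / nspc))
                for osz, ospc, nspc in zip(old_size, old_spacing, new_spacing)]
    return sitk.Resample(image, new_size, sitk.Transform(),
                         sitk.sitkLinear, image.GetOrigin(),
                         new_spacing, image.GetDirection(), 0.0,
                         image.GetPixelID())

# img = sitk.ReadImage("t1.nii.gz")   # hypothetical input volume
# img_iso = resample_to_spacing(img)
```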
Neural network-based reversible data hiding for medical image
Kong, Ping
Zhang, Yongdong
Huang, Lin
Zhou, Liang
Chen, Lifan
Qin, Chuan
Expert Systems with Applications2024Journal Article, cited 0 times
Website
CTpred-Sunitinib-panNET
MIDRC-RICORD-1A
StageII-Colorectal-CT
CT imaging-based radiomics signatures improve prognosis prediction in postoperative colorectal cancer
OBJECTIVE: To investigate the use of non-contrast-enhanced (NCE) and contrast-enhanced (CE) CT radiomics signatures (Rad-scores) as prognostic factors to help improve the prediction of the overall survival (OS) of postoperative colorectal cancer (CRC) patients. METHODS: A retrospective analysis was performed on 65 CRC patients who underwent surgical resection in our hospital as the training set, and 19 patient images retrieved from The Cancer Imaging Archive (TCIA) as the external validation set. In training, radiomics features were extracted from the preoperative NCE/CE-CT, then selected through a five-fold cross-validated LASSO Cox method and used to construct Rad-scores. Models derived from Rad-scores and clinical factors were constructed and compared. Kaplan-Meier analyses were also used to compare the survival probability between the high- and low-risk Rad-score groups. Finally, a nomogram was developed to predict the OS. RESULTS: In training, a clinical model achieved a C-index of 0.796 (95% CI: 0.722-0.870), while the combined model of clinical factors and both Rad-scores performed best, achieving a C-index of 0.821 (95% CI: 0.743-0.899). Furthermore, the models with the CE-CT Rad-score yielded slightly better performance than those with NCE-CT in training. For the combined model with CE-CT Rad-scores, C-indices of 0.818 (95% CI: 0.742-0.894) and 0.774 (95% CI: 0.556-0.992) were achieved in the training and validation sets, respectively. Kaplan-Meier analysis demonstrated a significant difference in survival probability between the high- and low-risk groups. Finally, the areas under the receiver operating characteristic (ROC) curves for the model were 0.904, 0.777, and 0.843 for 1-, 3-, and 5-year survival, respectively. CONCLUSION: NCE-CT or CE-CT radiomics and clinical combined models can predict the OS of CRC patients, and both Rad-scores are recommended to be included when available.
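A minimal sketch of LASSO Cox feature selection, the core of the Rad-score construction, using lifelines with its bundled example dataset standing in for the radiomics table; the penalizer value is arbitrary and would, as in the paper, be tuned within cross-validation.

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

# Stand-in data (lifelines' bundled Rossi recidivism set); in the paper the
# columns would be radiomics features plus follow-up time and event status.
df = load_rossi()

# l1_ratio=1.0 gives a pure L1 (LASSO) penalty: coefficients of uninformative
# features shrink to ~0, and the survivors are combined into the Rad-score.
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="week", event_col="arrest")

selected = cph.params_[cph.params_.abs() > 1e-6]
print(selected)
print("C-index:", cph.concordance_index_)
```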
Investigating the role of model-based and model-free imaging biomarkers as early predictors of neoadjuvant breast cancer therapy outcome
Kontopodis, Eleftherios
Venianaki, Maria
Manikis, George C
Nikiforaki, Katerina
Salvetti, Ovidio
Papadaki, Efrosini
Papadakis, Georgios Z
Karantanas, Apostolos H
Marias, Kostas
IEEE J Biomed Health Inform2019Journal Article, cited 0 times
QIN Breast
Breast
DCE-MRI
Imaging biomarkers (IBs) play a critical role in the clinical management of breast cancer (BRCA) patients throughout the cancer continuum for screening, diagnosis and therapy assessment, especially in the neoadjuvant setting. However, certain model-based IBs suffer from significant variability due to the complex workflows involved in their computation, whereas model-free IBs have not been properly studied regarding clinical outcome. In the present study, IBs from 35 BRCA patients who received neoadjuvant chemotherapy (NAC) were extracted from dynamic contrast enhanced MR imaging (DCE-MRI) data with two different approaches: a model-free approach based on pattern recognition (PR), and a model-based one using pharmacokinetic compartmental modeling. Our analysis found that both model-free and model-based biomarkers can predict pathological complete response (pCR) after the first cycle of NAC. Overall, 8 biomarkers predicted the treatment response after the first cycle of NAC with statistical significance (p-value < 0.05), and 3 at baseline. The best pCR predictors at first follow-up, achieving high AUC and sensitivity and specificity of more than 50%, were the hypoxic component with threshold 2 (AUC 90.4%) from the PR method, and the median value of kep (AUC 73.4%) from the model-based approach. Moreover, the 80th percentile of ve achieved the highest pCR prediction at baseline with AUC 78.5%. The results suggest that model-free DCE-MRI IBs could be a more robust alternative to complex, model-based ones such as kep and favor the hypothesis that the PR image-derived hypoxic image component captures actual tumor hypoxia information able to predict BRCA NAC outcome.
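The model-based biomarkers kep and ve come from fitting the standard Tofts pharmacokinetic model, Ct(t) = Ktrans * (Cp convolved with exp(-kep t)), with ve = Ktrans / kep. A toy curve-fit sketch with a synthetic arterial input function and tissue curve:

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 5, 60)                          # minutes
aif = 5.0 * (np.exp(-0.3 * t) - np.exp(-3.0 * t))  # toy arterial input function

def tofts(t, ktrans, kep):
    # Standard Tofts model via discrete convolution of the AIF with exp(-kep*t).
    dt = t[1] - t[0]
    return ktrans * np.convolve(aif, np.exp(-kep * t))[: len(t)] * dt

ct_observed = tofts(t, 0.25, 0.8) + np.random.default_rng(0).normal(0, 0.005, t.size)
(ktrans, kep), _ = curve_fit(tofts, t, ct_observed, p0=(0.1, 0.5))
print(f"Ktrans={ktrans:.3f} /min, kep={kep:.3f} /min, ve={ktrans / kep:.3f}")
```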
A Competition, Benchmark, Code, and Data for Using Artificial Intelligence to Detect Lesions in Digital Breast Tomosynthesis
Konz, N.
Buda, M.
Gu, H.
Saha, A.
Yang, J.
Chledowski, J.
Park, J.
Witowski, J.
Geras, K. J.
Shoshan, Y.
Gilboa-Solomon, F.
Khapun, D.
Ratner, V.
Barkan, E.
Ozery-Flato, M.
Marti, R.
Omigbodun, A.
Marasinou, C.
Nakhaei, N.
Hsu, W.
Sahu, P.
Hossain, M. B.
Lee, J.
Santos, C.
Przelaskowski, A.
Kalpathy-Cramer, J.
Bearce, B.
Cha, K.
Farahani, K.
Petrick, N.
Hadjiiski, L.
Drukker, K.
Armato, S. G., 3rd
Mazurowski, M. A.
JAMA Netw Open2023Journal Article, cited 0 times
Website
Breast-Cancer-Screening-DBT
Challenge
Humans
Computer Aided Detection (CADe)
Benchmarking
Mammography/methods
Algorithm Development
Radiographic Image Interpretation, Computer-Assisted/methods
Breast Neoplasms/diagnostic imaging
IMPORTANCE: An accurate and robust artificial intelligence (AI) algorithm for detecting cancer in digital breast tomosynthesis (DBT) could significantly improve detection accuracy and reduce health care costs worldwide. OBJECTIVES: To make training and evaluation data for the development of AI algorithms for DBT analysis available, to develop well-defined benchmarks, and to create publicly available code for existing methods. DESIGN, SETTING, AND PARTICIPANTS: This diagnostic study is based on a multi-institutional international grand challenge in which research teams developed algorithms to detect lesions in DBT. A data set of 22 032 reconstructed DBT volumes was made available to research teams. Phase 1, in which teams were provided 700 scans from the training set, 120 from the validation set, and 180 from the test set, took place from December 2020 to January 2021, and phase 2, in which teams were given the full data set, took place from May to July 2021. MAIN OUTCOMES AND MEASURES: The overall performance was evaluated by mean sensitivity for biopsied lesions using only DBT volumes with biopsied lesions; ties were broken by including all DBT volumes. RESULTS: A total of 8 teams participated in the challenge. The team with the highest mean sensitivity for biopsied lesions was the NYU B-Team, with 0.957 (95% CI, 0.924-0.984), and the second-place team, ZeDuS, had a mean sensitivity of 0.926 (95% CI, 0.881-0.964). When the results were aggregated, the mean sensitivity for all submitted algorithms was 0.879; for only those who participated in phase 2, it was 0.926. CONCLUSIONS AND RELEVANCE: In this diagnostic study, an international competition produced algorithms with high sensitivity for using AI to detect lesions on DBT images. A standardized performance benchmark for the detection task using publicly available clinical imaging data was released, with detailed descriptions and analyses of submitted algorithms accompanied by a public release of their predictions and code for selected methods. These resources will serve as a foundation for future research on computer-assisted diagnosis methods for DBT, significantly lowering the barrier of entry for new researchers.
The Intrinsic Manifolds of Radiological Images and Their Role in Deep Learning
Konz, Nicholas
Gu, Hanxue
Dong, Haoyu
Mazurowski, Maciej A.
2022Book Section, cited 0 times
Prostate-MRI-US-Biopsy
The manifold hypothesis is a core mechanism behind the success of deep learning, so understanding the intrinsic manifold structure of image data is central to studying how neural networks learn from the data. Intrinsic dataset manifolds and their relationship to learning difficulty have recently begun to be studied for the common domain of natural images, but little such research has been attempted for radiological images. We address this here. First, we compare the intrinsic manifold dimensionality of radiological and natural images. We also investigate the relationship between intrinsic dimensionality and generalization ability over a wide range of datasets. Our analysis shows that natural image datasets generally have a higher number of intrinsic dimensions than radiological images. However, the relationship between generalization ability and intrinsic dimensionality is much stronger for medical images, which could be explained as radiological images having intrinsic features that are more difficult to learn. These results give a more principled underpinning for the intuition that radiological images can be more challenging to apply deep learning to than natural image datasets common to machine learning research. We believe that, rather than directly applying models developed for natural images to the radiological imaging domain, more care should be taken in developing architectures and algorithms that are more tailored to the specific characteristics of this domain. The research shown in our paper, demonstrating these characteristics and the differences from natural images, is an important first step in this direction.
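Intrinsic dimensionality of an image dataset is commonly estimated from nearest-neighbour distance ratios. Below is a compact TwoNN-style estimator (Facco et al., 2017) as one plausible choice; the paper may use a different estimator.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def twonn_dimension(X):
    """TwoNN intrinsic-dimension estimate (Facco et al., 2017).

    Uses the ratio of each point's second to first nearest-neighbour
    distance; the MLE of the dimension is n / sum(log(r2 / r1)).
    """
    dists, _ = NearestNeighbors(n_neighbors=3).fit(X).kneighbors(X)
    r1, r2 = dists[:, 1], dists[:, 2]          # index 0 is distance to self
    mu = r2 / np.maximum(r1, 1e-12)
    return len(X) / np.sum(np.log(np.maximum(mu, 1 + 1e-12)))

# 3-D manifold linearly embedded in 50-D ambient space: estimate should be ~3.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3)) @ rng.normal(size=(3, 50))
print("estimated intrinsic dimension:", twonn_dimension(X))
```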
Deep Machine Learning Histopathological Image Analysis for Renal Cancer Detection
Koo, Jia Chun
Hum, Yan Chai
Lai, Khin Wee
Yap, Wun-She
Manickam, Swaminathan
Tee, Yee Kai
2022Conference Paper, cited 0 times
CPTAC-CCRCC
Histopathology
Deep Learning
Classification
Python
Transfer learning
Renal cancer is one of the top causes of cancer-related deaths among men globally. Early detection of renal cancer is crucial because it can significantly improve the probability of survival. However, assessing histopathological renal tissues is a labor-intensive job, and traditionally this is done manually by a pathologist, leading to a high possibility of misdetection and/or misdiagnosis, especially in the early stages, and it is prone to inter-pathologist variations. The development of an automatic histopathological diagnosis of renal cancer can greatly reduce this bias and provide accurate characterization of disease, even though the nature of pathology and microscopy is highly complex. This paper investigated the use of deep learning methods to develop a binary histopathological image classification model (cancer or normal). 783 whole slide images of renal tissue were processed into patches using the PyHIST tool at 5x magnification power before feeding them to the deep learning models. Five pre-trained deep learning architectures, namely VGG, ResNet, DenseNet, MobileNet, and EfficientNet, were trained with transfer learning on the CPTAC-CCRCC dataset, and their performances were evaluated. EfficientNetB0 achieved state-of-the-art accuracy (97%), specificity (94%), F1-score (98%) and AUC (96%), but slightly inferior recall (98%), when compared to the best published results in the literature. These findings showed that the proposed deep learning approach can effectively classify histopathological images of renal tissue into tumor and non-tumor classes, making pathology diagnosis more efficient and less labor-intensive.
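A minimal transfer-learning sketch in the spirit of these experiments: load an ImageNet-pretrained EfficientNet-B0 from torchvision, swap the head for the binary tumor/normal patch task, and freeze the backbone. Tensor shapes are placeholders for 224x224 RGB patches; the study's actual training schedule is not given in the abstract.

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained backbone with a fresh 2-class head (transfer learning).
model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)

# Freeze the feature extractor; train only the new head at first.
for p in model.features.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)          # stand-in for a batch of patches
loss = criterion(model(x), torch.tensor([0, 1, 0, 1]))
loss.backward()
optimizer.step()
print(float(loss))
```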
Non-annotated renal histopathological image analysis with deep ensemble learning
Koo, Jia Chun
Ke, Qi
Hum, Yan Chai
Goh, Choon Hian
Lai, Khin Wee
Yap, Wun-She
Tee, Yee Kai
Quantitative Imaging in Medicine and Surgery2023Journal Article, cited 0 times
CPTAC-CCRCC
Background: Renal cancer is one of the leading causes of cancer-related deaths worldwide, and early detection of renal cancer can significantly improve the patients' survival rate. However, the manual analysis of renal tissue in current clinical practice is labor-intensive, prone to inter-pathologist variations, and can easily miss important cancer markers, especially in the early stage.
Methods: In this work, we developed deep convolutional neural network (CNN)-based heterogeneous ensemble models for the automated analysis of renal histopathological images without detailed annotations. The proposed method first segments the histopathological tissue into patches at different magnification factors, then classifies the generated patches into normal and tumor tissue using pre-trained CNNs, and finally performs deep ensemble learning to determine the final classification. The heterogeneous ensemble models consisted of CNN models from five deep learning architectures, namely VGG, ResNet, DenseNet, MobileNet, and EfficientNet. These CNN models were fine-tuned and used as base learners; they exhibited different performances and had great diversity in histopathological image analysis. The CNN models with superior classification accuracy (Acc) were then selected to undergo ensemble learning for the final classification. The performance of the investigated ensemble approaches was evaluated against the state-of-the-art literature.
Results: The performance evaluation demonstrated the superiority of the best-performing ensemble model, the five-CNN weighted averaging model, with an accuracy (Acc) of 99%, specificity (Sp) of 98%, F1-score (F1) of 99%, and area under the receiver operating characteristic (ROC) curve of 98%, but slightly inferior recall (Re) of 99% compared to the literature.
Conclusions: The outstanding robustness of the developed ensemble model, with superiorly high scores on the evaluated metrics, suggests its reliability as a diagnostic system for assisting pathologists in analyzing renal histopathological tissues. It is expected that the proposed ensemble of deep CNN models can greatly improve the early detection of renal cancer by making the diagnosis process more efficient with fewer misdetections and misdiagnoses, subsequently leading to a higher patient survival rate.
A Study on the Geometrical Limits and Modern Approaches to External Beam Radiotherapy
Radiation therapy is integral to treating cancer and improving survival probability. Improving treatment methods and modalities can lead to significant impacts on the life quality of cancer patients. One such method is stereotactic radiotherapy. Stereotactic radiotherapy is a form of External Beam Radiotherapy (EBRT). It delivers a highly conformal dose of radiation to a target from beams arranged at many different angles. The goal of any radiotherapy treatment is to deliver radiation only to the cancerous cells while maximally sparing other tissues. However, such a perfect treatment outcome is difficult to achieve due to the physical limitations of EBRT. The quality of treatment is dependent on the characteristics of these beams and the number of angles from which radiation is delivered. However, as technology and techniques have improved, the dependence on the quality of beams and beam coverage may have become less critical. This thesis investigates different geometric aspects of stereotactic radiotherapy and their impacts on treatment quality. The specific aims are: (1) To explore the treatment outcome of a virtual stereotactic delivery where no geometric limit exists in the sense of physical collisions; this allows the full solid angle treatment space to be investigated and explores whether a large solid angle space is necessary to improve treatment. (2) To evaluate the effect of a reduced solid angle with a specific radiotherapy device using real clinical cases. (3) To investigate how the quality of a single beam influences treatment outcome when multiple overlapping beams are in use. (4) To study the feasibility of using a novel treatment method of lattice radiotherapy with an existing stereotactic device for treating breast cancer. All these aims were investigated with the use of inverse planning optimization and Monte Carlo-based particle transport simulations.
Validation of a convolutional neural network for the automated creation of curved planar reconstruction images along the main pancreatic duct
Koretsune, Y.
Sone, M.
Sugawara, S.
Wakatsuki, Y.
Ishihara, T.
Hattori, C.
Fujisawa, Y.
Kusumoto, M.
Jpn J Radiol2022Journal Article, cited 0 times
Website
Pancreas-CT
Curved planar reconstruction
3d convolutional neural network (CNN)
Image Processing
Segmentation
Algorithm Development
Contrast enhancement
Computed Tomography (CT)
Deep learning
Imaging algorithm
Main pancreatic duct
Pancreatic cancer
PANCREAS
PURPOSE: To evaluate the accuracy and time-efficiency of newly developed software, based on a 3-dimensional convolutional neural network, in automatically creating curved planar reconstruction (CPR) images along the main pancreatic duct (MPD), and to compare them with conventional manually generated CPR images. MATERIALS AND METHODS: A total of 100 consecutive patients with MPD dilatation (>/= 3 mm) who underwent contrast-enhanced computed tomography between February 2021 and July 2021 were included in the study. Two radiologists independently performed blinded qualitative analysis of automated and manually created CPR images. They rated overall image quality on a four-point scale, and weighted kappa analysis was employed to compare the manually created and automated CPR images. A quantitative analysis of the time required to create CPR images and the total length of the MPD measured from CPR images was performed. RESULTS: The kappa value was 0.796, and a good correlation was found between the manually created and automated CPR images. The average time to create automated and manually created CPR images was 61.7 s and 174.6 s, respectively (P < 0.001). The total MPD length of the automated and manually created CPR images was 110.5 and 115.6 mm, respectively (P = 0.059). CONCLUSION: The automated CPR software significantly reduced reconstruction time without compromising image quality.
Failure to Achieve Domain Invariance With Domain Generalization Algorithms: An Analysis in Medical Imaging
Korevaar, Steven
Tennakoon, Ruwan
Bab-Hadiashar, Alireza
IEEE Access2023Journal Article, cited 0 times
MIDRC-RICORD-1A
MIDRC-RICORD-1B
One prominent issue in the application of deep learning is the failure to generalize to data that lies on a different distribution to the training data. While many methods have been proposed to address this, prior work has shown that when operating under the same conditions most algorithms perform almost equally. As such, more work needs to be done to validate past and future methods before they are put into important scenarios like medical imaging. Our work analyses eight domain generalization algorithms across four important medical imaging classification datasets along with three standard natural image classification problems to discover the differences in how these methods operate in these different contexts. We assess these algorithms in terms of generalization capability, domain invariance, and representational sensitivity. Through this, we show that despite the differences in domain and content variations between natural and medical imaging, there is little deviation in the operation of each method between natural images and medical images. Additionally, we show that all tested algorithms retain significant amounts of domain-specific information in their feature representations despite explicit training to remove it. This reveals that the failure point of all these methods is a lack of class-discriminative features extracted from out-of-distribution data. While these results show that methods that work well on natural imaging work similarly in medical imaging, no method outperforms baseline methods, highlighting the continuing gap in achieving adequate domain generalization. Similarly, the results also question the efficacy of optimizing for domain invariant representations as a method for generalizing to unseen domains.
Automated Segmentation of Hyperintense Regions in FLAIR MRI Using Deep Learning
Korfiatis, Panagiotis
Kline, Timothy L
Erickson, Bradley J
Tomography2016Journal Article, cited 16 times
Website
BraTS
Magnetic Resonance Imaging (MRI)
FLAIR
Convolutional Neural Network (CNN)
Segmentation
We present a deep convolutional neural network application based on autoencoders aimed at segmentation of increased signal regions in fluid-attenuated inversion recovery magnetic resonance imaging images. The convolutional autoencoders were trained on the publicly available Brain Tumor Image Segmentation Benchmark (BRATS) data set, and the accuracy was evaluated on a data set where 3 expert segmentations were available. The simultaneous truth and performance level estimation (STAPLE) algorithm was used to provide the ground truth for comparison, and Dice coefficient, Jaccard coefficient, true positive fraction, and false negative fraction were calculated. The proposed technique was within the interobserver variability with respect to Dice, Jaccard, and true positive fraction. The developed method can be used to produce automatic segmentations of tumor regions corresponding to signal-increased fluid-attenuated inversion recovery regions.
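The evaluation metrics used here are straightforward to compute from binary masks; a small self-contained helper:

```python
import numpy as np

def overlap_metrics(pred, truth):
    """Dice, Jaccard, true positive fraction and false negative fraction
    for two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    dice = 2 * tp / (pred.sum() + truth.sum() + 1e-12)
    jaccard = tp / (np.logical_or(pred, truth).sum() + 1e-12)
    tpf = tp / (truth.sum() + 1e-12)
    fnf = fn / (truth.sum() + 1e-12)
    return dice, jaccard, tpf, fnf

a = np.zeros((8, 8), bool); a[2:6, 2:6] = True   # prediction
b = np.zeros((8, 8), bool); b[3:7, 3:7] = True   # ground truth
print(overlap_metrics(a, b))
```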
Is sarcopenia a predictor of overall survival in primary IDH-wildtype GBM patients with and without MGMT promoter hypermethylation?
Korkmaz, Serhat
Demirel, Emin
Neurology Asia2023Journal Article, cited 0 times
UCSF-PDGM
UPENN-GBM
Sarcopenia
Glioblastoma
MGMT methylation status
Magnetic Resonance Imaging (MRI)
Survival
Radiomic features
Radiogenomics
Background: In this study, we aimed to examine the ability of temporal muscle thickness (TMT) and masseter muscle thickness (MMT) to predict overall survival (OS) in primary IDH-wildtype glioblastoma (GBM) patients with and without MGMT promoter hypermethylation, using publicly available datasets. Methods: We included 345 primary IDH-wildtype GBM patients with known MGMT promoter hypermethylation status who underwent gross-total resection and standard treatment, whose data were obtained from open datasets. TMT was evaluated on axial thin-section post-contrast T1-weighted images, and MMT was evaluated on axial T2-weighted images. The median TMT and MMT were used to determine the cut-off points. Results: The findings showed that a median TMT of 9.5 mm and a median MMT of 12.7 mm determined the cut-off values for predicting survival. Both TMT and MMT values below the median muscle thickness were negatively associated with OS (TMT<9.5: HR 3.63, CI 2.34-4.23, p<0.001; MMT<12.7: HR 3.53, CI 2.27-4.07, p<0.001). When patients were classified according to MGMT status, the associations held for both MGMT-negative patients (TMT<9.5: HR 2.54, CI 1.89-3.56, p<0.001; MMT<12.7: HR 2.65, CI 2.07-3.62, p<0.001) and MGMT-positive patients (TMT<9.5: HR 3.84, CI 2.48-4.28, p<0.001; MMT<12.7: HR 3.73, CI 2.98-4.71, p<0.001). Conclusion: Both TMT and MMT successfully predict survival in primary GBM patients. In addition, they can successfully predict survival in patients with and without MGMT promoter hypermethylation.
Examining the Validity of Input Lung CT Images Submitted to the AI-Based Computerized Diagnosis
Kosareva, Aleksandra A.
Paulenka, Dzmitry A.
Snezhko, Eduard V.
Bratchenko, Ivan A.
Kovalev, Vassili A.
Journal of Biomedical Photonics & Engineering2022Journal Article, cited 0 times
Website
LCTSC
LIDC-IDRI
Pancreas-CT
Head-Neck-PET-CT
ACRIN 6668
ACRIN-NSCLC-FDG-PET
Anti-PD-1_Lung
B-mode-and-CEUS-Liver
Prostate-MRI-US-Biopsy
Breast-MRI-NACT-Pilot
CPTAC-PDA
VICTRE
Classification
Convolutional Neural Network (CNN)
Deep Learning
Computer Aided Diagnosis (CADx)
Computed Tomography (CT)
A well-designed CAD tool should respond to input requests and user actions, and perform input checks. Thus, an important element of such a tool is the pre-processing of incoming data and the screening out of data that cannot be processed by the application. In this paper, we consider non-trivial methods for verifying input chest computed tomography (CT) images: modality checks and human-chest checks. We review sources for developing training datasets, describe architectures of convolutional neural networks (CNN), clarify the pre-processing and augmentation of chest CT scans, and show training results. The developed application showed good results: 100% classification accuracy on the test dataset for the modality check and 89% classification accuracy on the test dataset for checking for lung presence. Analysis of wrong predictions showed that the model performs poorly on lung biopsy scans. In general, the developed input data validation model shows good results on the designed datasets for the CT image modality check and for checking for lung presence.
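Part of such input validation can happen before any CNN runs, at the DICOM header level. A hedged sketch with pydicom (the accepted body-part values and file name are illustrative; headers are often missing or wrong, which is exactly why image-based CNN checks are added on top):

```python
import pydicom

def basic_input_checks(path):
    """Cheap header-level screening before the CNN-based checks: reject files
    that are not CT at all. The CNN stage then verifies that the scan
    actually shows a human chest."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    if getattr(ds, "Modality", None) != "CT":
        return False, f"expected CT, got {getattr(ds, 'Modality', 'unknown')}"
    # Empty string means the tag is absent, which we let pass to the CNN stage.
    if getattr(ds, "BodyPartExamined", "").upper() not in ("", "CHEST", "THORAX"):
        return False, f"unexpected body part: {ds.BodyPartExamined}"
    return True, "ok"

# ok, msg = basic_input_checks("slice0001.dcm")  # hypothetical file
```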
Visual attention condenser model for multiple disease detection from heterogeneous medical image modalities
Kotei, Evans
Thirunavukarasu, Ramkumar
Multimedia Tools and Applications2023Journal Article, cited 0 times
CBIS-DDSM
BREAST
Computer Aided Detection (CADe)
Algorithm Development
The World Health Organization (WHO) has identified breast cancer and tuberculosis (TB) as major global health issues. While breast cancer is a top killer of women, TB is an infectious disease caused by a single bacterium with a high mortality rate. Since both TB and breast cancer are curable, early screening ensures treatment. Medical imaging modalities, such as chest X-ray radiography and ultrasound, are widely used for diagnosing TB and breast cancer. Artificial intelligence (AI) techniques are applied to supplement the screening process for effective and early treatment due to the global shortage of radiologists and oncologists. These techniques fast-track the screening process, leading to early detection and treatment. Deep learning (DL) is the most used technique, producing outstanding results. Despite the success of these DL models in the automatic detection of TB and breast cancer, the suggested models are task-specific, meaning they are disease-oriented. Again, the complexity and weight of the DL applications make it difficult to deploy the models on edge devices. Motivated by this, a Multi Disease Visual Attention Condenser Network (MD-VACNet) is proposed for multiple disease identification from different medical image modalities. The network architecture was designed automatically through a machine-driven design exploration with generative synthesis. The proposed MD-VACNet is a lightweight stand-alone visual recognition deep neural network based on VAC with a self-attention mechanism, designed to run on edge devices. In the experiments, TB was identified from chest X-ray images and breast cancer from ultrasound images. The suggested model achieved a 98.99% accuracy score, a 99.85% sensitivity score, and a 98.20% specificity score on the X-ray radiographs for TB diagnosis. The model also produced cutting-edge performance on breast cancer classification into benign and malignant, with accuracy, sensitivity and specificity scores of 98.47%, 98.42%, and 98.31%, respectively. Regarding architectural complexity, MD-VACNet is simple and lightweight enough for edge device implementation.
The impact of inter-observer variation in delineation on robustness of radiomics features in non-small cell lung cancer
Artificial intelligence and radiomics have the potential to revolutionise cancer prognostication and personalised treatment. Manual outlining of the tumour volume for extraction of radiomics features (RF) is a subjective process. This study investigates robustness of RF to inter-observer variation (IOV) in contouring in lung cancer. We utilised two public imaging datasets: 'NSCLC-Radiomics' and 'NSCLC-Radiomics-Interobserver1' ('Interobserver'). For 'NSCLC-Radiomics', we created an additional set of manual contours for 92 patients, and for 'Interobserver', there were five manual and five semi-automated contours available for 20 patients. Dice coefficients (DC) were calculated for contours. 1113 RF were extracted including shape, first order and texture features. Intraclass correlation coefficient (ICC) was computed to assess robustness of RF to IOV. Cox regression analysis for overall survival (OS) was performed with a previously published radiomics signature. The median DC ranged from 0.81 ('NSCLC-Radiomics') to 0.85 ('Interobserver'-semi-automated). The median ICC for the 'NSCLC-Radiomics', 'Interobserver' (manual) and 'Interobserver' (semi-automated) were 0.90, 0.88 and 0.93 respectively. The ICC varied by feature type and was lower for first order and gray level co-occurrence matrix (GLCM) features. Shape features had a lower median ICC in the 'NSCLC-Radiomics' dataset compared to the 'Interobserver' dataset. Survival analysis showed similar separation of curves for three of four RF apart from 'original_shape_Compactness2', a feature with low ICC (0.61). The majority of RF are robust to IOV, with first order, GLCM and shape features being the least robust. Semi-automated contouring improves feature stability. Decreased robustness of a feature is significant as it may impact upon the features' prognostic capability.
Federated Evaluation of nnU-Nets Enhanced with Domain Knowledge for Brain Tumor Segmentation
Kotowski, Krzysztof
Adamski, Szymon
Machura, Bartosz
Malara, Wojciech
Zarudzki, Lukasz
Nalepa, Jakub
2023Book Section, cited 0 times
RSNA-ASNR-MICCAI BraTS 2021
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Accurate and reproducible segmentation of brain tumors from multi-modal magnetic resonance (MR) scans is a pivotal step in practice. In this BraTS Continuous Evaluation initiative, we exploit a 3D nnU-Net for this task which was ranked 6th (out of 1600 participants) in the BraTS’21 Challenge. We benefit from an ensemble of deep models enhanced with the expert knowledge of a senior radiologist captured in the form of several post-processing routines. The experimental study showed that infusing the domain knowledge into the algorithms can enhance their performance, and we obtained average Dice scores of 0.81977 (enhancing tumor), 0.87837 (tumor core), and 0.92723 (whole tumor) over the validation set. For the test data, we obtained average Dice scores of 0.86317, 0.87987, and 0.92838 for the enhancing tumor, tumor core, and whole tumor. To validate the generalization capabilities of the nnU-Nets enhanced with domain knowledge, we performed their federated evaluation within the Federated Tumor Segmentation (FeTS) 2022 Challenge over datasets captured across 30 institutions. Our technique was ranked 2nd across all participating teams, proving its generalization capabilities over unseen out-of-sample datasets.
Coupling nnU-Nets with Expert Knowledge for Accurate Brain Tumor Segmentation from MRI
Kotowski, Krzysztof
Adamski, Szymon
Machura, Bartosz
Zarudzki, Lukasz
Nalepa, Jakub
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Accurate and reproducible segmentation of brain tumors from multi-modal magnetic resonance (MR) scans is a pivotal step in clinical practice, as MR imaging is the modality of choice in brain tumor diagnosis and assessment, and incorrectly delineated tumor areas may adversely affect the process of designing the treatment pathway. In this paper, we exploit an end-to-end 3D nnU-Net architecture for this task, and utilize an ensemble of five models using our custom stratification based on the distribution of the necrosis, enhancing tumor, and edema. To improve the segmentation, we benefit from the experience of a senior radiologist captured in the form of several post-processing routines. The experiments on the BraTS’21 training and validation sets show that exploiting such expert knowledge can significantly improve the underlying models, delivering average Dice scores of 0.81977 (enhancing tumor), 0.87837 (tumor core), and 0.92723 (whole tumor). Finally, our algorithm allowed us to take the 6th place (out of 1600 participants) in the BraTS’21 Challenge, with average Dice scores over the test data of 0.86317, 0.87987, and 0.92838 for the enhancing tumor, tumor core, and whole tumor, respectively.
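[Editor's note] The abstract does not spell out the expert-derived post-processing routines; the following is a hypothetical example of the kind of rule such routines often encode (dropping implausibly small connected components of a predicted tumor label). The label value and voxel threshold are illustrative, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def remove_small_components(seg: np.ndarray, label: int, min_voxels: int = 50) -> np.ndarray:
    """Relabel as background any connected component of `label` smaller
    than `min_voxels` voxels. Threshold is illustrative only."""
    out = seg.copy()
    components, n = ndimage.label(seg == label)
    for i in range(1, n + 1):
        region = components == i
        if region.sum() < min_voxels:
            out[region] = 0  # too small to be a plausible tumor subregion
    return out
```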
Infusing Domain Knowledge into nnU-Nets for Segmenting Brain Tumors in MRI
Kotowski, Krzysztof
Adamski, Szymon
Machura, Bartosz
Zarudzki, Lukasz
Nalepa, Jakub
2023Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Accurate and reproducible segmentation of brain tumors from multi-modal magnetic resonance (MR) scans is a pivotal step in clinical practice. In this BraTS Continuous Evaluation initiative, we exploit a 3D nnU-Net for this task which was ranked 6th (out of 1600 participants) in the BraTS’21 Challenge. We benefit from an ensemble of deep models enhanced with the expert knowledge of a senior radiologist captured in the form of several post-processing routines. The experimental study showed that infusing the domain knowledge into the deep models can enhance their performance, and we obtained average Dice scores of 0.81977 (enhancing tumor), 0.87837 (tumor core), and 0.92723 (whole tumor) over the validation set. For the test data, we obtained average Dice scores of 0.86317, 0.87987, and 0.92838 for the enhancing tumor, tumor core, and whole tumor. Our approach was also validated over the hold-out testing data which encompassed the BraTS 2021 Challenge test set, as well as new data from out-of-sample sources, including an independent pediatric population of diffuse intrinsic pontine glioma patients, together with an independent multi-institutional dataset covering an under-represented Sub-Saharan African adult patient population with diffuse brain glioma. Our technique was ranked 2nd and 3rd over the pediatric and Sub-Saharan African populations, respectively, proving its high generalization capabilities.
Segmenting Brain Tumors from MRI Using Cascaded 3D U-Nets
Kotowski, Krzysztof
Adamski, Szymon
Malara, Wojciech
Machura, Bartosz
Zarudzki, Lukasz
Nalepa, Jakub
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Computer Aided Detection (CADe)
In this paper, we exploit a cascaded 3D U-Net architecture to perform detection and segmentation of brain tumors (low- and high-grade gliomas) from multi-modal magnetic resonance scans. First, we detect tumors in a binary-classification setting, and they later undergo multi-class segmentation. To provide high-quality generalization, we investigate several regularization techniques that help improve the segmentation performance obtained for unseen scans, and benefit from the expert knowledge of a senior radiologist captured in the form of several post-processing routines. Our preliminary experiments, performed over the BraTS’20 validation set, revealed that our approach delivers high-quality tumor delineation.
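[Editor's note] A minimal sketch of the detect-then-segment cascade described above, with hypothetical `detect_fn` and `segment_fn` callables standing in for the two trained 3D U-Nets; the ROI margin is an assumption.

```python
import numpy as np
from scipy import ndimage

def cascaded_segmentation(volume, detect_fn, segment_fn, margin=8):
    """Two-stage cascade: binary tumor detection gates multi-class segmentation.

    detect_fn returns a per-voxel tumor probability map; segment_fn returns a
    multi-class label map for a cropped sub-volume. Both are placeholders.
    """
    binary = detect_fn(volume) > 0.5
    if not binary.any():
        return np.zeros_like(volume, dtype=np.int64)  # no tumor detected
    # Crop a padded bounding box around the detection to focus stage two.
    box = ndimage.find_objects(binary.astype(int))[0]
    box = tuple(slice(max(s.start - margin, 0), min(s.stop + margin, d))
                for s, d in zip(box, volume.shape))
    multi = np.zeros_like(volume, dtype=np.int64)
    multi[box] = segment_fn(volume[box])
    return multi
```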
Detecting liver cirrhosis in computed tomography scans using clinically-inspired and radiomic features
Kotowski, K.
Kucharski, D.
Machura, B.
Adamski, S.
Gutierrez Becker, B.
Krason, A.
Zarudzki, L.
Tessier, J.
Nalepa, J.
Comput Biol Med2023Journal Article, cited 1 times
Website
HCC-TACE-Seg
Humans
Reproducibility of Results
*Tomography
X-Ray Computed/methods
*Liver Cirrhosis/diagnostic imaging
Abdomen
Retrospective Studies
Computed Tomography (CT)
Radiomic features
Liver cirrhosis
Machine learning
Hepatic cirrhosis is an increasing cause of mortality in developed countries: it is the pathological sequela of chronic liver diseases, and the final liver fibrosis stage. Since cirrhosis evolves from an asymptomatic phase, it is of paramount importance to detect it as quickly as possible, because entering the symptomatic phase commonly leads to hospitalization and can be fatal. Assessing the state of the liver from abdominal computed tomography (CT) scans is tedious, user-dependent, and lacks reproducibility. We tackle these issues and propose an end-to-end and reproducible approach for detecting cirrhosis from CT. It benefits from the introduced clinically-inspired features that reflect the patient's characteristics which are often investigated by experienced radiologists during the screening process. Such features are coupled with the radiomic ones extracted from the liver, and from the suggested region of interest which captures the liver's boundary. Rigorous experiments, performed over two heterogeneous clinical datasets (cohorts of 241 and 32 patients), revealed that extracting radiomic features from the liver's rectified contour is pivotal to enhance the classification abilities of the supervised learners. Also, capturing clinically-inspired image features significantly improved the performance of such models, and the proposed features were consistently selected as the important ones. Finally, we showed that selecting the most discriminative features leads to Pareto-optimal models with enhanced feature-level interpretability, as the number of features was dramatically reduced (280x) from thousands to tens.
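[Editor's note] A sketch of the general pipeline the abstract describes: concatenated radiomic and clinically-inspired features are reduced to a few tens of discriminative ones before classification. The data here is synthetic, and the specific selector and classifier are assumptions, not the paper's method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical design matrix: radiomic features (liver + rectified contour)
# concatenated with clinically-inspired features; toy labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(241, 1000))     # 241 patients x ~thousands of features
y = rng.integers(0, 2, size=241)     # cirrhosis vs. no cirrhosis

# Reduce thousands of features to tens before fitting the classifier,
# mirroring the paper's Pareto-style feature reduction.
model = make_pipeline(SelectKBest(mutual_info_classif, k=20),
                      RandomForestClassifier(n_estimators=200, random_state=0))
print(cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```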
Robustifying Automatic Assessment of Brain Tumor Progression from MRI
Kotowski, Krzysztof
Machura, Bartosz
Nalepa, Jakub
2023Book Section, cited 0 times
Brain-Tumor-Progression
Accurate assessment of brain tumor progression from magnetic resonance imaging is a critical issue in clinical practice which allows us to precisely monitor the patient’s response to a given treatment. Manual analysis of such imagery is, however, prone to human errors and lacks reproducibility. Therefore, designing automated end-to-end quantitative tumor response assessment is of pivotal clinical importance nowadays. In this work, we further investigate this issue and verify the robustness of bidimensional and volumetric tumor measurements calculated over the delineations obtained using the state-of-the-art tumor segmentation deep learning model which was ranked 6th in the BraTS21 Challenge. Our experimental study, performed over the Brain Tumor Progression dataset, showed that volumetric measurements are more robust against varying-quality tumor segmentation, and that improving brain extraction can notably impact the calculation of the tumor’s characteristics.
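[Editor's note] To make the two measurement types concrete, here is a minimal sketch computing a tumor volume and a rough bidimensional product from a binary mask. The bounding-box approximation of the perpendicular diameters is a simplification for illustration, not the paper's procedure.

```python
import numpy as np

def tumor_measurements(mask: np.ndarray, spacing=(1.0, 1.0, 1.0)):
    """Volumetric and (simplified) bidimensional tumor measurements.

    mask: 3D binary segmentation (z, y, x); spacing in mm per axis.
    """
    volume_mm3 = mask.sum() * float(np.prod(spacing))
    areas = mask.reshape(mask.shape[0], -1).sum(axis=1)
    z = int(areas.argmax())                 # axial slice with the largest area
    ys, xs = np.nonzero(mask[z])
    if ys.size == 0:
        return volume_mm3, 0.0
    d1 = (ys.max() - ys.min() + 1) * spacing[1]   # in-plane extents as a crude
    d2 = (xs.max() - xs.min() + 1) * spacing[2]   # stand-in for RANO diameters
    return volume_mm3, d1 * d2
```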
Detection and Segmentation of Brain Tumors from MRI Using U-Nets
Kotowski, Krzysztof
Nalepa, Jakub
Dudzik, Wojciech
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Graphics Processing Units (GPU)
In this paper, we exploit a cascaded U-Net architecture to perform detection and segmentation of brain tumors (low- and high-grade gliomas) from magnetic resonance scans. First, we detect tumors in a binary-classification setting, and they later undergo multi-class segmentation. The total processing time of a single input volume amounts to around 15 s using a single GPU. The preliminary experiments over the BraTS’19 validation set revealed that our approach delivers high-quality tumor delineation and offers instant segmentation.
Boundary-aware semantic clustering network for segmentation of prostate zones from T2-weighted MRI
Kou, Weixuan
Marshall, Harry
Chiu, Bernard
Phys Med Biol2024Journal Article, cited 0 times
Website
ISBI-MR-Prostate-2013
boundary-aware contrastive (BAC) loss
prostate zonal segmentation
self-attention
semantic clustering attention (SCA)
Automatic segmentation of prostatic zones from MRI can improve clinical diagnosis of prostate cancer as lesions in the peripheral zone (PZ) and central gland (CG) exhibit different characteristics. Existing approaches are limited in their accuracy in localizing the edges of PZ and CG. The proposed boundary-aware semantic clustering network (BASC-Net) improves segmentation performance by learning features in the vicinity of the prostate zonal boundaries, instead of only focusing on manually segmented boundaries.

Approach: BASC-Net consists of two major components: the semantic clustering attention (SCA) module and the boundary-aware contrastive (BAC) loss. The SCA module implements a self-attention mechanism that extracts feature bases representing essential features of the inner body and boundary subregions and constructs attention maps highlighting each subregion. SCA is the first self-attention algorithm that utilizes ground truth masks to supervise the feature basis construction process. The features extracted from the inner body and boundary subregions of the same zone were integrated by BAC loss, which promotes the similarity of features extracted in the two subregions of the same zone. The BAC loss further promotes the difference between features extracted from different zones.

Main results: BASC-Net was evaluated on the NCI-ISBI 2013 Challenge and Prostate158 datasets. An inter-dataset evaluation was conducted to evaluate the generalizability of the proposed method. BASC-Net outperformed nine state-of-the-art methods in all three experimental settings, attaining Dice similarity coefficients (DSCs) of 79.9% and 88.6% for PZ and CG, respectively, in the NCI-ISBI dataset, 80.5% and 89.2% for PZ and CG, respectively, in Prostate158 dataset, and 73.2% and 87.4% for PZ and CG, respectively, in the inter-dataset evaluation.

Significance: As PZ and CG lesions have different characteristics, the zonal boundaries segmented by BASC-Net will facilitate prostate lesion detection.
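[Editor's note] A loose, simplified reading of the boundary-aware contrastive idea as an InfoNCE-style objective over pooled subregion features; the pooling, pairing, and temperature below are assumptions, not the paper's exact BAC formulation.

```python
import torch
import torch.nn.functional as F

def bac_style_loss(f_inner, f_boundary, f_other, temperature=0.1):
    """Pull together features of the inner-body and boundary subregions of the
    same zone, and push apart features of a different zone.

    f_inner, f_boundary: (d,) or (batch, d) mean-pooled feature vectors from
    the two subregions of one zone; f_other: features from another zone.
    """
    f_inner = F.normalize(f_inner, dim=-1)
    f_boundary = F.normalize(f_boundary, dim=-1)
    f_other = F.normalize(f_other, dim=-1)
    pos = torch.exp((f_inner * f_boundary).sum(-1) / temperature)
    neg = torch.exp((f_inner * f_other).sum(-1) / temperature)
    return -torch.log(pos / (pos + neg)).mean()
```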
A large dataset of white blood cells containing cell locations and types, along with segmented nuclei and cytoplasm
Accurate and early detection of anomalies in peripheral white blood cells plays a crucial role in the evaluation of well-being in individuals and the diagnosis and prognosis of hematologic diseases. For example, some blood disorders and immune system-related diseases are diagnosed by the differential count of white blood cells, which is one of the common laboratory tests. Data is one of the most important ingredients in the development and testing of many commercial and successful automatic or semi-automatic systems. To this end, this study introduces a free-access dataset of normal peripheral white blood cells called Raabin-WBC, containing about 40,000 images of white blood cells and color spots. To ensure the validity of the data, a significant number of cells were labeled by two experts, and the ground truths of the nuclei and cytoplasm were extracted for 1145 selected cells. To provide the necessary diversity, various smears were imaged using two different cameras and two different microscopes. We performed some preliminary deep learning experiments on Raabin-WBC to demonstrate how the generalization power of machine learning methods, especially deep neural networks, can be affected by the mentioned diversity. As a public dataset in the field of health, Raabin-WBC can be used for model development and testing in different machine learning tasks including classification, detection, segmentation, and localization.
Addressing image misalignments in multi-parametric prostate MRI for enhanced computer-aided diagnosis of prostate cancer
Kovacs, B.
Netzer, N.
Baumgartner, M.
Schrader, A.
Isensee, F.
Weisser, C.
Wolf, I.
Gortz, M.
Jaeger, P. F.
Schutz, V.
Floca, R.
Gnirs, R.
Stenzinger, A.
Hohenfellner, M.
Schlemmer, H. P.
Bonekamp, D.
Maier-Hein, K. H.
Sci Rep2023Journal Article, cited 0 times
PROSTATEx
Image Registration
Algorithm Development
Computer Aided Diagnosis (CADx)
Male
Humans
*Prostate/diagnostic imaging/pathology
Magnetic Resonance Imaging/methods
Diagnosis
Computer-Assisted/methods
*Prostatic Neoplasms/diagnostic imaging/pathology
Computers
Prostate cancer (PCa) diagnosis on multi-parametric magnetic resonance images (MRI) requires radiologists with a high level of expertise. Misalignments between the MRI sequences can be caused by patient movement, elastic soft-tissue deformations, and imaging artifacts, and they further increase the complexity of the interpretation task for radiologists. Recently, computer-aided diagnosis (CAD) tools have demonstrated potential for PCa diagnosis, typically relying on complex co-registration of the input modalities. However, there is no consensus among research groups on whether CAD systems profit from using registration. Furthermore, alternative strategies to handle multi-modal misalignments have not been explored so far. Our study introduces and compares different strategies to cope with image misalignments and evaluates them with regard to their direct effect on the diagnostic accuracy of PCa. In addition to established registration algorithms, we propose 'misalignment augmentation' as a concept to increase CAD robustness. As the results demonstrate, misalignment augmentations can not only compensate for a complete lack of registration, but, if used in conjunction with registration, also improve the overall performance on an independent test set.
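[Editor's note] A minimal sketch of the misalignment-augmentation concept: each sequence of a training case is independently translated by a small random offset so the model learns to tolerate inter-sequence misregistration. The shift range and interpolation settings are illustrative, not the study's parameters.

```python
import numpy as np
from scipy import ndimage

def misalignment_augmentation(sequences, max_shift_mm=3.0,
                              spacing=(1.0, 1.0, 1.0), rng=None):
    """Randomly translate each MRI sequence independently.

    sequences: dict of sequence name -> 3D array (e.g., 'T2w', 'ADC', 'DWI').
    Returns a new dict with each volume shifted by its own random offset.
    """
    rng = rng or np.random.default_rng()
    out = {}
    for name, vol in sequences.items():
        shift_vox = rng.uniform(-max_shift_mm, max_shift_mm, size=3) / np.asarray(spacing)
        out[name] = ndimage.shift(vol, shift_vox, order=1, mode="nearest")
    return out
```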
The quest for 'diagnostically lossless' medical image compression: a comparative study of objective quality metrics for compressed medical images
Kowalik-Urbaniak, Ilona
Brunet, Dominique
Wang, Jiheng
Koff, David
Smolarski-Koff, Nadine
Vrscay, Edward R
Wallace, Bill
Wang, Zhou
2014Conference Proceedings, cited 0 times
Image Compression
BRAIN
JPEG2000
Computed Tomography (CT)
Our study, involving a collaboration with radiologists (DK, NSK) as well as a leading international developer of medical imaging software (AGFA), is primarily concerned with improved methods of assessing the diagnostic quality of compressed medical images and the investigation of compression artifacts resulting from JPEG and JPEG2000. In this work, we compare the performance of the Structural Similarity quality measure (SSIM), MSE/PSNR, compression ratio CR, and JPEG quality factor Q, based on experimental data collected in two experiments involving radiologists. An ROC and Kolmogorov-Smirnov analysis indicates that compression ratio is not always a good indicator of visual quality. Moreover, SSIM demonstrates the best performance, i.e., it provides the closest match to the radiologists' assessments. We also show that a weighted Youden index and curve fitting method can provide SSIM and MSE thresholds for acceptable compression ratios.
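[Editor's note] The metric comparison can be reproduced in miniature with scikit-image and Pillow; the sketch below computes PSNR and SSIM for a test image compressed at several JPEG quality factors. A generic test image stands in for the radiologist-rated CT data.

```python
import io
import numpy as np
from PIL import Image
from skimage import data
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

original = data.camera()  # 8-bit grayscale stand-in for a CT slice

for quality in (90, 50, 10):
    buf = io.BytesIO()
    Image.fromarray(original).save(buf, format="JPEG", quality=quality)
    compressed = np.asarray(Image.open(buf))
    psnr = peak_signal_noise_ratio(original, compressed)
    ssim = structural_similarity(original, compressed)
    print(f"Q={quality}: PSNR={psnr:.2f} dB, SSIM={ssim:.4f}")
```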
Usefulness of gradient tree boosting for predicting histological subtype and EGFR mutation status of non-small cell lung cancer on (18)F FDG-PET/CT
Koyasu, S.
Nishio, M.
Isoda, H.
Nakamoto, Y.
Togashi, K.
Ann Nucl Med2020Journal Article, cited 3 times
Website
NSCLC Radiogenomics
LUNG
Non Small Cell Lung Cancer (NSCLC)
OBJECTIVE: To develop and evaluate a radiomics approach for classifying histological subtypes and epidermal growth factor receptor (EGFR) mutation status in lung cancer on PET/CT images. METHODS: PET/CT images of lung cancer patients were obtained from public databases and used to establish two datasets, respectively to classify histological subtypes (156 adenocarcinomas and 32 squamous cell carcinomas) and EGFR mutation status (38 mutant and 100 wild-type samples). Seven types of imaging features were obtained from PET/CT images of lung cancer. Two types of machine learning algorithms were used to predict histological subtypes and EGFR mutation status: random forest (RF) and gradient tree boosting (XGB). The classifiers used either a single type or multiple types of imaging features. In the latter case, the optimal combination of the seven types of imaging features was selected by Bayesian optimization. Receiver operating characteristic analysis, area under the curve (AUC), and tenfold cross validation were used to assess the performance of the approach. RESULTS: In the classification of histological subtypes, the AUC values of the various classifiers were as follows: RF, single type: 0.759; XGB, single type: 0.760; RF, multiple types: 0.720; XGB, multiple types: 0.843. In the classification of EGFR mutation status, the AUC values were: RF, single type: 0.625; XGB, single type: 0.617; RF, multiple types: 0.577; XGB, multiple types: 0.659. CONCLUSIONS: The radiomics approach to PET/CT images, together with XGB and Bayesian optimization, is useful for classifying histological subtypes and EGFR mutation status in lung cancer.
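[Editor's note] A minimal sketch of the XGB arm of the study: gradient tree boosting evaluated with tenfold cross-validated AUC on a synthetic stand-in for the PET/CT feature matrix. The Bayesian optimization of feature-type combinations is omitted here.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

# Hypothetical radiomics matrix: one row of PET/CT features per tumor.
rng = np.random.default_rng(0)
X = rng.normal(size=(188, 100))    # e.g., 188 tumors x 100 imaging features
y = rng.integers(0, 2, size=188)   # adenocarcinoma vs. squamous (toy labels)

clf = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1,
                    eval_metric="logloss")
auc = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
print(f"10-fold AUC: {auc.mean():.3f}")  # the paper reports up to 0.843
```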
Lupsix: A Cascade Framework for Lung Parenchyma Segmentation in Axial CT Images
Koyuncu, Hasan
International Journal of Intelligent Systems and Applications in Engineering2018Journal Article, cited 0 times
Website
LIDC-IDRI
Segmentation
An Efficient Pipeline for Abdomen Segmentation in CT Images
Koyuncu, H.
Ceylan, R.
Sivri, M.
Erdogan, H.
J Digit Imaging2018Journal Article, cited 4 times
Website
TCGA-LUAD
Segmentation
Classification
Computed tomography (CT) scans usually include some disadvantages due to the nature of the imaging procedure, and these handicaps prevent accurate abdomen segmentation. Discontinuous abdomen edges, the bed section of the CT, patient information, closeness between the edges of the abdomen and the CT, poor contrast, and a narrow histogram can be regarded as the most important handicaps that occur in abdominal CT scans. Currently, one or more handicaps can arise and prevent technicians from obtaining abdomen images through simple segmentation techniques. In other words, CT scans can include the bed section of the CT, a patient's diagnostic information, low-quality abdomen edges, low-level contrast, and a narrow histogram, all in one scan. These phenomena constitute a challenge, and an efficient pipeline that is unaffected by these handicaps is required. In addition, analysis such as segmentation, feature selection, and classification has meaning for a real-time diagnosis system in cases where the abdomen section is directly used with a specific size. A statistical pipeline is designed in this study that is unaffected by the handicaps mentioned above. Intensity-based approaches, morphological processes, and histogram-based procedures are utilized to design an efficient structure. Performance evaluation is realized in experiments on 58 CT images (16 training, 16 test, and 26 validation) that include the abdomen and one or more disadvantage(s). The first part of the data (16 training images) is used to detect the pipeline's optimum parameters, while the second and third parts are utilized to evaluate and to confirm the segmentation performance. The segmentation results are presented as the means of six performance metrics. Thus, the proposed method achieves remarkable average rates for training/test/validation of 98.95/99.36/99.57% (jaccard), 99.47/99.67/99.79% (dice), 100/99.91/99.91% (sensitivity), 98.47/99.23/99.85% (specificity), 99.38/99.63/99.87% (classification accuracy), and 98.98/99.45/99.66% (precision). In summary, a statistical pipeline performing the task of abdomen segmentation is achieved that is not affected by the disadvantages, and the most detailed abdomen segmentation study is performed for use before organ and tumor segmentation, feature extraction, and classification.
Nodule2vec: A 3D Deep Learning System for Pulmonary Nodule Retrieval Using Semantic Representation
Kravets, Ilia
Heletz, Tal
Greenspan, Hayit
2020Book Section, cited 0 times
LIDC-IDRI
Content-based retrieval supports a radiologist decision making process by presenting the doctor the most similar cases from the database containing both historical diagnosis and further disease development history. We present a deep learning system that transforms a 3D image of a pulmonary nodule from a CT scan into a low-dimensional embedding vector. We demonstrate that such a vector representation preserves semantic information about the nodule and offers a viable approach for content-based image retrieval (CBIR). We discuss the theoretical limitations of the available datasets and overcome them by applying transfer learning of the state-of-the-art lung nodule detection model. We evaluate the system using the LIDC-IDRI dataset of thoracic CT scans. We devise a similarity score and show that it can be utilized to measure similarity 1) between annotations of the same nodule by different radiologists and 2) between the query nodule and the top four CBIR results. A comparison between doctors and algorithm scores suggests that the benefit provided by the system to the radiologist end-user is comparable to obtaining a second radiologist’s opinion.
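[Editor's note] The retrieval step reduces to nearest-neighbour search in the embedding space; a minimal sketch follows, assuming embeddings already produced by the trained encoder (not reproduced here).

```python
import numpy as np

def retrieve_top_k(query_vec: np.ndarray, database: np.ndarray, k: int = 4):
    """Return indices and scores of the k most similar embeddings by cosine
    similarity. query_vec: (d,); database: (n, d) historical nodule embeddings."""
    q = query_vec / np.linalg.norm(query_vec)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    sims = db @ q
    order = np.argsort(-sims)[:k]   # top-k CBIR results, as in the paper
    return order, sims[order]
```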
Impact of internal target volume definition for pencil beam scanned proton treatment planning in the presence of respiratory motion variability for lung cancer: A proof of concept
Krieger, Miriam
Giger, Alina
Salomir, Rares
Bieri, Oliver
Celicanin, Zarko
Cattin, Philippe C
Lomax, Antony J
Weber, Damien C
Zhang, Ye
Radiotherapy and Oncology2020Journal Article, cited 0 times
Website
Proton Radiation Therapy
4D-Lung
Medical (CT) image generation with style
Krishna, Arjun
Mueller, Klaus
2019Conference Proceedings, cited 0 times
GAN
CT
Generation of hemipelvis surface geometry based on statistical shape modelling and contralateral mirroring
Krishna, Praveen
Robinson, Dale L.
Bucknill, Andrew
Lee, Peter Vee Sin
Biomechanics and Modeling in Mechanobiology2022Journal Article, cited 0 times
Website
CT Lymph Nodes
Model
Personalised fracture plates manufactured using 3D printing offer an improved treatment option for unstable pelvic ring fractures that may not be adequately secured using off-the-shelf components. To design fracture plates that secure the bone fragments in their pre-fracture positions, the fractures must be reduced virtually using medical imaging-based reconstructions, a time-consuming process involving segmentation and repositioning of fragments until surface congruency is achieved. This study compared statistical shape models (SSMs) and contralateral mirroring as automated methods to reconstruct the hemipelvis using varying amounts of bone surface geometry. The training set for the geometries was obtained from pelvis CT scans of 33 females. The root-mean-squared error (RMSE) was quantified across the entire surface of the hemipelvis and within specific regions, and deviations of pelvic landmarks were computed from their positions in the intact hemipelvis. The reconstruction of the entire hemipelvis surface based on contralateral mirroring had an RMSE of 1.21 ± 0.29 mm, whereas for SSMs based on the entire hemipelvis surface, the RMSE was 1.11 ± 0.29 mm, a difference that was not significant (p = 0.32). Moreover, all hemipelvis reconstructions based on the full or partial bone geometries had RMSEs and landmark deviations from contralateral mirroring that were significantly lower (p < 0.05) or statistically equivalent to the SSMs. These results indicate that contralateral mirroring tends to be more accurate than SSMs for reconstructing unilateral pelvic fractures. SSMs may still be a viable method for hemipelvis fracture reconstruction in situations where contralateral geometries are not available, such as bilateral pelvic fractures, or for highly asymmetric pelvic anatomies.
Performance Analysis of Denoising in MR Images with Double Density Dual Tree Complex Wavelets, Curvelets and NonSubsampled Contourlet Transforms
Krishnakumar, V
Parthiban, Latha
Annual Research & Review in Biology2014Journal Article, cited 0 times
RIDER Breast MRI
Digital images are extensively used by medical doctors during different stages of disease diagnosis and treatment. In the medical field, noise occurs in an image during two phases: acquisition and transmission. During the acquisition phase, noise is induced into an image due to manufacturing defects, improper functioning of internal components, minute component failures, and manual handling errors of electronic scanning devices such as PECT/SPECT and MRI/CT scanners. Nowadays, healthcare organizations are beginning to consider cloud computing solutions for managing and sharing huge volumes of medical data. This leads to the possibility of transmitting different types of medical data, including CT and MR images, patient details, and much more information through the internet. Due to the presence of noise in the transmission channel, some unwanted signals are added to the transmitted medical data. Image denoising algorithms are employed to reduce the unwanted modifications of the pixels in an image. In this paper, the performance of denoising methods with two-dimensional transformations of nonsubsampled contourlets (NSCT), curvelets, and double density dual tree complex wavelets (DD-DTCWT) is compared and analysed using image quality measures such as peak signal to noise ratio, root mean square error, and structural similarity index. In this paper, 200 MR images of brain (3T MRI scan), heart, and breast are selected for testing the noise reduction techniques with the above transformations. The results show that NSCT gives good PSNR values for random and impulse noises. DD-DTCWT has good noise suppressing capability for speckle and Rician noises. Both NSCT and DD-DTCWT cope well with images affected by Poisson noise. The best PSNR values obtained for salt-and-pepper and additive white Gaussian noises are 21.29 and 56.45, respectively. For speckle noise, DD-DTCWT gives 33.46, which is better than NSCT and curvelet. The values 33.50 and 33.56 are the top PSNRs of NSCT and DD-DTCWT for Poisson noise.
A Level Set Evolution Morphology Based Segmentation of Lung Nodules and False Nodule Elimination by 3D Centroid Shift and Frequency Domain DC Constant Analysis
Krishnamurthy, Senthilkumar
Narasimhan, Ganesh
Rengasamy, Umamaheswari
International Journal of u- and e- Service, Science and Technology2016Journal Article, cited 0 times
Website
LIDC-IDRI
Segmentation
LUNG
Classification
A Level Set Evolution with Morphology (LSEM) based segmentation algorithm is proposed in this work to segment all the possible lung nodules from a series of CT scan images. Not all of the segmented nodule candidates were cancerous in nature: initially, vessels and calcifications were also segmented as nodule candidates. Structural feature analysis was carried out to remove the vessels. The nodules with more centroid shift in the consecutive slices were eliminated, since a malignant nodule's resultant position does not usually deviate. The calcifications were eliminated by frequency domain analysis: the DC constant of each nodule candidate was computed in the frequency domain, and nodule candidates with a high DC constant value are likely calcifications, as calcification patterns are homogeneous in nature. This algorithm was applied on a database of 40 patient cases with 58 malignant nodules. The algorithms proposed in this paper precisely detected 55 malignant nodules and failed to detect 3, with a sensitivity of 95%. Further, this algorithm correctly eliminated 778 tissue clusters that were initially segmented as nodules; however, 79 non-malignant tissue clusters were detected as malignant nodules. Therefore, the false positive rate of this algorithm was 1.98 per patient.
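[Editor's note] The two false-positive filters lend themselves to compact implementations: the DC constant is the zero-frequency coefficient of the patch's 2D FFT (equal to the sum of intensities, so bright homogeneous calcifications score high), and the centroid shift is the displacement of the candidate's centre of mass between consecutive slices. A sketch, with thresholds left to the caller since the paper's values are not given in the abstract:

```python
import numpy as np
from scipy import ndimage

def dc_constant(patch: np.ndarray) -> float:
    """Zero-frequency (DC) FFT coefficient of a nodule patch."""
    return float(np.abs(np.fft.fft2(patch)[0, 0]))

def centroid_shift(mask_a: np.ndarray, mask_b: np.ndarray,
                   spacing=(1.0, 1.0)) -> float:
    """Centroid displacement (mm) of a candidate between consecutive slices.
    Both masks are assumed non-empty binary arrays."""
    ca = np.asarray(ndimage.center_of_mass(mask_a))
    cb = np.asarray(ndimage.center_of_mass(mask_b))
    return float(np.linalg.norm((ca - cb) * np.asarray(spacing)))
```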
Three-dimensional lung nodule segmentation and shape variance analysis to detect lung cancer with reduced false positives
Krishnamurthy, Senthilkumar
Narasimhan, Ganesh
Rengasamy, Umamaheswari
Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine2016Journal Article, cited 17 times
Website
LIDC-IDRI
Algorithms
Analysis of Variance
Humans
Imaging
Three-Dimensional/*methods
LUNG
Radiographic Image Interpretation
Computer-Assisted/*methods
Tomography
X-Ray Computed/*methods
Computed Tomography (CT)
juxta-pleural nodule
morphology processing
shape feature extraction
three-dimensional segmentation
Three-dimensional analysis of lung computed tomography scans was carried out in this study to detect malignant lung nodules. An automatic three-dimensional segmentation algorithm proposed here efficiently segmented the tissue clusters (nodules) inside the lung. However, an automatic morphological region-growing segmentation algorithm implemented to segment the well-circumscribed nodules inside the lung did not segment the juxta-pleural nodules present on the inner surface of the wall of the lung. A novel edge bridge and fill technique is proposed in this article to segment the juxta-pleural and pleural-tail nodules accurately. The centroid shift of each candidate nodule was computed. The nodules with more centroid shift in the consecutive slices were eliminated, since a malignant nodule's resultant position did not usually deviate. Three-dimensional shape variation and edge sharpness analyses were performed to reduce the false positives and to classify the malignant nodules. The change in area and equivalent diameter was greater for malignant nodules in the consecutive slices, and the malignant nodules showed a sharp edge. Segmentation was followed by three-dimensional centroid, shape, and edge analysis, carried out on a lung computed tomography database of 20 patients with 25 malignant nodules. The algorithms proposed in this article precisely detected 22 malignant nodules and failed to detect 3, with a sensitivity of 88%. Furthermore, this algorithm correctly eliminated 216 tissue clusters that were initially segmented as nodules; however, 41 non-malignant tissue clusters were detected as malignant nodules. Therefore, the false positive rate of this algorithm was 2.05 per patient.
Enrichment of lung cancer computed tomography collections with AI-derived annotations
Krishnaswamy, D.
Bontempi, D.
Thiriveedhi, V. K.
Punzo, D.
Clunie, D.
Bridge, C. P.
Aerts, Hjwl
Kikinis, R.
Fedorov, A.
Sci Data2024Journal Article, cited 0 times
Website
NLST
NSCLC-Radiomics
Public imaging datasets are critical for the development and evaluation of automated tools in cancer imaging. Unfortunately, many do not include annotations or image-derived features, complicating downstream analysis. Artificial intelligence-based annotation tools have been shown to achieve acceptable performance and can be used to automatically annotate large datasets. As part of the effort to enrich public data available within NCI Imaging Data Commons (IDC), here we introduce AI-generated annotations for two collections containing computed tomography images of the chest, NSCLC-Radiomics, and a subset of the National Lung Screening Trial. Using publicly available AI algorithms, we derived volumetric annotations of thoracic organs-at-risk, their corresponding radiomics features, and slice-level annotations of anatomical landmarks and regions. The resulting annotations are publicly available within IDC, where the DICOM format is used to harmonize the data and achieve FAIR (Findable, Accessible, Interoperable, Reusable) data principles. The annotations are accompanied by cloud-enabled notebooks demonstrating their use. This study reinforces the need for large, publicly accessible curated datasets and demonstrates how AI can aid in cancer imaging.
Exploring Compound Loss Functions for Brain Tumor Segmentation
Kriz, Anita
Mehta, Raghav
Nichyporuk, Brennan
Arbel, Tal
2024Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS 2023
In this study, we introduce a modified 3D U-Net framework tailored for the BraTS 2023 Segmentation - Adult Glioma challenge. Alongside conventional techniques such as data augmentation, post-processing, and Monte Carlo dropout, we investigate the efficacy of compound loss functions with a primary focus on mitigating class imbalance. In particular, we investigate various combinations of cross-entropy, boundary, and dice loss functions to identify the most suitable loss for the given data distribution. By engineering the baseline U-Net model with these modifications, we have determined that the combination of dice and cross-entropy loss yields encouraging results, exemplified by lesion-wise dice scores of 0.753, 0.791, and 0.886. Our analysis justifies the use of specially designed loss functions for the underlying data distribution at hand.
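[Editor's note] The winning dice-plus-cross-entropy combination can be sketched as follows; the loss weights are illustrative, not the tuned values from the study.

```python
import torch
import torch.nn.functional as F

def dice_loss(logits, target, eps=1e-6):
    """Soft multi-class Dice loss. logits: (B, C, ...); target: integer labels (B, ...)."""
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes=logits.shape[1]).movedim(-1, 1).float()
    dims = tuple(range(2, logits.ndim))          # spatial dimensions
    inter = (probs * onehot).sum(dims)
    denom = probs.sum(dims) + onehot.sum(dims)
    return 1.0 - ((2 * inter + eps) / (denom + eps)).mean()

def compound_loss(logits, target, w_dice=1.0, w_ce=1.0):
    """Dice + cross-entropy, the combination the authors found most effective."""
    return w_dice * dice_loss(logits, target) + w_ce * F.cross_entropy(logits, target)
```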
Automated Koos Classification of Vestibular Schwannoma
Kujawa, Aaron
Dorent, Reuben
Connor, Steve
Oviedova, Anna
Okasha, Mohamed
Grishchuk, Diana
Ourselin, Sebastien
Paddick, Ian
Kitchen, Neil
Vercauteren, Tom
Shapey, Jonathan
Frontiers in Radiology2022Journal Article, cited 0 times
Website
Vestibular-Schwannoma-SEG
Classification
Machine Learning
Objective: The Koos grading scale is a frequently used classification system for vestibular schwannoma (VS) that accounts for extrameatal tumor dimension and compression of the brain stem. We propose an artificial intelligence (AI) pipeline to fully automate the segmentation and Koos classification of VS from MRI to improve clinical workflow and facilitate patient management.

Methods: We propose a method for Koos classification that does not only rely on available images but also on automatically generated segmentations. Artificial neural networks were trained and tested based on manual tumor segmentations and ground truth Koos grades of contrast-enhanced T1-weighted (ceT1) and high-resolution T2-weighted (hrT2) MR images from subjects with a single sporadic VS, acquired on a single scanner and with a standardized protocol. The first stage of the pipeline comprises a convolutional neural network (CNN) which can segment the VS and 7 adjacent structures. For the second stage, we propose two complementary approaches that are combined in an ensemble. The first approach applies a second CNN to the segmentation output to predict the Koos grade; the other approach extracts handcrafted features which are passed to a Random Forest classifier. The pipeline results were compared to those achieved by two neurosurgeons.

Results: Eligible patients (n = 308) were pseudo-randomly split into 5 groups to evaluate the model performance with 5-fold cross-validation. The weighted macro-averaged mean absolute error (MA-MAE), weighted macro-averaged F1 score (F1), and accuracy score of the ensemble model were assessed on the testing sets as follows: MA-MAE = 0.11 ± 0.05, F1 = 89.3 ± 3.0%, accuracy = 89.3 ± 2.9%, which was comparable to the average performance of two neurosurgeons: MA-MAE = 0.11 ± 0.08, F1 = 89.1 ± 5.2, accuracy = 88.6 ± 5.8%. Inter-rater reliability was assessed by calculating Fleiss' generalized kappa (k = 0.68) based on all 308 cases, and intra-rater reliabilities of annotator 1 (k = 0.95) and annotator 2 (k = 0.82) were calculated according to the weighted kappa metric with quadratic (Fleiss-Cohen) weights based on 15 randomly selected cases.

Conclusions: We developed the first AI framework to automatically classify VS according to the Koos scale. The excellent results show that the accuracy of the framework is comparable to that of neurosurgeons and may therefore facilitate management of patients with VS. The models, code, and ground truth Koos grades for a subset of publicly available images (n = 188) will be released upon publication.
A Deep Learning-Aided Automated Method for Calculating Metabolic Tumor Volume in Diffuse Large B-Cell Lymphoma
Metabolic tumor volume (MTV) is a robust prognostic biomarker in diffuse large B-cell lymphoma (DLBCL). The available semiautomatic software for calculating MTV requires manual input limiting its routine application in clinical research. Our objective was to develop a fully automated method (AM) for calculating MTV and to validate the method by comparing its results with those from two nuclear medicine (NM) readers. The automated method designed for this study employed a deep convolutional neural network to segment normal physiologic structures from the computed tomography (CT) scans that demonstrate intense avidity on positron emission tomography (PET) scans. The study cohort consisted of 100 patients with newly diagnosed DLBCL who were randomly selected from the Alliance/CALGB 50303 (NCT00118209) trial. We observed high concordance in MTV calculations between the AM and readers with Pearson’s correlation coefficients and interclass correlations comparing reader 1 to AM of 0.9814 (p < 0.0001) and 0.98 (p < 0.001; 95%CI = 0.96 to 0.99), respectively; and comparing reader 2 to AM of 0.9818 (p < 0.0001) and 0.98 (p < 0.0001; 95%CI = 0.96 to 0.99), respectively. The Bland–Altman plots showed only relatively small systematic errors between the proposed method and readers for both MTV and maximum standardized uptake value (SUVmax). This approach may possess the potential to integrate PET-based biomarkers in clinical trials.
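[Editor's note] The agreement statistics reported here are standard; a minimal sketch with toy MTV values standing in for the 100-patient cohort:

```python
import numpy as np
from scipy.stats import pearsonr

def bland_altman(reader: np.ndarray, automated: np.ndarray):
    """Bias and 95% limits of agreement between reader and automated MTV."""
    diff = automated - reader
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, (bias - loa, bias + loa)

# Toy MTV values (mL); the automated method adds small random error.
rng = np.random.default_rng(0)
reader = rng.uniform(10, 500, size=100)
automated = reader + rng.normal(0, 15, size=100)

r, p = pearsonr(reader, automated)
print(f"Pearson r = {r:.4f} (p = {p:.2g})")
print("Bland-Altman bias, LoA:", bland_altman(reader, automated))
```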
Comparing the performance of a deep learning-based lung gross tumour volume segmentation algorithm before and after transfer learning in a new hospital
Kulkarni, Chaitanya
Sherkhane, Umesh
Jaiswar, Vinay
Mithun, Sneha
Mysore Siddu, Dinesh
Rangarajan, Venkatesh
Dekker, Andre
Traverso, Alberto
Jha, Ashish
Wee, Leonard
BJR|Open2024Journal Article, cited 0 times
Website
NSCLC-Radiomics
NSCLC-Radiomics-Interobserver1
Deep Learning
Automatic Segmentation
Lung Cancer
Gross Tumor Volume (GTV)
Transfer learning
Computed Tomography (CT)
Radiotherapy
Objectives: Radiation therapy for lung cancer requires a gross tumour volume (GTV) to be carefully outlined by a skilled radiation oncologist (RO) to accurately pinpoint high radiation dose to a malignant mass while simultaneously minimizing radiation damage to adjacent normal tissues. This is manually intensive and tedious; however, it is feasible to train a deep learning (DL) neural network that could assist ROs to delineate the GTV. However, DL trained on large openly accessible data sets might not perform well when applied to a superficially similar task in a different clinical setting. In this work, we tested the performance of a DL automatic lung GTV segmentation model trained on open-access Dutch data when used on Indian patients from a large public tertiary hospital, and hypothesized that generic DL performance could be improved for a specific local clinical context by means of modest transfer learning on a small representative local subset.

Methods: X-ray computed tomography (CT) series in a public data set called “NSCLC-Radiomics” from The Cancer Imaging Archive was first used to train a DL-based lung GTV segmentation model (Model 1). Its performance was assessed using a different open-access data set (“Interobserver1”) of Dutch subjects plus a private Indian data set from a local tertiary hospital (“Test Set 2”). Another Indian data set (“Retrain Set 1”) was used to fine-tune the former DL model using a transfer learning method. The Indian data sets were taken from the CT of a hybrid scanner based in nuclear medicine, but the GTV was drawn by skilled Indian ROs. The final (after fine-tuning) model (Model 2) was then re-evaluated on “Interobserver1” and “Test Set 2.” Dice similarity coefficient (DSC), precision, and recall were used as geometric segmentation performance metrics.

Results: Model 1, trained exclusively on Dutch scans, showed a significant fall in performance when tested on “Test Set 2.” However, the DSC of Model 2 recovered by 14 percentage points when evaluated in the same test set. Precision and recall showed a similar rebound of performance after transfer learning, in spite of using a comparatively small sample size. The performance of both models, before and after the fine-tuning, did not significantly change in “Interobserver1.”

Conclusions: A large public open-access data set was used to train a generic DL model for lung GTV segmentation, but this did not perform well initially in the Indian clinical context. Using transfer learning methods, it was feasible to efficiently and easily fine-tune the generic model using only a small number of local examples from the Indian hospital. This led to a recovery of some of the geometric segmentation performance, but the tuning did not appear to affect the performance of the model in another open-access data set.

Advances in knowledge: Caution is needed when using models trained on large volumes of international data in a local clinical setting, even when that training data set is of good quality. Minor differences in scan acquisition and clinician delineation preferences may result in an apparent drop in performance. However, DL models have the advantage of being efficiently “adapted” from a generic to a locally specific context, with only a small amount of fine-tuning by means of transfer learning on a small local institutional data set.
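[Editor's note] The transfer-learning step can be sketched as freezing most of the pretrained network and updating only the remaining layers on the small local dataset; the parameter-naming convention and hyperparameters below are assumptions, not the study's settings.

```python
import torch
from torch import nn

def fine_tune(model: nn.Module, train_loader, epochs=5, lr=1e-4,
              freeze_prefix="encoder"):
    """Fine-tune a pretrained segmentation model on a small local dataset.

    Parameters whose names start with `freeze_prefix` (assumed naming) are
    frozen; only the remaining layers are updated on the local scans.
    """
    for name, p in model.named_parameters():
        p.requires_grad = not name.startswith(freeze_prefix)
    opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in train_loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
    return model
```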
Analysis of CT DICOM Image Segmentation for Abnormality Detection
Kulkarni, Rashmi
Bhavani, K.
International Journal of Engineering and Manufacturing2019Journal Article, cited 0 times
Website
LIDC-IDRI
Computer Aided Detection (CADe)
Computed Tomography (CT)
Cancer is a menacing disease, and great care is required when diagnosing it. The CT modality is mostly used in cancer therapy. Image processing techniques [1] can help doctors to diagnose more easily and accurately. Image pre-processing [2] and segmentation methods [3] are used in the extraction of cancerous nodules from CT images. Much research has been done on segmentation of CT images with different algorithms, but none has reached 100% accuracy. This research work proposes a model for the analysis of CT image segmentation with filtered and unfiltered images, and brings out the importance of pre-processing of CT images.
Content-Based Medical Image Retrieval: A Survey of Applications to Multidimensional and Multimodality Data
Kumar, Ashnil
Kim, Jinman
Cai, Weidong
Fulham, Michael
Feng, Dagan
Journal of Digital Imaging2013Journal Article, cited 109 times
Website
Content based image retrieval (CBIR)
Interoperability
Review
Medical imaging is fundamental to modern healthcare, and its widespread use has resulted in the creation of image databases, as well as picture archiving and communication systems. These repositories now contain images from a diverse range of modalities, multidimensional (three-dimensional or time-varying) images, as well as co-aligned multimodality images. These image collections offer the opportunity for evidence-based diagnosis, teaching, and research; for these applications, there is a requirement for appropriate methods to search the collections for images that have characteristics similar to the case(s) of interest. Content-based image retrieval (CBIR) is an image search technique that complements the conventional text-based retrieval of images by using visual features, such as color, texture, and shape, as search criteria. Medical CBIR is an established field of study that is beginning to realize promise when applied to multidimensional and multimodality medical data. In this paper, we present a review of state-of-the-art medical CBIR approaches in five main categories: two-dimensional image retrieval, retrieval of images with three or more dimensions, the use of nonimage data to enhance the retrieval, multimodality image retrieval, and retrieval from diverse datasets. We use these categories as a framework for discussing the state of the art, focusing on the characteristics and modalities of the information used during medical image retrieval.
A visual analytics approach using the exploration of multidimensional feature spaces for content-based medical image retrieval
Kumar, Ashnil
Nette, Falk
Klein, Karsten
Fulham, Michael
Kim, Jinman
IEEE Journal of Biomedical and Health Informatics2014Journal Article, cited 27 times
Website
LIDC-IDRI
Content based medical image retrieval
Discovery radiomics for pathologically-proven computed tomography lung cancer prediction
Kumar, Devinder
Chung, Audrey G
Shaifee, Mohammad J
Khalvati, Farzad
Haider, Masoom A
Wong, Alexander
2017Conference Proceedings, cited 30 times
Website
LIDC-IDRI
Radiomics
Classification
LUNG
Deep convolutional neural network (DCNN)
Lung cancer is the leading cause of cancer-related deaths. As such, there is an urgent need for a streamlined process that can allow radiologists to provide diagnoses with greater efficiency and accuracy. A powerful tool for this is radiomics: a high-dimension imaging feature set. In this study, we take the idea of radiomics one step further by introducing the concept of discovery radiomics for lung cancer prediction using CT imaging data, realizing custom radiomic sequencers as deep convolutional sequencers built on a deep convolutional neural network learning architecture. To illustrate the prognostic power and effectiveness of the radiomic sequences produced by the discovered sequencer, we perform cancer prediction between malignant and benign lesions from 97 patients using the pathologically-proven diagnostic data from the LIDC-IDRI dataset. Using the clinically provided pathologically-proven data as ground truth, the proposed framework provided an average accuracy of 77.52% via 10-fold cross-validation, with a sensitivity of 79.06% and specificity of 76.11%, surpassing the state-of-the-art method.
Automatic Detection of White Blood Cancer From Bone Marrow Microscopic Images Using Convolutional Neural Networks
Kumar, Deepika
Jain, Nikita
Khurana, Aayush
Mittal, Sweta
Satapathy, Suresh Chandra
Senkerik, Roman
Hemanth, Jude D.
IEEE Access2020Journal Article, cited 0 times
SN-AM
Leukocytes, produced in the bone marrow, make up around one percent of all blood cells. Uncontrolled growth of these white blood cells leads to blood cancer. Of the three different types of such cancers, the proposed study provides a robust mechanism for the classification of Acute Lymphoblastic Leukemia (ALL) and Multiple Myeloma (MM) using the SN-AM dataset. Acute lymphoblastic leukemia (ALL) is a type of cancer where the bone marrow forms too many lymphocytes. On the other hand, Multiple myeloma (MM), a different kind of cancer, causes cancer cells to accumulate in the bone marrow rather than releasing them into the bloodstream, where they crowd out and prevent the production of healthy blood cells. Conventionally, the process was carried out manually by a skilled professional in a considerable amount of time. The proposed model eradicates the probability of errors in the manual process by employing deep learning techniques, namely convolutional neural networks. The model, trained on images of cells, first pre-processes the images and extracts the best features. This is followed by training the model with the optimized Dense Convolutional neural network framework (termed DCNN here) and finally predicting the type of cancer present in the cells. The model was able to reproduce all the measurements correctly while it recollected the samples exactly 94 times out of 100. The overall accuracy was recorded to be 97.2%, which is better than conventional machine learning methods like Support Vector Machines (SVMs), Decision Trees, Random Forests, Naive Bayes, etc. This study indicates that the DCNN model's performance is close to that of the established CNN architectures, with far fewer parameters and less computation time, tested on the retrieved dataset. Thus, the model can be used effectively as a tool for determining the type of cancer in the bone marrow.
Empirical evaluation of filter pruning methods for acceleration of convolutional neural network
Kumar, Dheeraj
Mehta, Mayuri A.
Joshi, Vivek C.
Oza, Rachana S.
Kotecha, Ketan
Lin, Jerry Chun-Wei
Multimedia Tools and Applications2023Journal Article, cited 0 times
C_NMC_2019
Classification
Deep convolutional neural network (DCNN)
Algorithm Development
Histopathology imaging features
Training and inference of deep convolutional neural networks are usually slow due to the depth of the network and the number of parameters in the network. Although high-performance processors usually accelerate the training of these networks, their use on resource-constrained devices is still limited. Several compression-based acceleration methods have been presented to optimize the performance of neural networks. However, their use and adaptation are still limited due to their adverse effects on the network structure. Therefore, different filter pruning methods have been proposed to keep the network structure intact. To better address the above limitations, we first propose a detailed classification of model acceleration methods to explain the different ways of enhancing the inference performance of a convolutional neural network. Second, we present a broad classification of filter pruning methods, including a comparison of these methods. Third, we present an empirical evaluation of four filter pruning methods to understand the effects of filter pruning on model accuracy and parameter reduction. Fourth, we perform several experiments with ResNet20, a pre-trained CNN, and with the proposed custom CNN to show the effect of filter pruning on them. ResNet20 is used to address multiclass classification using the CIFAR 10 dataset, and the custom CNN is used to address binary classification using the leukaemia image classification dataset, which includes low-information medical images. The experimental results show that among the four filter pruning methods, the soft filter pruning method best preserves the accuracy of the original model for both ResNet20 and the custom CNN. In addition, the sampling-based filter pruning method shows the highest reduction of 99.8% in parameters on the custom CNN. The overall results show a reasonable pruning ratio within five training epochs for both the pre-trained CNN and the custom CNN. In addition, our results show that pruning redundant filters significantly reduces the model size and the number of floating point operations.
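[Editor's note] Soft filter pruning, the best accuracy-preserving method in this evaluation, can be sketched as zeroing the lowest-L1-norm convolutional filters at intervals during training while still letting them receive gradient updates, so the network structure stays intact. The pruning ratio below is illustrative.

```python
import torch
from torch import nn

@torch.no_grad()
def soft_filter_prune(model: nn.Module, prune_ratio: float = 0.3):
    """Zero the lowest-L1-norm filters of every Conv2d layer.

    Unlike hard pruning, zeroed filters keep receiving updates on the next
    training steps, so they may 'regrow' if they turn out to be useful.
    """
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            w = module.weight                    # (out_ch, in_ch, kH, kW)
            norms = w.abs().sum(dim=(1, 2, 3))   # L1 norm per output filter
            n_prune = int(prune_ratio * w.shape[0])
            if n_prune:
                idx = torch.argsort(norms)[:n_prune]
                w[idx] = 0.0
```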
Medical image segmentation using modified fuzzy c mean based clustering
Locating disease areas in medical images is one of the most challenging tasks in the field of image segmentation. This paper presents a new approach to image segmentation using modified fuzzy c-means (MFCM) clustering. For low-illumination medical images, the input image is first enhanced using the histogram equalization (HE) technique. The enhanced image is then segmented into various regions using the MFCM-based approach. Local information is employed in the objective function of MFCM to overcome the issue of noise sensitivity. After that, the membership partitioning is improved by fast membership filtering. The observed results of the proposed scheme are found suitable in terms of various evaluation parameters in experimentation.
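[Editor's note] For orientation, here is plain fuzzy c-means on intensities after histogram equalization; the paper's modified FCM additionally injects local spatial information into the objective and applies fast membership filtering, both of which this sketch omits.

```python
import numpy as np
from skimage import exposure

def fuzzy_c_means(pixels, c=3, m=2.0, iters=50, seed=0):
    """Standard FCM on a 1D intensity vector: returns cluster centers and
    the (c x N) membership matrix."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, pixels.size))
    u /= u.sum(axis=0)                           # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centers = (um @ pixels) / um.sum(axis=1)
        dist = np.abs(pixels[None, :] - centers[:, None]) + 1e-9
        u = dist ** (-2.0 / (m - 1))
        u /= u.sum(axis=0)
    return centers, u

# Enhance a low-illumination image first, then cluster its intensities.
image = np.random.rand(64, 64)             # stand-in for a medical image
enhanced = exposure.equalize_hist(image)   # histogram equalization (HE) step
centers, u = fuzzy_c_means(enhanced.ravel())
segmentation = u.argmax(axis=0).reshape(image.shape)
```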
Lung Nodule Classification Using Deep Features in CT Images
Kumar, Devinder
Wong, Alexander
Clausi, David A
2015Conference Proceedings, cited 114 times
Website
LIDC-IDRI
Computer Aided Diagnosis (CADx)
Early detection of lung cancer can help achieve a sharp decrease in the lung cancer mortality rate, which accounts for more than 17% of total cancer-related deaths. A large number of cases are encountered by radiologists on a daily basis for initial diagnosis. Computer-aided diagnosis (CAD) systems can assist radiologists by offering a "second opinion" and making the whole process faster. We propose a CAD system which uses deep features extracted from an autoencoder to classify lung nodules as either malignant or benign. We use 4303 instances containing 4323 nodules from the National Cancer Institute (NCI) Lung Image Database Consortium (LIDC) dataset to obtain an overall accuracy of 75.01%, with a sensitivity of 83.35% and a false positive rate of 0.39/patient, over 10-fold cross-validation.
Lung Cancer Detection Using Image Segmentation by means of Various Evolutionary Algorithms
Kumar, K. Senthil
Venkatalakshmi, K.
Karthikeyan, K.
Computational and Mathematical Methods in Medicine2019Journal Article, cited 0 times
LungCT-Diagnosis
CT
The objective of this paper is to explore an expedient image segmentation algorithm for medical images to curtail the physicians' effort in interpreting computer tomography (CT) scan images. Modern medical imaging modalities generate large images that are extremely difficult to analyze manually. The performance of segmentation algorithms depends on their accuracy and convergence time. There is currently a compelling need to explore and implement new evolutionary algorithms to solve the problems associated with medical image segmentation. Lung cancer is the most frequently diagnosed cancer among men across the world, and its early detection leads to appropriate treatment and can save human lives. CT is one of the most common medical imaging methods for diagnosing lung cancer. In the present study, the performance of five optimization algorithms, namely, k-means clustering, k-median clustering, particle swarm optimization, inertia-weighted particle swarm optimization, and guaranteed convergence particle swarm optimization (GCPSO), in extracting the tumor from the lung image has been implemented and analyzed. The performance of median, adaptive median, and average filters in the preprocessing stage was compared, and it was shown that the adaptive median filter is most suitable for medical CT images. Furthermore, the image contrast is enhanced by using adaptive histogram equalization. The preprocessed image with improved quality is subjected to four algorithms. The practical results are verified for 20 sample lung images using MATLAB, and it was observed that GCPSO has the highest accuracy of 95.89%.
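[Editor's note] The k-means baseline with the preferred preprocessing can be sketched as follows: adaptive histogram equalization for contrast, then clustering intensities and keeping the brightest cluster as the candidate region. GCPSO itself is not reproduced here, and the random image is a stand-in for a lung CT slice.

```python
import numpy as np
from skimage import exposure
from sklearn.cluster import KMeans

image = np.random.rand(128, 128)                 # stand-in for a lung CT slice
enhanced = exposure.equalize_adapthist(image)    # adaptive histogram equalization

# Cluster pixel intensities into 3 groups; the cluster count is illustrative.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(enhanced.reshape(-1, 1))
bright = int(np.argmax(km.cluster_centers_.ravel()))   # brightest cluster
tumour_mask = (km.labels_ == bright).reshape(image.shape)
```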
Computer-Aided Diagnosis of Life-Threatening Diseases
Kumar, Pramod
Ambekar, Sameer
Roy, Subarna
Kunchur, Pavan
2019Book Section, cited 0 times
LIDC-IDRI
Computer Aided Diagnosis (CADx)
According to the WHO, the incidence of life-threatening diseases like cancer, diabetes, and Alzheimer's disease is escalating globally. In the past few decades, traditional methods have been used to diagnose such diseases. These traditional methods often have limitations such as lack of accuracy, expense, and time-consuming procedures. Computer-aided diagnosis (CAD) aims to overcome these limitations by personalizing healthcare. Machine learning is a promising CAD method, offering effective solutions for these diseases. It is being used for early detection of cancer, diabetic retinopathy, and Alzheimer's disease, and also to identify diseases in plants. Machine learning can increase efficiency, making the process more cost-effective, with quicker delivery of results. There are several CAD algorithms (ANN, SVM, etc.) that can be trained on disease datasets and eventually make significant predictions. It has also been shown that CAD algorithms have the potential to support diagnosis and early detection of life-threatening diseases.
Recasted nonlinear complex diffusion method for removal of Rician noise from breast MRI images
Kumar, Pradeep
Srivastava, Subodh
Padma Sai, Y.
The Journal of Defense Modeling and Simulation: Applications, Methodology, Technology2021Journal Article, cited 0 times
Website
RIDER Breast MRI
Magnetic Resonance Imaging (MRI)
BREAST
The evolution of magnetic resonance imaging (MRI) enables study of the internal anatomy of the breast. It maps the physical features along with functional characteristics of selected regions. However, its mapping accuracy is affected by the presence of Rician noise, which limits qualitative and quantitative measures of the breast image. This paper proposes a recasted nonlinear complex diffusion filter for sharpening details and removing Rician noise. It follows maximum likelihood estimation along with optimal parameter selection for the complex diffusion, where the overall functionality is balanced by regularization parameters. To recast the nonlinear complex diffusion, the edge threshold constraint “k” of the diffusion coefficient is reformulated: it is replaced by the standard deviation of the image. This yields a threshold that covers a wide range, reflecting the edge-related variability present in the image, and provides automatic selection of “k” instead of a user-supplied value. A series of evaluations was conducted at different noise ratios to assess the quality improvement of the MRI. The qualitative and quantitative assessments were performed on the Reference Image Database to Evaluate Therapy Response (RIDER) Breast database, and the proposed method was compared with other existing methods. The quantitative assessment includes full-reference, human-visual-system, and no-reference image quality parameters. It is observed that the proposed method is capable of preserving edges, sharpening details, and removing Rician noise.
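The recasting described here, k replaced by the image standard deviation, slots directly into the usual nonlinear complex diffusion update (after Gilboa et al.); the explicit scheme and step size below are illustrative assumptions, not the authors' exact discretisation:

```python
# One explicit iteration of nonlinear complex diffusion with the
# automatic threshold k = std(image), as the abstract describes.
import numpy as np

def complex_diffusion_step(I, theta=np.pi / 30, dt=0.1):
    I = I.astype(complex)
    k = np.std(I.real)                    # automatic threshold from image variability
    # diffusion coefficient driven by the imaginary part (edge indicator)
    c = np.exp(1j * theta) / (1.0 + (I.imag / (k * theta)) ** 2)
    # 4-neighbour Laplacian (periodic borders via np.roll, for brevity)
    lap = (np.roll(I, 1, 0) + np.roll(I, -1, 0) +
           np.roll(I, 1, 1) + np.roll(I, -1, 1) - 4 * I)
    return I + dt * c * lap               # iterate; the real part is the denoised image
```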
A principal component fusion-based thresholded bin-stretching for CT image enhancement
Kumar, Sonu
Bhandari, Ashish Kumar
Signal, Image and Video Processing2023Journal Article, cited 0 times
Lung-PET-CT-Dx
Algorithm Development
Image denoising
Computed Tomography (CT)
Computed tomography (CT) images play an important role in the medical field for diagnosing unhealthy organs, the structure of the inner body, and other diseases. The acquisition of CT images is a challenging task because a sufficient amount of electromagnetic radiation is required to capture images with good contrast, and for various unavoidable reasons the CT machine may capture degraded images that are low in contrast, dark, or noisy. Enhancement of such CT images is therefore required to visualize the internal body structure. For enhancing degraded CT images, a novel enhancement technique is proposed based on multilevel thresholding (MLT)-based bin-stretching with a power-law transform (PLT). Initially, the distorted CT image is processed with the MLT-based bin-stretching approach to improve its contrast. A median filter is then applied to the bin-stretched image to eliminate impulse noise. Next, an adaptive PLT is applied to the filtered image to improve its overall contrast. Finally, the contrast-improved image and a histogram-equalized version of the processed image are fused using principal component analysis to control the over-enhanced portions introduced by the PLT. The final enhanced image is the fused result. Its qualitative and quantitative measures are much better than those of other recently introduced enhancement methods.
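A hedged sketch of this enhancement chain follows: a percentile contrast stretch stands in for the MLT-based bin-stretching, a fixed gamma for the adaptive PLT, and the classic eigenvector-weighted rule implements the PCA fusion. All parameter choices are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage import exposure

def pca_fuse(a, b):
    # classic PCA fusion: weight each source by the leading eigenvector
    cov = np.cov(np.vstack([a.ravel(), b.ravel()]))
    w = np.linalg.eigh(cov)[1][:, -1]
    w = np.abs(w) / np.abs(w).sum()
    return w[0] * a + w[1] * b

def enhance_ct(img, gamma=0.7):
    img = (img - img.min()) / (np.ptp(img) + 1e-8)
    # percentile stretch stands in for MLT-based bin-stretching
    stretched = exposure.rescale_intensity(img, in_range=tuple(np.percentile(img, (2, 98))))
    filtered = median_filter(stretched, size=3)        # impulse-noise removal
    plt_img = filtered ** gamma                        # power-law (gamma) transform
    he_img = exposure.equalize_hist(filtered)          # histogram-equalization branch
    return pca_fuse(plt_img, he_img)                   # fused, contrast-controlled output
```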
An Enhanced Convolutional Neural Architecture with Residual Module for MRI Brain Image Classification System
Kumar, S Mohan
Yadav, K.P.
Turkish Journal of Physiotherapy and Rehabilitation2021Journal Article, cited 0 times
Website
Deep Learning
Classification
REMBRANDT
Computer Aided Diagnosis (CADx)
Deep Neural Networks (DNN) have played an important role in the analysis of images and signals, with the ability to abstract features very deeply. In the field of medical image processing, DNNs have provided recognition methods for classifying abnormalities in medical images. In this paper, a DNN-based Magnetic Resonance Imaging (MRI) brain image classification system with a modified residual module, named the Pyramid Design of Residual (PDR) system, is developed. The conventional residual module is arranged in a pyramid-like architecture. MRI image classification tests performed on the REpository of Molecular BRAin Neoplasia DaTa (REMBRANDT) database demonstrated that the DNN-PDR system can improve accuracy. The classification test results also show notable improvements in terms of accuracy (99.5%), specificity (100%), and sensitivity (99%). A comparison between the DNN-PDR system and existing systems is also given.
Unified deep learning models for enhanced lung cancer prediction with ResNet-50-101 and EfficientNet-B3 using DICOM images
Kumar, V.
Prabha, C.
Sharma, P.
Mittal, N.
Askar, S. S.
Abouhawwash, M.
BMC Med Imaging2024Journal Article, cited 0 times
LIDC-IDRI
Humans
*Lung Neoplasms/diagnostic imaging
*Deep Learning
Algorithms
Machine Learning
Research Design
Cancer Detection
Deep Learning
EfficientNet-B3
Fusion
Lung Cancer
ResNet-101
ResNet-50
Significant advancements in machine learning algorithms have the potential to aid in the early detection and prevention of cancer, a devastating disease. However, traditional research methods face obstacles, and the amount of cancer-related information is rapidly expanding. The authors have developed a support system using three distinct deep-learning models, ResNet-50, EfficientNet-B3, and ResNet-101, along with transfer learning, to predict lung cancer, thereby contributing to health and reducing the mortality rate associated with this condition. Using a dataset of 1,000 DICOM lung cancer images from the LIDC-IDRI repository, each image is classified into one of four categories. Although deep learning is still making progress in its ability to analyze and understand cancer data, this research marks a significant step forward in the fight against cancer, promoting better health outcomes and potentially lowering the mortality rate. The Fusion Model, like all other models, achieved 100% precision in classifying squamous cells. The Fusion Model and ResNet-50 achieved a precision of 90%, closely followed by EfficientNet-B3 and ResNet-101 with slightly lower precision. To prevent overfitting and improve data collection and planning, the authors implemented a data augmentation strategy. The relationship between model training and the scores achieved was also examined to address the issue of imprecise accuracy, ultimately contributing to advancements in health and a reduction in the mortality rate associated with lung cancer.
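One plausible reading of the fusion design follows, using tf.keras pretrained backbones with pooled features concatenated into a shared four-class head; the paper does not publish its exact fusion layer, so the head sizes here are assumptions:

```python
# Hedged sketch of a dual-backbone fusion model; layer widths assumed.
from tensorflow.keras import applications, layers, Model

inp = layers.Input(shape=(224, 224, 3))
r50 = applications.ResNet50(include_top=False, pooling="avg")(inp)
eb3 = applications.EfficientNetB3(include_top=False, pooling="avg")(inp)
x = layers.Concatenate()([r50, eb3])             # fused deep features
x = layers.Dense(256, activation="relu")(x)
out = layers.Dense(4, activation="softmax")(x)   # four image categories
fusion_model = Model(inp, out)
fusion_model.compile(optimizer="adam", loss="categorical_crossentropy")
```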
Leukemia Classification using Transfer Learning Models
Kundu, Srijit
Jash, Diptayan
Dutta, Rudrajit
Kannan, Deeba
Shankar, K. C. Prabu
Yakub, Fitri
2024Conference Paper, cited 0 times
C-NMC 2019
Leukemia, a complex cancer of the blood and bone marrow, involves the uncontrolled growth of abnormal, immature white blood cells. Accurate and timely classification of leukemia subtypes is crucial for effective treatment planning and patient management. Traditionally, this relies on microscopic blood cell analysis, a laborious process susceptible to human error. Automated classification systems using machine learning hold promise for improving efficiency and accuracy. This paper investigates the application of various transfer learning models for classifying leukemia subtypes from blood images. Through rigorous experimentation and validation, we demonstrate the effectiveness and robustness of our approach in accurately classifying leukemia subtypes. Among the models tested, InceptionV3 performs the best, achieving an accuracy of 95.06%. Finally, we address the ethical considerations and future directions of this research area, highlighting its potential to improve diagnostic accuracy and expedite treatment decisions.
The Clinical Applications of 3D Polymer Gel Dosimetry in Commissioning Stereotactic Radiosurgery (SRS) and Spatially Fractionated Radiotherapy (SFRT)
Radiation therapy is used to treat various types of cancers, and the technologies for radiation delivery continue to advance rapidly. Currently, we are able to accurately target a radiation beam to a tumour volume by conforming the shape of the beam to the complex tumour shape. With that, however, comes a need for radiation dose detection tools that accurately capture the complex dose distribution in 3D space in order to verify the accuracy and precision of a treatment delivery. The purpose of this work is to implement a promising solution to this clinical challenge that utilizes a 3D NIPAM polymer gel dosimetry system with CBCT readout to verify the dosimetric and spatial accuracy of stereotactic radiosurgery (SRS) and spatially fractionated radiotherapy (SFRT) techniques.
Three main objectives of this work are: 1) to evaluate the reproducibility of a NIPAM gel dosimetry workflow between two institutions by implementing three identical verification plans, in order to demonstrate its wide-scale applicability in commissioning advanced radiotherapy techniques. In this study, two separate gel analysis pipelines were utilized based on the individual institution’s preference. 2) To commission two SRS techniques: HyperArc® (Varian Medical Systems, Palo Alto, CA) to treat brain metastases and a virtual cone technique to treat trigeminal neuralgia. In the virtual cone study, an end-to-end spatial accuracy test of the treatment delivery was performed using a 3D-printed anthropomorphic phantom, and the dosimetric accuracy of the gel dosimetry system was benchmarked against a gold-standard film dosimeter. 3) To address the fact that utilizing a traditional dosimeter solely to verify the treatment delivery accuracy of SFRT is incredibly challenging and inefficient due to the heterogeneous dose distribution generated in three-dimensional space.
Therefore, the goal of the final study is to demonstrate the application of the gel dosimetry system to commission SFRT technique. A semi-automated SFRT planning approach was utilized to generate a verification plan on a gel dosimeter for analysis.
This work presents novel applications of a gel dosimetry workflow in two advanced radiotherapy deliveries (SRS and SFRT). The dosimetric and spatial accuracy with this type of gel dosimetry analysis is invaluable for the clinical commissioning process.
A deep learning-based framework (Co-ReTr) for auto-segmentation of non-small cell lung cancer in computed tomography images
Kunkyab, T.
Bahrami, Z.
Zhang, H.
Liu, Z.
Hyde, D.
J Appl Clin Med Phys2024Journal Article, cited 0 times
Website
NSCLC-Radiomics
NSCLC Radiogenomics
Gross Tumor Volume (GTV)
Computed Tomography (CT)
Auto-segmentation
Model
Deep convolutional neural network (DCNN)
U-Net
encoder-decoder
Non-Small Cell Lung Cancer (NSCLC)
PURPOSE: Deep learning-based auto-segmentation algorithms can improve clinical workflow by defining accurate regions of interest while reducing manual labor. Over the past decade, convolutional neural networks (CNNs) have become prominent in medical image segmentation applications. However, CNNs have limitations in learning long-range spatial dependencies due to the locality of the convolutional layers. Transformers were introduced to address this challenge: in transformers with a self-attention mechanism, even the first layer of information processing makes connections between distant image locations. Our paper presents a novel framework that bridges these two unique techniques, CNNs and transformers, to segment the gross tumor volume (GTV) accurately and efficiently in computed tomography (CT) images of non-small cell lung cancer (NSCLC) patients. METHODS: Under this framework, inputs at multiple image resolutions were used with multi-depth backbones to retain the benefits of high-resolution and low-resolution images in the deep learning architecture. Furthermore, a deformable transformer was utilized to learn long-range dependencies on the extracted features. To reduce computational complexity and to efficiently process multi-scale, multi-depth, high-resolution 3D images, this transformer pays attention to small key positions, which were identified by a self-attention mechanism. We evaluated the performance of the proposed framework on an NSCLC dataset which contains 563 training images and 113 test images. Our novel deep learning algorithm was benchmarked against five other similar deep learning models. RESULTS: The experimental results indicate that our proposed framework outperforms other CNN-based, transformer-based, and hybrid methods in terms of Dice score (0.92) and Hausdorff distance (1.33). Therefore, our proposed model could potentially improve the efficiency of auto-segmentation of early-stage NSCLC during the clinical workflow. This type of framework may potentially facilitate online adaptive radiotherapy, where an efficient auto-segmentation workflow is required. CONCLUSIONS: Our deep learning framework, based on CNNs and transformers, performs auto-segmentation efficiently and could potentially assist the clinical radiotherapy workflow.
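The two reported metrics are easy to reproduce for binary masks with numpy and scipy; in this sketch `pred` and `gt` are assumed to be boolean volumes:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hausdorff(pred, gt):
    p, g = np.argwhere(pred), np.argwhere(gt)      # foreground voxel coordinates
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])
```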
Circular LSTM for Low-Dose Sinograms Inpainting
Kuo, Chin
Wei, Tzu-Ti
Chen, Jen-Jee
Tseng, Yu-Chee
IEEE Access2023Journal Article, cited 0 times
LDCT-and-Projection-data
Image denoising
Graphics Processing Units (GPU)
Unsupervised learning
Computed tomography (CT) is usually accompanied by a long scanning time and substantial patient radiation exposure. Sinograms are the basis for constructing CT scans; however, continuous sinograms may highly overlap, resulting in extra radiation exposure. This paper proposes a deep learning model to inpaint a sparse-view sinogram sequence. Because a sinogram sequence around the human body is circular in nature, we propose a circular LSTM (CirLSTM) architecture that feeds position-relevant information to our model. To evaluate the performance of our proposed method, we compared the results of our inpainted sinograms with ground truth sinograms using evaluation metrics, including the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). The SSIM values for both our proposed method and the state-of-the-art method range from 0.998 to 0.999, indicating that the prediction of structures is not challenging for either method. Our proposed CirLSTM achieves PSNR values ranging from 49 to 52, outperforming all the other compared methods. These results demonstrate the feasibility of using only interleaved sinograms to construct a complete sinogram sequence and to generate high-quality CT images. Furthermore, we validated the proposed model across different body portions and CT machine models. The results show that CirLSTM outperforms all other methods in both the across-body segment validation and across-machine validation scenarios.
Multi-center validation of an artificial intelligence system for detection of COVID-19 on chest radiographs in symptomatic patients
Kuo, M. D.
Chiu, K. W. H.
Wang, D. S.
Larici, A. R.
Poplavskiy, D.
Valentini, A.
Napoli, A.
Borghesi, A.
Ligabue, G.
Fang, X. H. B.
Wong, H. K. C.
Zhang, S.
Hunter, J. R.
Mousa, A.
Infante, A.
Elia, L.
Golemi, S.
Yu, L. H. P.
Hui, C. K. M.
Erickson, B. J.
Eur Radiol2022Journal Article, cited 0 times
Website
COVID-19-NY-SBU
COVID-19
Public health
Radiology
Thoracic
OBJECTIVES: While chest radiograph (CXR) is the first-line imaging investigation in patients with respiratory symptoms, differentiating COVID-19 from other respiratory infections on CXR remains challenging. We developed and validated an AI system for COVID-19 detection on presenting CXR. METHODS: A deep learning model (RadGenX), trained on 168,850 CXRs, was validated on a large international test set of presenting CXRs of symptomatic patients from 9 study sites (US, Italy, and Hong Kong SAR) and 2 public datasets from the US and Europe. Performance was measured by area under the receiver operating characteristic curve (AUC). Bootstrapped simulations were performed to assess performance across a range of potential COVID-19 disease prevalence values (3.33 to 33.3%). Comparison against international radiologists was performed on an independent test set of 852 cases. RESULTS: RadGenX achieved an AUC of 0.89 on 4-fold cross-validation and an AUC of 0.79 (95%CI 0.78-0.80) on an independent test cohort of 5,894 patients. DeLong's test showed statistical differences in model performance across patients from different regions (p < 0.01), disease severity (p < 0.001), gender (p < 0.001), and age (p = 0.03). Prevalence simulations showed the negative predictive value increases from 86.1% at 33.3% prevalence, to greater than 98.5% at any prevalence below 4.5%. Compared with radiologists, McNemar's test showed the model has higher sensitivity (p < 0.001) but lower specificity (p < 0.001). CONCLUSION: An AI model that predicts COVID-19 infection on CXR in symptomatic patients was validated on a large international cohort providing valuable context on testing and performance expectations for AI systems that perform COVID-19 prediction on CXR. KEY POINTS: * An AI model developed using CXRs to detect COVID-19 was validated in a large multi-center cohort of 5,894 patients from 9 prospectively recruited sites and 2 public datasets. * Differences in AI model performance were seen across region, disease severity, gender, and age. * Prevalence simulations on the international test set demonstrate the model's NPV is greater than 98.5% at any prevalence below 4.5%.
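The prevalence simulation reported in the key points follows directly from Bayes' rule, NPV = Sp(1-p) / (Sp(1-p) + (1-Se)p). A quick arithmetic check with assumed sensitivity and specificity values (the abstract does not quote the model's exact operating point) reproduces the qualitative trend:

```python
# NPV as a function of prevalence via Bayes' rule; the 0.85/0.60
# operating point is an assumption for illustration only.
def npv(sensitivity, specificity, prevalence):
    tn = specificity * (1 - prevalence)          # true-negative mass
    fn = (1 - sensitivity) * prevalence          # false-negative mass
    return tn / (tn + fn)

for p in (0.333, 0.10, 0.045):
    print(f"prevalence {p:.1%}: NPV = {npv(0.85, 0.60, p):.1%}")
```

As prevalence falls, the false-negative mass shrinks and the NPV climbs toward 1, matching the reported rise above 98.5% below 4.5% prevalence.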
Semi-Supervised Learning with Pseudo-Labeling for Pancreatic Cancer Detection on CT Scans
Kurasova, Olga
Medvedev, Viktor
Šubonienė, Aušra
Dzemyda, Gintautas
Gulla, Aistė
Samuilis, Artūras
Jagminas, Džiugas
Strupas, Kęstutis
2023Conference Paper, cited 0 times
Pancreas-CT
Semi-supervised learning
PANCREAS
Computer Aided Detection (CADe)
Deep learning techniques have recently gained increasing attention among computer science researchers and are also being applied in a wide range of fields. However, deep learning models demand huge amounts of data. Furthermore, fully supervised learning requires labeled data to solve classification, recognition, and segmentation problems. Data labeling and annotation in the medical domain are time-consuming and labor-intensive. Semi-supervised learning has demonstrated the ability to improve deep learning performance when labeled data is scarce. However, it is still an open and challenging question how to leverage not only labeled data but also the huge amount of unlabeled data. In this paper, the problem of pancreatic cancer detection on CT scans is addressed by a semi-supervised learning approach based on pseudo-labeling. Preliminary results are promising and show the potential of semi-supervised deep learning to detect pancreatic cancer at an early stage with a limited amount of labeled data.
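One round of the pseudo-labeling idea can be sketched against a generic scikit-learn-style classifier; the confidence threshold below is a common choice, not the paper's:

```python
import numpy as np

def pseudo_label_round(model, X_lab, y_lab, X_unlab, threshold=0.9):
    model.fit(X_lab, y_lab)                        # supervised warm-up
    probs = model.predict_proba(X_unlab)
    keep = probs.max(axis=1) >= threshold          # trust confident predictions only
    X_aug = np.concatenate([X_lab, X_unlab[keep]])
    y_aug = np.concatenate([y_lab, probs[keep].argmax(axis=1)])
    return model.fit(X_aug, y_aug)                 # retrain on labeled + pseudo-labeled
```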
Predicting the MGMT Promoter Methylation Status in T2-FLAIR Magnetic Resonance Imaging Scans Using Machine Learning
Kurbiel, Martyna
Wijata, Agata
Nalepa, Jakub
2024Conference Paper, cited 0 times
BraTS 2021
RSNA-ASNR-MICCAI BraTS 2021
Radiomics
MGMT methylation status
Glioblastoma
Classification
Glioblastoma is the most common form of brain cancer in adults, and is characterized by one of the worst prognoses, with median survival being less than one year. Magnetic resonance imaging (MRI) plays a key role in detecting and objectively tracking the disease by extracting quantifiable parameters of the tumor, such as its volume or bidimensional measurements. However, it has been shown that the presence of a specific molecular marker in a lesion, namely promoter methylation of the DNA repair enzyme O6-methylguanine-DNA methyltransferase (MGMT), may be effectively used to predict the patient’s responsiveness to chemotherapy. The invasive process of analyzing a tissue sample to verify the MGMT promoter methylation status is time-consuming, and may require performing multiple surgical interventions in longitudinal studies. Thus, building non-invasive techniques for predicting the genetic subtype of glioblastoma is of utmost practical importance, not only to accelerate the overall process of determining the MGMT promoter methylation status in glioblastoma patients, but also to minimize the number of necessary surgeries. In this paper, we tackle this problem and propose an end-to-end machine learning classification pipeline benefitting from radiomic features extracted from brain MRI scans, and validate it over the well-established RSNA-MICCAI Brain Tumor Radiogenomic Classification benchmark dataset.
Classification of magnetic resonance images for brain tumour detection
Kurmi, Yashwant
Chaurasia, Vijayshri
IET Image Processing2020Journal Article, cited 0 times
REMBRANDT
Image segmentation of magnetic resonance images (MRI) is a crucial process for visualisation and examination of abnormal tissues, especially during clinical analysis. The complexity and variations of tumour structure magnify the challenges in automated detection of brain tumours in MRIs. This study presents an automatic lesion recognition method for MRI, followed by classification. In the proposed multistage image segmentation method, the initial region of interest is determined from low-level information using keypoint descriptors. A set of linear filters is used to transform the low-level information into higher-level image features, and the features and filter training data are combined to track the tumour region. The authors adopt a possibilistic model for region growing, and a disparity map in the refinement process to derive a consistent boundary. Further, features are extracted using the Fisher vector and an autoencoder. A set of handcrafted features is also extracted from a segmentation-based localised region to train and test support vector machine and multilayer perceptron classifiers. Experiments performed using five MRI datasets confirm the superiority of the proposal over state-of-the-art methods, with average segmentation and classification accuracies of 94.5% and 91.76%, respectively.
A Novel Deep Learning Model for Pancreas Segmentation: Pascal U-Net
Kurnaz, Ender
Ceylan, Rahime
Bozkurt, Mustafa Alper
Cebeci, Hakan
Koplay, Mustafa
Inteligencia Artificial2024Journal Article, cited 0 times
Website
Pancreas-CT
Convolutional Neural Network (CNN)
U-Net
Segmentation
Organ segmentation
A robust and reliable automated organ segmentation from abdominal images is a crucial problem in both quantitative imaging analysis and computer-aided diagnosis. Automatic pancreas segmentation from abdominal CT images is an especially challenging task for two main reasons: (1) high variability in anatomy (such as shape and size) and location across different patients, and (2) low contrast with neighboring tissues. For these reasons, achieving high accuracy in pancreas segmentation is a hard image segmentation problem. In this paper, we propose a novel convolutional neural network-based deep learning model called Pascal U-Net for pancreas segmentation. Performance of the proposed model is evaluated on The Cancer Imaging Archive (TCIA) Pancreas CT database and an abdominal CT dataset from the Selcuk University Medicine Faculty Radiology Department. During the experimental studies, the k-fold cross-validation method is used. Furthermore, results of the proposed model are compared with those of the traditional U-Net. Comparing the results obtained by Pascal U-Net and the traditional U-Net for different batch sizes and fold numbers shows that experiments on both datasets validate the effectiveness of the Pascal U-Net model for pancreas segmentation.
KiT-RT: An Extendable Framework for Radiative Transfer and Therapy
Kusch, Jonas
Schotthöfer, Steffen
Stammer, Pia
Wolters, Jannick
Xiao, Tianbai
2023Journal Article, cited 0 times
Lung-PET-CT-Dx
In this article, we present Kinetic Transport Solver for Radiation Therapy (KiT-RT), an open source C++-based framework for solving kinetic equations in therapy applications available at https://github.com/CSMMLab/KiT-RT . This software framework aims to provide a collection of classical deterministic solvers for unstructured meshes that allow for easy extendability. Therefore, KiT-RT is a convenient base to test new numerical methods in various applications and compare them against conventional solvers. The implementation includes spherical harmonics, minimal entropy, neural minimal entropy, and discrete ordinates methods. Solution characteristics and efficiency are presented through several test cases ranging from radiation transport to electron radiation therapy. Due to the variety of included numerical methods and easy extendability, the presented open source code is attractive for both developers, who want a basis to build their numerical solvers, and users or application engineers, who want to gain experimental insights without directly interfering with the codebase.
Conditional Generative Adversarial Networks for low-dose CT image denoising aiming at preservation of critical image content
Kusters, K. C.
Zavala-Mondragon, L. A.
Bescos, J. O.
Rongen, P.
de With, P. H. N.
van der Sommen, F.
Annu Int Conf IEEE Eng Med Biol Soc2021Journal Article, cited 0 times
LDCT-and-Projection-data
Generative Adversarial Network (GAN)
Algorithms
Humans
*Image Processing
Computer-Assisted
Signal-To-Noise Ratio
*Tomography
X-Ray Computed
X-ray Computed Tomography (CT) is an imaging modality where patients are exposed to potentially harmful ionizing radiation. To limit patient risk, reduced-dose protocols are desirable, which inherently lead to an increased noise level in the reconstructed CT scans. Consequently, noise reduction algorithms are indispensable in the reconstruction processing chain. In this paper, we propose to leverage a conditional Generative Adversarial Network (cGAN) model to translate CT images from low to routine dose. However, when aiming to produce realistic images, such generative models may alter critical image content. Therefore, we propose to employ a frequency-based separation of the input prior to applying the cGAN model, in order to limit the cGAN to high-frequency bands, while leaving low-frequency bands untouched. The proposed method is compared to a state-of-the-art model, both within the cGAN framework and in a single-network setting. The proposed method generates visually superior results compared to the single-network model and the cGAN model in terms of quality of texture and preservation of fine structural details. It also appeared that the PSNR, SSIM and TV metrics are less important than a careful visual evaluation of the results. The obtained results demonstrate the relevance of defining and separating the input image into desired and undesired content, rather than blindly denoising entire images. This study shows promising results for further investigation of generative models towards finding a reliable deep learning-based noise reduction algorithm for low-dose CT acquisition.
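The frequency-based separation can be illustrated with a simple Gaussian low-pass split; in this sketch `denoiser` stands for the trained cGAN generator and the cutoff sigma is an assumption:

```python
from scipy.ndimage import gaussian_filter

def band_limited_denoise(ct_image, denoiser, sigma=3.0):
    low = gaussian_filter(ct_image, sigma)   # low-frequency band, left untouched
    high = ct_image - low                    # noisy high-frequency residual
    return low + denoiser(high)              # only the high band passes the model
```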
COVID-19 Lesion Segmentation Framework for the Contrast-Enhanced CT in the Absence of Contrast-Enhanced CT Annotations
Medical imaging is a dynamic domain where new acquisition protocols are regularly developed and employed to meet changing clinical needs. Deep learning models for medical image segmentation have proven to be a valuable tool for medical image processing. Creating such a model from scratch requires a lot of effort in terms of annotating new types of data and model training. Therefore, the amount of annotated training data for the new imaging protocol might still be limited. In this work we propose a framework for segmentation of images acquired with a new imaging protocol (contrast-enhanced lung CT) that does not require annotating training data in the new target domain. Instead, the framework leverages previously developed models, data and annotations in a related source domain. Using contrast-enhanced lung CT data as the target data, we demonstrate that unpaired image translation from the non-contrast-enhanced source data, combined with self-supervised pretraining, achieves a 0.726 Dice score for the COVID-19 lesion segmentation task on the target data, without the need to annotate any target data for model training.
Combining Generative Models for Multifocal Glioma Segmentation and Registration
In this paper, we propose a new method for simultaneously segmenting brain scans of glioma patients and registering these scans to a normal atlas. Performing joint segmentation and registration for brain tumors is very challenging when tumors include multifocal masses and have complex shapes with heterogeneous textures. Our approach grows tumors for each mass from multiple seed points using a tumor growth model and modifies a normal atlas into one with tumors and edema using the combined results of grown tumors. We also generate a tumor shape prior via the random walk with restart, utilizing multiple tumor seeds as initial foreground information. We then incorporate this shape prior into an EM framework which estimates the mapping between the modified atlas and the scans, posteriors for each tissue label, and the tumor growth model parameters. We apply our method to the BRATS 2013 leaderboard dataset to evaluate segmentation performance. Our method shows the best performance among all participants.
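The tumor shape prior rests on random walk with restart (RWR), whose stationary solution satisfies r = (1-c) W r + c e for a column-stochastic affinity matrix W, restart probability c, and seed indicator e. A minimal power-iteration sketch follows; W and the seed indices are assumed given:

```python
import numpy as np

def random_walk_with_restart(W, seeds, restart=0.15, tol=1e-8):
    n = W.shape[0]
    W = W / W.sum(axis=0, keepdims=True)   # column-stochastic transitions
    e = np.zeros(n)
    e[list(seeds)] = 1.0 / len(seeds)      # restart mass on the tumor seeds
    r = e.copy()
    while True:
        r_next = (1 - restart) * W @ r + restart * e
        if np.abs(r_next - r).sum() < tol:
            return r_next                  # per-node tumor affinity (shape prior)
        r = r_next
```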
A 2.5D convolutional neural network for HPV prediction in advanced oropharyngeal cancer
La Greca Saint-Esteven, A.
Bogowicz, M.
Konukoglu, E.
Riesterer, O.
Balermpas, P.
Guckenberger, M.
Tanadini-Lang, S.
van Timmeren, J. E.
Comput Biol Med2022Journal Article, cited 0 times
Website
OPC-Radiomics
Head-Neck-Radiomics-HN1
HNSCC
Computed Tomography (CT)
Convolutional Neural Network (CNN)
Deep Learning
Human papilloma virus
Oropharyngeal cancer
HEADNECK
BACKGROUND: Infection with human papilloma virus (HPV) is one of the most relevant prognostic factors in advanced oropharyngeal cancer (OPC) treatment. In this study we aimed to assess the diagnostic accuracy of a deep learning-based method for HPV status prediction in computed tomography (CT) images of advanced OPC. METHOD: An internal dataset and three public collections were employed (internal: n = 151, HNC1: n = 451; HNC2: n = 80; HNC3: n = 110). Internal and HNC1 datasets were used for training, whereas HNC2 and HNC3 collections were used as external test cohorts. All CT scans were resampled to a 2 mm³ resolution and a sub-volume of 72x72x72 pixels was cropped on each scan, centered around the tumor. Then, a 2.5D input of size 72x72x3 pixels was assembled by selecting the 2D slice containing the largest tumor area along the axial, sagittal and coronal planes, respectively. The convolutional neural network employed consisted of the first 5 modules of the Xception model and a small classification network. Ten-fold cross-validation was applied to evaluate training performance. At test time, soft majority voting was used to predict HPV status. RESULTS: A final training mean [range] area under the curve (AUC) of 0.84 [0.76-0.89], accuracy of 0.76 [0.64-0.83] and F1-score of 0.74 [0.62-0.83] were achieved. AUC/accuracy/F1-score values of 0.83/0.75/0.69 and 0.88/0.79/0.68 were achieved on the HNC2 and HNC3 test sets, respectively. CONCLUSION: Deep learning was successfully applied and validated in two external cohorts to predict HPV status in CT images of advanced OPC, proving its potential as a support tool in cancer precision medicine.
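The 2.5D assembly in the METHOD section is concrete enough to sketch directly; here `volume` is the 72x72x72 crop and `mask` its tumor segmentation (assumed available):

```python
import numpy as np

def assemble_25d(volume, mask):
    # pick, per anatomical plane, the slice with the largest tumor area
    channels = []
    for axis in range(3):                              # axial, sagittal, coronal
        other = tuple(a for a in range(3) if a != axis)
        idx = int(np.argmax(mask.sum(axis=other)))     # largest-tumor-area slice
        channels.append(np.take(volume, idx, axis=axis))
    return np.stack(channels, axis=-1)                 # 72x72x3 network input
```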
Diagnostic Accuracy and Reliability of Deep Learning-Based Human Papillomavirus Status Prediction in Oropharyngeal Cancer
Oropharyngeal cancer (OPC) patients with associated human papillomavirus (HPV) infection generally present more favorable outcomes than HPV-negative patients and, consequently, their treatment with radiation therapy may be potentially de-escalated. The diagnostic accuracy of a deep learning (DL) model to predict HPV status on computed tomography (CT) images was evaluated in this study, together with its ability to perform unsupervised heatmap-based localization of relevant regions in OPC and HPV infection, i.e., the primary tumor and lymph nodes, as a measure of its reliability. The dataset consisted of 767 patients from one internal and two public collections from The Cancer Imaging Archive and was split into training, validation and test sets using the ratio 60–20–20. Images were resampled to a resolution of 2 mm³ and a sub-volume of 96 pixels³ was automatically cropped, which spanned from the nose until the start of the lungs. Models Genesis was fine-tuned for the classification task. Grad-CAM and Score-CAM were applied to the test subjects that belonged to the internal cohort (n = 24), and the overlap and Dice coefficients between the resulting heatmaps and the planning target volumes (PTVs) were calculated. Final train/validation/test area-under-the-curve (AUC) values of 0.9/0.87/0.87, accuracies of 0.83/0.82/0.79, and F1-scores of 0.83/0.79/0.74 were achieved. The reliability analysis showed an increased focus on dental artifacts in HPV-positive patients, whereas promising overlaps and moderate Dice coefficients with the PTVs were obtained for HPV-negative cases. These findings prove the necessity of performing reliability studies before a DL model is implemented in a real clinical setting, even if there is optimal diagnostic accuracy.
Knowledge Distillation for Brain Tumor Segmentation
Lachinov, Dmitrii
Shipunova, Elena
Turlapov, Vadim
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
The segmentation of brain tumors in multimodal MRIs is one of the most challenging tasks in medical image analysis. The recent state-of-the-art algorithms solving this task are based on machine learning approaches and deep learning in particular. The amount of data used for training such models and its variability is a keystone for building an algorithm with high representation power. In this paper, we study the relationship between the performance of the model and the amount of data employed during the training process. On the example of the brain tumor segmentation challenge, we compare the model trained with labeled data provided by challenge organizers, and the same model trained in an omni-supervised manner using additional unlabeled data annotated with an ensemble of heterogeneous models. As a result, a single model trained with additional data achieves performance close to the ensemble of multiple models and outperforms individual methods.
MRI analysis takes a central position in brain tumor diagnosis and treatment, so its precise evaluation is crucially important. However, its 3D nature imposes several challenges, and the analysis is often performed on 2D projections, which reduces complexity but increases bias. On the other hand, time-consuming 3D evaluation, such as segmentation, can provide precise estimates of a number of valuable spatial characteristics, giving us an understanding of the course of the disease. Recent studies focusing on the segmentation task report superior performance of deep learning methods compared to classical computer vision algorithms, but it remains a challenging problem. In this paper we present a deep cascaded approach for automatic brain tumor segmentation. Similar to recent methods for object detection, our implementation is based on neural networks; we propose modifications to the 3D UNet architecture and augmentation strategy to efficiently handle multimodal MRI input, and in addition we introduce an approach to enhance segmentation quality with context obtained from models of the same topology operating on downscaled data. We evaluate the presented approach on the BraTS 2018 dataset and achieve promising results on the test dataset, placing 14th with Dice scores of 0.720/0.878/0.785 for enhancing tumor, whole tumor, and tumor core segmentation, respectively.
Multi-view Learning with Two-stage Training of 2D CNNs for Tumor Sub-regions Segmentation from 3D Brain MRI Volumes
Lahoti, Ritu
Sinha, Neelam
Reddy, Vinod
2022Conference Paper, cited 0 times
BraTS-TCGA-GBM
BraTS 2019
In this study, we have performed brain tumor segmentation on the publicly available BraTS 2019 dataset. The training data contains multi-modal 3D volumetric brain MRI data for 259 High Grade Glioma (HGG) cases and 76 Low Grade Glioma (LGG) cases. The focus is on classifying each brain voxel as belonging to one of the tumor classes (whole tumor, tumor core, enhancing tumor) or the non-tumor class. For this, two-stage training is performed on three 2D CNN models, one for each of the sagittal, coronal and axial views. The models are first trained for binary segmentation of the whole brain tumor and then fine-tuned for the tumor sub-region segmentation tasks. Moreover, the models are trained on only tumor-bearing 2D slices of the brain MR volumes, and, since there are more HGG cases than LGG cases, the LGG data are augmented more than the HGG data to make the segmentation models generalize better to both tumor grades. The multi-view segmentation outputs of the test data are integrated to obtain the final segmentation results. The entire approach has been evaluated on the BraTS training (335 cases) and validation (125 cases) datasets and compared with results of existing methods on the same dataset.
Domain-Aware Dual Attention for Generalized Medical Image Segmentation on Unseen Domains
Lai, Huilin
Luo, Ye
Li, Bo
Zhang, Guokai
Lu, Jianwei
IEEE Journal of Biomedical and Health Informatics2023Journal Article, cited 0 times
ISBI-MR-Prostate-2013
Recently, there has been significant progress in medical image segmentation utilizing deep learning techniques. However, these achievements largely rely on the supposition that the source and target domain data are identically distributed, and the direct application of related methods without addressing the distribution shift results in dramatic degradation in realistic clinical environments. Current approaches concerning the distribution shift either require the target domain data in advance for adaptation, or focus only on the distribution shift across domains while ignoring the intra-domain data variation. This paper proposes a domain-aware dual attention network for the generalized medical image segmentation task on unseen target domains. To alleviate the severe distribution shift between the source and target domains, an Extrinsic Attention (EA) module is designed to learn image features with knowledge originating from multi-source domains. Moreover, an Intrinsic Attention (IA) module is also proposed to handle the intra-domain variation by individually modeling the pixel-region relations derived from an image. The EA and IA modules complement each other well in terms of modeling the extrinsic and intrinsic domain relationships, respectively. To validate the model effectiveness, comprehensive experiments are conducted on various benchmark datasets, including the prostate segmentation in magnetic resonance imaging (MRI) scans and the optic cup/disc segmentation in fundus images. The experimental results demonstrate that our proposed model effectively generalizes to unseen domains and exceeds the existing advanced approaches.
A Radiogenomic multimodal and whole-transcriptome sequencing for preoperative prediction of axillary lymph node metastasis and drug therapeutic response in breast cancer: a retrospective, machine learning and international multi-cohort study
Lai, J.
Chen, Z.
Liu, J.
Zhu, C.
Huang, H.
Yi, Y.
Cai, G.
Liao, N.
Int J Surg2024Journal Article, cited 0 times
Website
TCGA-BRCA
Duke-Breast-Cancer-MRI
Radiogenomics
Support Vector Machine (SVM)
BACKGROUND: Axillary lymph node (ALN) status serves as a crucial prognostic indicator in breast cancer (BC). The aim of this study was to construct a radiogenomic multimodal model, based on machine learning (ML) and whole-transcriptome sequencing (WTS), to accurately evaluate preoperatively the risk of ALN metastasis (ALNM) and drug therapeutic response, and to avoid unnecessary axillary surgery in BC patients. METHODS: In this study, we conducted a retrospective analysis of 1078 BC patients from The Cancer Genome Atlas (TCGA), The Cancer Imaging Archive (TCIA), and the Foshan cohort. These patients were divided into the TCIA cohort (N=103), TCIA validation cohort (N=51), Duke cohort (N=138), Foshan cohort (N=106), and TCGA cohort (N=680). Radiological features were extracted from BC radiological images, and differential gene expression was calibrated using WTS technology. A support vector machine (SVM) model was employed to screen radiological and genetic features, and a multimodal model was established based on radiogenomic and clinicopathological features to predict ALNM and stratify risk. The accuracy of the model predictions was assessed using the area under the curve (AUC), and the clinical benefit was measured using decision curve analysis (DCA). Risk stratification analysis of BC patients was performed by gene set enrichment analysis (GSEA), differential comparison of immune checkpoint gene expression, and drug sensitivity testing. RESULTS: For the prediction of ALNM, the rad-score was able to significantly differentiate between ALN- and ALN+ patients in both the Duke and Foshan cohorts (P<0.05). Similarly, the gene-score was able to significantly differentiate between ALN- and ALN+ patients in the TCGA cohort (P<0.05). The radiogenomic multimodal nomogram demonstrated satisfactory performance in the TCIA cohort (AUC 0.82, 95% CI: 0.74-0.91) and the TCIA validation cohort (AUC 0.77, 95% CI: 0.63-0.91). In the risk sub-stratification analysis, there were significant differences in gene pathway enrichment between the high- and low-risk groups (P<0.05). Additionally, different risk groups may exhibit varying treatment responses to chemotherapy (including doxorubicin, methotrexate and lapatinib) (P<0.05). CONCLUSION: Overall, the radiogenomic multimodal model employs multimodal data, including radiological images, genetic, and clinicopathological typing. The radiogenomic multimodal nomogram can precisely predict ALNM and drug therapeutic response in BC patients.
Binary Classification for Lung Nodule Based on Channel Attention Mechanism
In order to effectively handle the problem of tumor detection on the LUNA16 dataset, in this study we present a new data augmentation methodology to address the imbalance between the numbers of positive and negative candidates. Furthermore, a new deep learning model, ASS (a model that combines Convnet sub-attention with softmax loss), is also proposed and evaluated on patches of different sizes from LUNA16. Data enrichment techniques are implemented in two ways: off-line augmentation increases the number of images based on the image under consideration, and on-line augmentation increases the number of images by rotating the image at four angles (0°, 90°, 180°, and 270°). We build candidate boxes of various sizes based on the coordinates of each candidate, and these candidate boxes are used to demonstrate the usefulness of the suggested ASS model. The results of cross-testing (with four cases: case 1, ASS trained and tested on a dataset of size 50 × 50; case 2, using ASS trained on a dataset of size 50 × 50 to test a dataset of size 100 × 100; case 3, ASS trained and tested on a dataset of size 100 × 100; and case 4, using ASS trained on a dataset of size 100 × 100 to test a dataset of size 50 × 50) show that the proposed ASS model is feasible.
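The four-angle on-line augmentation is straightforward with numpy; each candidate patch yields four lossless right-angle rotations:

```python
import numpy as np

def four_angle_augment(patch):
    return [np.rot90(patch, k) for k in range(4)]   # 0, 90, 180, 270 degrees
```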
Detection of Lung Nodules on CT Images based on the Convolutional Neural Network with Attention Mechanism
Lai, Khai Dinh
Nguyen, Thuy Thanh
Le, Thai Hoang
2021Journal Article, cited 0 times
LIDC-IDRI
The development of computer-aided diagnosis (CAD) systems for automatic lung nodule detection through thoracic computed tomography (CT) scans has been an active area of research in recent years. Lung Nodule Analysis 2016 (the LUNA16 challenge) encourages researchers to suggest a variety of successful nodule detection algorithms based on two key stages: (1) candidate detection, (2) false-positive reduction. In the scope of this paper, a new convolutional neural network (CNN) architecture is proposed to efficiently solve the second challenge of LUNA16. Specifically, we find that typical CNN models pay little attention to the characteristics of the input data. To address this constraint, we apply an attention mechanism: we propose a technique that attaches a Squeeze-and-Excitation block (SE-Block) after each convolution layer of the CNN to emphasize important feature maps related to the characteristics of the input image, forming an Attention sub-Convnet. The new CNN architecture is constructed by connecting the Attention sub-Convnets. In addition, we analyze the choice between triplet loss and softmax loss functions to boost performance; based on this analysis, softmax loss is selected for the CNN training phase and triplet loss for the testing phase. Our suggested CNN is used to minimize the number of redundant candidates in order to improve the efficiency of false-positive reduction on the LUNA database. The results obtained in comparison to previous models indicate the feasibility of the proposed model.
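The building block attached after each convolution layer is the standard Squeeze-and-Excitation block of Hu et al.; the sketch below uses the usual reduction ratio of 16, which may differ from the authors' setting:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):              # x: (N, C, H, W) feature maps
        w = x.mean(dim=(2, 3))         # squeeze: global average pooling
        w = self.fc(w)[..., None, None]
        return x * w                   # excitation: channel-wise reweighting
```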
From Pixel to Cancer: Cellular Automata in Computed Tomography
Lai, Yuxiang
Chen, Xiaoxi
Wang, Angtian
Yuille, Alan
Zhou, Zongwei
2024Book Section, cited 0 times
Pancreas-CT
AI for cancer detection encounters the bottleneck of data scarcity, annotation difficulty, and low prevalence of early tumors. Tumor synthesis seeks to create artificial tumors in medical images, which can greatly diversify the data and annotations for AI training. However, current tumor synthesis approaches are not applicable across different organs due to their need for specific expertise and design. This paper establishes a set of generic rules to simulate tumor development. Each cell (pixel) is initially assigned a state between zero and ten to represent the tumor population, and a tumor can be developed based on three rules to describe the process of growth, invasion, and death. We apply these three generic rules to simulate tumor development—from pixel to cancer—using cellular automata. We then integrate the tumor state into the original computed tomography (CT) images to generate synthetic tumors across different organs. This tumor synthesis approach allows for sampling tumors at multiple stages and analyzing tumor-organ interaction. Clinically, a reader study involving three expert radiologists reveals that the synthetic tumors and their developing trajectories are convincingly realistic. Technically, we analyze and simulate tumor development at various stages using 9,262 raw, unlabeled CT images sourced from 68 hospitals worldwide. The performance in segmenting tumors in the liver, pancreas, and kidneys exceeds prevailing literature benchmarks, underlining the immense potential of tumor synthesis, especially for earlier cancer detection. The code and models are available at https://github.com/MrGiovanni/Pixel2Cancer.
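The three generic rules can be read as a simple stochastic cellular-automaton update; the sketch below is one illustrative 2D interpretation with states 0-10, not the authors' released implementation (their code is at the linked repository):

```python
# Illustrative CA reading of the growth/invasion/death rules; all
# probabilities and the neighbourhood choice are assumptions.
import numpy as np

def ca_step(state, rng, p_grow=0.4, p_invade=0.1, p_die=0.01):
    new = state.copy()
    for y, x in np.argwhere(state > 0):
        if state[y, x] < 10 and rng.random() < p_grow:
            new[y, x] += 1                            # growth: population rises
        if state[y, x] == 10 and rng.random() < p_invade:
            dy, dx = rng.integers(-1, 2, size=2)      # invasion: spill into a neighbour
            ny, nx = np.clip([y + dy, x + dx], 0, np.array(state.shape) - 1)
            new[ny, nx] = max(new[ny, nx], 1)
        if rng.random() < p_die:
            new[y, x] = max(new[y, x] - 1, 0)         # death: population decays
    return new

state = np.zeros((64, 64), dtype=int)
state[32, 32] = 1                                     # a single seeded tumor cell
rng = np.random.default_rng(0)
for _ in range(200):
    state = ca_step(state, rng)                       # simulated tumor development
```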
Acute Tumor Transition Angle on Computed Tomography Predicts Chromosomal Instability Status of Primary Gastric Cancer: Radiogenomics Analysis from TCGA and Independent Validation
Lai, Ying-Chieh
Yeh, Ta-Sen
Wu, Ren-Chin
Tsai, Cheng-Kun
Yang, Lan-Yan
Lin, Gigin
Kuo, Michael D
Cancers2019Journal Article, cited 0 times
TCGA-STAD
Radiogenomics
Chromosomal instability (CIN) of gastric cancer is correlated with distinct outcomes. This study aimed to investigate the role of computed tomography (CT) imaging traits in predicting the CIN status of gastric cancer. We screened 443 patients in the Cancer Genome Atlas gastric cancer cohort to filter 40 patients with complete CT imaging and genomic data as the training cohort. CT imaging traits were subjected to logistic regression to select independent predictors for the CIN status. For the validation cohort, we prospectively enrolled 18 gastric cancer patients for CT and tumor genomic analysis. The imaging predictors were tested in the validation cohort using receiver operating characteristic curve (ROC) analysis. Thirty patients (75%) in the training cohort and 9 patients (50%) in the validation cohort had CIN subtype gastric cancers. Smaller tumor diameter (p = 0.017) and acute tumor transition angle (p = 0.045) independently predict CIN status in the training cohort. In the validation cohort, acute tumor transition angle demonstrated the highest accuracy, sensitivity, and specificity of 88.9%, 88.9%, and 88.9%, respectively, and areas under ROC curve of 0.89. In conclusion, this pilot study showed acute tumor transition angle on CT images may predict the CIN status of gastric cancer.
Molecular subtype classification of low‐grade gliomas using magnetic resonance imaging‐based radiomics and machine learning
Lam, Luu Ho Thanh
Thi, Duyen
Diep, Doan Thi Ngoc
Le Nhu Nguyet, Dang
Truong, Quang Dinh
Tri, Tran Thanh
Thanh, Huynh Ngoc
Le, Nguyen Quoc Khanh
NMR in Biomedicine2022Journal Article, cited 0 times
TCGA-LGG
In 2016, the World Health Organization (WHO) updated the glioma classification by incorporating molecular biology parameters, including for low-grade glioma (LGG). In the new scheme, LGGs have three molecular subtypes: isocitrate dehydrogenase (IDH)-mutated 1p/19q-codeleted, IDH-mutated 1p/19q-noncodeleted, and IDH-wild-type 1p/19q-noncodeleted entities. This work proposes a model for predicting LGG molecular subtypes using magnetic resonance imaging (MRI). MR images were segmented and converted into radiomics features, thereby providing predictive information for brain tumor classification. With 726 raw features obtained from the feature extraction procedure, we developed a hybrid machine learning-based radiomics model, incorporating a genetic algorithm and an eXtreme Gradient Boosting (XGBoost) classifier, to ascertain 12 optimal features for tumor classification. To resolve imbalanced data, the synthetic minority oversampling technique (SMOTE) was applied in our study. The XGBoost algorithm outperformed the other algorithms on the training dataset with an accuracy value of 0.885. We then evaluated the XGBoost model further, achieving an overall accuracy of 0.6905 for the three-subtype classification of LGGs on an external validation dataset. Our model is among just a few to have addressed the three-subtype LGG classification challenge with high accuracy compared with previous studies performing similar work.
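The tabular stage of this pipeline (after feature extraction and the GA-based selection, which are omitted here) can be sketched with imblearn and xgboost; hyperparameters are library defaults rather than the paper's tuned values:

```python
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

def fit_subtype_classifier(X, y):
    # rebalance the three molecular subtypes before training
    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
    clf = XGBClassifier(objective="multi:softprob", eval_metric="mlogloss")
    print("CV accuracy:", cross_val_score(clf, X_bal, y_bal, cv=5).mean())
    return clf.fit(X_bal, y_bal)
```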
Development and validation of bone-suppressed deep learning classification of COVID-19 presentation in chest radiographs
Lam, Ngo Fung Daniel
Sun, Hongfei
Song, Liming
Yang, Dongrong
Zhi, Shaohua
Ren, Ge
Chou, Pak Hei
Wan, Shiu Bun Nelson
Wong, Man Fung Esther
Chan, King Kwong
Tsang, Hoi Ching Hailey
Kong, Feng-Ming Spring
Wáng, Yì Xiáng J
Qin, Jing
Chan, Lawrence Wing Chi
Ying, Michael
Cai, Jing
Quantitative Imaging in Medicine and Surgery2022Journal Article, cited 0 times
MIDRC-RICORD-1C
Background: Coronavirus disease 2019 (COVID-19) is a pandemic disease. Fast and accurate diagnosis of COVID-19 from chest radiography may enable more efficient allocation of scarce medical resources and hence improved patient outcomes. Deep learning classification of chest radiographs may be a plausible step towards this. We hypothesize that bone suppression of chest radiographs may improve the performance of deep learning classification of COVID-19 phenomena in chest radiographs.
Methods: Two bone suppression methods (Gusarev et al. and Rajaraman et al.) were implemented. The Gusarev and Rajaraman methods were trained on 217 pairs of normal and bone-suppressed chest radiographs from the X-ray Bone Shadow Suppression dataset (https://www.kaggle.com/hmchuong/xray-bone-shadow-supression). Two classifier methods with different network architectures were implemented. Binary classifier models were trained on the public RICORD-1c and RSNA Pneumonia Challenge datasets. An external test dataset was created retrospectively from a set of 320 COVID-19 positive patients from Queen Elizabeth Hospital (Hong Kong, China) and a set of 518 non-COVID-19 patients from Pamela Youde Nethersole Eastern Hospital (Hong Kong, China), and used to evaluate the effect of bone suppression on classifier performance. Classification performance, quantified by sensitivity, specificity, negative predictive value (NPV), accuracy and area under the receiver operating curve (AUC), for non-suppressed radiographs was compared to that for bone suppressed radiographs. Some of the pre-trained models used in this study are published at (https://github.com/danielnflam).
Results: Bone suppression of external test data was found to significantly (P<0.05) improve AUC for one classifier architecture [from 0.698 (non-suppressed) to 0.732 (Rajaraman-suppressed)]. For the other classifier architecture, suppression did not significantly (P>0.05) improve or worsen classifier performance.
Conclusions: Rajaraman suppression significantly improved classification performance in one classification architecture, and did not significantly worsen classifier performance in the other classifier architecture. This research could be extended to explore the impact of bone suppression on classification of different lung pathologies, and the effect of other image enhancement techniques on classifier performance.
Textural Analysis of Tumour Imaging: A Radiomics Approach
Conventionally, tumour characteristics are assessed by performing a biopsy. These biopsies are invasive and subject to the problem of tumour heterogeneity. However, analysis of imaging data may render the need for such biopsies obsolete. This master’s dissertation describes how images of tumour masses can be post-processed to classify the tumours into a variety of clinical response classes. Tumour images obtained using both computed tomography and magnetic resonance imaging are analysed. The analysis of these images uses a radiomics approach, which converts the imaging data into a high-dimensional mineable feature space. The features considered are first-order statistics, texture features, wavelet-based features, and shape parameters. Post-processing techniques applied to this feature space include k-means clustering, assessment of stability and prognostic performance, and machine learning techniques; both random forests and neural networks are included. Results from these analyses show that the radiomics features can be correlated with different clinical response classes as well as serve as input data to create predictive models with correct prediction rates up to 63.9% in CT and 66.0% in MRI. Furthermore, a radiomics signature can be created that consists of four features and is capable of predicting clinical response factors with almost the same accuracy as obtained using the entire data space. Keywords: radiomics, texture analysis, lung tumour, CT, brain tumour, MRI, clustering, random forest, neural network, machine learning, radiomics signature, biopsy, tumour heterogeneity.
A simple texture feature for retrieval of medical images
Lan, Rushi
Zhong, Si
Liu, Zhenbing
Shi, Zhuo
Luo, Xiaonan
Multimedia Tools and Applications2017Journal Article, cited 2 times
Website
Imaging features
Classification
Algorithm Development
Texture is an important attribute of medical images and has been applied in many medical image applications. This paper proposes a simple approach to employing the texture features of medical images for retrieval. The developed approach first filters the medical images using different Gabor and Schmid filters, and then uniformly partitions the filtered images into non-overlapping patches. These operations provide extensive local texture information about the medical images. The bag-of-words model is finally used to obtain feature representations of the images. Compared with several existing features, the proposed one is more discriminative and efficient. Experiments on two benchmark medical CT image databases have demonstrated the effectiveness of the proposed approach.
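A compact sketch of this retrieval feature follows, assuming `codebook` is a scikit-learn KMeans vocabulary already fitted on training patches; the Schmid filters are omitted, and the bank size, patch size and vocabulary size are simplifications:

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.filters import gabor_kernel

def bow_descriptor(image, codebook, patch=8):
    # small Gabor bank at four orientations (Schmid filters omitted)
    kernels = [np.real(gabor_kernel(0.2, theta=t))
               for t in np.linspace(0, np.pi, 4, endpoint=False)]
    patches = []
    for k in kernels:
        r = convolve(image, k)                       # filter response
        h, w = (s // patch * patch for s in r.shape)
        blocks = r[:h, :w].reshape(h // patch, patch, w // patch, patch)
        patches.append(blocks.transpose(0, 2, 1, 3).reshape(-1, patch * patch))
    words = codebook.predict(np.vstack(patches))     # nearest visual word per patch
    return np.bincount(words, minlength=codebook.n_clusters)  # BoW histogram
```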
A Video Data Based Transfer Learning Approach for Classification of MGMT Status in Brain Tumor MR Images
Lang, D. M.
Peeken, J. C.
Combs, S. E.
Wilkens, J. J.
Bartzsch, S.
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Radiogenomics
Challenge
BraTS 2021
Transfer learning
Deep Learning
BRAIN
Classification
Algorithm Development
Patient MGMT (O6-methylguanine DNA methyltransferase) status has been identified as essential for responsiveness to chemotherapy in glioblastoma patients and therefore represents an important clinical factor. Testing for MGMT methylation is invasive, time-consuming and costly, and lacks a uniform gold standard. We studied MGMT status assessment from multi-parametric magnetic resonance imaging (mpMRI) scans and tested the ability of deep learning to perform this classification task. To overcome the limited number of training examples we used a transfer learning approach based on the video clip classification network C3D [30], allowing full exploitation of the three-dimensional information in the MR images. MRI sequences were fused using a locally connected layer. Our approach was able to differentiate MGMT-methylated from unmethylated patients with an area under the receiver operating characteristics curve (AUC) of 0.689 on the public validation set. On the private test set the AUC was 0.577. Further studies are needed to assess the clinical importance and the predictive power in terms of survival.
Collaborative and Reproducible Research: Goals, Challenges, and Strategies
Langer, S. G.
Shih, G.
Nagy, P.
Landman, B. A.
J Digit Imaging2018Journal Article, cited 1 times
Website
TCIA General
imaging biomarker
Genomics
Electronic Medical Record (EMR)
Computer analytics
Computers in medicine
Machine learning
Combining imaging biomarkers with genomic and clinical phenotype data is the foundation of precision medicine research efforts. Yet, biomedical imaging research requires unique infrastructure compared with principally text-driven clinical electronic medical record (EMR) data. The issues are related to the binary nature of the file format and transport mechanism for medical images as well as the post-processing image segmentation and registration needed to combine anatomical and physiological imaging data sources. The SIIM Machine Learning Committee was formed to analyze the gaps and challenges surrounding research into machine learning in medical imaging and to find ways to mitigate these issues. At the 2017 annual meeting, a whiteboard session was held to rank the most pressing issues and develop strategies to meet them. The results, and further reflections, are summarized in this paper.
A Roadmap for Foundational Research on Artificial Intelligence in Medical Imaging: From the 2018 NIH/RSNA/ACR/The Academy Workshop
Langlotz, Curtis P
Allen, Bibb
Erickson, Bradley J
Kalpathy-Cramer, Jayashree
Bigelow, Keith
Cook, Tessa S
Flanders, Adam E
Lungren, Matthew P
Mendelson, David S
Rudie, Jeffrey D
Wang, Ge
Kandarpa, Krishna
Radiology2019Journal Article, cited 1 times
Website
Radiomics
National Lung Screening Trial (NLST)
Imaging research laboratories are rapidly creating machine learning systems that achieve expert human performance using open-source methods and tools. These artificial intelligence systems are being developed to improve medical image reconstruction, noise reduction, quality assurance, triage, segmentation, computer-aided detection, computer-aided classification, and radiogenomics. In August 2018, a meeting was held in Bethesda, Maryland, at the National Institutes of Health to discuss the current state of the art and knowledge gaps and to develop a roadmap for future research initiatives. Key research priorities include: 1, new image reconstruction methods that efficiently produce images suitable for human interpretation from source data; 2, automated image labeling and annotation methods, including information extraction from the imaging report, electronic phenotyping, and prospective structured image reporting; 3, new machine learning methods for clinical imaging data, such as tailored, pretrained model architectures, and federated machine learning methods; 4, machine learning methods that can explain the advice they provide to human users (so-called explainable artificial intelligence); and 5, validated methods for image de-identification and data sharing to facilitate wide availability of clinical imaging data sets. This research roadmap is intended to identify and prioritize these needs for academic research laboratories, funding agencies, professional societies, and industry.
The impact of initial tumor bulk in DLBCL treated with DA-EPOCH-R vs. R-CHOP: a secondary analysis of alliance/CALGB 50303
The ideal treatment paradigm for bulky diffuse large B-cell lymphoma (DLBCL) remains uncertain. We investigated the impact of tumor bulk in patients treated with systemic therapy alone through Alliance/CALGB 50303. Data from this trial were obtained from the National Cancer Institute's NCTN/NCORP Data Archive. The study assessed the size of nodal sites and estimated progression-free survival (PFS) using Cox proportional hazards models. Stratified analyses factored in International Prognostic Index (IPI) risk scores. Of 524 patients, 155 had pretreatment scans. Using a 7.5 cm cutoff, 44% were classified as bulky. Bulk did not significantly impact PFS, whether measured continuously or at thresholds of >5 or >7.5 cm (p = 0.10 to 0.99). Stratified analyses by treatment group and IPI risk group were also non-significant. In this secondary analysis, a significant association between bulk and PFS was not identified.
The prognostic significance of upfront tumor bulk in DLBCL remains unclear. In this secondary analysis of a phase III trial comparing DA-EPOCH-R to R-CHOP, a significant association between upfront tumor bulk and PFS was not identified.
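A toy version of the Cox proportional hazards analysis described above, using lifelines on synthetic data; the column names and the encoding of the 7.5 cm bulky cutoff are invented for illustration.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(1)
    n = 155  # patients with pretreatment scans
    df = pd.DataFrame({
        "pfs_months": rng.exponential(36, n),
        "progressed": rng.integers(0, 2, n),
        "bulky": (rng.uniform(2, 12, n) > 7.5).astype(int),  # 7.5 cm cutoff
        "ipi": rng.integers(0, 5, n),
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="pfs_months", event_col="progressed")
    cph.print_summary()  # hazard ratios and p-values for bulky and IPI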
A Deep Learning-Based Radiomics Model for Prediction of Survival in Glioblastoma Multiforme
Lao, Jiangwei
Chen, Yinsheng
Li, Zhi-Cheng
Li, Qihua
Zhang, Ji
Liu, Jing
Zhai, Guangtao
Scientific Reports2017Journal Article, cited 32 times
Website
TCGA-GBM
Radiomics
Glioblastoma Multiforme (GBM)
Deep learning
Traditional radiomics models mainly rely on explicitly-designed handcrafted features from medical images. This paper aimed to investigate if deep features extracted via transfer learning can generate radiomics signatures for prediction of overall survival (OS) in patients with Glioblastoma Multiforme (GBM). This study comprised a discovery data set of 75 patients and an independent validation data set of 37 patients. A total of 1403 handcrafted features and 98304 deep features were extracted from preoperative multi-modality MR images. After feature selection, a six-deep-feature signature was constructed by using the least absolute shrinkage and selection operator (LASSO) Cox regression model. A radiomics nomogram was further presented by combining the signature and clinical risk factors such as age and Karnofsky Performance Score. Compared with traditional risk factors, the proposed signature achieved better performance for prediction of OS (C-index = 0.710, 95% CI: 0.588, 0.932) and significant stratification of patients into prognostically distinct groups (P < 0.001, HR = 5.128, 95% CI: 2.029, 12.960). The combined model achieved improved predictive performance (C-index = 0.739). Our study demonstrates that transfer learning-based deep features are able to generate prognostic imaging signature for OS prediction and patient stratification for GBM, indicating the potential of deep imaging feature-based biomarker in preoperative care of GBM patients.
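The LASSO-Cox signature construction can be sketched with lifelines' L1-penalized Cox model on synthetic deep features; the penalizer value and feature names are invented, and near-zero coefficients are treated as dropped.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(2)
    n, p = 75, 30   # discovery cohort size, candidate features after screening
    X = pd.DataFrame(rng.normal(size=(n, p)),
                     columns=[f"deep_feat_{i}" for i in range(p)])
    X["os_months"] = rng.exponential(15, n)
    X["death"] = rng.integers(0, 2, n)

    cph = CoxPHFitter(penalizer=0.3, l1_ratio=1.0)  # pure L1 penalty
    cph.fit(X, duration_col="os_months", event_col="death")
    # L1 shrinks most coefficients toward zero; the survivors form the signature.
    signature = cph.params_[cph.params_.abs() > 1e-2]
    print("selected features:\n", signature)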
A Hybrid End-to-End Approach Integrating Conditional Random Fields into CNNs for Prostate Cancer Detection on MRI
Lapa, Paulo
Castelli, Mauro
Gonçalves, Ivo
Sala, Evis
Rundo, Leonardo
Applied Sciences2020Journal Article, cited 0 times
PROSTATEx
Convolutional Neural Network (CNN)
Prostate
Semantic learning machine improves the CNN-Based detection of prostate cancer in non-contrast-enhanced MRI
Lapa, Paulo
Gonçalves, Ivo
Rundo, Leonardo
Castelli, Mauro
2019Conference Proceedings, cited 0 times
SPIE-AAPM PROSTATEx Challenge
Convolutional Neural Network (CNN)
Considering that Prostate Cancer (PCa) is the most frequently diagnosed tumor in Western men, considerable attention has been devoted to computer-assisted PCa detection approaches. However, this task still represents an open research question. In clinical practice, multiparametric Magnetic Resonance Imaging (MRI) is becoming the most used modality, aiming at defining biomarkers for PCa. In recent years, deep learning techniques have boosted performance in prostate MR image analysis and classification. This work explores the use of the Semantic Learning Machine (SLM) neuroevolution algorithm to replace the backpropagation algorithm commonly used in the last fully-connected layers of Convolutional Neural Networks (CNNs). We analyzed the non-contrast-enhanced multispectral MRI sequences included in the PROSTATEx dataset, namely: T2-weighted, Proton Density weighted, and Diffusion Weighted Imaging. The experimental results show that the SLM significantly outperforms XmasNet, a state-of-the-art CNN. In particular, with respect to XmasNet, the SLM achieves higher classification accuracy (neither pre-training the underlying CNN nor relying on backpropagation) as well as a speed-up of one order of magnitude.
Conditional random fields improve the CNN-based prostate cancer classification performance
Prostate cancer is a condition with life-threatening implications whose causes have not yet been clearly identified. Several diagnostic procedures can be used, ranging from very invasive and human-dependent methods to state-of-the-art non-invasive medical imaging. With recent academic and industry focus on the deep learning field, novel research has investigated how to improve prostate cancer diagnosis using Convolutional Neural Networks to interpret Magnetic Resonance images. Conditional Random Fields have achieved outstanding results in the image segmentation task by promoting homogeneous classification at the pixel level. A new implementation, CRF-RNN, defines Conditional Random Fields by means of convolutional layers, allowing end-to-end training of the feature extractor and classifier models. This work repurposes CRFs for the image classification task, a more traditional sub-field of imaging analysis, in a way that, to the best of the author's knowledge, has not been implemented before. To achieve this, a purpose-built architecture was refitted, adding a CRF layer as a feature extraction step. As the implementation's benchmark, a multi-parametric Magnetic Resonance Imaging dataset was used, initially provided for the PROSTATEx Challenge 2017 and collected by Radboud University. The results are very promising, showing an increase in the network's classification quality.
Integrating histopathology and transcriptomics for spatial tumor microenvironment profiling in a melanoma case study
Lapuente-Santana, O.
Kant, J.
Eduati, F.
NPJ Precis Oncol2024Journal Article, cited 0 times
Website
CPTAC-CM
TIL-WSI-TCGA
Local structures formed by cells in the tumor microenvironment (TME) play an important role in tumor development and treatment response. This study introduces SPoTLIghT, a computational framework providing a quantitative description of the tumor architecture from hematoxylin and eosin (H&E) slides. We trained a weakly supervised machine learning model on melanoma patients linking tile-level imaging features extracted from H&E slides to sample-level cell type quantifications derived from RNA-sequencing data. Using this model, SPoTLIghT provides spatial cellular maps for any H&E image and converts them into graphs to derive 96 interpretable features capturing TME cellular organization. We show how SPoTLIghT's spatial features can distinguish microenvironment subtypes and reveal nuanced immune infiltration structures not apparent in molecular data alone. Finally, we use SPoTLIghT to effectively predict patients' prognosis in an independent melanoma cohort. SPoTLIghT enhances computational histopathology by providing a quantitative and interpretable characterization of the spatial contexture of tumors.
3D-Printed Tumor Phantoms for Assessment of In Vivo Fluorescence Imaging Analysis Methods
LaRochelle, E. P. M.
Streeter, S. S.
Littler, E. A.
Ruiz, A. J.
Mol Imaging Biol2022Journal Article, cited 0 times
Website
Soft-Tissue-Sarcoma
Fluorescence guided surgery
Optical phantom
Standards
Surgical navigation
Fluoroscopy
Contrast enhancement
Model
PURPOSE: Interventional fluorescence imaging is increasingly being utilized to quantify cancer biomarkers in both clinical and preclinical models, yet absolute quantification is complicated by many factors. The use of optical phantoms has been suggested by multiple professional organizations for quantitative performance assessment of fluorescence guidance imaging systems. This concept can be further extended to provide standardized tools to compare and assess image analysis metrics. PROCEDURES: 3D-printed fluorescence phantoms based on solid tumor models were developed with representative bio-mimicking optical properties. Phantoms were produced with discrete tumors embedded with an NIR fluorophore of fixed concentration and either zero or 3% non-specific fluorophore in the surrounding material. These phantoms were first imaged by two fluorescence imaging systems using two methods of image segmentation, and four assessment metrics were calculated to demonstrate variability in the quantitative assessment of system performance. The same analysis techniques were then applied to one tumor model with decreasing tumor fluorophore concentrations. RESULTS: These anatomical phantom models demonstrate the ability to use 3D printing to manufacture anthropomorphic shapes with a wide range of reduced scattering (μs': 0.24-1.06 mm^-1) and absorption (μa: 0.005-0.14 mm^-1) properties. The phantom imaging and analysis highlight variability in the measured sensitivity metrics associated with tumor visualization. CONCLUSIONS: 3D printing techniques provide a platform for demonstrating complex biological models that introduce real-world complexities for quantifying fluorescence image data. Controlled iterative development of these phantom designs can be used as a tool to advance the field and provide context for consensus-building beyond performance assessment of fluorescence imaging platforms, and extend support for standardizing how quantitative metrics are extracted from imaging data and reported in literature.
4DCT imaging to assess radiomics feature stability: An investigation for thoracic cancers
Larue, Ruben THM
Van De Voorde, Lien
van Timmeren, Janna E
Leijenaar, Ralph TH
Berbée, Maaike
Sosef, Meindert N
Schreurs, Wendy MJ
van Elmpt, Wouter
Lambin, Philippe
Radiotherapy and Oncology2017Journal Article, cited 7 times
Website
RIDER Lung CT
4D-Lung
Radiomics
ESOPHAGUS
LUNG
Computed Tomography (CT)
BACKGROUND AND PURPOSE: Quantitative tissue characteristics derived from medical images, also called radiomics, contain valuable prognostic information in several tumour sites. The large number of available features increases the risk of overfitting. Typically, test-retest CT scans are used to reduce dimensionality and select robust features. However, these scans are not always available. We propose to use different phases of respiratory-correlated 4D CT scans (4DCT) as an alternative. MATERIALS AND METHODS: In test-retest CT scans of 26 non-small cell lung cancer (NSCLC) patients and 4DCT scans (8 breathing phases) of 20 NSCLC and 20 oesophageal cancer patients, 1045 radiomics features of the primary tumours were calculated. A concordance correlation coefficient (CCC) >0.85 was used to identify robust features. Correlation with prognostic value was tested using univariate Cox regression in 120 oesophageal cancer patients. RESULTS: Features based on unfiltered images demonstrated greater robustness than wavelet-filtered features. In total, 63/74 (85%) unfiltered features and 268/299 (90%) wavelet features stable in the 4D-lung dataset were also stable in the test-retest dataset. In oesophageal cancer, 397/1045 (38%) features were robust, of which 108 features were significantly associated with overall survival. CONCLUSION: 4DCT scans can be used as an alternative for eliminating unstable radiomics features as a first step in a feature selection procedure. Feature robustness is tumour-site specific and independent of prognostic value.
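The robustness screen described above reduces to computing Lin's concordance correlation coefficient (CCC) per feature between two acquisitions and keeping features with CCC > 0.85; a minimal sketch on synthetic data.

    import numpy as np

    def ccc(x, y):
        # Lin's concordance correlation coefficient.
        mx, my = x.mean(), y.mean()
        cov = ((x - mx) * (y - my)).mean()
        return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

    rng = np.random.default_rng(3)
    phase_a = rng.normal(size=(20, 1045))              # 20 patients x 1045 features
    phase_b = phase_a + rng.normal(scale=0.3, size=phase_a.shape)

    cccs = np.array([ccc(phase_a[:, j], phase_b[:, j])
                     for j in range(phase_a.shape[1])])
    robust = np.where(cccs > 0.85)[0]
    print(f"{robust.size} of {cccs.size} features pass CCC > 0.85")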
Integrating deep learning CT-scan model, biological and clinical variables to predict severity of COVID-19 patients
Lassau, N.
Ammari, S.
Chouzenoux, E.
Gortais, H.
Herent, P.
Devilder, M.
Soliman, S.
Meyrignac, O.
Talabard, M. P.
Lamarque, J. P.
Dubois, R.
Loiseau, N.
Trichelair, P.
Bendjebbar, E.
Garcia, G.
Balleyguier, C.
Merad, M.
Stoclin, A.
Jegou, S.
Griscelli, F.
Tetelboum, N.
Li, Y.
Verma, S.
Terris, M.
Dardouri, T.
Gupta, K.
Neacsu, A.
Chemouni, F.
Sefta, M.
Jehanno, P.
Bousaid, I.
Boursin, Y.
Planchet, E.
Azoulay, M.
Dachary, J.
Brulport, F.
Gonzalez, A.
Dehaene, O.
Schiratti, J. B.
Schutte, K.
Pesquet, J. C.
Talbot, H.
Pronier, E.
Wainrib, G.
Clozel, T.
Barlesi, F.
Bellin, M. F.
Blum, M. G. B.
Nat Commun2021Journal Article, cited 20 times
Website
LIDC-IDRI
Deep Learning
Multivariate Analysis
Computed Tomography (CT)
Model
Imaging features
The SARS-CoV-2 pandemic has put pressure on intensive care units, so that identifying predictors of disease severity is a priority. We collect 58 clinical and biological variables, and chest CT scan data, from 1003 coronavirus-infected patients from two French hospitals. We train a deep learning model based on CT scans to predict severity. We then construct the multimodal AI-severity score that includes 5 clinical and biological variables (age, sex, oxygenation, urea, platelets) in addition to the deep learning model. We show that neural network analysis of CT scans brings unique prognostic information, although it is correlated with other markers of severity (oxygenation, LDH, and CRP), explaining the measurable but limited 0.03 increase in AUC obtained when adding CT-scan information to clinical variables. When comparing AI-severity with 11 existing severity scores, we find significantly improved prognostic performance; AI-severity can therefore rapidly become a reference scoring approach.
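One plausible reading of the multimodal fusion step (not the authors' exact code) is a logistic model over the deep-learning CT score plus the five clinical and biological variables; the data below are synthetic stand-ins.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(4)
    n = 1003
    df = pd.DataFrame({
        "dl_ct_score": rng.uniform(0, 1, n),   # output of the CT neural network
        "age": rng.normal(60, 15, n),
        "sex": rng.integers(0, 2, n),
        "oxygenation": rng.normal(95, 4, n),
        "urea": rng.normal(7, 2, n),
        "platelets": rng.normal(250, 80, n),
    })
    y = rng.integers(0, 2, n)  # severe vs. non-severe (synthetic)

    Xtr, Xte, ytr, yte = train_test_split(df, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    print("AUC:", roc_auc_score(yte, model.predict_proba(Xte)[:, 1]))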
Robust semi-automatic segmentation of pulmonary subsolid nodules in chest computed tomography scans
Lassen, BC
Jacobs, C
Kuhnigk, JM
van Ginneken, B
van Rikxoort, EM
Physics in Medicine and Biology2015Journal Article, cited 25 times
Website
LIDC-IDRI
Reproducibility
LUNG
The malignancy of lung nodules is most often detected by analyzing changes of the nodule diameter in follow-up scans. A recent study showed that comparing the volume or the mass of a nodule over time is much more significant than comparing the diameter. Since the survival rate is higher when the disease is still at an early stage, it is important to detect growth as soon as possible. However, manual segmentation of a volume is time-consuming. Whereas there are several well-evaluated methods for the segmentation of solid nodules, less work has been done on subsolid nodules, which actually show a higher malignancy rate than solid nodules. In this work we present a fast, semi-automatic method for segmentation of subsolid nodules. As its only user interaction, the method expects a user-drawn stroke on the largest diameter of the nodule. First, threshold-based region growing is performed based on intensity analysis of the nodule region and surrounding parenchyma. In the next step the chest wall is removed by a combination of connected component analysis and convex hull calculation. Finally, attached vessels are detached by morphological operations. The method was evaluated on all nodules of the publicly available LIDC/IDRI database that were manually segmented and rated as non-solid or part-solid by four radiologists (Dataset 1) and three radiologists (Dataset 2). For these 59 nodules the Jaccard index for the agreement of the proposed method with the manual reference segmentations was 0.52/0.50 (Dataset 1/Dataset 2), compared to an inter-observer agreement of the manual segmentations of 0.54/0.58 (Dataset 1/Dataset 2). Furthermore, the inter-observer agreement using the proposed method (i.e., different input strokes) was analyzed, giving a Jaccard index of 0.74/0.74 (Dataset 1/Dataset 2). The presented method provides satisfactory segmentation results with minimal observer effort in minimal time and can reduce the inter-observer variability for segmentation of subsolid nodules in clinical routine.
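A simplified 2D sketch of the pipeline's main steps (threshold-based growing from a seed, connected components, convex hull, morphology) with scikit-image; the real method is 3D and stroke-driven, and everything here is illustrative.

    import numpy as np
    from skimage.measure import label
    from skimage.morphology import convex_hull_image, binary_opening, disk

    img = np.random.rand(128, 128)          # stand-in CT slice (normalised)
    img[50:70, 50:70] += 1.0                # synthetic subsolid nodule
    seed = (60, 60)                         # from the user-drawn stroke

    mask = img > img[seed] - 0.5            # threshold-based "region growing"
    labels = label(mask)                    # connected component analysis
    nodule = labels == labels[seed]         # keep the seeded component

    hull = convex_hull_image(nodule)        # convexity, as used for wall removal
    nodule = binary_opening(nodule, disk(2))  # morphology to detach vessels
    print("hull area:", int(hull.sum()), "nodule area:", int(nodule.sum()))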
DeepTumor: Framework for Brain MR Image Classification, Segmentation and Tumor Detection
Latif, G.
Diagnostics (Basel)2022Journal Article, cited 1 times
Website
Convolutional Neural Network (CNN)
Deep learning
Glioma
Classification
Segmentation
Fuzzy C-means
The proper segmentation of a brain tumor from an image is important for both patients and medical personnel due to the sensitivity of the human brain. Surgical intervention requires doctors to be extremely cautious and precise when targeting the required portion of the brain. Furthermore, the segmentation process is also important for multi-class tumor classification. This work concentrates on contributing to three main areas of brain MR image processing for classification and segmentation: brain MR image classification, tumor region segmentation, and tumor classification. A framework named DeepTumor is presented for multistage, multiclass Glioma tumor classification into four classes: Edema, Necrosis, Enhancing, and Non-enhancing. For binary brain MR image classification (tumorous and non-tumorous), two deep Convolutional Neural Network (CNN) models are proposed: a 9-layer model with a total of 217,954 trainable parameters and an improved 10-layer model with a total of 80,243 trainable parameters. In the second stage, an enhanced Fuzzy C-means (FCM)-based technique is proposed for tumor segmentation in brain MR images. In the final stage, a third enhanced CNN model with 11 hidden layers and a total of 241,624 trainable parameters is proposed for classifying the segmented tumor region into the four Glioma tumor classes. The experiments were performed using the BraTS MRI dataset. The experimental results of the proposed CNN models for binary classification and multiclass tumor classification are compared with existing CNN models such as LeNet, AlexNet, and GoogleNet, as well as with the latest literature.
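Classical Fuzzy C-means, the clustering the paper enhances for tumor segmentation, can be written compactly in NumPy; this is the textbook form on pixel intensities, not the paper's enhanced variant.

    import numpy as np

    def fcm(x, c=3, m=2.0, iters=50, eps=1e-9):
        # m is the usual fuzziness exponent; x is a 1D intensity array.
        rng = np.random.default_rng(0)
        centers = rng.choice(x, size=c)
        for _ in range(iters):
            d = np.abs(x[None, :] - centers[:, None]) + eps     # (c, n)
            u = 1.0 / (d ** (2 / (m - 1)))
            u /= u.sum(axis=0, keepdims=True)                   # memberships
            um = u ** m
            centers = (um * x).sum(axis=1) / um.sum(axis=1)     # update centers
        return centers, u

    rng = np.random.default_rng(1)
    pixels = np.concatenate([rng.normal(mu, 5, 500) for mu in (30, 90, 160)])
    centers, u = fcm(pixels)
    print("cluster centers:", np.sort(centers))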
Glioma Tumors’ Classification Using Deep-Neural-Network-Based Features with SVM Classifier
Latif, Ghazanfar
Ben Brahim, Ghassen
Iskandar, D. N. F. Awang
Bashar, Abul
Alghazo, Jaafar
Diagnostics2022Journal Article, cited 0 times
BraTS 2018
Classification
Convolutional Neural Network (CNN)
Imaging features
BRAIN
Glioma
Support Vector Machine (SVM)
Algorithm Development
The complexity of brain tissue requires skillful technicians and expert medical doctors to manually analyze and diagnose Glioma brain tumors using multiple Magnetic Resonance (MR) images with multiple modalities. Unfortunately, manual diagnosis suffers from a lengthy process as well as elevated cost. With this type of cancerous disease, early detection increases the chances of suitable medical procedures leading either to a full recovery or to the prolongation of the patient's life. This has increased efforts to automate the detection and diagnosis process without human intervention, allowing the detection of multiple types of tumors from MR images. This research paper proposes a multi-class Glioma tumor classification technique using the proposed deep-learning-based features with a Support Vector Machine (SVM) classifier. A deep convolutional neural network is used to extract features from the MR images, which are then fed to an SVM classifier. With the proposed technique, 96.19% accuracy was achieved for the HGG Glioma type when considering the FLAIR modality, and 95.46% for the LGG Glioma type when considering the T2 modality, for the classification of four Glioma classes (Edema, Necrosis, Enhancing, and Non-enhancing). The accuracies achieved using the proposed method were higher than those reported by similar methods in the extant literature using the same BraTS dataset. In addition, the accuracy results obtained in this work are better than those achieved by the GoogleNet and LeNet pre-trained models on the same dataset.
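The deep-features-plus-SVM design can be sketched with any pretrained CNN used as a frozen feature extractor; the ResNet-18 backbone and data below are placeholders, not the paper's network.

    import torch
    import torchvision
    from sklearn.svm import SVC

    backbone = torchvision.models.resnet18(weights="DEFAULT")
    backbone.fc = torch.nn.Identity()       # drop the classifier head
    backbone.eval()

    with torch.no_grad():
        imgs = torch.randn(40, 3, 224, 224)             # stand-in MR slices
        feats = backbone(imgs).numpy()                  # (40, 512) deep features

    labels = (feats[:, 0] > 0).astype(int)              # synthetic labels
    clf = SVC(kernel="rbf").fit(feats[:30], labels[:30])
    print("held-out accuracy:", clf.score(feats[30:], labels[30:]))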
Automatic Prostate Cancer Segmentation Using Kinetic Analysis in Dynamic Contrast-Enhanced MRI
Lavasani, S Navaei
Mostaar, A
Ashtiyani, M
Journal of Biomedical Physics & Engineering2018Journal Article, cited 0 times
Website
QIN PROSTATE
DCE-MRI
Prostate Cancer
Semi-quantitative Feature
Wavelet Kinetic Feature
Segmentation
Quantitative neuroimaging with handcrafted and deep radiomics in neurological diseases
Lavrova, Elizaveta
2024Thesis, cited 0 times
Dissertation
Thesis
Radiomics
LGG-1p19qDeletion
TCGA-LGG
neuroimaging
medical image analysis
clinical decision support
Magnetic Resonance Imaging (MRI)
Deep learning
The motivation behind this thesis is to explore the potential of "radiomics" in the field of neurology, where early diagnosis and accurate treatment selection are crucial for improving patient outcomes. Neurological diseases are a major cause of disability and death globally, and there is a pressing need for reliable imaging biomarkers to aid in disease detection and monitoring. While radiomics has shown promising results in oncology, its application in neurology remains relatively unexplored. Therefore, this work aims to investigate the feasibility and challenges of implementing radiomics in the neurological context, addressing various limitations and proposing potential solutions. The thesis begins with a demonstration of the predictive power of radiomics for identifying important diagnostic biomarkers in neuro-oncology. Building on this foundation, the research then delves into radiomics in non-oncological neurology, providing an overview of the pipeline steps, potential clinical applications, and existing challenges. Despite promising results in proof-of-concept studies, the field faces limitations, mostly data-related, such as small sample sizes, retrospective nature, and lack of external validation. To explore the predictive power of radiomics in non-oncological tasks, a radiomics approach was implemented to distinguish between multiple sclerosis patients and normal controls. Notably, radiomic features extracted from normal-appearing white matter were found to contain distinctive information for multiple sclerosis detection, confirming the hypothesis of the thesis. To overcome the data harmonization challenge, quantitative mapping of the brain was used in this work. Unlike traditional imaging methods, quantitative mapping involves measuring the physical properties of brain tissues, providing a more standardized and consistent data representation. By reconstructing the physical properties of each voxel based on multi-echo MRI acquisition, quantitative mapping produces data that is less susceptible to domain-specific biases and scanner variability. Additionally, the insights gained from quantitative mapping build a bridge toward the physical and biological properties of brain tissues, providing a deeper understanding of the underlying pathology. Another crucial challenge in radiomics is robust and fast data labeling, particularly segmentation. A deep learning method was proposed to perform automated carotid artery segmentation in stroke at-risk patients, surpassing current state-of-the-art approaches. This novel method showcases the potential of automated segmentation to enhance radiomics pipeline implementation. In addition to addressing specific challenges, the thesis also proposes a community-driven open-source toolbox for radiomics, aimed at enhancing pipeline standardization and transparency. This software package would facilitate data curation and exploratory analysis, fostering collaboration and reproducibility in radiomics research. Through an in-depth exploration of radiomics in neuroimaging, this thesis demonstrates its potential to enhance neurological disease diagnosis and monitoring. By uncovering valuable information from seemingly normal brain tissues, radiomics holds promise for early disease detection. Furthermore, the development of innovative tools and methods, including deep learning and quantitative mapping, has the potential to address data labeling and harmonization challenges.
Looking to the future, embracing larger, diverse datasets and longitudinal studies will further enhance the generalizability and predictive power of radiomics in neurology. By addressing the challenges identified in this thesis and fostering collaboration within the research community, radiomics can advance toward clinical implementation, revolutionizing precision medicine in neurology.
Deep Learning–based Method for Denoising and Image Enhancement in Low-Field MRI
Deep learning has proven successful in a variety of medical image processing applications, including denoising and removing artifacts. This is of particular interest for low-field Magnetic Resonance Imaging (MRI), which is promising for its affordability, compact footprint, and reduced shielding requirements, but inherently suffers from low signal-to-noise ratio. In this work, we propose a method of simulating scanner-specific images from a publicly available database of 1.5T and 3T MR images, using a signal encoding matrix that explicitly models imaging gradients and fields. We apply a stacked U-Net architecture to reduce system noise and remove artifacts due to the inhomogeneous B0 field, nonlinear gradients, undersampling of k-space, and image reconstruction, thereby enhancing low-field MR images. The final network is applied as a post-processing step following image reconstruction to phantom and human images acquired on a 60-67 mT MR scanner and demonstrates promising qualitative and quantitative improvements to overall image quality.
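One ingredient of such a simulation, degrading high-field images toward low-field SNR by injecting complex Gaussian noise in k-space, can be sketched in NumPy; the paper's full forward model (B0 inhomogeneity, nonlinear gradients, undersampling) is not reproduced here.

    import numpy as np

    img = np.random.rand(128, 128)                     # stand-in 1.5T/3T image
    k = np.fft.fft2(img)                               # go to k-space
    noise = (np.random.normal(scale=2.0, size=k.shape)
             + 1j * np.random.normal(scale=2.0, size=k.shape))
    low_field = np.abs(np.fft.ifft2(k + noise))        # noisy "low-field" image
    print("SNR proxy:", img.mean() / (low_field - img).std())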
Narrow Band Active Contour Attention Model for Medical Segmentation
Le, N.
Bui, T.
Vo-Ho, V. K.
Yamazaki, K.
Luu, K.
Diagnostics (Basel)2021Journal Article, cited 6 times
Website
BraTS 2018
Deep learning
Segmentation
Weak boundary
Medical image segmentation is one of the most challenging tasks in medical image analysis and widely developed for many clinical applications. While deep learning-based approaches have achieved impressive performance in semantic segmentation, they are limited to pixel-wise settings with imbalanced-class data problems and weak boundary object segmentation in medical images. In this paper, we tackle those limitations by developing a new two-branch deep network architecture which takes both higher level features and lower level features into account. The first branch extracts higher level feature as region information by a common encoder-decoder network structure such as Unet and FCN, whereas the second branch focuses on lower level features as support information around the boundary and processes in parallel to the first branch. Our key contribution is the second branch named Narrow Band Active Contour (NB-AC) attention model which treats the object contour as a hyperplane and all data inside a narrow band as support information that influences the position and orientation of the hyperplane. Our proposed NB-AC attention model incorporates the contour length with the region energy involving a fixed-width band around the curve or surface. The proposed network loss contains two fitting terms: (i) a high level feature (i.e., region) fitting term from the first branch; (ii) a lower level feature (i.e., contour) fitting term from the second branch including the (ii1) length of the object contour and (ii2) regional energy functional formed by the homogeneity criterion of both the inner band and outer band neighboring the evolving curve or surface. The proposed NB-AC loss can be incorporated into both 2D and 3D deep network architectures. The proposed network has been evaluated on different challenging medical image datasets, including DRIVE, iSeg17, MRBrainS18 and Brats18. The experimental results have shown that the proposed NB-AC loss outperforms other mainstream loss functions: Cross Entropy, Dice, Focal on two common segmentation frameworks Unet and FCN. Our 3D network which is built upon the proposed NB-AC loss and 3DUnet framework achieved state-of-the-art results on multiple volumetric datasets.
XGBoost Improves Classification of MGMT Promoter Methylation Status in IDH1 Wildtype Glioblastoma
Le, N. Q. K.
Do, D. T.
Chiu, F. Y.
Yapp, E. K. Y.
Yeh, H. Y.
Chen, C. Y.
J Pers Med2020Journal Article, cited 1 times
Website
TCGA-GBM
Radiogenomics
Classification
Approximately 96% of patients with glioblastomas (GBM) have IDH1 wildtype GBMs, characterized by extremely poor prognosis, partly due to resistance to standard temozolomide treatment. O6-Methylguanine-DNA methyltransferase (MGMT) promoter methylation status is a crucial prognostic biomarker for alkylating chemotherapy resistance in patients with GBM. However, MGMT methylation status identification methods, where the tumor tissue is often undersampled, are time-consuming and expensive. Currently, presurgical noninvasive imaging methods are used to identify biomarkers to predict MGMT methylation status. We evaluated a novel radiomics-based eXtreme Gradient Boosting (XGBoost) model to identify MGMT promoter methylation status in patients with IDH1 wildtype GBM. This retrospective study enrolled 53 patients with pathologically proven GBM and tested MGMT methylation and IDH1 status. Radiomics features were extracted from multimodality MRI and tested by F-score analysis to identify important features to improve our model. We identified nine radiomics features that reached an area under the curve of 0.896, which outperformed other classifiers reported previously. These features could be important biomarkers for identifying MGMT methylation status in IDH1 wildtype GBM. The combination of radiomics feature extraction and F-score feature selection significantly improved the performance of the XGBoost model, which may have implications for patient stratification and therapeutic strategy in GBM.
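The two-step recipe (univariate F-score screening, then XGBoost) maps directly onto scikit-learn and xgboost; the data below are synthetic stand-ins for the 53-patient cohort.

    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from xgboost import XGBClassifier

    rng = np.random.default_rng(5)
    X = rng.normal(size=(53, 704))          # patients x radiomics features
    y = rng.integers(0, 2, 53)              # MGMT methylated vs. unmethylated

    model = make_pipeline(
        SelectKBest(f_classif, k=9),        # keep the 9 highest-F features
        XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss"),
    )
    print("CV AUC:", cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())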
Radiomics-based machine learning model for efficiently classifying transcriptome subtypes in glioblastoma patients from MRI
Le, Nguyen Quoc Khanh
Hung, Truong Nguyen Khanh
Do, Duyen Thi
Lam, Luu Ho Thanh
Dang, Luong Huu
Huynh, Tuan-Tu
Comput Biol Med2021Journal Article, cited 0 times
Website
TCGA-GBM
Ivy GAP
Glioblastoma Multiforme (GBM)
BRAIN
Magnetic Resonance Imaging (MRI)
Radiogenomics
Radiomics
BACKGROUND: In the field of glioma, transcriptome subtypes have been considered an important diagnostic and prognostic biomarker that may help improve treatment efficacy. However, existing identification methods for transcriptome subtypes are limited due to the relatively long detection period, the unattainability of tumor specimens via biopsy or surgery, and the fleeting nature of intralesional heterogeneity. In search of a model superior to previous ones, this study evaluated the efficiency of an eXtreme Gradient Boosting (XGBoost)-based radiomics model to classify transcriptome subtypes in glioblastoma patients. METHODS: This retrospective study retrieved patients from the TCGA-GBM and IvyGAP cohorts with pathologically diagnosed glioblastoma and separated them into different transcriptome subtype groups. For each GBM patient, three different regions were segmented on MRI: the enhancing tumor core (ET), the non-enhancing portion of the tumor core (NET), and peritumoral edema (ED). We subsequently used handcrafted radiomics features (n = 704) from multimodality MRI and two-level feature selection techniques (Spearman correlation and F-score tests) to find relevant features. RESULTS: After the feature selection approach, we identified the 13 most meaningful radiomics features. With these features, our XGBoost model reached predictive accuracies of 70.9%, 73.3%, 88.4%, and 88.4% for classical, mesenchymal, neural, and proneural subtypes, respectively. Our model's performance improved upon other models as well as previous works on the same dataset. CONCLUSION: The use of XGBoost and two-level feature selection analysis (Spearman correlation and F-score) could be expected as a potential combination for classifying transcriptome subtypes with high performance and might raise public attention for further research on radiomics-based GBM models.
Radiomic features based on Hessian index for prediction of prognosis in head-and-neck cancer patients
Le, Quoc Cuong
Arimura, Hidetaka
Ninomiya, Kenta
Kabata, Yutaro
Scientific Reports2020Journal Article, cited 0 times
Website
HNSCC
Head-Neck-PET-CT
radiomic features
Can Persistent Homology Features Capture More Intrinsic Information about Tumors from (18)F-Fluorodeoxyglucose Positron Emission Tomography/Computed Tomography Images of Head and Neck Cancer Patients?
Le, Q. C.
Arimura, H.
Ninomiya, K.
Kodama, T.
Moriyama, T.
Metabolites2022Journal Article, cited 0 times
Website
Head-Neck-PET-CT
Radiomics
Positron Emission Tomography (PET)
This study hypothesized that persistent homology (PH) features could capture more intrinsic information about the metabolism and morphology of tumors from (18)F-fluorodeoxyglucose positron emission tomography (PET)/computed tomography (CT) images of patients with head and neck (HN) cancer than other conventional features. PET/CT images and clinical variables of 207 patients were selected from the publicly available dataset of the Cancer Imaging Archive. PH images were generated from persistence diagrams obtained from PET/CT images. The PH features were derived from the PH PET/CT images. The signatures were constructed in a training cohort from features from CT, PET, PH-CT, and PH-PET images; clinical variables; and the combination of features and clinical variables. Signatures were evaluated using statistically significant differences (p-value, log-rank test) between survival curves for low- and high-risk groups and the C-index. In an independent test cohort, the signature consisting of PH-PET features and clinical variables exhibited the lowest log-rank p-value of 3.30 x 10^-5 and a C-index of 0.80, compared with log-rank p-values from 3.52 x 10^-2 to 1.15 x 10^-4 and C-indices from 0.34 to 0.79 for other signatures. This result suggests that PH features can capture the intrinsic information of tumors and predict prognosis in patients with HN cancer.
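Persistent homology of an image is typically computed over a cubical complex; a minimal sketch with the gudhi package on a synthetic SUV slice (assuming gudhi is installed; the paper's PH-image construction is not reproduced).

    import numpy as np
    import gudhi

    suv = np.random.rand(32, 32)  # stand-in PET SUV slice
    cc = gudhi.CubicalComplex(dimensions=list(suv.shape),
                              top_dimensional_cells=suv.flatten())
    diag = cc.persistence()       # list of (dimension, (birth, death)) pairs
    lifetimes = [d - b for _, (b, d) in diag if d != float("inf")]
    print("mean persistence lifetime:", np.mean(lifetimes))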
Liver Tumor Segmentation from MR Images Using 3D Fast Marching Algorithm and Single Hidden Layer Feedforward Neural Network
Le, Trong-Ngoc
Bao, Pham The
Huynh, Hieu Trung
BioMed Research International2016Journal Article, cited 5 times
Website
LIVER
Magnetic Resonance Imaging (MRI)
Computer Aided Detection (CADe)
Segmentation
Algorithm Development
Objective. Our objective is to develop a computerized scheme for liver tumor segmentation in MR images. Materials and Methods. Our proposed scheme consists of four main stages. Firstly, the region of interest (ROI) image which contains the liver tumor region in the T1-weighted MR image series was extracted by using seed points. The noise in this ROI image was reduced and the boundaries were enhanced. A 3D fast marching algorithm was applied to generate the initial labeled regions which are considered as teacher regions. A single hidden layer feedforward neural network (SLFN), which was trained by a noniterative algorithm, was employed to classify the unlabeled voxels. Finally, the postprocessing stage was applied to extract and refine the liver tumor boundaries. The liver tumors determined by our scheme were compared with those manually traced by a radiologist, used as the "ground truth." Results. The study was evaluated on two datasets of 25 tumors from 16 patients. The proposed scheme obtained the mean volumetric overlap error of 27.43% and the mean percentage volume error of 15.73%. The mean of the average surface distance, the root mean square surface distance, and the maximal surface distance were 0.58 mm, 1.20 mm, and 6.29 mm, respectively.
Automatic GPU memory management for large neural models in TensorFlow
Deep learning models are becoming larger and will not fit in the limited memory of accelerators such as GPUs for training. Though many methods have been proposed to solve this problem, they are rather ad-hoc in nature and difficult to extend and integrate with other techniques. In this paper, we tackle the problem in a formal way to provide a strong foundation for supporting large models. We propose a method of formally rewriting the computational graph of a model where swap-out and swap-in operations are inserted to temporarily store intermediate results on CPU memory. By introducing a categorized topological ordering for simulating graph execution, the memory consumption of a model can be easily analyzed by using operation distances in the ordering. As a result, the problem of fitting a large model into a memory-limited accelerator is reduced to the problem of reducing operation distances in a categorized topological ordering. We then show how to formally derive swap-out and swap-in operations from an existing graph and present rules to optimize the graph. Finally, we propose a simulation-based auto-tuning to automatically find suitable graph-rewriting parameters for the best performance. We developed a module in TensorFlow, called LMS, by which we successfully trained ResNet-50 with a 4.9x larger mini-batch size and 3D U-Net with a 5.6x larger image resolution.
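The "operation distance" notion can be illustrated with Python's standard-library topological sorter on a made-up five-operation graph; a large distance between a tensor's producer and its consumer marks the tensor as a candidate for swapping out to CPU memory.

    from graphlib import TopologicalSorter

    # edges: op -> set of ops it depends on (a tiny synthetic graph)
    graph = {"conv1": set(), "relu1": {"conv1"}, "conv2": {"relu1"},
             "relu2": {"conv2"}, "add": {"relu1", "relu2"}}

    order = list(TopologicalSorter(graph).static_order())
    pos = {op: i for i, op in enumerate(order)}

    for op, deps in graph.items():
        for d in deps:
            dist = pos[op] - pos[d]   # how long d's output must stay live
            flag = "  <- swap candidate" if dist > 2 else ""
            print(f"{d} -> {op}: distance {dist}{flag}")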
Cross-institutional outcome prediction for head and neck cancer patients using self-attention neural networks
Le, W. T.
Vorontsov, E.
Romero, F. P.
Seddik, L.
Elsharief, M. M.
Nguyen-Tan, P. F.
Roberge, D.
Bahig, H.
Kadoury, S.
Sci Rep2022Journal Article, cited 0 times
Head-Neck-PET-CT
Multimodal Imaging
Aged
Aged, 80 and over
Attention
Biomarkers, Tumor
Carcinoma, Squamous Cell/diagnostic imaging/therapy
Deep Learning
Diagnosis, Computer-Assisted/methods
Female
Head and Neck Neoplasms/diagnostic imaging/therapy
Humans
Image Processing, Computer-Assisted/methods
Male
Middle Aged
Neoplasm Recurrence, Local/diagnostic imaging
Neural Networks, Computer
Positron Emission Tomography Computed Tomography
Prognosis
Quality of Life
Retrospective Studies
In radiation oncology, predicting patient risk stratification allows specialization of therapy intensification as well as selection between systemic and regional treatments, all of which helps to improve patient outcome and quality of life. Deep learning offers an advantage over traditional radiomics for medical image processing by learning salient features from training data originating from multiple datasets. However, while the large capacity of deep models allows them to combine high-level medical imaging data for outcome prediction, they can lack the generalization needed for use across institutions. In this work, a pseudo-volumetric convolutional neural network with a deep preprocessor module and self-attention (PreSANet) is proposed for the prediction of distant metastasis, locoregional recurrence, and overall survival occurrence probabilities within the 10-year follow-up time frame for head and neck cancer patients with squamous cell carcinoma. The model is capable of processing multi-modal inputs of variable scan length, as well as integrating patient data in the prediction model. These proposed architectural features and additional modalities all serve to extract additional information from the available data when access to additional samples is limited. This model was trained on the public Cancer Imaging Archive Head-Neck-PET-CT dataset consisting of 298 patients undergoing curative radio/chemo-radiotherapy and acquired from 4 different institutions. The model was further validated on an internal retrospective dataset with 371 patients acquired from one of the institutions in the training dataset. An extensive set of ablation experiments were performed to test the utility of the proposed model characteristics, achieving an AUROC of [Formula: see text], [Formula: see text] and [Formula: see text] for DM, LR and OS respectively on the public TCIA Head-Neck-PET-CT dataset. External validation was performed on a retrospective dataset with 371 patients, achieving [Formula: see text] AUROC in all outcomes. To test for model generalization across sites, a validation scheme consisting of single-site holdout and cross-validation combining both datasets was used. The mean accuracy across 4 institutions obtained was [Formula: see text], [Formula: see text] and [Formula: see text] for DM, LR and OS respectively. The proposed model demonstrates an effective method for multi-site, multi-modal tumor outcome prediction combining both volumetric data and structured patient clinical data.
GRAPH-BASED SIGNAL PROCESSING TO CONVOLUTIONAL NEURAL NETWORKS FOR MEDICAL IMAGE SEGMENTATION
Le-Tien, Thuong
To, Thanh-Nha
Vo, Giang
SEATUC journal of science and engineering2022Journal Article, cited 0 times
TCGA-LGG
Graph Signal Processing
Graph Convolutional Neural Network
Deep Learning
Image Segmentation
Medical Image
Automatic medical image segmentation is normally a difficult task because medical images are complex in nature; many researchers have therefore studied a variety of approaches to analyze image patterns. Applications of deep learning in medicine are a growing trend, especially Convolutional Neural Networks (CNNs) in the field of Computer Vision, which have yielded many remarkable results. In this paper, we propose a method to apply graph-based signal processing to CNN architectures for medical image segmentation. In particular, the proposed architecture uses graph convolution to extract image features instead of the traditional convolution of DSP (Digital Signal Processing). The proposed solution is effective in learning neighboring links. We also introduce a back-propagation algorithm that optimizes the weights of the graph filter and finds the adjacency matrix that fits the training data. The network model is then applied to a dataset of medical images to help detect abnormal areas.
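A single graph-convolution layer in the common Kipf-Welling normalized form, X' = sigma(D^-1/2 (A+I) D^-1/2 X W), illustrates the neighborhood aggregation this line of work builds on; this generic layer is a sketch, not the paper's exact architecture.

    import numpy as np

    def gcn_layer(A, X, W):
        A_hat = A + np.eye(A.shape[0])              # add self-loops
        d = A_hat.sum(axis=1)
        D_inv_sqrt = np.diag(1.0 / np.sqrt(d))     # symmetric normalization
        return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0)  # ReLU

    A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # 3-node path graph
    X = np.random.rand(3, 4)                                 # node features
    W = np.random.rand(4, 2)                                 # learnable weights
    print(gcn_layer(A, X, W))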
A Three-Dimensional-Printed Patient-Specific Phantom for External Beam Radiation Therapy of Prostate Cancer
Lee, Christopher L
Dietrich, Max C
Desai, Uma G
Das, Ankur
Yu, Suhong
Xiang, Hong F
Jaffe, C Carl
Hirsch, Ariel E
Bloch, B Nicolas
Journal of Engineering and Science in Medical Diagnostics and Therapy2018Journal Article, cited 0 times
Website
Prostate Cancer
3D Printed Phantom
Radiation Therapy
Deformation driven Seq2Seq longitudinal tumor and organs‐at‐risk prediction for radiotherapy
Lee, Donghoon
Alam, Sadegh R.
Jiang, Jue
Zhang, Pengpeng
Nadeem, Saad
Hu, Yu‐chi
Medical Physics2021Journal Article, cited 0 times
HNSCC-3DCT-RT
PURPOSE: Radiotherapy presents unique challenges and clinical requirements for longitudinal tumor and organ-at-risk (OAR) prediction during treatment. The challenges include tumor inflammation/edema and radiation-induced changes in organ geometry, whereas the clinical requirements demand flexibility in input/output sequence timepoints to update the predictions on rolling basis and the grounding of all predictions in relationship to the pre-treatment imaging information for response and toxicity assessment in adaptive radiotherapy.
METHODS: To deal with the aforementioned challenges and to comply with the clinical requirements, we present a novel 3D sequence-to-sequence model based on Convolution Long Short-Term Memory (ConvLSTM) that makes use of series of deformation vector fields (DVFs) between individual timepoints and reference pre-treatment/planning CTs to predict future anatomical deformations and changes in gross tumor volume as well as critical OARs. High-quality DVF training data are created by employing hyper-parameter optimization on the subset of the training data with DICE coefficient and mutual information metric. We validated our model on two radiotherapy datasets: a publicly available head-and-neck dataset (28 patients with manually contoured pre-, mid-, and post-treatment CTs), and an internal non-small cell lung cancer dataset (63 patients with manually contoured planning CT and 6 weekly CBCTs).
RESULTS: The use of DVF representation and skip connections overcomes the blurring issue of ConvLSTM prediction with the traditional image representation. The mean and standard deviation of DICE for predictions of lung GTV at weeks 4, 5, and 6 were 0.83 ± 0.09, 0.82 ± 0.08, and 0.81 ± 0.10, respectively, and for post-treatment ipsilateral and contralateral parotids, were 0.81 ± 0.06 and 0.85 ± 0.02.
CONCLUSION: We presented a novel DVF-based Seq2Seq model for medical images, leveraging the complete 3D imaging information of a relatively large longitudinal clinical dataset, to carry out longitudinal GTV/OAR predictions for anatomical changes in HN and lung radiotherapy patients, which has potential to improve RT outcomes.
High quality imaging from sparsely sampled computed tomography data with deep learning and wavelet transform in various domains
Lee, Donghoon
Choi, Sunghoon
Kim, Hee‐Joung
Medical Physics2018Journal Article, cited 0 times
Website
LungCT-Diagnosis
wavelet
deep learning
Radiomics
Restoration of Full Data from Sparse Data in Low-Dose Chest Digital Tomosynthesis Using Deep Convolutional Neural Networks
Lee, Donghoon
Kim, Hee-Joung
Journal of Digital Imaging2018Journal Article, cited 0 times
Website
SPIE-AAPM Lung CT Challenge
model-based iterative reconstruction (MBIR)
structural similarity index measure (SSIM)
Evaluation of the Usefulness of Detection of Abdominal CT Kidney and Vertebrae using Deep Learning
Journal of the Korean Society of Radiology2021Journal Article, cited 0 times
Pancreas-CT
Computer Aided Detection (CADe)
Deep Learning
CT plays an important role in the medical field, for example in disease diagnosis, but the number of examinations and CT images keeps increasing. Recently, deep learning has been actively used in the medical field, including for computer-aided diagnosis through object detection on medical images. The purpose of this study was to evaluate the accuracy of detecting the kidneys and vertebrae in abdominal CT using YOLOv3 object detection. The detection accuracies for the kidneys and vertebrae were 83.00% and 82.45%, respectively, and these results can serve as baseline data for object detection in medical images using deep learning.
Integrative Radiogenomics Approach for Risk Assessment of Post-Operative Metastasis in Pathological T1 Renal Cell Carcinoma: A Pilot Retrospective Cohort Study
Lee, H. W.
Cho, H. H.
Joung, J. G.
Jeon, H. G.
Jeong, B. C.
Jeon, S. S.
Lee, H. M.
Nam, D. H.
Park, W. Y.
Kim, C. K.
Seo, S. I.
Park, H.
Cancers (Basel)2020Journal Article, cited 0 times
Website
TCGA-KIRC
Radiogenomics
KIDNEY
Despite the increasing incidence of pathological stage T1 renal cell carcinoma (pT1 RCC), postoperative distant metastases develop in many surgically treated patients, causing death in certain cases. Therefore, this study aimed to create a radiomics model using imaging features from multiphase computed tomography (CT) to more accurately predict the postoperative metastasis of pT1 RCC and further investigate the possible link between radiomics parameters and gene expression profiles generated by whole transcriptome sequencing (WTS). Four radiomic features, including the minimum value of a histogram feature from inner regions of interest (ROIs) (INNER_Min_hist), the histogram energy feature from outer ROIs (OUTER_Energy_Hist), the maximum probability of the gray-level co-occurrence matrix (GLCM) feature from inner ROIs (INNER_MaxProb_GLCM), and the ratio of voxels under 80 Hounsfield units (HU) in the nephrographic phase of postcontrast CT (Under80HURatio), were identified to predict postsurgical metastasis in patients with pathological stage T1 RCC, and the clinical outcomes of patients could be successfully stratified based on their radiomic risk scores. Furthermore, we identified heterogeneous-trait-associated gene signatures correlated with these four radiomic features, which captured clinically relevant molecular pathways, the tumor immune microenvironment, and potential treatment strategies. These accurate radiogenomic surrogates could help identify pT1 RCC patients who may benefit from adjuvant therapy or who are at risk of postsurgical metastasis.
Comparison of novel multi-level Otsu (MO-PET) and conventional PET segmentation methods for measuring FDG metabolic tumor volume in patients with soft tissue sarcoma
Lee, Inki
Im, Hyung-Jun
Solaiyappan, Meiyappan
Cho, Steve Y
EJNMMI physics2017Journal Article, cited 0 times
Website
Soft-tissue Sarcoma
Algorithm Development
Segmentation
Volumetric and Voxel-Wise Analysis of Dominant Intraprostatic Lesions on Multiparametric MRI
Lee, Joon
Carver, Eric
Feldman, Aharon
Pantelic, Milan V
Elshaikh, Mohamed
Wen, Ning
Front Oncol2019Journal Article, cited 0 times
SPIE-AAPM PROSTATEx Challenge
Radiomics
Classification
Introduction: Multiparametric MR imaging (mpMRI) has shown promising results in the diagnosis and localization of prostate cancer. Furthermore, mpMRI may play an important role in identifying the dominant intraprostatic lesion (DIL) for radiotherapy boost. We sought to investigate the level of correlation between dominant tumor foci contoured on various mpMRI sequences. Methods: mpMRI data from 90 patients with MR-guided biopsy-proven prostate cancer were obtained from the SPIE-AAPM-NCI Prostate MR Classification Challenge. Each case consisted of T2-weighted (T2W), apparent diffusion coefficient (ADC), and K(trans) images computed from dynamic contrast-enhanced sequences. All image sets were rigidly co-registered, and the dominant tumor foci were identified and contoured for each MRI sequence. Hausdorff distance (HD), mean distance to agreement (MDA), and Dice and Jaccard coefficients were calculated between the contours for each pair of MRI sequences (i.e., T2 vs. ADC, T2 vs. K(trans), and ADC vs. K(trans)). The voxel-wise Spearman correlation was also obtained between these image pairs. Results: The DILs were located in the anterior fibromuscular stroma, central zone, peripheral zone, and transition zone in 35.2, 5.6, 32.4, and 25.4% of patients, respectively. Gleason grade groups 1-5 represented 29.6, 40.8, 15.5, and 14.1% of the study population, respectively (with grade groups 4 and 5 analyzed together). The mean contour volumes for the T2W images, and the ADC and K(trans) maps were 2.14 +/- 2.1, 2.22 +/- 2.2, and 1.84 +/- 1.5 mL, respectively. K(trans) values were indistinguishable between cancerous regions and the rest of the prostatic regions for 19 patients. The Dice coefficient and Jaccard index were 0.74 +/- 0.13, 0.60 +/- 0.15 for T2W-ADC and 0.61 +/- 0.16, 0.46 +/- 0.16 for T2W-K(trans). The voxel-based Spearman correlations were 0.20 +/- 0.20 for T2W-ADC and 0.13 +/- 0.25 for T2W-K(trans). Conclusions: The DIL contoured on T2W images had a high level of agreement with those contoured on ADC maps, but there was little to no quantitative correlation of these results with tumor location and Gleason grade group. Technical hurdles are yet to be solved for precision radiotherapy targeting the DILs based on physiological imaging. A Boolean sum volume (BSV) incorporating all available MR sequences may be reasonable for delineating the DIL boost volume.
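The agreement metrics used above (Dice, Jaccard, Hausdorff distance) are straightforward to compute for two binary masks; a NumPy/SciPy sketch with synthetic contours.

    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    a = np.zeros((64, 64), bool); a[20:40, 20:40] = True   # contour on T2W
    b = np.zeros((64, 64), bool); b[24:44, 22:42] = True   # contour on ADC

    inter = np.logical_and(a, b).sum()
    dice = 2 * inter / (a.sum() + b.sum())
    jaccard = inter / np.logical_or(a, b).sum()

    # Symmetric Hausdorff distance over the voxel coordinates of each mask.
    pa, pb = np.argwhere(a), np.argwhere(b)
    hd = max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
    print(f"Dice {dice:.3f}  Jaccard {jaccard:.3f}  HD {hd:.1f} px")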
Prognostic value and molecular correlates of a CT image-based quantitative pleural contact index in early stage NSCLC
Lee, Juheon
Cui, Yi
Sun, Xiaoli
Li, Bailiang
Wu, Jia
Li, Dengwang
Gensheimer, Michael F
Loo, Billy W
Diehn, Maximilian
Li, Ruijiang
European Radiology2018Journal Article, cited 3 times
Website
NSCLC-Radiomics
LUNG
Radiogenomics
Radiomics
PURPOSE: To evaluate the prognostic value and molecular basis of a CT-derived pleural contact index (PCI) in early stage non-small cell lung cancer (NSCLC). EXPERIMENTAL DESIGN: We retrospectively analysed seven NSCLC cohorts. A quantitative PCI was defined on CT as the length of the tumour-pleura interface normalised by tumour diameter. We evaluated the prognostic value of PCI in a discovery cohort (n = 117) and tested in an external cohort (n = 88) of stage I NSCLC. Additionally, we identified the molecular correlates and built a gene expression-based surrogate of PCI using another cohort of 89 patients. To further evaluate the prognostic relevance, we used four datasets totalling 775 stage I patients with publicly available gene expression data and linked survival information. RESULTS: At a cutoff of 0.8, PCI stratified patients for overall survival in both imaging cohorts (log-rank p = 0.0076, 0.0304). Extracellular matrix (ECM) remodelling was enriched among genes associated with PCI (p = 0.0003). The genomic surrogate of PCI remained an independent predictor of overall survival in the gene expression cohorts (hazard ratio: 1.46, p = 0.0007) adjusting for age, gender, and tumour stage. CONCLUSIONS: CT-derived pleural contact index is associated with ECM remodelling and may serve as a noninvasive prognostic marker in early stage NSCLC. KEY POINTS: * A quantitative pleural contact index (PCI) predicts survival in early stage NSCLC. * PCI is associated with extracellular matrix organisation and collagen catabolic process. * A multi-gene surrogate of PCI is an independent predictor of survival. * PCI can be used to noninvasively identify patients with poor prognosis.
Texture feature ratios from relative CBV maps of perfusion MRI are associated with patient survival in glioblastoma
Lee, J
Jain, R
Khalil, K
Griffith, B
Bosca, R
Rao, G
Rao, A
American Journal of Neuroradiology2016Journal Article, cited 27 times
Website
TCGA-GBM
Texture analysis
BACKGROUND AND PURPOSE: Texture analysis has been applied to medical images to assist in tumor tissue classification and characterization. In this study, we obtained textural features from parametric (relative CBV) maps of dynamic susceptibility contrast-enhanced MR images in glioblastoma and assessed their relationship with patient survival. MATERIALS AND METHODS: MR perfusion data of 24 patients with glioblastoma from The Cancer Genome Atlas were analyzed in this study. One- and two-dimensional texture feature ratios and kinetic textural features based on relative CBV values in the contrast-enhancing and nonenhancing lesions of the tumor were obtained. Receiver operating characteristic, Kaplan-Meier, and multivariate Cox proportional hazards regression analyses were used to assess the relationship between texture feature ratios and overall survival. RESULTS: Several feature ratios are capable of stratifying survival in a statistically significant manner. These feature ratios correspond to homogeneity (P = .008, based on the log-rank test), angular second moment (P = .003), inverse difference moment (P = .013), and entropy (P = .008). Multivariate Cox proportional hazards regression analysis showed that homogeneity, angular second moment, inverse difference moment, and entropy from the contrast-enhancing lesion were significantly associated with overall survival. For the nonenhancing lesion, skewness and variance ratios of relative CBV texture were associated with overall survival in a statistically significant manner. For the kinetic texture analysis, the Haralick correlation feature showed a P value close to .05. CONCLUSIONS: Our study revealed that texture feature ratios from contrast-enhancing and nonenhancing lesions and kinetic texture analysis obtained from perfusion parametric maps provide useful information for predicting survival in patients with glioblastoma.
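The Haralick-type features named here (homogeneity, angular second moment, inverse difference moment, entropy) derive from a grey-level co-occurrence matrix and can be reproduced with scikit-image (version 0.19+ for the graycomatrix spelling). A sketch on a quantised rCBV patch; the ratio construction between enhancing and nonenhancing regions is assumed for illustration:

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def haralick_features(patch, levels=32):
        """GLCM homogeneity (inverse difference moment), ASM, and
        entropy for one 2D patch quantised to integers in [0, levels)."""
        glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                            levels=levels, symmetric=True, normed=True)
        p = glcm.mean(axis=(2, 3))              # average over offsets
        entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
        return {"homogeneity": graycoprops(glcm, "homogeneity").mean(),
                "ASM": graycoprops(glcm, "ASM").mean(),
                "entropy": entropy}

    # Hypothetical feature ratio between contrast-enhancing (cel) and
    # nonenhancing (nel) lesion patches:
    # ratios = {k: haralick_features(cel)[k] / haralick_features(nel)[k]
    #           for k in ("homogeneity", "ASM", "entropy")}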
Spatial Habitat Features Derived from Multiparametric Magnetic Resonance Imaging Data Are Associated with Molecular Subtype and 12-Month Survival Status in Glioblastoma Multiforme
Lee, Joonsang
Narang, Shivali
Martinez, Juan
Rao, Ganesh
Rao, Arvind
PLoS One2015Journal Article, cited 14 times
Website
TCGA-GBM
Glioblastoma
Radiomics
Magnetic Resonance Imaging (MRI)
Glioblastoma multiforme is one of the most common and aggressive malignant brain tumors. Despite multimodality treatment such as radiation therapy and chemotherapy (temozolomide, TMZ), the median survival of glioblastoma patients is less than 15 months. In this study, we investigated the association of measures of spatial diversity, derived from spatial point pattern analysis of multiparametric magnetic resonance imaging (MRI) data, with molecular status as well as 12-month survival in glioblastoma. We obtained 27 measures of spatial proximity (diversity) via spatial point pattern analysis of multiparametric T1 post-contrast and T2 fluid-attenuated inversion recovery MRI data. These measures were used to predict 12-month survival status (≤12 or >12 months) in 74 glioblastoma patients. Kaplan-Meier and receiver operating characteristic analyses were used to assess the relationship between the derived spatial features and 12-month survival status as well as molecular subtype status in patients with glioblastoma. Kaplan-Meier survival analysis revealed that 14 spatial features were capable of stratifying overall survival in a statistically significant manner. For prediction of 12-month survival status based on these diversity indices, sensitivity and specificity were 0.86 and 0.64, respectively. The area under the receiver operating characteristic curve and the accuracy were 0.76 and 0.75, respectively. For prediction of molecular subtype status, the proneural subtype showed the highest accuracy, 0.93, among all molecular subtypes based on receiver operating characteristic analysis. We find that measures of spatial diversity from point pattern analysis of intensity habitats from T1 post-contrast and T2 fluid-attenuated inversion recovery images are associated with both tumor subtype status and 12-month survival status and may therefore be useful indicators of patient prognosis, in addition to providing potential guidance for molecularly targeted therapies in glioblastoma multiforme.
Associating spatial diversity features of radiologically defined tumor habitats with epidermal growth factor receptor driver status and 12-month survival in glioblastoma: methods and preliminary investigation
Lee, Joonsang
Narang, Shivali
Martinez, Juan J
Rao, Ganesh
Rao, Arvind
Journal of Medical Imaging2015Journal Article, cited 15 times
Website
TCGA-GBM
Radiogenomics
Radiomics
Magnetic Resonance Imaging (MRI)
We analyzed the spatial diversity of tumor habitats, regions with distinctly different intensity characteristics of a tumor, using various measurements of habitat diversity within tumor regions. These features were then used for investigating the association with a 12-month survival status in glioblastoma (GBM) patients and for the identification of epidermal growth factor receptor (EGFR)-driven tumors. T1 postcontrast and T2 fluid attenuated inversion recovery images from 65 GBM patients were analyzed in this study. A total of 36 spatial diversity features were obtained based on pixel abundances within regions of interest. Performance in both the classification tasks was assessed using receiver operating characteristic (ROC) analysis. For association with 12-month overall survival, area under the ROC curve was 0.74 with confidence intervals [0.630 to 0.858]. The sensitivity and specificity at the optimal operating point ([Formula: see text]) on the ROC were 0.59 and 0.75, respectively. For the identification of EGFR-driven tumors, the area under the ROC curve (AUC) was 0.85 with confidence intervals [0.750 to 0.945]. The sensitivity and specificity at the optimal operating point ([Formula: see text]) on the ROC were 0.76 and 0.83, respectively. Our findings suggest that these spatial habitat diversity features are associated with these clinical characteristics and could be a useful prognostic tool for magnetic resonance imaging studies of patients with GBM.
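The ROC analyses reported in these habitat studies (AUC plus sensitivity and specificity at an optimal operating point) are reproducible with scikit-learn. A minimal sketch using Youden's J to pick the operating point, which is one common choice and an assumption here since the abstracts do not state the criterion:

    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    def roc_with_operating_point(y_true, y_score):
        """AUC plus sensitivity/specificity at the Youden-optimal cut."""
        fpr, tpr, thresholds = roc_curve(y_true, y_score)
        j = np.argmax(tpr - fpr)        # Youden's J = sens + spec - 1
        return (roc_auc_score(y_true, y_score),
                tpr[j], 1.0 - fpr[j], thresholds[j])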
Spatiotemporal genomic architecture informs precision oncology in glioblastoma
Lee, Jin-Ku
Wang, Jiguang
Sa, Jason K.
Ladewig, Erik
Lee, Hae-Ock
Lee, In-Hee
Kang, Hyun Ju
Rosenbloom, Daniel S.
Camara, Pablo G.
Liu, Zhaoqi
van Nieuwenhuizen, Patrick
Jung, Sang Won
Choi, Seung Won
Kim, Junhyung
Chen, Andrew
Kim, Kyu-Tae
Shin, Sang
Seo, Yun Jee
Oh, Jin-Mi
Shin, Yong Jae
Park, Chul-Kee
Kong, Doo-Sik
Seol, Ho Jun
Blumberg, Andrew
Lee, Jung-Il
Iavarone, Antonio
Park, Woong-Yang
Rabadan, Raul
Nam, Do-Hyun
Nat Genet2017Journal Article, cited 45 times
Website
TCGA-GBM
Genomics
Precision medicine in cancer proposes that genomic characterization of tumors can inform personalized targeted therapies. However, this proposition is complicated by spatial and temporal heterogeneity. Here we study genomic and expression profiles across 127 multisector or longitudinal specimens from 52 individuals with glioblastoma (GBM). Using bulk and single-cell data, we find that samples from the same tumor mass share genomic and expression signatures, whereas geographically separated, multifocal tumors and/or long-term recurrent tumors are seeded from different clones. Chemical screening of patient-derived glioma cells (PDCs) shows that therapeutic response is associated with genetic similarity, and multifocal tumors that are enriched with PIK3CA mutations have a heterogeneous drug-response pattern. We show that targeting truncal events is more efficacious than targeting private events in reducing the tumor burden. In summary, this work demonstrates that evolutionary inference from integrated genomic analysis in multisector biopsies can inform targeted therapeutic interventions for patients with GBM.
Added prognostic value of 3D deep learning-derived features from preoperative MRI for adult-type diffuse gliomas
Lee, J. O.
Ahn, S. S.
Choi, K. S.
Lee, J.
Jang, J.
Park, J. H.
Hwang, I.
Park, C. K.
Park, S. H.
Chung, J. W.
Choi, S. H.
Neuro Oncol2024Journal Article, cited 0 times
TCGA-GBM
Adult
Humans
Prognosis
*Brain Neoplasms/diagnostic imaging/genetics
*Deep Learning
Retrospective Studies
*Glioma/diagnostic imaging/genetics/surgery
Magnetic Resonance Imaging/methods
Deep learning
Glioblastoma
Isocitrate dehydrogenase (IDH) mutation
Magnetic Resonance Imaging (MRI)
Survival analysis
BACKGROUND: To investigate the prognostic value of spatial features from whole-brain MRI using a three-dimensional (3D) convolutional neural network for adult-type diffuse gliomas. METHODS: In a retrospective, multicenter study, 1925 diffuse glioma patients were enrolled from 5 datasets: SNUH (n = 708), UPenn (n = 425), UCSF (n = 500), TCGA (n = 160), and Severance (n = 132). The SNUH and Severance datasets served as external test sets. Precontrast and postcontrast 3D T1-weighted, T2-weighted, and T2-FLAIR images were processed as multichannel 3D images. A 3D-adapted SE-ResNeXt model was trained to predict overall survival. The prognostic value of the deep learning-based prognostic index (DPI), a spatial feature-derived quantitative score, and established prognostic markers were evaluated using Cox regression. Model evaluation was performed using the concordance index (C-index) and Brier score. RESULTS: The MRI-only median DPI survival prediction model achieved C-indices of 0.709 and 0.677 (BS = 0.142 and 0.215) and survival differences (P < 0.001 and P = 0.002; log-rank test) for the SNUH and Severance datasets, respectively. Multivariate Cox analysis revealed DPI as a significant prognostic factor, independent of clinical and molecular genetic variables: hazard ratio = 0.032 and 0.036 (P < 0.001 and P = 0.004) for the SNUH and Severance datasets, respectively. Multimodal prediction models achieved higher C-indices than models using only clinical and molecular genetic variables: 0.783 vs. 0.774, P = 0.001, SNUH; 0.766 vs. 0.748, P = 0.023, Severance. CONCLUSIONS: The global morphologic feature derived from 3D CNN models using whole-brain MRI has independent prognostic value for diffuse gliomas. Combining clinical, molecular genetic, and imaging data yields the best performance.
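The concordance index used to evaluate the DPI is a one-liner with the lifelines package; a sketch on toy data (all values below are synthetic, for illustration only):

    import numpy as np
    from lifelines.utils import concordance_index

    rng = np.random.default_rng(0)
    times = rng.exponential(20.0, size=100)     # follow-up in months
    events = rng.integers(0, 2, size=100)       # 1 = death observed
    score = times + rng.normal(0, 5, size=100)  # toy prognostic index

    # A higher score should track longer survival for C-index > 0.5
    c = concordance_index(times, score, event_observed=events)
    print(f"C-index: {c:.3f}")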
Evaluation of few-shot detection of head and neck anatomy in CT
The detection of anatomical structures in medical imaging data plays a crucial role as a preprocessing step for various downstream tasks. It, however, poses a significant challenge due to highly variable appearances and intensity values within medical imaging data. In addition, there is a scarcity of annotated datasets in medical imaging, due to high costs and the requirement for specialized knowledge. These limitations motivate researchers to develop automated and accurate few-shot object detection approaches. While there are general-purpose deep learning models available for detecting objects in natural images, the applicability of these models to medical imaging data remains uncertain and needs to be validated. To address this, we carry out an unbiased evaluation of the state-of-the-art few-shot object detection methods for detecting head and neck anatomy in CT images. In particular, we choose Query Adaptive Few-Shot Object Detection (QA-FewDet), Meta Faster R-CNN, and Few-Shot Object Detection with Fully Cross-Transformer (FCT) methods and apply each model to detect various anatomical structures using novel datasets containing only a few images, ranging from 1- to 30-shot, during the fine-tuning stage. Our experimental results, carried out under the same setting, demonstrate that few-shot object detection methods can accurately detect anatomical structures, showing promising potential for integration into the clinical workflow.
Quality of Radiomic Features in Glioblastoma Multiforme: Impact of Semi-Automated Tumor Segmentation Software
Lee, Myungeun
Woo, Boyeong
Kuo, Michael D
Jamshidi, Neema
Kim, Jong Hyo
Korean journal of radiology2017Journal Article, cited 7 times
Website
TCGA-GBM
Radiomics
BRAIN
Magnetic Resonance Imaging (MRI)
Likelihood-based bilateral filters for pre-estimated basis sinograms using photon-counting CT
Lee, Okkyun
Medical Physics2023Journal Article, cited 0 times
Pancreas-CT
BACKGROUND: Noise amplification in material decomposition is an issue for exploiting photon-counting computed tomography (PCCT). Regularization techniques and neighborhood filters have been widely used, but degraded spatial resolution and bias are concerns.
PURPOSE: This paper proposes likelihood-based bilateral filters that can be applied to pre-estimated basis sinograms to reduce the noise while minimally affecting spatial resolution and accuracy.
METHODS: The proposed method needs system models (e.g., incident spectrum, detector response) to calculate the likelihood. First, it performs maximum likelihood (ML)-based estimation in the projection domain to obtain basis sinograms. The estimated basis sinograms suffer from severe noise but are asymptotically unbiased without degrading spatial resolution. Then it calculates the neighborhood likelihoods for a given measurement at the center pixel using the neighborhood estimates and designs the weights based on the distance of likelihoods. It is also analyzed in terms of statistical inference, and then two variations of the filter are introduced: one that requires the significance level instead of the empirical hyperparameter. The other is a measurement-based filter, which can be applied when accurate estimates are given without the system models. The proposed methods were validated by analyzing the local property of noise and spatial resolution and the global trend of noise and bias using numerical thorax and abdominal phantoms for a two-material decomposition (water and bone). They were compared to the conventional neighborhood filters and the model-based iterative reconstruction with an edge-preserving penalty applied in the basis images.
RESULTS: The proposed method showed comparable or superior performance for the local and global properties to conventional methods in many cases. The thorax phantom: the full width at half maximum (FWHM) decreased by -2% to 31% (a negative value indicates an increase relative to the best-performing conventional method), and the global bias was reduced by 2%-19% compared to other methods at similar noise levels (local: 51% of the ML, global: 49%) in the water basis image. The FWHM decreased by 8%-31%, and the global bias was reduced by 9%-44% at similar noise levels (local: 44% of the ML, global: 36%) in the CT image at 65 keV. The abdominal phantom: the FWHM decreased by 10%-32%, and the global bias was reduced by 3%-35% compared to other methods at similar noise levels (local: 66% of the ML, global: 67%) in the water basis image. The FWHM decreased by -11% to 47%, and the global bias was reduced by 13%-35% at similar noise levels (local: 71% of the ML, global: 70%) in the CT image at 60 keV.
CONCLUSIONS: This paper introduced the likelihood-based bilateral filters as a post-processing method applied to the ML-based estimates of basis sinograms. The proposed filters effectively reduced the noise in the basis images and the synthesized monochromatic CT images. It showed the potential of using likelihood-based filters in the projection domain as a substitute for conventional regularization or filtering methods.
A curated mammography data set for use in computer-aided detection and diagnosis research
Lee, Rebecca Sawyer
Gimenez, Francisco
Hoogi, Assaf
Miyake, Kanae Kawai
Gorovoy, Mia
Rubin, Daniel L.
Scientific Data2017Journal Article, cited 702 times
Website
CBIS-DDSM
Mammography
Image Enhancement
Published research results are difficult to replicate due to the lack of a standard evaluation data set in the area of decision support systems in mammography; most computer-aided diagnosis (CADx) and detection (CADe) algorithms for breast cancer in mammography are evaluated on private data sets or on unspecified subsets of public databases. This causes an inability to directly compare the performance of methods or to replicate prior results. We seek to resolve this substantial challenge by releasing an updated and standardized version of the Digital Database for Screening Mammography (DDSM) for evaluation of future CADx and CADe systems (sometimes referred to generally as CAD) research in mammography. Our data set, the CBIS-DDSM (Curated Breast Imaging Subset of DDSM), includes decompressed images, data selection and curation by trained mammographers, updated mass segmentation and bounding boxes, and pathologic diagnosis for training data, formatted similarly to modern computer vision data sets. The data set contains 753 calcification cases and 891 mass cases, providing a data-set size capable of analyzing decision support systems in mammography.
Are radiomics features universally applicable to different organs?
Lee, S. H.
Cho, H. H.
Kwon, J.
Lee, H. Y.
Park, H.
Cancer Imaging2021Journal Article, cited 0 times
Website
NSCLC-Radiomics
LungCT-Diagnosis
TCGA-KIRC
CPTAC-GBM
Computed Tomography (CT)
Magnetic Resonance Imaging (MRI)
BACKGROUND: Many studies have successfully identified radiomics features reflecting macroscale tumor features and tumor microenvironment for various organs. There is an increased interest in applying these radiomics features found in a given organ to other organs. Here, we explored whether common radiomics features could be identified over target organs in vastly different environments. METHODS: Four datasets of three organs were analyzed. One radiomics model was constructed from the training set (lungs, n = 401), and was further evaluated in three independent test sets spanning three organs (lungs, n = 59; kidneys, n = 48; and brains, n = 43). Intensity histograms derived from the whole organ were compared to establish organ-level differences. We constructed a radiomics score based on selected features using training lung data over the tumor region. A total of 143 features were computed for each tumor. We adopted a feature selection approach that favored stable features, which can also capture survival. The radiomics score was applied to three independent test data from lung, kidney, and brain tumors, and whether the score could be used to separate high- and low-risk groups, was evaluated. RESULTS: Each organ showed a distinct pattern in the histogram and the derived parameters (mean and median) at the organ-level. The radiomics score trained from the lung data of the tumor region included seven features, and the score was only effective in stratifying survival for other lung data, not in other organs such as the kidney and brain. Eliminating the lung-specific feature (2.5 percentile) from the radiomics score led to similar results. There were no common features between training and test sets, but a common category of features (texture category) was identified. CONCLUSION: Although the possibility of a generally applicable model cannot be excluded, we suggest that radiomics score models for survival were mostly specific for a given organ; applying them to other organs would require careful consideration of organ-specific properties.
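Evaluating whether such a radiomics score stratifies survival in an external organ reduces to a median split plus a log-rank test. A sketch with lifelines, where the score (a weighted combination of the seven selected features) is assumed to be precomputed:

    import numpy as np
    from lifelines.statistics import logrank_test

    def stratify_by_score(score, times, events):
        """Split patients at the median score and log-rank test the
        resulting high- vs. low-risk survival curves."""
        high = score >= np.median(score)
        result = logrank_test(times[high], times[~high],
                              event_observed_A=events[high],
                              event_observed_B=events[~high])
        return result.p_value

    # Hypothetical usage: score = features @ weights
    # p = stratify_by_score(score, times, events)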
HGG and LGG Brain Tumor Segmentation in Multi-Modal MRI Using Pretrained Convolutional Neural Networks of Amazon Sagemaker
Lefkovits, S.
Lefkovits, L.
Szilagyi, L.
Applied Sciences-Basel2022Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
BraTS 2020
BRAIN
Segmentation
Magnetic Resonance Imaging (MRI)
Deep learning
Convolutional Neural Network (CNN)
Cloud computing
Automatic brain tumor segmentation from multimodal MRI plays a significant role in assisting the diagnosis, treatment, and surgery of glioblastoma and lower grade glioma. In this article, we propose applying several deep learning techniques implemented in the AWS SageMaker framework. The different CNN architectures are adapted and fine-tuned for our purpose of brain tumor segmentation. The experiments are evaluated and analyzed to obtain the best possible parameters for the models created. The selected architectures are trained on the publicly available BraTS 2017-2020 dataset. The segmentation distinguishes the background, healthy tissue, whole tumor, edema, enhanced tumor, and necrosis. Further, a random search for parameter optimization is presented to additionally improve the architectures obtained. Lastly, we also compute the detection results of the ensemble model created from the weighted average of the six models described. The goal of the ensemble is to improve the segmentation at the tumor tissue boundaries. Our results are compared to the BraTS 2020 competition and leaderboard and fall within the top 25% of the Dice-score ranking.
Brain Tumor Segmentation and Survival Prediction Using a Cascade of Random Forests
Lefkovits, Szidónia
Szilágyi, László
Lefkovits, László
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Brain tumor segmentation is a difficult task due to the strongly varying intensity and shape of gliomas. In this paper we propose a multi-stage discriminative framework for brain tumor segmentation based on BraTS 2018 dataset. The framework presented in this paper is a more complex segmentation system than our previous work presented at BraTS 2016. Here we propose a multi-stage discriminative segmentation model, where every stage is a binary classifier based on the random forest algorithm. Our multi-stage system attempts to follow the layered structure of tumor tissues provided in the annotation protocol. In each segmentation stage we dealt with four major difficulties: feature selection, determination of training database used, optimization of classifier performances and image post-processing. The framework was tested on the evaluation images from BraTS 2018. One of the most important results is the determination of the tumor ROI with a sensitivity of approximately 0.99 in stage I by considering only 16% of the brain in the subsequent stages. Based on the segmentation obtained we solved the survival prediction task using a random forest regressor. The results obtained are comparable to the best ones presented in previous BraTS Challenges.
High-dimensional regression analysis links magnetic resonance imaging features and protein expression and signaling pathway alterations in breast invasive carcinoma
Lehrer, M.
Bhadra, A.
Aithala, S.
Ravikumar, V.
Zheng, Y.
Dogan, B.
Bonaccio, E.
Burnside, E. S.
Morris, E.
Sutton, E.
Whitman, G. J.
Net, J.
Brandt, K.
Ganott, M.
Zuley, M.
Rao, A.
TCGA Breast Phenotype Research Group
Oncoscience2018Journal Article, cited 0 times
Website
TCGA-BRCA
MRI
Radiogenomics
breast invasive carcinoma
protein expression
signaling pathway analysis
Background: Imaging features derived from MRI scans can be used not only for breast cancer detection and for measuring disease extent, but also to determine gene expression and patient outcomes. The relationships between imaging features, gene/protein expression, and response to therapy hold potential to guide personalized medicine. We aim to characterize the relationship between radiologist-annotated tumor phenotypic features (based on MRI) and the underlying biological processes (based on proteomic profiling) in the tumor. Methods: Multiple-response regression of the image-derived, radiologist-scored features with reverse-phase protein array expression levels generated association coefficients for each combination of image feature and protein in the RPPA dataset. Significantly associated proteins for features were analyzed with Ingenuity Pathway Analysis software. Hierarchical clustering of the results of the pathway analysis determined which features were most strongly correlated with pathway activity and cellular functions. Results: Each of the twenty-nine imaging features was found to have a set of significantly correlated molecules, associated biological functions, and pathways. Conclusions: We interrogated the pathway alterations represented by the protein expression associated with each imaging feature. Our study demonstrates the relationships between biological processes (via proteomic measurements) and MRI features within breast tumors.
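Multiple-response regression of protein expression levels on image features can be sketched with scikit-learn's multi-task estimators; the fitted coefficient matrix then plays the role of the association coefficients described above (data shapes and names below are placeholders, not the study's actual pipeline):

    import numpy as np
    from sklearn.linear_model import MultiTaskLassoCV

    rng = np.random.default_rng(0)
    X = rng.normal(size=(80, 29))   # patients x radiologist-scored features
    Y = rng.normal(size=(80, 50))   # patients x RPPA protein levels

    # One jointly regularised fit across all protein responses;
    # coef_[j, i] associates image feature i with protein j.
    model = MultiTaskLassoCV(cv=5).fit(X, Y)
    assoc = model.coef_                               # (n_proteins, n_features)
    top10 = np.argsort(-np.abs(assoc), axis=0)[:10]   # top proteins per feature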
Multiple-response regression analysis links magnetic resonance imaging features to de-regulated protein expression and pathway activity in lower grade glioma
Lehrer, Michael
Bhadra, Anindya
Ravikumar, Visweswaran
Chen, James Y
Wintermark, Max
Hwang, Scott N
Holder, Chad A
Huang, Erich P
Fevrier-Sullivan, Brenda
Freymann, John B
Rao, Arvind
Oncoscience2017Journal Article, cited 1 times
Website
TCGA-LGG
VASARI
Radiogenomics
cBioPortal
imaging-proteomics analysis
signaling pathway activity
multiple-response regression
Radiomics
Lower-grade glioma (LGG)
BACKGROUND AND PURPOSE: Lower grade gliomas (LGGs), lesions of WHO grades II and III, comprise 10-15% of primary brain tumors. In this first-of-a-kind study, we aim to carry out a radioproteomic characterization of LGGs using proteomics data from the TCGA and imaging data from the TCIA cohorts, to obtain an association between tumor MRI characteristics and protein measurements. The availability of linked imaging and molecular data permits the assessment of relationships between tumor genomic/proteomic measurements with phenotypic features. MATERIALS AND METHODS: Multiple-response regression of the image-derived, radiologist scored features with reverse-phase protein array (RPPA) expression levels generated correlation coefficients for each combination of image-feature and protein or phospho-protein in the RPPA dataset. Significantly-associated proteins for VASARI features were analyzed with Ingenuity Pathway Analysis software. Hierarchical clustering of the results of the pathway analysis was used to determine which feature groups were most strongly correlated with pathway activity and cellular functions. RESULTS: The multiple-response regression approach identified multiple proteins associated with each VASARI imaging feature. VASARI features were found to be correlated with expression of IL8, PTEN, PI3K/Akt, Neuregulin, ERK/MAPK, p70S6K and EGF signaling pathways. CONCLUSION: Radioproteomics analysis might enable an insight into the phenotypic consequences of molecular aberrations in LGGs.
Automated lung tumor delineation on positron emission tomography/computed tomography via a hybrid regional network
Lei, Y.
Wang, T.
Jeong, J. J.
Janopaul-Naylor, J.
Kesarwala, A. H.
Roper, J.
Tian, S.
Bradley, J. D.
Liu, T.
Higgins, K.
Yang, X.
Med Phys2022Journal Article, cited 0 times
Website
Lung-PET-CT-Dx
Positron Emission Tomography (PET)
Computed Tomography (CT)
PET-CT
Deep learning
LUNG
Radiotherapy
Segmentation
BACKGROUND: Multimodality positron emission tomography/computed tomography (PET/CT) imaging combines the anatomical information of CT with the functional information of PET. In the diagnosis and treatment of many cancers, such as non-small cell lung cancer (NSCLC), PET/CT imaging allows more accurate delineation of tumor or involved lymph nodes for radiation planning. PURPOSE: In this paper, we propose a hybrid regional network method of automatically segmenting lung tumors from PET/CT images. METHODS: The hybrid regional network architecture synthesizes the functional and anatomical information from the two image modalities, whereas the mask regional convolutional neural network (R-CNN) and scoring fine-tune the regional location and quality of the output segmentation. This model consists of five major subnetworks, that is, a dual feature representation network (DFRN), a regional proposal network (RPN), a specific tumor-wise R-CNN, a mask-Net, and a score head. Given a PET/CT image as inputs, the DFRN extracts feature maps from the PET and CT images. Then, the RPN and R-CNN work together to localize lung tumors and reduce the image size and feature map size by removing irrelevant regions. The mask-Net is used to segment tumor within a volume-of-interest (VOI) with a score head evaluating the segmentation performed by the mask-Net. Finally, the segmented tumor within the VOI was mapped back to the volumetric coordinate system based on the location information derived via the RPN and R-CNN. We trained, validated, and tested the proposed neural network using 100 PET/CT images of patients with NSCLC. A fivefold cross-validation study was performed. The segmentation was evaluated with two indicators: (1) multiple metrics, including the Dice similarity coefficient, Jacard, 95th percentile Hausdorff distance, mean surface distance (MSD), residual mean square distance, and center-of-mass distance; (2) Bland-Altman analysis and volumetric Pearson correlation analysis. RESULTS: In fivefold cross-validation, this method achieved Dice and MSD of 0.84 +/- 0.15 and 1.38 +/- 2.2 mm, respectively. A new PET/CT can be segmented in 1 s by this model. External validation on The Cancer Imaging Archive dataset (63 PET/CT images) indicates that the proposed model has superior performance compared to other methods. CONCLUSION: The proposed method shows great promise to automatically delineate NSCLC tumors on PET/CT images, thereby allowing for a more streamlined clinical workflow that is faster and reduces physician effort.
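The volumetric agreement analyses mentioned above (Bland-Altman and Pearson correlation between predicted and manual volumes) are simple to reproduce; a sketch assuming per-patient tumour volumes in millilitres:

    import numpy as np
    from scipy.stats import pearsonr

    def bland_altman(v_pred, v_ref):
        """Bias and 95% limits of agreement between two volume series."""
        diff = np.asarray(v_pred) - np.asarray(v_ref)
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)
        return bias, (bias - loa, bias + loa)

    # r, p = pearsonr(v_pred, v_ref)            # volumetric correlation
    # bias, (lo, hi) = bland_altman(v_pred, v_ref)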
Machine learning models predict the primary sites of head and neck squamous cell carcinoma metastases based on DNA methylation
Multimodal analysis suggests differential immuno-metabolic crosstalk in lung squamous cell carcinoma and adenocarcinoma
Leitner, B. P.
Givechian, K. B.
Ospanova, S.
Beisenbayeva, A.
Politi, K.
Perry, R. J.
NPJ Precis Oncol2022Journal Article, cited 0 times
Website
TCGA-LUAD
TCGA-LUSC
LUNG
Radiomics
Radiogenomics
Immunometabolism within the tumor microenvironment is an appealing target for precision therapy approaches in lung cancer. Interestingly, obesity confers an improved response to immune checkpoint inhibition in non-small cell lung cancer (NSCLC), suggesting intriguing relationships between systemic metabolism and the immunometabolic environment in lung tumors. We hypothesized that visceral fat and (18)F-Fluorodeoxyglucose uptake influenced the tumor immunometabolic environment and that these bidirectional relationships differ in NSCLC subtypes, lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC). By integrating (18)F-FDG PET/CT imaging, bulk and single-cell RNA-sequencing, and histology, we observed that LUSC had a greater dependence on glucose than LUAD. In LUAD tumors with high glucose uptake, glutaminase was downregulated, suggesting a tradeoff between glucose and glutamine metabolism, while in LUSC tumors with high glucose uptake, genes related to fatty acid and amino acid metabolism were also increased. We found that tumor-infiltrating T cells had the highest expression of glutaminase, ribosomal protein 37, and cystathionine gamma-lyase in NSCLC, highlighting the metabolic flexibility of this cell type. Further, we demonstrate that visceral adiposity, but not body mass index (BMI), was positively associated with tumor glucose uptake in LUAD and that patients with high BMI had favorable prognostic transcriptional profiles, while tumors of patients with high visceral fat had poor prognostic gene expression. We posit that metabolic adjunct therapy may be more successful in LUSC rather than LUAD due to LUAD's metabolic flexibility and that visceral adiposity, not BMI alone, should be considered when developing precision medicine approaches for the treatment of NSCLC.
The Impact of Obesity on Tumor Glucose Uptake in Breast and Lung Cancer
Leitner, Brooks P.
Perry, Rachel J.
JNCI Cancer Spectrum2020Journal Article, cited 0 times
Website
HNSCC
QIN Breast
NSCLC Radiogenomics
Anti-PD-1_Lung
TCGA-LUAD
TCGA-LUSC
Soft-tissue Sarcoma
Obesity confers an increased incidence and poorer clinical prognosis in over ten cancer types. Paradoxically, obesity provides protection from poor outcomes in lung cancer. Mechanisms for the obesity-cancer links are not fully elucidated, with altered glucose metabolism being a promising candidate. Using 18F-Fluorodeoxyglucose positron-emission-tomography/computed-tomography images from The Cancer Imaging Archive, we explored the relationship between body mass index (BMI) and glucose metabolism in several cancers. In 188 patients (BMI: 27.7, SD = 5.1, Range = 17.4-49.3 kg/m2), higher BMI was associated with greater tumor glucose uptake in obesity-associated breast cancer (r = 0.36, p = 0.02), and with lower tumor glucose uptake in non-small-cell lung cancer (r = -0.26, p = 0.048) using two-sided Pearson correlations. No relationship was observed in soft tissue sarcoma or squamous cell carcinoma. Harnessing The National Cancer Institute’s open-access database, we demonstrate altered tumor glucose metabolism as a potential mechanism for the detrimental and protective effects of obesity on breast and lung cancer, respectively.
An Automated Prostate-cancer Prediction System (APPS) Based on Advanced DFO-ConGA2L Model using MRI Imaging Technique
Prostate cancer is a deadly disease that claims the lives of many men, largely because of shortcomings in its detection. Images from cancer patients contain important and intricate details that conventional diagnostic methods struggle to extract. This work establishes a novel Automated Prostate-cancer Prediction System (APPS) for detecting and classifying prostate cancer from MRI sequences. The input medical image is normalized using a Coherence Diffusion Filtering (CDFilter) approach for improved quality and contrast. Relevant properties are then extracted from the normalized image using morphological and texture feature extraction, which helps increase the classifier's accuracy. To train the classifier, the most important properties are selected using the Dragon Fly Optimized Feature Selection (DFO-FS) algorithm, which substantially improves overall diagnostic performance while reducing processing time. Based on the selected features, the new Convoluted Gated Axial Attention Learning Model (ConGA2L) then classifies prostate tissue in the MRI input as cancerous or healthy. This study compares and validates the performance of the APPS model across several criteria using publicly available prostate cancer data.
Multimodal Brain Tumor Classification
Lerousseau, Marvin
Deutsch, Eric
Paragios, Nikos
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Digital pathology
Magnetic Resonance Imaging (MRI)
multi-modal imaging
Cancer is a complex disease that provides various types of information depending on the scale of observation. While most tumor diagnostics are performed by observing histopathological slides, radiology images can yield additional knowledge that improves the efficacy of cancer diagnostics. This work investigates a deep learning method combining whole slide images and magnetic resonance images to classify tumors. In particular, our solution comprises a powerful, generic and modular architecture for whole slide image classification. Experiments are prospectively conducted on the 2020 Computational Precision Medicine challenge, in a three-class unbalanced classification task. We report cross-validation (resp. validation) balanced-accuracy, kappa and f1 of 0.913, 0.897 and 0.951 (resp. 0.91, 0.90 and 0.94). For research purposes, including reproducibility and direct performance comparisons, our final submitted models are usable off-the-shelf in a Docker image available at https://hub.docker.com/repository/docker/marvinler/cpm_2020_marvinler.
Automated Whole-Body Tumor Segmentation and Prognosis of Cancer on PET/CT
Automatic characterization of malignant disease is an important clinical need to facilitate early detection and treatment of cancer. A deep semi-supervised transfer learning approach was developed for automated whole-body tumor segmentation and prognosis on positron emission tomography (PET)/computed tomography (CT) scans using limited annotations. This study analyzed five datasets consisting of 408 prostate-specific membrane antigen (PSMA) PET/CT scans of prostate cancer patients and 611 18F-fluorodeoxyglucose (18F-FDG) PET/CT scans of lung, melanoma, lymphoma, head and neck, and breast cancer patients. Transfer learning generalized the segmentation task across PSMA and 18F-FDG PET/CT. Imaging measures quantifying molecular tumor burden were extracted from the predicted segmentations. Prognostic risk models were developed and evaluated on follow-up clinical measures, Kaplan-Meier survival analysis, and response assessment for patients with prostate, head and neck, and breast cancers, respectively. The proposed approach demonstrated accurate tumor segmentation and prognosis on PET/CT of patients across six cancer types.
LoDoPaB-CT, a benchmark dataset for low-dose computed tomography reconstruction
Leuschner, J.
Schmidt, M.
Baguer, D. O.
Maass, P.
Sci Data2021Journal Article, cited 0 times
Website
LIDC-IDRI
LDCT-and-Projection-data
Computed Tomography (CT)
Model
Deep learning approaches for tomographic image reconstruction have become very effective and have been demonstrated to be competitive in the field. Comparing these approaches is a challenging task as they rely to a great extent on the data and setup used for training. With the Low-Dose Parallel Beam (LoDoPaB)-CT dataset, we provide a comprehensive, open-access database of computed tomography images and simulated low photon count measurements. It is suitable for training and comparing deep learning methods as well as classical reconstruction approaches. The dataset contains over 40000 scan slices from around 800 patients selected from the LIDC/IDRI database. The data selection and simulation setup are described in detail, and the generating script is publicly accessible. In addition, we provide a Python library for simplified access to the dataset and an online reconstruction challenge. Furthermore, the dataset can also be used for transfer learning as well as sparse and limited-angle reconstruction scenarios.
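Programmatic access goes through the authors' companion Python library, dival; a sketch of the typical entry point (the call names below are recalled from the library's public API and should be verified against its documentation):

    from dival import get_standard_dataset

    # Loads LoDoPaB-CT; samples pair a simulated low-dose observation
    # (sinogram) with its ground-truth reconstruction.
    dataset = get_standard_dataset('lodopab')
    observation, ground_truth = dataset.get_sample(0, part='train')
    print(observation.shape, ground_truth.shape)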
A publicly available deep learning model and dataset for segmentation of breast, fibroglandular tissue, and vessels in breast MRI
Breast density, or the amount of fibroglandular tissue (FGT) relative to the overall breast volume, increases the risk of developing breast cancer. Although previous studies have utilized deep learning to assess breast density, the limited public availability of data and quantitative tools hinders the development of better assessment tools. Our objective was to (1) create and share a large dataset of pixel-wise annotations according to well-defined criteria, and (2) develop, evaluate, and share an automated segmentation method for breast, FGT, and blood vessels using convolutional neural networks. We used the Duke Breast Cancer MRI dataset to randomly select 100 MRI studies and manually annotated the breast, FGT, and blood vessels for each study. Model performance was evaluated using the Dice similarity coefficient (DSC). The model achieved DSC values of 0.92 for breast, 0.86 for FGT, and 0.65 for blood vessels on the test set. The correlation between our model's predicted breast density and the manually generated masks was 0.95. The correlation between the predicted breast density and qualitative radiologist assessment was 0.75. Our automated models can accurately segment breast, FGT, and blood vessels using pre-contrast breast MRI data. The data and the models were made publicly available.
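Given the predicted masks, breast density as defined here is a single ratio; a trivial sketch:

    import numpy as np

    def breast_density(fgt_mask, breast_mask):
        """Density = FGT volume relative to whole-breast volume."""
        return (np.asarray(fgt_mask, bool).sum()
                / np.asarray(breast_mask, bool).sum())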
Automated Segmentation of Prostate MR Images Using Prior Knowledge Enhanced Random Walker
Application of Deep Learning on the Prognosis of Cutaneous Melanoma Based on Full Scan Pathology Images
Li, Anhai
Li, Xiaoyuan
Li, Wenwen
Yu, Xiaoqian
Qi, Mengmeng
Li, Ding
Biomed Res Int2022Journal Article, cited 0 times
Website
CPTAC-CM
Pathomics
*Deep Learning
Humans
*Melanoma/diagnostic imaging/pathology
Prognosis
*Skin Neoplasms/diagnostic imaging/pathology
INTRODUCTION: The purpose of this study is to use deep learning and machine learning to classify cutaneous melanoma patients with different prognoses and to explore the application value of deep learning in the prognosis of cutaneous melanoma patients. METHODS: For deep learning, VGG-19 is selected as the network architecture and learning model for training and classification. For machine learning, deep features are extracted through the VGG-19 network architecture, and a support vector machine (SVM) model is selected for training and classification. We compare the two approaches and explore their application value in predicting the prognosis of patients with cutaneous melanoma. RESULT: Based on receiver operating characteristic (ROC) curves and the area under the curve (AUC), the average accuracy of deep learning is higher than that of machine learning, and even its lowest accuracy is better than that of machine learning. CONCLUSION: As training increases, the accuracy of both machine learning and deep learning improves, but given the same number of cutaneous melanoma pathology images, deep learning achieves higher accuracy. This study provides new ideas and theories for computational pathology in predicting the prognosis of patients with cutaneous melanoma.
Multi-Dimensional Cascaded Net with Uncertain Probability Reduction for Abdominal Multi-Organ Segmentation in CT Sequences
Li, C.
Mao, Y.
Guo, Y.
Li, J.
Wang, Y.
Comput Methods Programs Biomed2022Journal Article, cited 0 times
Website
Pancreas-CT
circular inference module
high-resolution multi-view 2.5D net
multi-organ segmentation
shallow-layer-enhanced 3D location net
MATLAB
ITK
BACKGROUND AND OBJECTIVE: Deep learning abdominal multi-organ segmentation provides preoperative guidance for abdominal surgery. However, due to the large volume of 3D CT sequences, the existing methods cannot balance complete semantic features and high-resolution detail information, which leads to uncertain, rough, and inaccurate segmentation, especially in small and irregular organs. In this paper, we propose a two-stage algorithm named multi-dimensional cascaded net (MDCNet) to solve the above problems and segment multi-organs in CT images, including the spleen, kidney, gallbladder, esophagus, liver, stomach, pancreas, and duodenum. METHODS: MDCNet combines the powerful semantic encoder ability of a 3D net and the rich high-resolution information of a 2.5D net. In stage1, a prior-guided shallow-layer-enhanced 3D location net extracts entire semantic features from a downsampled CT volume to perform rough segmentation. Additionally, we use circular inference and parameter Dice loss to alleviate uncertain boundary. The inputs of stage2 are high-resolution slices, which are obtained by the original image and coarse segmentation of stage1. Stage2 offsets the details lost during downsampling, resulting in smooth and accurate refined contours. The 2.5D net from the axial, coronal, and sagittal views also compensates for the missing spatial information of a single view. RESULTS: The experiments on the two datasets both obtained the best performance, particularly a higher Dice on small gallbladders and irregular duodenums, which reached 0.85+/-0.12 and 0.77+/-0.07 respectively, increasing by 0.02 and 0.03 compared to the state-of-the-art method. CONCLUSION: Our method can extract all semantic and high-resolution detail information from a large-volume CT image. It reduces the boundary uncertainty while yielding smoother segmentation edges, indicating good clinical application prospects.
A proposed artificial intelligence workflow to address application challenges leveraged on algorithm uncertainty
Li, D.
Hu, L.
Peng, X.
Xiao, N.
Zhao, H.
Liu, G.
Liu, H.
Li, K.
Ai, B.
Xia, H.
Lu, L.
Gao, Y.
Wu, J.
Liang, H.
iScience2022Journal Article, cited 3 times
Website
LCTSC
Lung CT Segmentation Challenge 2017
COVID-19
Computed Tomography (CT)
challenge competition
Artificial intelligence
Bioinformatics
Neural networks
Artificial Intelligence (AI) has achieved state-of-the-art performance in medical imaging. However, most algorithms have focused exclusively on improving classification accuracy while neglecting the major challenges of real-world application. The opacity of algorithms prevents users from knowing when they might fail, and the natural gap between training datasets and real-world data may lead to unexpected AI system malfunction. Knowing the underlying uncertainty is essential for improving system reliability. Therefore, we developed a COVID-19 AI system, utilizing a Bayesian neural network to calculate uncertainties in classification and reliability intervals of datasets. Validated with four multi-region datasets simulating different scenarios, our approach proved effective at indicating the possibility of system failure and handing decision power to human experts in time. Leveraging the complementary strengths of AI and health professionals, our method has the potential to improve the practicability of AI systems in clinical application.
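Monte Carlo dropout is one standard way to obtain the kind of classification uncertainty described here, as an approximation to a Bayesian neural network. A generic PyTorch sketch (not the authors' model) that keeps only the dropout layers stochastic at inference and scores uncertainty by predictive entropy:

    import torch

    @torch.no_grad()
    def mc_dropout_predict(model, x, n_samples=30):
        """Average softmax over stochastic forward passes; return the
        mean probabilities and the predictive entropy per sample."""
        model.eval()
        for m in model.modules():             # re-enable dropout only
            if isinstance(m, torch.nn.Dropout):
                m.train()
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(n_samples)]).mean(0)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)
        return probs, entropy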
Multiscale receptive field based on residual network for pancreas segmentation in CT images
Li, Feiyan
Li, Weisheng
Shu, Yucheng
Qin, Sheng
Xiao, Bin
Zhan, Ziwei
Biomedical Signal Processing and Control2020Journal Article, cited 0 times
Pancreas-CT
Medical image segmentation has made great achievements. Yet the pancreas remains a challenging abdominal organ to segment due to the high inter-patient anatomical variability in both shape and volume. UNet often suffers from pancreas over-segmentation, under-segmentation, and shape inconsistency between the predicted result and the ground truth. We attribute this to UNet's inability to extract sufficiently deep features and rich semantic information to distinguish pancreas regions from background. To address this, we propose three cross-domain information fusion strategies to solve the above three problems. The first strategy, named skip network, efficiently restrains over-segmentation through cross-domain connections. The second strategy, named residual network, mainly targets the under- and over-segmentation problem via small-scale cross-domain connections. The third multiscale cross-domain information fusion strategy, named multiscale residual network, adds a multiscale convolution operation to the second strategy, learning more accurate pancreas shape and restraining both over- and under-segmentation. We performed experiments on a dataset of 82 abdominal contrast-enhanced three-dimensional computed tomography (3D CT) scans from the National Institutes of Health Clinical Center using 4-fold cross-validation. We report a mean Dice score of 87.57 ± 3.26%, which outperforms the state-of-the-art method, a 7.87% improvement over the prediction of the original UNet. Our method is not only superior to other established methods in terms of accuracy and robustness but also effectively restrains pancreas over-segmentation, under-segmentation, and shape inconsistency between prediction and ground truth. Our strategies are readily applicable to clinical practice.
Low-Dose CT streak artifacts removal using deep residual neural network
Reconstruction-Assisted Feature Encoding Network for Histologic Subtype Classification of Non-Small Cell Lung Cancer
Li, Haichun
Song, Qilong
Gui, Dongqi
Wang, Minghui
Min, Xuhong
Li, Ao
IEEE Journal of Biomedical and Health Informatics2022Journal Article, cited 0 times
NSCLC-Radiomics
Accurate histological subtype classification between adenocarcinoma (ADC) and squamous cell carcinoma (SCC) using computed tomography (CT) images is of great importance to assist clinicians in determining treatment and therapy plans for non-small cell lung cancer (NSCLC) patients. Although current deep learning approaches have achieved promising progress in this field, they are often difficult to capture efficient tumor representations due to inadequate training data, and in consequence show limited performance. In this study, we propose a novel and effective reconstruction-assisted feature encoding network (RAFENet) for histological subtype classification by leveraging an auxiliary image reconstruction task to enable extra guidance and regularization for enhanced tumor feature representations. Different from existing reconstruction-assisted methods that directly use generalizable features obtained from shared encoder for primary task, a dedicated task-aware encoding module is utilized in RAFENet to perform refinement of generalizable features. Specifically, a cascade of cross-level non-local blocks are introduced to progressively refine generalizable features at different levels with the aid of lower-level task-specific information, which can successfully learn multi-level task-specific features tailored to histological subtype classification. Moreover, in addition to widely adopted pixel-wise reconstruction loss, we introduce a powerful semantic consistency loss function to explicitly supervise the training of RAFENet, which combines both feature consistency loss and prediction consistency loss to ensure semantic invariance during image reconstruction. Extensive experimental results show that RAFENet effectively addresses the difficult issues that cannot be resolved by existing reconstruction-based methods and consistently outperforms other state-of-the-art methods on both public and in-house NSCLC datasets. Supplementary material is available at https://github.com/lhch1994/Rafenet_sup_material.
DT-MIL: Deformable Transformer for Multi-instance Learning on Histopathological Image
MR Imaging Radiomics Signatures for Predicting the Risk of Breast Cancer Recurrence as Given by Research Versions of MammaPrint, Oncotype DX, and PAM50 Gene Assays
Li, Hui
Zhu, Yitan
Burnside, Elizabeth S
Drukker, Karen
Hoadley, Katherine A
Fan, Cheng
Conzen, Suzanne D
Whitman, Gary J
Sutton, Elizabeth J
Net, Jose M
Radiology2016Journal Article, cited 103 times
Website
TCGA-Breast-Radiogenomics
radiomics
radiogenomics
Quantitative MRI radiomics in the prediction of molecular classifications of breast cancer subtypes in the TCGA/TCIA data set
Li, Hui
Zhu, Yitan
Burnside, Elizabeth S
Huang, Erich
Drukker, Karen
Hoadley, Katherine A
Fan, Cheng
Conzen, Suzanne D
Zuley, Margarita
Net, Jose M
NPJ Breast Cancer2016Journal Article, cited 63 times
Website
TCGA-BRCA
Radiomics
breast cancer
Deep Learning Modeling and Increasing Interpretability of Lung Nodule Classification
Li, Juliana
2024Conference Paper, cited 0 times
LIDC-IDRI
NSCLC Radiogenomics
NSCLC-Radiomics
A major step in lung cancer diagnosis is the classification of nodule malignancy, but benign and malignant nodules appear very similar in early stages, leading to frequent misdiagnoses. This study developed a novel multimodal image-based CNN (MIB-CNN) model architecture to classify pulmonary nodules as either benign or malignant, performing multimodal learning on only computed tomography (CT) images, without the need for other clinical data like genomic tests. MIB-CNN takes in CT images of nodules, convolutionally extracts chosen semantic features from the images to obtain numeric data, and integrates it with the image data using a novel method, improving model performance and uncovering the mechanisms of the “black box” of this deep learning task. The results showed that the MIB-CNN model achieved 0.94 AUC on the LIDC-IDRI dataset compared to 0.90 AUC with a basic image CNN, and 0.91 specificity in comparison to 0.86 specificity of the basic model, indicating a significant decrease in the number of false positives. This study also identifies the primary causes of inaccurate predictions: small airways and other thoracic organs cause noise in the image data and decrease visibility of small nodules. Furthermore, the premise of MIB-CNN is not limited to this lung nodule malignancy classification task, as this methodology can be applied to other medical image-based deep learning tasks to overcome the challenge of limited multimodal data availability.
Multiomics profiling reveals the benefits of gamma-delta (gammadelta) T lymphocytes for improving the tumor microenvironment, immunotherapy efficacy and prognosis in cervical cancer
Li, J.
Cao, Y.
Liu, Y.
Yu, L.
Zhang, Z.
Wang, X.
Bai, H.
Zhang, Y.
Liu, S.
Gao, M.
Lu, C.
Li, C.
Guan, Y.
Tao, Z.
Wu, Z.
Chen, J.
Yuan, Z.
J Immunother Cancer2024Journal Article, cited 0 times
Website
TCGA-CESC
Humans
*Uterine Cervical Neoplasms/genetics/therapy
Tumor Microenvironment
Multiomics
Immunotherapy
Prognosis
Biostatistics
Genital Neoplasms
Female
T-Lymphocytes
Radiogenomics
PyRadiomics
BACKGROUND: As an unconventional subpopulation of T lymphocytes, gammadelta T cells can recognize antigens independently of major histocompatibility complex restrictions. Recent studies have indicated that gammadelta T cells play contrasting roles in tumor microenvironments-promoting tumor progression in some cancers (eg, gallbladder and leukemia) while suppressing it in others (eg, lung and gastric). gammadelta T cells are mainly enriched in peripheral mucosal tissues. As the cervix is a mucosa-rich tissue, the role of gammadelta T cells in cervical cancer warrants further investigation. METHODS: We employed a multiomics strategy that integrated abundant data from single-cell and bulk transcriptome sequencing, whole exome sequencing, genotyping array, immunohistochemistry, and MRI. RESULTS: Heterogeneity was observed in the level of gammadelta T-cell infiltration in cervical cancer tissues, mainly associated with the tumor somatic mutational landscape. Definitely, gammadelta T cells play a beneficial role in the prognosis of patients with cervical cancer. First, gammadelta T cells exert direct cytotoxic effects in the tumor microenvironment of cervical cancer through the dynamic evolution of cellular states at both poles. Second, higher levels of gammadelta T-cell infiltration also shape the microenvironment of immune activation with cancer-suppressive properties. We found that these intricate features can be observed by MRI-based radiomics models to non-invasively assess gammadelta T-cell proportions in tumor tissues in patients. Importantly, patients with high infiltration levels of gammadelta T cells may be more amenable to immunotherapies including immune checkpoint inhibitors and autologous tumor-infiltrating lymphocyte therapies, than to chemoradiotherapy. CONCLUSIONS: gammadelta T cells play a beneficial role in antitumor immunity in cervical cancer. The abundance of gammadelta T cells in cervical cancerous tissue is associated with higher response rates to immunotherapy.
Gradient-Rebalanced Uncertainty Minimization for Cross-Site Adaptation of Medical Image Segmentation
Automatically adapting image segmentation across data sites helps reduce the data annotation burden in medical image analysis. Due to variations in image collection procedures, there usually exists a moderate domain gap between medical image datasets from different sites. Increasing the prediction certainty is beneficial for gradually reducing the category-wise domain shift. However, uncertainty minimization naturally leads to bias towards major classes since the target object usually occupies a small portion of pixels in the input image. In this paper, we propose a gradient-rebalanced uncertainty minimization scheme which is capable of eliminating this learning bias. First, the foreground and background pixels are reweighted according to the total gradient amplitude of every class. Furthermore, we devise a feature-level adaptation scheme to reduce the overall domain gap between source and target datasets, based on feature norm regularization and adversarial learning. Experiments on CT pancreas segmentation and MRI prostate segmentation validate that our method outperforms existing cross-site adaptation algorithms by around 3% in Dice similarity coefficient.
ITHscore: comprehensive quantification of intra-tumor heterogeneity in NSCLC by multi-scale radiomic features
Li, J.
Qiu, Z.
Zhang, C.
Chen, S.
Wang, M.
Meng, Q.
Lu, H.
Wei, L.
Lv, H.
Zhong, W.
Zhang, X.
Eur Radiol2022Journal Article, cited 0 times
Website
NSCLC-Radiomics
NSCLC Radiogenomics
Head-Neck-Radiomics-HN1
RIDER LUNG CT
Non-small cell lung cancer
Radiomics
Computed Tomography (CT)
Tumor heterogeneity
OBJECTIVES: To quantify intra-tumor heterogeneity (ITH) in non-small cell lung cancer (NSCLC) from computed tomography (CT) images. METHODS: We developed a quantitative ITH measurement-ITHscore-by integrating local radiomic features and global pixel distribution patterns. The associations of ITHscore with tumor phenotypes, genotypes, and patient's prognosis were examined on six patient cohorts (n = 1399) to validate its effectiveness in characterizing ITH. RESULTS: For stage I NSCLC, ITHscore was consistent with tumor progression from stage IA1 to IA3 (p < 0.001) and captured key pathological change in terms of malignancy (p < 0.001). ITHscore distinguished the presence of lymphovascular invasion (p = 0.003) and pleural invasion (p = 0.001) in tumors. ITHscore also separated patient groups with different overall survival (p = 0.004) and disease-free survival conditions (p = 0.005). Radiogenomic analysis showed that the level of ITHscore in stage I and stage II NSCLC is correlated with heterogeneity-related pathways. In addition, ITHscore was proved to be a stable measurement and can be applied to ITH quantification in head-and-neck cancer (HNC). CONCLUSIONS: ITH in NSCLC can be quantified from CT images by ITHscore, which is an indicator for tumor phenotypes and patient's prognosis. KEY POINTS: * ITHscore provides a radiomic quantification of intra-tumor heterogeneity in NSCLC. * ITHscore is an indicator for tumor phenotypes and patient's prognosis. * ITHscore has the potential to be generalized to other cancer types such as HNC.
A Systematic Collection of Medical Image Datasets for Deep Learning
Li, Johann
Zhu, Guangming
Hua, Cong
Feng, Mingtao
Bennamoun, Basheer
Li, Ping
Lu, Xiaoyuan
Song, Juan
Shen, Peiyi
Xu, Xu
Mei, Lin
Zhang, Liang
Shah, Syed Afaq Ali
Bennamoun, Mohammed
2023Journal Article, cited 0 times
AAPM-RT-MAC
Brain-Tumor-Progression
BREAST-DIAGNOSIS
ISBI-MR-Prostate-2013
Lung-PET-CT-Dx
Prostate-3T
PROSTATE-DIAGNOSIS
The astounding success made by artificial intelligence in healthcare and other fields proves that it can achieve human-like performance. However, success always comes with challenges. Deep learning algorithms are data dependent and require large datasets for training. Many junior researchers face a lack of data for a variety of reasons. Medical image acquisition, annotation, and analysis are costly, and their usage is constrained by ethical restrictions. They also require several other resources, such as professional equipment and expertise. That makes it difficult for novice and non-medical researchers to have access to medical data. Thus, as comprehensively as possible, this article provides a collection of medical image datasets with their associated challenges for deep learning research. We have collected the information of approximately 300 datasets and challenges mainly reported between 2007 and 2020 and categorized them into four categories: head and neck, chest and abdomen, pathology and blood, and others. The purpose of our work is to provide a list, as up-to-date and complete as possible, that can be used as a reference to easily find the datasets for medical image analysis and the information related to these datasets.
Evaluating the performance of a deep learning‐based computer‐aided diagnosis (DL‐CAD) system for detecting and characterizing lung nodules: Comparison with the performance of double reading by radiologists
Li, Li
Liu, Zhou
Huang, Hua
Lin, Meng
Luo, Dehong
Thoracic cancer2018Journal Article, cited 0 times
Website
LIDC-IDRI
LDCT
deep learning
Computer aided diagnosis
Computer aided detection
NLST
Novel radiomic analysis on bi-parametric MRI for characterizing differences between MR non-visible and visible clinically significant prostate cancer
Li, Lin
Shiradkar, Rakesh
Tirumani, Sree Harsha
Bittencourt, Leonardo Kayat
Fu, Pingfu
Mahran, Amr
Buzzy, Christina
Stricker, Phillip D.
Rastinehad, Ardeshir R.
Magi-Galluzzi, Cristina
Ponsky, Lee
Klein, Eric
Purysko, Andrei S.
Madabhushi, Anant
2023Journal Article, cited 0 times
PROSTATEx
Background: Around one third of clinically significant prostate cancer (CsPCa) foci are reported to be MRI non-visible (MRI−).
Objective: To quantify the differences between MRI-visible (MRI+) and MRI− CsPCa using intra- and peri-lesional radiomic features on bi-parametric MRI (bpMRI).
Methods: This retrospective, multi-institutional study comprised 164 patients with pre-biopsy 3T prostate multi-parametric MRI from 2014 to 2017. MRI− CsPCa referred to lesions with PI-RADS v2 score < 3 but ISUP grade group > 1. Three experienced radiologists were involved in annotating lesions and PI-RADS assignment. The validation set (Dv) comprised 52 patients from a single institution; the remaining 112 patients were used for training (Dt). 200 radiomic features were extracted from intra-lesional and peri-lesional regions on bpMRI. Logistic regression with least absolute shrinkage and selection operator (LASSO) and 10-fold cross-validation was applied on Dt to identify radiomic features associated with MRI− and MRI+ CsPCa and to generate corresponding risk scores RMRI− and RMRI+. RbpMRI was further generated by integrating RMRI− and RMRI+. Statistical significance was determined using the Wilcoxon signed-rank test.
Results: Both intra-lesional and peri-lesional bpMRI Haralick and CoLlAGe radiomic features were significantly associated with MRI− CsPCa (p < 0.05). Intra-lesional ADC Haralick and CoLlAGe radiomic features were significantly different between MRI− and MRI+ CsPCa (p < 0.05). RbpMRI yielded the highest AUC of 0.82 (95% CI 0.72-0.91), compared with AUCs of 0.76 (95% CI 0.63-0.89) for RMRI+ and 0.58 (95% CI 0.50-0.72) for PI-RADS on Dv. RbpMRI correctly reclassified 10 of 14 MRI− CsPCa lesions on Dv.
Conclusion: Our preliminary results demonstrated that both intra-lesional and peri-lesional bpMRI radiomic features were significantly associated with MRI− CsPCa. These features could assist in CsPCa identification on bpMRI.
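For readers unfamiliar with the modelling step, here is a minimal sketch of LASSO-penalized logistic regression with 10-fold cross-validation as the abstract describes, using scikit-learn on synthetic stand-in data; the variable names, feature matrix, and cohort sizes are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: (n_patients, 200) intra- and peri-lesional radiomic features
# y: 1 for MRI-visible CsPCa, 0 otherwise (synthetic stand-ins here)
rng = np.random.default_rng(0)
X = rng.normal(size=(112, 200))
y = rng.integers(0, 2, size=112)

# L1 (LASSO) penalty with 10-fold CV, mirroring the abstract's setup.
model = make_pipeline(
    StandardScaler(),
    LogisticRegressionCV(Cs=10, cv=10, penalty="l1",
                         solver="liblinear", scoring="roc_auc"),
)
model.fit(X, y)

coefs = model.named_steps["logisticregressioncv"].coef_.ravel()
selected = np.flatnonzero(coefs)           # features driving the risk score
risk_score = model.predict_proba(X)[:, 1]  # e.g. an RMRI+-style score
print(f"{selected.size} features selected")
```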
Special issue “The advance of solid tumor research in China”: Prognosis prediction for stage II colorectal cancer by fusing computed tomography radiomics and deep‐learning features of primary lesions and peripheral lymph nodes
Li, Menglei
Gong, Jing
Bao, Yichao
Huang, Dan
Peng, Junjie
Tong, Tong
2022Journal Article, cited 0 times
StageII-Colorectal-CT
Currently, the prognosis assessment of stage II colorectal cancer (CRC) remains a difficult clinical problem; therefore, more accurate prognostic predictors must be developed. In our study, we developed a prognostic prediction model for stage II CRC by fusing radiomics and deep-learning (DL) features of primary lesions and peripheral lymph nodes (LNs) in computed tomography (CT) scans. First, two CT radiomics models were built using primary lesion and LN image features. Subsequently, an information fusion method was used to build a fusion radiomics model by combining the tumor and LN image features. Furthermore, a transfer learning method was applied to build a deep convolutional neural network (CNN) model. Finally, the prediction scores generated by the radiomics and CNN models were fused to improve the prognosis prediction performance. The disease-free survival (DFS) and overall survival (OS) prediction areas under the curves (AUCs) generated by the fusion model improved to 0.76 ± 0.08 and 0.91 ± 0.05, respectively. These were significantly higher than the AUCs generated by the models using the individual CT radiomics and deep image features. Applying the survival analysis method, the DFS and OS fusion models yielded concordance index (C-index) values of 0.73 and 0.9, respectively. Hence, the combined model exhibited good predictive efficacy; therefore, it could be used for the accurate assessment of the prognosis of stage II CRC patients. Moreover, it could be used to screen out high-risk patients with poor prognoses, and assist in the formulation of clinical treatment decisions in a timely manner to achieve precision medicine.
Multi-scale Selection and Multi-channel Fusion Model for Pancreas Segmentation Using Adversarial Deep Convolutional Nets
Li, M.
Lian, F.
Guo, S.
J Digit Imaging2021Journal Article, cited 0 times
Website
Pancreas-CT
Segmentation
Deep convolutional neural network (DCNN)
Organ segmentation from existing imaging is vital to medical image analysis and disease diagnosis. However, the boundary shapes and area sizes of the target region tend to be diverse and flexible, and the frequent application of pooling operations in traditional segmentors results in the loss of spatial information that is advantageous to segmentation. All these issues pose challenges and difficulties for accurate organ segmentation from medical imaging, particularly for organs with small volumes and variable shapes such as the pancreas. To offset the aforesaid information loss, we propose a deep convolutional neural network (DCNN) named the multi-scale selection and multi-channel fusion segmentation model (MSC-DUnet) for pancreas segmentation. The proposed model contains three stages to collect detailed cues for accurate segmentation: (1) increasing the consistency between the distributions of the output probability maps from the segmentor and the original samples by involving an adversarial mechanism that can capture spatial distributions, (2) gathering global spatial features from several receptive fields via multi-scale field selection (MSFS), and (3) integrating multi-level features located at varying network positions through the multi-channel fusion module (MCFM). Experimental results on the NIH Pancreas-CT dataset show that our proposed MSC-DUnet improves on the baseline network by 5.1% in Dice similarity coefficient (DSC), which adequately indicates that MSC-DUnet has great potential for pancreas segmentation.
An Adversarial Network Embedded with Attention Mechanism for Pancreas Segmentation
Pancreas segmentation plays an important role in the diagnosis of pancreatic diseases and related complications. However, accurately segmenting the pancreas from computed tomography (CT) images tends to be challenging due to the limited proportion and irregular shape of the pancreas in the abdominal CT volume. To solve this issue, we propose an adversarial network embedded with an attention mechanism for pancreas segmentation in this paper. The involvement of a generative adversarial network contributes to retaining much spatial information for segmentation by capturing high-dimensional data distributions through the competition between the discriminator and the generator. Furthermore, the application of an attention mechanism enhances the interdependency among pixels, thus capturing contextual information for segmentation. Experimental results show that our proposed model achieves competitive performance compared with most pancreas segmentation methods.
Attention-guided duplex adversarial U-net for pancreatic segmentation from computed tomography images
Li, M.
Lian, F.
Li, Y.
Guo, S.
J Appl Clin Med Phys2022Journal Article, cited 0 times
Website
Pancreas-CT
Machine Learning
Generative adversarial network
Segmentation
PURPOSE: Segmenting organs from computed tomography (CT) images is crucial to early diagnosis and treatment. Pancreas segmentation is especially challenging because the pancreas has a small volume and a large variation in shape. METHODS: To mitigate this issue, an attention-guided duplex adversarial U-Net (ADAU-Net) for pancreas segmentation is proposed in this work. First, two adversarial networks are integrated into the baseline U-Net to ensure the obtained prediction maps resemble the ground truths. Then, attention blocks are applied to preserve much contextual information for segmentation. The implementation of the proposed ADAU-Net consists of two steps: 1) a backbone segmentor selection scheme is introduced to select an optimal backbone segmentor from three two-dimensional segmentation model variants based on a conventional U-Net, and 2) attention blocks are integrated into the backbone segmentor at several locations to enhance the interdependency among pixels for better segmentation performance, and the optimal structure is selected as the final version. RESULTS: The experimental results on the National Institutes of Health Pancreas-CT dataset show that our proposed ADAU-Net outperforms the baseline segmentation network by 6.39% in Dice similarity coefficient and obtains competitive performance compared with state-of-the-art methods for pancreas segmentation. CONCLUSION: The ADAU-Net achieves satisfactory segmentation results on the public pancreas dataset, indicating that the proposed model can segment pancreas outlines from CT images accurately.
Accurate pancreas segmentation using multi-level pyramidal pooling residual U-Net with adversarial mechanism
Li, M.
Lian, F.
Wang, C.
Guo, S.
BMC Med Imaging2021Journal Article, cited 0 times
Pancreas-CT
*Tomography
X-Ray Computed
*Adversarial mechanism
*Multi-level pyramidal pooling module
Segmentation
*Residual learning
BACKGROUND: A novel multi-level pyramidal pooling residual U-Net with an adversarial mechanism was proposed for organ segmentation from medical imaging and was evaluated on the challenging NIH Pancreas-CT dataset. METHODS: The 82 pancreatic contrast-enhanced abdominal CT volumes were split via four-fold cross-validation to test model performance. To achieve accurate segmentation, we first incorporated residual learning into an adversarial U-Net to obtain better gradient information flow and thereby improve segmentation performance. Then, we introduced a multi-level pyramidal pooling module (MLPP), in which a novel pyramidal pooling was applied to gather contextual information for segmentation. Four groups of structures consisting of different numbers of pyramidal pooling blocks were compared to find the structure with the optimal performance, and two types of pooling blocks were applied in the experimental section to further assess the robustness of MLPP for pancreas segmentation. For evaluation, the Dice similarity coefficient (DSC) and recall were used as the metrics in this work. RESULTS: The proposed method outperformed the baseline network by 5.30% and 6.16% on the DSC and recall metrics, respectively, and achieved competitive results compared with state-of-the-art methods. CONCLUSIONS: Our algorithm showed strong segmentation performance even on the particularly challenging pancreas dataset, indicating that the proposed model is a satisfactory and promising segmentor.
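A plausible PyTorch reading of a pyramidal pooling block is sketched below; the pooling scales, branch widths, and fusion convolution are our assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidalPoolingBlock(nn.Module):
    """One plausible reading of the pyramidal pooling idea: pool the
    feature map at several scales, project each pooled map with a 1x1
    conv, upsample back, and concatenate with the input."""

    def __init__(self, in_ch, scales=(1, 2, 4, 8)):
        super().__init__()
        branch_ch = in_ch // len(scales)
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.AdaptiveAvgPool2d(s),
                nn.Conv2d(in_ch, branch_ch, kernel_size=1, bias=False),
                nn.BatchNorm2d(branch_ch),
                nn.ReLU(inplace=True),
            )
            for s in scales
        )
        self.fuse = nn.Conv2d(in_ch + branch_ch * len(scales), in_ch, 1)

    def forward(self, x):
        h, w = x.shape[2:]
        pyramids = [
            F.interpolate(b(x), size=(h, w), mode="bilinear",
                          align_corners=False)
            for b in self.branches
        ]
        return self.fuse(torch.cat([x, *pyramids], dim=1))

block = PyramidalPoolingBlock(64)
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```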
Patient-specific biomechanical model as whole-body CT image registration tool
Li, Mao
Miller, Karol
Joldes, Grand Roman
Doyle, Barry
Garlapati, Revanth Reddy
Kikinis, Ron
Wittek, Adam
Medical Image Analysis2015Journal Article, cited 15 times
Website
Image registration
patient-specific biomechanical model
non-linear finite element analysis
Fuzzy-c means
Hausdorff distance
Magnetic Resonance Imaging (MRI)
Computed Tomography (CT)
finite-element model
BRAIN
mechanical-properties
nonrigid registration
Whole-body computed tomography (CT) image registration is important for cancer diagnosis, therapy planning and treatment. Such registration requires accounting for large differences between source and target images caused by deformations of soft organs/tissues and articulated motion of skeletal structures. The registration algorithms relying solely on image processing methods exhibit deficiencies in accounting for such deformations and motion. We propose to predict the deformations and movements of body organs/tissues and skeletal structures for whole-body CT image registration using patient-specific non-linear biomechanical modelling. Unlike the conventional biomechanical modelling, our approach for building the biomechanical models does not require time-consuming segmentation of CT scans to divide the whole body into non-overlapping constituents with different material properties. Instead, a Fuzzy C-Means (FCM) algorithm is used for tissue classification to assign the constitutive properties automatically at integration points of the computation grid. We use only very simple segmentation of the spine when determining vertebrae displacements to define loading for biomechanical models. We demonstrate the feasibility and accuracy of our approach on CT images of seven patients suffering from cancer and aortic disease. The results confirm that accurate whole-body CT image registration can be achieved using a patient-specific non-linear biomechanical model constructed without time-consuming segmentation of the whole-body images. (C) 2015 Elsevier B.V. All rights reserved.
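The tissue-classification step can be illustrated with a plain NumPy fuzzy C-means over voxel intensities; the cluster count, the membership-weighted blending of Young's moduli, and the moduli values themselves are illustrative assumptions, not the authors' calibration.

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=4, m=2.0, n_iter=100, seed=0):
    """Plain NumPy fuzzy C-means over scalar intensities (e.g. HU values
    sampled at integration points); a sketch of the tissue-classification
    step, not the authors' implementation."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)          # fuzzy memberships
    for _ in range(n_iter):
        um = u ** m
        centers = (um * x[:, None]).sum(0) / um.sum(0)
        d = np.abs(x[:, None] - centers) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Map memberships to constitutive stiffness via a weighted blend
# (hypothetical per-tissue Young's moduli, in kPa).
hu = np.array([-900.0, -50.0, 40.0, 700.0, 30.0, -800.0])
centers, u = fuzzy_c_means(hu)
E_tissue = np.array([1.0, 10.0, 20.0, 1000.0])  # hypothetical values
order = np.argsort(centers)                      # lung < fat < soft < bone
E_points = (u[:, order] * E_tissue).sum(axis=1)
print(np.round(E_points, 1))
```

The appeal of the fuzzy assignment is that each integration point receives a smooth blend of material properties, so no hard segmentation of the whole-body image is ever required.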
Biomechanical model for computing deformations for whole‐body image registration: A meshless approach
Li, Mao
Miller, Karol
Joldes, Grand Roman
Kikinis, Ron
Wittek, Adam
International Journal for Numerical Methods in Biomedical Engineering2016Journal Article, cited 13 times
Website
Algorithm Development
Fuzzy C-means clustering (FCM)
Segmentation
Computed Tomography (CT)
Machine Learning
mResU-Net: multi-scale residual U-Net-based brain tumor segmentation from multimodal MRI
Li, P.
Li, Z.
Wang, Z.
Li, C.
Wang, M.
Med Biol Eng Comput2023Journal Article, cited 0 times
BraTS 2021
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Segmentation
Brain tumor segmentation
Multi-scale Residual U-Net
Multimodal MRI
Automatic Segmentation
Brain tumor segmentation is an important direction in medical image processing, and its main goal is to accurately mark the tumor part in brain MRI. This study proposes a brand new end-to-end model for brain tumor segmentation, which is a multi-scale deep residual convolutional neural network called mResU-Net. The semantic gap between the encoder and decoder is bridged by using skip connections in the U-Net structure. The residual structure is used to alleviate the vanishing gradient problem during training and ensure sufficient information in deep networks. On this basis, multi-scale convolution kernels are used to improve the segmentation accuracy of targets of different sizes. At the same time, we also integrate channel attention modules into the network to improve its accuracy. The proposed model has an average dice score of 0.9289, 0.9277, and 0.8965 for tumor core (TC), whole tumor (WT), and enhanced tumor (ET) on the BraTS 2021 dataset, respectively. Comparing the segmentation results of this method with existing techniques shows that mResU-Net can significantly improve the segmentation performance of brain tumor subregions.
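The building blocks named in the abstract (multi-scale kernels, a residual connection, channel attention) can be combined roughly as follows; this 2D PyTorch sketch with hypothetical kernel sizes and reduction ratio is our reading, not the published architecture.

```python
import torch
import torch.nn as nn

class MultiScaleResidualBlock(nn.Module):
    """Sketch of the ingredients named in the abstract: parallel
    convolutions with different kernel sizes, a residual connection,
    and a squeeze-and-excitation channel-attention module. Kernel
    sizes and reduction ratio are assumptions."""

    def __init__(self, ch, kernels=(3, 5, 7), reduction=4):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, k, padding=k // 2) for k in kernels
        )
        self.merge = nn.Conv2d(ch * len(kernels), ch, 1)
        self.se = nn.Sequential(          # channel attention
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y = self.merge(torch.cat([b(x) for b in self.branches], dim=1))
        y = y * self.se(y)                # recalibrate channels
        return self.act(x + y)            # residual connection

block = MultiScaleResidualBlock(16)
print(block(torch.randn(1, 16, 64, 64)).shape)
```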
A Fully-Automatic Multiparametric Radiomics Model: Towards Reproducible and Prognostic Imaging Signature for Prediction of Overall Survival in Glioblastoma Multiforme
Li, Qihua
Bai, Hongmin
Chen, Yinsheng
Sun, Qiuchang
Liu, Lei
Zhou, Sijie
Wang, Guoliang
Liang, Chaofeng
Li, Zhi-Cheng
Scientific RepoRtS2017Journal Article, cited 9 times
Website
Radiomics
GBM
Comparison Between Radiological Semantic Features and Lung-RADS in Predicting Malignancy of Screen-Detected Lung Nodules in the National Lung Screening Trial
Li, Qian
Balagurunathan, Yoganand
Liu, Ying
Qi, Jin
Schabath, Matthew B
Ye, Zhaoxiang
Gillies, Robert J
Clinical Lung Cancer2017Journal Article, cited 3 times
Website
Lung cancer screening
Lung-RADS
National Lung Screening Trial (NLST)
Predictive
Semantic features
TumorGAN: A Multi-Modal Data Augmentation Framework for Brain Tumor Segmentation
Li, Q.
Yu, Z.
Wang, Y.
Zheng, H.
Sensors (Basel)2020Journal Article, cited 41 times
Website
BraTS 2017
Brain/diagnostic imaging
*Brain Neoplasms/diagnostic imaging
Humans
Image Processing
Computer-Assisted
Segmentation
Generative Adversarial Network (GAN)
The high human labor demand involved in collecting paired medical imaging data severely impedes the application of deep learning methods to medical image processing tasks such as tumor segmentation. The situation is further worsened when collecting multi-modal image pairs. However, this issue can be resolved through the help of generative adversarial networks, which can be used to generate realistic images. In this work, we propose a novel framework, named TumorGAN, to generate image segmentation pairs based on unpaired adversarial training. To improve the quality of the generated images, we introduce a regional perceptual loss to enhance the performance of the discriminator. We also develop a regional L1 loss to constrain the color of the imaged brain tissue. Finally, we verify the performance of TumorGAN on a public brain tumor data set, BraTS 2017. The experimental results demonstrate that the synthetic data pairs generated by our proposed method can practically improve tumor segmentation performance when applied to segmentation network training.
An efficient interactive multi-label segmentation tool for 2D and 3D medical images using fully connected conditional random field
Li, R.
Chen, X.
Comput Methods Programs Biomed2022Journal Article, cited 2 times
Website
ISPY1/ACRIN 6657
Algorithm Development
*Image Processing
Computer-Assisted
*Imaging
Three-Dimensional
MATLAB
Magnetic Resonance Imaging (MRI)
Ultrasound
Segmentation
Conditional random field
OBJECTIVE: Image segmentation is a crucial and fundamental step in many medical image analysis tasks, such as tumor measurement, surgery planning, disease diagnosis, etc. To ensure the quality of image segmentation, most of the current solutions require labor-intensive manual processes by tracing the boundaries of the objects. The workload increases tremendously for the case of three dimensional (3D) image with multiple objects to be segmented. METHOD: In this paper, we introduce our developed interactive image segmentation tool that provides efficient segmentation of multiple labels for both 2D and 3D medical images. The core segmentation method is based on a fast implementation of the fully connected conditional random field. The software also enables automatic recommendation of the next slice to be annotated in 3D, leading to a higher efficiency. RESULTS: We have evaluated the tool on many 2D and 3D medical image modalities (e.g. CT, MRI, ultrasound, X-ray, etc.) and different objects of interest (abdominal organs, tumor, bones, etc.), in terms of segmentation accuracy, repeatability and computational time. CONCLUSION: In contrast to other interactive image segmentation tools, our software produces high quality image segmentation results without the requirement of parameter tuning for each application.
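The core refinement step can be sketched with the widely used pydensecrf package (a Python wrapper of the fully connected CRF of Krähenbühl and Koltun); the kernel parameters below are common defaults, and the tool's own implementation may differ.

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def refine_with_dense_crf(image, probs, n_iters=5):
    """Refine soft label maps with a fully connected CRF.
    image: (H, W, 3) uint8; probs: (n_labels, H, W) softmax scores."""
    n_labels, h, w = probs.shape
    d = dcrf.DenseCRF2D(w, h, n_labels)
    d.setUnaryEnergy(unary_from_softmax(probs))
    # Smoothness kernel: nearby pixels prefer the same label.
    d.addPairwiseGaussian(sxy=3, compat=3)
    # Appearance kernel: similarly coloured pixels prefer the same label.
    d.addPairwiseBilateral(sxy=80, srgb=13,
                           rgbim=np.ascontiguousarray(image), compat=10)
    q = np.array(d.inference(n_iters)).reshape(n_labels, h, w)
    return q.argmax(axis=0)
```

In an interactive setting, the unary term would come from the user's scribbles rather than a network's softmax, but the mean-field inference step is the same.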
Prediction and verification of survival in patients with non-small-cell lung cancer based on an integrated radiomics nomogram
Li, R.
Peng, H.
Xue, T.
Li, J.
Ge, Y.
Wang, G.
Feng, F.
Clinical Radiology2021Journal Article, cited 0 times
Website
NSCLC-Radiomics
radiomics
AIM To develop and validate a nomogram to predict 1-, 2-, and 5-year survival in patients with non-small-cell lung cancer (NSCLC) by combining optimised radiomics features, clinicopathological factors, and conventional image features extracted from three-dimensional (3D) computed tomography (CT) images. MATERIALS AND METHODS A total of 172 patients with NSCLC were selected to construct the model, and 74 and 72 patients were selected for internal validation and external testing, respectively. A total of 828 radiomics features were extracted from each patient's 3D CT images. Univariable Cox regression and least absolute shrinkage and selection operator (LASSO) regression were used to select features and generate a radiomics signature (radscore). The performance of the nomogram was evaluated by calibration curves, clinical practicability, and the c-index. Kaplan–Meier (KM) analysis was used to compare the overall survival (OS) between the two subgroups. RESULTS The radiomics features of the NSCLC patients correlated significantly with survival time. The c-indexes of the nomogram in the training cohort, internal validation cohort, and external test cohort were 0.670, 0.658, and 0.660, respectively. The calibration curves showed that the predicted survival time was close to the actual survival time. Decision curve analysis showed that the nomogram could be useful in the clinic. According to KM analysis, the 1-, 2- and 5-year survival rates of the low-risk group were higher than those of the high-risk group. CONCLUSION The nomogram, combining the radscore, clinicopathological factors, and conventional CT parameters, can improve the accuracy of survival prediction in patients with NSCLC.
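As a sketch of the survival-modelling machinery behind such a nomogram, the following fits a Cox proportional hazards model and reports a c-index with the lifelines package on synthetic stand-in data; the covariates, coefficients, and event times are illustrative only.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

# Toy stand-in for the nomogram inputs: radscore plus clinical factors.
rng = np.random.default_rng(1)
n = 172
df = pd.DataFrame({
    "radscore": rng.normal(size=n),
    "stage": rng.integers(1, 4, size=n),
    "age": rng.normal(65, 8, size=n),
})
risk = 0.8 * df["radscore"] + 0.5 * df["stage"]
df["time"] = rng.exponential(60 * np.exp(-risk))  # survival time, months
df["event"] = rng.integers(0, 2, size=n)          # 1 = death observed

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")

# Higher partial hazard means shorter survival, hence the negation.
cindex = concordance_index(df["time"], -cph.predict_partial_hazard(df),
                           df["event"])
print(f"c-index: {cindex:.3f}")
```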
Tumor Morphology for Prediction of Poor Responses Early in Neoadjuvant Chemotherapy for Breast Cancer: A Multicenter Retrospective Study
Li, W.
Le, N. N.
Nadkarni, R.
Onishi, N.
Wilmes, L. J.
Gibbs, J. E.
Price, E. R.
Joe, B. N.
Mukhtar, R. A.
Gennatas, E. D.
Kornak, J.
Magbanua, M. J. M.
Van't Veer, L. J.
LeStage, B.
Esserman, L. J.
Hylton, N. M.
Tomography2024Journal Article, cited 0 times
Website
BACKGROUND: This multicenter and retrospective study investigated the additive value of tumor morphologic features derived from the functional tumor volume (FTV) tumor mask at pre-treatment (T0) and the early treatment time point (T1) in the prediction of pathologic outcomes for breast cancer patients undergoing neoadjuvant chemotherapy. METHODS: A total of 910 patients enrolled in the multicenter I-SPY 2 trial were included. FTV and tumor morphologic features were calculated from the dynamic contrast-enhanced (DCE) MRI. A poor response was defined as a residual cancer burden (RCB) class III (RCB-III) at surgical excision. The area under the receiver operating characteristic curve (AUC) was used to evaluate the predictive performance. The analysis was performed in the full cohort and in individual sub-cohorts stratified by hormone receptor (HR) and human epidermal growth factor receptor 2 (HER2) status. RESULTS: In the full cohort, the AUCs for the use of the FTV ratio and clinicopathologic data were 0.64 +/- 0.03 (mean +/- SD [standard deviation]). With morphologic features, the AUC increased significantly to 0.76 +/- 0.04 (p < 0.001). The ratio of the surface area to volume ratio between T0 and T1 was found to be the most contributing feature. All top contributing features were from T1. An improvement was also observed in the HR+/HER2- and triple-negative sub-cohorts. The AUC increased significantly from 0.56 +/- 0.05 to 0.70 +/- 0.06 (p < 0.001) and from 0.65 +/- 0.06 to 0.73 +/- 0.06 (p < 0.001), respectively, when adding morphologic features. CONCLUSION: Tumor morphologic features can improve the prediction of RCB-III compared to using FTV only at the early treatment time point.
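The most contributory feature, the surface-area-to-volume ratio of the tumor mask, can be computed from a binary 3D mask as sketched below with scikit-image; the function name and the toy sphere are ours, whereas the paper derives its FTV masks from DCE-MRI.

```python
import numpy as np
from skimage import measure

def surface_to_volume_ratio(mask, spacing=(1.0, 1.0, 1.0)):
    """Surface area / volume of a binary 3D tumor mask (e.g. an FTV mask).
    Surface area from a marching-cubes mesh; volume from voxel counting."""
    verts, faces, _, _ = measure.marching_cubes(mask.astype(float),
                                                level=0.5, spacing=spacing)
    area = measure.mesh_surface_area(verts, faces)
    volume = mask.sum() * np.prod(spacing)
    return area / volume

# A 10-voxel-radius sphere as a toy "tumor".
zz, yy, xx = np.mgrid[:32, :32, :32]
mask = (zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2 <= 10 ** 2

r0 = surface_to_volume_ratio(mask)
# The paper's top feature is the T1/T0 ratio of this quantity: tumors
# that shrink or roughen between time points change it.
print(f"SA:V = {r0:.3f} (analytic 3/r = 0.300 for a radius-10 sphere)")
```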
Machine Learning Classification of Body Part, Imaging Axis, and Intravenous Contrast Enhancement on CT Imaging
Li, Wuqi
Lin, Hui Ming
Lin, Amy
Napoleone, Marc
Moreland, Robert
Murari, Alexis
Stepanov, Maxim
Ivanov, Eric
Prasad, Abhinav Sanjeeva
Shih, George
Hu, Zixuan
Zulbayar, Suvd
Sejdić, Ervin
Colak, Errol
2023Journal Article, cited 0 times
C4KC-KiTS
CPTAC-LSCC
SPIE-AAPM Lung CT Challenge
StageII-Colorectal-CT
Purpose: The development and evaluation of machine learning models that automatically identify the body part(s) imaged, axis of imaging, and the presence of intravenous contrast material of a CT series of images. Methods: This retrospective study included 6955 series from 1198 studies (501 females, 697 males, mean age 56.5 years) obtained between January 2010 and September 2021. Each series was annotated by a trained board-certified radiologist with labels consisting of 16 body parts, 3 imaging axes, and whether an intravenous contrast agent was used. The studies were randomly assigned to the training, validation and testing sets with a proportion of 70%, 20% and 10%, respectively, to develop a 3D deep neural network for each classification task. External validation was conducted with a total of 35,272 series from 7 publicly available datasets. The classification accuracy for each series was independently assessed for each task to evaluate model performance. Results: The accuracies for identifying the body parts, imaging axes, and the presence of intravenous contrast were 96.0% (95% CI: 94.6%, 97.2%), 99.2% (95% CI: 98.5%, 99.7%), and 97.5% (95% CI: 96.4%, 98.5%), respectively. The generalizability of the models was demonstrated through external validation, with accuracies of 89.7-97.8%, 98.6-100%, and 87.8-98.6% for the same tasks. Conclusions: The developed models demonstrated high performance on both internal and external testing in identifying key aspects of a CT series.
BTSSPro: Prompt-Guided Multimodal Co-Learning for Breast Cancer Tumor Segmentation and Survival Prediction
Li, Wei
Liu, Tianyu
Feng, Feiyan
Yu, Shengpeng
Wang, Hong
Sun, Yanshen
IEEE Journal of Biomedical and Health Informatics2024Journal Article, cited 0 times
Breast-MRI-NACT-Pilot
ISPY1
Early detection significantly enhances patients' survival rates by identifying tumors in their initial stages through medical imaging. However, prevailing methodologies encounter challenges in extracting comprehensive information from diverse modalities, thereby exacerbating semantic disparities and overlooking critical task correlations, consequently compromising the accuracy of prognosis predictions. Moreover, clinical insights emphasize the advantageous sharing of parameters between tumor segmentation and survival prediction for enhanced prognostic accuracy. This paper proposes a novel model, BTSSPro, designed to concurrently address Breast cancer Tumor Segmentation and Survival prediction through a Prompt-guided multi-modal co-learning framework. Technologically, our approach involves the extraction of tumor-specific discriminative features utilizing shared dual attention (SDA) blocks, which amalgamate spatial and channel information from breast MR images. Subsequently, we employ a guided fusion module (GFM) to seamlessly integrate the Electronic Health Record (EHR) vector into the extracted tumor-related discriminative feature representations. This integration prompts the model's feature selection to align more closely with real-world scenarios. Finally, a feature harmonic unit (FHU) is introduced to coordinate the transformer encoder and CNN decoder, thus reducing semantic differences. Remarkably, BTSSPro achieved a C-index of 0.968 and Dice score of 0.715 on the Breast-MRI-NACT-Pilot dataset and a C-index of 0.807 and Dice score of 0.791 on the ISPY1 dataset, surpassing the previous state-of-the-art methods.
Breast Multiparametric MRI for Prediction of Neoadjuvant Chemotherapy Response in Breast Cancer: The BMMR2 Challenge
Li, W.
Partridge, S. C.
Newitt, D. C.
Steingrimsson, J.
Marques, H. S.
Bolan, P. J.
Hirano, M.
Bearce, B. A.
Kalpathy-Cramer, J.
Boss, M. A.
Teng, X.
Zhang, J.
Cai, J.
Kontos, D.
Cohen, E. A.
Mankowski, W. C.
Liu, M.
Ha, R.
Pellicer-Valero, O. J.
Maier-Hein, K.
Rabinovici-Cohen, S.
Tlusty, T.
Ozery-Flato, M.
Parekh, V. S.
Jacobs, M. A.
Yan, R.
Sung, K.
Kazerouni, A. S.
DiCarlo, J. C.
Yankeelov, T. E.
Chenevert, T. L.
Hylton, N. M.
Radiol Imaging Cancer2024Journal Article, cited 0 times
Website
Acute myeloid leukemia
ACRIN 6698
ACRIN 6698/I-SPY2 Breast DWI
BMMR2 Challenge
Female
Humans
Middle Aged
Artificial Intelligence
*Breast Neoplasms/diagnostic imaging/drug therapy
Magnetic Resonance Imaging (MRI)
Multiparametric Magnetic Resonance Imaging (mpMRI)
Neoadjuvant Therapy
Pathologic Complete Response
Adult
BREAST
Tumor Response
Purpose To describe the design, conduct, and results of the Breast Multiparametric MRI for prediction of neoadjuvant chemotherapy Response (BMMR2) challenge. Materials and Methods The BMMR2 computational challenge opened on May 28, 2021, and closed on December 21, 2021. The goal of the challenge was to identify image-based markers derived from multiparametric breast MRI, including diffusion-weighted imaging (DWI) and dynamic contrast-enhanced (DCE) MRI, along with clinical data for predicting pathologic complete response (pCR) following neoadjuvant treatment. Data included 573 breast MRI studies from 191 women (mean age [+/-SD], 48.9 years +/- 10.56) in the I-SPY 2/American College of Radiology Imaging Network (ACRIN) 6698 trial (ClinicalTrials.gov: NCT01042379). The challenge cohort was split into training (60%) and test (40%) sets, with teams blinded to test set pCR outcomes. Prediction performance was evaluated by area under the receiver operating characteristic curve (AUC) and compared with the benchmark established from the ACRIN 6698 primary analysis. Results Eight teams submitted final predictions. Entries from three teams had point estimators of AUC that were higher than the benchmark performance (AUC, 0.782 [95% CI: 0.670, 0.893], with AUCs of 0.803 [95% CI: 0.702, 0.904], 0.838 [95% CI: 0.748, 0.928], and 0.840 [95% CI: 0.748, 0.932]). A variety of approaches were used, ranging from extraction of individual features to deep learning and artificial intelligence methods, incorporating DCE and DWI alone or in combination. Conclusion The BMMR2 challenge identified several models with high predictive performance, which may further expand the value of multiparametric breast MRI as an early marker of treatment response. Clinical trial registration no. NCT01042379 Keywords: MRI, Breast, Tumor Response Supplemental material is available for this article. (c) RSNA, 2024.
MAD‐UNet: A deep U‐shaped network combined with an attention mechanism for pancreas segmentation in CT images
Li, Weisheng
Qin, Sheng
Li, Feiyan
Wang, Linhong
Medical Physics2020Journal Article, cited 0 times
Pancreas-CT
PURPOSE: Pancreas segmentation is a difficult task because of the high intrapatient variability in the shape, size, and location of the organ, as well as the low contrast and small footprint of the CT scan. At present, the U-Net model is likely to lead to the problems of intraclass inconsistency and interclass indistinction in pancreas segmentation. To solve this problem, we improved the contextual and semantic feature information acquisition method of the biomedical image segmentation model (U-Net) based on a convolutional network and proposed an improved segmentation model called the multiscale attention dense residual U-shaped network (MAD-UNet).
METHODS: There are two aspects considered in this method. First, we adopted dense residual blocks and weighted binary cross-entropy to enhance the semantic features to learn the details of the pancreas. Using such an approach can reduce the effects of intraclass inconsistency. Second, we used an attention mechanism and multiscale convolution to enrich the contextual information and suppress learning in unrelated areas. We let the model be more sensitive to pancreatic marginal information and reduced the impact of interclass indistinction.
RESULTS: We evaluated our model using fourfold cross-validation on 82 abdominal enhanced three-dimensional (3D) CT scans from the National Institutes of Health (NIH-82) and 281 3D CT scans from the 2018 MICCAI segmentation decathlon challenge (MSD). The experimental results showed that our method achieved state-of-the-art performance on the two pancreatic datasets. The mean Dice coefficients were 86.10% ± 3.52% and 88.50% ± 3.70%.
CONCLUSIONS: Our model can effectively solve the problems of intraclass inconsistency and interclass indistinction in the segmentation of the pancreas, and it has value in clinical application. Code is available at https://github.com/Mrqins/pancreas-segmentation.
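The weighted binary cross-entropy mentioned in the METHODS can be expressed in PyTorch as below; the frequency-based weight heuristic is a common choice and not necessarily the paper's exact weighting.

```python
import torch
import torch.nn as nn

# Weighted binary cross-entropy against the pancreas/background imbalance:
# up-weight the rare positive (pancreas) pixels. The weight below
# (background/foreground frequency ratio) is one common heuristic.
logits = torch.randn(2, 1, 64, 64)                  # network output
target = (torch.rand(2, 1, 64, 64) < 0.05).float()  # ~5% pancreas pixels

pos_frac = target.mean().clamp(min=1e-6)
pos_weight = (1 - pos_frac) / pos_frac              # ~19 for 5% positives

criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)
loss = criterion(logits, target)
print(float(loss))
```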
Adaptive multi-modality fusion network for glioma grading
Wang, Li
Cao, Ying
Tian, Lili
Chen, Qijian
Guo, Shunchao
Zhang, Jian
Wang, Lihui
Journal of Image and Graphics2021Journal Article, cited 0 times
BraTS-TCGA-LGG
BraTS-TCGA-GBM
MICCAI
Classification
BRAIN
Objective: Accurate grading of glioma is the main way to assist in formulating personalized treatment plans, but most existing studies focus on classification based on a pre-delineated tumor region of interest, which cannot meet the real-time requirements of clinical computer-aided diagnosis. This paper therefore proposes an adaptive multi-modal fusion network (AMMFNet) that achieves end-to-end, accurate prediction of glioma grade from the originally acquired images without requiring delineation of the tumor region. Methods: AMMFNet uses four isomorphic network branches to extract multi-scale image features from the different modalities; it performs feature fusion with an adaptive multi-modal feature fusion module and a dimensionality-reduction module; and it combines a cross-entropy classification loss with a feature embedding loss to improve glioma classification accuracy. To verify model performance, the network was trained and tested on the MICCAI (Medical Image Computing and Computer Assisted Intervention Society) 2018 public dataset and compared with state-of-the-art deep learning models and the latest glioma classification models, using accuracy, the area under the receiver operating characteristic curve (AUC), and other indicators for quantitative analysis. Results: Without delineating the tumor region, the model predicted glioma grade with an AUC of 0.965; when the tumor region was used, the AUC reached 0.997 with an accuracy of 0.982, exceeding the current best glioma classification model, a multi-task convolutional neural network, by 1.2%. Conclusion: The proposed adaptive multi-modal feature fusion network can accurately predict glioma grade without delineating tumor regions by combining multi-modal and multi-semantic-level features. Keywords: glioma grading; deep learning; multimodal fusion; multiscale features; end-to-end classification
SIFT-GVF-based lung edge correction method for correcting the lung region in CT images
Li, X.
Feng, B.
Qiao, S.
Wei, H.
Feng, C.
PLoS One2023Journal Article, cited 0 times
Website
LIDC-IDRI
Thorax
Computed Tomography (CT)
Lung/diagnostic imaging
Segmentation
Algorithm Development
Radiomic features
Juxtapleural nodules are excluded from the segmented lung region by Hounsfield-unit threshold-based segmentation methods. To re-include those regions in the lung region, a new approach based on the scale-invariant feature transform and gradient vector flow models is presented in this study. First, the scale-invariant feature transform method is utilized to detect all scale-invariant points in the binary lung region, and the boundary points in the neighborhood of a scale-invariant point are collected to form supportive boundary lines. Second, a Fourier descriptor is utilized to obtain a character representation of each supportive boundary line, and spectrum energy is used to recognize the supportive boundaries that must be corrected. Third, the gradient vector flow-snake method is presented to correct the recognized supportive boundaries with a smooth profile curve, giving an ideal corrected edge in those regions. Finally, the performance of the proposed method is evaluated through experiments on multiple real computed tomography images. The accurate results and robustness demonstrate that the proposed method can correct the juxtapleural region precisely.
Multi-step Cascaded Networks for Brain Tumor Segmentation
Li, Xiangyu
Luo, Gongning
Wang, Kuanquan
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Automatic segmentation
Segmentation
Challenge
Automatic brain tumor segmentation method plays an extremely important role in the whole process of brain tumor diagnosis and treatment. In this paper, we propose a multi-step cascaded network which takes the hierarchical topology of the brain tumor substructures into consideration and segments the substructures from coarse to fine. During segmentation, the result of the former step is utilized as the prior information for the next step to guide the finer segmentation process. The whole network is trained in an end-to-end fashion. Besides, to alleviate the gradient vanishing issue and reduce overfitting, we added several auxiliary outputs as a kind of deep supervision for each step and introduced several data augmentation strategies, respectively, which proved to be quite efficient for brain tumor segmentation. Lastly, focal loss is utilized to solve the problem of remarkably imbalance of the tumor regions and background. Our model is tested on the BraTS 2019 validation dataset, the preliminary results of mean dice coefficients are 0.886, 0.813, 0.771 for the whole tumor, tumor core and enhancing tumor respectively. Code is available at https://github.com/JohnleeHIT/Brats2019.
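The focal loss used against the tumor/background imbalance is the standard formulation of Lin et al.; a minimal binary PyTorch version follows, with gamma and alpha at their customary defaults rather than values confirmed by the paper.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, target, gamma=2.0, alpha=0.25):
    """Binary focal loss: well-classified pixels are down-weighted by
    (1 - p_t)^gamma so training focuses on hard (often tumor) pixels."""
    bce = F.binary_cross_entropy_with_logits(logits, target,
                                             reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * target + (1 - p) * (1 - target)
    alpha_t = alpha * target + (1 - alpha) * (1 - target)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

logits = torch.randn(2, 1, 64, 64)
target = (torch.rand(2, 1, 64, 64) < 0.02).float()  # ~2% tumor pixels
print(float(focal_loss(logits, target)))
```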
The Visual Computer2023Journal Article, cited 0 times
HER2 tumor ROIs
TCGA-BRCA
TCGA-KIRC
Pathomics
Whole Slide Imaging (WSI)
Classification
Genomic Data Commons
Cancer is one of the most common diseases around the world. For cancer diagnosis, pathological examination is the most effective method, but its heavy and time-consuming workflow has increased the workload of pathologists. With the appearance of whole slide image (WSI) scanners, tissues on a glass slide can be saved as a high-definition digital image, which makes it possible to diagnose diseases with computer aid. However, the extreme size and the lack of pixel-level annotations of WSIs pose a great challenge for machine learning in pathology image diagnosis. To solve this problem, we propose a metric learning-based two-stage MIL framework (TSMIL) for WSI classification, which combines two stages of supervised clustering and metric-based classification. The training samples (WSIs) are first clustered into different clusters based on their labels in supervised clustering. Then, building on this step, we propose four different strategies to measure the distance of the test samples to each class cluster and thereby classify them: MaxS, AvgS, DenS and HybS. Our model is evaluated on three pathology datasets: TCGA-NSCLC, TCGA-RCC and HER2. The average AUC scores reach 0.9895 and 0.9988 on TCGA-NSCLC and TCGA-RCC, respectively, and 0.9265 on HER2. The results show that our method outperforms state-of-the-art methods, and its strong performance on different cancer datasets verifies its feasibility as a general architecture.
Radiomics-Based Method for Predicting the Glioma Subtype as Defined by Tumor Grade, IDH Mutation, and 1p/19q Codeletion
Gliomas are among the most common types of central nervous system (CNS) tumors. A prompt diagnosis of the glioma subtype is crucial to estimate the prognosis and personalize the treatment strategy. The objective of this study was to develop a radiomics pipeline based on clinical Magnetic Resonance Imaging (MRI) scans to noninvasively predict the glioma subtype, as defined based on the tumor grade, isocitrate dehydrogenase (IDH) mutation status, and 1p/19q codeletion status. A total of 212 patients from the public retrospective The Cancer Genome Atlas Low Grade Glioma (TCGA-LGG) and The Cancer Genome Atlas Glioblastoma Multiforme (TCGA-GBM) datasets were used for the experiments and analyses. Different settings in the radiomics pipeline were investigated to improve the classification, including the Z-score normalization, the feature extraction strategy, the image filter applied to the MRI images, the introduction of clinical information, ComBat harmonization, the classifier chain strategy, etc. Based on numerous experiments, we finally reached an optimal pipeline for classifying the glioma tumors. We then tested this final radiomics pipeline on the hold-out test data with 51 randomly sampled seeds for reliable and robust conclusions. The results showed that, after tuning the radiomics pipeline, the mean AUC improved from 0.8935 (±0.0351) to 0.9319 (±0.0386), from 0.8676 (±0.0421) to 0.9283 (±0.0333), and from 0.6473 (±0.1074) to 0.8196 (±0.0702) in the test data for predicting the tumor grade, IDH mutation, and 1p/19q codeletion status, respectively. The mean accuracy for predicting the five glioma subtypes also improved from 0.5772 (±0.0816) to 0.6716 (±0.0655). Finally, we analyzed the characteristics of the radiomic features that best distinguished the glioma grade, the IDH mutation, and the 1p/19q codeletion status, respectively. Apart from the promising prediction of the glioma subtype, this study also provides a better understanding of radiomics model development and interpretability. The results in this paper are replicable with our Python code, publicly available on GitHub.
4× Super‐resolution of unsupervised CT images based on GAN
Li, Yunhe
Chen, Lunqiang
Li, Bo
Zhao, Huiyan
IET Image Processing2023Journal Article, cited 0 times
QIN LUNG CT
Imaging Feature
Super-resolution
Algorithm Development
Cloud computing
PyTorch
Improving the resolution of computed tomography (CT) medical images can help doctors more accurately identify lesions, which is important in clinical diagnosis. In the absence of natural paired datasets of high resolution and low resolution image pairs, we abandoned the conventional Bicubic method and innovatively used a dataset of images of a single resolution to create near-natural high–low-resolution image pairs by designing a deep learning network and utilizing noise injection. In addition, we propose a super-resolution generative adversarial network called KerSRGAN which includes a super-resolution generator, super-resolution discriminator, and super-resolution feature extractor to achieve a 4× super-resolution of CT images. The results of an experimental evaluation show that KerSRGAN achieved superior performance compared to the state-of-the-art methods in terms of a quantitative comparison of non-reference image quality evaluation indicators on the generated 4× super-resolution CT images. Moreover, in terms of an intuitive visual comparison, the images generated by the KerSRGAN method had more precise details and better perceptual quality.
Prototypical few-shot segmentation for cross-institution male pelvic structures with spatial registration
Li, Yiwen
Fu, Yunguan
Gayo, Iani J M B
Yang, Qianye
Min, Zhe
Saeed, Shaheer U
Yan, Wen
Wang, Yipei
Noble, J Alison
Emberton, Mark
Clarkson, Matthew J
Huisman, Henkjan
Barratt, Dean C
Prisacariu, Victor A
Hu, Yipeng
Medical Image Analysis2023Journal Article, cited 0 times
Prostate-3T
PROSTATE-DIAGNOSIS
PROSTATE-MRI
The prowess that makes few-shot learning desirable in medical image analysis is the efficient use of the support image data, which are labelled to classify or segment new classes, a task that otherwise requires substantially more training images and expert annotations. This work describes a fully 3D prototypical few-shot segmentation algorithm, such that the trained networks can be effectively adapted to clinically interesting structures that are absent in training, using only a few labelled images from a different institute. First, to compensate for the widely recognised spatial variability between institutions in episodic adaptation of novel classes, a novel spatial registration mechanism is integrated into prototypical learning, consisting of a segmentation head and a spatial alignment module. Second, to assist the training with observed imperfect alignment, a support mask conditioning module is proposed to further utilise the annotation available from the support images. Extensive experiments are presented in an application of segmenting eight anatomical structures important for interventional planning, using a data set of 589 pelvic T2-weighted MR images, acquired at seven institutes. The results demonstrate the efficacy of each of the 3D formulation, the spatial registration, and the support mask conditioning, all of which made positive contributions independently or collectively. Compared with the previously proposed 2D alternatives, the few-shot segmentation performance was improved with statistical significance, regardless of whether the support data come from the same or different institutes.
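Stripped of the spatial registration and mask-conditioning modules, the prototypical core (masked average pooling of support features, then nearest-prototype labelling of query voxels) can be sketched as follows; tensor sizes and the temperature tau are illustrative.

```python
import torch
import torch.nn.functional as F

def masked_average_prototype(feat, mask):
    """Class prototype via masked average pooling.
    feat: (C, D, H, W) support features; mask: (D, H, W) binary labels."""
    mask = mask.unsqueeze(0).float()
    return (feat * mask).sum(dim=(1, 2, 3)) / mask.sum().clamp(min=1)

def prototype_segment(query_feat, prototypes, tau=20.0):
    """Label each query voxel by cosine similarity to class prototypes.
    query_feat: (C, D, H, W); prototypes: (K, C)."""
    q = F.normalize(query_feat, dim=0).flatten(1)  # (C, V)
    p = F.normalize(prototypes, dim=1)             # (K, C)
    logits = tau * (p @ q)                          # (K, V)
    return logits.argmax(dim=0).view(query_feat.shape[1:])

C, D, H, W = 8, 4, 16, 16
support_feat = torch.randn(C, D, H, W)
support_mask = (torch.rand(D, H, W) < 0.1).float()
protos = torch.stack([
    masked_average_prototype(support_feat, 1 - support_mask),  # background
    masked_average_prototype(support_feat, support_mask),      # structure
])
pred = prototype_segment(torch.randn(C, D, H, W), protos)
print(pred.shape)  # torch.Size([4, 16, 16])
```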
Histopathologic and proteogenomic heterogeneity reveals features of clear cell renal cell carcinoma aggressiveness
Li, Y.
Lih, T. M.
Dhanasekaran, S. M.
Mannan, R.
Chen, L.
Cieslik, M.
Wu, Y.
Lu, R. J.
Clark, D. J.
Kolodziejczak, I.
Hong, R.
Chen, S.
Zhao, Y.
Chugh, S.
Caravan, W.
Naser Al Deen, N.
Hosseini, N.
Newton, C. J.
Krug, K.
Xu, Y.
Cho, K. C.
Hu, Y.
Zhang, Y.
Kumar-Sinha, C.
Ma, W.
Calinawan, A.
Wyczalkowski, M. A.
Wendl, M. C.
Wang, Y.
Guo, S.
Zhang, C.
Le, A.
Dagar, A.
Hopkins, A.
Cho, H.
Leprevost, F. D. V.
Jing, X.
Teo, G. C.
Liu, W.
Reimers, M. A.
Pachynski, R.
Lazar, A. J.
Chinnaiyan, A. M.
Van Tine, B. A.
Zhang, B.
Rodland, K. D.
Getz, G.
Mani, D. R.
Wang, P.
Chen, F.
Hostetter, G.
Thiagarajan, M.
Linehan, W. M.
Fenyo, D.
Jewell, S. D.
Omenn, G. S.
Mehra, R.
Wiznerowicz, M.
Robles, A. I.
Mesri, M.
Hiltke, T.
An, E.
Rodriguez, H.
Chan, D. W.
Ricketts, C. J.
Nesvizhskii, A. I.
Zhang, H.
Ding, L.
Clinical Proteomic Tumor Analysis Consortium
Cancer Cell2022Journal Article, cited 0 times
CPTAC-CCRCC
TCGA-KIRC
Pathomics
histopathology imaging features
Uchl1
clear cell renal cell carcinoma (ccRCC)
glycoproteomics
histology
metabolome
phosphoproteomics
proteogenomics
single-nuclei RNA-seq
tumor heterogeneity
Clear cell renal cell carcinomas (ccRCCs) represent approximately 75% of RCC cases and account for most RCC-associated deaths. Inter- and intratumoral heterogeneity (ITH) results in varying prognosis and treatment outcomes. To obtain the most comprehensive profile of ccRCC, we perform integrative histopathologic, proteogenomic, and metabolomic analyses on 305 ccRCC tumor segments and 166 paired adjacent normal tissues from 213 cases. Combining histologic and molecular profiles reveals ITH in 90% of ccRCCs, with 50% demonstrating immune signature heterogeneity. High tumor grade, along with BAP1 mutation, genome instability, increased hypermethylation, and a specific protein glycosylation signature define a high-risk disease subset, where UCHL1 expression displays prognostic value. Single-nuclei RNA sequencing of the adverse sarcomatoid and rhabdoid phenotypes uncover gene signatures and potential insights into tumor evolution. In vitro cell line studies confirm the potential of inhibiting identified phosphoproteome targets. This study molecularly stratifies aggressive histopathologic subtypes that may inform more effective treatment strategies.
Genotype prediction of ATRX mutation in lower-grade gliomas using an MRI radiomics signature
Li, Y.
Liu, X.
Qian, Z.
Sun, Z.
Xu, K.
Wang, K.
Fan, X.
Zhang, Z.
Li, S.
Wang, Y.
Jiang, T.
Eur Radiol2018Journal Article, cited 2 times
Website
Radiogenomics
TCGA-LGG
Biomarkers
Genetics
Glioma
Machine learning
Magnetic resonance imaging
OBJECTIVES: To predict ATRX mutation status in patients with lower-grade gliomas using radiomic analysis. METHODS: Cancer Genome Atlas (TCGA) patients with lower-grade gliomas were randomly allocated into training (n = 63) and validation (n = 32) sets. An independent external-validation set (n = 91) was built based on the Chinese Genome Atlas (CGGA) database. After feature extraction, an ATRX-related signature was constructed. Subsequently, the radiomic signature was combined with a support vector machine to predict ATRX mutation status in training, validation and external-validation sets. Predictive performance was assessed by receiver operating characteristic curve analysis. Correlations between the selected features were also evaluated. RESULTS: Nine radiomic features were screened as an ATRX-associated radiomic signature of lower-grade gliomas based on the LASSO regression model. All nine radiomic features were texture-associated (e.g. sum average and variance). The predictive efficiencies measured by the area under the curve were 94.0 %, 92.5 % and 72.5 % in the training, validation and external-validation sets, respectively. The overall correlations between the nine radiomic features were low in both TCGA and CGGA databases. CONCLUSIONS: Using radiomic analysis, we achieved efficient prediction of ATRX genotype in lower-grade gliomas, and our model was effective in two independent databases. KEY POINTS: * ATRX in lower-grade gliomas could be predicted using radiomic analysis. * The LASSO regression algorithm and SVM performed well in radiomic analysis. * Nine radiomic features were screened as an ATRX-predictive radiomic signature. * The machine-learning model for ATRX-prediction was validated by an independent database.
A 3D lung lesion variational autoencoder
Li, Yiheng
Sadée, Christoph Y.
Carrillo-Perez, Francisco
Selby, Heather M.
Thieme, Alexander H.
Gevaert, Olivier
2024Journal Article, cited 0 times
NSCLC Radiogenomics
Machine Learning
CT
Radiomics
In this study, we develop a 3D beta variational autoencoder (beta-VAE) to advance lung cancer imaging analysis, countering the constraints of conventional radiomics methods. The autoencoder extracts information from public lung computed tomography (CT) datasets without additional labels. It reconstructs 3D lung nodule images with high quality (structural similarity: 0.774, peak signal-to-noise ratio: 26.1, and mean-squared error: 0.0008). The model effectively encodes lesion sizes in its latent embeddings, with a significant correlation with lesion size found after applying uniform manifold approximation and projection (UMAP) for dimensionality reduction. Additionally, the beta-VAE can synthesize new lesions of varying sizes by manipulating the latent features. The model can predict multiple clinical endpoints, including pathological N stage or KRAS mutation status, on the Stanford radiogenomics lung cancer dataset. Comparisons with other methods show that the beta-VAE performs equally well in these tasks, suggesting its potential as a pretrained model for predicting patient outcomes in medical imaging.
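A minimal 3D beta-VAE skeleton conveying the training objective (reconstruction plus beta-weighted KL divergence) is sketched below in PyTorch; the architecture, latent size, and beta value are placeholders rather than the study's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyBetaVAE(nn.Module):
    """Minimal 3D beta-VAE skeleton; layer sizes are placeholders."""

    def __init__(self, latent=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.mu = nn.Linear(32 * 8 * 8 * 8, latent)
        self.logvar = nn.Linear(32 * 8 * 8 * 8, latent)
        self.dec = nn.Sequential(
            nn.Linear(latent, 32 * 8 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (32, 8, 8, 8)),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparam.
        return self.dec(z), mu, logvar

def beta_vae_loss(recon, x, mu, logvar, beta=4.0):
    rec = F.mse_loss(recon, x, reduction="mean")
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kld  # beta > 1 encourages disentangled latents

model = TinyBetaVAE()
x = torch.randn(2, 1, 32, 32, 32)  # toy 32^3 nodule patches
recon, mu, logvar = model(x)
print(float(beta_vae_loss(recon, x, mu, logvar)))
```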
Deep Learning Based Multimodal Brain Tumor Diagnosis
Li, Yuexiang
Shen, Linlin
2018Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Brain tumor segmentation plays an important role in disease diagnosis. In this paper, we propose deep learning frameworks, i.e. MvNet and SPNet, to address the challenges of multimodal brain tumor segmentation. The proposed multi-view deep learning framework (MvNet) uses three multi-branch fully-convolutional residual networks (Mb-FCRN) to segment multimodal brain images from different viewpoints, i.e. slices along the x, y, and z axes. The three sub-networks produce independent segmentation results and vote for the final outcome. SPNet is a CNN-based framework developed to predict the survival time of patients. The proposed deep learning frameworks were evaluated on the BraTS 17 validation set and achieved competitive results. While Dice scores of 0.88, 0.75, and 0.71 were achieved for the whole tumor, enhancing tumor, and tumor core, respectively, an accuracy of 0.55 was obtained for survival prediction.
Influence of feature calculating parameters on the reproducibility of CT radiomic features: a thoracic phantom study
Li, Ying
Tan, Guanghua
Vangel, Mark
Hall, Jonathan
Cai, Wenli
Quantitative Imaging in Medicine and Surgery2020Journal Article, cited 0 times
Website
Phantom FDA
Radiomic feature
Prostate Gleason score prediction via MRI using capsule network
Li, Yuheng
Wang, Jing
Hu, Mingzhe
Patel, Pretesh
Mao, Hui
Liu, Tian
Yang, Xiaofeng
Iftekharuddin, Khan M.
Chen, Weijie
2023Conference Paper, cited 0 times
Prostate-MRI-US-Biopsy
Computer Aided Diagnosis (CADx)
Magnetic Resonance Imaging (MRI)
Classification
Convolutional Neural Network (CNN)
PROSTATE
Magnetic Resonance imaging (MRI) is a non-invasive modality for diagnosing prostate carcinoma (PCa) and deep learning has gained increasing interest in MR images. We propose a novel 3D Capsule Network to perform low grade vs high grade PCa classification. The proposed network utilizes Efficient CapsNet as backbone and consists of three main components, 3D convolutional blocks, depth-wise separable 3D convolution, and self-attention routing. The network employs convolutional blocks to extract high level features, which will form primary capsules via depth-wise separable convolution operations. A self-attention mechanism is used to route primary capsules to higher level capsules and finally a PCa grade is assigned. The proposed 3D Capsule Network was trained and tested using a public dataset that involves 529 patients diagnosed with PCa. A baseline 3D CNN method was also experimented for comparison. Our Capsule Network achieved 85% accuracy and 0.87 AUC, while the baseline CNN achieved 80% accuracy and 0.84 AUC. The superior performance of Capsule Network demonstrates its feasibility for PCa grade classification from prostate MRI and shows its potential in assisting clinical decision-making.
x4 Super-Resolution Analysis of Magnetic Resonance Imaging based on Generative Adversarial Network without Supervised Images
Magnetic resonance imaging (MRI) is widely used in clinical auxiliary diagnosis. In acquiring images with MRI machines, patients usually need to be exposed to harmful radiation; the radiation dose can be reduced by lowering the resolution of the MRI images. This paper analyzes super-resolution of low-resolution MRI images based on a deep learning algorithm to ensure the pixel quality required for medical diagnosis, and reconstructs high-resolution MRI images as an alternative method to reduce radiation dose. We study how to improve the resolution of low-dose MRI by a factor of 4 through deep learning-based super-resolution analysis without any other available information. We construct a dataset close to natural low/high-resolution image pairs through degenerate kernel estimation and noise injection, and build a two-layer generative adversarial network based on the design ideas of ESRGAN, PatchGAN, and VGG-19. Tests show that our method outperforms EDSR, RCAN, and ESRGAN on no-reference image quality evaluation indexes.
Augmented Radiology: Patient-Wise Feature Transfer Model for Glioma Grading
Li, Zisheng
Ogino, Masahiro
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
In current oncological workflows of clinical decision making and treatment management, biopsy is the only way to confirm a cancer diagnosis. With the aim of reducing unnecessary biopsies and diagnostic burden, we propose a patient-wise feature transfer model for learning the relationship of phenotypes between radiological and pathological images. We hypothesize that high-level features from the same patient can be linked across modalities of different image scales. We integrate multiple feature transfer blocks between CNN-based networks with single-/multi-modality radiological images and pathological images in an end-to-end training framework. We refer to our method as “augmented radiology” because the inference model requires only radiological images as input, while the prediction result can be linked to specific pathological phenotypes. We apply the proposed method to glioma grading (high-grade vs. low-grade) and train the feature transfer model using patient-wise multimodal MRI images and pathological images. Evaluation results show that the proposed method can achieve pathological tumor grading with high accuracy (AUC 0.959) given only the radiological images as input.
Automatic Brain Tumor Segmentation Using Multi-scale Features and Attention Mechanism
Li, Zhaopei
Shen, Zhiqiang
Wen, Jianhui
He, Tian
Pan, Lin
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2021
BRAIN
Segmentation
Algorithm Development
Gliomas are the most common primary malignant tumors of the brain. Magnetic resonance (MR) imaging is one of the main detection methods for brain tumors, so accurate segmentation of brain tumors from MR images has important clinical significance throughout the diagnostic process. At present, most popular automatic medical image segmentation methods are based on deep learning: many researchers have developed convolutional neural networks for brain tumor segmentation and demonstrated superior performance. In this paper, we propose a novel deep learning-based method named the multi-scale feature recalibration network (MSFR-Net), which extracts features at multiple scales and recalibrates them through the multi-scale feature extraction and recalibration (MSFER) module. In addition, we improve segmentation performance by combining cross-entropy and Dice loss to address the class imbalance problem. We evaluate our proposed architecture on the brain tumor segmentation challenge (BraTS) 2021 test dataset. The proposed method achieved Dice coefficients of 89.15%, 83.02%, and 82.08% for the whole tumor, tumor core, and enhancing tumor, respectively.
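The combined cross-entropy and Dice objective mentioned in the abstract is a standard recipe; a sketch in PyTorch follows, with illustrative weights and smoothing rather than the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def ce_dice_loss(logits, target, n_classes, w_ce=0.5, w_dice=0.5, eps=1e-5):
    """logits: (N, C, D, H, W) raw scores; target: (N, D, H, W) integer labels."""
    ce = F.cross_entropy(logits, target)
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, n_classes).permute(0, 4, 1, 2, 3).float()
    dims = (0, 2, 3, 4)                       # sum over batch and spatial axes
    inter = (probs * onehot).sum(dims)
    denom = probs.sum(dims) + onehot.sum(dims)
    dice = 1.0 - ((2 * inter + eps) / (denom + eps)).mean()
    return w_ce * ce + w_dice * dice
```

Cross-entropy drives per-voxel accuracy while the Dice term directly rewards overlap with the (small) tumor classes, which is why the pairing helps with class imbalance.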
Low-Dose CT Image Denoising with Improving WGAN and Hybrid Loss Function
Li, Z.
Shi, W.
Xing, Q.
Miao, Y.
He, W.
Yang, H.
Jiang, Z.
Comput Math Methods Med2021Journal Article, cited 1 times
Website
Phantom FDA
LUNG
Low-dose CT
Image denoising
Generative Adversarial Network (GAN)
X-ray radiation from computed tomography (CT) poses a potential risk to patients. Simply decreasing the dose makes CT images noisy and compromises diagnostic performance. Here, we develop a novel method for denoising low-dose CT images. Our framework is based on an improved generative adversarial network coupled with a hybrid loss function comprising adversarial, perceptual, sharpness, and structural similarity losses. Among these terms, the perceptual and structural similarity losses preserve textural details, the sharpness loss keeps reconstructed images clear, and the adversarial loss sharpens boundary regions. Experimental results show that the proposed method removes noise and artifacts more effectively than state-of-the-art methods in terms of visual effect, quantitative measurements, and texture detail.
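A sketch of how such a hybrid generator loss can be assembled. The critic, VGG feature extractor, and SSIM metric are assumed to come from existing modules (`critic`, `vgg_features`, and `ssim` are placeholders); the sharpness term is written out here as a finite-difference gradient penalty, and all weights are illustrative.

```python
import torch

def sharpness_loss(pred, target):
    """Penalize differences in image gradients, keeping reconstructed edges crisp."""
    dx = lambda t: t[..., :, 1:] - t[..., :, :-1]
    dy = lambda t: t[..., 1:, :] - t[..., :-1, :]
    return (dx(pred) - dx(target)).abs().mean() + (dy(pred) - dy(target)).abs().mean()

def generator_loss(pred, target, critic, vgg_features, ssim,
                   w_adv=1e-3, w_perc=1.0, w_sharp=0.1, w_ssim=0.5):
    l_adv = -critic(pred).mean()                                 # WGAN generator term
    l_perc = (vgg_features(pred) - vgg_features(target)).pow(2).mean()
    l_sharp = sharpness_loss(pred, target)
    l_ssim = 1.0 - ssim(pred, target)                            # SSIM in [0, 1]
    return w_adv * l_adv + w_perc * l_perc + w_sharp * l_sharp + w_ssim * l_ssim
```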
Large-scale retrieval for medical image analytics: A comprehensive review
Li, Zhongyu
Zhang, Xiaofan
Müller, Henning
Zhang, Shaoting
Medical Image Analysis2018Journal Article, cited 23 times
Website
Medical image analysis
Information retrieval
Large scale
Computer aided diagnosis
Multiregional radiomics profiling from multiparametric MRI: Identifying an imaging predictor of IDH1 mutation status in glioblastoma
Li, Zhi‐Cheng
Bai, Hongmin
Sun, Qiuchang
Zhao, Yuanshen
Lv, Yanchun
Zhou, Jian
Liang, Chaofeng
Chen, Yinsheng
Liang, Dong
Zheng, Hairong
Cancer medicine2018Journal Article, cited 0 times
Website
TCGA-GBM
Radiogenomics
Glioblastoma multiforme (GBM)
Magnetic Resonance Imaging (MRI)
ITK
Random forest
Isocitrate dehydrogenase (IDH) mutation
PURPOSE: Isocitrate dehydrogenase 1 (IDH1) has been proven to be a prognostic and predictive marker in glioblastoma (GBM) patients. The purpose was to preoperatively predict IDH mutation status in GBM using multiregional radiomics features from multiparametric magnetic resonance imaging (MRI). METHODS: In this retrospective multicenter study, 225 patients were included. A total of 1614 multiregional features were extracted from the enhancement area, non-enhancement area, necrosis, edema, tumor core, and whole tumor in multiparametric MRI. Three multiregional radiomics models were built from the tumor core, whole tumor, and all regions using all-relevant feature selection and random forest classification for predicting IDH1. Four single-region models and a model combining all-region features with clinical factors (age, sex, and Karnofsky performance status) were also built. All models were built from a training cohort (118 patients) and tested on an independent validation cohort (107 patients). RESULTS: Among the four single-region radiomics models, the edema model achieved the best accuracy of 96% and the best F1-score of 0.75, while the non-enhancement model achieved the best area under the receiver operating characteristic curve (AUC) of 0.88 in the validation cohort. The overall performance of the tumor-core model (accuracy 0.96, AUC 0.86, and F1-score 0.75) and the whole-tumor model (accuracy 0.96, AUC 0.88, and F1-score 0.75) was slightly better than that of the single-region models. The 8-feature all-region radiomics model achieved improved overall performance, with an accuracy of 96%, an AUC of 0.90, and an F1-score of 0.78. Among all models, the model combining all-region imaging features with age achieved the best performance, with an accuracy of 97%, an AUC of 0.96, and an F1-score of 0.84. CONCLUSIONS: The radiomics model built with multiregional features from multiparametric MRI has the potential to preoperatively detect IDH1 mutation status in GBM patients. The multiregional model built with all-region features performed better than the single-region models, while combining age with all-region features achieved the best performance.
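A generic sketch of the classification step: random-forest prediction of IDH mutation status from a radiomics feature matrix, validated on a held-out cohort. The all-relevant feature selection used in the paper is approximated here by sklearn's importance-based selector, and `X_train`/`y_train`/`X_valid`/`y_valid` are placeholder arrays.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.metrics import roc_auc_score

# Importance-based feature selection (a stand-in for the all-relevant method).
rf = RandomForestClassifier(n_estimators=500, random_state=0)
selector = SelectFromModel(rf).fit(X_train, y_train)
X_tr, X_va = selector.transform(X_train), selector.transform(X_valid)

# Fit on the training cohort, report AUC on the independent validation cohort.
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_train)
print("validation AUC:", roc_auc_score(y_valid, clf.predict_proba(X_va)[:, 1]))
```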
Brain Tumor Segmentation Using 3D Convolutional Neural Network
Liang, Kaisheng
Lu, Wenlian
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Convolutional Neural Network (CNN)
Brain tumor segmentation is one of the most crucial procedures in the diagnosis of brain tumors because it is of great significance for the analysis and visualization of brain structures and can guide surgery. Following the natural scene segmentation model FCN, the encoder-decoder model U-Net has become the most representative architecture, and an increasing number of researchers are trying to improve it to achieve better performance. In this paper, we focus on improving the encoder-decoder network and on the analysis of 3D medical images. We propose an additional path to enhance the encoder part and two separate up-sampling paths for the decoder part of the model. The proposed approach was trained and evaluated on the BraTS 2019 dataset.
Fast automated detection of COVID-19 from medical images using convolutional neural networks
Liang, Shuang
Liu, Huixiang
Gu, Yu
Guo, Xiuhua
Li, Hongjun
Li, Li
Wu, Zhiyuan
Liu, Mengyang
Tao, Lixin
Communications Biology2021Journal Article, cited 0 times
Website
LIDC
LUNA16 Challenge
CoViD-19
Lung
Human-level comparable control volume mapping with a deep unsupervised-learning model for image-guided radiation therapy
Liang, X.
Bassenne, M.
Hristov, D. H.
Islam, M. T.
Zhao, W.
Jia, M.
Zhang, Z.
Gensheimer, M.
Beadle, B.
Le, Q.
Xing, L.
Comput Biol Med2022Journal Article, cited 0 times
Website
QIN-HEADNECK
HNSCC-3DCT-RT
Head and neck
Image registration
Image-guided radiation therapy
Patient positioning
Unsupervised learning
PURPOSE: To develop a deep unsupervised learning method with control volume (CV) mapping from daily patient positioning CT (dCT) to planning computed tomography (pCT) for precise patient positioning. METHODS: We propose an unsupervised learning framework that maps CVs from dCT to pCT to automatically generate couch shifts, including translation and rotation dimensions. The network inputs are the dCT, the pCT, and the CV positions in the pCT. The output is the transformation parameter of the dCT used to set up head and neck cancer (HNC) patients. The network is trained to maximize image similarity between the CV in the pCT and the CV in the dCT. A total of 554 CT scans from 158 HNC patients were used to evaluate the proposed model; each patient had multiple CT scans acquired at different time points. For testing, couch shifts are calculated by averaging the translation and rotation from the CVs. The ground-truth shifts come from bone landmarks determined by an experienced radiation oncologist. RESULTS: The systematic positioning errors of translation and rotation are less than 0.47 mm and 0.17 degrees, respectively. The random positioning errors of translation and rotation are less than 1.13 mm and 0.29 degrees, respectively. The proposed method increased the proportion of cases registered within a preset tolerance (2.0 mm/1.0 degrees) from 66.67% to 90.91% compared with standard registrations. CONCLUSIONS: We proposed a deep unsupervised learning architecture for patient positioning that incorporates CV mapping, weighting the CV regions differently to mitigate any potential adverse influence of image artifacts on registration. Our experimental results show that the proposed method achieves efficient and effective HNC patient positioning.
Incorporating the Hybrid Deformable Model for Improving the Performance of Abdominal CT Segmentation via Multi-Scale Feature Fusion Network
Liang, Xiaokun
Li, Na
Zhang, Zhicheng
Xiong, Jing
Zhou, Shoujun
Xie, Yaoqin
Medical Image Analysis2021Journal Article, cited 0 times
Website
Pancreas-CT
Segmentation
U-net
Automated multi-organ abdominal computed tomography (CT) image segmentation can assist treatment planning and diagnosis and improve the efficiency of many clinical workflows. 3D convolutional neural networks (CNNs) have recently attained state-of-the-art accuracy, typically relying on supervised training with large amounts of manually annotated data. Many methods use data augmentation with rigid or affine spatial transformations to alleviate over-fitting and improve robustness. However, rigid or affine transformations fail to capture the complex voxel-based deformation in the abdomen, which is filled with soft organs. We developed a novel Hybrid Deformable Model (HDM), consisting of inter- and intra-patient deformations, for more effective data augmentation. The inter-patient deformations were extracted from learning-based deformable registration between different patients, while the intra-patient deformations were formed using random 3D Thin-Plate-Spline (TPS) transformations. Incorporating the HDM enabled the network to capture many of the subtle deformations of abdominal organs. To find a better solution and achieve faster convergence during training, we fused pre-trained multi-scale features into a 3D attention U-Net. We directly compared the segmentation accuracy of the proposed method with previous techniques on several centers' datasets via cross-validation. The proposed method achieves an average Dice Similarity Coefficient (DSC) of 0.852, outperforming other state-of-the-art methods on multi-organ abdominal CT segmentation.
ORRN: An ODE-Based Recursive Registration Network for Deformable Respiratory Motion Estimation With Lung 4DCT Images
Liang, X.
Lin, S.
Liu, F.
Schreiber, D.
Yip, M.
IEEE Trans Biomed Eng2023Journal Article, cited 3 times
Website
4D-Lung
Humans
*Lung Neoplasms
Four-Dimensional Computed Tomography/methods
Lung/diagnostic imaging
Motion correction
Respiratory Rate
Algorithms
Registration
Deep Learning
OBJECTIVE: Deformable Image Registration (DIR) plays a significant role in quantifying deformation in medical data. Recent deep learning methods have shown promising accuracy and speedup for registering a pair of medical images. However, in 4D (3D + time) medical data, organ motion, such as respiratory motion and heart beating, cannot be effectively modeled by pair-wise methods, which are optimized for image pairs and do not consider the organ motion patterns present in 4D data. METHODS: This article presents ORRN, an Ordinary Differential Equation (ODE)-based recursive image registration network. Our network learns to estimate time-varying voxel velocities for an ODE that models deformation in 4D image data. It adopts a recursive registration strategy to progressively estimate a deformation field through ODE integration of voxel velocities. RESULTS: We evaluate the proposed method on two publicly available lung 4DCT datasets, DIRLab and CREATIS, for two tasks: 1) registering all images to the extreme inhale image for 3D+t deformation tracking and 2) registering extreme exhale to inhale phase images. Our method outperforms other learning-based methods in both tasks, producing the smallest target registration errors of 1.24 mm and 1.26 mm, respectively. Additionally, it produces less than 0.001% unrealistic image folding, and the computation takes less than 1 s per CT volume. CONCLUSION: ORRN demonstrates promising registration accuracy, deformation plausibility, and computational efficiency on group-wise and pair-wise registration tasks. SIGNIFICANCE: It has significant implications in enabling fast and accurate respiratory motion estimation for treatment planning in radiation therapy or robot motion planning in thoracic needle insertion.
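Schematically, the recursive registration the abstract describes amounts to integrating a learned velocity field over pseudo-time. A minimal forward-Euler sketch follows; `vel_fn` stands in for the network that ORRN actually learns, and the step count is illustrative.

```python
import numpy as np

def integrate_deformation(vel_fn, shape, n_steps=8):
    """Integrate d(phi)/dt = v(phi, t) from the identity grid; returns sampling coords."""
    dt = 1.0 / n_steps
    phi = np.stack(np.meshgrid(*[np.arange(s, dtype=float) for s in shape],
                               indexing="ij"))   # identity grid, shape (3, D, H, W)
    for k in range(n_steps):
        v = vel_fn(phi, k * dt)                  # voxel velocities at current positions
        phi = phi + dt * v                       # Euler update of the deformation
    return phi
```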
Evaluate the Malignancy of Pulmonary Nodules Using the 3D Deep Leaky Noisy-OR Network
Liao, Fangzhou
Liang, Ming
Li, Zhe
Hu, Xiaolin
Song, Sen
IEEE Trans Neural Netw Learn Syst2017Journal Article, cited 15 times
Website
Radiomics
LUNG
Computer Assisted Detection (CAD)
Deep Learning
Automatic diagnosis of lung cancer from computed tomography scans involves two steps: detecting all suspicious lesions (pulmonary nodules) and evaluating whole-lung/pulmonary malignancy. Currently, there are many studies on the first step but few on the second. Since the existence of a nodule does not definitively indicate cancer, and the morphology of a nodule has a complicated relationship with cancer, diagnosing lung cancer demands careful investigation of every suspicious nodule and integration of information across all nodules. We propose a 3D deep neural network to solve this problem. The model consists of two modules: the first is a 3D region proposal network for nodule detection, which outputs all suspicious nodules for a subject; the second selects the top five nodules based on detection confidence, evaluates their cancer probabilities, and combines them with a leaky noisy-OR gate to obtain the probability of lung cancer for the subject. The two modules share the same backbone network, a modified U-Net. Overfitting caused by the shortage of training data is alleviated by training the two modules alternately. The proposed model won first place in the Data Science Bowl 2017 competition.
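The leaky noisy-OR gate itself is a one-liner: the subject is cancer-free only if the leak cause and every nodule independently fail to produce cancer. A sketch with illustrative numbers:

```python
import numpy as np

def leaky_noisy_or(nodule_probs, leak=0.01):
    """Combine per-nodule cancer probabilities into a subject-level probability."""
    no_cancer = (1.0 - leak) * np.prod(1.0 - np.asarray(nodule_probs))
    return 1.0 - no_cancer

print(leaky_noisy_or([0.6, 0.2, 0.1]))   # 0.71488
```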
An UNet-Based Brain Tumor Segmentation Framework via Optimal Mass Transportation Pre-processing
This article builds a framework for brain tumor segmentation in MRI images using deep learning. For this purpose, we develop a novel two-phase UNet-based OMT framework that increases the proportion of the image occupied by brain tumors using optimal mass transportation (OMT). Moreover, owing to the scarcity of training data, we vary the density function through different parameters to increase data diversity. For post-processing, we propose an adaptive ensemble procedure that solves the eigenvectors of the Dice similarity matrix and chooses the result with the highest aggregation probability as the predicted label. The Dice scores for the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) regions in online validation, computed with SegResUNet, were 0.9214, 0.8823, and 0.8411, respectively. Compared with random-crop pre-processing, OMT is far superior.
An open codebase for enhancing transparency in deep learning-based breast cancer diagnosis utilizing CBIS-DDSM data
Accessible mammography datasets and innovative machine learning techniques are at the forefront of computer-aided breast cancer diagnosis. However, the opacity surrounding private datasets and the unclear methodology behind the selection of subset images from publicly available databases for model training and testing, coupled with arbitrarily incomplete or inaccessible code, markedly intensify the obstacles to replicating and validating a model's efficacy. These challenges, in turn, erect barriers for subsequent researchers striving to learn and advance this field. To address these limitations, we provide a pilot codebase covering the entire process from image preprocessing to the model development and evaluation pipeline, utilizing the publicly available Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) mass subset, including both full images and regions of interest (ROIs). We found that increasing the input size can improve the detection accuracy of malignant cases within each set of models. Collectively, our efforts hold promise in accelerating global software development for breast cancer diagnosis by leveraging our codebase and structure, while also integrating other advancements in the field.
Dose-Conditioned Synthesis of Radiotherapy Dose With Auxiliary Classifier Generative Adversarial Network
Liao, Wentao
Pu, Yuehu
IEEE Access2021Journal Article, cited 0 times
Head-Neck Cetuximab
HNSCC
In recent years, there has been increasing research on automatic radiotherapy planning based on artificial intelligence. Most of this work focuses on dose prediction for radiotherapy planning, that is, the generation of radiation dose distribution images. Because radiotherapy planning data are small-sample in nature, it is difficult to obtain large-scale training datasets. In this paper, we propose a model for dose-conditioned synthesis of radiotherapy dose using an Auxiliary Classifier Generative Adversarial Network (ACGAN), which can customize and synthesize dose distribution images for specific tumor types and beam types. The dose distribution images generated by our model are evaluated with MS-SSIM and PSNR; the results show that the generated image quality is excellent, very close to the real data, and highly diverse, so the model can be used for data augmentation of training datasets for dose prediction methods.
Reproducibility of Tumor Segmentation Outcomes with a Deep Learning Model
Ligneris, Morgane Des
Bonnet, Axel
Chatelain, Yohan
Glatard, Tristan
Sdika, Michaël
Vila, Gaël
Wargnier-Dauchelle, Valentine
Pop, Sorina
Frindel, Carole
2023Conference Paper, cited 0 times
UPENN-GBM
In the last few years, there has been a growing awareness of reproducibility concerns in many areas of science. In this work, our goal is to evaluate the reproducibility of tumor segmentation outcomes produced with a deep segmentation model when MRI images are pre-processed (i) with two different versions of the same pre-processing pipeline, and (ii) by introducing numerical perturbations that mimic executions on different environments. Results show that these two variability sources can lead to important variations in segmentation outcomes: Dice can go as low as 0.59 and Hausdorff distance as high as 84.75. Moreover, both cases show a similar range of values, suggesting that the underlying cause of the instability may be numerical. This work can be used as a benchmark to improve the numerical stability of the pipeline.
Optimization of Median Modified Wiener Filter for Improving Lung Segmentation Performance in Low-Dose Computed Tomography Images
Lim, Sewon
Park, Minji
Kim, Hajin
Kang, Seong-Hyeon
Kim, Kyuseok
Lee, Youngjin
Applied Sciences2023Journal Article, cited 0 times
NLST
In low-dose computed tomography (LDCT), lung segmentation effectively improves the accuracy of lung cancer diagnosis. However, excessive noise is inevitable in LDCT, which can decrease lung segmentation accuracy. To address this problem, it is necessary to derive an optimized kernel size when using the median modified Wiener filter (MMWF) for noise reduction: an incorrect kernel size can result in inadequate noise removal or blurring, degrading segmentation accuracy. Therefore, various kernel sizes of the MMWF were applied in this study, followed by region-growing-based segmentation and quantitative evaluation. In addition to evaluating segmentation performance, we conducted a similarity assessment. Our results indicate that the greatest improvement in segmentation performance and similarity was achieved at a kernel size of 5 × 5. Compared with the noisy image, the accuracy, F1-score, intersection over union, root mean square error, and peak signal-to-noise ratio using the optimized MMWF improved by factors of 1.38, 33.20, 64.86, 7.82, and 1.30, respectively. In conclusion, we have demonstrated that applying the MMWF with an appropriate kernel size optimizes noise and blur reduction and enhances segmentation performance.
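A sketch of a median modified Wiener filter in its usual formulation, where the local mean of the classic Wiener filter is replaced by the local median; the paper's exact variant may differ, and `kernel=5` mirrors the optimal 5 × 5 size reported above.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def mmwf(img, kernel=5, noise_var=None):
    med = median_filter(img, size=kernel)                      # local median (not mean)
    local_mean = uniform_filter(img, size=kernel)
    local_var = uniform_filter(img**2, size=kernel) - local_mean**2
    if noise_var is None:
        noise_var = local_var.mean()                           # crude noise estimate
    gain = np.maximum(local_var - noise_var, 0) / np.maximum(local_var, noise_var)
    return med + gain * (img - med)                            # Wiener update around median
```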
Automated pancreas segmentation and volumetry using deep neural network on computed tomography
Pancreas segmentation is necessary for observing lesions, analyzing anatomical structures, and predicting patient prognosis. Various studies have therefore designed convolutional neural network-based models for pancreas segmentation. However, the deep learning approach is limited by a lack of data, and studies conducted on large computed tomography datasets are scarce. This study therefore performs deep-learning-based semantic segmentation on 1006 participants and evaluates the automatic segmentation performance of the pancreas with four individual three-dimensional segmentation networks. We performed internal validation with the 1006 patients and external validation using The Cancer Imaging Archive pancreas dataset. We obtained mean precision, recall, and Dice similarity coefficients of 0.869, 0.842, and 0.842, respectively, for internal validation with the best-performing of the four deep learning networks. Using the external dataset, the deep learning network achieved mean precision, recall, and Dice similarity coefficients of 0.779, 0.749, and 0.735, respectively. We expect that generalized deep-learning-based systems can assist clinical decisions by providing accurate pancreatic segmentation and quantitative information about the pancreas from abdominal computed tomography.
Three-dimensional steerable discrete cosine transform with application to 3D image compression
Lima, Verusca S.
Madeiro, Francisco
Lima, Juliano B.
Multidimensional Systems and Signal Processing2020Journal Article, cited 0 times
Website
Algorithm Development
RIDER NEURO MRI
Mouse-Mammary
QIN Breast
PROSTATEx
TCGA-CESC
Image compression
This work introduces the three-dimensional steerable discrete cosine transform (3D-SDCT), obtained from the relationship between the discrete cosine transform (DCT) and the graph Fourier transform of a signal on a path graph. It employs the fact that the basis vectors of the 3D-DCT constitute a possible eigenbasis for the Laplacian of the product of such graphs, and uses a rotated version of the 3D-DCT basis. We then evaluate the applicability of the 3D-SDCT to 3D medical image compression, considering the case where there is only one pair of rotation angles per block, with all 3D-DCT basis vectors rotated by the same pair. The results show that the 3D-SDCT can be used efficiently in this application scenario and outperforms the classical 3D-DCT.
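For reference, the separable 3D-DCT that the steerable transform rotates can be computed directly with SciPy; the rotation of the basis itself is specific to the paper and omitted here.

```python
import numpy as np
from scipy.fft import dctn, idctn

block = np.random.rand(8, 8, 8)                  # one 8x8x8 image block
coeffs = dctn(block, type=2, norm="ortho")       # forward 3D-DCT (type-II, orthonormal)
recon = idctn(coeffs, type=2, norm="ortho")      # inverse transform
assert np.allclose(block, recon)                 # perfect reconstruction
```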
Encryption of 3D medical images based on a novel multiparameter cosine number transform
Lima, V.S.
Madeiro, F.
Lima, J.B.
Computers in Biology and Medicine2020Journal Article, cited 0 times
Mouse-Astrocytoma
PROSTATEx
QIN-BREAST
RIDER NEURO MRI
TCGA-CESC
In this paper, a multiparameter cosine number transform is proposed. The transform is obtained using the fact that the basis vectors of the three-dimensional cosine number transform (3D-CNT) constitute a possible eigenbasis for the Laplacian of the cubical lattice graph evaluated in a finite field. The proposed transform, identified as the three-dimensional steerable cosine number transform (3D-SCNT), is defined by rotating the 3D-CNT basis vectors using a finite-field rotation operator. We introduce a 3D medical image encryption scheme based on the 3D-SCNT, which uses the rotation angles as secret parameters. By means of computer experiments, we have verified that the scheme is resistant to the main cryptographic attacks.
High-resolution anatomic correlation of cyclic motor patterns in the human colon: Evidence of a rectosigmoid brake
Lin, Anthony Y
Du, Peng
Dinning, Philip G
Arkwright, John W
Kamp, Jozef P
Cheng, Leo K
Bissett, Ian P
O'Grady, Gregory
American Journal of Physiology-Gastrointestinal and Liver Physiology2017Journal Article, cited 12 times
Website
CT COLONOGRAPHY
Colonic motility
High-resolution manometry
Rectosigmoid brake
Adversarial-Learning-Based Taguchi Convolutional Fuzzy Neural Classifier for Images of Lung Cancer
Lin, Cheng-Jian
Lin, Xue-Qian
Jhang, Jyun-Yu
IEEE Access2024Journal Article, cited 0 times
SPIE-AAPM Lung CT Challenge
LIDC-IDRI
Deep learning technology has extensive application in the classification and recognition of medical images. However, several challenges persist in such application, such as the need for acquiring large-scale labeled data, configuring network parameters, and handling excessive network parameters. To address these challenges, in this study, we developed an adversarial-learning-based Taguchi convolutional fuzzy neural classifier (AL-TCFNC) for classifying malignant and benign lung tumors displayed in computed tomography images. In the framework of the developed AL-TCFNC, a fuzzy neural classifier replaces a conventional fully connected network, thereby reducing the number of network parameters and the training duration. To reduce experimental cost and training time, the Taguchi method was used. This method helps to identify the optimal combination of model parameters through a small number of experiments. The transfer learning of models across databases often results in subpar performance because of the paucity of labeled samples. To resolve this problem, we used a combination of maximum mean discrepancy and cross-entropy for adversarial learning with the proposed model. Two data sets, namely the SPIE–AAPM Lung CT Challenge data set and LIDC–IDRI Lung Imaging Research data set, were used to validate the AL-TCFNC model. When the AL-TCFNC model was used for transfer learning, it exhibited an accuracy rate of 89.55% and outperformed other deep learning models in terms of classification performance.
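The maximum mean discrepancy term named in the abstract has a standard RBF-kernel estimator; a sketch follows, with a single illustrative bandwidth (multi-kernel variants are common in practice).

```python
import torch

def mmd_rbf(x, y, sigma=1.0):
    """Biased MMD^2 estimate between source features x: (n, d) and target y: (m, d)."""
    def k(a, b):
        d2 = torch.cdist(a, b).pow(2)            # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma**2))   # RBF kernel
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()
```

Minimizing this quantity over source and target feature batches pulls the two domains' feature distributions together, which is the alignment effect the abstract combines with cross-entropy.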
A CT-based deep learning model for predicting the nuclear grade of clear cell renal cell carcinoma
Lin, Fan
Ma, Changyi
Xu, Jinpeng
Lei, Yi
Li, Qing
Lan, Yong
Sun, Ming
Long, Wansheng
Cui, Enming
European Journal of Radiology2020Journal Article, cited 0 times
TCGA-KIRC
PURPOSE: To investigate the effects of different methodologies on the performance of a deep learning (DL) model for differentiating high- from low-grade clear cell renal cell carcinoma (ccRCC).
METHOD: Patients with pathologically proven ccRCC diagnosed between October 2009 and March 2019 were assigned to the training or internal test dataset, and the external test dataset was acquired from The Cancer Genome Atlas-Kidney Renal Clear Cell Carcinoma (TCGA-KIRC) database. The effects of different methodologies on the performance of the DL model, including image cropping (IC), setting the attention level, selecting model complexity (MC), and applying transfer learning (TL), were compared using repeated measures analysis of variance (ANOVA) and receiver operating characteristic (ROC) curve analysis. The performance of the DL model was evaluated through accuracy and ROC analyses with internal and external tests.
RESULTS: In this retrospective study, patients (n = 390) from one hospital were randomly assigned to the training (n = 370) or internal test dataset (n = 20), and another 20 patients from the TCGA-KIRC database were assigned to the external test dataset. IC, the attention level, MC, and TL had major effects on the performance of the DL model. The DL model based on cropping an image to less than three times the tumor diameter, without attention, with a simple model, and with the application of TL achieved the best performance in internal (ACC = 73.7 ± 11.6%, AUC = 0.82 ± 0.11) and external (ACC = 77.9 ± 6.2%, AUC = 0.81 ± 0.04) tests.
CONCLUSIONS: A CT-based DL model can be conveniently applied for grading ccRCC with simple IC in routine clinical practice.
Dual-Domain Reconstruction Network Incorporating Multi-Level Wavelet Transform and Recurrent Convolution for Sparse View Computed Tomography Imaging
Lin, Juncheng
Li, Jialin
Dou, Jiazhen
Zhong, Liyun
Di, Jianglei
Qin, Yuwen
Tomography2024Journal Article, cited 0 times
LDCT-and-Projection-data
CT COLONOGRAPHY
Machine Learning
Sparse view computed tomography (SVCT) aims to reduce the number of X-ray projection views required for reconstructing the cross-sectional image of an object. While SVCT significantly reduces X-ray radiation dose and speeds up scanning, insufficient projection data give rise to issues such as severe streak artifacts and blurring in reconstructed images, thereby impacting the diagnostic accuracy of CT detection. To address this challenge, a dual-domain reconstruction network incorporating multi-level wavelet transform and recurrent convolution is proposed in this paper. The dual-domain network is composed of a sinogram domain network (SDN) and an image domain network (IDN). Multi-level wavelet transform is employed in both IDN and SDN to decompose sinograms and CT images into distinct frequency components, which are then processed through separate network branches to recover detailed information within their respective frequency bands. To capture global textures, artifacts, and shallow features in sinograms and CT images, a recurrent convolution unit (RCU) based on convolutional long short-term memory (Conv-LSTM) is designed, which can model their long-range dependencies through recurrent calculation. Additionally, a self-attention-based multi-level frequency feature normalization fusion (MFNF) block is proposed to assist in recovering high-frequency components by aggregating low-frequency components. Finally, an edge loss function based on the Laplacian of Gaussian (LoG) is designed as the regularization term for enhancing the recovery of high-frequency edge structures. The experimental results demonstrate the effectiveness of our approach in reducing artifacts and enhancing the reconstruction of intricate structural details across various sparse views and noise levels. Our method excels in both performance and robustness, as evidenced by its superior outcomes in numerous qualitative and quantitative assessments, surpassing contemporary state-of-the-art CNN- and Transformer-based reconstruction methods.
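The multi-level wavelet split that feeds the SDN/IDN branches can be illustrated with PyWavelets; the wavelet and decomposition level below are illustrative choices, not the paper's configuration.

```python
import numpy as np
import pywt

slice_img = np.random.rand(512, 512)             # stand-in for a CT slice or sinogram
coeffs = pywt.wavedec2(slice_img, wavelet="haar", level=2)
approx = coeffs[0]                               # low-frequency component
h2, v2, d2 = coeffs[1]                           # coarsest detail bands
recon = pywt.waverec2(coeffs, wavelet="haar")    # lossless reconstruction
assert np.allclose(slice_img, recon)
```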
Fully automated segmentation of brain tumor from multiparametric MRI using 3D context deep supervised U‐Net
Lin, Mingquan
Momin, Shadab
Lei, Yang
Wang, Hesheng
Curran, Walter J.
Liu, Tian
Yang, Xiaofeng
Medical Physics2021Journal Article, cited 0 times
BraTS-TCGA-GBM
PURPOSE: Owing to the histologic complexity of brain tumors, their diagnosis requires multiple modalities to obtain valuable structural information so that brain tumor subregions can be properly delineated. In the current clinical workflow, physicians typically perform slice-by-slice delineation of brain tumor subregions, a time-consuming process that is also susceptible to intra- and inter-rater variability, possibly leading to misclassification. To address this issue, this study aims to develop an automatic segmentation of brain tumors in MR images using deep learning.
METHOD: In this study, we develop a context deep-supervised U-Net to segment brain tumor subregions. A context block that aggregates multiscale contextual information for dense segmentation is proposed. This approach enlarges the effective receptive field of convolutional neural networks, which in turn improves the segmentation accuracy of brain tumor subregions. We performed fivefold cross-validation on the Brain Tumor Segmentation Challenge (BraTS) 2020 training dataset. The BraTS 2020 testing datasets were obtained via the BraTS online website as a hold-out test. For BraTS, the evaluation system divides the tumor into three regions: whole tumor (WT), tumor core (TC), and enhancing tumor (ET). The performance of our proposed method was compared against two state-of-the-art CNN networks in terms of segmentation accuracy via the Dice similarity coefficient (DSC) and Hausdorff distance (HD). The tumor volumes generated by our proposed method were compared with manually contoured volumes via Bland-Altman plots and Pearson analysis.
RESULTS: The proposed method achieved segmentation results with DSCs of 0.923 ± 0.047, 0.893 ± 0.176, and 0.846 ± 0.165 and HD95 values of 3.946 ± 7.041, 3.981 ± 6.670, and 10.128 ± 51.136 mm on WT, TC, and ET, respectively. Experimental results demonstrate that our method achieved comparable or significantly (p < 0.05) better segmentation accuracy than the two state-of-the-art CNN networks. Pearson correlation analysis showed a high positive correlation between the tumor volumes generated by the proposed method and manual contours.
CONCLUSION: Overall, the qualitative and quantitative results of this work demonstrate the potential of translating the proposed technique into clinical practice for segmenting brain tumor subregions, further facilitating the brain tumor radiotherapy workflow.
Integrative radiomics and transcriptomics analyses reveal subtype characterization of non-small cell lung cancer
Lin, P.
Lin, Y. Q.
Gao, R. Z.
Wan, W. J.
He, Y.
Yang, H.
Eur Radiol2023Journal Article, cited 0 times
Website
NSCLC-Radiomics-Genomics
NSCLC Radiogenomics
Heterogeneity
Non-small cell lung cancer
Radiomics
Transcriptomics
Radiomic features
Clustering
OBJECTIVES: To assess whether integrative radiomics and transcriptomics analyses could provide novel insights for the molecular annotation of radiomic features and effective risk stratification in non-small cell lung cancer (NSCLC). METHODS: A total of 627 NSCLC patients from three datasets were included. Radiomics features were extracted from segmented 3-dimensional tumour volumes and were z-score normalized for further analysis. At the transcriptomics level, 186 pathways and 28 types of immune cells were assessed using the Gene Set Variation Analysis (GSVA) algorithm. NSCLC patients were categorized into subgroups based on their radiomic features and pathway enrichment scores using consensus clustering. Subgroup-specific radiomics features were used to validate clustering performance and prognostic value. Kaplan-Meier survival analysis with the log-rank test and univariable and multivariable Cox analyses were conducted to explore survival differences among the subgroups. RESULTS: Three radiotranscriptomics subtypes (RTSs) were identified based on the radiomics and pathway enrichment profiles. The three RTSs were characterized by specific molecular hallmarks: RTS1 (proliferation subtype), RTS2 (metabolism subtype), and RTS3 (immune activation subtype). RTS3 showed increased infiltration of most immune cells. The RTS stratification strategy was validated in a validation cohort and showed significant prognostic value. Survival analysis demonstrated that the RTS strategy could stratify NSCLC patients according to prognosis (p = 0.009), and the RTS strategy remained an independent prognostic indicator after adjusting for other clinical parameters. CONCLUSIONS: This radiotranscriptomics study provides a stratification strategy for NSCLC that could inform the molecular annotation of radiomics features and prognostic prediction. KEY POINTS: * Radiotranscriptomics subtypes (RTSs) could be used to stratify molecularly heterogeneous patients. * RTSs showed relationships between molecular phenotypes and radiomics features. * The RTS algorithm could be used to identify patients with poor prognosis.
Radiomic profiling of clear cell renal cell carcinoma reveals subtypes with distinct prognoses and molecular pathways
Lin, P.
Lin, Y. Q.
Gao, R. Z.
Wen, R.
Qin, H.
He, Y.
Yang, H.
Transl Oncol2021Journal Article, cited 0 times
Website
TCGA-KIRC
Radiomics
KIDNEY
Clear cell renal cell carcinoma (ccRCC)
Random Forest
Classification
BACKGROUND: To identify radiomic subtypes of clear cell renal cell carcinoma (ccRCC) patients with distinct clinical significance and molecular characteristics reflective of the heterogeneity of ccRCC. METHODS: Quantitative radiomic features of ccRCC were extracted from preoperative CT images of 160 ccRCC patients. Unsupervised consensus cluster analysis was performed to identify robust radiomic subtypes based on these features. The Kaplan-Meier method and chi-square test were used to assess the different clinicopathological characteristics and gene mutations among the radiomic subtypes. Subtype-specific marker genes were identified, and gene set enrichment analyses were performed to reveal the specific molecular characteristics of each subtype. Moreover, a gene expression-based classifier of radiomic subtypes was developed using the random forest algorithm and tested in another independent cohort (n = 101). RESULTS: Radiomic profiling revealed three ccRCC subtypes with distinct clinicopathological features and prognoses. VHL, MUC16, FBN2, and FLG were found to have different mutation frequencies in these radiomic subtypes. In addition, transcriptome analysis revealed that the dysregulation of cell cycle-related pathways may be responsible for the distinct clinical significance of the obtained subtypes. The prognostic value of the radiomic subtypes was further validated in another independent cohort (log-rank P = 0.015). CONCLUSION: In the present multi-scale radiogenomic analysis of ccRCC, radiomics played a central role. Radiomic subtypes could help discern genomic alterations and non-invasively stratify ccRCC patients.
MRI-based radiogenomics analysis for predicting genetic alterations in oncogenic signalling pathways in invasive breast carcinoma
Lin, P
Liu, WK
Li, X
Wan, D
Qin, H
Li, Q
Chen, G
He, Y
Yang, H
Clinical Radiology2020Journal Article, cited 0 times
TCGA-BRCA
radiogenomics
Breast
Molecular hallmarks of breast multiparametric magnetic resonance imaging during neoadjuvant chemotherapy
Lin, P.
Wan, W. J.
Kang, T.
Qin, L. F.
Meng, Q. X.
Wu, X. X.
Qin, H. Y.
Lin, Y. Q.
He, Y.
Yang, H.
Radiol Med2023Journal Article, cited 0 times
Website
ACRIN 6698
ACRIN 6698/I-SPY2 Breast DWI
BMMR2 Challenge
TCGA-BRCA
Radiomics
Radiogenomics
Multiparametric Magnetic Resonance Imaging (mpMRI)
Neoadjuvant Therapy/methods
Magnetic Resonance Imaging/methods
Prognosis
Retrospective Studies
Contrast Media
Treatment Outcome
Breast cancer
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI)
Neoadjuvant chemotherapy
PURPOSE: To identify the molecular basis of four parameters obtained from dynamic contrast-enhanced magnetic resonance imaging: functional tumor volume (FTV), longest diameter (LD), sphericity, and contralateral background parenchymal enhancement (BPE). MATERIALS AND METHODS: Pretreatment gene expression profiles and MRI features from different treatment time points were integrated for Spearman correlation analysis. MRI feature-related genes were submitted to hypergeometric distribution-based gene functional enrichment analysis to identify related Kyoto Encyclopedia of Genes and Genomes annotations. Gene set variation analysis was utilized to assess the infiltration of distinct immune cells, which was used to determine relationships between immune phenotypes and medical imaging phenotypes. The clinical significance of MRI and relevant molecular features was analyzed to identify their performance in predicting response to neoadjuvant chemotherapy (NAC) and their prognostic impact. RESULTS: Three hundred and eighty-three patients were included in the integrative analysis of MRI features and molecular information. FTV, LD, and sphericity measurements were most significantly positively correlated with proliferation-, signal transmission-, and immune-related pathways, respectively. However, BPE did not show marked correlations with gene expression alteration status. FTV, LD, and sphericity all showed significant positive or negative correlations with some immune-related processes and immune cell infiltration levels. The decrease in sphericity at 3 cycles after treatment initiation was also markedly negatively related to baseline sphericity measurements and immune signatures, and this decrease could act as a predictor of response to NAC. CONCLUSION: Different MRI features capture different tumor molecular characteristics that could explain their corresponding clinical significance.
A radiogenomics signature for predicting the clinical outcome of bladder urothelial carcinoma
Lin, Peng
Wen, Dong-Yue
Chen, Ling
Li, Xin
Li, Sheng-Hua
Yan, Hai-Biao
He, Rong-Quan
Chen, Gang
He, Yun
Yang, Hong
Eur Radiol2019Journal Article, cited 0 times
TCGA-BLCA
Bladder
Radiomics
Radiogenomics
Computed Tomography (CT)
OBJECTIVES: To determine the integrative value of contrast-enhanced computed tomography (CECT), transcriptomics data and clinicopathological data for predicting the survival of bladder urothelial carcinoma (BLCA) patients. METHODS: RNA sequencing data, radiomics features and clinical parameters of 62 BLCA patients were included in the study. Prognostic signatures based on radiomics features and gene expression profiles were constructed using least absolute shrinkage and selection operator (LASSO) Cox analysis. A multi-omics nomogram was developed by integrating radiomics, transcriptomics and clinicopathological data. More importantly, radiomics risk score-related genes were identified via weighted correlation network analysis and submitted to functional enrichment analysis. RESULTS: The radiomics and transcriptomics signatures significantly stratified BLCA patients into high- and low-risk groups in terms of the progression-free interval (PFI). The two risk models remained independent prognostic factors in multivariate analyses after adjusting for clinical parameters. A nomogram was developed and showed excellent predictive ability for the PFI in BLCA patients. Functional enrichment analysis suggested that the radiomics signature we developed could reflect the angiogenesis status of BLCA patients. CONCLUSIONS: The integrative nomogram incorporating CECT radiomics, transcriptomics and clinical features improved PFI prediction in BLCA patients and is a feasible and practical reference for oncological precision medicine. KEY POINTS: * Our radiomics and transcriptomics models are proved robust for survival prediction in bladder urothelial carcinoma patients. * A multi-omics nomogram model integrating radiomics, transcriptomics and clinical features for prediction of the progression-free interval in bladder urothelial carcinoma is established. * Molecular functional enrichment analysis is used to reveal the potential molecular function of the radiomics signature.
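A sketch of the LASSO Cox step used to build such prognostic signatures, here with lifelines; `df` is a hypothetical DataFrame of z-scored features plus PFI duration ("T") and event ("E") columns, and the penalty strength is illustrative.

```python
from lifelines import CoxPHFitter

cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)   # l1_ratio=1.0 gives a pure LASSO penalty
cph.fit(df, duration_col="T", event_col="E")     # sparse coefficients define the signature

risk_score = cph.predict_partial_hazard(df)      # per-patient risk score
high_risk = risk_score > risk_score.median()     # median split into risk groups
```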
Identification of a 6-RBP gene signature for a comprehensive analysis of glioma and ischemic stroke: Cognitive impairment and aging-related hypoxic stress
Lin, Weiwei
Wang, Qiangwei
Chen, Yisheng
Wang, Ning
Ni, Qingbin
Qi, Chunhua
Wang, Qian
Zhu, Yongjian
2022Journal Article, cited 0 times
DICOM-Glioma-SEG
There is mounting evidence that ischemic cerebral infarction contributes to vascular cognitive impairment and dementia in the elderly. Ischemic stroke and glioma are two major fatal diseases worldwide that promote each other's development through common underlying mechanisms. As post-transcriptional regulatory proteins, RNA-binding proteins (RBPs) are important in the development of tumors and ischemic stroke (IS). The purpose of this study was to search for a group of RBP gene markers related to the prognosis of glioma and the occurrence of IS, and to elucidate their underlying mechanisms in both diseases. First, a 6-RBP (POLR2F, DYNC1H1, SMAD9, TRIM21, BRCA1, and ERI1) gene signature (RBPS) showing independent overall survival prognostic prediction was identified using transcriptome data from the TCGA-glioma cohort (n = 677) and independently verified in the CGGA-glioma cohort (n = 970). A nomogram including RBPS, 1p19q codeletion, radiotherapy, chemotherapy, grade, and age was established to predict the overall survival of patients with glioma, convenient for further clinical translation. In addition, an automatic machine learning classification model based on radiomics features from MRI was developed to stratify according to RBPS risk. The RBPS was associated with immunosuppression, energy metabolism, and tumor growth in gliomas. Subsequently, the six RBP genes from blood samples showed good classification performance for IS diagnosis (AUC = 0.95, 95% CI: 0.902-0.997). The RBPS was associated with hypoxic responses, angiogenesis, and increased coagulation in IS. Upregulation of SMAD9 was associated with dementia, while downregulation of POLR2F was associated with aging-related hypoxic stress. Irf5/Trim21 in microglia and Taf7/Trim21 in pericytes from the mouse cerebral cortex were identified as RBPS-related molecules in each cell type under hypoxic conditions. The RBPS is expected to serve as a novel biomarker for studying the common mechanisms underlying glioma and IS.
A Two-Phase Optimal Mass Transportation Technique for 3D Brain Tumor Detection and Segmentation
Lin, Wen-Wei
Li, Tiexiang
Huang, Tsung-Ming
Lin, Jia-Wei
Yueh, Mei-Heng
Yau, Shing-Tung
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
The goal of optimal mass transportation (OMT) is to transform any irregular 3D object (e.g., a brain image) into a cube without creating significant distortion; we use it to preprocess irregular brain samples into the tensor input format of the U-net algorithm. The BraTS 2021 database provides a new and challenging platform for AI-based detection and segmentation of brain tumors, namely the whole tumor (WT), the tumor core (TC), and the enhancing tumor (ET). We propose a two-phase OMT algorithm with density estimates for 3D brain tumor segmentation. In the first phase, we construct a volume-mass-preserving OMT, with density determined by the FLAIR grayscale of the scanned modality, for the U-net to predict the possible tumor regions. In the second phase, we increase the density in the region of interest and construct a new OMT that enlarges the target tumor region for the U-net, giving it a better chance to learn the correct segmentation labels. This preprocessing OMT technique is a new and promising approach for CNN training and validation.
Free-breathing and instantaneous abdominal T(2) mapping via single-shot multiple overlapping-echo acquisition and deep learning reconstruction
Lin, X.
Dai, L.
Yang, Q.
Yang, Q.
He, H.
Ma, L.
Liu, J.
Cheng, J.
Cai, C.
Bao, J.
Chen, Z.
Cai, S.
Zhong, J.
Eur Radiol2023Journal Article, cited 0 times
TCGA-LIHC
Abdomen
Deep learning
Magnetic Resonance Imaging (MRI)
LIVER
KIDNEY
GALLBLADDER
SPLEEN
Segmentation
OBJECTIVES: To develop a real-time abdominal T(2) mapping method that requires neither breath-holding nor respiratory gating. METHODS: The single-shot multiple overlapping-echo detachment (MOLED) pulse sequence was employed to achieve free-breathing T(2) mapping of the abdomen. Deep learning was used to untangle the non-linear relationship between the MOLED signal and the T(2) map. A synthetic data generation flow based on Bloch simulation, modality synthesis, and randomization was proposed to overcome the inadequacy of real-world training sets. RESULTS: Results from simulation and in vivo experiments demonstrated that our method delivers high-quality T(2) mapping. The average NMSE and R(2) values of linear regression in the digital phantom experiments were 0.0178 and 0.9751, and Pearson's correlation coefficient between our predicted T(2) and the reference T(2) in the phantom experiments was 0.9996. In patient measurements, real-time capture of T(2) value changes in various abdominal organs before and after contrast agent injection was realized. A total of 33 focal liver lesions were detected in the group, with T(2) values (mean ± SD) of 141.1 +/- 50.0 ms for benign and 63.3 +/- 16.0 ms for malignant lesions. The coefficients of variation in a test-retest experiment were 2.9%, 1.2%, 0.9%, 3.1%, and 1.8% for the liver, kidney, gallbladder, spleen, and skeletal muscle, respectively. CONCLUSIONS: Free-breathing abdominal T(2) mapping is achieved in about 100 ms on a clinical MRI scanner, paving the way for real-time dynamic T(2) mapping in the abdomen. KEY POINTS: * MOLED achieves free-breathing abdominal T(2) mapping in about 100 ms, enabling real-time capture of T(2) value changes due to contrast agent injection in abdominal organs. * A synthetic data generation flow mitigates the lack of sizable abdominal training datasets.
SAMCT: Segment Any CT Allowing Labor-Free Task-Indicator Prompts
Lin, X.
Xiang, Y.
Wang, Z.
Cheng, K. T.
Yan, Z.
Yu, L.
IEEE Trans Med Imaging2024Journal Article, cited 0 times
Website
Pancreas-CT
Segment anything model (SAM), a foundation model with superior versatility and generalization across diverse segmentation tasks, has attracted widespread attention in medical imaging. However, it has been shown that SAM encounters severe performance degradation due to the lack of medical knowledge in its training and in local feature encoding. Though several SAM-based models have been proposed for tuning SAM in medical imaging, they still suffer from insufficient feature extraction and rely heavily on high-quality prompts. In this paper, we propose a powerful foundation model, SAMCT, that allows labor-free prompts, and train it on a collected large CT dataset consisting of 1.1M CT images and 5M masks from public datasets. Specifically, based on SAM, SAMCT is further equipped with a U-shaped CNN image encoder, a cross-branch interaction module, and a task-indicator prompt encoder. The U-shaped CNN image encoder works in parallel with the ViT image encoder in SAM to supplement local features. Cross-branch interaction enhances the feature expression capability of the CNN image encoder and the ViT image encoder by exchanging global perception and local features between the two. The task-indicator prompt encoder is a plug-and-play component that effortlessly encodes task-related indicators into prompt embeddings. In this way, SAMCT can work automatically in addition to supporting the semi-automatic interactive strategy of SAM. Extensive experiments demonstrate the superiority of SAMCT against state-of-the-art task-specific and SAM-based medical foundation models on various tasks. The code, data, and model checkpoints are available at https://github.com/xianlin7/SAMCT.
Deep Learning in Prostate Cancer Diagnosis and Gleason Grading in Histopathology Images: An Extensive Study
Linkon, Ali Hasan Md
Labib, Mahir
Hasan, Tarik
Hossain, Mozammal
E-Jannat, Marium
Informatics in Medicine Unlocked2021Journal Article, cited 0 times
Website
QIN-PROSTATE-Repeatability
H&E-stained slides
PROSTATE
Deep Learning
Among American men, prostate cancer is the second-highest cause of cancer death. It is also the most common cancer in men worldwide, and the annual numbers are alarming. The most prognostic marker for prostate cancer is the Gleason grading system applied to histopathology images: pathologists determine the Gleason grade on Hematoxylin and Eosin (H&E)-stained tissue specimens based on tumor structural growth patterns in whole-slide images. Recent advances in Computer-Aided Detection (CAD) using deep learning have opened immense scope for automatic detection and recognition of prostate cancer at very high accuracy, as in other medical diagnoses and prognoses. Automated deep learning systems have delivered promising results for accurate grading of prostate cancer from histopathological images, and many studies have shown that deep learning strategies can achieve better outcomes than simpler systems on pathology samples. This article aims to provide insight into the gradual evolution of deep learning in detecting prostate cancer and Gleason grading. It also offers a comprehensive, synthesized overview of the current state and existing methodological approaches, along with unique insights into prostate cancer detection using deep learning. We describe research findings, current limitations, and future avenues for research. We have tried to make this paper applicable to deep learning communities and hope it will encourage new collaborations to create dedicated applications and improvements for prostate cancer detection and Gleason grading.
Context aware deep learning for brain tumor segmentation, subtype classification, and survival prediction using radiology images
Pei, Linmin
Vidyaratne, Lasitha
Rahman, Md Monibor
Iftekharuddin, Khan M
Scientific Reports (Nature Publisher Group)2020Journal Article, cited 0 times
Website
TCGA-LGG
BraTS-TCGA-GBM
BraTS-TCGA-LGG
machine learning
Segmentation
A brain tumor is an uncontrolled growth of cancerous cells in the brain. Accurate segmentation and classification of tumors are critical for subsequent prognosis and treatment planning. This work proposes context-aware deep learning for brain tumor segmentation, subtype classification, and overall survival prediction using structural multimodal magnetic resonance images (mMRI). We first propose a 3D context-aware deep learning method that considers uncertainty of tumor location in the radiology mMRI image sub-regions to obtain tumor segmentation. We then apply a regular 3D convolutional neural network (CNN) to the tumor segments to achieve tumor subtype classification. Finally, we perform survival prediction using a hybrid of deep learning and machine learning. To evaluate performance, we apply the proposed methods to the Multimodal Brain Tumor Segmentation Challenge 2019 (BraTS 2019) dataset for tumor segmentation and overall survival prediction, and to the dataset of the Computational Precision Medicine Radiology-Pathology (CPM-RadPath) Challenge on Brain Tumor Classification 2019 for tumor classification. We also perform an extensive performance evaluation based on popular evaluation metrics, such as the Dice similarity coefficient, Hausdorff distance at the 95th percentile (HD95), classification accuracy, and mean squared error. The results suggest that the proposed method offers robust tumor segmentation and survival prediction. Furthermore, the tumor classification results of this work ranked second place in the testing phase of the 2019 CPM-RadPath global challenge.
Automatic Labeling of Special Diagnostic Mammography Views from Images and DICOM Headers
Lituiev, D. S.
Trivedi, H.
Panahiazar, M.
Norgeot, B.
Seo, Y.
Franc, B.
Harnish, R.
Kawczynski, M.
Hadley, D.
J Digit Imaging2019Journal Article, cited 0 times
CBIS-DDSM
BREAST
Computer Aided Diagnosis (CADx)
Automation
Breast Neoplasms/*diagnostic imaging
Datasets as Topic
Female
Humans
*Machine Learning
Mammography/*classification/*methods
Radiology Information Systems
Sensitivity and Specificity
Convolutional Neural Network (CNN)
DICOM
Machine learning
Mammography
Applying state-of-the-art machine learning techniques to medical images requires a thorough selection and normalization of input data. One such step in digital mammography screening for breast cancer is the labeling and removal of special diagnostic views, in which diagnostic tools or magnification are applied to assist in the assessment of suspicious initial findings. Because a common task in medical informatics is the prediction of disease and its stage, these special diagnostic views, which are enriched only among diseased cases, will bias machine learning disease predictions. To automate this process, we develop a machine learning pipeline that utilizes both DICOM headers and images to predict such views automatically, allowing for their removal and the generation of unbiased datasets. We achieve an AUC of 99.72% in predicting special mammogram views when combining both types of models. Finally, we apply these models to clean up a dataset of about 772,000 images with an expected sensitivity of 99.0%. The pipeline presented in this paper can be applied to other datasets to obtain high-quality image sets suitable to train algorithms for disease detection.
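As a hedged sketch of the DICOM-header half of such a pipeline, the snippet below reads a few mammography header fields that plausibly flag special diagnostic views (magnification, spot-compression paddles) with pydicom; the chosen fields and the decision rule are illustrative assumptions, not the authors' feature set.

    import pydicom

    def header_features(path):
        ds = pydicom.dcmread(path, stop_before_pixels=True)  # headers only, fast
        return {
            "series_description": str(getattr(ds, "SeriesDescription", "")),
            "view_position": str(getattr(ds, "ViewPosition", "")),
            "paddle": str(getattr(ds, "PaddleDescription", "")),
            "magnification": float(
                getattr(ds, "EstimatedRadiographicMagnificationFactor", 1.0)),
        }

    feats = header_features("exam/image0001.dcm")
    # A magnification factor well above 1.0, or a spot-compression paddle,
    # suggests a special diagnostic view to exclude from screening datasets.
    print(feats)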
Brain Tumor Segmentation Network Using Attention-Based Fusion and Spatial Relationship Constraint
Liu, Chenyu
Ding, Wangbin
Li, Lei
Zhang, Zhen
Pei, Chenhao
Huang, Liqin
Zhuang, Xiahai
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Delineating the brain tumor from magnetic resonance (MR) images is critical for the treatment of gliomas. However, automatic delineation is challenging due to the complex appearance and ambiguous outlines of tumors. Considering that multi-modal MR images can reflect different tumor biological properties, we develop a novel multi-modal tumor segmentation network (MMTSN) to robustly segment brain tumors based on multi-modal MR images. The MMTSN is composed of three sub-branches and a main branch. Specifically, the sub-branches are used to capture different tumor features from multi-modal images, while in the main branch, we design a spatial-channel fusion block (SCFB) to effectively aggregate multi-modal features. Additionally, inspired by the fact that the spatial relationship between sub-regions of the tumor is relatively fixed, e.g., the enhancing tumor is always in the tumor core, we propose a spatial loss to constrain the relationship between different sub-regions of the tumor. We evaluate our method on the test set of the Multimodal Brain Tumor Segmentation Challenge 2020 (BraTS 2020). The method achieves Dice scores of 0.8764, 0.8243, and 0.7730 for the whole tumor, tumor core, and enhancing tumor, respectively.
Application of Chest CT Imaging Feature Model in Distinguishing Squamous Cell Carcinoma and Adenocarcinoma of the Lung
Multiview Self-Supervised Segmentation for OARs Delineation in Radiotherapy
Liu, C.
Zhang, X.
Si, W.
Ni, X.
Evid Based Complement Alternat Med2021Journal Article, cited 0 times
Website
Head-Neck Cetuximab
Segmentation
Machine Learning
Radiotherapy has become a common treatment option for head and neck (H&N) cancer, and organs at risk (OARs) need to be delineated to implement a highly conformal dose distribution. Manual drawing of OARs is time-consuming and inaccurate, so automatic delineation based on deep learning models has been proposed to accurately delineate the OARs. However, state-of-the-art performance usually requires a substantial amount of delineated data, and collecting pixel-level manual delineations is labor intensive and may not be necessary for representation learning. Encouraged by recent progress in self-supervised learning, this study proposes and evaluates a novel multiview contrastive representation learning approach to boost models from unlabelled data. The proposed learning architecture leverages three views of CTs (coronal, sagittal, and transverse planes) to collect positive and negative training samples. Specifically, a 3D CT is first projected into three 2D views (coronal, sagittal, and transverse planes), then a convolutional neural network takes the 3 views as inputs and outputs three individual representations in latent space, and finally, a contrastive loss is used to pull representations of different views of the same image closer ("positive pairs") and push representations of views from different images ("negative pairs") apart. To evaluate performance, we collected 220 CT images from H&N cancer patients. The experiment demonstrates that our method significantly improves quantitative performance over the state of the art (from 83% to 86% in absolute Dice scores). Thus, our method provides a powerful and principled means to deal with the label-scarce problem.
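The contrastive objective described (pull the three planar views of the same CT together, push views of different CTs apart) can be sketched as an InfoNCE/NT-Xent-style loss; this is an assumed formulation for illustration, not the paper's exact loss.

    import torch
    import torch.nn.functional as F

    def multiview_contrastive_loss(z, temperature=0.1):
        """z: (B, V, D) embeddings, B images x V views."""
        B, V, D = z.shape
        z = F.normalize(z.reshape(B * V, D), dim=1)
        sim = z @ z.t() / temperature                    # (BV, BV) similarities
        image_id = torch.arange(B).repeat_interleave(V)  # view -> source image
        pos = image_id.unsqueeze(0) == image_id.unsqueeze(1)
        pos.fill_diagonal_(False)                        # exclude self-pairs
        sim.fill_diagonal_(float("-inf"))
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
        return -(log_prob[pos]).mean()                   # pull positives together

    loss = multiview_contrastive_loss(torch.randn(8, 3, 128))
    print(loss.item())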
Robust phenotyping of highly multiplexed tissue imaging data using pixel-level clustering
Liu, Candace C.
Greenwald, Noah F.
Kong, Alex
McCaffrey, Erin F.
Leow, Ke Xuan
Mrdjen, Dunja
Cannon, Bryan J.
Rumberger, Josef Lorenz
Varra, Sricharan Reddy
Angelo, Michael
Nature Communications2023Journal Article, cited 0 times
CRC_FFPE-CODEX_CellNeighs
While technologies for multiplexed imaging have provided an unprecedented understanding of tissue composition in health and disease, interpreting this data remains a significant computational challenge. To understand the spatial organization of tissue and how it relates to disease processes, imaging studies typically focus on cell-level phenotypes. However, images can capture biologically important objects that are outside of cells, such as the extracellular matrix. Here, we describe a pipeline, Pixie, that achieves robust and quantitative annotation of pixel-level features using unsupervised clustering and show its application across a variety of biological contexts and multiplexed imaging platforms. Furthermore, current cell phenotyping strategies that rely on unsupervised clustering can be labor intensive and require large amounts of manual cluster adjustments. We demonstrate how pixel clusters that lie within cells can be used to improve cell annotations. We comprehensively evaluate pre-processing steps and parameter choices to optimize clustering performance and quantify the reproducibility of our method. Importantly, Pixie is open source and easily customizable through a user-friendly interface.
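As a minimal illustration of pixel-level phenotyping in this spirit, the sketch below clusters per-pixel marker-expression vectors with k-means and maps the labels back onto the image; marker count, normalization, and cluster number are toy assumptions, and Pixie's actual workflow is more elaborate.

    import numpy as np
    from sklearn.cluster import MiniBatchKMeans

    img = np.random.rand(256, 256, 20)            # H x W x markers (toy data)
    pixels = img.reshape(-1, img.shape[-1])
    pixels = pixels / (pixels.sum(axis=1, keepdims=True) + 1e-8)  # per-pixel norm

    km = MiniBatchKMeans(n_clusters=15, random_state=0).fit(pixels)
    pixel_phenotypes = km.labels_.reshape(img.shape[:2])  # H x W cluster map
    print(np.bincount(km.labels_))                # pixels per phenotype cluster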
Radiogenomic associations in clear cell renal cell carcinoma: an exploratory study
Liu, D.
Dani, K.
Reddy, S. S.
Lei, X.
Demirjian, N.
Hwang, D.
Varghese, B. A.
Rhie, S. K.
Yap, F. Y.
Quinn, D. I.
Siddiqi, I.
Aron, M.
Vaishampayan, U.
Zahoor, H.
Cen, S. Y.
Gill, I. S.
Duddalwar, V.
Oncology2023Journal Article, cited 0 times
Website
TCGA-KIRC
radiomics
radiogenomics
Machine learning
clear cell renal cell carcinoma
MATLAB
Random Forest
AdaBoost
Elastic Net
OBJECTIVES: This study investigates how quantitative texture analysis can be used to non-invasively identify novel radiogenomic correlations with Clear Cell Renal Cell Carcinoma (ccRCC) biomarkers. METHODS: The Cancer Genome Atlas-Kidney Renal Clear Cell Carcinoma (TCGA-KIRC) open-source database was used to identify 190 sets of patient genomic data that had corresponding multiphase contrast-enhanced CT images in The Cancer Imaging Archive (TCIA-KIRC). 2824 radiomic features spanning fifteen texture families were extracted from CT images using a custom-built MATLAB software package. Robust radiomic features with strong inter-scanner reproducibility were selected. Random Forest (RF), AdaBoost, and Elastic Net machine learning (ML) algorithms evaluated the ability of the selected radiomic features to predict the presence of 12 clinically relevant molecular biomarkers identified from literature. ML analysis was repeated with cases stratified by stage (I/II vs. III/IV) and grade (1/2 vs. 3/4). 10-fold cross validation was used to evaluate model performance. RESULTS: Before stratification by tumor grade and stage, radiomics predicted the presence of several biomarkers with weak discrimination (AUC 0.60-0.68). Once stratified, radiomics predicted KDM5C, SETD2, PBRM1, and mTOR mutation status with acceptable to excellent predictive discrimination (AUC ranges from 0.70 to 0.86). CONCLUSIONS: Radiomic texture analysis can potentially identify a variety of clinically relevant biomarkers in patients with ccRCC and may have a prognostic implication.
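A hedged sketch of the modeling step: 10-fold cross-validated prediction of a binary mutation label from a radiomic feature table with a random forest (the study additionally ran AdaBoost and Elastic Net and stratified cases by stage and grade). The data here are simulated.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(190, 2824))   # 190 patients x 2824 radiomic features
    y = rng.integers(0, 2, size=190)   # e.g., PBRM1 mutation status (toy)

    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    auc = cross_val_score(
        RandomForestClassifier(n_estimators=500, random_state=0),
        X, y, cv=cv, scoring="roc_auc")
    print(f"AUC = {auc.mean():.2f} +/- {auc.std():.2f}")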
SGEResU-Net for brain tumor segmentation
Liu, D.
Sheng, N.
He, T.
Wang, W.
Zhang, J.
Zhang, J.
Math Biosci Eng2022Journal Article, cited 0 times
Website
BraTS-TCGA-GBM
BraTS-TCGA-LGG
BRAIN
U-Net
Segmentation
Image denoising
The precise segmentation of tumor regions plays a pivotal role in the diagnosis and treatment of brain tumors. However, due to the variable location, size, and shape of brain tumors, automatic segmentation of brain tumors is a relatively challenging application. Recently, U-Net related methods, which largely improve the segmentation accuracy of brain tumors, have become the mainstream for this task. Following the merits of the 3D U-Net architecture, this work constructs a novel 3D U-Net model called SGEResU-Net to segment brain tumors. SGEResU-Net simultaneously embeds residual blocks and spatial group-wise enhance (SGE) attention blocks into a single 3D U-Net architecture, in which SGE attention blocks are employed to enhance the feature learning of semantic regions and reduce possible noise and interference with almost no extra parameters. In addition, a self-ensemble module is utilized to improve the segmentation accuracy of brain tumors. Evaluation experiments on the Brain Tumor Segmentation (BraTS) Challenge 2020 and 2021 benchmarks demonstrate the effectiveness of the proposed SGEResU-Net for this medical application. Moreover, it achieves DSC values of 83.31%, 91.64%, and 86.85%, as well as 95% Hausdorff distances of 19.278, 5.945, and 7.567, for the enhancing tumor, whole tumor, and tumor core, respectively, on the BraTS 2021 dataset.
TrEnD: A transformer‐based encoder‐decoder model with adaptive patch embedding for mass segmentation in mammograms
Liu, Dongdong
Wu, Bo
Li, Changbo
Sun, Zheng
Zhang, Nan
Medical Physics2023Journal Article, cited 0 times
CBIS-DDSM
BACKGROUND: Breast cancer is one of the most prevalent malignancies diagnosed in women. Mammogram inspection in the search and delineation of breast tumors is an essential prerequisite for a reliable diagnosis. However, analyzing mammograms by radiologists is time-consuming and prone to errors. Therefore, the development of computer-aided diagnostic (CAD) systems to automate the mass segmentation procedure is greatly expected.
PURPOSE: Accurate breast mass segmentation in mammograms remains challenging in CAD systems due to the low contrast, various shapes, and fuzzy boundaries of masses. In this paper, we propose a fully automatic and effective mass segmentation model based on deep learning for improving segmentation performance.
METHODS: We propose an effective transformer-based encoder-decoder model (TrEnD). Firstly, we introduce a lightweight method for adaptive patch embedding (APE) of the transformer, which utilizes superpixels to adaptively adjust the size and position of each patch. Secondly, we introduce a hierarchical transformer-encoder and attention-gated-decoder structure, which is beneficial for progressively suppressing interference feature activations in irrelevant background areas. Thirdly, a dual-branch design is employed to extract and fuse globally coarse and locally fine features in parallel, which could capture the global contextual information and ensure the relevance and integrity of local information. The model is evaluated on two public datasets CBIS-DDSM and INbreast. To further demonstrate the robustness of TrEnD, different cropping strategies are applied to these datasets, termed tight, loose, maximal, and mix-frame. Finally, ablation analysis is performed to assess the individual contribution of each module to the model performance.
RESULTS: The proposed segmentation model provides a high Dice coefficient and Intersection over Union (IoU) of 92.20% and 85.81% on the mix-frame CBIS-DDSM, while 91.83% and 85.29% for the mix-frame INbreast, respectively. The segmentation performance outperforms the current state-of-the-art approaches. By adding the APE and attention-gated module, the Dice and IoU have improved by 6.54% and 10.07%.
CONCLUSION: According to extensive qualitative and quantitative assessments, the proposed network is effective for automatic breast mass segmentation, and has adequate potential to offer technical assistance for subsequent clinical diagnoses.
Normalized Euclidean Super-Pixels for Medical Image Segmentation
We propose a super-pixel segmentation algorithm based on normalized Euclidean distance for handling the uncertainty and complexity in medical images. Benefiting from its statistical characteristics, compactness within super-pixels is described by normalized Euclidean distance. Our algorithm eliminates the balance factor of the Simple Linear Iterative Clustering framework. In this way, our algorithm responds properly to lesion tissues, such as tiny lung nodules, which differ only slightly in luminance from their neighbors. The effectiveness of the proposed algorithm is verified on The Cancer Imaging Archive (TCIA) database. Compared with Simple Linear Iterative Clustering (SLIC) and Linear Spectral Clustering (LSC), the experimental results show that the proposed algorithm achieves competitive performance against state-of-the-art super-pixel segmentation.
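For context, the SLIC baseline compared against is available in scikit-image; its compactness argument is the balance factor that the proposed normalized-Euclidean formulation removes. The snippet below shows only the baseline, on a toy image rather than TCIA CT data.

    import numpy as np
    from skimage.color import gray2rgb
    from skimage.data import camera
    from skimage.segmentation import slic

    img = gray2rgb(camera())                  # grayscale test image -> RGB
    labels = slic(img, n_segments=400, compactness=10)  # compactness = balance factor
    print(len(np.unique(labels)), "superpixels")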
The Current Role of Image Compression Standards in Medical Imaging
Liu, Feng
Hernandez-Cabronero, Miguel
Sanchez, Victor
Marcellin, Michael W
Bilgin, Ali
Information2017Journal Article, cited 4 times
Website
LIDC-IDRI
TCGA-BRCA
TCGA-GBM
CT-COLONOGRAPHY
image compression
Accelerated brain tumor dynamic contrast-enhanced MRI using Adaptive Pharmaco-Kinetic Model Constrained method
Liu, Fan
Li, Dongxiao
Jin, Xinyu
Qiu, Wenyuan
International Journal of Imaging Systems and Technology2021Journal Article, cited 0 times
Website
RIDER Neuro MRI
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI)
In brain tumor imaging, dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) requires spatiotemporally resolved, high-quality reconstruction for quantitative analysis of physiological characteristics of brain tissue. By exploiting sparsity priors, compressed sensing methods can achieve high spatiotemporal DCE-MRI image reconstruction from undersampled k-space data. Recently, as prior information about the contrast agent (CA) concentration dynamics, Pharmacokinetic (PK) models have been explored for undersampled DCE-MRI reconstruction. This paper presents a novel dictionary learning-based reconstruction method with Adaptive Pharmaco-Kinetic Model Constraints (APKMC). In APKMC, prior knowledge about CA dynamics is incorporated into a novel dictionary, which consists of PK model-based atoms and adaptive atoms. The PK atoms are constructed based on the Patlak model and the K-SVD dimension reduction algorithm, and the adaptive ones are used to resolve PK model inconsistencies. To solve APKMC, an optimization algorithm based on variable splitting and alternating iterative optimization is presented. The proposed method has been validated on three brain tumor DCE-MRI datasets by comparison with two state-of-the-art methods. As demonstrated by the quantitative and qualitative analysis of results, APKMC achieved substantially better quality in the reconstruction of brain DCE-MRI images, as well as in the reconstruction of PK model parameter maps.
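For reference, the standard two-parameter Patlak model on which the PK dictionary atoms are built relates the tissue contrast agent concentration C_t(t) to the plasma input function C_p(t); the abstract does not specify the discretization used for the atoms.

    C_t(t) = K^{\mathrm{trans}} \int_0^{t} C_p(\tau)\,\mathrm{d}\tau + v_p\, C_p(t)

Here K^{trans} is the volume transfer constant and v_p the fractional plasma volume.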
Machine Learning Models on Prognostic Outcome Prediction for Cancer Images with Multiple Modalities
Machine learning algorithms have been applied to predict different prognostic outcomes for many different diseases by directly using medical images. However, the higher resolution of various medical imaging modalities and new imaging feature extraction frameworks bring new challenges for predicting prognostic outcomes. Compared to traditional radiology practice, which is based only on visual interpretation and simple quantitative measurements, medical imaging features can dig deeper within medical images and potentially provide further objective support for clinical decisions. In this dissertation, we cover three projects that apply or design machine learning models for predicting prognostic outcomes using various types of medical images.
DL-MRI: A Unified Framework of Deep Learning-Based MRI Super Resolution
Liu, Huanyu
Liu, Jiaqi
Li, Junbao
Pan, Jeng-Shyang
Yu, Xiaqiong
Lu, Hao Chun
Journal of Healthcare Engineering2021Journal Article, cited 0 times
Website
Algorithm Development
BREAST
HEAD
BLADDER
Deep Learning
Magnetic resonance imaging (MRI) is widely used in the detection and diagnosis of diseases. High-resolution MR images help doctors to locate lesions and diagnose diseases. However, the acquisition of high-resolution MR images requires high magnetic field intensity and long scanning time, which brings discomfort to patients and easily introduces motion artifacts, resulting in image quality degradation. The resolution achievable by hardware imaging has therefore reached its limit. Given this situation, a unified framework based on deep learning super resolution is proposed to transfer state-of-the-art deep learning methods from natural images to MRI super resolution. Compared with traditional image super-resolution methods, deep learning super-resolution methods have stronger feature extraction and characterization ability, can learn prior knowledge from a large number of sample data, and have a more stable and excellent image reconstruction effect. We propose a unified framework of deep learning-based MRI super resolution, which incorporates five current deep learning methods with the best super-resolution performance. In addition, a high-low resolution MR image dataset at scales of ×2, ×3, and ×4 was constructed, covering four anatomical regions: skull, knee, breast, and head and neck. Experimental results show that the proposed unified framework of deep learning super resolution has a better reconstruction effect on the data than traditional methods and provides a standard dataset and experimental benchmark for the application of deep learning super resolution to MR images.
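As one example of the kind of method such a unified framework wraps, here is a minimal SRCNN-style three-layer network in PyTorch; the layer sizes follow the original SRCNN design, but this sketch is illustrative, not the framework's own code.

    import torch
    import torch.nn as nn

    class SRCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 64, kernel_size=9, padding=4),  # patch extraction
                nn.ReLU(inplace=True),
                nn.Conv2d(64, 32, kernel_size=1),            # nonlinear mapping
                nn.ReLU(inplace=True),
                nn.Conv2d(32, 1, kernel_size=5, padding=2),  # reconstruction
            )

        def forward(self, x):  # x: bicubic-upsampled low-res MR slice (B,1,H,W)
            return self.net(x)

    sr = SRCNN()(torch.randn(1, 1, 128, 128))
    print(sr.shape)  # same spatial size as the upsampled input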
Superpixel Region Merging Based on Deep Network for Medical Image Segmentation
Liu, Hui
Wang, Haiou
Wu, Yan
Xing, Lei
2020Journal Article, cited 0 times
RIDER Lung CT
Automatic and accurate semantic segmentation of pathological structures in medical images is challenging because of noise, deformable pathological shapes, and low contrast between soft tissues. Classical superpixel-based classification algorithms suffer from edge leakage due to the complexity and heterogeneity inherent in medical images. Therefore, we propose a deep U-Net with superpixel region merging incorporated for edge enhancement to facilitate and optimize segmentation. Our approach combines three innovations: (1) unlike conventional deep learning-based image segmentation, the segmentation evolves from superpixel region merging via U-Net training, gaining rich semantic information in addition to gray-level similarity; (2) a bilateral filtering module is adopted at the beginning of the network to eliminate external noise and enhance soft tissue contrast at pathology edges; and (3) a normalization layer is inserted after the convolutional layer at each feature scale to prevent overfitting and increase sensitivity to model parameters. The model was validated on lung CT, brain MR, and coronary CT datasets, respectively. Different superpixel methods and cross validation show the effectiveness of this architecture. The hyperparameter settings were empirically explored to achieve a good trade-off between performance and efficiency, where a four-layer network achieves the best result in precision, recall, F-measure, and running speed. Our method was demonstrated to outperform state-of-the-art networks, including FCN-16s, SegNet, PSPNet, DeepLabv3, and traditional U-Net, both quantitatively and qualitatively. Source code for the complete method is available at https://github.com/Leahnawho/Superpixel-network.
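The bilateral-filter front end (innovation 2) can be illustrated in a few lines with OpenCV; the parameter values below are illustrative assumptions.

    import cv2
    import numpy as np

    ct_slice = (np.random.rand(256, 256) * 255).astype(np.uint8)  # toy CT slice
    # Arguments: neighborhood diameter, intensity-range sigma, spatial sigma.
    # A larger range sigma mixes more intensities; edges stay sharp either way.
    denoised = cv2.bilateralFilter(ct_slice, 9, 75, 75)
    print(denoised.shape, denoised.dtype)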
Deep learning infers clinically relevant protein levels and drug response in breast cancer from unannotated pathology images
Liu, H.
Xie, X.
Wang, B.
NPJ Breast Cancer2024Journal Article, cited 0 times
Website
CPTAC-BRCA
HER2 tumor ROIs
TCGA-BRCA
BREAST
Breast cancer
Imaging features
Deep Learning
Weakly supervised learning
Whole Slide Imaging (WSI)
Biomarker
Pathomics
Algorithm Development
Computational pathology has been demonstrated to effectively uncover tumor-related genomic alterations and transcriptomic patterns. Although proteomics has shown great potential in the field of precision medicine, few studies have focused on the computational prediction of protein levels from pathology images. In this paper, we assume that deep learning-based pathological features imply the protein levels of tumor biomarkers that are indicative of prognosis and drug response. For this purpose, we propose wsi2rppa, a weakly supervised contrastive learning framework to infer the protein levels of tumor biomarkers from whole slide images (WSIs) in breast cancer. We first conducted contrastive learning-based pre-training on tessellated tiles to extract pathological features, which are then aggregated by attention pooling and adapted to downstream tasks. We conducted extensive evaluation experiments on the TCGA-BRCA cohort (1978 WSIs of 1093 patients with protein levels of 223 biomarkers) and the CPTAC-BRCA cohort (642 WSIs of 134 patients). The results showed that our method achieved state-of-the-art performance in tumor diagnostic tasks and also performed well in predicting clinically relevant protein levels and drug response. To show model interpretability, we spatially visualized the WSIs with tiles colored by their attention scores and found that the regions with high scores were highly consistent with the tumor and necrotic regions annotated by a pathologist with 10 years of experience. Moreover, spatial transcriptomic data further verified that the heatmap generated by attention scores agrees closely with the spatial expression landscape of two typical tumor biomarker genes. In predicting the response to trastuzumab treatment, our method achieved an AUC of 0.79, much higher than the 0.68 reported in a previous study. These findings show the remarkable potential of computational pathology for the prediction of clinically relevant protein levels, drug response, and clinical outcomes.
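The attention-pooling aggregation step can be sketched as a learned weighted sum over tile embeddings, in the spirit of attention-based multiple-instance learning; the dimensions below are illustrative assumptions, not the wsi2rppa implementation. The returned weights are the kind of per-tile scores a heatmap like the one described would be built from.

    import torch
    import torch.nn as nn

    class AttentionPool(nn.Module):
        def __init__(self, dim=512, hidden=128):
            super().__init__()
            self.score = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                       nn.Linear(hidden, 1))

        def forward(self, tiles):                 # tiles: (N_tiles, dim)
            a = torch.softmax(self.score(tiles), dim=0)   # (N_tiles, 1) weights
            slide = (a * tiles).sum(dim=0)                # (dim,) slide embedding
            return slide, a                               # weights -> heatmaps

    slide_emb, attn = AttentionPool()(torch.randn(3000, 512))
    print(slide_emb.shape, attn.shape)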
Multi-scale signaling and tumor evolution in high-grade gliomas
Liu, Jingxian
Cao, Song
Imbach, Kathleen J.
Gritsenko, Marina A.
Lih, Tung-Shing M.
Kyle, Jennifer E.
Yaron-Barir, Tomer M.
Binder, Zev A.
Li, Yize
Strunilin, Ilya
Wang, Yi-Ting
Tsai, Chia-Feng
Ma, Weiping
Chen, Lijun
Clark, Natalie M.
Shinkle, Andrew
Naser Al Deen, Nataly
Caravan, Wagma
Houston, Andrew
Simin, Faria Anjum
Wyczalkowski, Matthew A.
Wang, Liang-Bo
Storrs, Erik
Chen, Siqi
Illindala, Ritvik
Li, Yuping D.
Jayasinghe, Reyka G.
Rykunov, Dmitry
Cottingham, Sandra L.
Chu, Rosalie K.
Weitz, Karl K.
Moore, Ronald J.
Sagendorf, Tyler
Petyuk, Vladislav A.
Nestor, Michael
Bramer, Lisa M.
Stratton, Kelly G.
Schepmoes, Athena A.
Couvillion, Sneha P.
Eder, Josie
Kim, Young-Mo
Gao, Yuqian
Fillmore, Thomas L.
Zhao, Rui
Monroe, Matthew E.
Southard-Smith, Austin N.
Li, Yang E.
Jui-Hsien Lu, Rita
Johnson, Jared L.
Wiznerowicz, Maciej
Hostetter, Galen
Newton, Chelsea J.
Ketchum, Karen A.
Thangudu, Ratna R.
Barnholtz-Sloan, Jill S.
Wang, Pei
Fenyö, David
An, Eunkyung
Thiagarajan, Mathangi
Robles, Ana I.
Mani, D. R.
Smith, Richard D.
Porta-Pardo, Eduard
Cantley, Lewis C.
Iavarone, Antonio
Chen, Feng
Mesri, Mehdi
Nasrallah, MacLean P.
Zhang, Hui
Resnick, Adam C.
Chheda, Milan G.
Rodland, Karin D.
Liu, Tao
Ding, Li
Cancer Cell2024Journal Article, cited 0 times
Website
CPTAC-GBM
UPENN-GBM
glioblastoma
glycoproteomics
tumor recurrence
lipidome
metabolome
proteomics
single nuclei RNA-seq
single nuclei ATAC-seq
Although genomic anomalies in glioblastoma (GBM) have been well studied for over a decade, its 5-year survival rate remains lower than 5%. We seek to expand the molecular landscape of high-grade glioma, composed of IDH-wildtype GBM and IDH-mutant grade 4 astrocytoma, by integrating proteomic, metabolomic, lipidomic, and post-translational modifications (PTMs) with genomic and transcriptomic measurements to uncover multi-scale regulatory interactions governing tumor development and evolution. Applying 14 proteogenomic and metabolomic platforms to 228 tumors (212 GBM and 16 grade 4 IDH-mutant astrocytoma), including 28 at recurrence, plus 18 normal brain samples and 14 brain metastases as comparators, reveals heterogeneous upstream alterations converging on common downstream events at the proteomic and metabolomic levels and changes in protein-protein interactions and glycosylation site occupancy at recurrence. Recurrent genetic alterations and phosphorylation events on PTPN11 map to important regulatory domains in three dimensions, suggesting a central role for PTPN11 signaling across high-grade gliomas.
Multi-subtype classification model for non-small cell lung cancer based on radiomics: SLS model
Liu, J.
Cui, J.
Liu, F.
Yuan, Y.
Guo, F.
Zhang, G.
Med Phys2019Journal Article, cited 0 times
Website
NSCLC-Radiomics
Non Small Cell Lung Cancer (NSCLC)
Radiomics
Radiomic feature
PURPOSE: Histological subtypes of non-small cell lung cancer (NSCLC) are crucial for systematic treatment decisions. However, current studies using non-invasive radiomic methods to classify NSCLC histology subtypes have mainly focused on two main subtypes, squamous cell carcinoma (SCC) and adenocarcinoma (ADC), while multi-subtype classifications that include the other two subtypes of NSCLC, large cell carcinoma (LCC) and not otherwise specified (NOS), have been rare. The aim of this work is to establish a multi-subtype classification model for the four main subtypes of NSCLC and improve the classification performance and generalization ability compared with previous studies. METHODS: In this work, we extracted 1029 features from regions of interest in computed tomography (CT) images of 349 patients from two different datasets using radiomic methods. Based on a 'three-in-one' concept, we proposed a model called SLS, wrapping three algorithms (synthetic minority oversampling technique, l2,1-norm minimization, and support vector machines) into one hybrid technique to classify the four main subtypes of NSCLC: SCC, ADC, LCC, and NOS, covering the whole range of NSCLC. RESULTS: We analyzed the 247 features obtained by dimension reduction and found that the features extracted by three methods, first order statistics, gray level co-occurrence matrix, and gray level size zone matrix, were more conducive to the classification of NSCLC subtypes. The proposed SLS model achieved an average classification accuracy of 0.89 on the training set (95% confidence interval [CI]: 0.846 to 0.912) and a classification accuracy of 0.86 on the test set (95% CI: 0.779 to 0.941). CONCLUSIONS: The experimental results showed that the subtypes of NSCLC can be well classified by radiomic methods. Our SLS model can accurately classify and diagnose the four subtypes of NSCLC based on CT images, and thus it has the potential to be used in clinical practice to provide valuable information for lung cancer treatment and further promote personalized medicine.
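A hedged sketch of a "three-in-one" style pipeline: SMOTE oversampling, a dimension-reduction step (plain univariate selection stands in here for the paper's l2,1-norm minimization), and an SVM, chained with imbalanced-learn so oversampling happens only on the training folds. Data are simulated.

    import numpy as np
    from imblearn.over_sampling import SMOTE
    from imblearn.pipeline import Pipeline
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(349, 1029))     # 349 patients x 1029 radiomic features
    y = rng.choice(4, size=349, p=[0.45, 0.35, 0.12, 0.08])  # SCC/ADC/LCC/NOS (toy)

    pipe = Pipeline([
        ("smote", SMOTE(random_state=0)),          # balance the four subtypes
        ("select", SelectKBest(f_classif, k=247)), # keep 247 features, as reported
        ("svm", SVC(kernel="rbf", C=1.0)),
    ])
    print(cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean())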
AI-Driven Robust Kidney and Renal Mass Segmentation and Classification on 3D CT Images
Liu, Jingya
Yildirim, Onur
Akin, Oguz
Tian, Yingli
Bioengineering (Basel)2023Journal Article, cited 0 times
TCGA-KICH
TCGA-KIRP
TCGA-KIRC
Computed Tomography (CT)
KiTS19
KIDNEY
Segmentation
Classification
weakly supervised learning
Early intervention in kidney cancer helps to improve survival rates. Abdominal computed tomography (CT) is often used to diagnose renal masses. In clinical practice, the manual segmentation and quantification of organs and tumors are expensive and time-consuming. Artificial intelligence (AI) has shown a significant advantage in assisting cancer diagnosis. To reduce the workload of manual segmentation and avoid unnecessary biopsies or surgeries, in this paper, we propose a novel end-to-end AI-driven automatic kidney and renal mass diagnosis framework to identify the abnormal areas of the kidney and diagnose the histological subtypes of renal cell carcinoma (RCC). The proposed framework first segments the kidney and renal mass regions with a 3D deep learning architecture (Res-UNet), followed by a dual-path classification network utilizing local and global features for subtype prediction of the most common RCCs: clear cell, chromophobe, oncocytoma, papillary, and other RCC subtypes. To improve the robustness of the proposed framework on data collected from various institutions, a weakly supervised learning schema is proposed to bridge the domain gap between vendors using very few CT slice annotations. Our proposed diagnosis system can accurately segment the kidney and renal mass regions and predict tumor subtypes, outperforming existing methods on the KiTS19 dataset. Furthermore, cross-dataset validation results demonstrate the robustness of the weakly supervised learning schema on data collected from different institutions.
Material composition characterization from computed tomography via self-supervised learning promotes pulmonary disease diagnosis
Liu, Jiachen
Zhao, Wei
Liu, Yuxuan
Chen, Yang
Bai, Xiangzhi
Cell Reports Physical Science2024Journal Article, cited 0 times
Website
SPIE-AAPM Lung CT Challenge
Self-supervised learning
Deep Learning
Dual energy computed tomography
Computed tomography (CT) images primarily provide tissue morphological information, while material composition analysis may enable a more fundamental body assessment. However, existing methods for this suffer from low accuracy and severe degradation. Furthermore, the complex composition of bodies and the absence of labels constrain the potential use of deep learning. Here, we present a self-supervised learning approach, generating multiple basis material images with no labels (NoL-MBMI), for analyzing material composition without labels. Results from phantom and patient experiments demonstrate that NoL-MBMI can provide results with superior visual quality and accuracy. Notably, to extend the clinical usage of NoL-MBMI, we construct an automated system to extract material composition information directly from standard single-energy CT (SECT) data for diagnosis. We evaluate the system on two pulmonary diagnosis tasks and observe that deep-learning models using material composition features significantly outperform those using morphological features, suggesting the clinical effectiveness of diagnosis using material composition and its potential for advancing medical imaging technology.
Image Classification Algorithm Based on Deep Learning-Kernel Function
Liu, Jun-e
An, Feng-Ping
Scientific Programming2020Journal Article, cited 11 times
Website
COLON
CT
Classification
deep learning
Although existing traditional image classification methods have been widely applied to practical problems, they face issues in application, such as unsatisfactory effects, low classification accuracy, and weak adaptive ability. These methods separate image feature extraction and classification into two steps. The deep learning model, by contrast, has a powerful learning ability that integrates feature extraction and classification into a whole to complete image classification, which can effectively improve image classification accuracy. However, this approach has the following problems in practice: first, the complex functions in the deep learning model cannot be effectively approximated; second, the classifier that comes with the deep learning model has low accuracy. Therefore, this paper introduces the idea of sparse representation into the architecture of the deep learning network, comprehensively utilizing the ability of sparse representation to linearly decompose multidimensional data and the deep structural advantages of multilayer nonlinear mapping to complete the complex function approximation in the deep learning model. A sparse representation classification method based on an optimized kernel function is proposed to replace the classifier in the deep learning model, thereby improving the image classification effect. Accordingly, this paper proposes an image classification algorithm based on a stacked sparse coding deep learning model with optimized-kernel-function nonnegative sparse representation. Experimental results show that the proposed method not only has higher average accuracy than other mainstream methods but also adapts well to various image databases. Compared with other deep learning methods, it better solves the problems of complex function approximation and poor classifier effect, thus further improving image classification accuracy.
A Postoperative Displacement Measurement Method for Femoral Neck Fracture Internal Fixation Implants Based on Femoral Segmentation and Multi-Resolution Frame Registration
Liu, Kaifeng
Nagamune, Kouki
Oe, Keisuke
Kuroda, Ryosuke
Niikura, Takahiro
Symmetry2021Journal Article, cited 0 times
Website
Pelvic-Reference-Data
PELVIS
Machine Learning
Computed Tomography (CT)
Segmentation
Femoral neck fractures have a high incidence in the geriatric population and are associated with high mortality and disability rates. Owing to its minimally invasive nature, internal fixation is widely used as a treatment option to stabilize femoral neck fractures. The fixation effectiveness and stability of the implant are an essential guide for the surgeon. However, there is no long-term reliable evaluation method to quantify the implant's fixation effect without affecting the patient's behavior while synthesizing long-term treatment data. Exploiting the femur's symmetrical structure, this study used 3D convolutional networks for biomedical image segmentation (3D-UNet) to segment the injured femur as a mask, aligned computed tomography (CT) scans of the patient taken at different times after surgery, and quantified the displacement in a specified direction using the generated 3D point cloud. In the experiments, we used 10 groups, each containing two CT images scanned one year apart after surgery. Comparing manual femur segmentation with neural network segmentation of the femur as a mask, the mask obtained using the 3D-UNet network with symmetric structure fully meets the requirements of image registration. The data obtained from the 3D point cloud calculation are within the error tolerance, and the calculated displacement of the implant can be visualized in 3D space.
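The register-then-measure idea can be sketched with Open3D: align the femur-masked point clouds from two post-operative CTs with ICP and read the displacement off the recovered transform. The toy point clouds and the distance threshold below are assumptions, not the paper's multi-resolution frame registration.

    import numpy as np
    import open3d as o3d

    def to_cloud(points):
        pcd = o3d.geometry.PointCloud()
        pcd.points = o3d.utility.Vector3dVector(points)
        return pcd

    rng = np.random.default_rng(0)
    femur_t0 = rng.normal(size=(5000, 3)) * [20, 20, 80]   # toy femur surface (mm)
    femur_t1 = femur_t0 + [1.5, 0.0, -0.8]                 # simulated displacement

    icp = o3d.pipelines.registration.registration_icp(
        to_cloud(femur_t1), to_cloud(femur_t0), max_correspondence_distance=5.0,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    print(icp.transformation[:3, 3])  # recovered translation ~ [-1.5, 0.0, 0.8]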
Deep learning to distinguish pancreatic cancer tissue from non-cancerous pancreatic tissue: a retrospective study with cross-racial external validation
Liu, K. L.
Wu, T.
Chen, P. T.
Tsai, Y. M.
Roth, H.
Wu, M. S.
Liao, W. C.
Wang, W.
Lancet Digit Health2020Journal Article, cited 141 times
Website
Pancreas-CT
Medical Segmentation Decathlon 2021
Convolutional Neural Network (CNN)
Contrast Media
*Deep Learning
Diagnosis, Differential
Pancreas/diagnostic imaging
Pancreatic Neoplasms/*diagnostic imaging
Racial Groups
Radiographic Image Enhancement/methods
Radiographic Image Interpretation, Computer-Assisted/*methods
Reproducibility of Results
Retrospective Studies
Sensitivity and Specificity
Taiwan
Tomography, X-Ray Computed/*methods
BACKGROUND: The diagnostic performance of CT for pancreatic cancer is interpreter-dependent, and approximately 40% of tumours smaller than 2 cm evade detection. Convolutional neural networks (CNNs) have shown promise in image analysis, but the networks' potential for pancreatic cancer detection and diagnosis is unclear. We aimed to investigate whether CNN could distinguish individuals with and without pancreatic cancer on CT, compared with radiologist interpretation. METHODS: In this retrospective, diagnostic study, contrast-enhanced CT images of 370 patients with pancreatic cancer and 320 controls from a Taiwanese centre were manually labelled and randomly divided for training and validation (295 patients with pancreatic cancer and 256 controls) and testing (75 patients with pancreatic cancer and 64 controls; local test set 1). Images were preprocessed into patches, and a CNN was trained to classify patches as cancerous or non-cancerous. Individuals were classified as with or without pancreatic cancer on the basis of the proportion of patches diagnosed as cancerous by the CNN, using a cutoff determined using the training and validation set. The CNN was further tested with another local test set (101 patients with pancreatic cancers and 88 controls; local test set 2) and a US dataset (281 pancreatic cancers and 82 controls). Radiologist reports of pancreatic cancer images in the local test sets were retrieved for comparison. FINDINGS: Between Jan 1, 2006, and Dec 31, 2018, we obtained CT images. In local test set 1, CNN-based analysis had a sensitivity of 0.973, specificity of 1.000, and accuracy of 0.986 (area under the curve [AUC] 0.997 (95% CI 0.992-1.000). In local test set 2, CNN-based analysis had a sensitivity of 0.990, specificity of 0.989, and accuracy of 0.989 (AUC 0.999 [0.998-1.000]). In the US test set, CNN-based analysis had a sensitivity of 0.790, specificity of 0.976, and accuracy of 0.832 (AUC 0.920 [0.891-0.948)]. CNN-based analysis achieved higher sensitivity than radiologists did (0.983 vs 0.929, difference 0.054 [95% CI 0.011-0.098]; p=0.014) in the two local test sets combined. CNN missed three (1.7%) of 176 pancreatic cancers (1.1-1.2 cm). Radiologists missed 12 (7%) of 168 pancreatic cancers (1.0-3.3 cm), of which 11 (92%) were correctly classified using CNN. The sensitivity of CNN for tumours smaller than 2 cm was 92.1% in the local test sets and 63.1% in the US test set. INTERPRETATION: CNN could accurately distinguish pancreatic cancer on CT, with acceptable generalisability to images of patients from various races and ethnicities. CNN could supplement radiologist interpretation. FUNDING: Taiwan Ministry of Science and Technology.
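The patient-level decision rule described, calling a patient positive when the proportion of patches the CNN labels cancerous exceeds a cutoff determined on the training and validation set, reduces to a few lines; the cutoff value below is an illustrative assumption.

    import numpy as np

    def patient_prediction(patch_probs, cutoff=0.3):
        """patch_probs: per-patch cancer probabilities for one patient's CT."""
        cancerous = (np.asarray(patch_probs) > 0.5)  # patch-level decision
        return float(cancerous.mean() > cutoff)      # patient-level decision

    print(patient_prediction([0.9, 0.8, 0.1, 0.2, 0.7]))  # -> 1.0 (positive)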
Deep learning for magnetic resonance imaging-genomic mapping of invasive breast carcinoma
To identify MRI-based radiomic features that could be obtained automatically by a deep learning (DL) model and could predict the clinical characteristics of breast cancer (BC), and to explain the potential underlying genomic mechanisms of the predictive radiomic features. A denoising autoencoder (DA) was developed to retrospectively extract 4,096 phenotypes from the MRI of 110 BC patients collected by The Cancer Imaging Archive (TCIA). The associations of these phenotypes with genomic features (commercialized gene signatures, expression of risk genes, and biological pathway activities extracted from the same patients' mRNA expression collected by The Cancer Genome Atlas (TCGA)) were tested based on linear mixed effect (LME) models. A least absolute shrinkage and selection operator (LASSO) model was used to identify the most predictive MRI phenotypes for each clinical phenotype (tumor size (T), lymph node metastasis (N), and status of estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2)). More than 1,000 of the 4,096 MRI phenotypes were associated with the activities of risk genes, gene signatures, and biological pathways (adjusted P-value < 0.05). High performance was obtained in predicting the status of T, N, ER, PR, and HER2 (AUC > 0.9). The identified MRI phenotypes also show significant power to stratify BC tumors. DL-based automatic MRI features performed very well in predicting clinical characteristics of BC, and these phenotypes were found to have genomic significance.
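A hedged sketch of the LASSO selection step: choosing the autoencoder phenotypes most predictive of one clinical label with an L1-penalized logistic regression. The data are simulated and the regularization strength is an assumption.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(110, 4096))   # 110 patients x 4096 DA phenotypes
    y = rng.integers(0, 2, size=110)   # e.g., ER status (toy)

    lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
    selected = np.flatnonzero(lasso.coef_[0])  # phenotypes with nonzero weight
    print(f"{selected.size} predictive MRI phenotypes selected")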
Synthetic minority image over-sampling technique: How to improve AUC for glioblastoma patient survival prediction
Real-world datasets are often imbalanced, with an important class having many fewer examples than other classes. In medical data, normal examples typically greatly outnumber disease examples. A classifier learned from imbalanced data will tend to be very good at predicting examples in the larger (normal) class, yet the smaller (disease) class is typically of more interest. Imbalance is usually dealt with at the feature vector level (by creating synthetic feature vectors or discarding some examples from the larger class) or by assigning differential costs to errors. Here, we introduce a novel method for over-sampling minority class examples at the image level, rather than the feature vector level. Our method was applied to the problem of glioblastoma patient survival group prediction. Synthetic minority class examples were created by adding Gaussian noise to original medical images from the minority class. Uniform local binary pattern (LBP) histogram features were then extracted from the original and synthetic image examples and classified with a random forests classifier. Experimental results show the new method (Image SMOTE) increased minority class predictive accuracy and also the AUC (area under the receiver operating characteristic curve), compared to using the imbalanced dataset directly or creating synthetic feature vectors.
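A minimal sketch of the recipe as described: synthesize minority-class images with additive Gaussian noise, describe each image by a uniform-LBP histogram, and train a random forest. Noise scale, LBP parameters, and data are toy assumptions.

    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.ensemble import RandomForestClassifier

    P, R = 8, 1  # uniform LBP with P neighbors -> P + 2 histogram bins

    def lbp_hist(img):
        u8 = (np.clip(img, 0, 1) * 255).astype(np.uint8)
        codes = local_binary_pattern(u8, P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        return hist

    rng = np.random.default_rng(0)
    minority = rng.random((20, 64, 64))      # e.g., short-survival class (toy)
    majority = rng.random((80, 64, 64))
    synthetic = minority + rng.normal(0, 0.05, minority.shape)  # image-level SMOTE

    X = np.array([lbp_hist(im)
                  for im in np.concatenate([majority, minority, synthetic])])
    y = np.array([0] * 80 + [1] * 40)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    print(clf.score(X, y))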
LSW-Net: A Learning Scattering Wavelet Network for Brain Tumor and Retinal Image Segmentation
Liu, Ruihua
Nan, Haoyu
Zou, Yangyang
Xie, Ting
Ye, Zhiyong
Electronics2022Journal Article, cited 0 times
BraTS 2020
Algorithm Development
Segmentation
Wavelet
loss function
active contour
Convolutional network models have been widely used in image segmentation. However, medical images contain many types of boundary contour features that seriously affect the stability and accuracy of segmentation models, such as the ambiguity of tumors, the variability of lesions, and the weak boundaries of fine blood vessels. In this paper, to solve these problems, we first introduce a dual-tree complex wavelet scattering transform module and then propose a novel learning scattering wavelet network model. In addition, a new improved active contour loss function is constructed to deal with complex segmentation. Finally, the equilibrium coefficient of our model is discussed. Experiments on the BraTS 2020 dataset show that the LSW-Net model improves on the Dice coefficient, accuracy, and sensitivity of the classic FCN, SegNet, and At-Unet models by at least 3.51%, 2.11%, and 0.46%, respectively. The LSW-Net model also retains an advantage in the average Dice coefficient compared with some advanced segmentation models. Experiments on the DRIVE dataset prove that our model outperforms the other 14 algorithms in both Dice coefficient and specificity. In particular, the sensitivity of our model provides a 3.39% improvement over the Unet model.
The impact of variance in carnitine palmitoyltransferase-1 expression on breast cancer prognosis is stratified by clinical and anthropometric factors
Liu, R.
Ospanova, S.
Perry, R. J.
PLoS One2023Journal Article, cited 0 times
Website
CPT1A is a rate-limiting enzyme in fatty acid oxidation and is upregulated in high-risk breast cancer. The relationship of obesity and menopausal status with breast cancer prognosis is well established, but their connection with fatty acid metabolism is not. We utilized RNA sequencing data in the Xena Functional Genomics Explorer to explore CPT1A's effect on breast cancer patients' survival probability. Using [18F]-fluorothymidine positron emission tomography-computed tomography images from The Cancer Imaging Archive, we stratified these analyses by obesity and menopausal status. In 1214 patients, higher CPT1A expression is associated with lower breast cancer survivability. We confirmed a previously observed protective relationship between obesity and breast cancer in pre-menopausal patients and supported this finding using two-sided Pearson correlations. Taken together, these analyses using open-access databases bolster the potential role of CPT1A-dependent fatty acid metabolism as a pathogenic factor in breast cancer.
Improving Brain Tumor Segmentation with Multi-direction Fusion and Fine Class Prediction
Liu, Sun’ao
Guo, Xiaonan
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Convolutional neural networks have been broadly used for medical image analysis. Due to its characteristics, segmentation of glioma is considered one of the most challenging tasks. In this paper, we propose a novel Multi-direction Fusion Network (MFNet) for brain tumor segmentation with 3D multimodal MRI data. Unlike conventional 3D networks, the feature-extracting process is decomposed and fused in the proposed network. Furthermore, we design an additional task called Fine Class Prediction to reinforce the encoder and prevent over-segmentation. The proposed method obtains Dice scores of 0.81796, 0.8227, and 0.88459 for enhancing tumor, tumor core, and whole tumor, respectively, on the BraTS 2019 test set.
Integrating clinicopathologic information and dynamic contrast-enhanced MRI for augmented prediction of neoadjuvant chemotherapy response in breast cancer
Liu, Tianyu
Wang, Hong
Feng, Feiyan
Li, Wei
Zheng, Fulin
Wu, Kai
Yu, Shengpeng
Sun, Yanshen
Biomedical Signal Processing and Control2025Journal Article, cited 0 times
Website
ISPY1
Neoadjuvant chemotherapy
Treatment response prediction
Clinicopathologic information
Breast cancer
DCE-MRI
Neoadjuvant chemotherapy (NACT) represents a non-invasive treatment paradigm for both locally advanced and early-stage breast cancer patients. Precise prediction of NACT treatment response is pivotal in postoperative treatment planning and prognostic enhancement. While pathologic puncture remains a central method for evaluating NACT response, its invasive nature not only induces adverse effects but also elevates the risk of delayed surgery. To this end, this paper proposes CITR-Net, an innovative multi-modality fusion framework which combines Clinicopathologic Information with Tumor Region characteristics to achieve precise prediction of NACT treatment response. Leveraging a dual-stream encoder with paralleled parameter-shared attention (P2SA) gates, CITR-Net adeptly captures both texture and intensity features from tumor regions before and after contrast agent injection. Simultaneously, clinicopathologic information extracted from clinical and pathology tabular data is tokenized to facilitate feature representation. A novel clinicopathologic information-enhanced feature fusion (CIF-Fusion) strategy seamlessly merges these feature representations, enabling a comprehensive exploration of tumor-related features across image and tabular data domains. Furthermore, CITR-Net employs a tumor region-based decoder, guided by a tumor region-supervised loss, to ensure intricate details of tumor areas are preserved during the encoding phase. Experimental evaluations conducted on two publicly available datasets demonstrate CITR-Net's superior performance compared to existing state-of-the-art methods. In summary, CITR-Net emerges as an efficient and effective tool for accurately predicting NACT treatment response, paving the way for improved patient care and prognostic outcomes in breast cancer treatment. The code will be released on GitHub (https://github.com/LTYUnique/CITR-Net).
Computational Identification of Tumor Anatomic Location Associated with Survival in 2 Large Cohorts of Human Primary Glioblastomas
Liu, T T
Achrol, A S
Mitchell, L A
Du, W A
Loya, J J
Rodriguez, S A
Feroze, A
Westbroek, E M
Yeom, K W
Stuart, J M
Chang, S D
Harsh, G R 4th
Rubin, D L
American Journal of Neuroradiology2016Journal Article, cited 6 times
Website
TCGA-GBM
Radiomics
Radiogenomics
Classification
BACKGROUND AND PURPOSE: Tumor location has been shown to be a significant prognostic factor in patients with glioblastoma. The purpose of this study was to characterize glioblastoma lesions by identifying MR imaging voxel-based tumor location features that are associated with tumor molecular profiles, patient characteristics, and clinical outcomes. MATERIALS AND METHODS: Preoperative T1 anatomic MR images of 384 patients with glioblastomas were obtained from 2 independent cohorts (n = 253 from the Stanford University Medical Center for training and n = 131 from The Cancer Genome Atlas for validation). An automated computational image-analysis pipeline was developed to determine the anatomic locations of tumor in each patient. Voxel-based differences in tumor location between good (overall survival of >17 months) and poor (overall survival of <11 months) survival groups identified in the training cohort were used to classify patients in The Cancer Genome Atlas cohort into 2 brain-location groups, for which clinical features, messenger RNA expression, and copy number changes were compared to elucidate the biologic basis of tumors located in different brain regions. RESULTS: Tumors in the right occipitotemporal periventricular white matter were significantly associated with poor survival in both training and test cohorts (both, log-rank P < .05) and had larger tumor volume compared with tumors in other locations. Tumors in the right periatrial location were associated with hypoxia pathway enrichment and PDGFRA amplification, making them potential targets for subgroup-specific therapies. CONCLUSIONS: Voxel-based location in glioblastoma is associated with patient outcome and may have a potential role for guiding personalized treatment.
Magnetic resonance perfusion image features uncover an angiogenic subgroup of glioblastoma patients with poor survival and better response to antiangiogenic treatment
Liu, Tiffany T.
Achrol, Achal S.
Mitchell, Lex A.
Rodriguez, Scott A.
Feroze, Abdullah
Michael Iv
Kim, Christine
Chaudhary, Navjot
Gevaert, Olivier
Stuart, Josh M.
Harsh, Griffith R.
Chang, Steven D.
Rubin, Daniel L.
Neuro-Oncology2016Journal Article, cited 15 times
Website
Radiogenomics
TCGA-GBM
Background. In previous clinical trials, antiangiogenic therapies such as bevacizumab did not show efficacy in patients with newly diagnosed glioblastoma (GBM). This may be a result of the heterogeneity of GBM, which has a variety of imaging-based phenotypes and gene expression patterns. In this study, we sought to identify a phenotypic subtype of GBM patients who have distinct tumor-image features and molecular activities and who may benefit from antiangiogenic therapies. Methods. Quantitative image features characterizing subregions of tumors and the whole tumor were extracted from preoperative and pretherapy perfusion magnetic resonance (MR) images of 117 GBM patients in 2 independent cohorts. Unsupervised consensus clustering was performed to identify robust clusters of GBM in each cohort. Cox survival and gene set enrichment analyses were conducted to characterize the clinical significance and molecular pathway activities of the clusters. The differential treatment efficacy of antiangiogenic therapy between the clusters was evaluated. Results. A subgroup of patients with elevated perfusion features was identified and was significantly associated with poor patient survival after accounting for other clinical covariates (P values < .01; hazard ratios > 3), consistently in both cohorts. Angiogenesis and hypoxia pathways were enriched in this subgroup of patients, suggesting the potential efficacy of antiangiogenic therapy. Patients of the angiogenic subgroups pooled from both cohorts, who had chemotherapy information available, had significantly longer survival when treated with antiangiogenic therapy (log-rank P=.022). Conclusions. Our findings suggest that an angiogenic subtype of GBM patients may benefit from antiangiogenic therapy with improved overall survival.
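The survival step, testing whether cluster membership predicts outcome after adjusting for clinical covariates, can be sketched with a Cox proportional-hazards model via lifelines; the data frame below is simulated.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "survival_months": rng.exponential(18, 117),
        "event": rng.integers(0, 2, 117),            # 1 = death observed
        "angiogenic_cluster": rng.integers(0, 2, 117),
        "age": rng.normal(60, 10, 117),
    })
    cph = CoxPHFitter().fit(df, duration_col="survival_months", event_col="event")
    cph.print_summary()  # hazard ratio for the cluster, adjusted for age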
A CADe system for nodule detection in thoracic CT images based on artificial neural network
Liu, Xinglong
Hou, Fei
Qin, Hong
Hao, Aimin
Science China Information Sciences2017Journal Article, cited 11 times
Website
LIDC-IDRI
Artificial neural network (ANN)
LUNG
Computed Tomography (CT)
computer aided detection (CADe)
A radiomic signature as a non-invasive predictor of progression-free survival in patients with lower-grade gliomas
Liu, Xing
Li, Yiming
Qian, Zenghui
Sun, Zhiyan
Xu, Kaibin
Wang, Kai
Liu, Shuai
Fan, Xing
Li, Shaowu
Zhang, Zhong
NeuroImage: Clinical2018Journal Article, cited 0 times
Website
Radiomics
lower-grade glioma (LGG)
Progression-free survival
Radiogenomics
Molecular profiles of tumor contrast enhancement: A radiogenomic analysis in anaplastic gliomas
Liu, Xing
Li, Yiming
Sun, Zhiyan
Li, Shaowu
Wang, Kai
Fan, Xing
Liu, Yuqing
Wang, Lei
Wang, Yinyan
Jiang, Tao
Cancer medicine2018Journal Article, cited 0 times
Website
glioma
radiogenomics
gene set enrichment analysis (GSEA)
Molecular Signatures Database v5.1 (MSigDB)
radiomic features
Unsupervised Sparse-View Backprojection via Convolutional and Spatial Transformer Networks
Liu, Xueqing
Sajda, Paul
Brain Informatics2023Book Section, cited 0 times
QIN-LungCT-Seg
Convolutional Neural Network (CNN)
Unsupervised learning
Computed Tomography (CT)
Sparse-view CT
Algorithm Development
Imaging technologies heavily rely on tomographic reconstruction, which involves solving a multidimensional inverse problem given a limited number of projections. Building upon our prior research [14], we have ascertained that the integration of the predicted source space derived from electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) can be effectively approached as a backprojection problem involving sensor non-uniformity. Although backprojection is a commonly used algorithm for tomographic reconstruction, it often produces subpar image reconstructions when the projection angles are sparse or the sensor characteristics are non-uniform. To address this issue, various deep learning-based algorithms have been developed to solve the inverse problem and reconstruct images using a reduced number of projections. However, these algorithms typically require ground-truth examples, i.e., reconstructed images, to achieve satisfactory performance. In this paper, we present an unsupervised sparse-view backprojection algorithm that does not rely on ground-truth examples. Our algorithm comprises two modules within a generator-projector framework: a convolutional neural network and a spatial transformer network. We evaluate the effectiveness of our algorithm using computed tomography (CT) images of the human chest. The results demonstrate that our algorithm outperforms filtered backprojection significantly in scenarios with very sparse projection angles or varying sensor characteristics for different angles. Our proposed approach holds practical implications for medical imaging and other imaging modalities (e.g., radar) where sparse and/or non-uniform projections may arise due to time or sampling constraints.
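A small illustration of why sparse-view filtered backprojection degrades, using scikit-image's Radon/inverse-Radon pair to reconstruct a phantom from dense versus sparse angle sets; this only motivates the problem and is not the authors' generator-projector method.

    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import iradon, radon, resize

    img = resize(shepp_logan_phantom(), (128, 128))
    for n_angles in (180, 18):                    # dense vs. sparse views
        theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
        sino = radon(img, theta=theta)            # forward projection
        recon = iradon(sino, theta=theta, filter_name="ramp")
        print(n_angles, "views -> MSE:", np.mean((recon - img) ** 2))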
A Genetic Polymorphism in CTLA-4 Is Associated with Overall Survival in Sunitinib-Treated Patients with Clear Cell Metastatic Renal Cell Carcinoma
Liu, X.
Swen, J. J.
Diekstra, M. H. M.
Boven, E.
Castellano, D.
Gelderblom, H.
Mathijssen, R. H. J.
Vermeulen, S. H.
Oosterwijk, E.
Junker, K.
Roessler, M.
Alexiusdottir, K.
Sverrisdottir, A.
Radu, M. T.
Ambert, V.
Eisen, T.
Warren, A.
Rodriguez-Antona, C.
Garcia-Donas, J.
Bohringer, S.
Koudijs, K. K. M.
Kiemeney, Lalm
Rini, B. I.
Guchelaar, H. J.
Clin Cancer Res2018Journal Article, cited 0 times
Website
TCGA-KIRC
Radiogenomics
tyrosine kinase inhibitors (TKI)
clear cell renal cell carcinoma (ccRCC)
Purpose: The survival of patients with clear cell metastatic renal cell carcinoma (cc-mRCC) has improved substantially since the introduction of tyrosine kinase inhibitors (TKI). Given that TKIs interact with immune responses, we investigated whether polymorphisms of genes involved in immune checkpoints are related to the clinical outcome of cc-mRCC patients treated with sunitinib as first TKI. Experimental Design: Twenty-seven single-nucleotide polymorphisms (SNP) in CD274 (PD-L1), PDCD1 (PD-1), and CTLA-4 were tested for a possible association with progression-free survival (PFS) and overall survival (OS) in a discovery cohort of 550 sunitinib-treated cc-mRCC patients. SNPs with a significant association (P < 0.05) were tested in an independent validation cohort of 138 sunitinib-treated cc-mRCC patients. Finally, data of the discovery and validation cohorts were pooled for meta-analysis. Results: CTLA-4 rs231775 and CD274 rs7866740 showed significant associations with OS in the discovery cohort after correction for age, gender, and Heng prognostic risk group [HR, 0.84; 95% confidence interval (CI), 0.72-0.98; P = 0.028, and HR, 0.73; 95% CI, 0.54-0.99; P = 0.047, respectively]. In the validation cohort, the associations of both SNPs with OS did not meet the significance threshold of P < 0.05. After meta-analysis, CTLA-4 rs231775 showed a significant association with OS (HR, 0.83; 95% CI, 0.72-0.95; P = 0.008). Patients with the GG genotype had longer OS (35.1 months) compared with patients with an AG (30.3 months) or AA genotype (24.3 months). No significant associations with PFS were found. Conclusions: The G-allele of rs231775 in the CTLA-4 gene is associated with improved OS in sunitinib-treated cc-mRCC patients and could potentially be used as a prognostic biomarker.
Deep unregistered multi-contrast MRI reconstruction
Liu, X.
Wang, J.
Jin, J.
Li, M.
Tang, F.
Crozier, S.
Liu, F.
Magn Reson Imaging2021Journal Article, cited 0 times
BraTS-TCGA-GBM
Algorithm Development
BRAIN
*Magnetic Resonance Imaging
*Neural Networks, Computer
Deep learning
Image reconstruction
Image registration
Magnetic resonance imaging (MRI)
Multi-contrast
Multiple magnetic resonance images of different contrasts are normally acquired for clinical diagnosis. Recently, research has shown that previously acquired multi-contrast (MC) images of the same patient can be used as an anatomical prior to accelerate magnetic resonance imaging (MRI). However, current MC-MRI networks are based on the assumption that the images are perfectly registered, which is rarely the case in real-world applications. In this paper, we propose an end-to-end deep neural network to reconstruct highly accelerated images by exploiting the shareable information from potentially misaligned reference images of an arbitrary contrast. Specifically, a spatial transformation (ST) module is designed and integrated into the reconstruction network to align the pre-acquired reference images with the images to be reconstructed. The misalignment is further alleviated by maximizing the normalized cross-correlation (NCC) between the MC images. The visualization of feature maps demonstrates that the proposed method effectively reduces the misalignment between the images for shareable information extraction when applied to publicly available brain datasets. Additionally, the experimental results on these datasets show that the proposed network allows the robust exploitation of shareable information across the misaligned MC images, leading to improved reconstruction results.
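The NCC objective named above can be written in a few lines of PyTorch; the global (whole-image) form below is a simplifying assumption, and the paper's exact (possibly windowed) formulation may differ.

    import torch

    def ncc(a, b, eps=1e-8):
        """Global NCC between two image batches of shape (B, C, H, W)."""
        a = a.flatten(1)
        b = b.flatten(1)
        a = a - a.mean(dim=1, keepdim=True)   # zero-mean per sample
        b = b - b.mean(dim=1, keepdim=True)
        num = (a * b).sum(dim=1)
        den = a.norm(dim=1) * b.norm(dim=1) + eps
        return (num / den).mean()             # in [-1, 1]; maximize this

    recon     = torch.rand(2, 1, 128, 128, requires_grad=True)
    reference = torch.rand(2, 1, 128, 128)
    loss = 1.0 - ncc(recon, reference)        # maximization as a loss term
    loss.backward()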
Symmetric-Constrained Irregular Structure Inpainting for Brain MRI Registration with Tumor Pathology
Liu, X.
Xing, F.
Yang, C.
Jay Kuo, C. C.
El Fakhri, G.
Woo, J.
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BraTS 2018
BRAIN
Segmentation
Algorithm Development
Brain Tumor
Contextual Learning
Deep Learning
Image Inpainting
Irregular Structure
Registration
Symmetry
Deformable registration of magnetic resonance images between patients with brain tumors and healthy subjects has been an important tool to specify tumor geometry through location alignment and to facilitate pathological analysis. Since the tumor region does not match any ordinary brain tissue, it has been difficult to deformably register a patient's brain to a normal one. Many patient images are associated with irregularly distributed lesions, resulting in further distortion of normal tissue structures and complicating the registration's similarity measure. In this work, we follow a multi-step context-aware image inpainting framework to generate synthetic tissue intensities in the tumor region. Coarse image-to-image translation is applied to make a rough inference of the missing parts. Then, a feature-level patch-match refinement module refines the details by modeling the semantic relevance between patch-wise features. A symmetry constraint, reflecting the large degree of anatomical symmetry in the brain, is further proposed to achieve better structural understanding. Deformable registration is applied between the inpainted patient images and normal brains, and the resulting deformation field is eventually used to deform the original patient data for the final alignment. The method was applied to the Multimodal Brain Tumor Segmentation (BraTS) 2018 challenge database and compared against three existing inpainting methods. The proposed method yielded results with increased peak signal-to-noise ratio, structural similarity index, and inception score, and reduced L1 error, leading to successful patient-to-normal brain image registration.
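As a rough illustration of the symmetry constraint, the sketch below penalizes disagreement between the inpainted image and its left-right mirror inside the tumor mask, reflecting gross hemispheric symmetry. The exact loss in the paper may be formulated differently; all names here are illustrative.

    import torch

    def symmetry_loss(inpainted, mask):
        """inpainted: (B, C, H, W); mask: 1 inside the inpainted (tumor) region."""
        mirrored = torch.flip(inpainted, dims=[-1])     # flip across the midline
        diff = (inpainted - mirrored).abs() * mask      # constrain only the hole
        return diff.sum() / mask.sum().clamp(min=1.0)   # mean over hole pixels

    img  = torch.rand(1, 1, 64, 64, requires_grad=True)
    mask = torch.zeros(1, 1, 64, 64); mask[..., 20:40, 10:25] = 1.0
    symmetry_loss(img, mask).backward()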
Cross-Modality Knowledge Transfer for Prostate Segmentation from CT Scans
Liu, Yucheng
Khosravan, Naji
Liu, Yulin
Stember, Joseph
Shoag, Jonathan
Bagci, Ulas
Jambawalikar, Sachin
2019Book Section, cited 0 times
PROSTATEx
Segmentation
3D Isotropic Super-resolution Prostate MRI Using Generative Adversarial Networks and Unpaired Multiplane Slices
Liu, Y.
Liu, Y.
Vanguri, R.
Litwiller, D.
Liu, M.
Hsu, H. Y.
Ha, R.
Shaish, H.
Jambawalikar, S.
J Digit Imaging2021Journal Article, cited 0 times
Website
PROSTATEx
Magnetic Resonance Imaging (MRI)
Generative Adversarial Network (GAN)
Image Enhancement/methods
PROSTATE
Super-resolution
We developed a deep learning-based super-resolution model for prostate MRI. 2D T2-weighted turbo spin echo (T2w-TSE) images are the core anatomical sequences in a multiparametric MRI (mpMRI) protocol. These images have coarse through-plane resolution, are non-isotropic, and have long acquisition times (approximately 10-15 min). The model we developed aims to preserve high-frequency details that are normally lost after 3D reconstruction. We propose a novel framework for generating isotropic volumes using generative adversarial networks (GAN) from anisotropic 2D T2w-TSE and single-shot fast spin echo (ssFSE) images. The CycleGAN model used in this study allows unpaired dataset mapping to reconstruct super-resolution (SR) volumes. Fivefold cross-validation was performed. The improvements from patch-to-volume reconstruction (PVR) to SR are 80.17%, 63.77%, and 186% for perceptual index (PI), RMSE, and SSIM, respectively; the improvements from slice-to-volume reconstruction (SVR) to SR are 72.41%, 17.44%, and 7.5% for PI, RMSE, and SSIM, respectively. Five ssFSE cases were used to test for generalizability; the perceptual quality of the SR images surpasses that of the in-plane ssFSE images by 37.5%, with a 3.26% improvement in SSIM and a 7.92% higher RMSE. SR images were quantitatively assessed with radiologist Likert scores. Our isotropic SR volumes are able to reproduce high-frequency detail, maintaining image quality comparable to in-plane TSE images in all planes without sacrificing perceptual accuracy. The SR reconstruction networks were also successfully applied to the ssFSE images, demonstrating that high-quality isotropic volumes from ultra-fast acquisition are feasible.
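The RMSE and SSIM comparisons reported above can be computed in principle with scikit-image, as in this small sketch; the random arrays stand in for a reconstructed slice and its in-plane reference.

    import numpy as np
    from skimage.metrics import structural_similarity, mean_squared_error

    rng = np.random.default_rng(0)
    reference = rng.random((64, 64)).astype(np.float32)   # in-plane TSE slice
    upsampled = reference + 0.05 * rng.standard_normal((64, 64)).astype(np.float32)

    rmse = np.sqrt(mean_squared_error(reference, upsampled))
    ssim = structural_similarity(reference, upsampled, data_range=1.0)
    print(f"RMSE={rmse:.4f}  SSIM={ssim:.4f}")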
Relationship between Glioblastoma Heterogeneity and Survival Time: An MR Imaging Texture Analysis
Liu, Y.
Xu, X.
Yin, L.
Zhang, X.
Li, L.
Lu, H.
American Journal of Neuroradiology2017Journal Article, cited 8 times
Website
TCGA-GBM
Radiomics
postcontrast TI-weighted imaging
co-occurrence matrix
run-length matrix
histogram
global spatial variations
cancer genome atlas
recursive feature-elimination–based support vector machine classifier (SVM)
3D medical image encryption algorithm using biometric key and cubic S-box
Liu, Yunhao
Xue, Ru
Physica Scripta2024Journal Article, cited 0 times
Website
QIN BREAST
StageII-Colorectal-CT
TCGA-CESC
Security
Simulation
Considering the scarcity of research on 3D medical image encryption, this paper proposes a novel 3D medical image encryption scheme based on a biometric key and a cubic S-box. To enhance data security, biometric keys are utilized to overcome the limitations of traditional methods, whose secret keys have no practical meaning, a fixed length, and a finite key space, while a cubic S-box is constructed to increase the nonlinearity of the image cryptosystem. The proposed cryptosystem mainly consists of four phases: pseudo-random sequence generation, confusion, substitution, and diffusion. Firstly, a stepwise iterative algorithm based on coupled chaotic systems generates the pseudo-random sequences used for confusion and diffusion. Secondly, a confusion algorithm based on multiple sorting scrambles pixel positions in the 3D images. Thirdly, guided by the designed cubic S-box, pixel substitution is executed sequentially. Lastly, a diffusion algorithm based on elementary cellular automata (ECA) and finite-field multiplication increases the plaintext sensitivity of the cryptosystem by concealing the statistical characteristics of the plaintext. Simulation experiments performed on multiple 3D medical images demonstrate that the proposed encryption scheme exhibits favorable statistical performance, a sufficiently large key space, and strong system sensitivity and robustness, and can resist various typical cryptographic attacks.
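As a loose illustration of the confusion and diffusion phases (two of the scheme's four phases), the sketch below permutes voxels by sorting a logistic-map sequence and chains voxels with a keyed XOR. The logistic map, key values, IV, and XOR (standing in for the paper's ECA/finite-field diffusion) are simplifications; the S-box substitution and biometric key derivation are not reproduced.

    import numpy as np

    def logistic_sequence(x0, r, n):
        """Pseudo-random sequence from the logistic map x <- r*x*(1-x)."""
        seq = np.empty(n)
        x = x0
        for i in range(n):
            x = r * x * (1.0 - x)
            seq[i] = x
        return seq

    volume = np.random.randint(0, 256, size=(4, 8, 8), dtype=np.uint8)  # toy 3D image
    flat = volume.ravel()

    # Confusion: permute voxel positions by the sort order of a chaotic sequence.
    perm = np.argsort(logistic_sequence(x0=0.3141, r=3.99, n=flat.size))
    confused = flat[perm]

    # Diffusion: chain each voxel with its predecessor and a key byte, so a
    # one-voxel plaintext change propagates through the whole ciphertext.
    key_bytes = (logistic_sequence(x0=0.2718, r=3.97, n=flat.size) * 255).astype(np.uint8)
    cipher = np.empty_like(confused)
    prev = np.uint8(0x5A)                     # illustrative IV
    for i, v in enumerate(confused):
        prev = np.uint8(v ^ prev ^ key_bytes[i])
        cipher[i] = prev

    encrypted = cipher.reshape(volume.shape)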
Radiomics-based prediction of survival in patients with head and neck squamous cell carcinoma based on pre- and post-treatment (18)F-PET/CT
Liu, Z.
Cao, Y.
Diao, W.
Cheng, Y.
Jia, Z.
Peng, X.
Aging (Albany NY)2020Journal Article, cited 0 times
Website
HNSCC
Radiomics
HEAD AND NECK
Positron Emission Tomography (PET)
Computed Tomography (CT)
Classification
BACKGROUND: 18-fluorodeoxyglucose positron emission tomography/computed tomography ((18)F-PET/CT) has been widely applied for the imaging of head and neck squamous cell carcinoma (HNSCC). This study examined whether pre- and post-treatment (18)F-PET/CT features can help predict the survival of HNSCC patients. RESULTS: Three radiomics features were identified as prognostic factors. A radiomics score calculated from these features significantly predicted overall survival (OS) and disease-free survival (DFS). The nomograms combining clinicopathological characteristics with either pre- or post-treatment features showed better ROC curves and decision curves than the nomogram based only on clinicopathological characteristics. CONCLUSIONS: Combining clinicopathological characteristics with radiomics features of pre-treatment PET/CT, or with post-treatment PET/CT assessment of primary tumor sites as positive or negative, may substantially improve prediction of OS and DFS in HNSCC patients. METHODS: 171 patients with HNSCC in the Cancer Imaging Archive (TCIA) who received pre-treatment (18)F-PET/CT scans and 154 patients who received post-treatment (18)F-PET/CT scans were included. Nomograms that combined clinicopathological features with either pre-treatment PET/CT radiomics features or post-treatment assessment of primary tumor sites were constructed using data from 154 HNSCC patients. Receiver operating characteristic (ROC) curves and decision curves were used to compare the predictions of these models with those of a model incorporating only clinicopathological features.
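The modeling pattern (a radiomics score built as a weighted sum of selected features, then an ROC comparison of a clinical-only model against a combined model) can be sketched as below. The data, weights, and logistic-regression stand-in for the survival model are all synthetic assumptions, not the study's values.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 171
    clinical  = rng.standard_normal((n, 3))             # e.g., age, stage, site
    radiomics = rng.standard_normal((n, 3))             # 3 selected PET/CT features
    rad_score = radiomics @ np.array([0.8, -0.5, 0.3])  # hypothetical weights
    event = (rng.random(n) < 0.4).astype(int)           # survival event indicator

    Xc  = clinical                                      # clinical-only design matrix
    Xcr = np.column_stack([clinical, rad_score])        # clinical + radiomics score
    Xc_tr, Xc_te, Xcr_tr, Xcr_te, y_tr, y_te = train_test_split(
        Xc, Xcr, event, test_size=0.3, random_state=0)

    auc_clin = roc_auc_score(y_te, LogisticRegression().fit(Xc_tr, y_tr).predict_proba(Xc_te)[:, 1])
    auc_comb = roc_auc_score(y_te, LogisticRegression().fit(Xcr_tr, y_tr).predict_proba(Xcr_te)[:, 1])
    print(f"clinical-only AUC={auc_clin:.3f}  clinical+radiomics AUC={auc_comb:.3f}")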
Automatic Segmentation of Non-tumor Tissues in Glioma MR Brain Images Using Deformable Registration with Partial Convolutional Networks
Liu, Zhongqiang
Gu, Dongdong
Zhang, Yu
Cao, Xiaohuan
Xue, Zhong
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BraTS 2018
BRAIN
Segmentation
Image Registration
Algorithm Development
In brain tumor diagnosis and surgical planning, segmentation of tumor regions and accurate analysis of the surrounding normal tissues are necessary for physicians. Pathological variability often makes it difficult to register a well-labeled normal atlas to such images and to automatically segment/label the surrounding normal brain tissues. In this paper, we propose a new registration approach that first segments the brain tumor using a U-Net and then simulates the missing normal tissues within the tumor region using a partial convolutional network. A standard normal brain atlas image is then registered onto these tumor-removed images in order to segment/label the normal brain tissues. In this way, our approach greatly reduces the effects of pathological variability in deformable registration and segments the normal tissues surrounding the brain tumor well. In our experiments, we used MICCAI BraTS2018 T1 and FLAIR images to evaluate the proposed algorithm. Comparing direct registration with the proposed algorithm, the results showed that the Dice coefficient for gray matter in the surrounding normal brain tissues was significantly improved.
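The partial convolution used to synthesize tissue inside the masked tumor region follows a standard formulation: convolve only the valid pixels, renormalize by each window's visible fraction, and shrink the mask as holes are filled. A minimal PyTorch sketch of one such layer (not the paper's network) is given below.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PartialConv2d(nn.Module):
        def __init__(self, in_ch, out_ch, k=3):
            super().__init__()
            self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=True)
            # Fixed all-ones kernel, used only to count valid pixels per window.
            self.register_buffer("ones", torch.ones(1, 1, k, k))

        def forward(self, x, mask):
            # mask: (B, 1, H, W); 1 = valid tissue, 0 = hole (tumor region).
            valid = F.conv2d(mask, self.ones, padding=self.conv.padding[0])
            out = self.conv(x * mask)                          # see only valid pixels
            bias = self.conv.bias.view(1, -1, 1, 1)
            scale = self.ones.numel() / valid.clamp(min=1.0)   # renormalize by coverage
            out = (out - bias) * scale + bias
            out = out * (valid > 0)                            # zero where no data at all
            new_mask = (valid > 0).float()                     # holes shrink each layer
            return out, new_mask

    x = torch.rand(1, 1, 64, 64)
    m = torch.ones(1, 1, 64, 64); m[..., 24:40, 24:40] = 0.0   # tumor hole
    y, m2 = PartialConv2d(1, 8)(x, m)

Stacking such layers progressively fills the hole, which is what lets the atlas registration treat the tumor region as if it contained plausible normal tissue.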
Oligodendroglial tumours: subventricular zone involvement and seizure history are associated with CIC mutation status
Liu, Zhenyin
Liu, Hongsheng
Liu, Zhenqing
Zhang, Jing
BMC Neurol2019Journal Article, cited 1 times
Website
TCGA-LGG
Radiogenomics
BACKGROUND: CIC-mutant oligodendroglial tumours are linked to better prognosis. We aim to investigate associations between CIC gene mutation status, MR characteristics, and clinical features. METHODS: Imaging and genomic data from the Cancer Genome Atlas and the Cancer Imaging Archive (TCGA/TCIA) for 59 patients with oligodendroglial tumours were used. Differences betwe